Applied Mathematics and Computation 175 (2006) 1298–1319
Evolutionary algorithm using feasibility-based grouping for numerical constrained optimization problems

Ming Yuchi, Jong-Hwan Kim

Department of Electrical Engineering and Computer Science, Korea Advanced Institute of Science and Technology, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701, Republic of Korea
Abstract

Different strategies for defining the relationship between feasible and infeasible individuals in evolutionary algorithms can produce very different results when solving numerical constrained optimization problems. This paper proposes a novel evolutionary algorithm (EA) that balances the relationship between feasible and infeasible individuals for solving numerical constrained optimization problems. According to the feasibility of the individuals, the population is divided into two groups, a feasible group and an infeasible group. The evaluation and ranking of these two groups are performed separately. Parents for reproduction are selected from the two groups by a novel parent selection method. The proposed method is tested using (μ, λ) evolution strategies on 13 benchmark problems. The results show that the proposed method improves the search performance for most of the tested problems.
This work was supported by the ITRC-IRRC (Intelligent Robot Research Center) of the Korea Ministry of Information and Communication in 2004.

Corresponding author. E-mail addresses: [email protected] (M. Yuchi), [email protected] (J.-H. Kim).

doi:10.1016/j.amc.2005.08.049
Keywords: Numerical constrained optimization; Evolutionary strategies; Feasible and infeasible individuals; Parent selection; Evaluation; Ranking
1. Introduction

The general numerical constrained optimization problem (P) is to find $\vec{x}$ so as to

  $\min_{\vec{x}} f(\vec{x})$, $\quad \vec{x} = (x_1, \ldots, x_n) \in \mathbb{R}^n$,  (1)

where $\vec{x} \in F \subseteq S$. The objective function f is defined on the search space $S \subseteq \mathbb{R}^n$, and the set $F \subseteq S$ defines the feasible region. Usually, the search space S is an n-dimensional rectangle in $\mathbb{R}^n$ (the domains of the variables being defined by their lower and upper bounds):

  $l(j) \le x_j \le u(j)$, $\quad j = 1, \ldots, n$,  (2)

and the feasible region $F \subseteq S$ is defined by a set of m additional constraints ($m \ge 0$):

  $g_k(\vec{x}) \le 0$, $\quad k = 1, \ldots, l$,  (3)
  $h_k(\vec{x}) = 0$, $\quad k = l + 1, \ldots, m$.  (4)

Any point $\vec{x} \in F$ is called a feasible solution; otherwise, $\vec{x}$ is an infeasible solution.

Evolutionary algorithms have received a lot of attention for their potential in solving numerical constrained optimization problems, and many kinds of methods have been proposed [1-6]. In recent years, several researchers introduced new evolutionary algorithms for constrained optimization. Runarsson and Yao [7-9] presented a stochastic ranking method. Kazarlis et al. [10] proposed a genetic algorithm with a small population and short evolution as a generalized hill-climbing operator. Ray and Liew [11] incorporated intra- and intersociety interactions within a formal society and civilization model. Farmani and Wright [12] applied a two-stage penalty to the infeasible individuals.

When evolutionary algorithms are used to solve numerical constrained optimization problems, the strategy for handling the relationship between feasible and infeasible individuals directly influences the final results. There has been some work on adjusting this relationship. In [4,13], feasible and infeasible individuals were evaluated with different criteria. GENOCOP III [14] repaired the infeasible individuals for evaluation. ASCHEA [15,16] used a population-level adaptive penalty function to handle constraints, a constraint-driven mate selection for recombination, and a segregational selection.
In [17], a multimembered evolution strategy was proposed, with a diversity mechanism that allows infeasible individuals close to the feasible region to remain in the population. In [18,19], a two-population genetic algorithm was presented to keep infeasible individuals that may carry valuable information.

In this paper, based on the idea of fully utilizing infeasible individuals, a novel evolutionary algorithm using feasibility-based grouping (EA_FG) is proposed for numerical constrained optimization problems. In each generation, according to the feasibility of the individuals, the whole population is divided into two groups: a feasible group and an infeasible group. Evaluation and ranking of these two groups are performed in parallel and separately. The best individuals from the feasible and infeasible groups are selected together as parents. The numbers of feasible and infeasible parents are adjusted by a single tuning parameter Sp. EA_FG provides an efficient way to guarantee that a certain number of infeasible individuals reproduce, and thus avoids the situation in which feasible individuals dominate most of the evolution process. Any existing evolutionary algorithm for numerical constrained optimization problems that evaluates and ranks feasible and infeasible individuals together can be incorporated into EA_FG to improve its performance. In this paper, the dynamic penalty method and the stochastic ranking method are incorporated into EA_FG to test its effectiveness. Initial studies on EA_FG can be found in [20-23].

This paper is organized as follows. Section 2 presents a detailed description of the overall structure of EA_FG, and a simple example is provided to show its effectiveness. Section 3 describes experimental results on 13 benchmark problems and offers a suggestion on how to select Sp. Concluding remarks follow in Section 4.
2. EA_FG for numerical constrained optimization

2.1. Feasible and infeasible individuals

When evolutionary algorithms are adopted to solve numerical constrained optimization problems, feasible individuals could intuitively be thought to have better fitness values than infeasible individuals (here "better" means having more chance to survive and reproduce). Since all constraints are already satisfied for a feasible individual, the only aim left is to find the $\vec{x}$ that minimizes $f(\vec{x})$. However, this view ignores the important point that an evolutionary algorithm is a probabilistic method: during some generations, some of the infeasible individuals may carry more useful information than the feasible individuals. Moreover, an individual can reach the optimal point more easily if it is able to "cross" an infeasible region (especially in a nonconvex feasible search space).
Fig. 1. Feasible area of Example P1.
Example P1: Consider a constrained optimization problem given by

  $f(\vec{x}) = x_1^2 + x_2^2$, $\quad -1 \le x_1, x_2 \le 1$,  (5)
  $g_1(\vec{x}) = 0.5x_1 + x_2 \le 0$,  (6)
  $g_2(\vec{x}) = 0.5x_1 - x_2 \le 0$.  (7)
The solution of P1 is $(x_1^*, x_2^*) = (0, 0)$. In Fig. 1, the shaded area represents the feasible region. In a given generation, the ideal parent distribution is that all parents are uniformly located inside the circle $x_1^2 + x_2^2 = r^2$ (r can be an infinitesimal positive number), so that the offspring generated by the parents can reside inside the circle and approach the point (0, 0).

We first try to solve this problem by an evolution strategy with a commonly used penalty method, the dynamic penalty method [24]. Its basic idea is to evaluate the individuals with the following formula (there are several coefficients in the dynamic penalty method of [24]; the values recommended by its authors are used in the experimental studies):

  $\psi(\vec{x}) = f(\vec{x}) + (g/2)^2 \, \phi(g_k(\vec{x}), h_k(\vec{x}), k = 1, \ldots, l, \ldots, m)$,  (8)

where g is the generation number and φ is a real-valued function, called the penalty function, defined as follows:
  $\phi(g_k(\vec{x}), h_k(\vec{x}), k = 1, \ldots, l, \ldots, m) = \sum_{k=1}^{l} \max\{0, g_k(\vec{x})\}^2 + \sum_{k=l+1}^{m} |h_k(\vec{x})|^2$.  (9)
The population and parent sizes are chosen as λ = 30 and μ = 6. Offspring are generated by the mutation x_offspring = x_parent + 0.2 * randN(0, 1), where randN(0, 1) is a random number drawn from a normal distribution with mean zero and variance one. Fig. 2 shows the parent distribution of Example P1 in different generations. In Generation 1, the six parents are uniformly distributed around (0, 0). As the generations proceed, the penalties added to the infeasible individuals increase, and the feasible individuals soon dominate the reproduction. After 100 generations, all the parents reside in the feasible region, which means that the infeasible region is not fully utilized. To address this problem, the evolutionary algorithm using feasibility-based grouping (EA_FG) is proposed in the next subsection.
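As an illustration, the evaluation and mutation used here can be written in C roughly as follows. This is our sketch, not the authors' code: the helper names are ours, the penalty follows Eqs. (8) and (9) for the two inequality constraints of P1, and a Box-Muller transform stands in for randN(0, 1).

```c
#include <math.h>
#include <stdlib.h>

/* Example P1: f(x) = x1^2 + x2^2 with g1(x) = 0.5*x1 + x2 <= 0 and
   g2(x) = 0.5*x1 - x2 <= 0 on [-1, 1]^2. */
static double f(const double x[2])  { return x[0]*x[0] + x[1]*x[1]; }
static double g1(const double x[2]) { return 0.5*x[0] + x[1]; }
static double g2(const double x[2]) { return 0.5*x[0] - x[1]; }

/* Eq. (9): sum of squared constraint violations
   (P1 has no equality constraints). */
static double phi(const double x[2])
{
    double v1 = fmax(0.0, g1(x)), v2 = fmax(0.0, g2(x));
    return v1*v1 + v2*v2;
}

/* Eq. (8): dynamic penalty fitness, g = generation number. */
static double psi(const double x[2], int g)
{
    return f(x) + (g / 2.0) * (g / 2.0) * phi(x);
}

/* randN(0, 1) via the Box-Muller transform (our stand-in). */
static double randn(void)
{
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * 3.14159265358979323846 * u2);
}

/* Mutation: x_offspring = x_parent + 0.2 * randN(0, 1). */
static void mutate(const double parent[2], double offspring[2])
{
    offspring[0] = parent[0] + 0.2 * randn();
    offspring[1] = parent[1] + 0.2 * randn();
}
```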
Fig. 2. Parents distribution of Example P1, evolved with dynamic penalty method. (Four panels show the parents at Generations 1, 100, 200 and 500; both axes range from -1 to 1.)
2.2. Overall structure of EA_FG

Based on the idea of fully utilizing infeasible individuals, we propose a novel evolutionary algorithm using feasibility-based grouping (EA_FG) for numerical constrained optimization problems. Fig. 3 shows the flowchart of EA_FG. At the beginning of every generation, the whole population is divided into a feasible group and an infeasible group according to the feasibility of every individual. Then, the evaluation and ranking of these two groups are performed in parallel and separately. In order to share the information of the two groups, the best feasible and infeasible individuals are selected as the parent population. The parent population reproduces and generates the offspring population, which is again divided into feasible and infeasible groups. The iteration continues until the termination condition is satisfied. In EA_FG, the best infeasible individuals are thus guaranteed to participate in reproduction, and the situation in which feasible individuals dominate the reproduction is avoided.

EA_FG may appear similar to the methods that evaluate feasible and infeasible individuals with different criteria [4,13,14] and the methods that keep a portion of infeasible individuals in the population [15-19]. Our method is, however, quite different, because the ranking of the infeasible individuals is performed separately from that of the feasible individuals. Also, EA_FG is an open structure, so any existing evolutionary algorithm for numerical constrained optimization problems that evaluates and ranks feasible and infeasible individuals together can be incorporated into it. More importantly, the motivation of EA_FG comes from the need to balance feasible and infeasible individuals so that more information from the infeasible individuals can be utilized.

When EA_FG is used to solve numerical constrained optimization problems, two aspects should be noted. The first is how to perform the evaluation and ranking of the feasible and infeasible groups. The second is how to select parents from the two groups, that is, how to decide the numbers of feasible and infeasible parents.

For the first aspect, different evaluation and ranking strategies can be used for the two groups. For the feasible group, the objective function (1) can be adopted directly as the fitness function. For the infeasible group, any evolutionary algorithm for numerical constrained optimization problems can be employed: the infeasible individuals are evaluated and ranked with the method originally used for the whole population. For example, the dynamic penalty method uses (8) as the fitness function for the whole population; in EA_FG, (8) is adopted only for the infeasible group. In this paper, the dynamic penalty method [24] and the stochastic ranking method [7] are incorporated into EA_FG.
Fig. 3. Flow chart of EA_FG (check feasibility of the population; split into feasible and infeasible groups; evaluate and rank each group separately; select parents from both groups; reproduce to form the offspring population; check the termination condition).
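To make the grouping cycle concrete, one generation of EA_FG can be sketched in C as follows. This outline is ours, not the authors' implementation: the Individual type, the constant LAMBDA_MAX and the declared helper routines are hypothetical stand-ins for the grouping, ranking, selection and reproduction steps of Fig. 3.

```c
#define LAMBDA_MAX 200  /* population size lambda, e.g. in a (30, 200)-ES */

typedef struct {
    double x[13];    /* decision variables (13 suffices for G1-G13) */
    double fitness;  /* group-specific fitness value */
} Individual;

/* Assumed helpers, not from the paper: */
int  is_feasible(const Individual *ind);                /* all g_k <= 0, equalities within tolerance */
void rank_feasible(Individual grp[], int n);            /* rank by the objective f alone */
void rank_infeasible(Individual grp[], int n, int gen); /* rank by the original method, e.g. Eq. (8) */
void select_parents(const Individual feas[], int nFeas,
                    const Individual infeas[], int nInfeas,
                    Individual parents[], int numPar, int Sp);
void reproduce(const Individual parents[], int numPar,
               Individual pop[], int lambda);

/* One EA_FG generation, following the flowchart of Fig. 3. */
void ea_fg_generation(Individual pop[], int lambda,
                      Individual parents[], int numPar, int Sp, int gen)
{
    Individual feas[LAMBDA_MAX], infeas[LAMBDA_MAX];
    int nFeas = 0, nInfeas = 0;

    /* Feasibility-based grouping of the whole population. */
    for (int i = 0; i < lambda; ++i) {
        if (is_feasible(&pop[i]))
            feas[nFeas++] = pop[i];
        else
            infeas[nInfeas++] = pop[i];
    }

    /* Separate (conceptually parallel) evaluation and ranking. */
    rank_feasible(feas, nFeas);
    rank_infeasible(infeas, nInfeas, gen);

    /* Select the best individuals of both groups as parents;
       the counts follow Eqs. (10)-(12) below. */
    select_parents(feas, nFeas, infeas, nInfeas, parents, numPar, Sp);

    /* Reproduce: the parents generate the next population. */
    reproduce(parents, numPar, pop, lambda);
}
```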
The dynamic penalty method is a penalty method that has been investigated by many researchers. The stochastic ranking method is simple to implement and has shown good performance on a set of 13 benchmark problems. In the remainder of this paper, the EA_FG variants with the dynamic penalty method and with the stochastic ranking method are called 'EA_FG_D' and 'EA_FG_S', respectively.
The second aspect of EA_FG is handled as follows. Normally, the numbers of parents and of individuals in an evolutionary algorithm are predefined constants. For example, a (30, 200)-ES (evolution strategy [25]) has 30 parents and 200 individuals in the population. Since the evaluation and ranking of the feasible and infeasible groups in EA_FG are performed separately, a strategy is needed to decide how many feasible and infeasible individuals are selected as parents for reproduction. For this purpose, a parent selection strategy is proposed in which the number of feasible parents is set proportional to the number of feasible individuals through a positive integer parameter Sp. The numbers of feasible and infeasible parents are calculated as follows:

  (i) numFeaPar = round(numFeaInd / Sp),  (10)
  (ii) if numFeaPar > numPar, then numFeaPar = numPar,  (11)
  numInfeaPar = numPar - numFeaPar,  (12)

where numFeaPar denotes the 'number of feasible parents', numFeaInd the 'number of feasible individuals', numPar the 'number of parents', and numInfeaPar the 'number of infeasible parents'. Note that (11) restricts numFeaPar not to exceed the predefined number of parents (numPar). Once the numbers of feasible and infeasible parents are decided, the corresponding numbers of best individuals are selected from the feasible and infeasible groups, respectively, according to their rankings.

To explain this strategy more clearly, assume that a (30, 200)-ES is used. If Sp = 5 and numFeaInd = 20, then (10) gives numFeaPar = 4 and (12) gives numInfeaPar = 30 - 4 = 26; the best four individuals of the feasible group and the best 26 individuals of the infeasible group together form the 30 parents for reproduction. If instead Sp = 5 and numFeaInd = 180, (10) gives numFeaPar = 36, which exceeds numPar = 30; Eq. (11) restricts numFeaPar to 30, and by (12) numInfeaPar = 30 - 30 = 0, so all 30 parents come from the feasible group.

Fig. 4 shows the relation between numFeaPar and numFeaInd for EA_FG with a (30, 200)-ES, with Sp ranging from 1 to 6. For example, if Sp = 1 as in Fig. 4(a), all feasible individuals are selected as parents when numFeaInd < 30, with the rest taken from the infeasible group; when numFeaInd >= 30, all of the parents come from the feasible group by (10) and (11). In this case, the feasible group has reproduction priority over the infeasible group. As Sp increases, the reproduction priority of the feasible group decreases, so that more infeasible individuals are selected as parents.
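In code, the selection rule of Eqs. (10)-(12) amounts to a few lines; the following C function (our sketch, with our naming) implements it directly.

```c
#include <math.h>

/* Eqs. (10)-(12): numbers of feasible and infeasible parents.
   numPar is the predefined parent count (e.g. 30 in a (30, 200)-ES). */
void parent_counts(int numFeaInd, int numPar, int Sp,
                   int *numFeaPar, int *numInfeaPar)
{
    *numFeaPar = (int)round((double)numFeaInd / Sp);  /* Eq. (10) */
    if (*numFeaPar > numPar)                          /* Eq. (11) */
        *numFeaPar = numPar;
    *numInfeaPar = numPar - *numFeaPar;               /* Eq. (12) */
}
```

With Sp = 5, this gives (numFeaPar, numInfeaPar) = (4, 26) for numFeaInd = 20 and (30, 0) for numFeaInd = 180, reproducing the two worked cases above.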
Fig. 4. The relation between number of feasible parents (numFeaPar) and number of feasible individuals (numFeaInd) of EA_FG with (30, 200)-ES. (a) Sp = 1. (b) Sp = 2. (c) Sp = 3. (d) Sp = 4. (e) Sp = 5. (f) Sp = 6.
Example P1 is tested again with EA_FG_D, with Sp set to 5 and all other parameters the same as for the dynamic penalty method. Fig. 5 shows the parent distribution of Example P1 evolved with EA_FG_D. In contrast to Fig. 2, the parents selected by EA_FG_D surround the solution (0, 0) in both the feasible and infeasible regions. The offspring generated from these parents therefore have more chances to reach the solution.
3. Experimental studies

In this section, EA_FG_D and EA_FG_S are tested on 13 benchmark functions [7]. The details of these functions are listed in the Appendix, and general information about them is shown in Table 1. Problems G2, G3, G8 and G12 are maximization problems; they were transformed into minimization problems using $-f(\vec{x})$. Problems G3, G5, G11 and G13 include one or several equality constraints. All of these equality constraints were converted into inequality constraints $|h(\vec{x})| - \delta \le 0$, using the degree of violation δ = 0.0001. A (μ, λ)-ES was employed for the computation. For an impartial comparison, all parameters of the ES used here were the same as those of [7]. For each of the benchmark problems, 30 independent runs were performed using a (30, 200)-ES. The initial population of $\vec{x}$ was generated according to a uniform n-dimensional probability distribution over the search space S. Sp was set to 5. The simulation program was coded in C.
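The equality-to-inequality conversion is a one-liner in code; the following helper (our sketch, with our naming and constant) returns a value that is non-positive exactly when $|h(\vec{x})| \le \delta$.

```c
#include <math.h>

#define DELTA 0.0001  /* degree of violation delta used in the experiments */

/* |h(x)| - delta <= 0: inequality form of an equality constraint h(x) = 0. */
double eq_as_ineq(double h_value)
{
    return fabs(h_value) - DELTA;
}
```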
Fig. 5. Parents distribution of Example P1, evolved with EA_FG_D, Sp = 5. (Four panels show the parents at Generations 1, 100, 200 and 500; both axes range from -1 to 1.)
3.1. Experimental results of EA_FG with the dynamic penalty method (EA_FG_D)

Table 2 compares the simulation results of EA_FG_D and the dynamic penalty method. The table shows the known 'optimal' solution for each problem and statistics over the 30 independent runs, with the best, mean and worst values used as performance criteria. The 13 problems thus yield 13 x 3 = 39 criteria. Among these 39 criteria,

  - EA_FG_D performed better on 21 criteria,
  - EA_FG_D performed worse on five criteria, and
  - EA_FG_D performed the same on 13 criteria.
Table 1
Summary of the 13 benchmark functions [9]

Function | n  | Type of f   | ρ (%)  | LI | NE | NI  | a
G1       | 13 | Quadratic   | 0.011  | 9  | 0  | 0   | 6
G2       | 20 | Nonlinear   | 99.990 | 0  | 0  | 2   | 1
G3       | 10 | Polynomial  | 0.002  | 0  | 1  | 0   | 1
G4       | 5  | Quadratic   | 52.123 | 0  | 0  | 6   | 2
G5       | 4  | Cubic       | 0.000  | 2  | 3  | 0   | 3
G6       | 2  | Cubic       | 0.006  | 0  | 0  | 2   | 2
G7       | 10 | Quadratic   | 0.000  | 3  | 0  | 5   | 6
G8       | 2  | Nonlinear   | 0.856  | 0  | 0  | 2   | 0
G9       | 7  | Polynomial  | 0.512  | 0  | 0  | 4   | 2
G10      | 8  | Linear      | 0.001  | 3  | 0  | 3   | 3
G11      | 2  | Quadratic   | 0.000  | 0  | 1  | 0   | 1
G12      | 3  | Quadratic   | 4.779  | 0  | 0  | 9^3 | 0
G13      | 5  | Exponential | 0.000  | 0  | 3  | 0   | 3

n is the number of variables; ρ is the relative size of the feasible region in the search space; LI, NE and NI are the numbers of linear inequalities, nonlinear equalities and nonlinear inequalities, respectively; a is the number of active constraints at the optimum.
For problems G1, G8 and G11, both algorithms performed well and found the optimal solutions in all 30 runs. For problem G10, both algorithms failed to find the optimal solution. For problem G2, EA_FG_D provided results similar to those of the dynamic penalty method: better on the mean, but worse on the best and worst. For the remaining problems, EA_FG_D outperformed the dynamic penalty method on all but a few criteria: the worst of G6 and the mean and worst of G9. For problem G3, the best of the dynamic penalty method was 0.583, while the best of EA_FG_D reached the optimal value 1.000. For problems G4 and G6, EA_FG_D performed significantly better in terms of all three criteria: best, mean and worst. It should also be noted that for problem G5, EA_FG_D found a feasible solution in seven of the 30 runs, while the dynamic penalty method never found one.

3.2. Experimental results of EA_FG with the stochastic ranking method (EA_FG_S)

Table 3 compares EA_FG_S and the stochastic ranking method; the stochastic ranking parameter Pf was set to 0.45. Among the 39 criteria,

  - EA_FG_S performed better on 12 criteria,
  - EA_FG_S performed worse on six criteria, and
  - EA_FG_S performed the same on 20 criteria (both methods always reached the optimal values).
Table 2
Comparison between EA_FG_D and the dynamic penalty method (DP, data from [7])

Function | Optimal    | Best: EA_FG_D | Best: DP   | Mean: EA_FG_D | Mean: DP   | Worst: EA_FG_D | Worst: DP
G1       | -15.000    | -15.000       | -15.000    | -15.000       | -15.000    | -15.000        | -15.000
G2       | 0.803619   | 0.803542      | 0.803587   | 0.786685      | 0.784868   | 0.723099       | 0.751624
G3       | 1.000      | 1.000         | 0.583      | 0.134         | 0.103      | 0.009          | 0.001
G4       | -30665.539 | -30665.314    | -30365.488 | -30569.466    | -30072.458 | -30348.470     | -29871.442
G5(7,-)  | 5126.498   | 5289.147      | -          | 5509.922      | -          | 5714.789       | -
G6       | -6961.814  | -6955.392     | -6911.247  | -6578.087     | -6540.012  | -5282.978      | -5868.028
G7       | 24.306     | 24.308        | 24.309     | 24.364        | 24.421     | 24.503         | 25.534
G8       | 0.095825   | 0.095825      | 0.095825   | 0.095825      | 0.095825   | 0.095825       | 0.095825
G9       | 680.630    | 680.631       | 680.632    | 680.662       | 680.659    | 680.869        | 680.775
G10      | 7049.331   | -             | -          | -             | -          | -              | -
G11      | 0.750      | 0.750         | 0.750      | 0.750         | 0.750      | 0.750          | 0.750
G12      | 1.000000   | 1.000000      | 1.000000   | 0.999993      | 0.999838   | 0.999937       | 0.999573
G13      | 0.053950   | 0.440485      | 0.514152   | 0.950496      | 0.965397   | 0.998079       | 0.998156

The subscript 7 in the function name G5(7,-) indicates that seven feasible solutions were found among the 30 runs; '-' means no feasible solution was found.
Table 3
Comparison between EA_FG_S and the stochastic ranking method (SR, data from [7])

Function | Optimal    | Best: EA_FG_S | Best: SR   | Mean: EA_FG_S | Mean: SR   | Worst: EA_FG_S | Worst: SR
G1       | -15.000    | -15.000       | -15.000    | -15.000       | -15.000    | -15.000        | -15.000
G2       | 0.803619   | 0.803503      | 0.803515   | 0.790615      | 0.781975   | 0.740292       | 0.726288
G3       | 1.000      | 1.000         | 1.000      | 0.999         | 1.000      | 0.998          | 1.000
G4       | -30665.539 | -30665.539    | -30665.539 | -30665.539    | -30665.539 | -30665.539     | -30665.539
G5       | 5126.498   | 5126.497      | 5126.497   | 5128.664      | 5128.881   | 5150.440       | 5142.472
G6       | -6961.814  | -6961.814     | -6961.814  | -6894.914     | -6875.940  | -6453.066      | -6350.262
G7       | 24.306     | 24.307        | 24.307     | 24.323        | 24.374     | 24.379         | 25.642
G8       | 0.095825   | 0.095825      | 0.095825   | 0.095825      | 0.095825   | 0.095825       | 0.095825
G9       | 680.630    | 680.630       | 680.630    | 680.635       | 680.656    | 680.648        | 680.763
G10      | 7049.331   | 7050.965      | 7054.316   | 7181.072      | 7559.192   | 7526.509       | 8835.655
G11      | 0.750      | 0.750         | 0.750      | 0.750         | 0.750      | 0.750          | 0.750
G12      | 1.000000   | 1.000000      | 1.000000   | 1.000000      | 1.000000   | 1.000000       | 1.000000
G13      | 0.053950   | 0.053947      | 0.053957   | 0.080335      | 0.067543   | 0.438829       | 0.216915
For problems G1, G4, G8, G11 and G12, both algorithms performed well and found the optimal solutions in all 30 runs. For problem G2, the best of EA_FG_S was slightly worse than that of the stochastic ranking method, while the mean and worst were much better. For problem G3, the stochastic ranking method was more consistent. For the remaining problems, EA_FG_S performed better than the stochastic ranking method except on a few criteria: the worst of G5 and the mean and worst of G13. For problems G5 and G13, the best results of EA_FG_S were even better than the optimal solution; this is a consequence of converting the equality constraints into inequality constraints, even though a very small δ was used. For problems G6, G7 and G9, the two algorithms found the same best values, but EA_FG_S performed much better on the other two criteria, mean and worst. For problem G10, EA_FG_S performed significantly better than the stochastic ranking method on all criteria: the best, mean and worst of the stochastic ranking method were 7054.316, 7559.192 and 8835.655, while the corresponding values of EA_FG_S were 7050.965, 7181.072 and 7526.509. Notably, the worst of EA_FG_S was even better than the mean of the stochastic ranking method.

3.3. Experimental results on different Sp

In the preceding parts of this section, EA_FG_D and EA_FG_S were tested with Sp = 5. In order to evaluate the effect of Sp on the results generated by the proposed algorithm, the suite of 13 constrained benchmark problems was tested with different Sp values. Considering the capability of the stochastic ranking method for solving numerical constrained optimization problems, EA_FG_S was used. All the parameters of EA_FG_S were kept the same as in the previous experiments except the value of Sp, for which four values, Sp = 1, 3, 5 and 7, were tried. To give a direct impression of the influence of Sp, the best results of the 30 runs found by EA_FG_S with the different Sp values are collected in Table 4, with the best of the bests (closest to the optimal value) for each problem marked with an asterisk. Table 5 shows the optimal Sp for each problem.

The results imply that the best parent selection strategy (the best Sp) can differ from problem to problem, even though the same ranking process (Pf = 0.45) is performed. Since Sp = 3 and Sp = 5 provide the best results for most test functions with a (30, 200)-ES, we suggest choosing Sp in the range [3, 5]. If no value in [3, 5] provides satisfactory results, the user may try nearby values such as 2 or 6; 1 and 7 are the last choices.

In summary, the numerical experiments with the benchmark problems suggest that EA_FG provides an efficient way to improve the performance of evolutionary algorithms for numerical constrained optimization problems.
Table 4
Best solutions in 30 runs found by EA_FG_S with different Sp values; Pf = 0.45

Function | Optimal    | Sp = 1      | Sp = 3      | Sp = 5      | Sp = 7
G1       | -15.000    | -15.000*    | -15.000*    | -15.000*    | -15.000*
G2       | 0.803619   | 0.803579    | 0.803590*   | 0.803503    | 0.803328
G3       | 1.000      | 1.000*      | 1.000*      | 1.000*      | 1.000*
G4       | -30665.539 | -30665.539* | -30665.539* | -30665.539* | -30665.539*
G5       | 5126.498   | 5126.497*   | 5126.497*   | 5126.497*   | 5126.510
G6       | -6961.814  | -6961.814*  | -6961.814*  | -6961.814*  | -6961.814*
G7       | 24.306     | 24.309      | 24.306*     | 24.307      | 24.307
G8       | 0.095825   | 0.095825*   | 0.095825*   | 0.095825*   | 0.095825*
G9       | 680.630    | 680.631     | 680.632     | 680.630*    | 680.630*
G10      | 7049.331   | 7059.975    | 7051.999    | 7050.965*   | 7051.459
G11      | 0.750      | 0.750*      | 0.750*      | 0.750*      | 0.750*
G12      | 1.000000   | 1.000000*   | 1.000000*   | 1.000000*   | 1.000000*
G13      | 0.053950   | 0.053945    | 0.053942*   | 0.053947    | 0.053942*

The best result for each problem is marked with an asterisk.
Table 5
Sp values giving the best of the bests

Function | G1  | G2 | G3  | G4  | G5      | G6  | G7 | G8  | G9   | G10 | G11 | G12 | G13
Sp       | All | 3  | All | All | 1, 3, 5 | All | 3  | All | 5, 7 | 5   | All | All | 3, 7

'All' means every Sp performed the same.
4. Conclusion

In this paper, we proposed a novel constraint handling technique: an evolutionary algorithm using feasibility-based grouping. The method divides the population into a feasible group and an infeasible group according to the feasibility of the individuals; the evaluation and ranking of the two groups are performed in parallel and separately. Parent selection from the two groups is tuned by a single parameter Sp, which determines the numbers of feasible and infeasible parents. Two existing evolutionary algorithms, the dynamic penalty method and the stochastic ranking method, were modified as EA_FG_D and EA_FG_S to evaluate and rank the infeasible group. Both EA_FG_D and EA_FG_S were tested on a set of 13 benchmark problems and showed improved performance. The influence of Sp on the performance of EA_FG was also investigated with different values (1, 3, 5 and 7).
Appendix

Problem G1: Minimize

  $f(\vec{x}) = 5\sum_{j=1}^{4} x_j - 5\sum_{j=1}^{4} x_j^2 - \sum_{j=5}^{13} x_j$

subject to

  $g_1(\vec{x}) = 2x_1 + 2x_2 + x_{10} + x_{11} - 10 \le 0$,
  $g_2(\vec{x}) = 2x_1 + 2x_3 + x_{10} + x_{12} - 10 \le 0$,
  $g_3(\vec{x}) = 2x_2 + 2x_3 + x_{11} + x_{12} - 10 \le 0$,
  $g_4(\vec{x}) = -8x_1 + x_{10} \le 0$,
  $g_5(\vec{x}) = -8x_2 + x_{11} \le 0$,
  $g_6(\vec{x}) = -8x_3 + x_{12} \le 0$,
  $g_7(\vec{x}) = -2x_4 - x_5 + x_{10} \le 0$,
  $g_8(\vec{x}) = -2x_6 - x_7 + x_{11} \le 0$,
  $g_9(\vec{x}) = -2x_8 - x_9 + x_{12} \le 0$,

and bounds $0 \le x_j \le 1$ ($j = 1, \ldots, 9$), $0 \le x_j \le 100$ ($j = 10, 11, 12$), $0 \le x_{13} \le 1$.

The global minimum is at $\vec{x}^* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1)$, where $f(\vec{x}^*) = -15$.
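To indicate how such a benchmark might be encoded for use with EA_FG, the following C functions (our sketch; the array-based interface is ours, not the paper's) evaluate the objective and the nine inequality constraints of G1, with x[0], ..., x[12] standing for x1, ..., x13.

```c
/* Objective of G1: f(x) = 5*sum_{j=1}^{4} x_j - 5*sum_{j=1}^{4} x_j^2
   - sum_{j=5}^{13} x_j. */
double g1_objective(const double x[13])
{
    double s = 0.0;
    for (int j = 0; j < 4; ++j)
        s += 5.0 * x[j] - 5.0 * x[j] * x[j];
    for (int j = 4; j < 13; ++j)
        s -= x[j];
    return s;
}

/* The nine inequality constraints g_k(x) <= 0 of G1. */
void g1_constraints(const double x[13], double g[9])
{
    g[0] = 2*x[0] + 2*x[1] + x[9]  + x[10] - 10;
    g[1] = 2*x[0] + 2*x[2] + x[9]  + x[11] - 10;
    g[2] = 2*x[1] + 2*x[2] + x[10] + x[11] - 10;
    g[3] = -8*x[0] + x[9];
    g[4] = -8*x[1] + x[10];
    g[5] = -8*x[2] + x[11];
    g[6] = -2*x[3] - x[4] + x[9];
    g[7] = -2*x[5] - x[6] + x[10];
    g[8] = -2*x[7] - x[8] + x[11];
}
```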
Problem G2: Maximize

  $f(\vec{x}) = \left| \dfrac{\sum_{j=1}^{n} \cos^4(x_j) - 2\prod_{j=1}^{n} \cos^2(x_j)}{\sqrt{\sum_{j=1}^{n} j x_j^2}} \right|$

subject to

  $g_1(\vec{x}) = 0.75 - \prod_{j=1}^{n} x_j \le 0$,
  $g_2(\vec{x}) = \sum_{j=1}^{n} x_j - 7.5n \le 0$,

and bounds $0 \le x_j \le 10$ ($j = 1, \ldots, n$), where n = 20. The global maximum is unknown; the best known solution is $f(\vec{x}^*) = 0.803619$.
Problem G3: Maximize

  $f(\vec{x}) = (\sqrt{n})^n \prod_{j=1}^{n} x_j$

subject to $h_1(\vec{x}) = \sum_{j=1}^{n} x_j^2 - 1 = 0$ and bounds $0 \le x_j \le 1$ ($j = 1, \ldots, n$), where n = 10. The global maximum is at $x_j^* = 1/\sqrt{n}$ ($j = 1, \ldots, n$), where $f(\vec{x}^*) = 1$.

Problem G4: Minimize

  $f(\vec{x}) = 5.3578547x_3^2 + 0.8356891x_1 x_5 + 37.293239x_1 - 40792.141$

subject to

  $g_1(\vec{x}) = 85.334407 + 0.0056858x_2 x_5 + 0.0006262x_1 x_4 - 0.0022053x_3 x_5 - 92 \le 0$,
  $g_2(\vec{x}) = -85.334407 - 0.0056858x_2 x_5 - 0.0006262x_1 x_4 + 0.0022053x_3 x_5 \le 0$,
  $g_3(\vec{x}) = 80.51249 + 0.0071317x_2 x_5 + 0.0029955x_1 x_2 + 0.0021813x_3^2 - 110 \le 0$,
  $g_4(\vec{x}) = -80.51249 - 0.0071317x_2 x_5 - 0.0029955x_1 x_2 - 0.0021813x_3^2 + 90 \le 0$,
  $g_5(\vec{x}) = 9.300961 + 0.0047026x_3 x_5 + 0.0012547x_1 x_3 + 0.0019085x_3 x_4 - 25 \le 0$,
  $g_6(\vec{x}) = -9.300961 - 0.0047026x_3 x_5 - 0.0012547x_1 x_3 - 0.0019085x_3 x_4 + 20 \le 0$,

and bounds $78 \le x_1 \le 102$, $33 \le x_2 \le 45$, $27 \le x_j \le 45$ ($j = 3, 4, 5$).

The optimal solution is at $\vec{x}^* = (78, 33, 29.995256025682, 45, 36.775812905788)$, where $f(\vec{x}^*) = -30665.539$.
Problem G5: Minimize

  $f(\vec{x}) = 3x_1 + 0.000001x_1^3 + 2x_2 + (0.000002/3)x_2^3$

subject to

  $g_1(\vec{x}) = -x_4 + x_3 - 0.55 \le 0$,
  $g_2(\vec{x}) = -x_3 + x_4 - 0.55 \le 0$,
  $h_1(\vec{x}) = 1000\sin(-x_3 - 0.25) + 1000\sin(-x_4 - 0.25) + 894.8 - x_1 = 0$,
  $h_2(\vec{x}) = 1000\sin(x_3 - 0.25) + 1000\sin(x_3 - x_4 - 0.25) + 894.8 - x_2 = 0$,
  $h_3(\vec{x}) = 1000\sin(x_4 - 0.25) + 1000\sin(x_4 - x_3 - 0.25) + 1294.8 = 0$,

and bounds $0 \le x_1 \le 1200$, $0 \le x_2 \le 1200$, $-0.55 \le x_3 \le 0.55$, $-0.55 \le x_4 \le 0.55$.

The best known solution is at $\vec{x}^* = (679.9453, 1026.067, 0.1188764, -0.3962336)$, where $f(\vec{x}^*) = 5126.4981$.
Problem G6: Minimize

  $f(\vec{x}) = (x_1 - 10)^3 + (x_2 - 20)^3$

subject to

  $g_1(\vec{x}) = -(x_1 - 5)^2 - (x_2 - 5)^2 + 100 \le 0$,
  $g_2(\vec{x}) = (x_1 - 6)^2 + (x_2 - 5)^2 - 82.81 \le 0$,

and bounds $13 \le x_1 \le 100$, $0 \le x_2 \le 100$.

The known global solution is $\vec{x}^* = (14.095, 0.84296)$, where $f(\vec{x}^*) = -6961.81388$.
Problem G7: Minimize

  $f(\vec{x}) = x_1^2 + x_2^2 + x_1 x_2 - 14x_1 - 16x_2 + (x_3 - 10)^2 + 4(x_4 - 5)^2 + (x_5 - 3)^2 + 2(x_6 - 1)^2 + 5x_7^2 + 7(x_8 - 11)^2 + 2(x_9 - 10)^2 + (x_{10} - 7)^2 + 45$

subject to

  $g_1(\vec{x}) = -105 + 4x_1 + 5x_2 - 3x_7 + 9x_8 \le 0$,
  $g_2(\vec{x}) = 10x_1 - 8x_2 - 17x_7 + 2x_8 \le 0$,
  $g_3(\vec{x}) = -8x_1 + 2x_2 + 5x_9 - 2x_{10} - 12 \le 0$,
  $g_4(\vec{x}) = 3(x_1 - 2)^2 + 4(x_2 - 3)^2 + 2x_3^2 - 7x_4 - 120 \le 0$,
  $g_5(\vec{x}) = 5x_1^2 + 8x_2 + (x_3 - 6)^2 - 2x_4 - 40 \le 0$,
  $g_6(\vec{x}) = x_1^2 + 2(x_2 - 2)^2 - 2x_1 x_2 + 14x_5 - 6x_6 \le 0$,
  $g_7(\vec{x}) = 0.5(x_1 - 8)^2 + 2(x_2 - 4)^2 + 3x_5^2 - x_6 - 30 \le 0$,
  $g_8(\vec{x}) = -3x_1 + 6x_2 + 12(x_9 - 8)^2 - 7x_{10} \le 0$,

and bounds $-10 \le x_j \le 10$ ($j = 1, \ldots, 10$).

The optimal solution is at $\vec{x}^* = (2.171996, 2.363683, 8.773926, 5.095984, 0.9906548, 1.430574, 1.321644, 9.828726, 8.280092, 8.375927)$, where $f(\vec{x}^*) = 24.3062091$.
Problem G8: Maximize

  $f(\vec{x}) = \dfrac{\sin^3(2\pi x_1)\,\sin(2\pi x_2)}{x_1^3 (x_1 + x_2)}$

subject to

  $g_1(\vec{x}) = x_1^2 - x_2 + 1 \le 0$,
  $g_2(\vec{x}) = 1 - x_1 + (x_2 - 4)^2 \le 0$,

and bounds $0 \le x_1 \le 10$, $0 \le x_2 \le 10$.

The optimal solution is at $\vec{x}^* = (1.2279713, 4.2453733)$, where $f(\vec{x}^*) = 0.095825$.
Problem G9: Minimize

  $f(\vec{x}) = (x_1 - 10)^2 + 5(x_2 - 12)^2 + x_3^4 + 3(x_4 - 11)^2 + 10x_5^6 + 7x_6^2 + x_7^4 - 4x_6 x_7 - 10x_6 - 8x_7$

subject to

  $g_1(\vec{x}) = -127 + 2x_1^2 + 3x_2^4 + x_3 + 4x_4^2 + 5x_5 \le 0$,
  $g_2(\vec{x}) = -282 + 7x_1 + 3x_2 + 10x_3^2 + x_4 - x_5 \le 0$,
  $g_3(\vec{x}) = -196 + 23x_1 + x_2^2 + 6x_6^2 - 8x_7 \le 0$,
  $g_4(\vec{x}) = 4x_1^2 + x_2^2 - 3x_1 x_2 + 2x_3^2 + 5x_6 - 11x_7 \le 0$,

and bounds $-10 \le x_j \le 10$ ($j = 1, \ldots, 7$).

The known global solution is at $\vec{x}^* = (2.330499, 1.951372, -0.4775414, 4.365726, -0.6244870, 1.038131, 1.594227)$, where $f(\vec{x}^*) = 680.6300573$.
Problem G10: Minimize

  $f(\vec{x}) = x_1 + x_2 + x_3$

subject to

  $g_1(\vec{x}) = -1 + 0.0025(x_4 + x_6) \le 0$,
  $g_2(\vec{x}) = -1 + 0.0025(x_5 + x_7 - x_4) \le 0$,
  $g_3(\vec{x}) = -1 + 0.01(x_8 - x_5) \le 0$,
  $g_4(\vec{x}) = -x_1 x_6 + 833.33252x_4 + 100x_1 - 83333.333 \le 0$,
  $g_5(\vec{x}) = -x_2 x_7 + 1250x_5 + x_2 x_4 - 1250x_4 \le 0$,
  $g_6(\vec{x}) = -x_3 x_8 + 1250000 + x_3 x_5 - 2500x_5 \le 0$,

and bounds $100 \le x_1 \le 10000$, $1000 \le x_j \le 10000$ ($j = 2, 3$), $10 \le x_j \le 1000$ ($j = 4, \ldots, 8$).

The optimal solution is at $\vec{x}^* = (579.3167, 1359.943, 5110.071, 182.0174, 295.5985, 217.9799, 286.4162, 395.5979)$, where $f(\vec{x}^*) = 7049.3307$.
Problem G11: Minimize

  $f(\vec{x}) = x_1^2 + (x_2 - 1)^2$

subject to $h_1(\vec{x}) = x_2 - x_1^2 = 0$ and bounds $-1 \le x_1 \le 1$, $-1 \le x_2 \le 1$.

The optimal solution is at $\vec{x}^* = (\pm 1/\sqrt{2},\, 1/2)$, where $f(\vec{x}^*) = 0.75$.
Problem G12: Maximize

  $f(\vec{x}) = (100 - (x_1 - 5)^2 - (x_2 - 5)^2 - (x_3 - 5)^2)/100$

subject to

  $g(\vec{x}) = (x_1 - p)^2 + (x_2 - q)^2 + (x_3 - r)^2 - 0.0625 \le 0$,

and bounds $0 \le x_j \le 10$ ($j = 1, 2, 3$), where $p, q, r = 1, 2, \ldots, 9$. The feasible region of the search space consists of $9^3$ disjoint spheres; a point $(x_1, x_2, x_3)$ is feasible if and only if there exist p, q, r such that the above inequality holds. The optimal solution is at $\vec{x}^* = (5, 5, 5)$, where $f(\vec{x}^*) = 1$.
Problem G13: Minimize

  $f(\vec{x}) = e^{x_1 x_2 x_3 x_4 x_5}$

subject to

  $h_1(\vec{x}) = x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_5^2 - 10 = 0$,
  $h_2(\vec{x}) = x_2 x_3 - 5x_4 x_5 = 0$,
  $h_3(\vec{x}) = x_1^3 + x_2^3 + 1 = 0$,

and bounds $-2.3 \le x_j \le 2.3$ ($j = 1, 2$), $-3.2 \le x_j \le 3.2$ ($j = 3, 4, 5$).

The optimal solution is at $\vec{x}^* = (-1.717143, 1.595709, 1.827247, -0.7636413, -0.763645)$, where $f(\vec{x}^*) = 0.0539498$.

References

[1] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation 4 (1) (1996) 1–31.
[2] J.-H. Kim, H. Myung, Evolutionary programming techniques for constrained optimization problems, IEEE Transactions on Evolutionary Computation 1 (2) (1997) 129–140.
[3] M.J. Tahk, B.C. Sun, Coevolutionary augmented Lagrangian methods for constrained optimization, IEEE Transactions on Evolutionary Computation 4 (2) (2000) 114–124.
[4] R. Hinterding, Z. Michalewicz, Your brains and my beauty: parent matching for constrained optimisation, in: Proc. 5th IEEE Int. Conf. Evolutionary Computation, Anchorage, AK, 1998, pp. 810–815.
[5] S. Koziel, Z. Michalewicz, Evolutionary algorithms, homomorphous mapping and constrained parameter optimization, Evolutionary Computation 7 (1) (1999) 19–44.
[6] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, third ed., Springer, 1999.
[7] T.P. Runarsson, X. Yao, Stochastic ranking for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation 4 (3) (2000) 284–294.
[8] T.P. Runarsson, X. Yao, Evolutionary search and constraint violations, in: Proc. Conf. Evolutionary Computation 2003, Canberra, Australia, vol. 2, 2003, pp. 1414–1419.
[9] T.P. Runarsson, X. Yao, Search biases in constrained evolutionary optimization, IEEE Transactions on Systems, Man and Cybernetics, Part C 35 (2) (2005) 233–243.
[10] S.A. Kazarlis, S.E. Papadakis, J.B. Theocharis, V. Petridis, Microgenetic algorithms as generalized hill-climbing operators for GA optimization, IEEE Transactions on Evolutionary Computation 5 (3) (2001) 204–217.
[11] T. Ray, K.M. Liew, Society and civilization: an optimization algorithm based on the simulation of social behavior, IEEE Transactions on Evolutionary Computation 7 (4) (2003) 386–396.
[12] R. Farmani, J.A. Wright, Self-adaptive fitness formulation for constrained optimization, IEEE Transactions on Evolutionary Computation 7 (5) (2003) 445–455.
[13] F. Hoffmeister, J. Sprave, Problem-independent handling of constraints by use of metric penalty functions, in: L. Fogel, P. Angeline, T. Bäck (Eds.), Proc. 5th Annual Conf. Evolutionary Programming, MIT Press, Cambridge, MA, 1996, pp. 289–294.
[14] Z. Michalewicz, G. Nazhiyath, Genocop III: a co-evolutionary algorithm for numerical optimization problems with nonlinear constraints, in: D.B. Fogel et al. (Eds.), Proc. 2nd IEEE Int. Conf. Evolutionary Computation, Piscataway, NJ, 1995, pp. 647–651.
[15] S.B. Hamida, M. Schoenauer, An adaptive algorithm for constrained optimization problems, in: M. Schoenauer et al. (Eds.), Proc. 6th Conf. Parallel Problem Solving from Nature, Springer-Verlag, 2000, pp. 529–538.
[16] S.B. Hamida, M. Schoenauer, ASCHEA: new results using adaptive segregational constraint handling, in: Proc. Cong. Evolutionary Computation 2002, 2002, pp. 884–889.
[17] E.-M. Montes, C.A. Coello Coello, An improved diversity mechanism for solving constrained optimization problems using a multimembered evolution strategy, in: K. Deb et al. (Eds.), Proc. Genetic and Evolutionary Computation Conf. 2004, Lecture Notes in Computer Science, vol. 3102, Springer-Verlag, 2004, pp. 700–712.
[18] S.O. Kimbrough, M. Lu, D.H. Wood, D.J. Wu, Exploring a two-population genetic algorithm, in: Proc. Genetic and Evolutionary Computation Conf. 2003, Lecture Notes in Computer Science, vol. 2723, Springer-Verlag, 2003, pp. 1148–1159.
[19] S.O. Kimbrough, M. Lu, D.H. Wood, Exploring the evolutionary details of a feasible-infeasible two-population GA, in: X. Yao et al. (Eds.), Proc. 8th Parallel Problem Solving From Nature, Lecture Notes in Computer Science, vol. 3243, Springer-Verlag, 2004, pp. 292–301.
[20] M. Yuchi, J.-H. Kim, A grouping-based evolutionary algorithm for constrained optimization problem, in: Proc. Cong. Evolutionary Computation 2003, Canberra, Australia, 2003, pp. 1507–1512.
[21] M. Yuchi, J.-H. Kim, Grouping-based evolutionary algorithm: seeking balance between feasible and infeasible individuals of constrained optimization problems, in: Proc. Cong. Evolutionary Computation 2004, Portland, OR, 2004, pp. 280–287.
[22] M. Yuchi, J.-H. Kim, Grouping-based evolutionary algorithm improves the performance of dynamic penalty method for constrained optimization problems, in: Proc. 5th Int. Conf. Simulated Evolution and Learning, Busan, Korea, 2004.
[23] M. Yuchi, J.-H. Kim, Ecology-inspired evolutionary algorithm using feasibility-based grouping for constrained optimization, in: Proc. Cong. Evolutionary Computation 2005, Edinburgh, UK, 2005, pp. 1455–1461.
[24] J. Joines, C. Houck, On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs, in: Z. Michalewicz et al. (Eds.), Proc. 1st IEEE Int. Conf. Evolutionary Computation, Orlando, FL, 1994, pp. 579–584.
[25] H.-G. Beyer, H.-P. Schwefel, Evolution strategies – a comprehensive introduction, Natural Computing 1 (2002) 3–52.