Expert Systems with Applications 40 (2013) 5621–5634
An improved electromagnetism-like mechanism algorithm for constrained optimization
Chunjiang Zhang, Xinyu Li, Liang Gao *, Qing Wu
The State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science & Technology, Wuhan, Hubei 430074, PR China
Keywords: Constrained optimization; Electromagnetism-like mechanism algorithm; Feasibility and dominance rules
Abstract

Many problems in scientific research and engineering applications can be formulated as constrained optimization problems. Most of them are nonlinear programming problems that are very hard to solve by traditional methods. In this paper, the electromagnetism-like mechanism (EM) algorithm, a meta-heuristic, is improved for these problems. Firstly, some modifications are made to improve the performance of the EM algorithm: the process of calculating the total force is simplified, and an improved total force formula is adopted to accelerate the search for the optimal solution. To improve the accuracy of the EM algorithm, a parameter called the move probability is introduced into the move formula, where an elitist strategy is also adopted. Then, to handle the constraints, the feasibility and dominance rules are introduced and a corresponding charge formula is used to bias feasible solutions over infeasible ones. Finally, 13 classical functions, three engineering design problems and 22 benchmark functions from CEC'06 are tested to illustrate the performance of the proposed algorithm. Numerical results show that, compared with other versions of the EM algorithm and other state-of-the-art algorithms, the improved EM algorithm has higher accuracy and efficiency for constrained optimization problems.

© 2013 Elsevier Ltd. All rights reserved.
1. Introduction

Constrained global optimization problems are frequently encountered in science and engineering, for example in structural optimization, engineering design, VLSI design, economics, and management science (Kayhan, Ceylan, Ayvaz, & Gurarslan, 2010). Generally, these problems can be formulated mathematically as follows:
min f(X)
s.t.  g_k(X) ≤ 0,  k = 1, 2, ..., m
      h_p(X) = 0,  p = 1, 2, ..., l        (1)
X ∈ [L, U],  [L, U] := {X ∈ R^n | l_k ≤ x_k ≤ u_k, k = 1, ..., n}

where f, g, h are real-valued functions; X = (x1, x2, ..., xn) is a solution vector; [L, U] is an n-dimensional space with lower and upper bounds; and m and l are the numbers of inequality and equality constraints, respectively. Normally, an equality constraint is treated as an inequality constraint |h(X)| − ε ≤ 0 when solving constrained optimization problems, where the tolerance ε is a very small positive value.

* Corresponding author. Tel.: +86 27 87557742; fax: +86 27 87543074. E-mail address: [email protected] (L. Gao). http://dx.doi.org/10.1016/j.eswa.2013.04.028

When the objective function and the constraints are high-dimensional, non-convex, or non-differentiable, or have many local minima, the problem is very hard to solve by traditional methods such as the augmented Lagrange multiplier method, the sequential quadratic programming (SQP) algorithm and the generalized reduced gradient algorithm. Recently, evolutionary algorithms incorporating constraint-handling techniques have become very popular for constrained optimization problems. Methods for dealing with constraints can be classified into the following categories (Takahama & Sakai, 2005): (1) Methods based on preserving feasible solutions. When a new search point generated during the search is infeasible, the point is repaired or discarded. El-Gallad, El-Hawary, and Sallam (2001) proposed a method in which an infeasible point is replaced by the best visited point. In practice, the main difficulty in this category is generating a population of feasible points when the feasible region is small; in particular, it is almost impossible to find initial feasible points when the problem contains equality constraints. (2) Methods based on penalty functions. A penalty term, the sum of the violations of all constraint functions, is combined with the objective function. The original
constrained problem is thus solved as an unconstrained one. The main difficulty here is setting the penalty parameter: if the penalty parameter is large, feasible solutions of low accuracy are obtained; if it is small, high-quality but infeasible solutions are obtained. Several methods of updating the penalty parameter dynamically have been proposed, but it is difficult to determine a general control model because of problem dependence. (3) Methods based on biasing feasible over infeasible solutions. The constraint violation and the objective function are used separately and optimized in an order in which the constraint violation precedes the objective function. Deb (2000) proposed three simple rules for tournament selection in genetic algorithms by which two solutions are compared. These three rules, the so-called feasibility and dominance (FAD) rules, drive points from the infeasible region toward the feasible region and have been used widely in the literature. Runarsson and Yao (2000) proposed a stochastic ranking method using a stochastic order that ignores the constraint violation with some probability. These methods are nowadays among the most used strategies for handling constraints. (4) Methods based on multi-objective optimization concepts. The constrained optimization problem is solved as a multi-objective optimization problem by treating the constraint violation as another objective function (Aguirre, Rionda, Coello, Lizárraga, & Montes, 2004). However, solving multi-objective optimization problems is more difficult and expensive than solving single-objective ones. What is more, Runarsson and Yao (2005) showed through theoretical analysis and experimental studies that the unbiased multi-objective approach to constraint handling may not be effective.
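As a minimal illustration of the penalty idea in category (2), consider the following toy sketch; the problem, the function names, and the penalty weight r are our own and not from the paper:

```python
# Hypothetical toy problem: minimize f(x) = x0^2 + x1^2
# subject to g(x) = 1 - x0 - x1 <= 0.
def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):
    return 1.0 - x[0] - x[1]

def penalized(x, r=1000.0):
    # Static penalty: the objective plus a weighted sum of constraint
    # violations, so the constrained problem is optimized as an
    # unconstrained one.
    return f(x) + r * max(g(x), 0.0)

# A feasible point pays no penalty; an infeasible one is pushed away.
feasible_cost = penalized([0.6, 0.6])     # equals f([0.6, 0.6]); no penalty
infeasible_cost = penalized([0.2, 0.2])   # f plus r times the violation 0.6
```

The difficulty noted above shows up directly in this sketch: the outcome depends on the weight r, which must be tuned per problem.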
In this paper, a constraint-handling technique that biases feasible over infeasible solutions is implemented in the improved electromagnetism-like mechanism (EM) algorithm. The EM algorithm, a population-based meta-heuristic search algorithm for unconstrained optimization problems, was first proposed by Birbil and Fang (2003). It has been applied successfully to many optimization problems, such as neural network training (Wu, Yang, & Wei, 2004), circle detection (Cuevas, Oliva, Zaldivar, Marco, & Sossa, 2012), PID controller optimization (Lee & Chang, 2010), job shop scheduling (Tavakkoli-Moghaddam, Khalili, & Naderi, 2009), the unicost set covering problem (Naji-Azimi, Toth, & Galli, 2010), and feature selection (Su & Lin, 2011). These applications show that the EM algorithm is a powerful and promising global optimization method. Based on the literature review, there exist five papers on the EM algorithm for constrained optimization problems. Birbil (2002) used a simple penalty-based method to handle constraints in the EM algorithm. Rocha and Fernandes presented the EM algorithm for constrained optimization problems in three papers (2008a, 2008b, 2009a). The first presented the use of the feasibility and dominance (FAD) rules in the EM algorithm (Rocha & Fernandes, 2008a). The second incorporated an elite-based local search in the EM algorithm for engineering optimization problems, again using the FAD rules (Rocha & Fernandes, 2008b). A self-adaptive penalty approach for dealing with constraints within the EM algorithm was proposed in the third paper (Rocha & Fernandes, 2009a). Since the FAD rules are simple and efficient for constraint handling, they were also applied in Ali's paper, where another EM algorithm was applied to nonlinearly constrained global optimization (Ali & Golalikhani, 2010). The highlight of Ali's paper was a charge calculation based on both the function value and the total constraint violation. The results on 13 benchmark test problems proved that Ali's method outperforms other versions of EM for constrained problems (Ali & Golalikhani, 2010). However, the efficiency of the EM algorithm is not satisfactory, because the process of calculating the total force is rather complicated, and its precision is limited. For example, the constrained EM version proposed by Ali still cannot catch up with some other state-of-the-art algorithms, such as particle swarm optimization (PSO) algorithms. To further improve the precision and efficiency of the EM algorithm for constrained optimization, modifications must be made to the algorithm itself or to the constraint-handling technique. Much good work on modifying the EM algorithm has been reported. Rocha and Fernandes (2008c, 2009b) modified the calculation of the charge and introduced a pattern-search-based local search. They also proposed a modification of the calculation of the total force vector (Rocha & Fernandes, 2009c). Gol-Alikhani, Javadian, and Tavakkoli-Moghaddam (2009) investigated a hybridization of the EM algorithm with the Solis and Wets local search algorithm, which improves the accuracy of the EM algorithm to some extent. But these modifications contributed nothing to the simplification of the EM algorithm; some of them, on the contrary, made it more complicated. Therefore, this paper tries to simplify the process of the EM algorithm and improve its performance. Firstly, a simplified calculation of the total force vector is proposed to reduce the complexity of the algorithm. Then, a movement probability is introduced to modify the move formula, where an elitist strategy is adopted. The remainder of the paper is organized as follows. The improved EM algorithm is introduced in Section 2. The constraint-handling techniques and the flowchart are introduced in Section 3. Section 4 presents experimental results on 13 benchmark problems, three engineering design problems and 22 benchmark functions from CEC'06. Finally, conclusions and suggestions for future work are provided in Section 5.
The EM algorithm was first proposed for unconstrained optimization problems (Birbil & Fang, 2003) and cannot be applied to constrained optimization problems directly. Therefore, to use the EM algorithm for constrained optimization problems, some improvements are first made in Sections 2.2 and 2.3 to achieve both high accuracy and high efficiency in the EM algorithm. Secondly, constraint-handling methods, including the FAD rules and a new charge calculation formula, must be added to the EM algorithm; these are introduced in Section 3. The next section briefly introduces the original EM algorithm.
2. The improved electromagnetism-like mechanism algorithm
2.1. The original EM algorithm

The algorithm imitates the attraction-repulsion mechanism of electromagnetism theory, hence the name electromagnetism-like mechanism algorithm. A solution in the EM algorithm can be seen as a charged particle in the search space, and its charge relates to the objective function value. An electromagnetic force exists between every two particles: under this force, the particle with more charge attracts the other, and the other repulses the former. The charge also determines the magnitude of attraction or repulsion: the better the objective function value, the higher the magnitude of attraction or repulsion. There are four phases in the EM algorithm: initialization of the algorithm, calculation of the total force, movement along the direction of the force, and a neighborhood search to exploit local minima. Fig. 1 shows the EM's general scheme.

Fig. 1. General scheme of EM algorithm.

2.1.1. Initialization
In the initialization phase, a population is randomly generated in the search space, in the same way as in other population-based meta-heuristic algorithms. Each coordinate of a particle is uniformly distributed between the corresponding lower and upper bounds. After calculating the function value of each particle, the point with the best function value is stored as the best particle.

2.1.2. Local search
EM has good diversification performance thanks to the attraction-repulsion mechanism, but its intensification ability is unsatisfactory, so the original EM needs a local search to gather neighborhood information. It is a simple random line search with two parameters, LSITER and δ, representing the number of iterations and the multiplier for the neighborhood search, respectively.

2.1.3. Calculation of total force
The calculation of the total force may be the core of the EM algorithm, because the total force determines the direction and step length of the movement; in other words, the update of the population is determined by the total force. Before calculating the total force, the charges of the points are computed from the objective function values. The charge of each point i, q_i, which determines its power of attraction or repulsion on other points, is computed as follows:
q_i = exp( −n · (f(x^i) − f(x^best)) / Σ_{k=1}^{m} (f(x^k) − f(x^best)) ),  ∀i        (2)
It is worth noticing that a particle with a better function value has a bigger q_i, and the q_i value is always positive. Following the mechanism that a particle with a better objective function value attracts the others while a worse one repulses them, the total force is calculated by formula (3).
F^i = Σ_{j≠i} { (X^j − X^i) · q_i q_j / ||X^j − X^i||²,  if f(X^j) < f(X^i)
             { (X^i − X^j) · q_i q_j / ||X^j − X^i||²,  if f(X^j) ≥ f(X^i)        (3)

2.1.4. Movement
A movement of each particle is executed after evaluating the total force, which determines the direction and step length of the movement according to Eq. (4). λ is assumed to be uniformly distributed between 0 and 1. Range is a vector whose components denote the allowed feasible movement toward the upper or lower bound in the corresponding dimension.

X^i = X^i + λ · (F^i / ||F^i||) · (Range)        (4)

2.2. Simplify the calculation of the total force

2.2.1. Decrease the number of particles in the total force calculation
In the original EM algorithm, the total force on a particle is the sum of the component forces from the other M − 1 particles, so there are (M − 1)² component force calculations in each iteration, because the total force must be computed for every particle except the best one. The component force calculation is not simple, so the running cost of the original EM is very high; in particular, the running time grows quickly as the population size M increases. What is more, when so many component forces are summed, they may cancel each other out, leaving only a small total force. The calculation of the total force is simplified in the improved algorithm: when calculating the total force on a particle, only some of the other particles' component forces are summed. Balancing accuracy and efficiency, the number of selected particles is set to three by trial and error, and the three particles are selected randomly from the other M − 1 particles. This improvement obviously reduces the computational complexity and prevents too many component forces from cancelling each other out. Fig. 2 intuitively shows the simplification.

2.2.2. Remove the Euclidean distance from the total force formula
The formula used here is the improved total force formula of Gao, Wang, Wei, and Chen (2006). The force calculation formula of the original EM algorithm contains a squared-distance term introduced to mimic the Coulomb law of electromagnetic theory. Careful analysis shows, however, that this quadratic term is harmful rather than useful for the algorithm. As the formula shows, for fixed charges the attraction (or repulsion) between nearby points is greater than between distant points. So if the current point is far away from a good point, the attraction between the two will be very weak, and the point will still be attracted by poor points nearby. The result is that a point is easily trapped in a local optimum and finds it very difficult to jump out. Although that is beneficial for the search of local minima, it reduces the convergence rate, since the EM algorithm itself already contains a local search step. Therefore, to enhance the global search ability of the algorithm, it is unnecessary to let the distance between points influence their move directions. The purpose of calculating the force is to ensure that every point tends to move toward points better than itself and away from worse ones. Without affecting this principle, the distance term of the formula can be weakened or even eliminated, which speeds up convergence. The total force calculation formula is therefore changed as follows:

F^i = Σ_{j=R1,R2,R3} { (X^j − X^i) · q_i q_j,  if f(X^j) < f(X^i)
                    { (X^i − X^j) · q_i q_j,  if f(X^j) ≥ f(X^i)  ,  ∀i        (5)
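As an illustrative sketch (ours, not the authors' C++ implementation), the charge computation of formula (2) and the simplified total force of formula (5) could be written as follows:

```python
import math
import random

def charges(fvals, n_dim):
    # Charge formula (2): a better (smaller) objective value gives a
    # larger charge; all charges lie in (0, 1].
    f_best = min(fvals)
    denom = sum(fk - f_best for fk in fvals) or 1.0  # guard: identical values
    return [math.exp(-n_dim * (fi - f_best) / denom) for fi in fvals]

def total_force(i, X, fvals, q, rng):
    # Simplified total force, formula (5): sum the component forces of only
    # three randomly chosen particles, with the distance term removed.
    n_dim = len(X[i])
    F = [0.0] * n_dim
    others = [j for j in range(len(X)) if j != i]
    for j in rng.sample(others, 3):                    # R1, R2, R3
        sign = 1.0 if fvals[j] < fvals[i] else -1.0    # attract or repel
        for d in range(n_dim):
            F[d] += sign * (X[j][d] - X[i][d]) * q[i] * q[j]
    return F
```

For a minimization population on f(x) = x², for instance, the worst particle is pulled toward the better ones regardless of which three particles are sampled.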
This eliminates the influence of distance on the points' move directions, helping to avoid falling into local optima, and it also removes the computational cost of the Euclidean distances, which improves the search speed.

2.3. Import a movement probability to modify the move formula

The move formula of the original EM algorithm has some limitations. The movement range Range in formula (4) is the difference between the current location and the boundary, so its value is closely tied to the boundary values. If the boundary values differ between dimensions, the components of Range differ as well. In this case, a particle's actual direction of movement deviates from the force direction. As shown in Fig. 3, the square represents a two-dimensional feasible region in which one point is acted upon by the force F; because the particle's allowed moving range differs between dimensions, the point's final moving direction deviates from the original direction of F.
Fig. 2. A simple example of decreasing the number of particles in total force calculation.
This paper proposes a new method to move the points, in which some components of the total force are added to the original position. Specifically, the method introduces a new parameter MP, called the move probability. If the random number generated for a dimension is less than MP, the improved move formula is used on that dimension; otherwise the original coordinate is kept. In addition, a boundary violation may occur when using the improved move formula; in that case the original coordinate is also kept. An elitist strategy is used as well: if the solution is worse after the movement, the original solution is retained; otherwise the new solution is accepted. In summary, the movement formula is changed to the following equations.
y^i_j = { x^i_j + λ F^i_j,  if Rand < MP
        { x^i_j,            otherwise        (6)

X^i = { Y^i,  if f(Y^i) < f(X^i)
      { X^i,  otherwise        (7)
The parameters λ and Rand are uniform random numbers between 0 and 1. If y^i_j is infeasible (outside the bounds), it is reset to x^i_j.
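A minimal sketch of the modified movement, Eqs. (6) and (7), might look like this in Python (our own illustration; the paper's implementation is in C++):

```python
import random

def move(x, F, f, lb, ub, MP=0.9, rng=random):
    # Move formula (6): on each dimension, with probability MP take a step
    # lam * F[j] from the current coordinate; otherwise keep it. A trial
    # coordinate that violates its bounds is reset to the old value.
    y = list(x)
    for j in range(len(x)):
        if rng.random() < MP:
            yj = x[j] + rng.random() * F[j]    # lam ~ U(0, 1)
            if lb[j] <= yj <= ub[j]:
                y[j] = yj
    # Elitist selection, formula (7): keep the old point unless the move
    # improves the objective value.
    return y if f(y) < f(x) else x
```

With MP = 1 every dimension attempts a step; thanks to the elitist rule, the sequence of objective values is non-increasing over repeated moves.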
Fig. 3. Total force calculation.

2.4. A comparison between the improved EM algorithm and differential evolution

Actually, the modified EM algorithm with the new movement formula is very similar to another evolutionary algorithm, differential evolution (DE). DE is a simple and powerful algorithm for continuous optimization proposed by Storn and Price (1997). In the initialization phase, namely at generation G = 0, a population x_{i,0} = (x_{i,1,0}, x_{i,2,0}, ..., x_{i,D,0}), i = 1, 2, ..., NP, is generated from the feasible search space, where D is the dimension of the optimization problem and NP is the population size. After initialization, DE performs three steps at each generation G: mutation, crossover and selection. In the mutation step, DE creates a mutant vector v_{i,G} = (v_{i,1,G}, v_{i,2,G}, ..., v_{i,D,G}) for each population member x_{i,G} in the current generation. The five most frequently used mutation strategies are listed as follows:

(1) "DE/rand/1": v_{i,G} = x_{r1,G} + F · (x_{r2,G} − x_{r3,G})
(2) "DE/best/1": v_{i,G} = x_{best,G} + F · (x_{r1,G} − x_{r2,G})
(3) "DE/current-to-best/1": v_{i,G} = x_{i,G} + F · (x_{best,G} − x_{i,G}) + F · (x_{r1,G} − x_{r2,G})
(4) "DE/best/2": v_{i,G} = x_{best,G} + F · (x_{r1,G} − x_{r2,G}) + F · (x_{r3,G} − x_{r4,G})
(5) "DE/rand/2": v_{i,G} = x_{r1,G} + F · (x_{r2,G} − x_{r3,G}) + F · (x_{r4,G} − x_{r5,G})

In the above equations, r1, r2, r3, r4, and r5 are distinct integers randomly selected from the range [1, NP] and also different from i. The parameter F, called the scaling factor, is a positive control parameter that amplifies the difference vectors. x_{best,G} is the best individual in the current population. After mutation, a binomial crossover operator is applied to x_{i,G} and v_{i,G} to generate a trial vector u_{i,G} = (u_{i,1,G}, u_{i,2,G}, ..., u_{i,D,G}):

u_{i,j,G} = { v_{i,j,G},  if rand_j(0, 1) ≤ Cr or j = j_rand
            { x_{i,j,G},  otherwise

where i = 1, 2, ..., NP, j = 1, 2, ..., D, j_rand is a randomly chosen integer in [1, D], rand_j(0, 1) is a uniformly distributed random number between 0 and 1, and Cr ∈ [0, 1] is called the crossover control parameter. After crossover, the selection operator determines whether the target or the trial vector survives to the next generation:

x_{i,G+1} = { u_{i,G},  if f(u_{i,G}) ≤ f(x_{i,G})
            { x_{i,G},  otherwise

These three steps are repeated generation after generation until a termination criterion is satisfied. In the improved EM algorithm, Eq. (6) can be seen as the combination of the mutation and crossover operators of DE, with the movement probability MP playing the role of the crossover probability Cr; and Eq. (7) is obviously the selection operator of DE.
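For reference, one generation of the canonical DE/rand/1/bin scheme described above can be sketched as follows (a generic textbook illustration, not tied to the paper's code):

```python
import random

def de_generation(pop, f, F=0.5, Cr=0.9, rng=random):
    # One DE generation: DE/rand/1 mutation, binomial crossover with
    # crossover probability Cr, and greedy selection.
    NP, D = len(pop), len(pop[0])
    next_pop = []
    for i in range(NP):
        r1, r2, r3 = rng.sample([k for k in range(NP) if k != i], 3)
        v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(D)]
        jrand = rng.randrange(D)          # forces at least one gene from v
        u = [v[j] if (rng.random() <= Cr or j == jrand) else pop[i][j]
             for j in range(D)]
        next_pop.append(u if f(u) <= f(pop[i]) else pop[i])  # selection
    return next_pop
```

Comparing this with Eqs. (6) and (7): the per-dimension mixing controlled by Cr corresponds to the MP test in Eq. (6), and the greedy selection at the end corresponds to Eq. (7).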
However, there are two differences between the modified EM and DE. Firstly, DE has no mutation strategy like this one: all the difference vectors in the modified EM are generated between a randomly selected individual and the current individual, and they incorporate the attraction-repulsion mechanism of electromagnetism theory. Secondly, the scaling factor F of DE is replaced by the product of the charge values. If this product is regarded as a new parameter, the parameter can be considered self-adaptive. Parameter adaptation techniques are divided into three categories by Smith and Fogarty (1997): deterministic, adaptive, and self-adaptive control rules. Self-adaptive rules directly encode parameters into the individuals and evolve them together with the encoded solutions; parameter values carried by individuals with better fitness values survive. Obviously, the new parameter is self-adaptive because the charge of each particle is determined by its objective value. The self-adaptive parameter avoids selecting a proper parameter value by a trial-and-error procedure, and it fully utilizes the feedback from the search process.
3. The improved constrained electromagnetism-like mechanism algorithm (ICEM)

The improved electromagnetism-like mechanism algorithm introduced in Section 2 cannot be applied to constrained optimization until constraint-handling techniques are added to it. Two necessary and important constraint-handling techniques are introduced in the next two parts, and the final process of the improved constrained electromagnetism-like mechanism algorithm (ICEM) is shown in Section 3.3.

3.1. The feasibility and dominance rules

For unconstrained optimization problems, the smaller a point's objective function value is, the better the point is. For constrained optimization problems, however, the quality of a solution depends not only on the objective function value but also on the amount of constraint violation. The degree of constraint violation is defined as follows:

φ(X) = Σ_{i=1}^{m} max(g_i(X), 0) + Σ_{j=1}^{l} max(|h_j(X)| − ε, 0)        (8)

From formula (8) we can draw the following conclusions: if a point is feasible, φ(X) is always 0; if a point is infeasible, φ(X) is greater than 0; and the more a point violates the constraints, the bigger φ(X) is. The performances of two points are compared frequently in the implementation of the EM algorithm, for example when finding the best particle X^best and the worst particle X^worst, when deciding whether a particle is attracted or repelled by another particle in the force calculation, and when applying the elitist strategy in the movement. The feasibility and dominance (FAD) rules are used to compare the performances of two points in the proposed constrained EM algorithm. The FAD rules are as follows: if both points are infeasible, the one with the lower constraint violation is better; if one point is feasible and the other infeasible, the feasible point is better; if both points are feasible, the one with the lower function value is better.

3.2. The corresponding charge formula for constrained optimization

For constrained optimization, the charge formula should be modified so that it depends on both the objective function value f(X) and the aggregate constraint violation φ(X). Before the new charge formula is defined, κ(X) is defined as in (9). κ(X) is in fact a criterion for applying the FAD rules: when two particles are compared, the particle with the smaller κ(X) value is the better one. The charge is then calculated by formula (10), in which κ(X) is the only dependent variable.

κ(X) = { f(X),                if φ(X) = 0
       { f(X^Worst) + φ(X),  if φ(X) > 0        (9)

q_i = exp( −n · (κ(X^i) − f(X^Best) − φ(X^Best)) / Σ_{j=1}^{N} (f(X^j) + φ(X^j) − f(X^Best) − φ(X^Best)) )        (10)

κ(X^i) is no smaller than f(X^Best) + φ(X^Best), which guarantees that the charge q_i is greater than 0 and no greater than 1. Obviously, a feasible particle's charge is bigger than an infeasible particle's, and a particle with a smaller aggregate constraint violation has a bigger charge than one with a bigger φ(X).

3.3. The process of the improved constrained electromagnetism-like mechanism algorithm (ICEM)

The process of the improved EM algorithm for constrained optimization is shown in Fig. 4. Compared with EM's general scheme in Fig. 1, the first difference is that there is no local search in ICEM. The local search can be removed because ICEM's intensification ability is strong enough; this good intensification ability comes from the DE-like mechanism. The other differences are the details of the total force calculation and the movement, which are described above.

Fig. 4. The process of ICEM.

4. Experimental results and analysis

The PC configuration and the parameter settings are introduced in Section 4.1. Experiments illustrating the efficiency of ICEM are presented in Section 4.2. Then a set of 13 benchmark test problems (Runarsson & Yao, 2000, 2005) with dimensions ranging from 2 to 20 is tested in Section 4.3, and three well-studied engineering design problems are tested in Section 4.4. Section 4.5 shows the results obtained by ICEM on the benchmark functions of CEC'06 (Liang et al., 2006).
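The constraint-handling pieces of Section 3 (the violation measure of formula (8), the FAD comparison, and the charge formulas (9) and (10)) can be sketched as follows; this is an illustrative Python rendering with our own names, not the authors' code:

```python
import math

def violation(x, gs, hs, eps=1e-4):
    # Aggregate constraint violation, formula (8): inequality constraints
    # g(x) <= 0 and equality constraints h(x) = 0 relaxed by tolerance eps.
    return (sum(max(g(x), 0.0) for g in gs)
            + sum(max(abs(h(x)) - eps, 0.0) for h in hs))

def fad_better(fa, pa, fb, pb):
    # FAD rules: feasibility first, then the objective value.
    if pa == 0.0 and pb == 0.0:
        return fa < fb        # both feasible: smaller objective wins
    if pa == 0.0 or pb == 0.0:
        return pa == 0.0      # a feasible point beats an infeasible one
    return pa < pb            # both infeasible: smaller violation wins

def constrained_charges(fvals, phis, n_dim):
    # Formulas (9) and (10): kappa ranks the particles consistently with
    # the FAD rules, and the charge decreases as kappa grows, so feasible
    # particles carry bigger charges than infeasible ones.
    f_worst = max(fvals)
    kappa = [fi if pi == 0.0 else f_worst + pi for fi, pi in zip(fvals, phis)]
    b = min(range(len(kappa)), key=kappa.__getitem__)  # FAD-best particle
    base = fvals[b] + phis[b]                          # f(X_Best) + phi(X_Best)
    denom = sum(fi + pi - base for fi, pi in zip(fvals, phis)) or 1.0
    return [math.exp(-n_dim * (ki - base) / denom) for ki in kappa]
```

In this sketch the FAD-best particle receives charge 1, and an infeasible particle always receives a smaller charge than any feasible one, which matches the bias described above.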
4.1. PC configuration and parameter settings

All the experiments were run on a PC with Windows XP SP3, a 2.20 GHz CPU and 2 GB RAM. The programming language is C++. There are several parameters in ICEM. MFE, the maximum number of function evaluations, is the termination condition. In Sections 4.2 and 4.3, MFE is 350,000; for the three engineering design problems in Section 4.4, it is set to 80,000. The population size M is set to 100 and the move probability MP to 0.9 for all test problems. Equality constraints h(X) = 0 are converted into inequality constraints using |h(X)| − ε ≤ 0, and maximization problems are converted into minimization problems. Thirty independent runs are carried out in Sections 4.3 and 4.4; in Section 4.5, the number of runs is 25.

4.2. Experiment to illustrate the efficiency of ICEM

A test is performed on four selected benchmark functions to show the high efficiency of ICEM. The compared algorithm is the FADEM algorithm proposed by Rocha and Fernandes (2008a). The test functions are G01, G02, G05 and G13. Two FADEMs with different population sizes are tested: the first has 100 particles (FADEM100), the same as ICEM, and the second has 40 particles (FADEM40), as used in Rocha and Fernandes (2008a). Only five independent runs are performed, since the running time is the focus of attention in this part. The experimental results are shown in Table 1. The running time of ICEM is much shorter than that of FADEM. Take G02 for example: FADEM100 needs more than 80 s, whereas ICEM needs less than 7 s; in other words, ICEM is more than ten times faster than FADEM100. For some other problems, such as G13, the speedup even exceeds twenty times. FADEM40 takes less than half the time of FADEM100 owing to its smaller population size, but it is still very slow compared with ICEM; for instance, on G01 and G13 FADEM40 takes about ten times as long as ICEM.
It is clear that the simplification of the total force calculation contributes greatly to the search speed, and it can be concluded that the efficiency of ICEM is much higher than that of other EM algorithms using the original total force formula. Results and comparisons on accuracy are shown in the following sections.

4.3. Results and comparisons for 13 benchmark problems

4.3.1. Comparisons with other versions of the constrained EM algorithm
The best, mean and worst objective function values and the standard deviation (Std) over 30 independent runs are reported in Table 2. Some values are better than the best known optimal values for G03, G05, G11 and G13. This occurs because of the value of ε used: when the equality constraints are converted into inequality constraints, ε = 10^−4 for ICEM and ε = 10^−3 for the other four constrained EM algorithms. Almost all the best objective values obtained by ICEM equal the best known optimal values. The mean and worst objective function values are very close to the best
Table 1
Comparison between ICEM and FADEM on the running time (s).

Function   FADEM100   FADEM40   ICEM
G01        60.898     23.250    2.303
G02        80.318     29.012    6.437
G05        31.190     12.325    1.843
G13        34.128     12.790    1.362
known optimal values except for G13. Ten functions' standard deviations (Std) are less than 1E−10 and five functions' Stds are zero, which means ICEM is very robust. Several other constrained EM algorithms exist in the literature. The first version, denoted PEM, incorporates penalty functions in the original EM (Birbil, 2002); the results of PEM are taken from Ali and Golalikhani (2010). The second version was proposed by Rocha and Fernandes (2009a), who used a self-adaptive penalty to handle constraints; several penalty forms were provided, and the best results are taken from Rocha and Fernandes (2009a) for comparison. They also applied the feasibility and dominance (FAD) rules in the EM algorithm (Rocha & Fernandes, 2008a), here denoted FADEM. The last version was provided by Ali and Golalikhani (2010), who introduced a charge calculation based on both the function value and the total constraint violation; their algorithm is called CEM here. The results of PEM, SAPEM, FADEM, and CEM were also obtained after 350,000 function calls. Table 3 shows the best objective values obtained by the five versions of constrained EM, including ICEM. The best results, and the results better than or equal to the best known values, appear in bold. The results obtained by ICEM are all in bold, which means that ICEM is no worse than any of the others on all 13 functions in terms of the best objective value. Eight of CEM's results are in bold; the remaining five functions have relatively high dimensions, such as 20 (G02), 5 (G04), and 10 (G07), which shows that CEM's performance is not good enough for high-dimensional functions. Actually, PEM, SAPEM and FADEM achieve good results only for the four functions G03, G08, G11 and G12, whose dimensions are 10, 2, 2, and 3, respectively. The comparison of ICEM with the others in terms of the mean objective value is shown in Table 4. ICEM performs very well for all the problems except G13; for G13, CEM obtains a very good value and ICEM ranks second.
CEM and ICEM are very close on six functions, such as G01 and G06. However, ICEM performs better than CEM on another six functions, including G02, G04 and G05, and it is worth noting that their dimensions are relatively high. Obviously, ICEM performs much better than the other four constrained EMs. As for the worst objective value over 30 runs, shown in Table 5, the results of SAPEM are not reported. The general situation is similar to that of the mean objective value in Table 4. PEM and FADEM perform very well only on the three low-dimension functions G08, G11 and G12. ICEM is still better than CEM on six high-dimension functions, and for some functions the results obtained by ICEM are much better than those obtained by CEM. For example, the worst objective value over 30 runs obtained by ICEM on G10 is 7049.248, while the result obtained by CEM is only 7292.7241. 4.3.2. Comparisons with other algorithms The comparisons between ICEM and the other constrained EM algorithms show that ICEM is outstanding. To further illustrate its effectiveness, we compare ICEM with five other classical or recent algorithms designed for constrained optimization. The first is the classic stochastic ranking (SR) method (Runarsson & Yao, 2000), which handles constraints with a stochastic ranking scheme in an evolution strategy. The next is an improved stochastic ranking (ISR) method, which uses a parameter for scaling the step length in the constraint handling method (Runarsson & Yao, 2005). The third is the improved vector PSO (IVPSO), which uses a simple constraint-preserving method as its constraint-handling technique (Sun, Zeng, & Pan, 2011); experimental results showed that IVPSO is very competitive among algorithms for constrained optimization problems. A modified Artificial Bee Colony (MABC) algorithm using Deb's rules and a probabilistic selection scheme for constrained optimization was described by Karaboga and Akay (2011).
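The FAD rules used by FADEM (and by ICEM itself) reduce every pairwise comparison between solutions to three cases: feasible beats infeasible, two feasible solutions are compared by objective value, and two infeasible ones by total constraint violation. A minimal sketch of that comparison, assuming each solution is represented as a hypothetical (objective value, total violation) pair, is given below; this is an illustration, not the authors' code:

```python
def fad_better(a, b):
    """Return True if solution `a` is preferred over `b` under the
    feasibility-and-dominance (FAD) rules for a minimization problem.
    Each solution is a (objective value, total constraint violation) pair;
    a violation of 0 means the solution is feasible."""
    f_a, v_a = a
    f_b, v_b = b
    if v_a == 0 and v_b == 0:      # both feasible: smaller objective wins
        return f_a < f_b
    if (v_a == 0) != (v_b == 0):   # exactly one feasible: it wins
        return v_a == 0
    return v_a < v_b               # both infeasible: smaller violation wins

# A feasible solution beats an infeasible one even with a worse objective:
print(fad_better((1.0, 0.0), (0.5, 0.2)))  # True
```

Because the rules never trade feasibility against objective value, they bias the search toward the feasible region without requiring a penalty coefficient.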
C. Zhang et al. / Expert Systems with Applications 40 (2013) 5621–5634

Table 2 Statistical results of ICEM for 13 benchmark test problems.

| Function | n | Best known | Best | Mean | Worst | Std |
|---|---|---|---|---|---|---|
| G01 | 13 | -15 | -15 | -15 | -15 | 0 |
| G02 | 20 | -0.803619 | -0.803619 | -0.802896 | -0.794658 | 1.98E-03 |
| G03 | 10 | -1 | -1.0005 | -1.0005 | -1.0005 | 1.28E-07 |
| G04 | 5 | -30665.539 | -30665.539 | -30665.539 | -30665.539 | 1.44E-11 |
| G05 | 4 | 5126.4981 | 5126.497 | 5126.497 | 5126.497 | 3.32E-13 |
| G06 | 2 | -6961.81388 | -6961.814 | -6961.814 | -6961.814 | 0 |
| G07 | 10 | 24.3062091 | 24.30621 | 24.30621 | 24.30621 | 1.05E-14 |
| G08 | 2 | -0.095825 | -0.095825 | -0.095825 | -0.095825 | 2.63E-17 |
| G09 | 7 | 680.630057 | 680.6301 | 680.6301 | 680.6301 | 0 |
| G10 | 8 | 7049.248 | 7049.248 | 7049.248 | 7049.248 | 3.96E-12 |
| G11 | 2 | 0.75 | 0.7499 | 0.7499 | 0.7499 | 0 |
| G12 | 3 | -1 | -1 | -1 | -1 | 0 |
| G13 | 5 | 0.053942 | 0.053942 | 0.439162 | 1.444363 | 0.36664 |
Table 3 Comparison of ICEM with other constrained EM in terms of the best objective value.

| Function | Best known | PEM | SAPEM | FADEM | CEM | ICEM |
|---|---|---|---|---|---|---|
| G01 | -15 | -14.957990 | -14.981720 | -14.99980 | -15.0000 | -15 |
| G02 | -0.803619 | -0.516353 | -0.79771 | -0.444939 | -0.623711 | -0.803619 |
| G03 | -1 | -1.00215 | -0.999807 | -1.00298 | -1.00151 | -1.0005 |
| G04 | -30665.539 | -30661.7907 | -30614.3 | -30642.600 | -30665.513 | -30665.539 |
| G05 | 5126.4981 | 5126.4218 | 5127.90 | 5135.818 | 5126.4842 | 5126.497 |
| G06 | -6961.81388 | -6961.77 | -6961.71 | -6953.41 | -6961.813 | -6961.814 |
| G07 | 24.3062091 | 26.14955 | 40.58663 | 27.46692 | 25.11276 | 24.30621 |
| G08 | -0.095825 | -0.095825 | -0.095825 | -0.095825 | -0.095825 | -0.095825 |
| G09 | 680.630057 | 681.2211 | 680.8360 | 681.777 | 680.8968 | 680.6301 |
| G10 | 7049.248 | 7100.8486 | 7081.32 | 7187.18 | 7049.7581 | 7049.248 |
| G11 | 0.75 | 0.7490 | 0.749000 | 0.749001 | 0.7499 | 0.7499 |
| G12 | -1 | -1.0000 | -0.999925 | -1.0000 | -1.0000 | -1 |
| G13 | 0.053942 | 0.346621 | 0.082410 | 0.063196 | 0.053827 | 0.053942 |
Table 4 Comparison of ICEM with other constrained EM in terms of the mean objective value.

| Function | Best known | PEM | SAPEM | FADEM | CEM | ICEM |
|---|---|---|---|---|---|---|
| G01 | -15 | -12.624385 | -13.364670 | -14.59047 | -15.0000 | -15 |
| G02 | -0.803619 | -0.453309 | -0.751747 | -0.427705 | -0.517221 | -0.802896 |
| G03 | -1 | -1.00008 | -0.964702 | -0.99966 | -1.00167 | -1.0005 |
| G04 | -30665.539 | -30642.891 | -30567.4 | -30591.897 | -30660.649 | -30665.539 |
| G05 | 5126.4981 | 5191.5619 | 5320.67 | 5338.583 | 5128.6958 | 5126.497 |
| G06 | -6961.81388 | -6961.74 | -6961.14 | -6942.91 | -6961.813 | -6961.814 |
| G07 | 24.3062091 | 28.76543 | 51.73630 | 53.08965 | 27.75496 | 24.30621 |
| G08 | -0.095825 | -0.095825 | -0.095821 | -0.095825 | -0.095825 | -0.095825 |
| G09 | 680.630057 | 682.6084 | 681.2683 | 689.896 | 681.3511 | 680.6301 |
| G10 | 7049.248 | 7226.7802 | 7790.85 | 9492.73 | 7154.6709 | 7049.248 |
| G11 | 0.75 | 0.7490 | 0.749001 | 0.749079 | 0.7499 | 0.7499 |
| G12 | -1 | -1.0000 | -0.999524 | -1.0000 | -1.0000 | -1 |
| G13 | 0.053942 | 0.911530 | 0.885214 | 2.083421 | 0.056314 | 0.439162 |
Table 5 Comparison of ICEM with other constrained EM in terms of the worst objective value.

| Function | Best known | PEM | SAPEM | FADEM | CEM | ICEM |
|---|---|---|---|---|---|---|
| G01 | -15 | -11.028430 | – | -12.50051 | -15.0000 | -15 |
| G02 | -0.803619 | -0.404182 | – | -0.377406 | -0.452234 | -0.794658 |
| G03 | -1 | -0.99553 | – | -0.99643 | -1.00176 | -1.0005 |
| G04 | -30665.539 | -30627.758 | – | -30521.434 | -30654.500 | -30665.539 |
| G05 | 5126.4981 | 5384.7132 | – | 6117.660 | 5136.6618 | 5126.497 |
| G06 | -6961.81388 | -6961.71 | – | -6933.26 | -6961.813 | -6961.814 |
| G07 | 24.3062091 | 32.74971 | – | 103.5507 | 29.93511 | 24.30621 |
| G08 | -0.095825 | -0.095825 | – | -0.095825 | -0.095825 | -0.095825 |
| G09 | 680.630057 | 685.1694 | – | 701.344 | 681.7680 | 680.6301 |
| G10 | 7049.248 | 7434.9652 | – | 16395.51 | 7292.7241 | 7049.248 |
| G11 | 0.75 | 0.7490 | – | 0.749346 | 0.7499 | 0.7499 |
| G12 | -1 | -1.0000 | – | -1.0000 | -1.0000 | -1 |
| G13 | 0.053942 | 2.124983 | – | 20.784428 | 0.059852 | 1.444363 |

"–" means the result was not reported.
Generally speaking, the results obtained by MABC are better than those of several state-of-the-art algorithms. The last algorithm is the differential evolution with dynamic stochastic selection (DSS-MDE) (Zhang, Luo, & Wang, 2008). The number of function evaluations taken by SR, ISR, MABC and DSS-MDE is 350,000, whereas IVPSO requires 160,000,000 evaluations of the fitness function, which is higher than the other algorithms. In the above algorithms, all equality constraints have been converted into inequality constraints with ε = 0.0001. The comparison of ICEM with the other algorithms in terms of the mean objective value and standard deviation is shown in Table 6. From Table 6 we can see that ICEM's performance is better than the others' on G02, G07, G09 and G10, whose dimensions are relatively high. Especially for G02 and G10, only ICEM's results are in bold type, which means they are the best. However, ICEM behaves worse than some of the others on G13, which is a very difficult problem; only DSS-MDE's mean value reaches the optimal value. The row at the end of Table 6 shows each algorithm's total number of results that are the best or reach the optimal value. ICEM and DSS-MDE achieve the largest total among these algorithms: their total is 12, while the others' totals range from 6 to 10. To illustrate the significance of the differences, approximate two-sample t-tests between the other algorithms and our ICEM have been carried out according to
$t' = \dfrac{\bar{y}_1 - \bar{y}_2}{\sqrt{S_1^2/n_1 + S_2^2/n_2}}$   (11)

where $\bar{y}_1$ and $\bar{y}_2$ denote the mean values, $S_1$ and $S_2$ are the standard deviations of the results obtained by the two approaches, and $n_1$ and $n_2$ are the numbers of independent runs of the two approaches, respectively. The number of degrees of freedom is calculated as follows:

$f = \left\lfloor \dfrac{1}{k^2/n_1 + (1-k)^2/n_2} \right\rfloor$, where $k = \dfrac{S_1^2/n_1}{S_1^2/n_1 + S_2^2/n_2}$   (12)
The t-tests are not carried out on those functions whose optimal value is obtained by at least five algorithms in all runs. After collecting the raw data from the original papers, the t-test results are calculated and listed in Table 7. In the table, "–" means both approaches obtained the optimal value in all runs on the given function; "a" means the value of t' with the corresponding degrees of freedom is significant at α = 0.05 and ICEM is better than the other algorithm; "b" means the value of t' with the corresponding degrees of freedom is significant at α = 0.05 and ICEM is worse than the other algorithm. It is clear that most of the differences are significant, except the difference between MABC and ICEM on G02. In conclusion, compared with other algorithms for constrained optimization, ICEM is very competitive. 4.4. Results and comparisons for three engineering design problems In this part, we take three well-studied engineering design problems as examples to investigate the performance of ICEM and to compare it with some other optimization algorithms. All parameters of ICEM are set the same as in Section 4.1 except the maximum number of function evaluations (MFE); 80,000 function evaluations are used for ICEM in this section. 4.4.1. The tension/compression spring design problem The first example is the tension/compression spring design problem proposed by Arora (1989). The aim is to minimize the weight of a tension/compression spring subject to constraints on minimum deflection, shear stress, surge frequency, and limits on the outside diameter and on the design variables. The design variables are the wire diameter d (x1), the mean coil diameter D (x2) and the number of active coils Pb (x3). A detailed description of the problem can be found in the relevant literature. Several optimization methods have been applied to the problem, such as a GA-based co-evolution model (GA1) (Coello, 2000), a GA using dominance-based tournament selection (GA2) (Coello & Montes, 2002), EP (Coello & Becerra, 2004), a co-evolutionary particle swarm optimization approach (CPSO) (He & Wang, 2006), a hybrid particle swarm optimization (HPSO) (He & Wang, 2007), co-evolutionary differential evolution (CDE) (Huang, Wang, & He, 2007), NM–PSO (Zahara & Kao, 2009) and the PSOLVER algorithm that combines particle swarm optimization (PSO) with a spreadsheet "Solver" (Kayhan et al., 2010).
The best solutions obtained by nine different algorithms for the tension/compression spring design problem are presented in Table 8. The smallest objective function value, 0.0126302, is obtained by NM–PSO. However, that solution is not feasible because two constraints are violated: g1(X) and g2(X) are 0.001, which is much bigger than the tolerance error. HPSO, PSOLVER and ICEM get the same feasible minimal objective value, 0.0126652. The constraint g2(X) is nominally violated for these solutions, but the small positive values appear only because of rounding in the reported digits, and such tiny constraint violations make no difference in actual applications.
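The reported best solutions can be checked numerically. The sketch below evaluates the objective and the four constraints of the spring problem using the standard formulation from the literature (the exact expressions are assumed here, since the paper only refers to them), applied to the ICEM solution in Table 8:

```python
def spring_design(d, D, N):
    """Tension/compression spring design (standard formulation, assumed):
    d = wire diameter, D = mean coil diameter, N = number of active coils.
    Returns the weight f and the constraints g1..g4 (feasible when <= 0)."""
    f = (N + 2.0) * D * d**2
    g1 = 1.0 - (D**3 * N) / (71785.0 * d**4)
    g2 = ((4.0 * D**2 - d * D) / (12566.0 * (D * d**3 - d**4))
          + 1.0 / (5108.0 * d**2) - 1.0)
    g3 = 1.0 - 140.45 * d / (D**2 * N)
    g4 = (D + d) / 1.5 - 1.0
    return f, (g1, g2, g3, g4)

# ICEM's best solution from Table 8:
f, g = spring_design(0.051689, 0.356718, 11.288960)
print(round(f, 7))  # 0.0126652, the objective value reported in the paper
```

The constraint values come out at or below roughly 1E-5 in magnitude, which is consistent with the rounding-level violations discussed above.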
Table 6 Comparison of ICEM with other algorithms in terms of the mean objective value. Fun.
Best known
SR
ISR
IVPSO
MABC
DSS-MDE
ICEM
G01 G02
-15 -0.803619
-15.000 (0.0) -0.781975 (2.0E-02)
-15.000 (5.8E-14) -0.782715 (2.2E-02)
-15.000 (0) -0.795430 (9.5E-03)
-15.000 (0) -0.788011 (1.5E-02)
-15 (0) -0.802896 (2.0E-02)
G03 G04
-1 -30665.539 5126.498 -6961.814
-1.001 (8.2E-09) -30665.539 (1.1E-11) 5126.497 (7.2E-13) -6961.814 (1.9E-12)
-1.000 (0) -30665.539 (0)
G05 G06
-1.000 (1.9E-04) -30665.539 (2.0E-05) 5128.881 (3.5E+00) -6875.940 (1.6E+02)
-15.000 (0) -0.769889 (4.7E-03) -1.005010 (0) -30665.539 (0) 5126.493 (0) -6961.814 (0)
-1.0005 (1.3E-07) -30665.539 (1.4E-11) 5126.497 (3.3E-13) -6961.814 (0)
G07 G08 G09 G10
24.306 -0.095825 680.630 7049.248
24.374 (6.6E-02) -0.095825 (2.6E-17) 680.656 (3.4E-02) 7559.192 (5.3E+02)
24.306 (6.3E-05) -0.095825 (2.7E-17) 680.630 (3.2E-13) 7049.250 (3.2E-03)
5182.868 (68.584) -6961.814 (4.4E-04) 24.447 (1.1E-01) -0.095825 (0) 680.636 (2.6E-03) 7220.106 (1.2E+02)
-1.0005 (2.7E-09) -30665.539 (2.7E-11) 5126.497 (0) -6961.814 (0) 24.306 (7.0E-08) -0.095825 (3.9E-17) 680.630 (2.5E-13) 7049.248 (3.1E-04)
24.30621 (1.1E-14) -0.095825 (2.6E-17) 680.6301 (0) 7049.248 (2.6E-13)
G11 G12 G13
0.75 -1 0.053942
0.750 (8.0E-05) -1.000 (0.0E+00) 0.067543 (3.1E-02)
0.750 (1.1E-16) -1.000 (1.2E-09) 0.066770 (7.0E-02)
0.750 (0) -1.000 (0) 0.975 (5.1E-02)
0.750 (0) -1.000 (0) 0.053942 (8.3E-17)
0.7499 (0) -1 (0) 0.439162 (3.7E-01)
Total
–
6
10
7
12
12
24.423 (4.7E-02) -0.095825 (0) 680.630 (3.0E-06) 7053.620 (7.7E-01) 0.749000 (0) -1.000 (0) 1.795008 (5.6E-01) 9
Table 7 Results of the approximate two-sample t-tests between the other algorithms and ICEM.

| Algorithms | | G02 | G05 | G07 | G09 | G10 | G13 |
|---|---|---|---|---|---|---|---|
| SR–ICEM | t' | 4.05 a | 3.73 a | 5.27 a | 5.64 a | 4.19 a | 5.48 b |
| | f | 60 | 30 | 30 | 30 | 30 | 30 |
| ISR–ICEM | t' | 4.73 a | – | 6.25 a | – | – | 5.48 b |
| | f | 53 | – | 100 | – | – | 30 |
| IVPSO–ICEM | t' | 8.80 a | – | 31.09 a | – | – | 11.00 a |
| | f | 33 | – | 51 | – | – | 51 |
| MABC–ICEM | t' | 1.84 a | 4.50 a | 7.63 a | 6.83 a | 12.64 a | 7.86 a |
| | f | 42 | 30 | 30 | 30 | 30 | 31 |
| DSS-MDE–ICEM | t' | 3.77 a | – | – | – | – | -5.70 b |
| | f | 40 | – | – | – | – | 30 |
Table 8 Comparison of the best solutions for tension/compression spring design problem.

| Methods | x1(d) | x2(D) | x3(Pb) | g1(X) | g2(X) | g3(X) | g4(X) | f(X) |
|---|---|---|---|---|---|---|---|---|
| GA1 | 0.051480 | 0.351661 | 11.632201 | 2.08E-2 | 1.1E-3 | 4.026 | 4.026 | 0.0127048 |
| GA2 | 0.051989 | 0.363965 | 10.890522 | 1.3E-5 | 2.1E-5 | 4.061 | 0.723 | 0.0126810 |
| EP | 0.050000 | 0.317395 | 14.031795 | 0.000000 | 7.5E-5 | 3.968 | 0.755 | 0.0127210 |
| CPSO | 0.051728 | 0.357644 | 11.244543 | 8.45E-4 | 1.26E-5 | 4.051 | 0.727 | 0.0126747 |
| HPSO | 0.051706 | 0.357126 | 11.265083 | 3.07E-6 | 1.39E-6 | 4.055 | 0.727 | 0.0126652 |
| CDE | 0.051609 | 0.354714 | 11.410831 | 3.9E-5 | 1.83E-4 | 4.048 | 0.729 | 0.0126702 |
| NM–PSO | 0.051620 | 0.355498 | 11.333272 | 0.001 | 0.001 | 4.061 | 0.729 | 0.0126302 |
| PSOLVER | 0.051686 | 0.356650 | 11.292950 | 2.0E-5 | 1.33E-5 | 4.054 | 0.728 | 0.0126652 |
| ICEM | 0.051689 | 0.356718 | 11.288960 | 6.4E-6 | 3.9E-6 | 4.054 | 0.728 | 0.0126652 |
Table 9 shows the statistical results for the tension/compression spring design problem. The standard deviation of the PSOLVER solution is the smallest, and PSOLVER also requires the fewest function evaluations. The standard deviation of the ICEM solution is very close to PSOLVER's and better than HPSO's. The mean and worst values of HPSO are worse than those of PSOLVER and ICEM, although the best values are the same.
4.4.2. The welded beam design problem The welded beam design problem was studied by Coello (2000) and Coello and Montes (2002). The best solutions obtained by the various algorithms are shown in Table 10. The smallest objective function value, f(X) = 1.724717, was obtained by NM–PSO and PSOLVER. However, Table 11 shows that g3(X) obtained by NM–PSO (Zahara & Kao, 2009) and PSOLVER (Kayhan et al., 2010) equals 0.1E-4, which is bigger than zero: the value of x1(h) is 0.205830 and the value of x4(b) is 0.205730, which obviously violates the constraint $g_3(X) = x_1 - x_4 \le 0$. Since the solutions obtained by NM–PSO and PSOLVER are infeasible, the three algorithms ICEM, HPSO and EP get the same best solution with f(X) = 1.724852. From the statistical results for the welded beam design problem shown in Table 12, we can see that the standard deviation of ICEM is much better than those of HPSO and PSOLVER, as are the mean and worst solutions. As to the number of function evaluations, ICEM requires less than or equal to the number required by the others except PSOLVER.

Table 9 Statistical results for tension/compression spring design problem.

| Methods | Best | Mean | Worst | Std | NFE |
|---|---|---|---|---|---|
| GA1 | 0.0127048 | 0.0127690 | 0.0128220 | 3.94E-05 | – |
| GA2 | 0.0126810 | 0.0127420 | 0.0129730 | 5.90E-05 | 80,000 |
| EP | 0.0127210 | 0.0135681 | 0.0151160 | 8.42E-04 | – |
| CPSO | 0.0126747 | 0.0127300 | 0.0129240 | 5.20E-04 | 200,000 |
| HPSO | 0.0126652 | 0.0127072 | 0.0127190 | 1.58E-05 | 81,000 |
| CDE | 0.0126702 | 0.012703 | 0.012790 | 2.70E-05 | 204,800 |
| NM–PSO | 0.0126302 | 0.0126314 | 0.0126330 | 8.74E-07 | 80,000 |
| PSOLVER | 0.0126652 | 0.0126652 | 0.0126652 | 2.46E-09 | 253 |
| ICEM | 0.0126652 | 0.0126653 | 0.0126653 | 3.67E-08 | 80,000 |

Table 10 Comparison of the best solutions for welded beam design problem.

| Methods | x1(h) | x2(l) | x3(t) | x4(b) | f(X) |
|---|---|---|---|---|---|
| GA1 | 0.208800 | 3.420500 | 8.997500 | 0.210000 | 1.748309 |
| GA2 | 0.205986 | 3.471328 | 9.020224 | 0.206480 | 1.728226 |
| EP | 0.205700 | 3.470500 | 9.036600 | 0.205700 | 1.724852 |
| CPSO | 0.202369 | 3.544214 | 9.048210 | 0.205723 | 1.728024 |
| HPSO | 0.205730 | 3.470489 | 9.036624 | 0.205730 | 1.724852 |
| CDE | 0.203137 | 3.542998 | 9.033498 | 0.206179 | 1.733462 |
| NM–PSO | 0.205830 | 3.468338 | 9.036624 | 0.205730 | 1.724717 |
| PSOLVER | 0.205830 | 3.468338 | 9.036624 | 0.205730 | 1.724717 |
| ICEM | 0.205730 | 3.470489 | 9.036624 | 0.205730 | 1.724852 |

4.4.3. The pressure vessel design problem In the pressure vessel design problem proposed by Kannan and Kramer (1994), the aim is to minimize the total cost, including the cost of material, forming and welding. A cylindrical vessel is capped at both ends by hemispherical heads. There are four design variables: the thickness of the shell Ts(x1), the thickness of the head Th(x2), the inner radius R(x3) and the length of the cylindrical section of the vessel L(x4). Among them, Ts and Th are expected to be integer multiples of 0.0625 in., while R and L are continuous variables. The comparison of the best solutions and the statistical results for the pressure vessel design problem are given in Tables 13 and 14, respectively. The previous best solutions by the other algorithms, except NM–PSO, satisfy the constraint that the decision variables x1 and x2 be integer multiples of 0.0625 in. HPSO, PSOLVER and ICEM get the best solution with f(X) = 6059.7143. As to the statistical results, PSOLVER and ICEM are much better than the others. The standard deviation of the solution by ICEM is the smallest; PSOLVER has a similar standard deviation with fewer function evaluations. From the comparisons on the three engineering examples, it can be concluded that among the above algorithms PSOLVER is the most
Table 11 Constraints of the best solution for welded beam design problem.

| Methods | g1(X) | g2(X) | g3(X) | g4(X) | g5(X) | g6(X) | g7(X) |
|---|---|---|---|---|---|---|---|
| GA1 | 0.3378 | 353.90 | 1.20E-3 | 3.4119 | 8.38E-2 | 0.2356 | 363.23 |
| GA2 | 7.41E-2 | 0.2662 | 4.95E-4 | 3.4300 | 8.10E-2 | 0.2355 | 58.666 |
| EP | 4.72E-4 | 1.56E-3 | 0.0000 | 3.4330 | 8.07E-2 | 0.2355 | 7.79E-4 |
| CPSO | 13.6555 | 75.8141 | 3.35E-3 | 3.4246 | 7.74E-2 | 0.2356 | 4.4729 |
| HPSO | 2.54E-2 | 5.31E-2 | 0 | 3.4330 | 8.073E-2 | 0.2355 | 3.156E-2 |
| CDE | 44.5786 | 44.6635 | 3.04E-3 | 3.4237 | 7.81E-2 | 0.2356 | 38.0283 |
| NM–PSO | 2.53E-2 | 5.31E-2 | 0.1E-4 | 3.4332 | 8.083E-2 | 0.2355 | 3.156E-2 |
| PSOLVER | 2.53E-2 | 5.31E-2 | 0.1E-4 | 3.4332 | 8.083E-2 | 0.2355 | 3.156E-2 |
| ICEM | 2.54E-2 | 5.31E-2 | 0 | 3.4330 | 8.073E-2 | 0.2355 | 3.156E-2 |
Table 12 Statistical results for welded beam design problem.

| Methods | Best | Mean | Worst | Std | NFE |
|---|---|---|---|---|---|
| GA1 | 1.748309 | 1.771973 | 1.785835 | 1.12E-02 | – |
| GA2 | 1.728226 | 1.792654 | 1.993408 | 7.47E-02 | 80,000 |
| EP | 1.724852 | 1.971809 | 3.179709 | 4.43E-01 | – |
| CPSO | 1.728024 | 1.748831 | 1.782143 | 1.29E-02 | 200,000 |
| HPSO | 1.724852 | 1.749040 | 1.814295 | 4.01E-02 | 81,000 |
| CDE | 1.733461 | 1.768158 | 1.824105 | 2.22E-02 | 204,800 |
| NM–PSO | 1.724717 | 1.726373 | 1.733393 | 3.50E-03 | 80,000 |
| PSOLVER | 1.724717 | 1.724717 | 1.724717 | 1.62E-11 | 297 |
| ICEM | 1.724852 | 1.724852 | 1.724852 | 8.88E-12 | 80,000 |

Table 14 Statistical results for pressure vessel design problem.

| Methods | Best | Mean | Worst | Std | NFE |
|---|---|---|---|---|---|
| GA1 | 6288.7445 | 6293.8432 | 6308.1497 | 7.413E+00 | – |
| GA2 | 6059.9463 | 6177.2533 | 6469.3220 | 1.309E+02 | 80,000 |
| CPSO | 6061.0777 | 6147.1332 | 6363.8041 | 8.645E+01 | 200,000 |
| HPSO | 6059.7143 | 6099.9323 | 6288.6770 | 8.620E+01 | 81,000 |
| CDE | 6059.7340 | 6085.2303 | 6371.0455 | 4.301E+01 | 204,800 |
| NM–PSO | 5930.3137 | 5946.7901 | 5960.0557 | 9.161E+00 | 80,000 |
| PSOLVER | 6059.7143 | 6059.7143 | 6059.7143 | 4.625E-12 | 310 |
| ICEM | 6059.7143 | 6059.7143 | 6059.7143 | 9.095E-13 | 80,000 |
efficient and it is of high accuracy. But the spreadsheet Solver embedded in PSOLVER solves linear and nonlinear optimization problems through the GRG algorithm (Lasdon, Waren, Jain, & Ratner, 1978), which requires a differentiable objective function; this limits its field of applicability. What is more, PSOLVER sometimes does not handle the constraints well. Compared with PSOLVER, ICEM can get similar or better solutions, although with a larger number of function evaluations. Actually, the number of function evaluations needed by ICEM is less than or equal to the number needed by the other algorithms except PSOLVER. So ICEM is very competitive.

Table 13 Comparison of the best solutions for pressure vessel design problem.

| Methods | x1(Ts) | x2(Th) | x3(R) | x4(L) | g1(X) | g2(X) | g3(X) | g4(X) | f(X) |
|---|---|---|---|---|---|---|---|---|---|
| GA1 | 0.8125 | 0.4375 | 40.3239 | 200.0000 | 3.43E-2 | 5.28E-2 | 27.11 | 40.00 | 6288.7445 |
| GA2 | 0.8125 | 0.4375 | 42.0974 | 176.6540 | 2.00E-5 | 3.59E-2 | 27.89 | 63.35 | 6059.9463 |
| CPSO | 0.8125 | 0.4375 | 42.0913 | 176.7465 | 1.38E-4 | 3.59E-2 | 118.77 | 23.25 | 6061.0777 |
| HPSO | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 8.00E-11 | 3.59E-2 | 2.72E-4 | 23.36 | 6059.7143 |
| CDE | 0.8125 | 0.4375 | 42.0984 | 176.6377 | 6.68E-7 | 0.0359 | 3.683 | 63.36 | 6059.7340 |
| NM–PSO | 0.8036 | 0.3972 | 41.6392 | 182.4120 | 3.66E-5 | 3.80E-5 | 1.59 | 17.588 | 5930.3137 |
| PSOLVER | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 8.00E-11 | 3.59E-2 | 2.72E-4 | 23.36 | 6059.7143 |
| ICEM | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 8.00E-11 | 3.59E-2 | 2.72E-4 | 23.36 | 6059.7143 |

4.5. Results on benchmark functions in CEC'06 In this section, we test ICEM on 22 benchmark functions from CEC'06 (Liang et al., 2006). CEC'06 defines 24 functions, including the 13 used in Section 4.3; however, two of them, G20 and G22, are not used to test ICEM because no feasible solution has ever been found for them so far. According to Liang et al. (2006), the function error value (f(x) − f(x*)) of the achieved best solution after 5×10³, 5×10⁴ and 5×10⁵ function evaluations (FES) is recorded. The function error values over the 25 trials are then compared, and the best, median, mean, worst and standard deviation (Std) values are reported in Tables 15–18. The numbers in brackets behind the Best, Median and Worst values are the numbers of violated constraints at the corresponding solutions. The number of violated constraints c (including the numbers of violations by more than 1, 0.01 and 0.0001) and the mean violation v at the median solution are recorded in the tables. The mean violation v is calculated as Eq. (13), where φ(X) is the degree of constraint violation defined in formula (8):

$\bar{v} = \phi(X)/(l+m)$   (13)

From Tables 15–18 it can be seen that, when FES is 5×10⁵, all solutions found by ICEM for all 22 benchmark problems in all 25 runs are feasible except the worst solution for G13. For problems G01, G03–G12, G14–G16 and G19–G24, i.e., 18 out of the 22 problems, the best solutions obtained by ICEM are very close or equal to the optimal values reported in Liang et al. (2006); for several of these functions, the Std of the error values over all 25 runs is 0.0. What is more, the best solutions for problems G05, G09, G17, G21 and G23 are even better than the reported optimal values; their error values, which are less than zero, are bolded in the tables. The best solution for G17 is especially outstanding. The details of the solution are as follows: x = (201.78446249354971, 99.999999999999758, 383.07103485277281, 419.99999999999983, 10.907764618323958, 0.073148231208428685), f(x) = 8853.5338748064842. This value is better than the reported optimal value 8853.53967480648. In Tables 19 and 20, the best, median, worst and mean values and the standard deviation of the number of FES needed to achieve the fixed accuracy level ((f(x) − f(x*)) ≤ 0.0001) over the 25 runs are reported for DSS-MDE (Zhang et al., 2008) and ICEM. The feasible rate, success rate and success performance for each problem are also presented. A feasible run is a run during which at least one feasible solution is found within Max_FES. A successful run is a run during which the algorithm finds a feasible solution x satisfying (f(x) − f(x*)) ≤ 0.0001. The feasible rate is the quotient of the
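The mean violation of Eq. (13) is easy to reproduce. The helper below is a sketch: it assumes that φ(X) sums the positive parts of the l inequality constraints and the ε-relaxed absolute values of the m equality constraints, which is the usual definition behind formula (8) (not shown in this excerpt):

```python
def mean_violation(g_values, h_values, eps=1e-4):
    """v = phi(X) / (l + m), per Eq. (13). g_values: inequality constraints
    (feasible when g <= 0); h_values: equality constraints (feasible when
    |h| <= eps). phi(X) is the summed constraint violation (assumed form)."""
    phi = sum(max(0.0, g) for g in g_values)
    phi += sum(max(0.0, abs(h) - eps) for h in h_values)
    return phi / (len(g_values) + len(h_values))

# One violated inequality (by 0.5), one satisfied inequality,
# and one satisfied equality constraint:
print(mean_violation([0.5, -1.0], [0.0]))
```

Dividing by the total number of constraints l + m makes the values comparable across problems with different numbers of constraints, which is how the v entries in Tables 15–18 are reported.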
Table 15 Error values achieved when FES = 5×10³, 5×10⁴ and 5×10⁵ for problems G01–G06.

| FES | | G01 | G02 | G03 | G04 | G05 | G06 |
|---|---|---|---|---|---|---|---|
| 5×10³ | Best | 3.9032E+00 (0) | 4.1470E-01 (0) | 2.1861E-01 (0) | 5.3102E+00 (0) | 3.6021E+00 (0) | 5.2420E+01 (0) |
| | Median | 4.8375E+00 (0) | 4.7457E-01 (0) | 6.0995E-01 (0) | 2.3433E+01 (0) | 3.8891E+02 (0) | 3.3272E+02 (0) |
| | Worst | 5.7578E+00 (0) | 5.0789E-01 (0) | 9.4630E-01 (0) | 4.1163E+01 (0) | 2.0438E+01 (0) | 1.0592E+03 (0) |
| | c (v) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) |
| | Mean | 4.7629E+00 | 4.7216E-01 | 5.4272E-01 | 2.5402E+01 | 1.4004E+02 | 3.4265E+02 |
| | Std | 5.1346E-01 | 2.1845E-02 | 2.0153E-01 | 9.2716E+00 | 1.3204E+02 | 2.2359E+02 |
| 5×10⁴ | Best | 4.1048E-05 (0) | 5.6098E-03 (0) | 6.4935E-02 (0) | 7.2760E-11 (0) | 1.8190E-12 (0) | 3.3651E-11 (0) |
| | Median | 8.0690E-05 (0) | 2.2065E-02 (0) | 2.9923E-01 (0) | 7.6398E-11 (0) | 1.4978E+01 (0) | 3.3651E-11 (0) |
| | Worst | 2.4607E-04 (0) | 9.6551E-02 (0) | 5.2236E-01 (0) | 7.6398E-11 (0) | 3.2790E+02 (0) | 3.3651E-11 (0) |
| | c (v) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) |
| | Mean | 8.4172E-05 | 3.1249E-02 | 3.0271E-01 | 7.6107E-11 | 6.7571E+01 | 3.3651E-11 |
| | Std | 4.0385E-05 | 2.3548E-02 | 1.3627E-01 | 9.8696E-13 | 9.1740E+01 | 0.0000E+00 |
| 5×10⁵ | Best | 0 (0) | 2.2823E-06 (0) | 3.2984E-11 (0) | 7.2760E-11 (0) | -1.8190E-12 (0) | 3.3651E-11 (0) |
| | Median | 0 (0) | 5.4463E-06 (0) | 3.3199E-08 (0) | 7.2760E-11 (0) | -1.8190E-12 (0) | 3.3651E-11 (0) |
| | Worst | 0 (0) | 1.1014E-02 (0) | 1.3832E-06 (0) | 7.6398E-11 (0) | -9.0950E-13 (0) | 3.3651E-11 (0) |
| | c (v) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) |
| | Mean | 0.0000E+00 | 1.3617E-03 | 1.3609E-07 | 7.2905E-11 | -1.7462E-12 | 3.3651E-11 |
| | Std | 0.0000E+00 | 2.6951E-03 | 2.7619E-07 | 7.1290E-13 | 2.4674E-13 | 0.0000E+00 |
Table 16 Error values achieved when FES = 5×10³, 5×10⁴ and 5×10⁵ for problems G07–G12.

| FES | | G07 | G08 | G09 | G10 | G11 | G12 |
|---|---|---|---|---|---|---|---|
| 5×10³ | Best | 1.3827E+01 (0) | 1.6259E-10 (0) | 3.9093E+00 (0) | 2.4864E+03 (0) | 1.1174E-04 (0) | 2.862E-06 (0) |
| | Median | 3.5361E+01 (0) | 7.1189E-09 (0) | 1.0792E+01 (0) | 3.2259E+03 (0) | 1.1371E-01 (0) | 2.1566E-05 (0) |
| | Worst | 1.5203E+02 (0) | 1.9565E-07 (0) | 9.2816E+01 (0) | 6.4007E+03 (0) | 2.4664E-01 (0) | 1.6015E-04 (0) |
| | c (v) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) |
| | Mean | 4.0130E+01 | 2.7757E-08 | 1.5022E+01 | 3.3587E+03 | 1.0561E-01 | 3.5001E-05 |
| | Std | 2.5306E+01 | 4.3109E-08 | 1.7513E+01 | 8.3169E+02 | 7.5439E-02 | 3.4236E-05 |
| 5×10⁴ | Best | 3.5845E-05 (0) | 8.1964E-11 (0) | 9.6179E-11 (0) | 2.6403E-02 (0) | 0.0000E+00 (0) | 0.0000E+00 (0) |
| | Median | 8.0810E-05 (0) | 8.1964E-11 (0) | 4.7282E-10 (0) | 1.1981E-01 (0) | 2.1619E-03 (0) | 0.0000E+00 (0) |
| | Worst | 3.3641E-04 (0) | 8.1964E-11 (0) | 7.0174E-06 (0) | 3.0142E-01 (0) | 1.5183E-01 (0) | 0.0000E+00 (0) |
| | c (v) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) |
| | Mean | 9.6677E-05 | 8.1964E-11 | 3.1050E-07 | 1.2448E-01 | 2.8219E-02 | 0.0000E+00 |
| | Std | 6.3017E-05 | 1.2925E-26 | 1.3736E-06 | 7.4788E-02 | 4.2572E-02 | 0.0000E+00 |
| 5×10⁵ | Best | 7.9762E-11 (0) | 8.1964E-11 (0) | -9.8225E-11 (0) | 6.1846E-11 (0) | 0.0000E+00 (0) | 0.0000E+00 (0) |
| | Median | 7.9769E-11 (0) | 8.1964E-11 (0) | -9.8225E-11 (0) | 6.2755E-11 (0) | 0.0000E+00 (0) | 0.0000E+00 (0) |
| | Worst | 7.9787E-11 (0) | 8.1964E-11 (0) | -9.8225E-11 (0) | 6.2755E-11 (0) | 0.0000E+00 (0) | 0.0000E+00 (0) |
| | c (v) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) |
| | Mean | 7.9770E-11 | 8.1964E-11 | -9.8225E-11 | 6.2391E-11 | 0.0000E+00 | 0.0000E+00 |
| | Std | 6.0526E-15 | 3.8774E-26 | 0.0000E+00 | 4.4556E-13 | 0.0000E+00 | 0.0000E+00 |
Table 17 Error values achieved when FES = 5×10³, 5×10⁴ and 5×10⁵ for problems G13–G18.

| FES | | G13 | G14 | G15 | G16 | G17 | G18 |
|---|---|---|---|---|---|---|---|
| 5×10³ | Best | 6.4960E-01 (0) | 3.3277E+00 (3) | 9.2642E-03 (0) | 2.3970E-02 (0) | 9.9934E+01 (4) | 5.8878E-01 (0) |
| | Median | 9.9706E-01 (3) | 2.8204E+01 (3) | 3.1698E+00 (1) | 4.2122E-02 (0) | 7.5016E+01 (4) | 8.3105E-01 (2) |
| | Worst | 9.7524E-01 (3) | 5.4583E+01 (3) | 5.1787E+00 (2) | 8.4014E-02 (0) | 5.6175E+01 (4) | 3.5636E-01 (5) |
| | c (v) | 2, 3, 3 (1.2E+00) | 2, 3, 3 (9.9E-01) | 0, 0, 1 (6.8E-05) | 0, 0, 0 (0) | 0, 4, 4 (5.5E-01) | 0, 2, 2 (5.1E-02) |
| | Mean | 1.9893E+00 | 2.9312E+01 | 1.3997E+00 | 4.3029E-02 | 8.2859E+01 | 7.2826E-01 |
| | Std | 4.6082E+00 | 1.2901E+01 | 1.5419E+00 | 1.3235E-02 | 3.5185E+01 | 1.4419E-01 |
| 5×10⁴ | Best | 3.9301E-01 (0) | 1.3593E-05 (0) | 6.0823E-11 (0) | 6.5467E-11 (0) | 1.4026E+01 (0) | 2.5159E-05 (0) |
| | Median | 2.1110E+01 (3) | 1.4317E-04 (0) | 1.2525E-01 (0) | 6.9048E-11 (0) | 9.9053E+01 (0) | 6.8411E-05 (0) |
| | Worst | 9.5416E-01 (3) | 1.1511E-02 (0) | 4.3499E+00 (0) | 8.2830E-11 (0) | 8.7915E+01 (4) | 1.9134E-01 (0) |
| | c (v) | 0, 2, 2 (4.7E-02) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) |
| | Mean | 1.9759E+00 | 8.4066E-04 | 6.3932E-01 | 6.9962E-11 | 9.7197E+01 | 2.3303E-02 |
| | Std | 4.3835E+00 | 2.2671E-03 | 1.0421E+00 | 3.7636E-12 | 4.7141E+01 | 6.2039E-02 |
| 5×10⁵ | Best | 4.1898E-11 (0) | 8.5052E-12 (0) | 6.0823E-11 (0) | 6.5214E-11 (0) | -5.8000E-03 (0) | 1.5561E-11 (0) |
| | Median | 3.8486E-01 (0) | 8.5123E-12 (0) | 6.0823E-11 (0) | 6.5214E-11 (0) | 7.4052E+01 (0) | 1.5561E-11 (0) |
| | Worst | 5.3327E-02 (3) | 8.5194E-12 (0) | 6.0823E-11 (0) | 6.5214E-11 (0) | 8.4353E+01 (0) | 1.9104E-01 (0) |
| | c (v) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) |
| | Mean | 3.9090E-01 | 8.5123E-12 | 6.0823E-11 | 6.5214E-11 | 6.5588E+01 | 2.2925E-02 |
| | Std | 6.4960E-01 | 2.0097E-15 | 0.0000E+00 | 0.0000E+00 | 2.4306E+01 | 6.2082E-02 |
Table 18 Error values achieved when FES = 5×10³, 5×10⁴ and 5×10⁵ for problems G19–G24.

| FES | | G19 | G21 | G23 | G24 |
|---|---|---|---|---|---|
| 5×10³ | Best | 1.1067E+02 (0) | 7.4428E+02 (5) | 4.7447E+02 (4) | 8.1705E-05 (0) |
| | Median | 1.8731E+02 (0) | 5.3270E+02 (5) | 1.1117E+02 (5) | 8.0052E-04 (0) |
| | Worst | 2.4440E+02 (0) | 4.5028E+02 (5) | 1.4510E+02 (5) | 2.2677E-03 (0) |
| | c (v) | 0, 0, 0 (0) | 0, 5, 5 (1.1E-01) | 0, 4, 5 (1.2E-01) | 0, 0, 0 (0) |
| | Mean | 1.7505E+02 | 4.1441E+02 | 6.9162E+01 | 1.0048E-03 |
| | Std | 3.2801E+01 | 1.3947E+02 | 1.4842E+02 | 6.3617E-04 |
| 5×10⁴ | Best | 2.3015E-02 (0) | 4.5559E+01 (0) | 4.5578E-01 (0) | 4.6736E-12 (0) |
| | Median | 6.3687E-02 (0) | 9.4774E+01 (0) | 6.7811E+01 (0) | 4.6736E-12 (0) |
| | Worst | 2.5754E-01 (0) | 3.6695E+02 (4) | 3.7397E+02 (0) | 4.6736E-12 (0) |
| | c (v) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) |
| | Mean | 8.3002E-02 | 1.0197E+02 | 9.6898E+01 | 4.6736E-12 |
| | Std | 5.9864E-02 | 5.6980E+01 | 9.2659E+01 | 0.0000E+00 |
| 5×10⁵ | Best | 4.6313E-11 (0) | -3.4743E-10 (0) | -2.8422E-13 (0) | 4.6736E-12 (0) |
| | Median | 4.6342E-11 (0) | -3.3731E-10 (0) | 5.6843E-14 (0) | 4.6736E-12 (0) |
| | Worst | 4.6470E-11 (0) | -2.8948E-10 (0) | 1.7224E-11 (0) | 4.6736E-12 (0) |
| | c (v) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) | 0, 0, 0 (0) |
| | Mean | 4.6350E-11 | -3.3427E-10 | 7.2532E-13 | 4.6736E-12 |
| | Std | 3.2920E-14 | 1.2013E-11 | 3.3719E-12 | 0.0000E+00 |
Table 19 Comparison between DSS-MDE and ICEM on the number of FES to achieve the fixed accuracy level ((f(x) − f(x*)) ≤ 0.0001), success rate, feasible rate and success performance (1).

| Function | Best (DSS-MDE) | Best (ICEM) | Median (DSS-MDE) | Median (ICEM) | Worst (DSS-MDE) | Worst (ICEM) | Mean (DSS-MDE) | Mean (ICEM) |
|---|---|---|---|---|---|---|---|---|
| G01 | 111,034 | 46,255 | 118,771 | 49,136 | 130,458 | 50,949 | 118,919 | 48,894 |
| G02 | 71,404 | 125,702 | 88,984 | 162,648 | 109,107 | 446,937 | 90,261 | 184,779 |
| G03 | 83,476 | 106,355 | 101,847 | 210,726 | 121,467 | 320,647 | 103,585 | 207,442 |
| G04 | 43,348 | 20,246 | 46,981 | 22,023 | 53,444 | 24,138 | 46,852 | 22,081 |
| G05 | 40,904 | 23,318 | 45,390 | 81,512 | 48,832 | 316,289 | 45,550 | 99,811 |
| G06 | 13,834 | 18,483 | 15,529 | 22,823 | 19,076 | 25,258 | 15,776 | 22,310 |
| G07 | 99,423 | 43,314 | 113,009 | 48,413 | 129,200 | 56,569 | 113,672 | 48,549 |
| G08 | 855 | 1,744 | 2,470 | 2,401 | 3,101 | 2,993 | 2,313 | 2,452 |
| G09 | 34,995 | 20,433 | 37,605 | 25,375 | 42,496 | 40,931 | 38,226 | 26,467 |
| G10 | 164,119 | 72,294 | 204,100 | 84,855 | 498,533 | 90,294 | 244,002 | 83,888 |
| G11 | 9,160 | 2,941 | 19,593 | 72,864 | 30,615 | 265,807 | 19,343 | 80,238 |
| G12 | 808 | 878 | 3,244 | 2,719 | 4,061 | 4,915 | 2,968 | 2,847 |
| G13 | 34,088 | 139,168 | 44,206 | 240,467 | 53,073 | 402,879 | 44,243 | 239,749 |
| G14 | 211,862 | 40,714 | 275,862 | 52,233 | 479,584 | 63,542 | 313,390 | 53,176 |
| G15 | 28,204 | 16,372 | 34,722 | 95,993 | 39,992 | 327,382 | 34,580 | 114,482 |
| G16 | NA | 14,143 | NA | 16,347 | NA | 17,767 | NA | 16,239 |
| G17 | 53,453 | 194,963 | 58,214 | 253,091 | 68,984 | 299,253 | 58,887 | 245,866 |
| G18 | 100,454 | 40,926 | 111,947 | 47,454 | 138,025 | 51,767 | 113,959 | 47,492 |
| G19 | 258,825 | 92,874 | 382,792 | 103,590 | 488,370 | 131,497 | 380,668 | 104,419 |
| G21 | 119,496 | 64,134 | 133,016 | 167,044 | 214,865 | 260,762 | 146,211 | 157,874 |
| G23 | 292,693 | 120,625 | 322,553 | 235,728 | 401,084 | 359,742 | 334,721 | 228,825 |
| G24 | 5,116 | 5,233 | 6,632 | 6,640 | 8,744 | 7,849 | 6,679 | 6,666 |
| Average | 84,645 | 55,051 | 103,213 | 91,095 | 146,815 | 162,189 | 108,324 | 92,934 |
number of feasible runs divided by the total number of runs, and the success rate is the quotient of the number of successful runs divided by the total number of runs. Success performance = mean (FES for successful runs) × (number of total runs)/(number of successful runs). "NA" in the tables means no available value. "Average" means the average value over all the problems. The better values between the two algorithms are bolded. It can be seen that ICEM's average values of the best, median and mean number of FES are better than DSS-MDE's. Because ICEM failed to find feasible solutions in all runs for G13, its average feasible rate is 99.64%, which is slightly less than 100%. As for the success rate, ICEM achieves 100% on 19 of the 22 problems, while DSS-MDE achieves 100% on only 15 problems. DSS-MDE failed to find the optimal value for G16, and its success rate on G16 is zero; this did not happen with ICEM. The average success rate obtained by ICEM is higher than that of DSS-MDE. ICEM obtains 13 better values on the success performance, and its average value is much smaller than DSS-MDE's. As a whole, ICEM performs better than DSS-MDE on the 22 benchmark functions.
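The three protocol metrics used in Tables 19 and 20 can be restated compactly. The function below is an illustrative sketch of those definitions (the names are ours, not the benchmark suite's):

```python
def cec06_metrics(fes_of_successful_runs, n_feasible_runs, n_total_runs):
    """Feasible rate, success rate (both in %) and success performance, as
    used in the CEC'06 evaluation protocol. `fes_of_successful_runs` lists
    the FES counts of the successful runs only."""
    n_succ = len(fes_of_successful_runs)
    feasible_rate = 100.0 * n_feasible_runs / n_total_runs
    success_rate = 100.0 * n_succ / n_total_runs
    if n_succ == 0:
        return feasible_rate, success_rate, None  # 'NA' in the tables
    mean_fes = sum(fes_of_successful_runs) / n_succ
    success_perf = mean_fes * n_total_runs / n_succ
    return feasible_rate, success_rate, success_perf

fr, sr, sp = cec06_metrics([100, 200], 25, 25)
print(fr, sr, sp)  # 100.0 8.0 1875.0
```

The success performance penalizes low success rates: for example, ICEM's mean of 239,749 FES on G13 with a 36% success rate yields the 665,969 reported in Table 20.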
5. Conclusion and future works The most important contribution of this study is an improved electromagnetism-like mechanism algorithm for constrained optimization. Four modifications are made to improve the EM algorithm's accuracy and efficiency, and the FAD rules together with the corresponding charge formula are used as the constraint handling technique. The experiments on 13 benchmark functions, three engineering design problems and 22 benchmark functions from CEC'06 illustrate that the proposed EM algorithm outperforms other versions of the EM algorithm for constrained optimization, and that it is also very competitive among state-of-the-art algorithms such as SR, ISR, MABC, EP, GAs (GA1, GA2), PSOs (IVPSO, HPSO, CPSO, NM–PSO, PSOLVER) and DEs (DSS-MDE, CDE).
Table 20
Comparison between DSS-MDE and ICEM on the number of FES to reach the fixed accuracy level (f(x) − f(x*)) ≤ 0.0001, feasible rate, success rate and success performance (2).

| Function | Std (DSS-MDE) | Std (ICEM) | Feasible rate % (DSS-MDE) | Feasible rate % (ICEM) | Success rate % (DSS-MDE) | Success rate % (ICEM) | Success perf. (DSS-MDE) | Success perf. (ICEM) |
|---|---|---|---|---|---|---|---|---|
| G01 | 4683 | 1409 | 100 | 100 | 100 | 100 | 118,919 | 48,894 |
| G02 | 12,139 | 73,114 | 100 | 100 | 36 | 72 | 250,724 | 256,638 |
| G03 | 8744 | 52,033 | 100 | 100 | 100 | 100 | 103,585 | 207,442 |
| G04 | 2066 | 765 | 100 | 100 | 100 | 100 | 46,852 | 22,081 |
| G05 | 1857 | 65,304 | 100 | 100 | 100 | 100 | 45,550 | 99,811 |
| G06 | 1184 | 1636 | 100 | 100 | 100 | 100 | 15,776 | 22,310 |
| G07 | 7070 | 2907 | 100 | 100 | 100 | 100 | 113,672 | 48,549 |
| G08 | 537 | 344 | 100 | 100 | 100 | 100 | 2313 | 2452 |
| G09 | 2209 | 5519 | 100 | 100 | 100 | 100 | 38,226 | 26,467 |
| G10 | 95,451 | 4899 | 100 | 100 | 92 | 100 | 265,220 | 83,888 |
| G11 | 5401 | 67,063 | 100 | 100 | 100 | 100 | 19,343 | 80,238 |
| G12 | 839 | 911 | 100 | 100 | 100 | 100 | 2968 | 2847 |
| G13 | 4100 | 74,303 | 100 | 92 | 100 | 36 | 44,243 | 665,969 |
| G14 | 85,671 | 5878 | 100 | 100 | 84 | 100 | 373,084 | 53,176 |
| G15 | 2637 | 75,680 | 100 | 100 | 100 | 100 | 34,580 | 114,482 |
| G16 | NA | 817 | 100 | 100 | 0 | 100 | NA | 16,239 |
| G17 | 3422 | 36,285 | 100 | 100 | 100 | 28 | 58,887 | 878,094 |
| G18 | 8604 | 2316 | 100 | 100 | 100 | 68 | 113,959 | 69,841 |
| G19 | 81,728 | 7958 | 100 | 100 | 68 | 100 | 559,806 | 104,419 |
| G21 | 28,318 | 46,064 | 100 | 100 | 72 | 100 | 203,070 | 157,874 |
| G23 | 49,010 | 70,486 | 100 | 100 | 16 | 100 | 2,092,005 | 228,825 |
| G24 | 778 | 648 | 100 | 100 | 100 | 100 | 6679 | 6666 |
| Average | 19,355 | 27,106 | 100 | 99.64 | 84.91 | 91.09 | 214,736 | 145,327 |
The second contribution is the set of modifications to the EM algorithm. Because they are based on an analysis of the weaknesses of the original EM algorithm, they improve its accuracy and efficiency noticeably. The modifications are therefore not only useful for constrained optimization but also helpful when the EM algorithm is applied to other optimization problems, and they may serve as a reference for improving other algorithms. Expanding the application area of the proposed algorithm is another direction of future work; for example, with further modification, the proposed EM algorithm could be applied to multi-objective optimization problems.

Acknowledgment

This research is supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 60973086, 51005088 and 51121002.

References

Aguirre, A. H., Rionda, S. B., Coello, C. A. C., Lizárraga, G. L., & Montes, E. M. (2004). Handling constraints using multiobjective optimization concepts. International Journal for Numerical Methods in Engineering, 59(15), 1989–2017.
Ali, M. M., & Golalikhani, M. (2010). An electromagnetism-like method for nonlinearly constrained global optimization. Computers and Mathematics with Applications, 60(8), 2279–2285.
Arora, J. S. (1989). Introduction to optimum design. New York: McGraw-Hill.
Birbil, S. I. (2002). Stochastic global optimization techniques. Ph.D. Thesis, North Carolina State University.
Birbil, S. I., & Fang, S. C. (2003). An electromagnetism-like mechanism for global optimization. Journal of Global Optimization, 25(3), 263–282.
Coello, C. A. C. (2000). Use of a self-adaptive penalty approach for engineering optimization problems. Computers in Industry, 41, 113–127.
Coello, C. A. C., & Becerra, R. L. (2004). Efficient evolutionary optimization through the use of a cultural algorithm. Engineering Optimization, 36(2), 219–236.
Coello, C. A. C., & Montes, E. M. (2002).
Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Advanced Engineering Informatics, 16, 193–203.
Cuevas, E., Oliva, D., Zaldivar, D., Marco, P. C., & Sossa, H. (2012). Circle detection using electro-magnetism optimization. Information Sciences, 182(1), 40–55.
Deb, K. (2000). An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, 186(2/4), 311–338.
El-Gallad, A. I., El-Hawary, M. E., & Sallam, A. A. (2001). Swarming of intelligent particles for solving the nonlinear constrained optimization problem. Engineering Intelligent Systems for Electrical Engineering and Communications, 9(3), 155–163.
Gao, L., Wang, X. J., Wei, W., & Chen, Y. Z. (2006). A modified algorithm for electromagnetism-like mechanism. Journal of Huazhong University of Science and Technology (Nature Science Edition), 34, 4–6 (in Chinese).
Gol-Alikhani, M., Javadian, N., & Tavakkoli-Moghaddam, R. (2009). A novel hybrid approach combining electromagnetism-like method with Solis and Wets local search for continuous optimization problems. Journal of Global Optimization, 44, 227–234.
He, Q., & Wang, L. (2006). An effective co-evolutionary particle swarm optimization for engineering optimization problems. Engineering Applications of Artificial Intelligence, 20, 89–99.
He, Q., & Wang, L. (2007). A hybrid particle swarm optimization with a feasibility based rule for constrained optimization. Applied Mathematics and Computation, 186, 1407–1422.
Huang, F., Wang, L., & He, Q. (2007). An effective co-evolutionary differential evolution for constrained optimization. Applied Mathematics and Computation, 186, 340–356.
Kannan, B. K., & Kramer, S. N. (1994). An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. Journal of Mechanical Design, 116, 318–320.
Karaboga, D., & Akay, B. (2011). A modified artificial bee colony (ABC) algorithm for constrained optimization problems. Applied Soft Computing, 11, 3021–3031.
Kayhan, A. H., Ceylan, H., Ayvaz, M. T., & Gurarslan, G. (2010). PSOLVER: A new hybrid particle swarm optimization algorithm for solving continuous optimization problems. Expert Systems with Applications, 37, 6798–6808.
Lasdon, L. S., Waren, A. D., Jain, A., & Ratner, M. (1978). Design and testing of a generalized reduced gradient code for nonlinear programming. ACM Transactions on Mathematical Software, 4(1), 34–49.
Lee, C. H., & Chang, F. K. (2010). Fractional-order PID controller optimization via improved electromagnetism-like algorithm. Expert Systems with Applications, 37(12), 8871–8878.
Liang, J. J., Runarsson, T. P., Mezura-Montes, E., Clerc, M., Suganthan, P. N., Coello, C. A. C., & Deb, K. (2006). Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization.
Naji-Azimi, Z., Toth, P., & Galli, L. (2010). An electromagnetism metaheuristic for the unicost set covering problem. European Journal of Operational Research, 205(2), 290–300.
Rocha, A. M. A. C., & Fernandes, E. M. G. P. (2008a). Implementation of the electromagnetism-like algorithm with a constraint-handling technique for engineering optimization problems. In Proceedings of the 8th international conference on hybrid intelligent systems (HIS 2008) (pp. 690–695).
Rocha, A. M. A. C., & Fernandes, E. M. G. P. (2008b). On charge effects to the electromagnetism-like algorithm. In Proceedings of the 20th international conference/Euro mini conference on continuous optimization and knowledge-based technologies (EurOPT 2008), May 20–23 (pp. 198–203). Neringa, Lithuania.
Rocha, A. M. A. C., & Fernandes, E. M. G. P. (2008c). Feasibility and dominance rules in the electromagnetism-like algorithm for constrained global optimization. Lecture Notes in Computer Science, 5071, 768–783.
Rocha, A. M. A. C., & Fernandes, E. M. G. P. (2009a). Self-adaptive penalties in the electromagnetism-like algorithm for constrained global optimization problems. In 8th World congress on structural and multidisciplinary optimization, June 1–5, 2009. Lisbon, Portugal.
Rocha, A. M. A. C., & Fernandes, E. M. G. P. (2009b). A modified electromagnetism-like algorithm based on a pattern search method. Lecture Notes in Electrical Engineering, 28(2), 1035–1042.
Rocha, A. M. A. C., & Fernandes, E. M. G. P. (2009c). Modified movement force vector in an electromagnetism-like mechanism for global optimization. Optimization Methods and Software, 24(2), 253–270.
Runarsson, T. P., & Yao, X. (2000). Stochastic ranking for constrained evolutionary optimization. IEEE Transactions on Evolutionary Computation, 4, 284–294.
Runarsson, T. P., & Yao, X. (2005). Search biases in constrained evolutionary optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 35, 233–243.
Smith, J. E., & Fogarty, T. C. (1997). Operator and parameter adaptation in genetic algorithms. Soft Computing, 1(2), 81–87.
Storn, R., & Price, K. (1997). Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11(4), 341–359.
Su, C. T., & Lin, H. C. (2011). Applying electromagnetism-like mechanism for feature selection. Information Sciences, 181(5), 972–986.
Sun, C., Zeng, J., & Pan, J. (2011). An improved vector particle swarm optimization for constrained optimization problems. Information Sciences, 181, 1153–1163.
Takahama, T., & Sakai, S. (2005). Constrained optimization by applying the α constrained method to the nonlinear simplex method with mutations. IEEE Transactions on Evolutionary Computation, 9, 437–451.
Tavakkoli-Moghaddam, R., Khalili, M., & Naderi, B. (2009). A hybridization of simulated annealing and electromagnetic-like mechanism for job shop problems with machine availability and sequence-dependent setup times to minimize total weighted tardiness. Soft Computing, 13(10), 995–1006.
Wu, P. T., Yang, W. H., & Wei, N. C. (2004). An electromagnetism algorithm of neural network analysis: an application to textile retail operation. Journal of the Chinese Institute of Industrial Engineers, 21(1), 59–67.
Zahara, E., & Kao, Y. T. (2009). Hybrid Nelder–Mead simplex search and particle swarm optimization for constrained engineering design problems. Expert Systems with Applications, 36, 3880–3886.
Zhang, M., Luo, W., & Wang, X. (2008). Differential evolution with dynamic stochastic selection for constrained optimization. Information Sciences, 178(15), 3043–3074.