Information Sciences 218 (2013) 85–102
A piecewise linear chaotic map and sequential quadratic programming based robust hybrid particle swarm optimization

Wenxing Xu (a), Zhiqiang Geng (a), Qunxiong Zhu (a,*), Xiangbai Gu (b)

(a) College of Information Science & Technology, Beijing University of Chemical Technology, Beijing 100029, China
(b) Sinopec Engineering, Beijing 100029, China
Article history: Received 14 May 2011; received in revised form 28 April 2012; accepted 3 June 2012; available online 15 June 2012.

Keywords: Particle swarm optimization; Sequential quadratic programming; Chaotic optimization; Piecewise linear chaotic map; Constrained optimization

Abstract

This paper presents a novel robust hybrid particle swarm optimization (RHPSO) based on the piecewise linear chaotic map (PWLCM) and sequential quadratic programming (SQP). The aim of the present research is to develop a new single-objective optimization approach that requires no adjustment of its parameters for either unconstrained or constrained optimization problems. The novel algorithm exploits the ergodicity of the PWLCM to help PSO with the global search while employing SQP to accelerate the local search. Five unconstrained benchmarks, eighteen constrained benchmarks and three engineering optimization problems from the literature are solved with the proposed hybrid approach. The simulation results, compared with other state-of-the-art methods, demonstrate the effectiveness and robustness of the proposed RHPSO for both unconstrained and constrained problems of different dimensions.

© 2012 Elsevier Inc. All rights reserved.
1. Introduction

Particle swarm optimization (PSO) is a heuristic swarm intelligence optimization approach, first introduced by Kennedy and Eberhart in 1995 [25]. As an efficient and powerful problem-solving strategy, PSO has been used to solve many scientific and engineering optimization problems, such as function optimization [8,13,20,28,40], product design and manufacturing [2,52], fault diagnosis [50], process and system design [27,35,36], economic load dispatch (ELD) [49,59], stock market forecasting [21] and stock portfolio construction [5].

Owing to PSO's general applicability, simplicity of implementation, and capability of balancing global and local searches, a great deal of research has been devoted to studying and improving its performance, especially for single-objective optimization (SOO) problems. Empirical evidence has shown that PSO is a promising tool for global optimization if a good balance is found between exploration and exploitation, and many variants of the algorithm have been established to improve its performance [6,14,19,30,31,40,41,47,49,51,60]. Stability analysis and parameter selection criteria are also of great interest [7,16,22,33,48]. Van den Bergh and Engelbrecht [48] proved that under certain conditions each particle converges to a stable predefined point. This point is a weighted average of the personal best and global best positions, where the weights are determined by the values of the acceleration coefficients. Empirical and theoretical studies show that PSO is sensitive to its control parameters, especially the inertia weight and the acceleration coefficients. Inappropriate initialization of these parameters may even lead to divergent or cyclic behavior. Appropriate adjustment of the parameters is thus important, but it is usually expensive in time and labor.
* Corresponding author. E-mail addresses: [email protected] (W.X. Xu), [email protected] (Z.Q. Geng), [email protected] (Q.X. Zhu), [email protected] (X.B. Gu).

http://dx.doi.org/10.1016/j.ins.2012.06.003
Two major issues affect the search performance of PSO algorithms: (1) existing PSO algorithms suffer from premature convergence due to the rapid erosion of species diversity, especially when optimizing continuous multimodal functions; and (2) PSO converges fast at the beginning of the search but slows down quickly once the global best found is near the optimum of the local search, so that many ineffective iterations are required to obtain a more accurate estimate of the local optimum. Furthermore, it is hard to maintain the balance between exploration and exploitation.

To overcome these drawbacks, PSO has recently been improved in several directions. Parsopoulos and Vrahatis [37] integrated exploration and exploitation to form a unified particle swarm optimization (UPSO). Yildiz [52] strengthened the algorithm by hybridizing it with the receptor editing property of the immune system and applied it to optimization problems in the product design and manufacturing area. Juang et al. [23] proposed an adaptive fuzzy PSO (AFPSO) algorithm, which improves the accuracy and efficiency of searches by utilizing fuzzy set theory to adjust the PSO acceleration coefficients adaptively. Tripathi et al. [43] introduced time variant multi-objective PSO (TV-MOPSO), allowing its vital parameters (i.e., inertia weight and acceleration coefficients) to change with the iterations, and incorporating a mutation operator to resolve premature convergence to a local Pareto-optimal front. Shi et al. [42] integrated a mechanism of cellular automata into the velocity update of PSO to produce two versions of cellular PSO for function optimization. Huang et al. [20] used an example set of multiple global best particles to update the positions of the particles in their example-based learning PSO (ELPSO). However, proper tuning of the parameters remains a problem, and an exhaustive study of the sensitivity of such algorithms to their parameters is needed.

Besides, as one of the evolutionary algorithms and metaheuristics used for optimization, PSO naturally operates as an unconstrained search, while most real-world problems have constraints of different types (e.g., physical, time, geometric) that modify the shape of the search space. During the last couple of decades, a wide variety of metaheuristics have been designed and applied to solve constrained optimization problems [11,18,26,29,32,34,44-46,52-57].

The main aim of this paper is to devise a robust single-objective particle swarm optimization algorithm which resolves the above two drawbacks of PSO simultaneously and can be applied to both unconstrained and constrained optimization problems without changing its parameters. The organization of this paper is as follows. Some preliminaries are introduced in Section 2, followed by the description of our approach in Section 3 and the experiments, results and discussion in Section 4. Finally, we draw conclusions in Section 5.

2. Related work

2.1. Standard PSO algorithm

In PSO, every solution is a "bird" in the search space, called a "particle". Each particle i of the swarm has a position Xi = (xi1, xi2, ..., xiN)^T and a velocity Vi = (vi1, vi2, ..., viN)^T. While flying through the problem search space, each particle generates a new solution using its directed velocity vector. Each particle modifies its velocity to find a better solution (position) by applying its own flying experience (the personal best position Pi = (pi1, pi2, ..., piN)^T found in its earlier flights) and the experience of neighboring particles (the global best position Pg = (pg1, pg2, ..., pgN)^T found by the population), as shown below:
v_{id}^{k+1} = w v_{id}^{k} + c_1 \, \mathrm{rand}() \, (p_{id} - x_{id}^{k}) + c_2 \, \mathrm{rand}() \, (p_{gd} - x_{id}^{k})    (1)

x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}    (2)
where i = 1, 2, ..., M and M is the swarm size; N is the dimension of the search space and d = 1, 2, ..., N indexes the components; the superscript k indicates the iteration count; rand() is a random value in the range [0, 1]; and the scaling parameters c1 and c2 are positive constants regulating the maximum step sizes for the particles to fly towards Pi and Pg, respectively. Each component of Vi and Xi can be clamped to the ranges [vmin, vmax] and [xmin, xmax] to control excessive roaming of particles outside the search space. In the standard PSO, w is a constant during the search iterations; when w = 1, the standard PSO degenerates into the original PSO. Empirical results have shown that a constant inertia weight of w = 0.7298 and acceleration coefficients c1 = c2 = 1.49618 provide good convergence [48]. While static inertia values have been used successfully, it has also been shown that adaptive inertia values lead to convergent behavior [16,22,33,48]. In particular, when the parameter w decreases linearly, as recommended by Shi and Eberhart [41], the algorithm evolves into the so-called linear decreasing weight particle swarm optimization (LDWPSO).
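As a concrete illustration, the update (1)-(2) for a single particle can be written in a few lines of NumPy. This is our own sketch, not the authors' code; the clamping bounds are illustrative:

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.7298, c1=1.49618, c2=1.49618,
             v_bounds=(-1.0, 1.0), x_bounds=(-100.0, 100.0)):
    """One PSO step per Eqs. (1)-(2) for a single particle (1-D arrays)."""
    r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    v_new = np.clip(v_new, *v_bounds)        # clamp velocity to [vmin, vmax]
    x_new = np.clip(x + v_new, *x_bounds)    # clamp position to [xmin, xmax]
    return x_new, v_new
```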
2.2. PWLCM with dynamic search range

Benefiting from the properties of ergodicity and stochasticity, chaos has recently been employed in optimization [3,30,31,33,51]. In the existing chaotic PSO algorithms, the well-known logistic map is prevalently used. However, the piecewise linear chaotic map (PWLCM) has been gaining attention due to its simplicity of representation, efficiency of implementation, and good dynamical behavior. PWLCMs are known to be ergodic and to have a uniform invariant density function on their definition intervals [3]. The simplest PWLCM is given in (3):

x_{t+1} = \begin{cases} x_t / p, & x_t \in (0, p) \\ (1 - x_t)/(1 - p), & x_t \in [p, 1) \end{cases}    (3)
where x behaves chaotically in (0, 1) when p \in (0, 0.5) \cup (0.5, 1). This chaotic map on (0, 1) is abbreviated as CM(0, 1). Based on the fact that chaotic search is efficient in a small range, a search radius R was defined in [30,31,33,51]; in those works, the chaotic local search for a particle was taken in a range around the current location, the global best position and the personal best position, respectively. In this paper, particles take chaotic steps within a dynamically shrinking range around the global best of the whole swarm, as determined by Eqs. (4) and (5):
CM(-1, 1) = 2 \, CM(0, 1) - 1    (4)

\hat{x}_{id}^{k} = p_{gd} \, [1 + \eta \, CM(-1, 1)]    (5)
Here the scaling parameter \eta = 1.1 is set from experience [43]. After the chaotic step, the particle is still clamped to the initial problem space [x_min, x_max] according to (6):
X_i^{k+1} = \max\{\min\{\hat{X}_i^k, x_{\max}\}, x_{\min}\}    (6)
2.3. Local search based on sequential quadratic programming (SQP)

The SQP method was first proposed by Wilson in his 1963 doctoral dissertation and is suitable for solving nonlinear programming problems. The basic idea of this method is to make a quadratic approximation of the Lagrangian of the original problem, forming the QP sub-problem shown in (7) [15]:
\min_{d \in \mathbb{R}^n} \; \tfrac{1}{2} d^T H_k d + \nabla f(x_k)^T d
\text{s.t.} \; \nabla g(x_k)^T d + g(x_k) \le 0    (7)
where the subscript k denotes the current iteration and H is the Hessian matrix, which can be approximated by quasi-Newton methods such as BFGS. The solution of the QP sub-problem is then used to form a search direction for a line search procedure in the next step. In this work, all gradient information is approximated using finite differences.

2.4. Constraint-handling mechanism

The original PSO is designed to handle unconstrained optimization problems; for constrained problems, a constraint-handling mechanism is therefore required to guide the swarm towards the feasible region. Besides penalty functions, one of the most widely used constraint-handling mechanisms is the set of feasibility rules proposed by Deb [12]:

(1) Between two feasible solutions, the one with the better fitness value wins.
(2) If one solution is feasible and the other one is infeasible, the feasible solution wins.
(3) If both solutions are infeasible, the one with the lowest sum of constraint violations is preferred.

This mechanism is applied here to compare two particles in constrained problems (a minimal code transcription is given after Definition 3 below). Thus, without changing the algorithm steps, PSO can also be used for constrained problems.

3. Robust hybrid particle swarm optimization (RHPSO)

There are various ways to integrate a chaotic map and SQP with PSO. In this paper, PSO is first combined with the PWLCM to form PWLCPSO, and SQP then serves as a local search that accelerates exploitation around the global best found by PWLCPSO. This combination forms a novel algorithm, called RHPSO. The particles are able to search the entire space while locating local optima quickly, which increases the chance of finding the global optimum in problems with many local optima while ensuring the convergence of the algorithm. A brief flow chart of the proposed algorithm is shown in Fig. 1. Here, PSO and PWLCM are two parallel dynamic moving regimes for all particles. For a better description, three definitions are made as follows:

Definition 1 (PSO step). Moving according to formulas (1) and (2) is called a particle swarm optimization step, namely a PSO step.

Definition 2 (PLC step). Moving according to formulas (4)-(6) is called a piecewise linear chaotic step, namely a PLC step.

Definition 3 (Stagnation). If the personal best of a particle stays unchanged after a step is taken, a stagnation is said to happen to this particle.
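The feasibility rules of Section 2.4 transcribe directly into a comparison function. The following is our own minimal sketch, where `viol` denotes the summed constraint violation of a solution (zero for feasible points):

```python
def deb_better(f_a, viol_a, f_b, viol_b):
    """Return True if solution A beats solution B under Deb's rules [12]."""
    feas_a, feas_b = viol_a <= 0.0, viol_b <= 0.0
    if feas_a and feas_b:      # rule 1: both feasible -> better fitness wins
        return f_a < f_b
    if feas_a != feas_b:       # rule 2: feasible beats infeasible
        return feas_a
    return viol_a < viol_b     # rule 3: smaller total violation wins
```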
Fig. 1. A simple flow chart of the proposed RHPSO.
Fig. 2. Illustration of the search behavior of a particle i in PWLCPSO.
The search behavior of a particle i in PWLCPSO is illustrated in Fig. 2. After taking a step v^k, particle i has reached its current position X_i^k. Then there are two possible steps, the PSO step and the PLC step. The PSO step is the weighted sum of v^k, the distance to the personal best P_i^k and the distance to the global best P_g^k; the PLC step is a chaotic jump. Which step to take depends on the comparison between X_i^k and P_i^{k-1}: if X_i^k is better than P_i^{k-1}, the PSO step is taken and particle i moves to position P'; otherwise, the PLC step is taken and particle i moves to position P''. In unconstrained problems, a particle with a lower fitness value is better; in constrained problems, the comparison is made according to Deb's rules described in Section 2.4.

Before the next iteration, the global best found so far by PWLCPSO is used as the starting point of a local search phase employing the SQP method; in the experiments, the "fmincon" function of Matlab 7.8 is used. The global best solutions found by SQP and by the whole algorithm (RHPSO) are denoted Pg_sqp and Po, respectively. By comparing Pg_sqp with Pg, if a better solution is found in the local phase, Po is updated. The detailed architecture of the proposed RHPSO algorithm is described in Table 1.
Table 1. Procedure of RHPSO.

Algorithm 1: Piecewise linear chaotic map and SQP based robust hybrid particle swarm algorithm (RHPSO)

Input parameters: swarm size M, max function evaluations Max_FEs.
for each particle i do
    Xi = xmin + (xmax - xmin) * U(0, 1)
    Pi = Xi
end for
Pg = arg min_{i=1,...,M} f(Xi)        // global best particle of PWLCPSO
Po = Pg                               // global best solution derived by RHPSO
repeat
    for each particle i do
        if stagnation_interval[i] = 0 then
            Update the position Xi according to (1) and (2).    // PSO step
        else
            Update the position Xi according to (3)-(6).        // PLC step
            stagnation_interval[i] = 0    // reset the stagnation interval
        end if
        if f(Xi) < f(Pi) then             // update the personal best
            Pi = Xi
        else                              // no improvement of the fitness
            stagnation_interval[i]++      // increase the stagnation interval by one
        end if
        if f(Pi) < f(Pg) then             // update the global best particle
            Pg = Pi
            flag = 1                      // arouse the SQP local search
            if f(Pg) < f(Po) then         // update the optimal value derived by RHPSO
                Po = Pg
            end if
        end if
    end for
    if flag = 1 then                      // use the SQP local search to update Pg_sqp
        Update Pg_sqp by solving (7)
        if f(Pg_sqp) < f(Po) then
            Po = Pg_sqp                   // update the optimal value derived by RHPSO
        end if
        flag = 0                          // reset the flag
    end if
until termination condition met
Output: Po, f(Po)    // global best solution and optimal function value derived by RHPSO
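For readers who prefer running code, the following Python sketch mirrors the structure of Algorithm 1 for an unconstrained objective. It is our interpretation rather than the authors' implementation: SciPy's SLSQP routine stands in for the Matlab "fmincon" SQP local search, and the map parameter p = 0.7 is an assumption:

```python
import numpy as np
from scipy.optimize import minimize

def rhpso(f, dim, x_bounds=(-100.0, 100.0), M=10, max_fes=4500, seed=0):
    """Sketch of Algorithm 1 (unconstrained objective f)."""
    rng = np.random.default_rng(seed)
    lo, hi = x_bounds
    X = lo + (hi - lo) * rng.random((M, dim))    # random initialization
    V = np.zeros((M, dim))
    P = X.copy()                                 # personal bests
    fP = np.array([f(x) for x in X])
    g = int(np.argmin(fP))
    Pg, fPg = P[g].copy(), fP[g]                 # global best of PWLCPSO
    Po, fPo = Pg.copy(), fPg                     # global best of RHPSO
    C = rng.random((M, dim))                     # chaotic states in (0, 1)
    stagnant = np.zeros(M, dtype=int)
    fes = M
    while fes < max_fes:
        flag = False
        for i in range(M):
            if stagnant[i] == 0:                 # PSO step, Eqs. (1)-(2)
                r1, r2 = rng.random(dim), rng.random(dim)
                V[i] = (0.7298 * V[i] + 1.49618 * r1 * (P[i] - X[i])
                        + 1.49618 * r2 * (Pg - X[i]))
                X[i] = np.clip(X[i] + V[i], lo, hi)
            else:                                # PLC step, Eqs. (3)-(6), p = 0.7
                C[i] = np.where(C[i] < 0.7, C[i] / 0.7, (1.0 - C[i]) / 0.3)
                X[i] = np.clip(Pg * (1.0 + 1.1 * (2.0 * C[i] - 1.0)), lo, hi)
                stagnant[i] = 0                  # reset the stagnation interval
            fx = f(X[i]); fes += 1
            if fx < fP[i]:                       # update the personal best
                P[i], fP[i] = X[i].copy(), fx
                if fx < fPg:                     # update the global best particle
                    Pg, fPg, flag = X[i].copy(), fx, True
                    if fPg < fPo:
                        Po, fPo = Pg.copy(), fPg
            else:
                stagnant[i] += 1                 # a stagnation happened
        if flag:                                 # SQP local search started from Pg
            res = minimize(f, Pg, method='SLSQP', bounds=[(lo, hi)] * dim,
                           options={'maxiter': 50})
            fes += res.nfev
            if res.fun < fPo:                    # compare Pg_sqp with Po
                Po, fPo = np.clip(res.x, lo, hi), res.fun
    return Po, fPo
```

For example, `rhpso(lambda x: float(np.sum(x**2)), dim=30)` corresponds to the Sphere setting of Section 4.1 and should drive the objective close to zero.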
4. Experiments and results discussion

Several experiments on problems from the optimization literature are used to evaluate the approach proposed in Section 3. These experiments include unconstrained benchmarks, constrained benchmarks and engineering optimization problems with linear and nonlinear constraints, whose solutions obtained by other techniques are available for comparison with and evaluation of the proposed approach.
4.1. Experiment 1: five unconstrained benchmarks

Five unimodal and multimodal minimization problems, summarized in Table 2, are considered. The global optimal value of each benchmark is known to be zero. The aim of this experiment is to test the robustness of the proposed RHPSO; robustness tests against problem dimension and against parameter choices are carried out, respectively.
4.1.1. Robustness test against problem dimension

In order to evaluate the effectiveness and efficiency of RHPSO, we compare its performance with those of LDWPSO [30], SQP [39] and CPSO [30], using the same pre-adjusted parameter settings, against the dimensional variation of these functions.

Table 2. Benchmark functions for simulations.

Function    | Expression                                                                                          | Box constraint
Sphere      | f_1(x) = \sum_{i=1}^{n} x_i^2                                                                       | [-100, 100]^n
Rosenbrock  | f_2(x) = \sum_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]                                   | [-30, 30]^n
Rastrigin   | f_3(x) = \sum_{i=1}^{n} [x_i^2 - 10 \cos(2\pi x_i) + 10]                                            | [-5.12, 5.12]^n
Griewank    | f_4(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos(x_i / \sqrt{i}) + 1             | [-600, 600]^n
Ackley      | f_5(x) = -20 \exp(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}) - \exp(\frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)) + 20 + e | [-32, 32]^n
Each benchmark problem is solved 50 times using independent starting points randomly selected inside the box-constrained hyper-rectangular domain. For comparison, all parameters are the same as in the literature: swarm size M = 10, 20, 30; inertia weight decreasing linearly from 0.9 to 0.4; c1 = c2 = 2.0; velocity clamped to the range [(xmin - xmax)/2, (xmax - xmin)/2]; maximum function evaluations of 60,000 for Rosenbrock and 4500 for the other functions; and an optimization objective of 10^-5 for all five benchmark functions. A run is considered successful only if its result satisfies the optimization objective.

Tables 3-5 list the optimal results of these algorithms on the test problems with 10, 20 and 30 dimensions. Table 6 compares convergence speed: AvgFES and BestFES denote the average and least numbers of function evaluations needed to reach the optimization objective (set to inf if an algorithm cannot reach the objective), and SR, which indicates the robustness of a PSO algorithm, is the ratio of the number of successful runs to the total number of runs. Fig. 3 shows the iteration courses of RHPSO and LDWPSO on the five benchmark functions in 30 dimensions; once the global optimal value of zero is attained, the logarithmic fitness becomes -inf, so a convergence curve may "disappear" after a certain point.

From these experimental results, we can see that RHPSO performed better than the other three algorithms on the test suite in terms of the average, best and worst optimization results. The statistical results in Tables 3-6 confirm that RHPSO is significantly better than the other algorithms not only in convergence accuracy but also in convergence speed and robustness. It can be seen from Fig. 3 that RHPSO converges much faster than LDWPSO and keeps improving for longer, which reflects its strong exploration ability.
Table 3. Convergence accuracy comparison on 10-D benchmark functions.

Function   | Stat  | RHPSO        | CPSO [30]   | LDWPSO [30] | SQP [39]
Sphere     | Mean  | 3.5078E-245  | 3.4213E-12  | 1.2102E-04  | 2.5749E-27
           | Best  | 1.5000E-323  | 1.4356E-81  | 1.5387E-06  | 3.5657E-28
           | Worst | 5.0380E-248  | 1.7103E-10  | 1.1486E-03  | 8.8173E-27
Rosenbrock | Mean  | 1.2061E-07   | 9.3949E-03  | 3.1101E+00  | 1.4352E+00
           | Best  | 1.5606E-08   | 1.1856E-08  | 2.8453E-03  | 7.5595E-12
           | Worst | 3.0398E-07   | 9.0066E-02  | 1.1050E+01  | 3.9866E+00
Rastrigin  | Mean  | 0.0000E+00   | 1.7997E-07  | 9.7363E+00  | 6.9249E+01
           | Best  | 0.0000E+00   | 0.0000E+00  | 3.3607E+00  | 2.3879E+01
           | Worst | 0.0000E+00   | 7.8006E-06  | 2.5939E+01  | 1.5422E+02
Griewank   | Mean  | 0.0000E+00   | 2.1287E-10  | 1.7072E-01  | 3.5357E-01
           | Best  | 0.0000E+00   | 0.0000E+00  | 1.6949E-02  | 2.8879E-09
           | Worst | 0.0000E+00   | 6.4174E-09  | 7.2835E-01  | 3.6312E+00
Ackley     | Mean  | 0.0000E+00   | 1.5952E-08  | 5.9934E-03  | 1.9090E+01
           | Best  | 0.0000E+00   | 8.8178E-16  | 1.3078E-04  | 1.5245E+01
           | Worst | 0.0000E+00   | 6.3330E-07  | 2.5325E-02  | 1.9959E+01
Table 4. Convergence accuracy comparison on 20-D benchmark functions.

Function   | Stat  | RHPSO        | CPSO [30]   | LDWPSO [30] | SQP [39]
Sphere     | Mean  | 4.8770E-167  | 7.5935E-06  | 7.5935E-06  | 7.8230E-27
           | Best  | 1.2337E-218  | 1.6348E-78  | 1.6348E-78  | 1.7954E-27
           | Worst | 2.4385E-165  | 1.5205E-04  | 1.5205E-04  | 1.6876E-26
Rosenbrock | Mean  | 1.2385E-07   | 2.2828E-02  | 2.2828E-02  | 9.5679E-01
           | Best  | 1.1510E-08   | 2.4387E-06  | 2.4387E-06  | 5.6252E-11
           | Worst | 4.6721E-07   | 3.6301E-01  | 3.6301E-01  | 3.9866E+00
Rastrigin  | Mean  | 0.0000E+00   | 7.4358E-05  | 7.4358E-05  | 1.3464E+02
           | Best  | 0.0000E+00   | 0.0000E+00  | 0.0000E+00  | 3.9798E+01
           | Worst | 0.0000E+00   | 2.0145E-03  | 2.0145E-03  | 2.5471E+02
Griewank   | Mean  | 0.0000E+00   | 2.6490E-07  | 2.6490E-07  | 1.4793E-04
           | Best  | 0.0000E+00   | 0.0000E+00  | 0.0000E+00  | 4.3753E-12
           | Worst | 0.0000E+00   | 1.3245E-05  | 1.3245E-05  | 7.3961E-03
Ackley     | Mean  | 0.0000E+00   | 3.2707E-06  | 3.2707E-06  | 1.9361E+01
           | Best  | 0.0000E+00   | 8.8178E-16  | 8.8178E-16  | 1.6701E+01
           | Worst | 0.0000E+00   | 1.3865E-04  | 1.3865E-04  | 1.9967E+01
Table 5. Convergence accuracy comparison on 30-D benchmark functions.

Function   | Stat  | RHPSO        | CPSO [30]   | LDWPSO [30] | SQP [39]
Sphere     | Mean  | 4.1446E-108  | 3.0421E-12  | 3.4849E+01  | 1.6649E-26
           | Best  | 3.2628E-147  | 1.4356E-81  | 8.2315E+00  | 6.1618E-27
           | Worst | 2.0721E-106  | 1.7103E-10  | 8.2780E+01  | 2.8929E-26
Rosenbrock | Mean  | 1.2732E-07   | 4.8167E-02  | 4.7716E+01  | 1.2757E+00
           | Best  | 1.3192E-08   | 1.3465E-05  | 1.5259E+00  | 1.5564E-11
           | Worst | 3.6100E-07   | 9.2783E-01  | 1.3253E+02  | 3.9866E+00
Rastrigin  | Mean  | 0.0000E+00   | 2.5235E-03  | 9.8445E+01  | 1.9722E+02
           | Best  | 0.0000E+00   | 0.0000E+00  | 5.5295E+01  | 9.3526E+01
           | Worst | 0.0000E+00   | 1.2590E+00  | 1.4515E+02  | 3.3828E+02
Griewank   | Mean  | 0.0000E+00   | 2.2397E-04  | 1.2159E+00  | 5.3866E-09
           | Best  | 0.0000E+00   | 0.0000E+00  | 1.0665E+00  | 1.0070E-13
           | Worst | 0.0000E+00   | 1.1184E-02  | 1.6145E+00  | 2.1518E-08
Ackley     | Mean  | 0.0000E+00   | 6.3330E-07  | 3.1255E+00  | 1.9432E+01
           | Best  | 0.0000E+00   | 1.3865E-04  | 1.5849E+00  | 1.4406E+01
           | Worst | 0.0000E+00   | 5.8935E-03  | 4.4928E+00  | 2.1709E+01
Table 6. Convergence speed comparison.

Fun        | Dim | RHPSO AvgFES | RHPSO BestFES | RHPSO SR | CPSO [30] AvgFES | CPSO [30] BestFES | CPSO [30] SR | LDWPSO [30] AvgFES | LDWPSO [30] BestFES | LDWPSO [30] SR | SQP SR
Sphere     | 30  | 65     | 65     | 50/50 | 1696   | 990   | 50/50 | 34,128 | 3153   | 50/50 | 50/50
Rosenbrock | 10  | 450.08 | 261    | 50/50 | 41,544 | 3750  | 31/50 | inf    | inf    | 0/50  | 32/50
Rastrigin  | 30  | 490    | 212    | 50/50 | 2480   | 1170  | 50/50 | inf    | inf    | 0/50  | 0/50
Griewank   | 30  | 78.84  | 65     | 50/50 | 2154   | 1020  | 50/50 | 34,161 | 31,920 | 23/50 | 50/50
Ackley     | 30  | 986.7  | 651    | 50/50 | 2201   | 1320  | 50/50 | 42,546 | 38,130 | 49/50 | 0/50
4.1.2. Robustness test against parameter choices

This section presents experimental results illustrating the convergence behavior of RHPSO for the different parameter combinations discussed in [48]. The objective of the experiments is to show the independence of the proposed algorithm from parameter choices, including the inertia weight and the acceleration coefficients. For all experiments in this section, 20 particles were used, no velocity clamping was applied, and inertia weights were kept constant. Table 7 summarizes the results of the different parameter combinations for all functions, and Figs. 4-8 plot the average logarithmic fitness of the best particle over the 50 simulations vs. function evaluations.

From Table 7 and Figs. 4-8, it is evident that, except for the extreme case where w = 0.001 and c1 = c2 = 2.0, RHPSO converges for all functions to a satisfactory solution, much better than that obtained by the SPSO with elaborately selected parameters. In particular, for the Rastrigin and Ackley functions, the results with w = 0.001 and c1 = c2 = 2.0 are much worse than those with the other combinations. The reasons are that this configuration violates the heuristic given in [48], which significantly increases the divergence probability of the PSO itself, and that the local search on Rastrigin and Ackley may not be improved dramatically by SQP, as can be seen in Table 6. These two disadvantages together make the proposed hybrid PSO less successful in this case. In all other cases (not as extreme as w = 0.001 and c1 = c2 = 2.0), the proposed RHPSO yields not only higher convergence speed but also more accurate fitness values on all tested functions for the different parameter combinations.
Fig. 3. Logarithmic plots of mean function value vs. function evaluations of LDWPSO and RHPSO.
Table 7. Comparison of the average number of function evaluations to saturation and the average fitness at the point of saturation (standard errors in parentheses) for different parameter choices.

Problem    | w      | c1 = c2  | RHPSO FEs          | RHPSO fitness               | SPSO [48] FEs           | SPSO [48] fitness
Sphere     | 0.001  | 2        | 45 (0)             | 6.0943E-37 (3.9591E-37)     | 1,754,416 (436,108)     | 9.4901E+03 (1.9174E+04)
Sphere     | 0.7298 | 1.49618  | 45 (0)             | 3.07772E-83 (1.6492E-82)    | 14,620 (3346)           | 3.9700E-11 (2.6780E+00)
Sphere     | 0.7    | 1.4      | 45 (0)             | 4.9504E-75 (2.9418E-74)     | 231,104 (503,625.18)    | 4.9600E-01 (1.0070E+01)
Sphere     | 0.7    | 2        | 45 (0)             | 1.2267E-98 (6.0710E-98)     | 795,868 (406,626.12)    | 2.8995E+03 (1.4364E+03)
Sphere     | 0.9    | 2        | 45 (0)             | 1.0872E-109 (7.1429E-109)   | 200 (0)                 | 5.5880E+04 (8.8433E+03)
Sphere     | 1      | 2        | 45 (0)             | 3.4233E-106 (2.4211E-105)   | 200 (0)                 | 6.8613E+03 (1.0593E+04)
Rosenbrock | 0.001  | 2        | 4769.2 (4755.5)    | 1.0667E-07 (5.6108E-08)     | 1,546,264 (705,298)     | 9.3669E+02 (1.3908E+03)
Rosenbrock | 0.7298 | 1.49618  | 2940.1 (1936.8)    | 1.2112E-07 (7.6566E-08)     | 671,944 (66,784)        | 6.2050E-05 (4.1030E-04)
Rosenbrock | 0.7    | 1.4      | 3432.6 (2400.9)    | 1.1642E-07 (6.3896E-08)     | 319,800 (442,604)       | 3.8170E+00 (3.7490E+00)
Rosenbrock | 0.7    | 2        | 2818.3 (1652)      | 1.1828E-07 (6.3536E-08)     | 1,583,292 (681,866)     | 5.6693E+02 (1.3516E+03)
Rosenbrock | 0.9    | 2        | 3065.7 (1907.8)    | 1.4004E-07 (7.6794E-08)     | 200 (0)                 | 3.2998E+03 (7.6311E+02)
Rosenbrock | 1      | 2        | 3313.8 (2052.1)    | 1.3678E-07 (9.3687E-08)     | 200 (0)                 | 3.2590E+03 (9.4296E+02)
Rastrigin  | 0.001  | 2        | 2035.8 (1163.1)    | 1.0378E+01 (3.1710E+01)     | 1,383,752 (771,872)     | 2.4586E+02 (1.5935E+02)
Rastrigin  | 0.7298 | 1.49618  | 962.44 (380.3773)  | 0.0000E+00 (0.0000E+00)     | 25,600 (2914)           | 8.0313E+01 (2.0493)
Rastrigin  | 0.7    | 1.4      | 1040.3 (490.3878)  | 0.0000E+00 (0.0000E+00)     | 203,892 (357,834)       | 8.3340E+01 (2.1083E+01)
Rastrigin  | 0.7    | 2        | 905.04 (348.2887)  | 0.0000E+00 (0.0000E+00)     | 1,574,996 (675,956)     | 9.8332E+01 (1.5212E+02)
Rastrigin  | 0.9    | 2        | 824.2 (303.2178)   | 0.0000E+00 (0.0000E+00)     | 200 (0)                 | 4.5680E-02 (2.4133E+01)
Rastrigin  | 1      | 2        | 801.24 (265.2657)  | 0.0000E+00 (0.0000E+00)     | 204 (28)                | 4.4725E+02 (3.5139E+01)
Griewank   | 0.001  | 2        | 51.68 (10.066)     | 1.4137E-12 (7.4165E-12)     | 1,743,140 (374,780)     | 1.0189E+02 (2.0082E+02)
Griewank   | 0.7298 | 1.49618  | 51 (5.5181)        | 0.0000E+00 (0.0000E+00)     | 25,000 (3602)           | 5.2000E-02 (8.7000E-02)
Griewank   | 0.7    | 1.4      | 50 (7.3983)        | 0.0000E+00 (0.0000E+00)     | 262,824 (469,394.68)    | 6.4000E-01 (1.0030E+00)
Griewank   | 0.7    | 2        | 53.1 (16.7567)     | 0.0000E+00 (0.0000E+00)     | 89,072 (487,348)        | 2.5200E+01 (1.2485E+02)
Griewank   | 0.9    | 2        | 51.6 (11.3425)     | 0.0000E+00 (0.0000E+00)     | 200 (0)                 | 6.1340E+02 (8.4197E+01)
Griewank   | 1      | 2        | 50.14 (8.656)      | 0.0000E+00 (0.0000E+00)     | 200 (0)                 | 6.2050E+02 (7.7147E+01)
Ackley     | 0.001  | 2        | 4383 (1051.7)      | 2.6732E-04 (2.9843E-04)     | 1,642,836 (539,034)     | 7.5309E+00 (7.5198E+00)
Ackley     | 0.7298 | 1.49618  | 2151.8 (340.0476)  | 4.4060E-12 (2.6339E-11)     | 23,908 (4356)           | 3.6516E+00 (1.5144E+00)
Ackley     | 0.7    | 1.4      | 2064.9 (307.6766)  | 5.1465E-13 (1.8597E-12)     | 122,092 (315,726)       | 6.4245E+00 (1.7524E+00)
Ackley     | 0.7    | 2        | 1738.8 (227.0143)  | 2.7711E-15 (1.4866E-15)     | 1,068,228 (512,884)     | 2.4614E+00 (6.7351E+00)
Ackley     | 0.9    | 2        | 1752.8 (276.7358)  | 2.6290E-15 (1.5742E-15)     | 200 (0)                 | 2.0489E+01 (2.40307E-01)
Ackley     | 1      | 2        | 1636.5 (262.1626)  | 2.2027E-15 (1.7419E-15)     | 200 (0)                 | 2.0489E+01 (2.3800E-01)
Fig. 4. Average logarithmic convergence curve of function sphere.
Fig. 5. Average logarithmic convergence curve of function Rosenbrock.

Fig. 6. Average logarithmic convergence curve of function Rastrigin.

Fig. 7. Average logarithmic convergence curve of function Griewank.

Fig. 8. Average logarithmic convergence curve of function Ackley.

4.2. Experiment 2: eighteen constrained benchmarks

Experiments are conducted on the 18 constrained real-parameter problems with different properties provided in [32]. All these benchmarks are scalable, and both 10 and 30 are chosen as the dimension size (D) of each benchmark. For each function, the best, median and worst results, the mean value and the standard deviation over the 25 runs are calculated. The number c of violated constraints at the median solution (including the number of violations by more than 1, 0.01 and 0.0001) and the mean violation v̄ of the median solution are also indicated.

Definition 4 (Feasible rate). A run during which at least one feasible solution is found within the maximum number of function evaluations is called a feasible run; the feasible rate is the ratio of the number of feasible runs to the total number of runs, as calculated in (8):

\text{Feasible Rate} = (\# \text{ of feasible runs}) / (\# \text{ of total runs})    (8)
For a constrained optimization problem denoted in (9), v̄ is derived from (10)-(12):

\text{Min} \; f(X) \quad \text{s.t.} \; g_i(X) \le 0, \; i = 1, \ldots, p; \quad h_j(X) = 0, \; j = p+1, \ldots, m    (9)

G_i(X) = \begin{cases} g_i(X) & \text{if } g_i(X) > 0 \\ 0 & \text{if } g_i(X) \le 0 \end{cases}    (10)

H_j(X) = \begin{cases} |h_j(X)| & \text{if } |h_j(X)| - \epsilon > 0 \\ 0 & \text{if } |h_j(X)| - \epsilon \le 0 \end{cases}, \quad \epsilon = 0.0001    (11)

\bar{v} = \left( \sum_{i=1}^{p} G_i(X) + \sum_{j=p+1}^{m} H_j(X) \right) / m    (12)
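Computationally, (10)-(12) reduce to a few lines. The sketch below is ours, with g_funcs and h_funcs the lists of inequality and equality constraint functions:

```python
def mean_violation(x, g_funcs, h_funcs, eps=1e-4):
    """Mean constraint violation v-bar of Eqs. (10)-(12)."""
    G = [max(gi(x), 0.0) for gi in g_funcs]              # Eq. (10)
    H = [abs(hj(x)) if abs(hj(x)) - eps > 0 else 0.0     # Eq. (11)
         for hj in h_funcs]
    m = len(G) + len(H)                                  # total constraints
    return (sum(G) + sum(H)) / m                         # Eq. (12)
```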
To compare with the recently proposed Co-CLPSO algorithm [29], all parameters are set the same as in that work: swarm size M = D; inertia weight decreasing linearly from 0.9 to 0.4; c1 = c2 = 1.49445; velocity clamped to the range [0.2(xmin - xmax), 0.2(xmax - xmin)]; and maximum function evaluations MaxFEs = 2 x 10^4 x D. A comparison of the function values of the best solutions achieved for the 10-D and 30-D test functions with [29] is presented in Table 8.

From the results, we can observe that among the 18 test functions, RHPSO finds a feasible solution in almost every run for all 10-D problems, except C12, for which the feasible rate is 92%, a little less than that obtained by Co-CLPSO. But for C11, which can hardly be solved by Co-CLPSO, RHPSO achieves a feasible rate of 100%. For C1-C6 and C13-C16, RHPSO yields larger mean values than Co-CLPSO, but their best values are similar. For C7-C10, C12, C17 and C18, RHPSO obtains better mean values. This analysis shows that RHPSO is more robust in finding feasible solutions across different problems, though its optimization accuracy is a little lower than that of Co-CLPSO.
Table 8. Comparison of the optimal function values achieved for the 10-D and 30-D problems. For each function the rows give RHPSO (10D), Co-CLPSO [29] (10D), RHPSO (30D) and Co-CLPSO [29] (30D); the columns give Best, Median, Worst, the violated-constraint counts c, the mean violation v̄, Mean, Std and the feasible rate.

Function | Method (D)       | Best       | Median     | Worst      | c     | v̄       | Mean       | Std        | Feasible rate
C01      | RHPSO (10D)      | 7.4056E-01 | 6.4566E-01 | 4.8208E-01 | 0,0,0 | 0        | 6.3858E-01 | 7.2865E-02 | 100%
         | Co-CLPSO (10D)   | 7.4731E-01 | 7.4056E-01 | 6.8203E-01 | 0,0,0 | 0        | 7.3358E-01 | 1.7848E-02 | 100%
         | RHPSO (30D)      | 4.6178E-01 | 3.8465E-01 | 3.7527E-01 | 0,0,0 | 0        | 4.0833E-01 | 3.8556E-02 | 100%
         | Co-CLPSO (30D)   | 8.0688E-01 | 7.2313E-01 | 6.3010E-01 | 0,0,0 | 0        | 7.1598E-01 | 5.0252E-02 | 100%
C02      | RHPSO (10D)      | 2.2495E+00 | 1.3279E+00 | 2.6403E+00 | 0,0,0 | 0        | 9.4125E-01 | 1.4408E+00 | 100%
         | Co-CLPSO (10D)   | 2.2777E+00 | 2.2770E+00 | 2.2313E+00 | 0,0,0 | 0        | 2.2666E+00 | 1.4616E-02 | 100%
         | RHPSO (30D)      | 2.2289E+00 | 2.2007E+00 | 1.3643E+00 | 0,0,0 | 0        | 1.9811E+00 | 3.7443E-01 | 100%
         | Co-CLPSO (30D)   | 2.2809E+00 | 2.2660E+00 | 1.4506E+00 | 0,0,0 | 0        | 2.2029E+00 | 1.9267E-01 | 100%
C03      | RHPSO (10D)      | 5.2004E-09 | 3.3522E-08 | 8.9086E+00 | 0,0,0 | 0        | 3.5635E-01 | 1.7817E+00 | 100%
         | Co-CLPSO (10D)   | 2.4748E-13 | 2.5020E-09 | 8.8756E+00 | 0,0,0 | 0        | 3.5502E-01 | 1.7751E+00 | 100%
         | RHPSO (30D)      | 3.5473E-07 | 6.6772E+00 | 2.0540E+06 | 0,0,0 | 0        | 4.1155E+05 | 9.1814E+05 | 60%
         | Co-CLPSO (30D)   | 6.3350E-10 | 2.8674E+01 | 5.5117E+01 | 0,0,0 | 1.86E-03 | 3.5106E+01 | 3.3101E+01 | 0%
C04      | RHPSO (10D)      | 2.5508E-08 | 3.7575E-08 | 5.0354E-08 | 0,0,0 | 0        | 3.7609E-08 | 7.5583E-09 | 100%
         | Co-CLPSO (10D)   | 1.0000E-05 | 9.9509E-06 | 6.6633E-06 | 0,0,0 | 0        | 9.3385E-06 | 1.0748E-06 | 100%
         | RHPSO (30D)      | 6.7014E-08 | 7.5307E-08 | 1.6888E-07 | 0,0,0 | 0        | 9.2367E-08 | 4.3232E-08 | 100%
         | Co-CLPSO (30D)   | 2.9300E-06 | 1.3136E-06 | 2.8168E+00 | 0,0,0 | 0        | 1.1269E-01 | 5.6335E-01 | 80%
C05      | RHPSO (10D)      | 4.8361E+02 | 1.2165E+02 | 3.8430E-02 | 0,0,0 | 0        | 1.7858E+02 | 1.2629E+02 | 100%
         | Co-CLPSO (10D)   | 4.8361E+02 | 4.8361E+02 | 4.8352E+02 | 0,0,0 | 0        | 4.8360E+02 | 1.9577E-02 | 100%
         | RHPSO (30D)      | 4.8361E+02 | 2.7998E+02 | 2.5910E+02 | 0,0,0 | 0        | 3.1534E+02 | 9.4687E+01 | 100%
         | Co-CLPSO (30D)   | 4.8360E+02 | 2.7253E+02 | 2.4395E+02 | 0,0,0 | 0        | 3.1249E+02 | 8.8332E+01 | 100%
C06      | RHPSO (10D)      | 5.7831E+02 | 3.5970E+02 | 2.7782E+01 | 0,0,0 | 0        | 3.5034E+02 | 1.5808E+02 | 100%
         | Co-CLPSO (10D)   | 5.7866E+02 | 5.7866E+02 | 5.7866E+02 | 0,0,0 | 0        | 5.7866E+02 | 5.7289E-04 | 100%
         | RHPSO (30D)      | 5.2980E+02 | 5.2979E+02 | 5.2950E+02 | 0,0,0 | 0        | 5.2973E+02 | 1.3302E-01 | 100%
         | Co-CLPSO (30D)   | 2.8601E+02 | 2.5553E+02 | 1.1512E+02 | 0,0,0 | 0        | 2.4470E+02 | 3.9481E+01 | 100%
C07      | RHPSO (10D)      | 2.0666E-09 | 6.3443E-09 | 1.2048E-08 | 0,0,0 | 0        | 6.3749E-09 | 2.4485E-09 | 100%
         | Co-CLPSO (10D)   | 1.0711E-09 | 1.7717E-08 | 3.9866E+00 | 0,0,0 | 0        | 7.9732E-01 | 1.6275E+00 | 100%
         | RHPSO (30D)      | 8.5122E-10 | 8.7656E-10 | 9.3419E-10 | 0,0,0 | 0        | 8.9072E-10 | 3.8480E-11 | 100%
         | Co-CLPSO (30D)   | 3.7861E-11 | 3.0221E-08 | 3.9866E+00 | 0,0,0 | 0        | 1.1163E+00 | 1.8269E+00 | 100%
C08      | RHPSO (10D)      | 1.1931E-10 | 6.0141E-10 | 1.3170E-02 | 0,0,0 | 0        | 5.2682E-04 | 2.6341E-03 | 100%
         | Co-CLPSO (10D)   | 9.6442E-10 | 3.2326E-08 | 3.9866E+00 | 0,0,0 | 0        | 6.0876E-01 | 1.4255E+00 | 100%
         | RHPSO (30D)      | 5.6188E-10 | 6.2269E-10 | 7.3782E-10 | 0,0,0 | 0        | 6.3529E-10 | 6.6482E-11 | 100%
         | Co-CLPSO (30D)   | 4.3114E-14 | 1.7650E-08 | 5.0007E+02 | 0,0,0 | 0        | 4.7517E+01 | 1.1259E+02 | 100%
C09      | RHPSO (10D)      | 9.5542E-10 | 1.6819E-09 | 7.4429E+05 | 0,0,0 | 0        | 2.9826E+04 | 1.4885E+05 | 100%
         | Co-CLPSO (10D)   | 3.7551E-16 | 2.3830E-14 | 4.9844E+11 | 0,0,0 | 0        | 1.9938E+10 | 9.9688E+10 | 100%
         | RHPSO (30D)      | 4.1998E-09 | 4.5341E-09 | 7.2244E+01 | 0,0,0 | 0        | 1.5991E+01 | 3.1623E+01 | 100%
         | Co-CLPSO (30D)   | 1.9695E+02 | 9.9772E+06 | 7.9275E+08 | 0,0,0 | 0        | 1.4822E+08 | 2.4509E+08 | 100%
C10      | RHPSO (10D)      | 8.3612E-10 | 4.1729E+01 | 2.7127E+03 | 0,0,0 | 0        | 1.9099E+02 | 5.4700E+02 | 100%
         | Co-CLPSO (10D)   | 2.3967E-15 | 4.8515E+00 | 1.2436E+12 | 0,0,0 | 0        | 4.9743E+10 | 2.4871E+11 | 100%
         | RHPSO (30D)      | 3.8521E-09 | 6.5471E+00 | 3.1310E+01 | 0,0,0 | 0        | 8.8808E+00 | 1.2958E+01 | 100%
         | Co-CLPSO (30D)   | 3.1967E+01 | 6.0174E+05 | 2.8936E+10 | 0,0,0 | 0        | 1.3951E+09 | 5.8438E+09 | 100%
C11      | RHPSO (10D)      | 7.5311E-04 | 1.6263E-05 | 9.8338E-07 | 0,0,0 | 0        | 9.1494E-05 | 1.7617E-04 | 100%
         | Co-CLPSO (10D)   | 6.4065E-03 | 3.2866E-02 | 3.3071E+00 | 1,1,1 | 2.22E+01 | 1.6125E-01 | 6.6025E-01 | 0%
         | RHPSO (30D)      | 6.8380E-03 | 2.5639E-04 | 4.5086E-02 | 0,0,0 | 0        | 4.1359E-03 | 1.4290E-02 | 44%
         | Co-CLPSO (30D)   | 4.1223E-04 | 4.4053E-02 | 5.3724E-02 | 1,1,1 | 7.43E+01 | 2.8186E-02 | 3.2124E-02 | 0%
C12      | RHPSO (10D)      | 3.1385E+02 | 1.8937E-01 | 1.8936E-01 | 0,0,0 | 0        | 1.7867E+01 | 6.6779E+01 | 92%
         | Co-CLPSO (10D)   | 1.2639E+01 | 1.9916E-01 | 1.0233E+02 | 0,0,0 | 0        | 2.3369E+00 | 2.4329E+01 | 100%
         | RHPSO (30D)      | 1.8937E-01 | 1.8936E-01 | 1.8936E-01 | 0,0,0 | 0        | 1.8936E-01 | 3.3110E-06 | 100%
         | Co-CLPSO (30D)   | 1.9926E-01 | 1.9914E-01 | 1.9890E-01 | 0,0,0 | 0        | 1.9911E-01 | 1.1840E-04 | 92%
C13      | RHPSO (10D)      | 6.2276E+01 | 5.7221E+01 | 4.8155E+01 | 0,0,0 | 0        | 5.7022E+01 | 2.6424E+00 | 100%
         | Co-CLPSO (10D)   | 6.8429E+01 | 6.5578E+01 | 6.0608E+01 | 0,0,0 | 0        | 6.5278E+01 | 2.5763E+00 | 100%
         | RHPSO (30D)      | 6.0948E+01 | 5.8634E+01 | 5.6931E+01 | 0,0,0 | 0        | 5.8634E+01 | 1.7292E+00 | 100%
         | Co-CLPSO (30D)   | 6.2752E+01 | 6.0942E+01 | 5.8698E+01 | 0,0,0 | 0        | 6.0774E+01 | 1.1176E+00 | 100%
C14      | RHPSO (10D)      | 7.0923E-09 | 3.9470E-08 | 1.0846E+01 | 0,0,0 | 0        | 4.4000E-01 | 2.1680E+00 | 100%
         | Co-CLPSO (10D)   | 5.7800E-12 | 1.5480E-08 | 3.9866E+00 | 0,0,0 | 0        | 3.1893E-01 | 1.1038E+00 | 100%
         | RHPSO (30D)      | 1.6939E-08 | 3.9866E+00 | 3.9930E+02 | 0,0,0 | 0        | 8.9022E+01 | 1.7434E+02 | 100%
         | Co-CLPSO (30D)   | 8.1301E-14 | 7.1305E-08 | 3.9866E+00 | 0,0,0 | 0        | 1.2757E+00 | 1.8980E+00 | 100%
C15      | RHPSO (10D)      | 5.7492E-08 | 1.6245E+01 | 1.2572E+04 | 0,0,0 | 0        | 5.2859E+02 | 2.5093E+03 | 100%
         | Co-CLPSO (10D)   | 3.0469E-12 | 3.6732E+00 | 1.6245E+01 | 0,0,0 | 0        | 2.9885E+00 | 3.3147E+00 | 100%
         | RHPSO (30D)      | 1.3836E-08 | 1.2059E+01 | 2.0256E+01 | 0,0,0 | 0        | 8.8750E+00 | 8.7656E+00 | 100%
         | Co-CLPSO (30D)   | 5.7499E-12 | 2.1603E+01 | 3.8781E+02 | 0,0,0 | 0        | 5.1059E+01 | 9.1759E+01 | 100%
C16      | RHPSO (10D)      | 2.4056E-03 | 3.8805E-02 | 3.2873E-01 | 0,0,0 | 0        | 5.4464E-02 | 6.2820E-02 | 100%
         | Co-CLPSO (10D)   | 0.0000E+00 | 0.0000E+00 | 4.5415E-02 | 0,0,0 | 0        | 5.9861E-03 | 1.3315E-02 | 100%
         | RHPSO (30D)      | 1.0815E+00 | 1.1272E+00 | 1.1824E+00 | 0,0,0 | 0        | 1.1294E+00 | 4.2031E-02 | 100%
         | Co-CLPSO (30D)   | 0.0000E+00 | 4.4409E-16 | 1.9984E-15 | 0,0,0 | 0        | 5.2403E-16 | 4.6722E-16 | 100%
C17      | RHPSO (10D)      | 1.2384E-03 | 2.8482E-02 | 1.3769E+00 | 0,0,0 | 0        | 1.6823E-01 | 3.8692E-01 | 100%
         | Co-CLPSO (10D)   | 7.6677E-17 | 1.0544E-01 | 1.0884     | 0,0,0 | 0        | 3.7986E-01 | 4.5284E-01 | 100%
         | RHPSO (30D)      | 1.8672E-01 | 9.3820E-01 | 1.2236E+00 | 0,0,0 | 0        | 7.7006E-01 | 4.6492E-01 | 100%
         | Co-CLPSO (30D)   | 1.5787E-01 | 3.8574E-01 | 2.1665E+01 | 0,0,0 | 0        | 1.3919E+00 | 4.2621E+00 | 100%
C18      | RHPSO (10D)      | 3.3758E-15 | 1.9697E-14 | 1.0649E-13 | 0,0,0 | 0        | 2.5692E-14 | 2.4404E-14 | 100%
         | Co-CLPSO (10D)   | 7.7804E-21 | 1.3826E-11 | 4.9863E+00 | 0,0,0 | 0        | 2.3192E-01 | 9.9559E-01 | 100%
         | RHPSO (30D)      | 9.6445E-14 | 2.1798E-13 | 5.5932E-13 | 0,0,0 | 0        | 3.2280E-13 | 2.1406E-13 | 100%
         | Co-CLPSO (30D)   | 6.0047E-02 | 2.4345E+00 | 1.8749E+02 | 0,0,0 | 0        | 1.0877E+01 | 3.7161E+01 | 100%
When the dimension increases to 30-D, the number of local optima increases greatly and the performance of Co-CLPSO deteriorates. For the 30-D problems, Co-CLPSO does not achieve a feasible rate of 100% for C3, C4, C11 and C12, and for C3 and C11 it can hardly find a feasible solution. In contrast, RHPSO finds a feasible solution in every run for C4 and C12, and for C3 and C11 it achieves feasible rates of 60% and 44%, respectively, much higher than those achieved by Co-CLPSO. Though the optimal solution obtained in a run is not necessarily feasible, RHPSO is more likely to yield a feasible solution than Co-CLPSO. Moreover, in terms of mean values, the performance of RHPSO does not degrade much on the 30-D problems: for almost all functions whose feasible rates are 100%, except C1, C2, C13, C14 and C16, RHPSO yields smaller mean values than Co-CLPSO. Thus, for the 30-D problems, compared with Co-CLPSO, RHPSO not only has a larger chance of finding feasible solutions but also obtains more accurate optimal results. Since all parameters are set the same as those used in Co-CLPSO, without adjustment, these experimental results indicate the robustness of RHPSO for solving constrained problems of different dimensions.

4.3. Experiment 3: three engineering optimization problems

In order to evaluate the performance of the proposed hybrid approach on engineering optimization problems, a single-objective test problem, the tension spring problem and the pressure vessel design problem, all commonly used in the engineering optimization literature, are solved. For each problem, RHPSO is run 30 times independently, and the best result, worst result, mean value, standard deviation (Std) and function evaluations (FEs) are calculated. The parameters are c1 = c2 = 2.0, with w decreasing linearly from 0.9 to 0.4; the other parameters used by RHPSO are set the same as in [52].

4.3.1. Single-objective test problem

The first test problem is to minimize a single-objective function with 13 variables and nine inequality constraints [10,17,18,26,34,52,58], defined below:
\text{Min} \; f(x) = 5 \sum_{i=1}^{4} x_i - 5 \sum_{i=1}^{4} x_i^2 - \sum_{i=5}^{13} x_i

\text{s.t.} \; g_1(x) = 2x_1 + 2x_2 + x_{10} + x_{11} - 10 \le 0
g_2(x) = 2x_1 + 2x_3 + x_{10} + x_{12} - 10 \le 0
g_3(x) = 2x_2 + 2x_3 + x_{11} + x_{12} - 10 \le 0
g_4(x) = -8x_1 + x_{10} \le 0
g_5(x) = -8x_2 + x_{11} \le 0
g_6(x) = -8x_3 + x_{12} \le 0
g_7(x) = -2x_4 - x_5 + x_{10} \le 0
g_8(x) = -2x_6 - x_7 + x_{11} \le 0
g_9(x) = -2x_8 - x_9 + x_{12} \le 0    (13)

where 0 \le x_i \le 1 (i = 1, ..., 9, 13) and 0 \le x_i \le 100 (i = 10, 11, 12). The global optimum is at x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1), where f(x*) = -15.
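Problem (13) transcribes directly to code. In the following sketch (ours), a solution is feasible when every constraint value is at most zero:

```python
import numpy as np

def g01_objective(x):
    return 5*np.sum(x[:4]) - 5*np.sum(x[:4]**2) - np.sum(x[4:13])

def g01_constraints(x):
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, _ = x
    return np.array([
        2*x1 + 2*x2 + x10 + x11 - 10,
        2*x1 + 2*x3 + x10 + x12 - 10,
        2*x2 + 2*x3 + x11 + x12 - 10,
        -8*x1 + x10, -8*x2 + x11, -8*x3 + x12,
        -2*x4 - x5 + x10, -2*x6 - x7 + x11, -2*x8 - x9 + x12,
    ])

# The known optimum satisfies f(x*) = -15 with all constraints active or slack.
x_star = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1], dtype=float)
assert g01_objective(x_star) == -15 and np.all(g01_constraints(x_star) <= 0)
```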
The parameters are set as: swarm size M = 100, max function evaluations Max_FEs = 100,000. The results of the proposed hybrid approach (RHPSO) for this test problem are compared against those provided by Yildiz [52] and others in Table 9. RHPSO requires only 100,000 function evaluations to find the best-known solution of -15. RHPSO obtains better solutions for this problem than those given in Table 9 in terms of the number of function evaluations, the best solution computed, and the mean value of all solutions. However, the standard deviation and the worst value are larger than those obtained by Yildiz. By examining all solutions, it is found that in 2 of the 30 runs the optimal result found is -13.8281, much larger than the best-known solution; in the other runs, the optimal result is always -15. The ratio of accurate results, 28/30, is thus very high. In future work, this occasional convergence to a local optimum will be addressed.

4.3.2. Tension spring problem

This problem was described by Belegundu [4] and consists of minimizing the weight of a tension spring subject to constraints on minimum deflection, shear stress, surge frequency and outer diameter. The design variables are the wire diameter (x1), the mean coil diameter (x2) and the number of active coils (x3), as shown in Fig. 9. The mathematical model of the problem is described below:
\text{Min} \; f(x) = (x_3 + 2) x_2 x_1^2

\text{s.t.} \; g_1(x) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0
g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0
g_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0
g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0    (14)
where 0.05 \le x_1 \le 2, 0.25 \le x_2 \le 1.3 and 2 \le x_3 \le 15. The parameters are set as: swarm size M = 60, max function evaluations Max_FEs = 30,000. RHPSO provides better solutions for this problem than those given in Tables 10 and 11 in terms of the number of function evaluations, the best solution computed, and the statistical analysis results. The best value computed is 0.012665233, with a very low standard deviation of 1.5386 x 10^-9. RHPSO also improves the convergence rate, requiring as few as 30,000 function evaluations.
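Model (14) in the same style (our sketch); evaluating it at the best solution of Table 10 reproduces the reported objective and constraint values up to rounding:

```python
import numpy as np

def spring_weight(x):
    d, D, p = x                      # wire diameter, coil diameter, active coils
    return (p + 2) * D * d**2

def spring_constraints(x):           # feasible iff all entries <= 0
    d, D, p = x
    return np.array([
        1 - D**3 * p / (71785 * d**4),
        (4*D**2 - d*D) / (12566 * (D * d**3 - d**4)) + 1 / (5108 * d**2) - 1,
        1 - 140.45 * d / (D**2 * p),
        (d + D) / 1.5 - 1,
    ])

x_best = np.array([0.051689061, 0.356717727, 11.28896667])  # RHPSO, Table 10
print(spring_weight(x_best))          # ~0.012665233
print(spring_constraints(x_best))     # all <= 0, e.g. g4 ~ -0.72773
```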
Table 9. Statistical results of different methods for the single-objective test problem.

Methods                      | Best       | Mean       | Worst      | Std      | FEs
RHPSO                        | -15.000000 | -14.921875 | -13.828125 | 0.297314 | 100,000
Yildiz [52]                  | -15        | -14.876    | -14.6819   | 0.113    | 100,000
Coello and Cortes [10]       | -14.7841   | -14.5266   | -13.8417   | 0.2335   | 150,000
Yoo and Hajela [58]          | -5.2735    | -3.7435    | -2.4255    | 0.9696   | 150,000
Hamida and Schoenauer [18]   | -15        | -14.84     | N/A        | N/A      | 1,500,000
Koziel and Michalewicz [26]  | -14.7864   | -14.7082   | -14.6154   | N/A      | 1,000,000
Hadj-Alouane and Bean [17]   | -5.16559   | -3.64004   | -2.72518   | 0.60624  | N/A
Michalewicz and Attia [34]   | -7.34334   | -5.07136   | -3.59536   | 0.77247  | N/A
Fig. 9. Design variables for tension spring problem.
Table 10. Comparison of the best solution for the spring design problem by different methods.

Variables | RHPSO        | Yildiz [52]  | Coello and Montes [11] | Coello [9] | Arora [1]  | Belegundu [4]
x1(d)     | 0.051689061  | 0.051690402  | 0.051989   | 0.051480   | 0.053396   | 0.050000
x2(D)     | 0.356717727  | 0.3567500    | 0.363965   | 0.351661   | 0.399180   | 0.315900
x3(p)     | 11.28896667  | 11.2871200   | 10.890522  | 11.632201  | 9.185400   | 14.250000
g1(x)     | -8.42E-09    | -5.8368E-06  | -0.000013  | -0.002080  | -0.000019  | -0.000014
g2(x)     | -4.28E-09    | -1.0045E-06  | -0.000021  | -0.000110  | -0.000018  | -0.003782
g3(x)     | -4.053785539 | -4.053794432 | -4.061338  | -4.026318  | -4.123832  | -3.938302
g4(x)     | -0.727728808 | -0.727713412 | -0.722698  | -4.026318  | -0.698283  | -0.756067
f(x)      | 0.012665233  | 0.01266527   | 0.0126810  | 0.0127048  | 0.0127303  | 0.0128334
Table 11. Statistical results of different methods for the spring design problem.

Methods                 | Best       | Mean       | Worst      | Std        | FEs
RHPSO                   | 0.01266523 | 0.01266523 | 0.01266524 | 1.5386E-09 | 30,000
Yildiz [52]             | 0.01266527 | 0.012673   | 0.012708   | 6.24E-06   | 30,000
Coello and Montes [11]  | 0.0126810  | 0.0127420  | 0.012973   | 5.90E-05   | 80,000
Coello [9]              | 0.0127048  | 0.012769   | 0.012822   | 3.94E-05   | 900,000
Arora [1]               | 0.0127303  | N/A        | N/A        | N/A        | N/A
Belegundu [4]           | 0.0128334  | N/A        | N/A        | N/A        | N/A
4.3.3. Pressure vessel design optimization problem

This problem is taken from Kannan and Kramer [24]. A cylindrical vessel is capped at both ends by hemispherical heads, as shown in Fig. 10. The objective is to minimize the total cost, including the cost of material, forming and welding. The four design variables are Ts (thickness of the shell), Th (thickness of the head), R (inner radius) and L (length of the cylindrical section of the vessel, not including the head). Ts and Th are integer multiples of 0.0625 inch, the available thicknesses of rolled steel plates, while R and L are continuous variables. The mathematical model of pressure vessel design is described below, with the parameters Ts, Th, R and L denoted by x1, x2, x3 and x4, respectively:

\text{Min} \; f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3

\text{s.t.} \; g_1(x) = -x_1 + 0.0193 x_3 \le 0
g_2(x) = -x_2 + 0.00954 x_3 \le 0
g_3(x) = -\pi x_3^2 x_4 - \tfrac{4}{3} \pi x_3^3 + 1{,}296{,}000 \le 0
g_4(x) = x_4 - 240 \le 0    (15)

where 0.0625 \le x_i \le 6.1875 (i = 1, 2) and 10 \le x_i \le 200 (i = 3, 4). The values of x1 and x2 are treated as integer multiples of 0.0625 (i.e., real values are rounded to their closest integer multiple), while x3 and x4 are treated as real numbers.
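Model (15) transcribes as follows (our sketch). The thickness-snapping helper is a simplification of the paper's two-phase procedure, described after Table 13, which examines the four floor/ceiling combinations of x1 and x2; only rounding up is shown here:

```python
import numpy as np

def vessel_cost(x):
    Ts, Th, R, L = x
    return (0.6224*Ts*R*L + 1.7781*Th*R**2
            + 3.1661*Ts**2*L + 19.84*Ts**2*R)

def vessel_constraints(x):            # feasible iff all entries <= 0
    Ts, Th, R, L = x
    return np.array([
        -Ts + 0.0193*R,
        -Th + 0.00954*R,
        -np.pi*R**2*L - (4.0/3.0)*np.pi*R**3 + 1296000.0,
        L - 240.0,
    ])

def snap_thickness(t):                # nearest multiple of 0.0625, rounded up
    return np.ceil(t / 0.0625) * 0.0625

x_best = np.array([0.8125, 0.4375, 42.09844560, 176.6365958])  # RHPSO, Table 12
print(vessel_cost(x_best))            # ~6059.71
```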
Fig. 10. Design variables for pressure vessel [24].
Table 12. Comparison of the best solution for the pressure vessel design problem by different methods.

Variables | RHPSO        | Yildiz [52]  | Coello and Montes [11] | Coello [9] | Deb [12]     | Kannan and Kramer [24] | Sandgren [38]
x1(Ts)    | 0.8125       | 0.8125       | 0.812500    | 0.812500  | 0.937500     | 1.125000   | 1.125000
x2(Th)    | 0.4375       | 0.4375       | 0.437500    | 0.437500  | 0.500000     | 0.625000   | 0.625000
x3(R)     | 42.09844560  | 42.09844559  | 42.097398   | 40.323900 | 48.329000    | 58.291000  | 47.700000
x4(L)     | 176.6365958  | 176.6366     | 176.654050  | 200.00000 | 112.679000   | 43.690000  | 117.701000
g1(x)     | -3.96E-14    | -0.00000019  | -0.000020   | -0.034324 | -0.004750    | -0.000016  | -0.204390
g2(x)     | -0.035880829 | -0.03588092  | -0.035891   | -0.052847 | -0.038941    | -0.006890  | -0.169942
g3(x)     | -5.94E-08    | -0.03590969  | -27.886075  | -27.10584 | -3652.876800 | -21.2201   | -54.22601
g4(x)     | -63.36340416 | -63.3632788  | -63.345953  | -40.00000 | -127.321000  | -196.3100  | -122.299
f(x)      | 6059.714335  | 6059.7144    | 6059.9463   | 6288.7445 | 6410.3811    | 7198.0428  | 8129.1036
Table 13. Statistical results of different methods for the pressure vessel design problem.

Methods                 | Best      | Mean      | Worst     | Std      | FEs
RHPSO                   | 6059.7143 | 6059.7145 | 6059.7183 | 0.0007   | 30,000
Yildiz [52]             | 6059.7144 | 6097.4460 | 6156.5700 | 35.7810  | 30,000
Coello and Cortes [10]  | 6061.1229 | 6734.0848 | 738.0602  | 457.9959 | 150,000
Coello and Montes [11]  | 6059.9463 | 6177.2533 | 6469.3220 | 130.9297 | 80,000
Coello [9]              | 6288.7445 | 6293.8432 | 6308.1497 | 7.4133   | 900,000
Deb [12]                | 6410.3811 | N/A       | N/A       | N/A      | N/A
Kannan and Kramer [24]  | 7198.0428 | N/A       | N/A       | N/A      | N/A
Sandgren [38]           | 8129.1036 | N/A       | N/A       | N/A      | N/A
The parameters are set as: swarm size M = 60, max function evaluations Max_Fes = 30,000. The total of 30,000 function evaluations is divided into two parts: the first 10,000 function evaluations are used by RHPSO to search the entire region and obtain a real-valued optimal solution; the following 20,000 function evaluations are then run with x1 and x2 rounded to their closest integer multiples of 0.0625 (the four combinations of floor and ceiling of x1 and x2).

RHPSO provides better solutions for this problem, as shown in Tables 12 and 13, than the other seven methods. The worst solution found by RHPSO is better than the best solutions found by Coello [9-11], Deb [12], Kannan and Kramer [24] and Sandgren [38]. Compared with the solutions found by Yildiz [52], RHPSO improves the convergence rate, computing the best value 6059.7143 with a very low standard deviation of 0.0007 and carrying out as few as 30,000 function evaluations. Therefore, RHPSO is efficient (indicated by the small number of function evaluations) and among the most robust approaches for finding an optimal solution (indicated by the low standard deviation).

In all the above experiments, all parameters were set the same as in the literature, without special adjustment for RHPSO. The robustness of RHPSO for solving both unconstrained and constrained single-objective problems is thus illustrated by the comparisons with other state-of-the-art algorithms.

5. Conclusions

The aim of this research is to develop a new robust optimization approach, insensitive to problem dimension and parameter choices, for solving both unconstrained and constrained single-objective problems. This paper has proposed a novel RHPSO algorithm that combines PSO with the PWLCM and SQP. The RHPSO algorithm uses the PWLCM to help PSO increase its species diversity and employs SQP to accelerate local exploitation. Thus, the particles are able to search the entire space while locating local optima quickly, which increases the chance of finding the global optimum in problems with many local optima while ensuring the convergence of the algorithm. The new RHPSO has been successfully applied to the optimization of both unconstrained and constrained problems. Compared with other state-of-the-art approaches, the statistical results on both benchmark and engineering problems indicate that RHPSO is a robust algorithm able to find better solutions for different problems without adjusting its parameters. Our future work is to extend RHPSO to multi-objective optimization.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant No. 61074153) and the Fundamental Research Funds for the Central Universities (Grant No. zz1136).

References

[1] J.S. Arora, Introduction to Optimum Design, McGraw-Hill, New York, 1989.
[2] P. Asokan, N. Baskar, K. Babu, G. Prabhaharan, R. Saravanan, Optimization of surface grinding operations using particle swarm optimization technique, Journal of Manufacturing Science and Engineering 127 (2005) 885–892.
[3] A. Baranovsky, D. Daems, Design of one-dimensional chaotic maps with prescribed statistical properties, International Journal of Bifurcation and Chaos 5 (6) (1995) 1585–1598.
[4] A.D. Belegundu, A study of mathematical programming methods for structural optimization, Department of Civil and Environmental Engineering, University of Iowa, Iowa City, Iowa, 1992.
[5] J.F. Chang, P. Shi, Using investment satisfaction capability index based particle swarm optimization to construct a stock portfolio, Information Sciences 181 (2011) 2989–2999.
[6] Y.P. Chang, Integration of SQP and PSO for optimal planning of harmonic filters, Expert Systems with Applications 37 (2010) 2522–2530.
[7] X. Chen, Y.M. Li, On convergence and parameter selection of an improved particle swarm optimization, International Journal of Control, Automation, and Systems 6 (2008) 559–570.
[8] W. Chu, X.G. Gao, S. Sorooshian, Handling boundary constraints for particle swarm optimization in high-dimensional search space, Information Sciences 181 (2011) 4569–4581.
[9] C.A.C. Coello, Use of a self-adaptive penalty approach for engineering optimization problems, Computers & Industrial Engineering 41 (2) (2004) 113–127.
[10] C.A.C. Coello, N.C. Cortes, Hybridizing a genetic algorithm with an artificial immune system for global optimization, Engineering Optimization 36 (5) (2004) 607–634.
[11] C.A.C. Coello, E.M. Montes, Constraint-handling in genetic algorithms through the use of dominance-based tournament selection, Advanced Engineering Informatics 16 (3) (2002) 193–203.
[12] K. Deb, GeneAS: a robust optimal design technique for mechanical component design, in: Proceedings of Evolutionary Algorithms in Engineering Applications, Springer, Berlin, Heidelberg, New York, 1997.
[13] A.P. Engelbrecht, Fundamentals of Computational Swarm Intelligence, John Wiley and Sons, USA, January 2006.
[14] S.S. Fan, E. Zahara, A hybrid simplex search and particle swarm optimization for unconstrained optimization, European Journal of Operational Research 181 (2007) 527–548.
[15] R. Fletcher, Practical Methods of Optimization, Wiley, New York, 1987.
[16] C.H. Guo, H.W. Tang, Global convergence properties of evolution strategies, Mathematica Numerica Sinica 23 (2001) 105–110.
[17] A.B. Hadj-Alouane, J.C. Bean, A genetic algorithm for the multiple-choice integer program, Operations Research 45 (1) (1997) 92–101.
[18] S.B. Hamida, M. Schoenauer, ASCHEA: new results using adaptive segregational constraint handling, in: Proc. Congr. Evolutionary Computation, 2002, pp. 884–889.
[19] S.T. Hsieh, T.Y. Sun, C.C. Liu, S.J. Tsai, Solving large scale global optimization using improved particle swarm optimizer, in: Proceedings of 2008 IEEE Congress on Evolutionary Computation, Hong Kong, China, IEEE Press, 2008, pp. 1777–1784.
[20] H. Huang, H. Qin, Z.F. Hao, A. Lim, Example-based learning particle swarm optimization for continuous optimization, Information Sciences 182 (2012) 125–138.
[21] J.C. Hung, Adaptive Fuzzy-GARCH model applied to forecasting the volatility of stock markets using particle swarm optimization, Information Sciences 181 (2011) 4673–4683.
[22] M. Jiang, Y.P. Luo, S.Y. Yang, Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm, Information Processing Letters 102 (2007) 8–16.
[23] Y.T. Juang, S.L. Tung, H.C. Chiu, Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions, Information Sciences 181 (20) (2011) 4539–4549.
[24] B.K. Kannan, S.N. Kramer, An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design, ASME Journal of Mechanical Design 116 (1994) 318–320.
[25] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942–1948.
[26] S. Koziel, Z. Michalewicz, Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization, Evolutionary Computation 7 (1) (1999) 19–44.
[27] C.H. Lee, Y.C. Lee, Nonlinear systems design by a novel fuzzy neural system via hybridization of electromagnetism-like mechanism and particle swarm optimisation algorithms, Information Sciences 186 (1) (2012) 59–72.
[28] J.J. Liang, P.N. Suganthan, A.K. Qin, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Transactions on Evolutionary Computation 10 (3) (2006) 281–295.
[29] J.J. Liang, P.N. Suganthan, Coevolutionary comprehensive learning particle swarm optimizer, in: Proceedings of IEEE Congress on Evolutionary Computation, 2010, pp. 1–8.
[30] L. Liu, W.M. Zhong, F. Qian, An improved chaos-particle swarm optimization algorithm, Journal of East China University of Science and Technology (Natural Science Edition) 36 (2010) 267–272.
[31] Q. Luo, D.Y. Yi, A co-evolving framework for robust particle swarm optimization, Applied Mathematics and Computation 199 (2008) 611–622.
[32] R. Mallipeddi, P.N. Suganthan, Problem definitions and evaluation criteria for the CEC 2010 competition on constrained real-parameter optimization, Technical Report, Nanyang Technological University, Singapore, 2010.
[33] E. Mendel, R.A. Krohling, M. Campos, Swarm algorithms with chaotic jumps applied to noisy optimization problems, Information Sciences 181 (20) (2011) 4494–4514.
[34] Z. Michalewicz, N. Attia, Evolutionary optimization of constrained problems, in: Proc. 3rd Annual Conference on Evolutionary Programming, 1994, pp. 98–108.
[35] S. Mukhopadhyay, S. Banerjee, Global optimization of an optical chaotic system by chaotic multi swarm particle swarm optimization, Expert Systems with Applications 39 (1) (2012) 917–924.
[36] T. Navalertporn, N.V. Afzulpurkar, Optimization of tile manufacturing process using particle swarm optimization, Swarm and Evolutionary Computation 1 (2) (2011) 97–109.
[37] K.E. Parsopoulos, M.N. Vrahatis, UPSO: a unified particle swarm scheme, in: Lecture Series on Computer and Computational Sciences, vol. 1, 2004, pp. 868–873.
[38] E. Sandgren, Nonlinear integer and discrete programming in mechanical design, Journal of Mechanical Design 112 (2) (1990) 223–229.
[39] K. Schittkowski, NLQPL: a FORTRAN subroutine solving constrained nonlinear programming problems, Annals of Operations Research 5 (1985) 485–500.
[40] Y. Shi, R.C. Eberhart, Parameter selection in particle swarm optimization, in: Proceedings of the 7th Conference on Evolutionary Programming, New York, 1998, pp. 591–600.
[41] Y. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proceedings of the IEEE Congress on Evolutionary Computation (CEC), 1998, pp. 69–73.
[42] Y. Shi, H.C. Liu, L. Gao, G.H. Zhang, Cellular particle swarm optimization, Information Sciences 181 (15) (2011) 4460–4493.
[43] P.K. Tripathi, S. Bandyopadhyay, S.K. Pal, Multi-objective particle swarm optimization with time variant inertia and acceleration coefficients, Information Sciences 177 (2007) 5033–5049.
[44] T. Takahama, S. Sakai, Constrained optimization by applying the α constrained method to the nonlinear simplex method with mutations, IEEE Transactions on Evolutionary Computation 9 (5) (2005) 437–451.
[45] T. Takahama, S. Sakai, Fast and stable constrained optimization by the ε constrained differential evolution, Pacific Journal of Optimization 5 (2) (2009) 261–282.
[46] T. Takahama, S. Sakai, Constrained optimization by the ε constrained differential evolution with an archive and gradient-based mutation, in: Proceedings of the IEEE Congress on Evolutionary Computation, 2010, pp. 1–9.
[47] G. Ueno, K. Yasuda, N. Iwasaki, Robust adaptive particle swarm optimization, in: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 4, 2005, pp. 3915–3920.
[48] F. van den Bergh, A.P. Engelbrecht, A study of particle swarm optimization particle trajectories, Information Sciences 176 (2006) 937–971.
[49] Y. Wang, B. Li, T. Weise, J.Y. Wang, B. Yuan, Q.J. Tian, Self-adaptive learning based particle swarm optimization, Information Sciences 181 (20) (2011) 4515–4538.
[50] Q. Wu, R. Law, Complex system fault diagnosis based on a fuzzy robust wavelet support vector classifier and an adaptive Gaussian particle swarm optimization, Information Sciences 180 (23) (2010) 4514–4528.
[51] T. Xiang, X.F. Liao, K.W. Wong, An improved particle swarm optimization algorithm combined with piecewise linear chaotic map, Applied Mathematics and Computation 190 (2007) 1637–1645.
[52] A.R. Yildiz, A novel particle swarm optimization approach for product design and manufacturing, International Journal of Advanced Manufacturing Technology 40 (5–6) (2009) 617–628.
[53] A.R. Yildiz, A new design optimization framework based on immune algorithm and Taguchi method, Computers in Industry 60 (8) (2009) 613–620.
[54] A.R. Yildiz, A novel hybrid immune algorithm for global optimization in design and manufacturing, Robotics and Computer-Integrated Manufacturing 25 (2) (2009) 261–270.
[55] A.R. Yildiz, An effective hybrid immune-hill climbing optimization approach for solving design and manufacturing optimization problems in industry, Journal of Materials Processing Technology 50 (4) (2009) 224–228.
[56] A.R. Yildiz, Hybrid immune-simulated annealing algorithm for optimal design and manufacturing, International Journal of Materials and Product Technology 34 (3) (2009) 217–226.
[57] A.R. Yildiz, K. Saitou, Topology synthesis of multicomponent structural assemblies in continuum domains, ASME Journal of Mechanical Design 133 (1) (2011) 011008-1–011008-9.
[58] J. Yoo, P. Hajela, Immune network simulations in multi-criterion design, Structural and Multidisciplinary Optimization 1 (8) (1999) 85–94.
[59] Y. Zhang, D.W. Gong, Z.H. Ding, A bare-bones multi-objective particle swarm optimization algorithm for environmental/economic dispatch, Information Sciences (2011), http://dx.doi.org/10.1016/j.ins.2011.06.004.
[60] S.Z. Zhao, J.J. Liang, P.N. Suganthan, M.F. Tasgetiren, Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization, in: Proceedings of 2008 IEEE Congress on Evolutionary Computation, Hong Kong, China, IEEE Press, 2008, pp. 3845–3852.