Free Pattern Search for global optimization




Applied Soft Computing 13 (2013) 3853–3863


Long Wen, Liang Gao*, Xinyu Li, Liping Zhang
State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China

Article info

Article history: Received 15 November 2011; Received in revised form 19 March 2013; Accepted 3 May 2013; Available online 28 May 2013

Keywords: Free Pattern Search; Pattern Search; Free Search; Global optimization

Abstract

An efficient algorithm named Pattern Search (PS) has been widely used in various scientific and engineering fields. However, even though the global convergence of PS has been proved, it does not perform well on the more complex and higher-dimensional problems encountered today. In order to improve the efficiency of PS and obtain a more powerful algorithm for global optimization, this paper proposes a new algorithm named Free Pattern Search (FPS), based on PS and Free Search (FS). FPS inherits its global search from FS and its local search from PS. Two operators are designed for accelerating convergence and keeping the diversity of the population. The acceleration operator, inspired by FS, uses self-regulated management to classify the population into two groups and accelerates all individuals in the first group, while the throw operator avoids reduplicative search by the population and keeps its diversity. To verify the performance of FPS, two famous benchmark suites are used for comparisons between FPS and Particle Swarm Optimization (PSO) variants as well as Differential Evolution (DE) variants. The results show that FPS obtains better solutions and achieves a higher convergence speed than the other algorithms. © 2013 Elsevier B.V. All rights reserved.

1. Introduction

Nowadays, problems in scientific research, engineering, finance, management and other fields become more and more complicated, which creates a demand for more powerful optimizers that can solve them within limited time and memory. Over the last decades, nature-inspired algorithms have taken on this responsibility and dedicated themselves to complex problems. Many intelligent algorithms have been proposed, such as the genetic algorithm [1,2] (GA), particle swarm optimization [3] (PSO), differential evolution [4] (DE), harmony search [5] (HS) and the artificial bee colony algorithm [6] (ABC). They show great potential for global optimization and have been applied to various engineering problems [7,8] for their simplicity and effectiveness [9]. However, almost all of these algorithms suffer from premature convergence. In order to obtain higher-quality solutions and better performance, hybrid algorithms are used to handle this, such as CPSO [10] (hybrid of PSO and cellular automata), DSSA [11] (hybrid of SA and direct search), DE/BBO [12] (hybrid of DE and BBO) and DTS [13] (hybrid of TS and direct search). All of these hybrid algorithms have achieved good performance. However, real problems become more and more complex, the benchmarks used to test the performance of algorithms also become more and more complex, and at the same time the dimensions of the benchmarks

∗ Corresponding author. Tel.: +86 27 87559419; fax: +86 27 87543074. E-mail address: [email protected] (L. Gao). 1568-4946/$ – see front matter © 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.asoc.2013.05.004

increase as well, which makes them more difficult to solve. Even though many good algorithms have been designed, this road never ends. This research focuses on this topic, and a new algorithm called Free Pattern Search (FPS) is designed for optimization. FPS is inspired by Pattern Search (PS) and Free Search (FS), and it extends PS into a population-based form.

PS was first proposed by Hooke and Jeeves [14] in 1961. Torczon [15] provided a detailed formal definition of PS. As a kind of direct search, PS does not require gradient information at all, so it shares some similarities with modern evolutionary algorithms. The search step size is crucial for the global convergence of PS, and it should be reduced only when no increase or decrease in any one parameter further improves the fit [16]. Even though PS has been proved to be a globally convergent algorithm [15], traditional PS does not perform well on complex and high-dimensional problems. Hvattum and Glover [17] proposed a new direct search method called Scatter Search (SS) and showed that HJPS (Hooke and Jeeves Pattern Search) was worse than CS (Compass Search) and SS even though their convergence properties are similar. Many researchers have dedicated themselves to modifying PS for better performance [18–20]. In fact, empirical convergence is different from theoretical convergence: the former is the real performance, which has more impact on the quality of the results.

FS [21,22] is the other source of FPS. Because of its self-regulated management, FS, which is a population-based algorithm, has good global search ability [22]. The acceleration part of FPS uses an FS-inspired, self-regulated mechanism to classify the


Nomenclature

PS      Pattern Search
FS      Free Search
PSO     particle swarm optimization
FPS     Free Pattern Search
HJPS    Hooke and Jeeves Pattern Search
DE      differential evolution
DSSA    direct search simulated annealing
DE/BBO  differential evolution with biogeography-based optimization
DTS     directed tabu search
CPSO    cellular particle swarm optimization
ILPSO   integrated learning particle swarm optimizer
LeDE    learning-enhanced differential evolution
CDE     clustering-based differential evolution
M       the pattern matrix
ψ       the current point in Pattern Search
τ       the previous point in Pattern Search
ϕ       the base point in Pattern Search
Δ       the search step size of the pattern
α       the search step size factor of the pattern
γ       the reduce factor of the pattern
m       the population size of Free Pattern Search
n       the number of dimensions
T       the number of steps for the local Pattern Search
∂       the rate of the detection range
Xj      the jth individual in the population
xi      the ith element of an individual

population, and then PS takes the responsibility for accelerating them. In order to enhance the exploitation phase and keep the diversity of the population, FS works with PS, together with the other operators, to ensure both the exploitation and the exploration of FPS.

The rest of the paper is organized as follows. The next section introduces the traditional HJPS. The detailed description of FPS is presented in Section 3. In Section 4, two sets of experiments are applied to evaluate the performance of FPS. Section 5 gives the conclusion and future research.

2. Traditional Pattern Search

The traditional HJPS is a single-point search method, which generates a sequence of non-increasing solutions over its iterations. The "pattern", which is very important, is the neighborhood structure of the base point, and the points on the neighborhood structure are called trials. Fig. 1 shows the HJPS pattern in 2D. The white points are the trials of the black one (the base point); it can be seen clearly that the trials form a grid, and this grid is the neighborhood structure of the base point, called the "pattern". Typically, the columns of the pattern are the signed unit directions scaled by the step sizes, and the pattern is denoted by the matrix M as shown below:

M = [ Δ1  −Δ1   0    0   · · ·   0    0
      0    0   Δ2  −Δ2   · · ·   0    0
      ·    ·    ·    ·   · · ·   ·    ·
      0    0    0    0   · · ·  Δn  −Δn ]

Fig. 1. Illustration of the pattern on 2D.

In HJPS, the search step sizes Δ on each dimension are the same (Δi = Δj). Actually, there are many types of pattern in the paradigm of PS, such as canonical PS [14] using a square n × 2n pattern, Rank Ordered PS [23] using an n × (n+1) pattern, and Compass Search [24] using an n × 2n pattern. The HJPS pattern is of the n × 2n type, and it is simple and effective.

There are three kinds of points in HJPS: the current point ψ, the base point ϕ and the previous point τ. The current point is the current solution of HJPS; the base point is used for detecting better solutions, and it is assigned to the current point if it is better. The previous point is the last current point, and it helps the base point find a profitable direction. Initially, the base point is initialized to the current point. HJPS contains the exploration move (EMove, the same as coordinate search) and the pattern move (PMove). The EMove completes the coordinate search on all dimensions of the base point to find the best trial near it. The steps of EMove are as follows:

Step 1: Construct the pattern M for the base point ϕ. Initialize the parameter i = 0.
Step 2: If f(ϕ) > f(ϕ + M:,2i), then ϕ = ϕ + M:,2i; otherwise, if f(ϕ) > f(ϕ + M:,2i+1), then ϕ = ϕ + M:,2i+1. Increment i.
Step 3: If i ≥ n, terminate; otherwise go to Step 2.

After the EMove, the PMove is implemented when the best trial is better than the current point, meaning that a better solution has been found. Then the current point ψ and the previous point τ are updated, and a new base point ϕ is constructed from the current point and the previous point. The formulation of PMove is:

τ = ψ,  ψ = ϕ,  ϕ = 2ϕ − τ
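As a compact illustration of the moves just described, here is a minimal, self-contained Python sketch of HJPS. The variable names psi, tau and phi mirror ψ, τ and ϕ; this is an illustrative reading of the steps above, not the authors' code, and the parameter defaults are assumptions:

```python
def hjps(f, x0, delta=0.1, gamma=0.5, tol=1e-8, max_iter=10000):
    """Sketch of Hooke-Jeeves Pattern Search: an exploration move (EMove)
    over the coordinate directions, a pattern move (PMove) along the
    profitable direction, and step-size reduction when no trial improves."""
    psi = list(x0)                      # current point
    phi = list(psi)                     # base point
    step = [float(delta)] * len(psi)    # per-dimension step sizes

    def emove(base):
        # Coordinate search: try +step, then -step, on every dimension.
        base = list(base)
        for i in range(len(base)):
            for s in (step[i], -step[i]):
                trial = list(base)
                trial[i] += s
                if f(trial) < f(base):
                    base = trial
                    break
        return base

    for _ in range(max_iter):
        cand = emove(phi)
        if f(cand) < f(psi):            # a better trial was found: PMove
            tau, psi = psi, cand        # previous <- current, current <- base
            phi = [2.0 * c - t for c, t in zip(cand, tau)]   # phi = 2*phi - tau
        else:                           # no improvement: shrink the pattern
            step = [s * gamma for s in step]
            phi = list(psi)
            if max(step) < tol:
                break
    return psi
```

Running this on the toy problem f(x) = x² from x = 0.4 with step 0.1 reproduces the current-point sequence 0.4, 0.3, 0.1 discussed below before the step size starts shrinking.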

The flow chart of HJPS is presented in Fig. 2.

Fig. 2. The flow chart of HJPS.

The search step size of the pattern is very important in PS: it determines the size of the pattern. During the search, it is reduced by the factor γ whenever no trial is better than the current point. Consider, for example, minimizing f(x) = x², x ∈ (−0.5, 0.5), and assume that the initial current position is the point 0.4 (point A) and the search step size is 0.1, as shown in Fig. 3. The pattern matrix is then [0.1, −0.1], so the trials are the points 0.5 and 0.3. Searching the pattern of the base point (EMove) finds that 0.3 is the best trial, so the base point is now 0.3 (point a). From Fig. 3, it can clearly be seen that the profitable direction is toward decreasing x. The current position is then set to 0.3 (point B), and another step is taken in the profitable direction (PMove), giving the new base point 2 × 0.3 − 0.4 = 0.2. EMove again finds that 0.1 (point b) is the best trial of this round. The current position then moves to 0.1 (point C), and PMove takes the base point to 2 × 0.1 − 0.3 = −0.1 (point c). This procedure is repeated until the base point is no better than the current position; the search step size is then reduced.

Fig. 3. The toy problem.

3. Free Pattern Search

This section presents a new algorithm inspired by HJPS and FS, called Free Pattern Search. The algorithm integrates the HJPS method as its local search and adopts operators from FS that keep the diversity of the population by preventing individuals from searching repetitiously.

3.1. Pseudo code of FPS

Fig. 4. The pseudo code of FPS.
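Since the pseudo code in Fig. 4 is given only as a figure, the following is a speculative, heavily simplified Python sketch of an FPS-style main loop, assembled from the operator descriptions in Section 3. All implementation details (the function name `fps_sketch`, the clamping helper, the evaluation budget handling) are assumptions, and the throw operator is omitted for brevity:

```python
import random

def fps_sketch(f, low, up, m=10, alpha=0.1, gamma=0.5, T=5, max_fes=20000):
    """Simplified FPS-style loop: initialize, then repeat a short pattern
    search per individual followed by sensibility-based acceleration."""
    n = len(low)
    pop = [[(up[i] - low[i]) * random.random() + low[i] for i in range(n)]
           for _ in range(m)]
    step = [[(up[i] - low[i]) * alpha for i in range(n)] for _ in range(m)]
    fes = 0

    def clamp(x):
        return [min(max(v, low[i]), up[i]) for i, v in enumerate(x)]

    def local_search(x, st):
        # Search operator: at most T exploration sweeps of a pattern search.
        nonlocal fes
        best, fbest = list(x), f(x)
        fes += 1
        for _ in range(T):
            improved = False
            for i in range(n):
                for s in (st[i], -st[i]):
                    trial = clamp(best[:i] + [best[i] + s] + best[i + 1:])
                    ft = f(trial)
                    fes += 1
                    if ft < fbest:
                        best, fbest, improved = trial, ft, True
                        break
            if not improved:             # shrink this individual's pattern
                for i in range(n):
                    st[i] *= gamma
        return best, fbest

    while fes < max_fes:
        fits = []
        for j in range(m):
            pop[j], fj = local_search(pop[j], step[j])
            fits.append(fj)
        fmin, fmax = min(fits), max(fits)
        for j in range(m):
            # Sensibility-based acceleration in the spirit of Eqs. (4)-(6).
            s = fmin + (fmax - fmin) * random.random()
            if s < fits[j]:              # abandoned: accelerate via a partner
                r = random.randrange(m)
                if fits[j] < fits[r]:
                    pop[j] = clamp([2 * a - b for a, b in zip(pop[j], pop[r])])
                else:
                    pop[j] = clamp([2 * b - a for a, b in zip(pop[j], pop[r])])
    return min(pop, key=f)
```

Note that the best individual is never accelerated here, because its sensibility can never fall below its own (minimal) fitness, which matches the behavior described in Section 3.4.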

FPS is a population-based, PS-like algorithm. The main parts of FPS are initialization, exploration and termination. The most important part is the exploration part, which includes three operators. The search operator aims at finding the local optimum: it executes an HJPS-based procedure as the local search engine. The acceleration and throw operators build on the high efficiency of the search operator. The acceleration operator is inspired by FS and borrows its self-regulated management: it works on a pair of two individuals and accelerates the worse one with the help of the better one. The throw operator is used to avoid repetitious search by individuals and to make full use of the individuals' search ability. The pseudo code of FPS is shown in Fig. 4.

3.2. Initialization and termination

There are many kinds of initialization methods, each with its own advantages. In this research, the random initialization strategy is selected for FPS because of the uniformity of the initial population. With m the population size, n the number of dimensions, i = 1, 2, . . ., n, j = 1, 2, . . ., m, and Upi and Lowi the borders of the search space, the random initialization is:

Xj,i = (Upi − Lowi) × rand(0, 1) + Lowi    (1)

For fairness to the elements in different dimensions, the initial search step sizes used in FPS differ from those of HJPS: they are proportional to the boundary sizes through the size factor α (as shown in Fig. 1, in general Δ1 ≠ Δ2 in FPS). For each dimension i, the initial search step size is:

Δi,init = (Upi − Lowi) × α    (2)
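The initialization of Eqs. (1) and (2) can be sketched in a few lines of Python; the helper name `initialize` and its signature are illustrative assumptions, not part of the paper:

```python
import random

def initialize(m, low, up, alpha=0.1):
    """Random uniform initialization of the population (Eq. (1)) and
    per-dimension initial step sizes proportional to the boundary
    size through the factor alpha (Eq. (2))."""
    n = len(low)
    pop = [[(up[i] - low[i]) * random.random() + low[i] for i in range(n)]
           for _ in range(m)]
    step_init = [(up[i] - low[i]) * alpha for i in range(n)]
    return pop, step_init
```

With asymmetric bounds, each dimension gets its own step size, which is the fairness property the text describes.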


During the search process, infeasible solutions should be prevented and repaired when they appear, which can be formulated as:

xi = { Lowi   if xi < Lowi
       Upi    if xi > Upi
       xi     otherwise        (3)

The termination is a hybrid criterion, which combines the maximum number of steps, the maximum number of function calls and the accuracy of the best solution. Once one of them is satisfied, the process terminates and outputs the results.

3.3. Search operator

The search operator is one of the most important parts of FPS. It executes the HJPS-based procedure for all individuals in order to find a local optimum. However, the number of EMoves is limited to T, because excessive local search leads to premature convergence. The search operator is shown in Fig. 5.

3.4. Acceleration operator

Upon completion of the search operator, some individuals may be trapped in local optima, and it is necessary to exchange solution-space information among the whole population in order to achieve better performance, i.e. better solutions and a higher convergence speed. The acceleration operator separates the population into two groups; the individuals in the first group will be accelerated. This classification comes from the self-regulated management in FS [25], and it uses sensibilities to judge. The sensibility sj is the luck degree of an individual; it is a random number in the fitness interval of the current population, generated as follows:

Fmax = max{f(Xj)},  Fmin = min{f(Xj)},  sj = Fmin + (Fmax − Fmin) × rand(0, 1)    (4)

When the sensibility of an individual is better than its fitness, the individual is abandoned because of its bad fitness and luck, and it is classified into the first group. Otherwise its fitness is good enough, and it is marked for the second group. For minimization, better stands for less:

If sj < f(Xj), Xj ∈ first group;  if sj ≥ f(Xj), Xj ∈ second group    (5)
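The sensibility-based grouping of Eqs. (4) and (5) can be sketched as follows; the helper name `classify` is an assumption for illustration:

```python
import random

def classify(population, f):
    """Sensibility-based grouping (Eqs. (4)-(5)): each individual draws a
    random sensibility s_j in [Fmin, Fmax]; if s_j < f(X_j) the individual
    is 'abandoned' into the first (to-be-accelerated) group, otherwise it
    goes to the second group. Minimization: 'better' means smaller."""
    fitness = [f(x) for x in population]
    fmin, fmax = min(fitness), max(fitness)
    first, second = [], []
    for x, fx in zip(population, fitness):
        s = fmin + (fmax - fmin) * random.random()
        (first if s < fx else second).append(x)
    return first, second
```

Note that the best individual always lands in the second group (its fitness equals Fmin, and no sensibility can fall below Fmin), while the worst individual is almost surely abandoned.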

The classification in FPS is very similar to that in FS. The only difference is how the abandoned individuals are dealt with: FPS accelerates them while FS abandons them. The acceleration is a PMove-like process that uses the profitable direction to accelerate the solution. Each individual in the first group selects a partner in the second group at random, and the pair then performs the PMove-like process and produces a new solution along the profitable direction. In the acceleration operator, the individual in the first group is denoted by Xj1, and the randomly selected individual in the second group is denoted by Xr2. There is no guarantee that the fitness of Xr2 is better than that of Xj1, even though the first group is the abandoned group, so the PMove-like process needs the fitness order of the pair. The new solution is the accelerated result of Xj1; Xr2 stays unchanged. The acceleration formula is:

Xj1 = { 2 × Xj1 − Xr2   if f(Xj1) < f(Xr2)
        2 × Xr2 − Xj1   otherwise             (6)
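Eq. (6) is a one-case extrapolation through the better member of the pair; a minimal sketch (the helper name `accelerate` is an assumption):

```python
def accelerate(xj, xr, f):
    """PMove-like acceleration (Eq. (6)): extrapolate through the better
    of the pair; the first-group individual xj is replaced, xr is kept."""
    if f(xj) < f(xr):
        return [2.0 * a - b for a, b in zip(xj, xr)]   # 2*xj - xr
    return [2.0 * b - a for a, b in zip(xj, xr)]       # 2*xr - xj
```

In both branches the new point lies beyond the better point, along the direction from the worse point toward it, which is exactly the "profit direction" idea of PMove.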

Fig. 5. The flow chart of the search operator.

After the acceleration, almost all of the worse individuals have been accelerated, but the best individual stays unchanged because of its best fitness.

3.5. Throw operator

During the local search, the individuals tend to gather, which may cause reduplicative search in a small space. In order to keep the diversity of the population, the throw operator detects such reduplication and scatters the individuals involved. The detection range, denoted by D ∈ Rn, is a constant vector independent of the population; it is proportional to the initial search step sizes (Δi,init), with scaling factor ∂:

Di = Δi,init × ∂    (7)


Fig. 6. Status of the individuals in the throw operator.

The distance d ∈ Rn between two individuals (X1 and X2) is also a vector; it is the absolute difference of the two individuals:

d = abs(X1 − X2)    (8)
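To make the duplicate detection and the boundary-wrapped repositioning concrete, here is a minimal sketch of the throw logic, assuming the detection rule of Eq. (9) and the wrap-around of Eq. (10) below. The helper names `is_duplicate` and `throw` are mine, and this sketch only adds the step length per dimension, although the text allows adding or subtracting it:

```python
def is_duplicate(x1, x2, detect):
    """Eqs. (8)-(9): the individuals are duplicates when the absolute
    difference is within the detection range on every dimension,
    i.e. max_i(d_i - D_i) < 0."""
    d = [abs(a - b) for a, b in zip(x1, x2)]
    return max(di - Di for di, Di in zip(d, detect)) < 0

def throw(worse_start, step_init, low, up):
    """Move the worse individual's start position one initial-step length
    per dimension, wrapping any infeasible coordinate back into the
    feasible domain as in Eq. (10)."""
    x_new = []
    for i, (v, s) in enumerate(zip(worse_start, step_init)):
        t = v + s                       # throw position on dimension i
        if t < low[i]:
            t = up[i] - low[i] + t      # wrap from below
        elif t > up[i]:
            t = -up[i] + low[i] + t     # wrap from above
        x_new.append(t)
    return x_new
```

The wrap-around keeps the repositioned individual inside the bounds while still moving it a full step length away from the crowded region.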


When one individual is beyond the detection range of another individual, the two individuals are not reduplicative and are searching in different places. Otherwise, they are regarded as identical and need to be scattered in order to make full use of the population's search ability. It should be pointed out that di < Di on all dimensions is an essential condition for the equality of two individuals, meaning that:

max(di − Di) < 0    (9)

Two positions need to be distinguished for an individual. The start position is the original position of the individual before the search and acceleration operators, while the current position is its position at present. The start position largely determines the search result: when two individuals are identical, their current positions are too close, but their start positions are responsible for this. In the throw process, scattering the gathered individuals is realized by moving the worse individual away. Adding or subtracting a Δi,init length to every dimension of the start position of the worse individual is enough to break up the gathering. The worse individual then starts a new search round at the new start position with the initial search step size. As shown in Fig. 6, the worse individual and the better individual have gathered: even though their start positions are far apart, their current positions are close together. Every dimension of the worse individual is moved a Δi,init length away, based on the better individual, and the new position is denoted the throw position. The throw position xt is accepted when it is feasible; otherwise a new feasible position is generated to replace it. As shown in Fig. 6, when the throw position is outside the feasible domain, the throw operator generates a new position xnew, and the relationship between the throw position and the new position is:

xinew = { Upi − Lowi + xit     if xit < Lowi
          −Upi + Lowi + xit    if xit > Upi
          xit                  otherwise       (10)

4. Experimental setup and results

To evaluate the performance of FPS, various famous benchmark instances are used, and two families of well-known algorithms are adopted for comparison with the proposed FPS: PSO variants and DE variants.

There are two sets of testing instances. The first set is used for the comparison with the PSO variants on the traditional benchmarks, while the DE variants are selected for the second set, which uses the first ten functions of the CEC 2005 benchmark set [26]. Two recent PSOs and two recent DEs are selected for the testing: Cellular Particle Swarm Optimization (CPSO) [10], the Integrated Learning Particle Swarm Optimizer (ILPSO) [27], Learning-Enhanced Differential Evolution (LeDE) [28] and Clustering-Based Differential Evolution (CDE) [29], respectively. The first test shows the performance of FPS on the traditional benchmarks: FPS outperforms CPSO in 10D and 30D and also slightly outperforms ILPSO in 10D, 30D and 50D. In the second test, the first ten functions of the CEC 2005 benchmark set are denoted F1–F10. In contrast to the traditional benchmarks, these functions are more difficult to solve [30,31] because of three features: the shift of the global optimum away from the origin, the bias of the global optimum away from zero, and the rotation of the function coordinates. FPS is applied to them, and the results show that the convergence speed of FPS is much faster than that of the DE variants while the quality of its solutions is also comparable with the DEs.

The default parameters of FPS are listed below:

(a) Population size (POP): 10.
(b) Search step sizes rate (α): 0.1.
(c) Number of steps (T): 5.
(d) Rate of the individual domain (∂): 0.5.
(e) Reduce factor of the pattern (γ): 0.5.

The max step of FPS is 500 when compared with the PSO variants, while this is replaced by the max function evaluations (FEs) when compared with the DE variants.

4.1. Comparison with PSO variants

This section focuses on the comparison of FPS and PSO variants. Two PSOs are selected from the various PSO variants; both show good performance on continuous function optimization. One is the CPSO proposed by Shi et al. [10]; the other is the ILPSO introduced by Sabat et al. [27]. They are both recent variants of PSO and achieve better results than other PSO variants. The traditional PSO-w is also used for comparison. In order to make a fair comparison, the benchmarks are chosen from the original literature [13,32,33] and tested by FPS. The comparisons are reported separately, since the benchmarks each algorithm used are not exactly the same, and the dimensions of the benchmark functions differ as well.

4.1.1. Comparison with CPSO

This section describes the detailed results of CPSO and FPS. CPSO-outer and CPSO-inner are two kinds of CPSO, of which CPSO-outer performs better [10]; in order to show the performance of FPS, CPSO-outer (CPSO for short in the remaining sections) is selected. PSO-w is also compared with FPS, because it is a widely used traditional PSO variant, and the performance of FPS can be validated by the comparison with PSO-w. Ten well-known functions used as benchmarks are taken from the original literature of CPSO [10]. The dimensions of the benchmark functions were set to 10 and 30. The best, worst and mean optima were used as the terms for comparison, as well as the standard deviation of the results. Table 1 presents the results for 10D, and Table 2 gives the results for 30D. The success rates of all algorithms are shown in Table 3.

From Table 1, the results show the performance of FPS. CPSO wins on 7 functions in total, while FPS scores 6. The performance of CPSO and FPS is almost equal on 10 dimensions, as shown in Table 1. There are 3 functions on which CPSO and FPS both reach the


Table 1
10-dimension functions of FPS, PSO-w and CPSO-outer.

Function      Algorithm     min          max          Mean         Std.
Sphere        FPS           0            0            0            0
              PSO-w         6.16E−140    3.98E−131    3.52E−132    9.46E−132
              CPSO-outer    0            0            0            0
Rosenbrock    FPS           0            1.14E−13     2.68E−14     1.03E−14
              PSO-w         1.03E−02     3.99E+00     1.09E+00     8.18E−01
              CPSO-outer    0            3.99E+00     2.66E−01     1.01E+00
Quadric       FPS           3.02E−06     1.32E−02     2.08E−03     1.24E−03
              PSO-w         5.88E−136    2.12E−128    8.28E−130    3.86E−129
              CPSO-outer    0            0            0            0
Schwefel      FPS           1.27E−04     1.27E−04     1.27E−04     8.57E−21
              PSO-w         0            4.74E+02     2.44E+02     1.26E+02
              CPSO-outer    1.18E+02     8.76E+02     5.21E+02     2.17E+02
Griewank      FPS           0            1.23E−02     5.42E−03     1.47E−03
              PSO-w         9.86E−03     1.50E−01     5.05E−02     2.61E−02
              CPSO-outer    0            0            0            0
Weierstrass   FPS           0            0            0            0
              PSO-w         0            0            0            0
              CPSO-outer    0            0            0            0
Quartic       FPS           0            0            0            0
              PSO-w         1.51E−04     2.02E−03     7.77E−04     4.10E−04
              CPSO-outer    7.35E−06     7.36E−04     1.99E−04     1.87E−04
Nonc-Rasti    FPS           0            2.98E+00     1.39E+00     2.51E−01
              PSO-w         0            3.00E+00     3.67E−01     7.18E−01
              CPSO-outer    0            0            0            0
Rastrigin     FPS           0            0            0            0
              PSO-w         0            2.98E+00     1.45E+00     8.70E−01
              CPSO-outer    0            0            0            0
Ackeley       FPS           4.44E−15     1.33E−14     7.77E−15     9.55E−16
              PSO-w         2.66E−15     6.22E−15     5.39E−15     1.53E−15
              CPSO-outer    0            0            0            0

Bold type of a function name stands for the results of FPS and CPSO being tied with each other, while bold italic means that FPS wins.

global optimum of zero. FPS is good at some multi-modal functions such as the Rosenbrock and Schwefel functions, while CPSO is better on other multi-modal functions like the Ackeley and Nonc-Rasti functions. When comparing FPS and PSO-w, the result is decisive and the scalability of FPS is validated: the results of FPS are much better on most functions in all terms, except that FPS loses slightly on the Quadric and Ackeley functions.

The results of the 30-dimension tests are presented in Table 2. In this round CPSO only scores 2 while FPS gets 8 points; that is, FPS beats CPSO 8:2 on the 30-dimension functions. This implies that FPS is much more scalable than CPSO as the dimension of the functions increases. CPSO can still find the best solution of the Ackeley function and performs better than FPS on it in all terms, while FPS obtains the best solution of the Schwefel function with a standard deviation of 0. Compared with CPSO, FPS still performs well on most multi-modal functions and outperforms CPSO. The results of PSO-w support the above points: when compared with PSO-w, FPS only loses on the Quadric function, and most of the results of FPS are much better.

Table 3 presents the success rate of each test. The results show that CPSO gets 8 scores in total and wins in 10 dimensions, while FPS wins in 30 dimensions; in fact, CPSO only won 4 functions when the dimension was 30. The success rate of FPS decreases little as the dimension increases, and on the Griewank function the success rate even increases from 40% to 60%. For most functions, FPS is scalable with increasing dimension.

4.1.2. Comparison with ILPSO

This section discusses the comparison of ILPSO and FPS. ILPSO modifies the learning strategy of PSO in order to enhance the

convergence speed and the quality of the search. It achieves good results on some functions. In this research, the first nine functions used in the original literature of ILPSO [27] are set as benchmarks. The rotated versions of functions 10–18 have not been tested here because the original rotation matrix M used in ILPSO could not be obtained, and it is vital to the result. In this round, the mean results and the standard deviations of the results are studied, the dimensions are set to 10, 30 and 50, and the results of PSO-w are also included for comparison. There are 10 trials for FPS, and the results are listed below.

Table 4 presents the results of FPS and ILPSO in all dimensions. It shows that FPS and ILPSO end in a tie: FPS is better on 4 functions while it loses on another 4. This indicates that some functions suit FPS while others are better suited to ILPSO. In the testing, FPS wins on some unimodal functions like the SumSquare function and on some multi-modal functions like the Schwefel function, which verifies both the global search and the local accuracy of FPS.

4.2. Comparison with DE variants

In this section, ten functions are selected as benchmarks from the special session on real-parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC2005) [26]. They are denoted F1–F10 in this research. CEC2005 is a novel function set in which almost all functions are shifted and/or rotated, which makes them more difficult for many algorithms to solve. According to Salomon [30], a loss of algorithm performance can be observed under a rotation of the coordinate system of a function. The global optimum of traditional benchmark functions has five shortcomings, and Liang et al. [31] proposed three methods to avoid them: shifting the global optimum to a random position,


Table 2
30-dimension functions of FPS, PSO-w and CPSO-outer.

Function      Algorithm     min          max          mean         std.
Sphere        FPS           0            0            0            0
              PSO-w         1.13E−33     3.45E−28     1.72E−29     6.53E−29
              CPSO-outer    0            2.81E−69     9.48E−71     5.13E−70
Rosenbrock    FPS           4.75E−16     9.82E−13     2.56E−13     9.72E−14
              PSO-w         1.05E+00     4.38E+00     2.07E+00     1.20E+00
              CPSO-outer    1.05E−04     3.46E+00     1.01E+00     6.65E−01
Quadric       FPS           6.41E+02     2.88E+03     2.02E+03     2.24E+02
              PSO-w         3.16E−30     3.58E−27     8.00E−28     1.42E−27
              CPSO-outer    0            3.67E−51     8.10E−52     1.24E−51
Schwefel      FPS           3.81E−04     3.81E−04     3.81E−04     0
              PSO-w         7.11E+02     1.90E+03     1.33E+03     3.76E+02
              CPSO-outer    4.74E+02     6.24E+03     1.51E+03     1.42E+03
Griewank      FPS           0            8.07E−02     1.10E−02     7.47E−03
              PSO-w         1.72E−02     1.35E−01     7.06E−02     3.13E−02
              CPSO-outer    0            1.17E−01     1.52E−02     2.22E−02
Weierstrass   FPS           0            3.00E+00     3.00E−01     2.84E−01
              PSO-w         7.11E−15     2.25E−03     9.53E−04     1.02E−03
              CPSO-outer    1.32E−01     3.00E+00     1.91E+00     1.38E+00
Quartic       FPS           0            0            0            0
              PSO-w         1.37E−02     4.41E−02     2.68E−02     1.18E−02
              CPSO-outer    6.05E−05     4.45E−04     1.54E−04     1.10E−04
Nonc-Rasti    FPS           6.00E+00     1.00E+01     8.3E+00      4.01E−01
              PSO-w         7.00E+00     3.90E+01     1.70E+01     1.18E+01
              CPSO-outer    2.50E+01     1.25E+02     7.52E+01     3.52E+01
Rastrigin     FPS           0            0            0            0
              PSO-w         1.29E+01     3.68E+01     2.89E+01     7.45E+00
              CPSO-outer    0            1.08E+02     5.00E+01     2.49E+01
Ackeley       FPS           1.55E−14     3.77E−14     2.55E−14     2.26E−15
              PSO-w         2.04E−14     5.24E−14     4.00E−14     1.25E−14
              CPSO-outer    0            6.22E−15     5.03E−15     2.90E−15

locating the optimum on the bounds, and rotating the function using Salomon's method [30]. F1–F5 are unimodal functions and F6–F10 are multi-modal; among them, F3, F7, F8 and F10 are rotated versions. This section discusses the comparison between FPS and DE variants. Two famous DE variants are chosen for the comparison: one is Learning-Enhanced Differential Evolution (LeDE/exp), and the other is Clustering-Based Differential Evolution (CDE). DE/rand/1/bin is used as the baseline. The termination criterion of FPS is the max FEs, and the dimensions are 30 and 50.

4.2.1. Comparison with LeDE

In this section, a recent DE variant, LeDE/exp proposed by Cai et al. [28], is compared with FPS. The conventional DE algorithm, DE/rand/1/bin, is used as a control to put the results of FPS and LeDE/exp in context.

The testing is organized into 2 groups, whose dimensions are set to 30 and 50 respectively. The global optima and the convergence speed are used as the terms of comparison, and the error tolerance is 1E−008. For a fair comparison, the termination criterion of the algorithms is the max FEs instead of the max number of algorithm steps: when an algorithm needs more FEs to reach the global optimum, it is more expensive. The max FEs is set to 300,000. The experimental results of the first group are shown in Tables 5 and 6. Results highlighted in boldface are the best results, while results in bold italics with underlines are tied with each other. Since the global optima of these functions are not at the origin, truncation error prevents obtaining results more accurate than 1E−008, so results below 1E−008 are regarded as the same.

Table 3
The successful rate of FPS, PSO-w and CPSO-outer (%).

              10 dimension                        30 dimension
Function      FPS     PSO-w    CPSO-outer        FPS     PSO-w    CPSO-outer
Sphere        100     0        100               100     0        90
Rosenbrock    40      0        86.7              0       0        0
Quadric       0       0        100               0       0        40
Schwefel      100     10       0                 100     0        0
Griewank      40      0        100               60      0        26.7
Weierstrass   100     100      100               70      0        0
Quartic       100     0        0                 100     0        0
Nonc-Rasti    10      10       100               0       0        0
Rastrigin     100     16.7     100               100     0        10
Ackeley       0       0        100               0       0        20
Total win     5       1        8                 8       0        4


Table 4
FPS compared with ILPSO using 10-, 30- and 50-dimension functions.

                          10 dimensions              30 dimensions              50 dimensions
Function     Algorithm    Mean         Std.          Mean         Std.          Mean         Std.
Ackeley      FPS          9.54E−15     1.40E−15      2.35E−14     1.33E−15      3.17E−14     3.71E−15
             PSO-w        2.66E−15     0             9.53E−15     3.67E−15      5.67E−13     1.68E−12
             ILPSO        4.14E−16     1.25E−15      2.90E−15     9.17E−16      2.90E−15     9.17E−16
Griewank     FPS          3.81E−02     1.13E−02      1.06E−02     4.36E−03      2.46E−03     1.20E−03
             PSO-w        6.04E−02     1.87E−02      1.70E−02     1.86E−02      8.53E−03     1.18E−02
             ILPSO        0            0             0            0             0            0
Powel        FPS          1.82E−07     2.81E−08      1.30E−06     4.37E−08      2.71E−06     5.02E−08
             PSO-w        2.53E−05     9.15E−07      9.15E−02     2.13E−01      1.48E+00     3.50E+00
             ILPSO        8.68E−08     2.43E−07      3.75E−15     1.44E−14      5.48E−12     2.12E−11
Rastrigin    FPS          0            0             0            0             0            0
             PSO-w        2.19E+00     1.47E+00      3.85E+01     1.08E+01      8.82E+01     1.72E+01
             ILPSO        0            0             0            0             0            0
Rosenbrock   FPS          9.54E−15     3.72E−15      4.77E−13     2.71E−13      0.797324     0.504272
             PSO-w        3.04E+00     3.05E+00      3.70E+01     3.52E+01      9.16E+01     5.33E+01
             ILPSO        7.24         4.93E−1       2.74E+1      5.26E−1       4.78E+1      5.86E−2
SumSquare    FPS          0            0             0            0             0            0
             PSO-w        7.88E−251    0             2.97E−52     1.15E−51      1.06E−01     4.11E−01
             ILPSO        6.02E−251    0             7.97E−251    0             7.46E−251    0
Zakharov     FPS          2.10E−121    1.95E−121     1.49E−06     4.26E−07      0.47466      0.0903379
             PSO-w        2.74E−159    1.05E−158     1.54E−10     2.95E−10      2.65E−01     1.94E−01
             ILPSO        6.29E−251    0             1.20E−102    4.64E−102     1.63E−54     6.28E−54
Elliptic     FPS          0            0             0            0             0            0
             PSO-w        8.83E−251    0             1.60E−01     6.18E−01      5.02E+06     7.42E+06
             ILPSO        5.78E−251    0             6.69E−251    0             7.49E−251    0
Schwefel     FPS          1.27E−04     0             3.81E−04     0             6.36E−04     0
             PSO-w        3.60E+03     2.35E+01      1.09E+04     5.69E+01      1.82E+04     1.14E+02
             ILPSO        3.60E+03     8.39E+01      1.11E+04     5.77E+02      1.82E+04     8.16E+02

Table 5
Average and standard deviation of the best error values of DE/rand/1/bin, LeDE/exp and FPS for D = 30.

Func.  MNFES    DE/rand/1/bin (mean ± std.)   LeDE/exp (mean ± std.)        FPS (mean ± std.)
F1     300,000  1.62E−029 ± 5.53E−029         0.00E+000 ± 0.00E+000         2.68E−013 ± 1.74708E−014
F2     300,000  3.45E−005 ± 3.40E−005         9.12E−021 ± 1.67E−020         7.61E−008 ± 2.10E−008
F3     300,000  4.27E+005 ± 2.78E+005         4.85E+005 ± 2.48E+005         1.76E+005 ± 1.70E+005
F4     300,000  1.81E−002 ± 1.80E−002         1.31E−010 ± 3.37E−010         1.51E+004 ± 8.93E+002
F5     300,000  4.55E+001 ± 3.78E+001         1.91E+003 ± 3.69E+002         5.42E+002 ± 6.60E+001
F6     300,000  9.30E−002 ± 1.97E−001         1.22E−005 ± 8.51E−005         1.64E+000 ± 3.34E−001
F7     300,000  1.48E−004 ± 1.05E−003         1.16E−002 ± 1.18E−002         1.10E−002 ± 2.29E−003
F8     300,000  2.10E+001 ± 4.66E−002         2.10E+001 ± 5.47E−002         2.00E+001 ± 1.15E−002
F9     300,000  1.28E+002 ± 2.48E+001         0.00E+000 ± 0.00E+000         3.31E−013 ± 1.31E−014
F10    300,000  1.81E+002 ± 9.42E+000         8.08E+001 ± 1.96E+001         1.71E+002 ± 6.65E+000

As Table 5 shows, LeDE/exp holds most of the best results, with 3 outright wins and 4 ties; it is followed by FPS with 1 outright win and 4 ties. Comparing FPS with LeDE/exp head to head, however, FPS is slightly better on F3, F5, F7 and F8 and loses on F4, F6 and F10, which puts FPS marginally ahead; in fact, FPS fails completely on F4. DE/rand/1/bin performs worst in this test, although it still obtains two best results, so FPS outperforms DE/rand/1/bin in this round. Table 6 gives another view of the results: the FEs each algorithm needs to reach the 1E-008 accuracy level, together with its success rate. In this view, FPS shows a

Table 6
No. of successful runs (in parentheses) and NFES required to achieve the 1E−008 accuracy level, for LeDE/exp and FPS, D = 30.

Func.  MNFES    DE/rand/1/bin             LeDE/exp                     FPS
F1     300,000  95,348 ± 2283.86 (1.00)   50,720 ± 1373.46 (1.00)      29,950 ± 13.86 (1.00)
F2     300,000  NA (0.00)                 153,484 ± 5807.57 (1.00)     289,096 ± 981.182 (0.24)
F3     300,000  NA (0.00)                 NA (0.00)                    NA (0.00)
F4     300,000  NA (0.00)                 251,683 ± 13,522.22 (1.00)   NA (0.00)
F5     300,000  NA (0.00)                 NA (0.00)                    NA (0.00)
F6     300,000  NA (0.00)                 284,215 ± 8882.80 (0.84)     NA (0.00)
F7     300,000  165,047 ± 4177.00 (0.98)  223,665 ± 13,814.00 (0.34)   105,972 ± 635.94 (0.36)
F8     300,000  NA (0.00)                 NA (0.00)                    NA (0.00)
F9     300,000  NA (0.00)                 183,859 ± 4446.93 (1.00)     43,891 ± 1435.57 (1.00)
F10    300,000  NA (0.00)                 NA (0.00)                    NA (0.00)


Fig. 7. The convergence graph for CEC function F1–F5 (30D).

clear advantage over the other algorithms. On functions F1, F7 and F9, both LeDE/exp and FPS perform well, but FPS needs fewer FEs; on F9 in particular, LeDE/exp needs almost four times as many FEs as FPS. The remaining results in the table show a similar trend. DE/rand/1/bin loses completely against FPS, and FPS outperforms both DE variants. The results of the second group are shown in Tables 7 and 8. The only difference between group 1 and group 2 is the dimension, which is 50 in group 2; the results mirror those of group 1, which means that increasing the dimension affects DE/rand/1/bin, LeDE/exp and FPS to the same degree. Figs. 7–10 present the convergence graphs of F1–F10, marked as CECf1 to CECf10 in the graphs. Figs. 7 and 8 draw the curves of the 30D tests, while Figs. 9 and 10 present the 50D tests.

4.2.2. Comparison with CDE

This section compares FPS with another DE variant, CDE, proposed by Cai et al. [29], which hybridizes one-step k-means clustering with Differential Evolution. For a fair contrast, the original DE is employed as well, with its results taken from the same literature as CDE. The published tests of CDE on the CEC benchmark functions are limited: only the eight functions F1–F4 and F6–F9 were

Fig. 8. The convergence graph for CEC function F6–F10 (30D).


Fig. 9. The convergence graph for CEC function F1–F5 (50D).

tested, and only at dimension 30. The tolerance error is again 1E-008. Note that the maximum FEs of CDE and DE in Table 10 is 500,000, whereas it is only 300,000 in Table 9; the maximum FEs for FPS is 300,000 in all 30D tests. The results show a similar trend: FPS needs fewer FEs while the quality of its solutions is slightly better. Read strictly, FPS performs better on F3, F7 and F8 and loses on F4 and F6, with ties on F1, F2 and F9; the score is 3:2, so FPS wins this round only narrowly. As noted under Table 10, entries that required more than 300,000 function calls exceed the budget granted to FPS, which actually puts FPS at a disadvantage.

5. Discussion

PS is an efficient algorithm, but it can hardly solve more complex problems. FPS extends the traditional PS into a population-based formulation and achieves good results. The interaction between the individuals is very important and enhances the search ability. The performance of FPS has been validated through the comparisons with PSO variants and DE variants, and its advantages are summarized in Table 11:

• FPS improves on the famous HJPS and ensures sufficient local search. PS is an old method and can hardly solve today's more complex problems; FPS inherits the merits of PS and tries to

Fig. 10. The convergence graph for CEC function F6–F10 (50D).


Table 7
Average and standard deviation of the best error values of DE/rand/1/bin, LeDE/exp and FPS for D = 50.

Func.  MNFES    DE/rand/1/bin (mean ± std.)   LeDE/exp (mean ± std.)        FPS (mean ± std.)
F1     500,000  1.80E−028 ± 1.92E−028         0.00E+000 ± 0.00E+000         4.93E−013 ± 1.75E−014
F2     500,000  3.58E+000 ± 2.04E+000         2.62E−012 ± 3.40E−012         2.43E+000 ± 3.13E−001
F3     500,000  2.52E+006 ± 9.47E+005         1.41E+006 ± 5.37E+005         5.44E+005 ± 3.22E+004
F4     500,000  3.42E+002 ± 1.90E+002         4.80E−001 ± 8.32E−001         8.13E+004 ± 4.23E+003
F5     500,000  1.93E+003 ± 3.92E+002         4.88E+003 ± 6.02E+002         4.06E+003 ± 2.45E+002
F6     500,000  2.28E+001 ± 1.60E+001         7.96E+000 ± 1.67E+000         4.33E+001 ± 5.72E+000
F7     500,000  1.08E−003 ± 3.03E−003         3.94E−003 ± 6.56E−003         1.13E−002 ± 2.37E−003
F8     500,000  2.11E+001 ± 3.72E−002         2.11E+001 ± 3.49E−002         2.00E+001 ± 4.73E−003
F9     500,000  2.03E+002 ± 4.72E+001         0.00E+000 ± 0.00E+000         6.04E−013 ± 1.87E−014
F10    500,000  3.60E+002 ± 1.72E+001         1.76E+002 ± 3.59E+001         4.24E+002 ± 1.13E+001

Table 8
No. of successful runs (in parentheses) and NFES required to achieve the 1E−008 accuracy level, for LeDE/exp and FPS, D = 50.

Func.  MNFES    DE/rand/1/bin              LeDE/exp                     FPS
F1     500,000  152,406 ± 2843.35 (1.00)   78,434 ± 995.69 (1.00)       49,818 ± 42.892 (1.00)
F2     500,000  NA (0.00)                  387,599 ± 10,655.14 (1.00)   NA (0.00)
F3     500,000  NA (0.00)                  NA (0.00)                    NA (0.00)
F4     500,000  NA (0.00)                  NA (0.00)                    NA (0.00)
F5     500,000  NA (0.00)                  NA (0.00)                    NA (0.00)
F6     500,000  NA (0.00)                  NA (0.00)                    NA (0.00)
F7     500,000  NA (0.00)                  NA (0.00)                    231,865 ± 1667.07 (0.40)
F8     500,000  NA (0.00)                  NA (0.00)                    NA (0.00)
F9     500,000  NA (0.00)                  303,194 ± 5125.85 (1.00)     102,949 ± 8321.85 (1.00)
F10    500,000  NA (0.00)                  NA (0.00)                    NA (0.00)

Table 9
Average and standard deviation of the best error values of CDE and FPS for D = 30.

Func.  MNFES    DE (mean ± std.)              CDE (mean ± std.)             FPS (mean ± std.)
F1     300,000  0.00E+000 ± 0.00E+000         0.00E+000 ± 0.00E+000         2.68E−013 ± 1.74708E−014
F2     300,000  3.12E−004 ± 1.28E−004         3.60E−016 ± 3.97E−016         7.61E−008 ± 2.10E−008
F3     300,000  1.03E+006 ± 5.57E+005         8.92E+005 ± 3.06E+005         1.76E+005 ± 1.70E+005
F4     300,000  4.11E−004 ± 1.94E−004         4.52E−016 ± 4.93E−016         1.51E+004 ± 8.93E+002
F6     300,000  9.10E−003 ± 1.90E−002         4.10E−003 ± 1.40E−002         1.64E+000 ± 3.34E−001
F7     300,000  2.03E−005 ± 1.57E−005         4.10E−002 ± 6.20E−003         1.10E−002 ± 2.29E−003
F8     300,000  2.10E+001 ± 6.10E−002         2.10E+001 ± 8.20E−002         2.00E+001 ± 1.15E−002
F9     300,000  0.00E+000 ± 0.00E+000         0.00E+000 ± 0.00E+000         3.31E−013 ± 1.31E−014

Table 10
No. of successful runs (in parentheses) and NFES required to achieve the 1E−008 accuracy level, for CDE and FPS, D = 30.

Func.  DE                         CDE                        FPS
F1     89,600 ± 1097 (1.00)       59,176 ± 1114 (1.00)       29,950 ± 13 (1.00)
F2     464,600 ± 8139 (1.00)      188,492 ± 3761 (1.00)      289,096 ± 981 (0.24)
F3     NA (0.00)                  NA (0.00)                  NA (0.00)
F4     467,920 ± 9807 (1.00)      190,753 ± 4294 (1.00)      NA (0.00)
F6     357,646 ± 10,250 (1.00)    322,388 ± 13,471 (1.00)    NA (0.00)
F7     428,790 ± 15,032 (1.00)    227,382 ± 15,566 (0.34)    105,972 ± 635 (0.36)
F8     NA (0.00)                  NA (0.00)                  NA (0.00)
F9     207,102 ± 4102 (1.00)      186,611 ± 4381 (1.00)      43,891 ± 1435 (1.00)

MNFES are 500,000 for CDE and DE, 300,000 for FPS.
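When budgets differ, as they do here (500,000 maximum FEs for CDE and DE versus 300,000 for FPS), one common way to put such entries on a comparable footing is the "success performance" measure from the CEC 2005 protocol [26]: the mean FEs of the successful runs divided by the success rate. A small helper (illustrative only, not part of the paper's method):

```python
def success_performance(mean_fes_success, success_rate):
    """CEC 2005 'success performance': mean FEs over successful runs,
    scaled by (total runs / successful runs). Lower is better; the
    measure is undefined (infinite) when no run succeeds."""
    if success_rate == 0:
        return float("inf")
    return mean_fes_success / success_rate

# e.g. Table 10, F7: FPS succeeds in 36% of runs at ~105,972 FEs,
# CDE in 34% of runs at ~227,382 FEs — FPS scores better.
```

This measure penalizes an algorithm both for slow successful runs and for wasted unsuccessful ones, so it remains meaningful even when the raw budgets differ.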

Table 11
The comparison of FPS and other methods.

                      FPS        CPSO   ILPSO    LeDE   CDE
Convergence speed     Very fast  Fast   Fast     Fast   Fast
Solution accuracy     Better     Good   Better   Good   Good
Swarm management      Yes        No     No       No     No
Robust to dimension   Yes        No     Yes      Yes    Yes

avoid its shortcomings, using a population instead of a single point to enhance the search.
• FPS has good self-regular management, learned systematically from FS. In the acceleration operator, the self-regular management classifies the population into two groups and accelerates the first group; the convergence speed improves after acceleration.

• FPS uses the throw operator to avoid reduplicative search by individuals. When more than one individual gathers in a small region, they need to be scattered to make full use of their search abilities. The throw operator promotes the global search and makes FPS more suitable for multi-modal functions.
• FPS is more scalable with respect to the dimension. FPS uses the pattern paradigm to find a profitable direction, which scales well; as stated in Section 4.1.1, the results of FPS are little affected when the dimension increases from 10 to 30.

6. Conclusion and future research

This research presents a new algorithm called FPS. The motivation behind FPS is to extend PS into a population-based algorithm and obtain better performance, including higher-quality solutions and


faster convergence speed. With the help of Free Search, an acceleration operator is designed and adopted; it works together with the throw operator to keep the diversity of the population and accelerate the search where possible. Two sets of experiments have been carried out, with comparisons between FPS and both PSO variants and DE variants. The results verify the performance of FPS: it achieves higher solution quality and faster convergence than its competitors, and its scalability is also demonstrated as the dimension increases. FPS has a high degree of self-regular management: the individuals are classified and scattered to make full use of their search ability, so the algorithm is good at global search over the whole space. FPS inherits many advantages of PS, including good local search, although many of the disadvantages of PS can also be found in FPS; for example, the pattern can be varied in FPS. The throw operator is dedicated to scattering the individuals, ensuring that the whole space is searched. Future studies can therefore be extended in the following ways. Firstly, the pattern of FPS can be extended into a more powerful one: the pattern of PS is variable, and it can be varied in FPS too. Secondly, the self-regular management can be extended to the search operator, so that the number of EMove steps is selected automatically by the algorithm. Thirdly, FPS can be extended to multi-objective optimization and constrained optimization.

Acknowledgements

This research work is supported by the Natural Science Foundation of China (NSFC) under Grant nos. 60973086 and 51005088, and the National Basic Research Program of China (973 Program) under Grant no. 2011CB706804.

References

[1] A.H. Wright, Genetic algorithms for real parameter optimization, in: G.J.E. Rawlins (Ed.), Foundations of Genetic Algorithms, Morgan Kaufmann, Missoula, 1991, pp. 205–218.
[2] Y.W. Leung, Y.P. Wang, An orthogonal genetic algorithm with quantization for global numerical optimization, IEEE Transactions on Evolutionary Computation 5 (2001) 41–53.
[3] J. Kennedy, R. Eberhart, Particle swarm optimization, Proceedings of IEEE International Conference on Neural Networks 4 (1995) 1942–1948.
[4] R. Storn, K. Price, Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces, The Journal of Global Optimization 11 (1997) 341–359.
[5] O. Montiel, O. Castillo, P. Melin, A.R. Díaz, R. Sepulveda, Human evolutionary model: a new approach to optimization, Information Science 177 (2007) 2075–2098.
[6] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, The Journal of Global Optimization 39 (2007) 459–471.
[7] R. Venkata Rao, P.J. Pawar, Parameter optimization of a multi-pass milling process using non-traditional optimization algorithms, Applied Soft Computing 10 (2010) 445–456.
[8] K. Vijayakumar, G. Prabhaharan, P. Asokan, R. Saravanan, Optimization of multi-pass turning operations using ant colony system, International Journal of Machine Tools and Manufacture 43 (2003) 1633–1639.
[9] I.H. Osman, J.P. Kelly, Meta-Heuristics: Theory and Applications, Kluwer Academic Publishers, Norwell, MA, USA, 1996.
[10] Y. Shi, H.C. Liu, L. Gao, G.H. Zhang, Cellular particle swarm optimization, Information Science 181 (2011) 4460–4493.
[11] A. Hedar, M. Fukushima, Hybrid simulated annealing and direct search method for nonlinear unconstrained global optimization, Optimization Methods and Software 17 (2002) 891–912.
[12] W.Y. Gong, Z.H. Cai, C.X. Ling, DE/BBO: a hybrid differential evolution with biogeography-based optimization for global numerical optimization, Soft Computing 15 (2011) 645–665.
[13] A. Hedar, M. Fukushima, Tabu search directed by direct search methods for nonlinear global optimization, European Journal of Operational Research 170 (2006) 329–349.
[14] R. Hooke, T.A. Jeeves, "Direct search" solution of numerical and statistical problems, Journal of the ACM 8 (1961) 212–229.
[15] V. Torczon, On the convergence of pattern search algorithms, SIAM Journal on Optimization 7 (1997) 1–25.
[16] R.M. Lewis, V. Torczon, M.W. Trosset, Direct search methods: then and now, Journal of Computational and Applied Mathematics 124 (2000) 191–207.
[17] L.M. Hvattum, F. Glover, Finding local optima of high-dimensional functions using direct search methods, European Journal of Operational Research 195 (2009) 31–45.
[18] S. Sankaran, C. Audet, A.L. Marsden, A method for stochastic constrained optimization using derivative-free surrogate pattern search and collocation, Journal of Computational Physics 229 (2010) 4664–4682.
[19] T. Wu, Y.S. Yang, L.P. Sun, H. Shao, A heuristic iterated-subspace minimization method with pattern search for unconstrained optimization, Computers & Mathematics with Applications 58 (2009) 2051–2059.
[20] T. Wu, Solving unconstrained optimization problem with a filter-based nonmonotone pattern search algorithm, Applied Mathematics and Computation 203 (2008) 380–386.
[21] G.Y. Zhu, J.B. Wang, Research and improvement of free search algorithm, in: International Conference on Artificial Intelligence and Computational Intelligence, vol. 1, 2009, pp. 235–239.
[22] K. Penev, G. Littlefair, Free Search—a comparative analysis, Information Science 172 (2005) 173–193.
[23] R.M. Lewis, V. Torczon, Rank ordering and positive bases in pattern search algorithms, Institute for Computer Applications in Science and Engineering (1996) 71–96.
[24] E.D. Dolan, Pattern Search Behaviour in Nonlinear Optimization, Thesis, 1999.
[25] K. Penev, G. Littlefair, Free Search—a novel heuristic method, in: Proceedings of the PREP, 2003, pp. 133–134.
[26] P. Suganthan, N. Hansen, J. Liang, K. Deb, Y. Chen, A. Auger, S. Tiwari, Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-parameter Optimization, Nanyang Technological University, Singapore, 2005, pp. 1–50.
[27] S.L. Sabat, L. Ali, S.K. Udgata, Integrated learning particle swarm optimizer for global optimization, Applied Soft Computing 11 (2011) 574–584.
[28] Y.Q. Cai, J.H. Wang, J. Yin, Learning-enhanced differential evolution for numerical optimization, Soft Computing (2012), http://dx.doi.org/10.1007/s00500-011-0744-x.
[29] Z.H. Cai, W.Y. Gong, C.X. Ling, H. Zhang, A clustering-based differential evolution for global optimization, Applied Soft Computing 11 (2011) 1363–1379.
[30] R. Salomon, Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions, Biosystems 39 (1996) 263–278.
[31] J.J. Liang, P.N. Suganthan, K. Deb, Novel composition test functions for numerical global optimization, in: Proceedings of the 2005 IEEE Swarm Intelligence Symposium (SIS 2005), 2005, pp. 68–75.
[32] X. Yao, Y. Liu, G.M. Lin, Evolutionary programming made faster, IEEE Transactions on Evolutionary Computation 3 (1999) 82–102.
[33] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Transactions on Evolutionary Computation 10 (2006) 281–295.