Comparison and application of four versions of particle swarm optimization algorithms in the sequence optimization


Expert Systems with Applications 38 (2011) 8858–8864


Wei-Bo Zhang, Guang-Yu Zhu *

College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, Fujian 35002, PR China

Keywords: Particle swarm optimization algorithm; Sequence optimization; Global convergence; Hole machining; Drilling

Abstract: Particle swarm optimization (PSO) is a well-known optimization approach for dealing with discrete problems. Two models have been proposed for the operators of the PSO algorithm, one based on value exchange and the other on order exchange; accordingly, two versions of the PSO algorithm are formed. A version of the PSO algorithm based on order exchange that is capable of converging on the global optimization solution, using the method of regenerating the stopped-evolution particle, has been presented in our earlier studies. In this paper, we propose another version of the PSO algorithm, based on value exchange, using the same method. There thus exist four versions of PSO algorithms in total, each of which is briefly introduced, and their performance is compared on sequence optimization problems over fifty runs. The comparison shows that the PSO algorithm with global convergence characteristics based on order exchange outperforms the other versions of PSO in solving sequence optimization problems. © 2011 Elsevier Ltd. All rights reserved.

1. Introduction

With a computer numerically controlled (CNC) machine, ten or even more holes can be machined in one working procedure. For the same work-piece, different machining sequences can cause great differences in the processing time of holes machining. With the wide use of CNC machines, the need for sequence optimization algorithms for holes machining arises. A strategy that minimizes the processing time has a great impact on achieving the firm's objectives, so in order to cut down production cost, the best machining sequence of the drilling holes must be established. This kind of sequence optimization problem can be modelled as a travelling salesman problem (TSP) (Ghaiebi & Solimanpur, 2007; Onwubolu & Clerc, 2004). Some research has been carried out in this area. Göloğlu (2004) used an efficient heuristic, the tool change minimization algorithm belonging to the system's operation sequencing module, to find near-optimal operation sequences among all available process plans in a machining set-up. Kolahan and Liang (2000) employed the tabu search algorithm for holes machining path optimization; Zhang, Lu, and Zhu (2006), the genetic algorithm for process route optimization; and still others (Guo, Li, Mileham, & Owen, 2009; Guo, Mileham, Owen, & Li, 2006; Onwubolu & Clerc, 2004), the particle swarm optimization (PSO) algorithm for the problem of sequence optimization.

* Corresponding author. Tel.: +86 0591 87893179. E-mail address: [email protected] (G.-Y. Zhu).
0957-4174/$ - see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.eswa.2011.01.097

Recently, evolutionary-based optimization algorithms have become a research hotspot in the area of intelligent optimization; they have been developed to arrive at near-optimum solutions to large-scale optimization problems where traditional mathematical techniques may fail (Emad, Tarek, & Donald, 2005). Comparing five recent evolutionary-based optimization algorithms (genetic algorithms, memetic algorithms, particle swarm optimization, ant colony systems, and shuffled frog leaping), Emad et al. (2005) concluded that PSO is a promising optimization tool. The PSO algorithm, a modern evolutionary computation technique, was originally developed by Kennedy and Eberhart (1995) for continuous optimization problems. Recently, several successful studies have focused on discrete problems such as the travelling salesman problem (TSP), the project selection problem and job-shop scheduling problems (Guo et al., 2006; Rabbani, Aramoon, & Baharian, 2010; Sha & Lin, 2010). Different modified PSO algorithms have been used in the sequence optimization of machining. The adaptive particle swarm optimization (adaptive PSO) was used by Onwubolu and Clerc (2004) to solve the path optimization problem in automated drilling operations. The particle swarm optimization algorithm was also modified to solve operation sequencing optimization effectively; new operators, including mutation, crossover, and shift, were developed and incorporated in that modified PSO algorithm (Guo et al., 2006). There are two well-established models for the four operators of PSO algorithms, based respectively on value exchange and order


exchange. Accordingly, two versions of PSO algorithms are obtained: the standard PSO based on value exchange, and the standard PSO based on order exchange. In our earlier studies, a method for obtaining convergence on the global optimization solution was adopted, giving a modified algorithm: the PSO algorithm with global convergence characteristics based on order exchange. In this paper, the standard PSO based on value exchange is modified with the same method, and the PSO algorithm with global convergence characteristics based on value exchange is obtained. There thus exist four versions of PSO algorithms in total. In this paper, the four versions are applied to the sequence optimization of holes machining and to TSP problems. Section 2 describes the PSO algorithm for holes machining sequence optimization. Section 3 details each version of the PSO algorithm. Section 4 compares the performance of the four versions on two work-pieces and two TSP benchmark problems. Section 5 presents conclusions. Based on the results of the performance comparison, the PSO algorithm with global convergence characteristics based on order exchange proved to be the best PSO algorithm for sequencing of holes machining and for the TSP. Appendix A provides the position data of the above-mentioned four examples.

2. The particle swarm optimization algorithm for holes machining sequence optimization

The PSO was inspired by the social behaviour of a flock of migrating birds trying to reach an unknown destination, mimicking birds communicating together while flying. Each bird follows its specific direction and then, after practising and communicating together, they identify the bird in the best location. Accordingly, each bird speeds towards the best bird with a velocity depending on its current position. Each bird then explores the search space from its new position, and the process repeats until the flock reaches the desired destination. It is important to note that the process involves both social interaction and intelligence, so the birds learn from their own experience (local search) and also from the experience of others around them (global search) (Emad et al., 2005; Lee, 2008; Yannis & Magdalene, 2010).

In holes machining, the drill head moves from one hole to the next on a work-piece mounted on the CNC drill worktable. If the number of holes is S, the number of possible holes machining sequences is (S - 1)!/2. Different holes machining sequences have different travel distances, which can be used as the fitness function value. When the PSO algorithm solves the holes machining sequence optimization problem, natural numbers are used for the algorithm coding; that is, the code chain is formed from the labels of all holes. Each solution of PSO, namely a sequence of holes machining, is referred to as a particle, similar to a "bird" in the flock. The swarm, composed of a set of random particles, is initially "thrown" randomly inside the search space. Each particle has the following features: (a) its position and velocity are the key parameters; (b) each one knows its position and the fitness function value of this position; (c) each one remembers its best previous position; (d) each one knows the best previous position of all particles

8859

(the swarm) and its fitness function value. For a group of N random particles (solutions), the ith particle is regarded as a point in the S-dimensional search space, where S is the number of variables (holes). At time step t, particle i maintains three values: its current position xi(t); the best position it reached in previous cycles pi(t); and its flying velocity vi(t). These are represented as follows:

xi(t) = (hi1(t), hi2(t), ..., him(t), ..., his(t))   (current position)
pi(t) = (pi1(t), pi2(t), ..., pim(t), ..., pis(t))   (best previous position)
vi(t) = (vi1(t), vi2(t), ..., vim(t), ..., vis(t))   (flying velocity)    (1)

where him(t) and pim(t) are the labels of the mth hole of particle i at time step t, and vim(t) is the mth item of vi(t). At time step t, the best position the swarm has reached, pg(t), is determined by the best fitness. Accordingly, each particle updates its velocity vi(t + 1) to catch up with the two positions pi(t) and pg(t), as follows:

vi(t + 1) = (ω ⊗ vi(t)) ⊕ [c1 ⊗ (pi(t) ⊖ xi(t))] ⊕ [c2 ⊗ (pg(t) ⊖ xi(t))]    (2)

With the new velocity vi(t + 1), the particle’s updated position is given as:

xi(t + 1) = xi(t) + vi(t + 1)    (3)

where xi(t + 1) refers to the position of particle i at time step t + 1; ω is the inertia weight; and c1, c2 are influence factors in the range [0, 1]. Eqs. (2) and (3) are the basic description of the PSO algorithm for sequence optimization; ⊗, ⊕, ⊖ and + are the four operators of the algorithm.

3. Four versions of PSO

In Clerc (2006), a discrete PSO algorithm was presented and illustrated on the Traveling Salesman Problem. There are four operators in this version of the algorithm. Through analyzing the four operators and examples, we found that this discrete PSO algorithm is based on value exchange, so we named it the standard PSO based on value exchange (abbreviated vePSO). In our studies (Zhu & Zhang, 2008), we proposed a modified PSO algorithm for drilling path optimization whose four operators are based on order exchange, so we named it the standard PSO based on order exchange (abbreviated oePSO). At the same time, we modified this PSO algorithm to obtain the ability of convergence on the global optimization solution, forming a new version named the PSO algorithm with global convergence characteristics based on order exchange (abbreviated oePSOgc). In this paper, the same method is applied to the standard PSO based on value exchange to form a new version named the PSO algorithm with global convergence characteristics based on value exchange (abbreviated vePSOgc). There thus exist four versions of the PSO algorithm for the sequence optimization of holes machining, shown in Table 1.

Table 1
Four versions of particle swarm optimization.

                                Operators based on value exchange   Operators based on order exchange
Standard PSO algorithm          vePSO                               oePSO
PSO algorithm with global
convergence characteristics     vePSOgc                             oePSOgc

(vePSO: the standard PSO algorithm based on value exchange; oePSO: the standard PSO algorithm based on order exchange; vePSOgc: the PSO algorithm with global convergence characteristics based on value exchange; oePSOgc: the PSO algorithm with global convergence characteristics based on order exchange.)
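Before the four versions are detailed, the skeleton they share (the update rules of Eqs. (2) and (3), plus the particle regeneration used by the "gc" variants of Section 3.2) can be sketched in Python. This is a non-authoritative illustration, not the paper's C implementation: every function and parameter name here is ours, and the four operators are injected as plain functions so that one loop serves both the value-exchange and the order-exchange models. The concrete operator set at the bottom uses index-based exchanges for illustration.

```python
import random

# Sketch of the discrete PSO skeleton shared by the four versions in
# Table 1 (Eqs. (2) and (3)). Illustrative only; names are ours.
def discrete_pso(fitness, rand_pos, move, sub, add, mul,
                 n=20, dims=6, gens=200, w=1.0, regenerate=False):
    xs = [rand_pos(dims) for _ in range(n)]   # current positions x_i(t)
    vs = [[] for _ in range(n)]               # velocities start as empty exchange lists
    ps = [list(x) for x in xs]                # personal bests p_i(t)
    pg = min(ps, key=fitness)                 # swarm best p_g(t)
    for _ in range(gens):
        for i in range(n):
            c1, c2 = random.random(), random.random()   # c1, c2 in [0, 1]
            # Eq. (2): v = (w (x) v) (+) [c1 (x) (p_i (-) x)] (+) [c2 (x) (p_g (-) x)]
            vs[i] = add(mul(w, vs[i]),
                        add(mul(c1, sub(ps[i], xs[i])),
                            mul(c2, sub(pg, xs[i]))))
            xs[i] = move(xs[i], vs[i])        # Eq. (3): x = x + v
            if fitness(xs[i]) < fitness(ps[i]):
                ps[i] = list(xs[i])
            # "gc" method: a particle whose x, p_i and p_g superpose has
            # stopped evolving and is thrown back into the space at random.
            if regenerate and xs[i] == ps[i] == pg:
                xs[i], vs[i] = rand_pos(dims), []
                if fitness(xs[i]) < fitness(ps[i]):
                    ps[i] = list(xs[i])
        pg = min(ps + [pg], key=fitness)
    return pg

# One concrete operator set (index-based exchanges, for illustration only):
def rand_pos(d):
    p = list(range(d)); random.shuffle(p); return p

def move(x, v):                               # apply each exchange in turn
    x = list(x)
    for i, j in v:
        x[i], x[j] = x[j], x[i]
    return x

def sub(p, x):                                # exchange list turning x into p
    x, v = list(x), []
    for i in range(len(x)):
        if x[i] != p[i]:
            j = x.index(p[i])
            v.append((i, j)); x[i], x[j] = x[j], x[i]
    return v

def add(v1, v2):                              # concatenate exchange lists
    return list(v1) + list(v2)

def mul(c, v):                                # keep each exchange with probability c
    return [u for u in v if random.random() < c]
```

With a toy fitness such as the distance of a permutation from the identity, the loop drives the swarm toward the optimum; the fitness functions actually used in Section 4 are the path lengths of Eqs. (5) and (7).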


In the following sections, these four versions of the PSO algorithm are introduced respectively; in particular, the operators of each algorithm and the method for obtaining the global convergence ability are described in detail.

3.1. The vePSO algorithm

In the vePSO of Clerc (2006), the four operators are Move ("position plus velocity"), Subtraction ("position minus position"), Addition ("velocity plus velocity") and Multiplication ("coefficient times velocity"). The operators are defined as follows.

3.1.1. Move "position plus velocity"
Let x1 denote a position and v a velocity. Applying the first transposition of v to x1, then the second transposition of v to the result of the first step, and so on, yields the new position x2. This operator is expressed by the symbol "+", namely x2 = x1 + v.

Example 1. Suppose position x1 = (3, 1, 2, 4, 5) and velocity v = ((1, 3), (2, 3)). Applying v to x1, we obtain successively:

x1 = (3, 1, 2, 4, 5) →(1,3) (1, 3, 2, 4, 5) →(2,3) (1, 2, 3, 4, 5) = x2.

3.1.2. Subtraction "position minus position"
Let x1 and x2 denote two positions. The difference x2 ⊖ x1 is a velocity v, obtained by a given method such that applying v to x1 yields x2. This operator is expressed by the symbol "⊖".

Example 2. Suppose position x1 = (1, 4, 2, 5, 3) and x2 = (1, 2, 3, 4, 5). Then velocity v = x2 ⊖ x1 = (1, 2, 3, 4, 5) ⊖ (1, 4, 2, 5, 3) = ((2, 4), (3, 4), (4, 5)): in x1, exchange 2 and 4, then 3 and 4, and finally 4 and 5. It is easy to see that the final result is indeed x2:

x1 = (1, 4, 2, 5, 3) →(2,4) (1, 2, 4, 5, 3) →(3,4) (1, 2, 3, 5, 4) →(4,5) (1, 2, 3, 4, 5) = x2.

3.1.3. Addition "velocity plus velocity"
Let v1 and v2 denote two velocities. To compute v1 plus v2, we form the list of transpositions containing the elements of v1 followed by the elements of v2. This operator is expressed by the symbol "⊕".

3.1.4. Multiplication "coefficient times velocity"
Let c denote a real coefficient, limited to [0, 1], and v a velocity. We "truncate" v with probability c to get cv. This operator is expressed by the symbol "⊗".

3.2. The vePSOgc algorithm

In the standard PSO, when a particle is located at the best position of the swarm, it may remain motionless and stop evolving, or it may keep searching with its current velocity; the global searching capability of the algorithm is thereby weakened. To improve the capability of global searching, the PSO algorithm described in Section 3.1 is improved so that the new PSO converges on the global optimization solution and can be used in discrete space; this yields a new version of the PSO algorithm, the vePSOgc algorithm. The method is as follows. When the swarm has evolved for a certain number of generations, at least one particle is located at the best position of the swarm and will stop evolving; that particle is then handled by the following method to reinforce the global convergence of the algorithm.

For the jth particle at time step t, when the three positions xj(t), pj(t) and pg(t) superpose, the jth particle will stop evolving or continue searching with its current velocity. To improve the convergence of the algorithm, pg(t) is kept as the best position of the particle swarm, the position xj(t + 1) is generated randomly again in the S-dimensional search space, pj(t + 1) is updated, the positions of the other particles i are obtained by Eqs. (2) and (3) at time step t + 1, and then pg(t + 1) is updated. Three situations can arise after generating the new particle randomly:

(1) If positions pj(t + 1) and pg(t + 1) superpose, the randomly generated particle j is at the best position and cannot keep evolving according to Eqs. (2) and (3); the position xj(t + 1) is therefore generated randomly again in the S-dimensional search space, pj(t + 1) is updated, the other particles evolve according to Eqs. (2) and (3), and finally pi(t + 1) and pg(t + 1) are updated.

(2) If positions pj(t + 1) and pg(t + 1) do not superpose and pg(t + 1) is not updated, all particles evolve according to Eqs. (2) and (3).

(3) If positions pj(t + 1) and pg(t + 1) do not superpose, yet pg(t + 1) is already updated, there is a particle k, k ≠ j, whose positions xk(t + 1), pk(t + 1) and pg(t + 1) superpose, so particle k stops evolving. Particle k is then generated randomly again in the search space, pk(t + 1) is updated, the other particles evolve according to Eqs. (2) and (3), and finally pi(t + 1) and pg(t + 1) are updated.

With this method, within a certain number of evolutionary generations there exists at least one particle j whose positions xj(t), pj(t) and pg(t) superpose; at least one particle is thus regenerated randomly in the solution space, and the global convergence capability of the new algorithm is consequently reinforced.

3.3. The oePSO algorithm

In our studies, the oePSO was presented, whose operators are based on the order exchange unit (OEU) and the order exchange list (OEL). The operators are introduced briefly here.

3.3.1. Position plus velocity
An OEU is made up of two serial numbers of the holes machining sequence; for example, (1, 3) means the first position and the third position are related. An OEL is an ordered queue constituted by one or more OEUs. The OEL is regarded as the velocity, and a new position is obtained by "position plus OEL". The operator is expressed by the symbol "+".

Example 3. Suppose position x1 = (3, 1, 2, 4, 5) and OEL = ((1, 3), (2, 3)); then x2 = x1 + OEL. Applying the OEL to x1, we obtain successively:

x1 = (3, 1, 2, 4, 5) →(1,3) (2, 1, 3, 4, 5) →(2,3) (2, 3, 1, 4, 5) = x2.

Here, in x1, the serial numbers 1 and 3 of the first OEU correspond to the values "3" and "2"; exchanging values "3" and "2", and then "1" and "3", yields x2.

3.3.2. Position minus position
The result of "position minus position" is an OEL. Identical new positions may be obtained by applying different OELs to the same position x; all OELs with the same effect are called the equivalent assembly of the OEL, and the OEL with the fewest OEUs in the equivalent assembly is called the basic OEL.


The basic OEL is constructed as follows:

Example 4. Suppose position x1 = (1, 4, 2, 5, 3) and x2 = (1, 2, 3, 4, 5); the basic OEL is constructed so that x1 + OEL = x2. In x1 and x2, x2(2) = x1(3) = "2", so the first order exchange unit OEU(2, 3) is obtained and x1' = x1 + OEU(2, 3) = (1, 2, 4, 5, 3). Then x2(3) = x1'(5) = "3", so the second unit OEU(3, 5) is obtained and x1'' = x1' + OEU(3, 5) = (1, 2, 3, 5, 4). Then x2(4) = x1''(5) = "4", so the third unit OEU(4, 5) is obtained and x1''' = x1'' + OEU(4, 5) = (1, 2, 3, 4, 5), namely x1''' = x2. A basic OEL is thus constructed through the above process:

OEL = x2 ⊖ x1 = ((2, 3), (3, 5), (4, 5)).

The operator is expressed by the symbol "⊖".

3.3.3. Velocity plus velocity
Several OELs can be united to form a new OEL; the operator is expressed by the symbol "⊕".

Example 5. Suppose OEL1 = (OEU3, OEU2) and OEL2 = (OEU5, OEU4); then OEL1 ⊕ OEL2 = (OEU3, OEU2, OEU5, OEU4). This process can be considered as "velocity plus velocity".

3.3.4. Coefficient times velocity
The OEL is multiplied by a coefficient c (c ∈ [0, 1]), which means the OEUs in the OEL are kept with probability c. The operator is expressed by the symbol "⊗".

3.4. The oePSOgc algorithm

For the same reason as in Section 3.2, the oePSO algorithm needs improving to ensure that it converges on the global optimization solution.

Thus a new version of the PSO algorithm, named oePSOgc, is formed with the method introduced in Section 3.2: when the swarm has evolved for a certain number of generations, if at least one particle is located at the best position of the swarm and may stop evolving, then at least one particle is regenerated randomly in the solution space, and the global convergence of the algorithm is reinforced.

4. Comparison among the four versions of PSO algorithms

In this section, the performance of the four versions of PSO algorithms for holes machining is compared by applying them to optimization problems. The comparison was implemented on a personal computer: we programmed the algorithms in the C programming language and ran them on an Intel P4 3.0 GHz with 512 MB RAM. The comparison uses the data of two work-pieces and two TSP benchmark problems. Fifty runs of each algorithm were performed for each experiment; the mean fitness and the number of function evaluations from these runs are presented in Tables 2–6.

4.1. The description of test problems and fitness functions

The data of two work-pieces and two TSP problems were used as test problems. The work-piece data from Zhu and Zhang (2008) consist of one 9-holes work-piece problem and one 14-holes work-piece problem; the coordinates of the holes are given in Appendices A.1 and A.2. The two TSP benchmark problems from Li (2004) consist of one 20-nodes TSP problem and one 30-nodes TSP problem named the Oliver30 problem; the coordinates of the nodes are given in Appendices A.3 and A.4.
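The contrast between the two operator models of Section 3 can be made concrete with a short Python sketch (the function names are ours, not the paper's) reproducing Examples 1, 3 and 4: the same list of pairs ((1, 3), (2, 3)) moves a position differently depending on whether the pairs exchange values (vePSO) or positions (oePSO).

```python
def apply_value_exchange(x, v):
    """vePSO Move: each pair (a, b) swaps the entries whose values are a and b."""
    x = list(x)
    for a, b in v:
        i, j = x.index(a), x.index(b)
        x[i], x[j] = x[j], x[i]
    return x

def apply_order_exchange(x, oel):
    """oePSO Move: each OEU (i, j) swaps the entries at 1-based positions i and j."""
    x = list(x)
    for i, j in oel:
        x[i - 1], x[j - 1] = x[j - 1], x[i - 1]
    return x

def basic_oel(x1, x2):
    """Basic OEL with x1 + OEL = x2, built left to right as in Example 4."""
    x, oel = list(x1), []
    for i in range(len(x)):
        if x[i] != x2[i]:
            j = x.index(x2[i])
            oel.append((i + 1, j + 1))        # 1-based OEU
            x[i], x[j] = x[j], x[i]
    return oel

x1 = [3, 1, 2, 4, 5]
print(apply_value_exchange(x1, [(1, 3), (2, 3)]))   # Example 1: [1, 2, 3, 4, 5]
print(apply_order_exchange(x1, [(1, 3), (2, 3)]))   # Example 3: [2, 3, 1, 4, 5]
print(basic_oel([1, 4, 2, 5, 3], [1, 2, 3, 4, 5]))  # Example 4: [(2, 3), (3, 5), (4, 5)]
```

The printed results match the worked examples in Sections 3.1 and 3.3, which is the whole point: one exchange list, two different semantics.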

Table 2
Success ratio (%).

Algorithm   9-holes work-piece   14-holes work-piece   20-nodes TSP   Oliver30
vePSO       16                   0                     0              0
oePSO       36                   12                    18             0
vePSOgc     50                   0                     0              0
oePSOgc     76                   32                    50             4

Table 3
The best result of the four versions of algorithms (mm); best known results are marked with an asterisk.

Algorithm   9-holes work-piece   14-holes work-piece   20-nodes TSP   Oliver30
vePSO       322.5*               286.29                29.236         583.974
oePSO       322.5*               280.0*                24.526*        434.49
vePSOgc     322.5*               287.06                26.794         581.147
oePSOgc     322.5*               280.0*                24.526*        423.741*

Table 4
The average results including the best known result (mm); the value in brackets is the average excluding the best known result.

Algorithm   9-holes work-piece   14-holes work-piece   20-nodes TSP      Oliver30
vePSO       346.12 (350.62)      355.94                33.886            707.777
oePSO       334.81 (341.73)      299.22 (301.84)       26.183 (26.547)   484.728
vePSOgc     329.26 (336.02)      312.19                31.112            692.735
oePSOgc     325.70 (335.83)      290.97 (296.13)       25.325 (26.124)   477.339 (477.489)

Table 5
The average evolution generation number when the algorithms reach the best known result.

Algorithm   9-holes work-piece   14-holes work-piece   20-nodes TSP   Oliver30
vePSO       7                    –                     –              –
oePSO       20                   847                   9448           –
vePSOgc     2211                 –                     –              –
oePSOgc     1037                 1764                  9643           15305

Table 6
The minimum evolution generation number when the algorithms reach the best known result.

Algorithm   9-holes work-piece   14-holes work-piece   20-nodes TSP   Oliver30
vePSO       4                    –                     –              –
oePSO       4                    94                    4395           –
vePSOgc     5                    –                     –              –
oePSOgc     2                    104                   4062           10667


For the two work-piece examples, the start hole and the end hole of a solution do not superpose. During holes machining, the worktable of the numerically controlled machine tool is moved in the x and y directions by the x- and y-axis motors, so the distance between two holes is given as the Manhattan distance:

dij = |xi - xj| + |yi - yj|.    (4)

During the machining operation, the total moving distance of the worktable is the sum, over consecutive holes, of the absolute distances moved in the x and y directions. The total distance is given as:

∑ dij = ∑ |xi - xj| + ∑ |yi - yj|.    (5)
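As a check on Eqs. (4) and (5), the open-path Manhattan fitness can be sketched in a few lines of Python (the helper name and the toy coordinates are ours, not the paper's):

```python
def manhattan_path_length(seq, coords):
    """Eq. (5): total worktable travel for an open machining sequence.

    coords maps a hole label to its (x, y) position; the path is open,
    i.e. the start hole and the end hole do not superpose.
    """
    return sum(abs(coords[a][0] - coords[b][0]) + abs(coords[a][1] - coords[b][1])
               for a, b in zip(seq, seq[1:]))

# A toy 4-hole square, 10 mm on a side (illustrative data only):
coords = {1: (0.0, 0.0), 2: (0.0, 10.0), 3: (10.0, 10.0), 4: (10.0, 0.0)}
print(manhattan_path_length([1, 2, 3, 4], coords))  # 30.0
```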

The total distance in Eq. (5) is used as the fitness function for the 9-holes and 14-holes work-piece problems in this paper. The minimum distance for the 9-holes work-piece problem is 322.5 mm and for the 14-holes work-piece problem, 280 mm. For the two TSP benchmark problems, the start node and the end node superpose. The distance between two nodes is given as the Euclidean distance:

dij = √((xi - xj)² + (yi - yj)²).    (6)

Fig. 1. The fitness value changing curve corresponding to the best result of the 9-holes work-piece problem.

The distance of a tour is given as:

∑ dij = ∑ √((xi - xj)² + (yi - yj)²).    (7)
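Similarly, the closed-tour Euclidean fitness of Eqs. (6) and (7) can be sketched as follows (the helper name and the unit-square data are ours, for illustration):

```python
from math import hypot

def euclidean_tour_length(tour, coords):
    """Eq. (7): length of a closed TSP tour.

    coords maps a node label to its (x, y) position; the start node and
    the end node superpose, so the edge back to the start is included.
    """
    closed = list(tour) + [tour[0]]
    return sum(hypot(coords[a][0] - coords[b][0], coords[a][1] - coords[b][1])
               for a, b in zip(closed, closed[1:]))

# A unit square (illustrative data): the optimal tour has length 4.0.
coords = {1: (0.0, 0.0), 2: (0.0, 1.0), 3: (1.0, 1.0), 4: (1.0, 0.0)}
print(euclidean_tour_length([1, 2, 3, 4], coords))  # 4.0
```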

Eq. (7) is used as the fitness function for the two TSP benchmark problems in this paper. For the 20-nodes TSP problem, the best distance of a tour is 24.526 mm¹ and for the Oliver30 problem, 423.741 mm (Dorigo, Maniezzo, & Colorni, 1996).

¹ In Li (2004), the best distance of a tour is given as 24.523 mm, with the tour 0, 13, 10, 3, 7, 9, 14, 18, 6, 17, 15, 4, 12, 19, 5, 16, 8, 1, 11, 2, 0; the distance calculated with this tour, however, is 24.526 mm, and the same result, 24.526 mm, was often obtained by the PSO algorithms.

4.2. Parameter setting

In our experiments, the maximum generation number is used as the termination criterion; the evolution stops when it is reached. For the 9-holes and 14-holes work-piece problems, the maximum generation number is set to 10000; for the 20-nodes problem and the Oliver30 problem, to 20000. For the 9-holes and 14-holes work-piece problems, the swarm population size is set to 150; for the 20-nodes TSP problem, to 400; and for the Oliver30 problem, to 800. Parameters c1 and c2 are generated randomly in the range [0, 1], and ω is set to 1.0.

4.3. Results and discussions

The results obtained in solving the four test problems with the four versions of PSO algorithms are summarized in Tables 2–6 and Figs. 1–4. The performance of the different versions was compared by four criteria: (1) the percentage of success, represented by the number of trials in which the objective function reaches its known target value; (2) the best result of the four algorithms; (3) the average value of the solutions obtained in all trials; (4) the average evolution number when a trial reaches the known target value.

The performance of oePSOgc generally outperforms the other three versions. As shown in Tables 2 and 3, when these problems are solved with oePSOgc, the best known result is obtained every time, and with oePSO, in three of the test problems. vePSO and vePSOgc reach the best known result only for the small-scale problem (the first problem). In terms of success rate, oePSOgc evidently outperforms the other algorithms. The ability of each algorithm to reach the local optimum value is shown in Table 4; the results also indicate that oePSOgc performs better than the other algorithms, its local optimum value being the one nearest to the best known result.

The PSO algorithms based on order exchange are superior to those based on value exchange. As shown in Table 2, in terms of success rate, oePSOgc performs better than vePSOgc, and oePSO better than vePSO. As shown in Table 4, for the four test problems, in terms of local optimum value, oePSOgc is superior to vePSOgc, and oePSO outperforms vePSO.

The method of regenerating the stopped-evolution particle, used to obtain convergence on the global optimization solution, is beneficial to vePSOgc and oePSOgc. As shown in Table 2, the success rate of oePSOgc is more than twice that of oePSO, and the success rate of vePSOgc more than three times that of vePSO. As shown in Table 4, for the four test problems, in terms of local optimum value, oePSOgc performs better than oePSO, and vePSOgc better than vePSO.

In Table 5, the average evolution number of oePSOgc and vePSOgc is larger than that of oePSO and vePSO. The reason is that, during evolution, if the algorithm converged to a local optimum value, then after some cycles the situation described in Section 3.2 occurred: a new particle having nothing to do with xj(t), pj(t) and pg(t) was created, and some further cycles were needed to obtain the final global optimum. For oePSO and vePSO, the cycle number is small, but so is the success rate: in some trial runs the best known result is reached luckily by using Eqs. (2) and (3), but in most trial runs only the local optimum value pg(t) is obtained, and the algorithm finally converges to it.

In Table 2, the success ratio for the 20-nodes TSP problem is larger than for the 14-holes work-piece problem. The reason is that, when solving the 20-nodes TSP problem, the swarm population size was increased by more than 3 times and the iterations by more than 2 times. The effect of swarm population size has been studied extensively by Carlisle and Dozier (2001) and Jaco and Albert (2005). Using five functions as test problems, Carlisle and Dozier (2001) found that all the swarms showed a reduction in the number of iterations required to solve the functions (except the Griewank function) when the population increased; they consider that, in general, more particles search more space, so a solution is found sooner. In this paper, the swarm population size and the iterations for the 20-nodes TSP problem were both increased, so the best known result could be obtained with higher probability.

The fitness value changing curves corresponding to the best results of the four versions of algorithms in Table 3 are shown in Figs. 1–4. If several best results were obtained when using one algorithm to solve a problem, the curve is plotted for the trial with the minimum evolution number; the minimal evolution number achieved when an algorithm reaches the best known result is listed in Table 6. In all figures, a semi-log plot is used: a base 10 logarithmic scale for the x-axis and a linear scale for the y-axis. For the 9-holes work-piece problem, shown in Fig. 1, the difference among the four algorithms cannot be observed because each algorithm converged to the best known result quickly (the generation number is no more than 5, as shown in Table 6). The other figures show that the fitness value of the oePSO and oePSOgc algorithms changes evenly (the solid lines in the figures), while that of the vePSO and vePSOgc algorithms changes very sharply (the dash-dot lines); the best known result is obtained easily when using the oePSO or oePSOgc algorithms. It is interesting that the behaviour of each algorithm version was consistent across all test problems. In particular, the oePSOgc algorithm generally outperforms all the other algorithms on all the test problems in terms of solution quality. Accordingly, it can be concluded that oePSOgc is a promising optimization tool.

Fig. 2. The fitness value changing curve corresponding to the best result of the 14-holes work-piece problem.

Fig. 3. The fitness value changing curve corresponding to the best result of the 20-nodes TSP problem.

Fig. 4. The fitness value changing curve corresponding to the best result of the Oliver30 problem.

5. Conclusion

The PSO algorithm is a useful optimization tool for solving discrete optimization problems. In this paper, four versions of PSO algorithms were introduced individually, and C programs were written to implement them. Two work-piece problems and two TSP benchmark problems were selected to test the performance of the four versions. The performance comparison shows that the oePSOgc algorithm outperforms the other versions of PSO in solving sequence optimization problems in terms of success rate and solution quality.

Acknowledgments

This work is partially supported by the Supporting Program for New Century Talents in University of Fujian Province (XSJRC200708), the Natural Science Foundation of Fujian Province (2009J01246) and the Project of the Office of Education of Fujian Province (JA09030).

Appendix A

In Appendices A.1 and A.2, the data form is {hole number, x-coordinate, y-coordinate}: the integer gives the number of the respective hole and the real numbers give its coordinates. In Appendices A.3 and A.4, the data form is {city number, x-coordinate, y-coordinate}: the integer gives the number of the respective city and the real numbers give its coordinates.

A.1. The data of the 9-holes work-piece problem

{1, 24.75, 24.75}, {2, 37.5, 0.0}, {3, 24.75, 24.75}, {4, 24.75, 24.75}, {5, 39.38, 5.36}, {6, 24.75, 24.75}, {7, 62.0, 37.0}, {8, 52.34, 13.53}, {9, 62.0, 37.0}.


A.2. The data of the 14-holes work-piece problem

{1, 10.0, 10.0}, {2, 10.0, 60.0}, {3, 18.0, 53.5}, {4, 18.0, 42.5}, {5, 32.32, 12.66}, {6, 37.71, 43.6}, {7, 37.71, 43.6}, {8, 62.29, 43.6}, {9, 62.29, 26.4}, {10, 90.0, 10.0}, {11, 82.0, 16.5}, {12, 82.0, 27.5}, {13, 72.59, 55.75}, {14, 90.0, 60.0}.

A.3. The data of the 20-nodes TSP problem

{1, 5.294, 1.588}, {2, 4.286, 3.622}, {3, 4.719, 2.744}, {4, 4.185, 2.23}, {5, 0.915, 3.821}, {6, 4.771, 6.041}, {7, 1.524, 2.871}, {8, 3.447, 2.111}, {9, 3.718, 3.665}, {10, 2.649, 2.556}, {11, 4.399, 1.194}, {12, 4.660, 2.949}, {13, 1.232, 6.440}, {14, 5.036, 0.244}, {15, 2.710, 3.140}, {16, 1.072, 3.454}, {17, 5.855, 6.203}, {18, 0.194, 1.862}, {19, 1.762, 2.693}, {20, 2.682, 6.097}.

A.4. The data of the Oliver30 problem

{1, 41.0, 94.0}, {2, 37.0, 84.0}, {3, 54.0, 67.0}, {4, 25.0, 62.0}, {5, 7.0, 64.0}, {6, 2.0, 99.0}, {7, 68.0, 58.0}, {8, 71.0, 44.0}, {9, 54.0, 62.0}, {10, 83.0, 69.0}, {11, 64.0, 60.0}, {12, 18.0, 54.0}, {13, 22.0, 60.0}, {14, 83.0, 46.0}, {15, 91.0, 38.0}, {16, 25.0, 38.0}, {17, 24.0, 42.0}, {18, 58.0, 69.0}, {19, 71.0, 71.0}, {20, 74.0, 78.0}, {21, 87.0, 76.0}, {22, 18.0, 40.0}, {23, 13.0, 40.0}, {24, 82.0, 7.0}, {25, 62.0, 32.0}, {26, 58.0, 35.0}, {27, 45.0, 21.0}, {28, 41.0, 26.0}, {29, 44.0, 35.0}, {30, 4.0, 50.0}.

References

Carlisle, A., & Dozier, G. (2001). An off-the-shelf PSO. In Proceedings of the 2001 workshop on particle swarm optimization (Vol. 1, pp. 1–6). Indianapolis, IN, USA; Piscataway, NJ, USA: IEEE Service Centre.

Clerc, M. (2006). Discrete particle swarm optimization illustrated by the traveling salesman problem. Available online; accessed 05.04.06.

Dorigo, M., Maniezzo, V., & Colorni, A. (1996). Ant system: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 26(1), 29–41.

Emad, E., Tarek, H., & Donald, G. (2005). Comparison among five evolutionary-based optimization algorithms. Advanced Engineering Informatics, 19(1), 43–53.

Ghaiebi, H., & Solimanpur, M. (2007). An ant algorithm for optimization of hole-making operations. Computers & Industrial Engineering, 52(2), 308–319.

Göloğlu, C. (2004). A constraint-based operation sequencing for a knowledge-based process planning. Journal of Intelligent Manufacturing, 15(4), 463–470.

Guo, Y. W., Li, W. D., Mileham, A. R., & Owen, G. W. (2009). Applications of particle swarm optimization in integrated process planning and scheduling. Robotics and Computer-Integrated Manufacturing, 25(2), 280–288.

Guo, Y. W., Mileham, A. R., Owen, G. W., & Li, W. D. (2006). Operation sequencing optimization using a particle swarm optimization approach. Proceedings of the IMechE, Part B: Journal of Engineering Manufacture, 220, 1945–1958.

Jaco, F. S., & Albert, Groenwold A. (2005). A study of global optimization using particle swarms. Journal of Global Optimization, 31(1), 93–108.

Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Proceedings of the IEEE international conference on neural networks (Vol. IV, pp. 1942–1948). Perth, Australia; Piscataway, NJ: IEEE Service Centre.

Kolahan, F., & Liang, M. (2000). Optimization of hole-making operations: A tabu-search approach. International Journal of Machine Tools and Manufacture, 40(12), 1735–1753.

Lee, Z.-J. (2008). A novel hybrid algorithm for function approximation. Expert Systems with Applications, 34, 384–390.

Li, S. Y. (2004). Ant colony algorithms with applications. Harbin: Harbin Institute of Technology Press. In Chinese.

Onwubolu, G. C., & Clerc, M. (2004). Optimal path for automated drilling operations by a new heuristic approach using particle swarm optimization. International Journal of Production Research, 42(3), 473–491.

Rabbani, M., Aramoon Bajestani, M., & Baharian Khoshkhou, G. (2010). A multi-objective particle swarm optimization for project selection problem. Expert Systems with Applications, 37, 315–321.

Sha, D. Y., & Lin, H.-H. (2010). A multi-objective PSO for job-shop scheduling problems. Expert Systems with Applications, 37, 1065–1070.

Marinakis, Y., & Marinaki, M. (2010). A hybrid genetic–particle swarm optimization algorithm for the vehicle routing problem. Expert Systems with Applications, 37, 1446–1455.

Zhang, W. B., Lu, Z. H., & Zhu, G. Y. (2006). Optimization of process route by genetic algorithms. Robotics and Computer-Integrated Manufacturing, 22(2), 180–188.

Zhu, G. Y., & Zhang, W. B. (2008). Drilling path optimization by the particle swarm optimization algorithm with global convergence characteristics. International Journal of Production Research, 46(8), 2299–2311.