An improved electromagnetism-like algorithm for numerical optimization

Jian-Ding Tan a,*, Mahidzal Dahari a, Siaw-Paw Koh b, Ying-Ying Koay b, Issa-Ahmed Abed c

a Department of Mechanical Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
b College of Engineering, Universiti Tenaga Nasional, 43000 Kajang, Selangor, Malaysia
c Engineering Technical College Basrah, Southern Technical University, Iraq

Article info

Article history: Received 24 October 2015; Received in revised form 28 December 2015; Accepted 31 May 2016; Available online xxxx. Communicated by G. Dowek.

Keywords: Electromagnetism-like Mechanism algorithm; Global optimization; Meta-heuristic; Split-Probe-Compare

Abstract

This paper presents a new Electromagnetism-like Mechanism (EM) algorithm with a Split, Probe and Compare feature (SPC-EM). The proposed algorithm replaces the local search segment of a standard EM with a new search scheme named Split, Probe, and Compare (SPC). A nonlinear equation is designed to systematically and dynamically adjust the length of the probes based on the outcome of the Compare segment in each iteration. Extensive computational simulations and comparisons on 10 different benchmark problems from the literature were carried out. Results show that the new modified mechanism outperformed all other algorithms involved in the benchmarking. We thus conclude that the proposed SPC-EM works well with the designed probe-length tuning equation in solving numerical optimization problems.

© 2016 Published by Elsevier B.V.

1. Introduction

In the early 1960s, computer scientists attempted to implement evolutionary concepts in solving engineering optimization problems, and from this work the genetic algorithm (GA) was born [1]. Since then, optimization algorithms have evolved from local optima search to algorithms with better exploration of global optima. Over the past few decades, researchers around the world have proposed many meta-heuristic search techniques for complex global optimization problems, as well as ways to improve them. Many of these are nature-inspired, for example particle swarm optimization (PSO) [2], differential evolution (DE) [3], and, more recently, the Electromagnetism-like Mechanism algorithm.

The Electromagnetism-like Mechanism algorithm (EM) is a relatively new meta-heuristic search algorithm [4] first introduced by Birbil and Fang [5]. It is inspired by the attraction and repulsion mechanism of charges in electromagnetism theory and solves unconstrained nonlinear optimization problems in a continuous domain. Due to its capability to yield well-diversified results and solve complicated global optimization problems [6,7], EM has been widely used as an optimization tool in numerous fields such as single machine scheduling [7], green energy harvesting [8], maximum betweenness problems [9], machine path planning [10], inverse kinematics for robot manipulators [11], and many more.

* Corresponding author. Tel.: +60 016 5576 736. E-mail addresses: [email protected], [email protected] (J.-D. Tan).

http://dx.doi.org/10.1016/j.tcs.2016.05.045


Fig. 2.1. Total force exerted on Qa by Qb and Qc .

The search mechanism of EM can generally be divided into exploration and exploitation segments. The exploration segment performs a global search by moving the particles in accordance with the superposition theorem. The exploitation segment involves a local search procedure which gathers information around the neighborhood of a particular solution. Several modifications have been suggested in the literature that adapt other search methods into the local or global search sections of EM. Sels and Vanhoucke introduced a fusion of EM and a tabu search procedure for single machine scheduling problems [12], Yurtkuran proposed a hybrid of EM and a random-key procedure to solve capacitated vehicle routing problems [13], and Jamili et al. successfully combined EM and Simulated Annealing (SA) to solve periodic job shop scheduling problems [14], to name a few. Most of the proposed hybrids have proven able to provide competitive results in their respective fields of application.

Even though EM has shown good performance in solving various types of complex optimization problems, there is still room for improvement, especially in terms of accuracy. Generally speaking, the performance of a global optimization algorithm is influenced by many factors; among them is the search step. The size of the search step employed in an optimization algorithm can have a large impact on the accuracy and overall convergence performance of the algorithm [15]. Yet this issue has received relatively little attention in the EM literature. In fact, in a standard EM, the local search uses a random step size and its iterations are terminated immediately upon finding one comparatively better objective value. This approach is problematic because it can upset the balance between the speed and the accuracy of the convergence. This motivated us to develop a modified EM with a better exploitation strategy. In this research, a Split, Probe and Compare mechanism (SPC) is implemented in the local search segment of the EM. A nonlinear equation is designed to systematically and dynamically regulate the length of the probes based on pre-determined rules and conditions. A better convergence performance is achieved by using this SPC strategy, especially in terms of accuracy.

The contribution of this paper is twofold and can be summarized as follows. First, an analytical study on the effect of the local search step length in EM is carried out. This is done by modifying EM into two algorithms, each with a different extreme of search step size, in order to investigate the effect of the step size on the overall convergence performance, especially accuracy. Second, a new EM with the SPC feature (SPC-EM) is proposed. The performance of the proposed algorithm is evaluated and benchmarked on a set of 10 benchmark problems from the literature.

The outline of this paper can be divided into five major sections. In Section 2, the general procedure of a standard EM is summarized. Section 3 describes the proposed modification of the algorithm in detail. The computational experiment results of the proposed algorithm are benchmarked, compared and discussed in Section 4, where some samples of the convergence process are also shown as graphs. In Section 5, an overall conclusion is drawn.
2. EM procedure

The Electromagnetism-like Mechanism (EM) is a stochastic optimization method proposed by Birbil and Fang [5] in 2003. Guided by electromagnetism theory, EM imitates the attraction-repulsion mechanism of charges in order to reach a global optimal solution over bounded variables. In the algorithm, all solutions are treated as charged particles in the search space, and the charge of each particle relates to its objective function value. Particles with better objective values apply attracting forces, while particles with worse objective values apply repulsive forces onto other particles [16]. The better the objective function value, the higher the magnitude of attraction or repulsion between the particles. The particles are then moved based on the superposition theorem. Fig. 2.1 shows an example of the total force Fa applied on Qa by the repulsive force from Qb and the attractive force from Qc.

The overall flow of a standard EM is shown in Table 1. There are five critical operations in EM, namely initialization, local search, charge calculation, force calculation, and the movement of particles. Like most optimization algorithms, it begins with initialization.

Initialization: In the initialization stage of EM, the feasible ranges of all tuning parameters (upper bound u_k and lower bound l_k) are defined. Then, m initial particles are randomly sampled, each containing an N-tuple of real values (v_1, v_2, ..., v_N). Each random value in the N-dimensional hyper-solid is assumed to be uniformly distributed within the defined feasible range (l_i < v_i < u_i) [17]. After calculating the objective value of each particle, the point with the best function value is marked as the best particle.
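For illustration, the initialization step described above can be written as a short routine. The following Python sketch is ours, not the authors' code; it assumes a minimization objective `f` and bound vectors `lower` and `upper`.

```python
import numpy as np

def initialize(f, lower, upper, m):
    """Sample m particles uniformly inside the feasible box and mark the best one."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    n = lower.size
    # Each particle is an N-tuple drawn uniformly from (l_k, u_k) in every dimension.
    particles = lower + np.random.rand(m, n) * (upper - lower)
    values = np.array([f(x) for x in particles])
    best = int(np.argmin(values))  # minimization: the smallest objective value is best
    return particles, values, best
```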


Table 1
General EM flow.

Algorithm 1: EM(m, MAXITER, LSITER, λ)
m: number of initial particles
MAXITER: maximum number of iterations
LSITER: maximum number of local search iterations
λ: local search step size, λ ∈ (0, 1)
1: Initialize()
2: iteration ← 1
3: while iteration < MAXITER do
4:   Local(LSITER, λ)
5:   F ← CalcF()
6:   Move(F)
7:   iteration ← iteration + 1
8: end while
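Read as code, Algorithm 1 is a plain iteration loop around the five operations listed above. The Python skeleton below is only a sketch of that flow; it reuses the `initialize` routine sketched earlier and the `local_search`, `calc_charges`, `calc_forces`, and `move` helpers sketched after the corresponding paragraphs below, all of which are our own illustrative names.

```python
def em(f, lower, upper, m=10, max_iter=100, ls_iter=10, lam=1.0):
    """Skeleton of Algorithm 1 (standard EM) using the helper sketches in this section."""
    particles, values, best = initialize(f, lower, upper, m)
    iteration = 1
    while iteration < max_iter:
        particles, values, best = local_search(f, particles, values, lower, upper, ls_iter, lam)
        charges = calc_charges(values, best, particles.shape[1])          # equation (1)
        forces = calc_forces(particles, values, charges)                  # equations (2)-(3)
        particles, values, best = move(f, particles, values, forces,
                                       best, lower, upper)                # equation (4)
        iteration += 1
    return particles[best], values[best]
```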

Local search: This step of EM gathers local information in the neighborhood of a particle. The local search procedure in a standard EM is a simple line search mechanism, in which all feasible values of a dimension of a particle form an analogical 'line'. The search algorithm tunes the particle along this line, evaluating the new solution found at x_i + λ or x_i − λ, where the direction is determined randomly and the step length λ ∈ (0, 1) is random [18]. Each dimension of a particle is tuned and evaluated independently. If the new solution shows no improvement, the local search re-iterates until a pre-determined maximum iteration number is reached before moving on to the next dimension. The line search for a dimension is also terminated immediately upon finding the first objective value that is better than the current solution of the particle.
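A minimal reading of this line search in Python might look as follows; the way the random direction, step length and improvement test are coded here is our interpretation of the description above, not the authors' implementation, and the search is applied to every particle for simplicity.

```python
import numpy as np

def local_search(f, particles, values, lower, upper, ls_iter, lam=1.0):
    """Standard-EM local search: random line search on each dimension of each particle."""
    m, n = particles.shape
    for i in range(m):
        for k in range(n):                      # each dimension is tuned independently
            for _ in range(ls_iter):
                candidate = particles[i].copy()
                step = np.random.rand() * lam * (upper[k] - lower[k])
                if np.random.rand() > 0.5:      # direction chosen at random
                    candidate[k] = min(candidate[k] + step, upper[k])
                else:
                    candidate[k] = max(candidate[k] - step, lower[k])
                if f(candidate) < values[i]:    # stop at the first improving point
                    particles[i] = candidate
                    values[i] = f(candidate)
                    break
    best = int(np.argmin(values))
    return particles, values, best
```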

Charge calculation: The total force vector exerted on each particle is calculated based on Coulomb's law [19]. The charge of each particle is evaluated from its current objective value relative to the best particle of the iteration. The computed charge of a particle, q^i, determines the amplitude of the force exerted by the particle onto the other particles in the force calculation stage. The calculation of q^i is shown in equation (1):

$$q^{i} = \exp\left(-N \,\frac{f(x^{i}) - f(x^{best})}{\sum_{k=1}^{m}\left(f(x^{k}) - f(x^{best})\right)}\right), \quad \forall i \qquad (1)$$

where N is the dimension of the particles, m denotes the population size, and f(x^best) is the objective value of the best particle.
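Equation (1) maps directly onto a few lines of array code. The sketch below is a literal transcription of the formula; the function name and the guard against a zero denominator are ours.

```python
import numpy as np

def calc_charges(values, best, n_dim):
    """Charges from equation (1): better objective values receive larger charges."""
    values = np.asarray(values, dtype=float)
    denom = np.sum(values - values[best])
    if denom == 0.0:
        return np.ones_like(values)   # all particles identical: equal charges
    return np.exp(-n_dim * (values - values[best]) / denom)
```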

Force calculation: With the charges calculated for all particles, the force exerted by one particle on another can be computed. According to electromagnetic theory, the force between two particles is inversely proportional to the distance between them and directly proportional to the product of their charges [20]. Since all the optimization functions used in this research are minimization problems, a particle with a higher objective value applies a repulsive force onto a particle with a relatively lower objective value; analogically, it "pushes" the good particle away from the region with bad objective yields. A particle with a lower objective value, on the other hand, exerts an attractive force onto particles with relatively higher objective values. The force vector for a particle is determined using equation (2):

$$F^{i} = \sum_{j \neq i}^{m} \begin{cases} (x^{j} - x^{i})\, \dfrac{q^{i} q^{j}}{\lVert x^{j} - x^{i} \rVert^{2}} & \text{if } f(x^{j}) < f(x^{i}) \\[2mm] (x^{i} - x^{j})\, \dfrac{q^{i} q^{j}}{\lVert x^{j} - x^{i} \rVert^{2}} & \text{if } f(x^{j}) \geq f(x^{i}) \end{cases}, \quad \forall i \qquad (2)$$

where f(x^j) < f(x^i) denotes attraction and f(x^j) ≥ f(x^i) denotes repulsion. Taking the forces generated by all other particles into consideration, a total force vector is calculated for each particle. This combined force vector guides the direction in which the particle will move in the particle movement stage. To keep the movement feasible, the total force vector is normalized as shown in equation (3):

$$F^{i} \leftarrow \frac{F^{i}}{\lVert F^{i} \rVert} \qquad (3)$$
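Equations (2) and (3) can be sketched as below; the pairwise loop is written for clarity rather than speed, and the small constant added to the squared distance (to avoid division by zero for coincident particles) is our own addition.

```python
import numpy as np

def calc_forces(particles, values, charges):
    """Total force on each particle from equation (2), normalized as in equation (3)."""
    m, n = particles.shape
    forces = np.zeros((m, n))
    for i in range(m):
        for j in range(m):
            if j == i:
                continue
            diff = particles[j] - particles[i]
            dist2 = float(np.dot(diff, diff)) + 1e-12
            magnitude = charges[i] * charges[j] / dist2
            if values[j] < values[i]:
                forces[i] += diff * magnitude    # attraction towards the better particle
            else:
                forces[i] -= diff * magnitude    # repulsion from the worse particle
        norm = np.linalg.norm(forces[i])
        if norm > 0.0:
            forces[i] /= norm                    # equation (3)
    return forces
```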

Particle movement: The movement stage of EM relocates all particles except the best to new locations in the search space. This step is crucial to ensure better global exploration of possible solutions. The movement of a particle is calculated as shown in equation (4), where λ represents the global particle movement step length, a random value assumed to be uniformly distributed between 0 and 1, and u_k and l_k are the upper and lower bounds of dimension k:



$$x_{k}^{i} \leftarrow \begin{cases} x_{k}^{i} + \lambda\, F_{k}^{i}\,(u_{k} - x_{k}^{i}) & \text{if } F_{k}^{i} \geq 0 \\ x_{k}^{i} + \lambda\, F_{k}^{i}\,(x_{k}^{i} - l_{k}) & \text{if } F_{k}^{i} < 0 \end{cases} \qquad (4)$$

Since it exerts only attraction on all other particles, the best particle of the iteration does not move [21].
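The movement rule of equation (4), including the exception for the best particle, can be sketched as follows; drawing λ uniformly from (0, 1) for each moving particle is our reading of the text.

```python
import numpy as np

def move(f, particles, values, forces, best, lower, upper):
    """Move every particle except the best along its normalized force, equation (4)."""
    m, n = particles.shape
    for i in range(m):
        if i == best:
            continue                              # the best particle does not move
        lam = np.random.rand()
        for k in range(n):
            if forces[i, k] >= 0:
                particles[i, k] += lam * forces[i, k] * (upper[k] - particles[i, k])
            else:
                particles[i, k] += lam * forces[i, k] * (particles[i, k] - lower[k])
        values[i] = f(particles[i])
    best = int(np.argmin(values))
    return particles, values, best
```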


Fig. 3.1. Variation of probe length, L over 1000 iterations.

3. Split, Probe and Compare

In this section, a Split, Probe and Compare sequence (SPC) is proposed to replace the local search mechanism of the original EM. SPC-EM is an enhanced version of EM that gives the algorithm the ability to reach a more accurate solution without heavily slowing down the entire convergence process. Analogically speaking, SPC-EM probes around the neighborhood of a solution with two separate probes. The results returned by the probes give the algorithm a sense of the direction towards a better solution. The lengths of the probes are systematically regulated based on the returned results. As the name suggests, SPC consists of three segments, namely Split, Probe, and Compare.

Split: In this segment, the search is split into two probes (Probe A and Probe B) in every dimension. The probes then reach out to test the surroundings for a better solution in two different directions. The purpose of splitting the search is to gain a better sense of the direction towards any better solution in the neighborhood of the particle. Probe A explores towards the lower bound while Probe B searches towards the upper bound of the feasible solution range.

Probe: The size of the search steps is crucial in population-based optimization methods, as it determines the overall convergence and exploitation performance of an algorithm [22]. Large search steps can speed up the overall convergence; however, they may skip the optimal solution when it is in the vicinity of the particle, thereby reducing the accuracy of the search. Small search steps, on the other hand, can ensure better accuracy of the convergence; the trade-off is that they significantly slow down the whole convergence process. Taking these problems into account, the SPC mechanism is designed such that the Compare segment decides whether the length of the probes needs to be adjusted in each iteration. Depending on that decision, the length is dynamically regulated by a carefully designed nonlinear equation. The probe length L is calculated as shown in equation (5):

$$L = \frac{2}{1 + \exp\left(\dfrac{10\, i}{Max\_LSIte}\right)} \qquad (5)$$

In equation (5), i represents the current local search iteration number, while Max_LSIte refers to the pre-set maximum number of iterations. Fig. 3.1 shows an example of the variation of L over the iterations with Max_LSIte set to 1000. The decreasing nature of L makes the search steps relatively large at an early stage and smaller as the iterations go on. This ensures that the algorithm reaches a more accurate solution at the end of the iterations, while not slowing down the whole convergence process by probing too finely at the beginning of the search.

Compare: The purpose of this segment is to update the current particle with the best solution found in every iteration. Each time the probes return new solutions, their feasibility is first checked. A new solution is immediately disqualified and replaced with the previous value if it falls outside the feasible range. After the feasibility check, a three-way comparison of solutions is carried out. The comparison between the two new solutions gives the algorithm an indication of the direction towards a better solution, if any. The particle moves towards the lower bound of the dimension if Probe A obtains a better solution; in contrast, if Probe B provides a relatively better solution, the particle moves towards the upper bound of the dimension. The rate of the movement depends on the length of the probe at that particular iteration. If the best result among the probes is better than the current solution, the particle adopts the newly found best solution and its position is updated. This solution improvement process continues until no better solution can be returned by the probes. Then, the length of the probes is adjusted according to equation (5), and the iterations continue until the predetermined termination criterion is met. The dimensions of a particle are optimized independently in a fixed order from the lowest dimension to the highest (D_1, D_2, D_3, ..., D_N); see Table 2 and Fig. 3.2, and the sketch that follows them.

4. Experimental verification

The proposed SPC-EM and the standard EM are tested on the 10 benchmark problems shown in Table 3. All the benchmark functions used in this research are minimization problems.


Table 2
Local search procedure for SPC-EM.

Step 1: Set the maximum number of iterations as the terminating criterion.
Step 2: Calculate the length of the probes using equation (5).
Step 3: Split the search into Probe A and Probe B.
Step 4: Extend the probes towards the lower and upper bounds respectively to search for better solutions.
Step 5: Check whether the solutions returned by the probes are within the feasible range.
Step 6: Compare the newly found solutions and move the particle towards the better yield.
Step 7: Adapt to the newly found solution if it is better than the current solution.
Step 8: From the new location of the particle, repeat Steps 3 to 8 until no further solution improvement is possible.
Step 9: Exit if the iteration number reaches the termination criterion. Otherwise adjust the probe length, move on to the next iteration (i = i + 1) and repeat from Step 2.

Fig. 3.2. The flow of the proposed modification on SPC-EM, where D denotes the parameter of a particular dimension in a particular solution and λ refers to the search step size.
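Under the same assumptions as the earlier sketches, the SPC local search of Table 2 and Fig. 3.2 could be coded roughly as below. The `probe_length` function implements equation (5); scaling the probe length by the feasible range of each dimension is our assumption, and the whole routine is an illustrative reading of Steps 1-9 rather than the authors' implementation.

```python
import numpy as np

def probe_length(i, max_ls_ite):
    """Equation (5): close to 1 at the first iteration and decaying towards 0."""
    return 2.0 / (1.0 + np.exp(10.0 * i / max_ls_ite))

def spc_local_search(f, x, fx, lower, upper, max_ls_ite):
    """Split-Probe-Compare local search applied to a single particle x with value fx."""
    x = np.asarray(x, dtype=float)
    for i in range(max_ls_ite):                    # Step 1: iteration budget
        L = probe_length(i, max_ls_ite)            # Step 2: probe length from equation (5)
        improved = True
        while improved:                            # Step 8: repeat until no improvement
            improved = False
            for k in range(x.size):                # dimensions handled in a fixed order
                step = L * (upper[k] - lower[k])   # assumption: L scaled by the range
                probe_a = x.copy()                 # Step 3: split into two probes
                probe_b = x.copy()
                probe_a[k] = max(x[k] - step, lower[k])   # Steps 4-5: probe and clip
                probe_b[k] = min(x[k] + step, upper[k])
                fa, fb = f(probe_a), f(probe_b)
                cand, fc = (probe_a, fa) if fa < fb else (probe_b, fb)   # Step 6: compare
                if fc < fx:                        # Step 7: adopt the better solution
                    x, fx = cand, fc
                    improved = True
    return x, fx
```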

To examine the effect of different step sizes on the convergence performance of EM, SPC-EM is also benchmarked against two further variants of EM with two extreme fixed search steps. EM with Larger Search Steps (EMLSS) is modified to search locally with a fixed search step of 0.99, while EM with Smaller Search Steps (EMSSS) conducts the local search with a fixed search step of 0.01. In this research, SPC-EM is also benchmarked against the Genetic Algorithm (GA), a well-known meta-heuristic algorithm. The simulations were conducted on a 1.6 GHz Intel Core i5 CPU with 4 GB of RAM, running Windows 7. For all the standard and modified EMs, 10 particles were employed and the maximum local search iteration number was set to 1000. The dimensions of F3, F6, F8, and F9 are set to 10. To avoid stochastic discrepancy, 20 independent runs were carried out for each algorithm.
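The comparison protocol described above can be condensed into a small driver. The numerical settings come from the text; the runner itself is only a sketch and assumes an optimizer with the call signature of the `em` skeleton given earlier.

```python
import numpy as np

SETTINGS = {
    "particles": 10,       # population size for all EM variants
    "max_ls_iter": 1000,   # maximum local search iterations
    "runs": 20,            # independent runs to average out stochastic effects
    "emlss_step": 0.99,    # fixed large local-search step (EMLSS)
    "emsss_step": 0.01,    # fixed small local-search step (EMSSS)
}

def benchmark(optimizer, f, lower, upper, runs=SETTINGS["runs"]):
    """Run an optimizer repeatedly and summarize the final objective values."""
    finals = [optimizer(f, lower, upper)[1] for _ in range(runs)]
    return {"best": min(finals), "worst": max(finals),
            "mean": float(np.mean(finals)), "sd": float(np.std(finals))}
```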


Table 3
Benchmark problems.

F1 Himmelblau: $\min f(x) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2$; range [−5.12, 5.12]
F2 Schaffer N2: $\min f(x) = 0.5 + \dfrac{\sin^2(x_1^2 - x_2^2) - 0.5}{[1 + 0.001(x_1^2 + x_2^2)]^2}$; range [−100, 100]
F3 Rosenbrock: $\min f(x) = \sum_{i=1}^{d-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$; range [−5, 10]
F4 Booth: $\min f(x) = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2$; range [−10, 10]
F5 Beale: $\min f(x) = (1.5 - x_1 + x_1 x_2)^2 + (2.25 - x_1 + x_1 x_2^2)^2 + (2.625 - x_1 + x_1 x_2^3)^2$; range [−4.5, 4.5]
F6 Rastrigin: $\min f(x) = 10d + \sum_{i=1}^{d} [x_i^2 - 10\cos(2\pi x_i)]$; range [−5.12, 5.12]
F7 Six-Hump Camel: $\min f(x) = (4 - 2.1x_1^2 + x_1^4/3)x_1^2 + x_1 x_2 + (4x_2^2 - 4)x_2^2$; range x_1 ∈ [−3, 3], x_2 ∈ [−2, 2]
F8 Ackley: $\min f(x) = -20\exp\left(-0.2\sqrt{\tfrac{1}{d}\sum_{i=1}^{d} x_i^2}\right) - \exp\left(\tfrac{1}{d}\sum_{i=1}^{d}\cos(2\pi x_i)\right) + 20 + e$; range [−32.768, 32.768]
F9 Sphere: $\min f(x) = \sum_{i=1}^{d} x_i^2$; range [−5.12, 5.12]
F10 Shubert: $\min f(x) = \left[\sum_{i=1}^{5} i\cos((i+1)x_1 + i)\right]\left[\sum_{i=1}^{5} i\cos((i+1)x_2 + i)\right]$; range [−10, 10]
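For concreteness, two of the benchmark functions in Table 3 are written out below as Python callables matching the formulations above; the remaining functions can be coded in the same way.

```python
import numpy as np

def rastrigin(x):
    """F6 Rastrigin: 10d + sum(x_i^2 - 10 cos(2 pi x_i)); global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def ackley(x):
    """F8 Ackley: -20 exp(-0.2 sqrt(mean(x^2))) - exp(mean(cos(2 pi x))) + 20 + e."""
    x = np.asarray(x, dtype=float)
    return (-20 * np.exp(-0.2 * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(2 * np.pi * x))) + 20 + np.e)
```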

4.1. Results analysis

Table 4 compares the performance of all the algorithms on the 10 optimization problems in terms of their best solutions (Best), worst solutions (Worst), mean values over 20 runs (Mean), standard deviations (SD), and number of function evaluations (NFE). The solutions with the highest accuracy are highlighted in bold face. The results show that the solutions returned by EMLSS are generally less accurate than those of all the other EM variants. Inaccurate solutions were kept as best results in EMLSS because better possible solutions falling in between the large steps were skipped. EMSSS, on the other hand, returned outcomes that are more accurate than those of EMLSS; its small search steps enabled it to better exploit solutions with higher accuracy. The solutions obtained by EMSSS are very competitive with those of the standard EM, and even beat the standard EM on some of the test functions such as f5, f6 and f7.

On closer inspection, the results returned by the standard EM span a comparatively large range. The inconsistency of the standard EM shows up when it provides best values with good accuracy while the corresponding worst values are, in many cases, very inaccurate. This large gap is due to the random search step size applied by a standard EM: the algorithm may end up with a search step as large as that of EMLSS or as fine as that of EMSSS. Since there is no telling which search step sizes the algorithm ended up with over the 20 runs, the results become inconsistent and the overall precision suffers.

The NFE rows in Table 4 show the average number of function evaluations required by each search mechanism to reach a satisfactory solution for each test function. In many of the test functions, EMLSS and SPC-EM evaluate the objective functions more often than the other algorithms. The NFE needed by EMLSS for most of the test functions closely matches that of SPC-EM; EMLSS even shows a larger NFE than SPC-EM on f1, f4, f7, and f9. GA, on the other hand, consistently shows the lowest NFE values.

The benchmarking reveals the striking capability of SPC-EM in obtaining optimal solutions with high accuracy and precision. The solutions found by SPC-EM are considerably better than those found by the standard EM, EMLSS, EMSSS, and GA. The Split and Probe mechanisms allow the algorithm to adequately explore the neighborhood, and the systematically self-regulating probe length enables it to effectively exploit solutions with high accuracy. The fine-tuned search steps towards the end of each local search also ensure the precision of the algorithm, which in turn results in lower standard deviation values, as shown in the SD rows. Even though the number of function evaluations of SPC-EM is relatively higher than that of GA, EMLSS, and the standard EM, the overall results show that the proposed algorithm significantly outperformed all the other algorithms in terms of solution accuracy.

4.2. Convergence performance analysis

Fig. 4.1 shows the convergence curves of all the benchmarked algorithms on functions f2, f5, f7 and f9. Only four representative convergence graphs are shown in this paper due to space limitations. For ease of comparison, convergences with the closest initial values were sampled as representatives, and the graphs focus on the earlier stage of the convergence.
It can be noted from the graphs that SPC-EM performs well in solving various types of complex optimization problems, both in terms of the accuracy of the solutions and of the overall convergence performance. Observing Table 4 and Fig. 4.1 closely, the results returned by EMSSS are very competitive with those of SPC-EM. The small search steps employed by EMSSS enable the algorithm to search deeper and exploit solutions better. However, Fig. 4.1 clearly shows that the convergence rate of EMSSS is much slower than that of all the other EMs in the benchmark. It can also be noticed from the graphs that SPC-EM progresses very rapidly in the early stages and finds near-optimal values in relatively early iterations in most cases. The ability of SPC-EM to reach near-optimal values at an early stage of the convergence is due to its long probe lengths at the beginning of the search, while the regulated, fine-tuned probe lengths towards the end of the local search enable it to reach solutions with relatively higher accuracy. The tuning of the probe lengths helps SPC-EM to outperform the other algorithms over the overall convergence process.


Fig. 4.1. Convergence performance of all algorithms in (a) f2-Schaffer N2 test function, (b) f5-Beale test function, (c) f7-Six-Hump Camel test function, and (d) f9-Sphere test function.


Table 4
Performance benchmarking of SPC-EM, EMLSS, EMSSS, standard EM, and GA.

| Function | Metric | SPC-EM | EM | EMLSS | EMSSS | GA |
| F1 | Best  | 3.661565E−06 | 1.576128E−05 | 1.719371E−04 | 1.168281E−05 | 3.280935E−05 |
|    | Worst | 5.805835E−06 | 9.184502E−03 | 9.508320E−03 | 4.651116E−04 | 4.595554E−02 |
|    | Mean  | 3.874068E−06 | 3.368474E−03 | 4.370130E−03 | 1.950518E−04 | 7.948131E−03 |
|    | SD    | 4.595800E−07 | 2.979160E−03 | 3.023697E−03 | 1.357456E−04 | 1.205463E−02 |
|    | NFE   | 13,304 | 9,163 | 14,964 | 10,618 | 8,617 |
| F2 | Best  | 1.110223E−16 | 4.463263E−12 | 1.531410E−07 | 6.495071E−11 | 1.100000E−07 |
|    | Worst | 2.063023E−05 | 3.038143E−05 | 2.374000E−05 | 7.149460E−06 | 7.192232E−04 |
|    | Mean  | 1.454155E−06 | 3.325024E−06 | 4.566652E−06 | 1.854487E−06 | 1.824401E−04 |
|    | SD    | 4.480907E−06 | 6.801350E−06 | 6.745203E−06 | 2.484839E−06 | 2.171330E−04 |
|    | NFE   | 24,170 | 21,773 | 23,613 | 16,561 | 13,276 |
| F3 | Best  | 2.494467E−05 | 4.844137E−05 | 9.854618E−04 | 4.184602E−05 | 1.460726E−04 |
|    | Worst | 2.814896E−05 | 2.802105E−02 | 3.992085E−02 | 1.276379E−03 | 9.079672E−02 |
|    | Mean  | 2.591374E−05 | 8.420452E−03 | 1.424300E−02 | 4.766973E−04 | 3.310250E−02 |
|    | SD    | 9.325919E−07 | 7.378930E−03 | 1.153226E−02 | 4.019340E−04 | 3.017251E−02 |
|    | NFE   | 32,355 | 21,920 | 28,641 | 25,119 | 16,445 |
| F4 | Best  | 1.247214E−06 | 1.455078E−05 | 5.416751E−04 | 5.349732E−05 | 1.799739E−05 |
|    | Worst | 2.005161E−06 | 9.760600E−03 | 9.348682E−03 | 5.258463E−04 | 2.292200E−02 |
|    | Mean  | 1.395770E−06 | 3.577467E−03 | 5.128899E−03 | 1.521971E−04 | 1.210861E−02 |
|    | SD    | 2.430686E−07 | 3.258795E−03 | 2.610987E−03 | 1.100939E−04 | 9.701860E−03 |
|    | NFE   | 19,340 | 11,956 | 21,305 | 18,220 | 10,930 |
| F5 | Best  | 3.936435E−07 | 3.914543E−05 | 2.578702E−04 | 9.586097E−07 | 1.577896E−06 |
|    | Worst | 3.274255E−05 | 4.915435E−03 | 3.738055E−03 | 4.677488E−04 | 8.540818E−02 |
|    | Mean  | 4.611034E−06 | 1.770087E−03 | 1.342392E−03 | 1.646016E−04 | 1.302585E−02 |
|    | SD    | 7.831021E−06 | 1.600108E−03 | 8.943944E−04 | 1.600959E−04 | 2.350881E−02 |
|    | NFE   | 23,070 | 18,644 | 20,660 | 21,882 | 12,956 |
| F6 | Best  | 4.952626E−05 | 1.449315E−04 | 1.865511E−03 | 6.047104E−05 | 1.985504E−04 |
|    | Worst | 5.742130E−05 | 3.067725E−02 | 4.966415E−02 | 9.752116E−04 | 1.988401E+00 |
|    | Mean  | 5.133827E−05 | 1.111626E−02 | 1.987663E−02 | 5.112255E−04 | 2.631002E−01 |
|    | SD    | 2.383190E−06 | 9.837885E−03 | 1.484539E−02 | 2.720759E−04 | 5.284306E−01 |
|    | NFE   | 46,501 | 32,801 | 43,873 | 37,120 | 20,991 |
| F7 | Best  | −1.031628 | −1.031623 | −1.031598 | −1.031627 | −1.031625 |
|    | Worst | −1.031628 | −1.030116 | −1.030076 | −1.031529 | −1.026970 |
|    | Mean  | −1.031628 | −1.031015 | −1.031047 | −1.031574 | −1.030449 |
|    | SD    | 7.251065E−08 | 4.751056E−04 | 3.552574E−04 | 2.522233E−05 | 1.091484E−03 |
|    | NFE   | 19,778 | 10,661 | 23,745 | 16,572 | 8,915 |
| F8 | Best  | 1.419273E−03 | 3.548847E−03 | 4.592129E−02 | 3.857319E−03 | 3.094707E−02 |
|    | Worst | 1.435313E−03 | 2.752545E−01 | 4.164941E−01 | 1.212132E−02 | 2.637535E+00 |
|    | Mean  | 1.424688E−03 | 1.060289E−01 | 2.048019E−01 | 7.615152E−03 | 5.933636E−01 |
|    | SD    | 4.891446E−06 | 7.513069E−02 | 1.231398E−01 | 2.328801E−03 | 7.542619E−01 |
|    | NFE   | 32,355 | 22,141 | 27,604 | 28,513 | 18,701 |
| F9 | Best  | 2.494381E−07 | 7.937138E−06 | 1.276562E−04 | 2.849675E−06 | 4.000000E−06 |
|    | Worst | 2.499723E−07 | 1.294528E−03 | 3.303858E−03 | 2.147046E−05 | 9.000000E−04 |
|    | Mean  | 2.496177E−07 | 2.633000E−04 | 9.047556E−04 | 1.052881E−05 | 3.761605E−04 |
|    | SD    | 1.696899E−10 | 2.332767E−04 | 7.761418E−04 | 5.935259E−06 | 3.632090E−04 |
|    | NFE   | 22,181 | 13,802 | 23,006 | 17,604 | 10,997 |
| F10 | Best  | −186.730357 | −186.725943 | −186.701049 | −186.730022 | −186.729900 |
|     | Worst | −186.730286 | −186.527680 | −186.357690 | −186.674887 | −169.580200 |
|     | Mean  | −186.730328 | −185.36587 | −186.509028 | −186.703022 | −186.660291 |
|     | SD    | 2.337868E−05 | 3.731602E+00 | 9.854528E−02 | 1.645272E−02 | 5.909787E−02 |
|     | NFE   | 24,605 | 12,863 | 17,672 | 20,881 | 15,901 |

4.3. Parameter sensitivity analysis

In this section, the impact of the Max_LSIte setting on the overall performance of the SPC-EM algorithm is investigated. Experiments are conducted with different settings of Max_LSIte, with 15 independent runs for each setting. Since the aim of this analysis is to provide a straightforward and intuitive impression of how the Max_LSIte setting affects SPC-EM, only the data that were found to be informative are presented in this paper.

The results generated by the different Max_LSIte settings for all the test functions are summarized in Table 5. "Mean" denotes the mean of the independent simulation runs. "Error" stands for the difference between the actual global optimum and the obtained mean value. "NFE" refers to the average number of function evaluations carried out in achieving a satisfactory objective value. "ICR" stands for the Improvement-to-Complexity Ratio, which is calculated by dividing the error value by the average NFE value. This gives an indication of how much algorithmic cost is paid for a given accuracy. A higher ICR value indicates that a relatively larger improvement is made with a comparatively smaller increase in the complexity of the algorithm. Comparing the ICR values provides a better understanding of the scale of the improvement versus the scale of the increase in complexity, from which we can strike a balance between both and find the optimum setting of the parameter.


Table 5
Effect of the Max_LSIte value setting.

| Function | Metric | Max_LSIte = 500 | 800 | 1000 | 1300 |
| F1 | Mean | 1.766235E−02 | 7.693117E−04 | 5.161630E−06 | 1.896108E−06 |
|    | NFE  | 4,002 | 7,681 | 12,989 | 19,505 |
| F2 | Mean | 7.662981E−03 | 8.315509E−04 | 3.006479E−06 | 9.770113E−07 |
|    | NFE  | 10,970 | 15,863 | 23,897 | 40,201 |
| F3 | Mean | 2.500738E−02 | 1.988207E−03 | 3.024602E−05 | 1.883109E−05 |
|    | NFE  | 13,671 | 18,662 | 32,142 | 50,991 |
| F4 | Mean | 3.524800E−02 | 9.332601E−04 | 1.964437E−06 | 7.669011E−07 |
|    | NFE  | 8,993 | 13,395 | 19,022 | 25,892 |
| F5 | Mean | 2.989168E−02 | 1.033757E−03 | 7.637420E−06 | 2.188917E−06 |
|    | NFE  | 9,044 | 14,771 | 22,767 | 34,907 |
| F6 | Mean | 1.091337E−02 | 2.914880E−03 | 2.786145E−05 | 8.973801E−06 |
|    | NFE  | 19,166 | 26,988 | 46,197 | 71,903 |
| F7 | Mean | −1.031329 | −1.031482 | −1.031617 | −1.031621 |
|    | NFE  | 8,884 | 14,760 | 20,051 | 33,161 |
| F8 | Mean | 7.593817E−02 | 1.039972E−02 | 2.155467E−03 | 5.197833E−04 |
|    | NFE  | 12,691 | 21,303 | 32,177 | 53,804 |
| F9 | Mean | 6.977610E−04 | 2.016337E−05 | 3.482381E−07 | 9.776913E−08 |
|    | NFE  | 9,010 | 13,006 | 22,617 | 37,773 |
| F10 | Mean | −186.730017 | −186.730107 | −186.730317 | −186.730488 |
|     | NFE  | 10,468 | 16,615 | 24,711 | 41,844 |

Generally, the data in Table 5 show that the solutions improve as the Max_LSIte value increases from 500 to 1300. On closer inspection, the rate of solution improvement and the rate of increase in NFE are not the same. Raising Max_LSIte from 500 to 800 yields a relatively large improvement in the obtained solutions at the cost of a comparatively small increase in NFE, and a similar situation occurs when Max_LSIte is increased from 800 to 1000. This indicates that the algorithm under-performs when Max_LSIte is set below 1000. However, when Max_LSIte is further increased from 1000 to 1300, the improvement of the solutions becomes less significant while the NFE increases substantially. This low improvement-to-cost ratio shows that the complexity of the algorithm becomes disproportionate when Max_LSIte is set above 1000. We thus conclude from this observation that the optimum setting for Max_LSIte is 1000.

5. Conclusion

A new modified Electromagnetism-like Mechanism algorithm named SPC-EM is presented in this paper to solve numerical optimization problems. The proposed modification replaces the local search scheme with Split, Probe and Compare mechanisms. SPC-EM also applies a dynamic strategy to regulate the length of the probes based on pre-determined criteria. The general concept of the tuning strategy is to begin the search with relatively long probes and to dynamically tune the probe lengths as the iterations proceed using a nonlinear equation. Experiments on 10 different test problems reveal that the proposed algorithm brings significant improvements, especially in terms of solution exploitation. The results also show that SPC-EM outperformed all four other algorithms involved in the benchmarking. We thus conclude that the new modified mechanism works well with the proposed length-tuning equation in solving numerical optimization problems. Enhancements of the particle exploration (movement) segment will be considered in future work by taking the speed, momentum, and other dynamic aspects of the particles into account.

Acknowledgements

The authors would like to thank the editorial board and the anonymous reviewers for their very helpful comments. The authors express great acknowledgment to the University of Malaya for the support of this research under grant UM.C/HIR/MOHE/ENG/23.

References

[1] M. Mitchell, An Introduction to Genetic Algorithms, fifth ed., MIT Press, 1999.
[2] D. Bratton, J. Kennedy, Defining a standard for particle swarm optimization, in: Proceedings of the 2007 IEEE Swarm Intelligence Symposium, 2007, pp. 120–127.
[3] R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (1997) 341–359.
[4] I.A. Abed, K.S.M. Sahari, S.P. Koh, S.K. Tiong, P. Jagadeesh, Using electromagnetism-like algorithm and genetic algorithm to optimize time of task scheduling for dual manipulators, in: IEEE R10-HTC2013, 2013, pp. 182–187.
[5] S.I. Birbil, S.C. Fang, Electromagnetism-like mechanism for global optimization, J. Global Optim. 25 (2003) 263–282.
[6] B. Naderi, R. Tavakkoli-Moghaddam, M. Khalili, Electromagnetism-like mechanism and simulated annealing algorithms for flowshop scheduling problems minimizing the total weighted tardiness and makespan, Knowledge-Based Systems 23 (2010) 77–85.


[7] P.C. Chang, S.H. Chen, C.Y. Fan, A hybrid electromagnetism-like algorithm for single machine scheduling problem, Expert Syst. Appl. 36 (2009) 1259–1267.
[8] D.H. Muhsen, A.B. Ghazali, T. Khatib, I.A. Abed, Extraction of photovoltaic module model's parameters using an improved hybrid differential evolution/electromagnetism-like algorithm, Sol. Energy 119 (2015) 286–297.
[9] V. Filipovic, A. Kartelj, D. Matic, An electromagnetism metaheuristic for solving the maximum betweenness problem, Appl. Soft Comput. 13 (2013) 1303–1313.
[10] C.L. Kuo, C.H. Chu, Y. Li, X.Y. Li, L. Gao, Electromagnetism-like algorithms for optimized tool path planning in 5-axis flank machining, Comput. Ind. Eng. 84 (2015) 70–78.
[11] F. Yin, Y.N. Wang, S.N. Wei, Inverse kinematic solution for robot manipulator based on electromagnetism-like and modified DFP algorithms, Acta Automat. Sinica 37 (2011) 74–82.
[12] V. Sels, M. Vanhoucke, A hybrid electromagnetism-like mechanism/tabu search procedure for the single machine scheduling problem with a maximum lateness objective, Comput. Ind. Eng. 67 (2014) 44–55.
[13] A. Yurtkuran, E. Emel, A new hybrid electromagnetism-like algorithm for capacitated vehicle routing problems, Expert Syst. Appl. 37 (2010) 3427–3433.
[14] A. Jamili, M.A. Shafia, R. Tavakkoli-Moghaddam, A hybridization of simulated annealing and electromagnetism-like mechanism for a periodic job shop scheduling problem, Expert Syst. Appl. 38 (2011) 5895–5901.
[15] S.H. Yua, S.L. Zhu, Y. Ma, D.M. Mao, A variable step size firefly algorithm for numerical optimization, Appl. Math. Comput. 263 (2015) 214–220.
[16] P.T. Wu, Y.Y. Hung, Z.P. Lin, Intelligent forecasting system based on integration of electromagnetism-like mechanism and fuzzy neural network, Expert Syst. Appl. 41 (2014) 2660–2677.
[17] R. Dutta, R. Ganguli, V. Mani, Exploring isospectral cantilever beams using electromagnetism inspired optimization technique, Swarm Evol. Comput. 9 (2013) 37–46.
[18] C.J. Zhang, X.Y. Li, L. Gao, Q. Wu, An improved electromagnetism-like mechanism algorithm for constrained optimization, Expert Syst. Appl. 40 (2013) 5621–5634.
[19] C.H. Lee, F.K. Chang, C.T. Kuo, H.H. Chang, A hybrid of electromagnetism-like mechanism and back-propagation algorithms for recurrent neural fuzzy systems design, Internat. J. Systems Sci. 43 (2012) 231–247.
[20] C.H. Lee, Y.C. Lee, Nonlinear systems design by a novel fuzzy neural system via hybridization of electromagnetism-like mechanism and particle swarm optimization algorithms, Inform. Sci. 186 (2012) 59–72.
[21] E. Cuevas, D. Oliva, D. Zaldivar, M. Pérez-Cisneros, H. Sossa, Circle detection using electro-magnetism optimization, Inform. Sci. 182 (2012) 40–55.
[22] A. Ratnaweera, S. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Trans. Evol. Comput. 8 (2004) 240–255.