Entropic simplified swarm optimization for the task assignment problem




Accepted Manuscript

Title: Entropic Simplified Swarm Optimization for the Task Assignment Problem
Authors: Chyh-Ming Lai, Wei-Chang Yeh, Yen-Cheng Huang
PII: S1568-4946(17)30212-0
DOI: http://dx.doi.org/10.1016/j.asoc.2017.04.030
Reference: ASOC 4163
To appear in: Applied Soft Computing
Received date: 15-3-2017
Accepted date: 18-4-2017

Please cite this article as: Chyh-Ming Lai, Wei-Chang Yeh, Yen-Cheng Huang, Entropic Simplified Swarm Optimization for the Task Assignment Problem, Applied Soft Computing, http://dx.doi.org/10.1016/j.asoc.2017.04.030

Entropic Simplified Swarm Optimization for the Task Assignment Problem

Chyh-Ming Lai 1,*, Wei-Chang Yeh 2, and Yen-Cheng Huang 2

1 Institute of Resources Management and Decision Science, Management College, National Defense University, Taipei 112, Taiwan
2 Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu 300, Taiwan
* Corresponding author: Chyh-Ming Lai, E-mail: [email protected]

Highlights

- The first work to apply simplified swarm optimization to the task assignment problem.
- Entropy is adopted to describe the uncertainty level of assigned tasks; a task with higher uncertainty then has a greater chance of being reassigned.
- The entropic local search (ELS) encourages solutions to exploit more promising neighbors, since the search is not merely random.
- Statistical results indicate that the proposed method outperforms competing algorithms.

Abstract

The task assignment problem (TAP) aims to assign application tasks to a number of distributed processors in a computation system in order to increase the efficiency and effectiveness of the system for minimizing or maximizing a certain cost. The problem is NP-hard; thus, finding exact solutions is computationally intractable for larger problems. In this paper, a novel entropic simplified swarm optimization, known as ESSO, is proposed for solving this problem. In this method, an entropic local search (ELS) inspired by information theory is proposed to enhance the exploitation capability of SSO. Entropy is adopted to describe the uncertainty level of assigned tasks; a task with higher uncertainty then has a greater chance of being reassigned. Furthermore, for each reassigned task, the corresponding list of potential processors is constructed using information theory; this enhances the probability of finding promising solutions in ELS. To empirically evaluate the performance of the proposed method, experiments are conducted using twenty-four randomly generated problems ranging from small to large scale, and the corresponding results are compared with existing works. The experimental results indicate that ESSO is better than its competitors in both solution quality and efficiency.

Keywords: task assignment problem; simplified swarm optimization; information theory

1. Introduction

The task assignment problem (TAP) aims to assign application tasks to a number of distributed processors in a computation system to increase the efficiency and effectiveness of the system for minimizing or maximizing a certain cost [1]. A distributed computing system without proper task assignment often incurs higher cost. Therefore, many variants of the TAP have been proposed in recent decades in order to obtain more effective assignments in a more efficient way. Most focus on minimizing the total system cost [2-7], minimizing the application completion time [8] or maximizing the reliability of the system [9-11].

TAPs can be divided into two categories: homogeneous and heterogeneous systems. In a homogeneous system, processors have the same computing capacity, and each task incurs the same cost on each processor [12]. Compared to a homogeneous system, a heterogeneous system is more complicated. Each processor in the system is capacitated with various units of memory and processing resources, so the execution cost of a task varies depending on the processor used. Moreover, the communication links among processors have various communication costs, which are incurred when two tasks need to communicate but are executed on different processors [4, 13, 14].

TAP is a well-known NP-hard problem whose computational effort grows exponentially with the number of tasks, processors and communication needs in the system [15]. The existing approaches can be mainly divided into four categories: graph-theoretic representation [16-18], integer linear programming [19], state-space search [20] and evolutionary computation. Due to numerical difficulties and computational burdens, only small-size instances of the problem can be solved optimally using exact methods. For large-scale instances, most researchers concentrate on developing evolutionary computation methods that provide near-optimal solutions within a reasonable computation time. Numerous evolutionary computation methods for solving TAP have been reported in the literature, such as genetic algorithms [21-23], simulated annealing [24, 25], particle swarm optimization [7, 9, 26], harmony search [6] and differential evolution [5]. The results show that they have made important contributions to TAP. However, there is still room to improve the effectiveness and efficiency of the above works. In this paper, a novel algorithm, Entropic Simplified Swarm Optimization (ESSO), is proposed as an alternative method for solving TAP.

This paper is organized as follows: the problem formulation is given in Section 2. Overviews of entropy and Simplified Swarm Optimization (SSO) are provided in Section 3. The proposed ESSO and its overall procedure are detailed in Section 4. The two experiments and the statistical analysis implemented for validating ESSO are illustrated in Section 5. Finally, the conclusions are presented in Section 6.

2. Problem formulation

The main purpose of TAP is to find an optimal arrangement of numerous tasks on multiple processors that minimizes the total cost under resource constraints. A general TAP for a heterogeneous system can be formulated as the following integer nonlinear programming problem:

min f(X) = E(X) + C(X)        (1)

subject to

m_i(X) ≤ M_i        (2)

r_i(X) ≤ R_i        (3)

The objective function in Eq. (1) minimizes the total sum of the execution cost E(X) and the communication cost C(X). According to Eqs. (2) and (3), the total memory and processing requirements of the tasks assigned to processor i (m_i and r_i, respectively) should not exceed the processor's memory M_i and processing capacity R_i, respectively.

In addition to the number of tasks and processors (Ntsk and Nprs, respectively), the task interaction density d is another key factor that affects the complexity of a TAP. It quantifies the ratio of the inter-task communication demands of a TAP. The inter-task communication of a TAP can be described by a task interaction graph G(V, E) with the node set V = {1, 2, …, Ntsk} and the arc set E = {1, 2, …, ε}, where the maximal value of ε is Ntsk(Ntsk − 1)/2, reached when d = 1. Each node represents a task, and each arc connecting tasks i and j indicates that there is a communication need between the two tasks; the arc is associated with a communication cost c_ij which is incurred only when tasks i and j are assigned to different processors. The TAP is explained further via the following example:

Example 1. A TAP with (Ntsk, Nprs, d) = (5, 3, 0.7) and its task interaction graph (TIG) is shown in Fig. 1. X = (1, 1, 2, 3, 1) is a task assignment scheme for this TAP, and the allocation x3 = 2 means that the third task is assigned to the second processor. The corresponding data are listed in Table 1, where wi and ui represent the memory and processing requirements of task i, respectively, eij denotes the execution cost of task i on processor j, and cik is the communication cost between tasks i and k. Table 1 yields the following:

C(X) = c12 + c13 + c14 + c15 + c23 + c24 + c25 + c34 + c35 + c45
     = 0* + 0 + 12 + 0* + 7 + 43 + 0 + 0 + 37 + 11 = 110        (4)

* The two tasks are executed on the same processor as task 1 (i.e., tasks 2 and 5).

E(X) = e11 + e21 + e32 + e43 + e51 = 24 + 80 + 108 + 184 + 120 = 516        (5)

m1 = w1 + w2 + w5 = 10 + 21 + 20 = 51
m2 = w3 = 38        (6)
m3 = w4 = 45

r1 = u1 + u2 + u5 = 7 + 33 + 20 = 60
r2 = u3 = 17        (7)
r3 = u4 = 29
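The arithmetic in Eqs. (4)-(7) is easy to check programmatically. The following Python sketch (for illustration only; the study itself was implemented in MATLAB, and all names here are ours) evaluates E(X), C(X) and f(X) for Example 1 using the data of Table 1.

```python
# Data of Table 1 (Example 1). e[i][j]: execution cost of task i+1 on
# processor j+1; c[i][k]: communication cost between tasks i+1 and k+1
# (upper triangle; 0 where there is no communication need).
e = [[24, 67, 121],
     [80, 46, 3],
     [70, 108, 99],
     [14, 127, 184],
     [120, 65, 21]]
c = [[0, 10, 0, 12, 43],
     [0, 0, 7, 43, 0],
     [0, 0, 0, 0, 37],
     [0, 0, 0, 0, 11],
     [0, 0, 0, 0, 0]]
X = [1, 1, 2, 3, 1]  # X[i]: processor assigned to task i+1

E = sum(e[i][X[i] - 1] for i in range(len(X)))          # Eq. (5): 516
C = sum(c[i][k] for i in range(len(X))                  # Eq. (4): 110,
        for k in range(i + 1, len(X)) if X[i] != X[k])  # inter-processor only
print(E, C, E + C)  # 516 110 626
```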

3. Related Work

3.1 Shannon Information Entropy

Information theory, introduced by Shannon [27], can quantify the information, disorder or uncertainty in the probability distribution of the events contained in a sample set of possible events. This measure, called entropy, has been widely used in numerous areas [28-34]. Suppose that X is a triple (x, A, P), where the outcome x is the value of a random variable that takes on a finite number of possible values A = {a1, a2, …, an}, having probabilities P = {p1, p2, …, pn} with P(x = ai) = pi ≥ 0 and ∑_{ai∈A} P(x = ai) = 1. The Shannon information content of the outcome x = ai can be derived as:

h(x = ai) = −log2 pi,        (8)

and the entropy H of X is defined as the expected value of the information content:

H(X) = −∑_{i=1}^{n} p(x_i) log2 p(x_i)        (9)
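Eq. (9) can be computed in a few lines of Python; the sketch below (the function name is ours) reproduces the entropy of 1.0 that task 1 obtains later in Example 2 (Section 4.3).

```python
import math

def shannon_entropy(probs):
    # H(X) = -sum p_i * log2(p_i) over outcomes with p_i > 0, Eq. (9)
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5, 0.0]))  # 1.0
```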

3.2 Simplified Swarm Optimization (SSO)

SSO is an emerging, population-based optimization algorithm first proposed by Yeh in 2009 [35]. It was originally designed to compensate for the deficiencies of PSO [36] in solving discrete problems, but eventually became sui generis, with advantages including simplicity, efficiency and flexibility [37-40]. In the initialization phase, multiple candidate solutions in SSO are generated randomly within the search space. For any solution X_i^t = (x_i1^t, x_i2^t, …, x_ij^t, …, x_in^t) in the tth iteration, each variable x_ij^t is updated successively to a value related to the gBest g_j, its current pBest p_ij^(t−1), its current value x_ij^(t−1) or a random feasible value x, depending on a uniform random number ρ between 0 and 1. The original update mechanism (UMo) of SSO is shown in Eq. (10):

x_ij^t = g_j            if ρ ∈ [0, Cg)
         p_ij^(t−1)     if ρ ∈ [Cg, Cp)
         x_ij^(t−1)     if ρ ∈ [Cp, Cw)
         x              if ρ ∈ [Cw, 1]        (10)

where Cg, Cp and Cw are three predetermined parameters that define four probability intervals; that is, Cg, Cp − Cg, Cw − Cp and 1 − Cw represent the probabilities of the updated variable being generated from the gBest, the pBest, the current solution and a random movement, respectively. The UMo of SSO is a simple mathematical model, and it updates each solution as a compromise among four different sources to maintain population diversity and enhance the capacity to escape from a local optimum. According to [38-40], the pBest scheme in UMo can be discarded to update solutions more efficiently without compromising quality. The improved UM, called UMf, is shown in Eq. (11):

x_ij^t = g_j            if ρ ∈ [0, Cg)
         x_ij^(t−1)     if ρ ∈ [Cg, Cw)
         x              if ρ ∈ [Cw, 1]        (11)
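A minimal Python sketch of one UMf update follows, assuming processor labels 1..Nprs and that the random feasible value x is drawn uniformly; the function name and signature are ours.

```python
import random

def update_umf(x, gbest, cg, cw, nprs):
    # Eq. (11): each variable is copied from gBest, kept as-is, or
    # redrawn at random, according to a uniform rho in [0, 1].
    new_x = []
    for j, xj in enumerate(x):
        rho = random.random()
        if rho < cg:
            new_x.append(gbest[j])                 # take the gBest value
        elif rho < cw:
            new_x.append(xj)                       # keep the current value
        else:
            new_x.append(random.randint(1, nprs))  # random reassignment
    return new_x
```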

4. Proposed ESSO for TAP

The UMf gives SSO a satisfactory global search capability, but the algorithm may take a long time to converge to an optimal or near-optimal solution [37]. One way to improve the performance of SSO is to hybridize it with a local search [41]. This paper proposes a novel local search based on the concept of entropy, and embeds it in SSO to further improve the solution quality and convergence speed of SSO for solving TAP.

4.1 Solution Representation

In the initialization phase of SSO, each X_i = (x_i1, x_i2, …, x_ij, …, x_iNtsk), generated as a vector of Ntsk variables, is a candidate solution for TAP. Each variable x_j = k in X indicates that the jth task is assigned to the kth processor, and is generated randomly from [1, Nprs]. As shown in Example 1, X = (1, 1, 2, 3, 1) is a task assignment that assigns five tasks to three processors; the allocation x4 = 3 means that the 4th task is assigned to the 3rd processor. During the search process, all candidate solutions are updated by UMf, as shown in Eq. (11), and then evaluated by the fitness function.

4.2 Fitness Function

In solving TAP, infeasible solutions may be encountered, in which the assignment violates the resource constraints (e.g., m_i > M_i). In order to guide the search toward feasible regions, a penalty function is commonly used [5, 6]. In this study, the fitness function is formulated as follows:

F(X) = f(X)                if m_i(X) ≤ M_i and r_i(X) ≤ R_i,
       f(X) + λ·β(X)       otherwise.        (12)

β(X) = ∑_i max{0, m_i(X) − M_i} + ∑_i max{0, r_i(X) − R_i}        (13)

where f(X) is the objective value corresponding to a solution X, as shown in Eq. (1), and λ is a penalty coefficient set to 10^3, obtained by trial and error. The penalty function β(X) encourages all solutions to explore the feasible regions such that the search does not excessively deviate into the infeasible regions.
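A sketch of Eqs. (12)-(13) in Python is given below. It reads β(X) as the total amount of memory and processing violation, i.e., each term enters only when the corresponding capacity is exceeded; the helper names and signature are ours.

```python
def fitness(X, f, mem_load, cpu_load, M, R, lam=1e3):
    # Eq. (12): penalized fitness. mem_load/cpu_load return the per-processor
    # loads m_i(X) and r_i(X); M and R hold the capacities M_i and R_i.
    m, r = mem_load(X), cpu_load(X)
    beta = sum(max(0, mi - Mi) for mi, Mi in zip(m, M)) \
         + sum(max(0, ri - Ri) for ri, Ri in zip(r, R))   # Eq. (13)
    return f(X) if beta == 0 else f(X) + lam * beta
```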

4.3 Entropic Local Search (ELS)

ELS aims to enhance the exploitation capability of SSO for solving TAP. In this work, entropy is adopted to describe the uncertainty level of assigned tasks; that is, a higher entropy means the task is less certain to be assigned to a given processor. The entropy of task j in the current population can be defined as follows:

H_j = −∑_{k=1}^{Nprs} p_jk log2 p_jk,        (14)

where p_jk is the probability of task j being assigned to processor k among all solutions, and can be derived as:

p_jk = (1/Nsol) ∑_{i=1}^{Nsol} s_ij        (15)

s_ij = 1    if x_ij = k
       0    otherwise        (16)

where Nsol is the number of solutions processed in the algorithm, and x_ij is the jth variable in solution i. The entropy of task j and its corresponding p_jk are demonstrated as follows:

Example 2. Let Nsol = 6, Ntsk = 5 and Nprs = 3, and let all solutions after the UMf of SSO be as listed in Table 2. For task 1, p_1k and H_1 can be derived as in Eqs. (17) and (18), respectively. All calculation results are listed in Table 2.

p11 = 3/6, p12 = 3/6, p13 = 0        (17)

H_1 = −(3/6) log2(3/6) − (3/6) log2(3/6) = 1.0000        (18)
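The statistics of Eqs. (14)-(16) and the potential-processor lists can be computed over the whole population at once. The sketch below (function name ours) reproduces the values of Table 2 for the population of Example 2.

```python
import math

def task_statistics(population, ntsk, nprs):
    # For each task j: p_jk of Eq. (15) via the counts s_ij of Eq. (16),
    # the entropy H_j of Eq. (14), and L_j sorted by descending p_jk.
    nsol = len(population)
    H, L = [], []
    for j in range(ntsk):
        counts = [0] * nprs
        for X in population:
            counts[X[j] - 1] += 1
        p = [cnt / nsol for cnt in counts]
        H.append(-sum(q * math.log2(q) for q in p if q > 0))
        L.append(sorted(range(1, nprs + 1), key=lambda k: -p[k - 1]))
    return H, L

pop = [(2, 3, 2, 1, 3), (2, 3, 1, 1, 3), (2, 3, 3, 1, 2),
       (1, 3, 3, 2, 3), (1, 2, 3, 1, 3), (1, 3, 2, 1, 3)]
H, L = task_statistics(pop, 5, 3)   # H[0] = 1.0, L[2] = [3, 2, 1]
```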

The entropy of each task in the current population is calculated after the UMf of SSO, and all tasks are ranked by this value. The top Nunc fraction of tasks by entropy is selected as an uncertainty set T = {t1, t2, …, tτ} (e.g., T = {3, 1, 2, 4, 5} according to the result in Example 2, if Nunc = 100%), where τ = round(Nunc × Ntsk) and round(·) is the rounding operator. For each selected task j in the uncertainty set, the corresponding list of potential processors L_j = {l_j1, l_j2, …, l_jNprs}, where l_jk ∈ [1, Nprs], is constructed in order of highest to lowest p_jk (e.g., L1 = {1, 2, 3} for task 1, as shown in Table 2).

For a solution X in ELS, the first task in the uncertainty set (i.e., x_t1) is selected and repeatedly reassigned to its potential processors in L_t1, from the first to the last. The fitness of the new X is calculated immediately after each reassignment. The above procedure is repeated until the solution X is improved or task x_t1 runs out of potential processors. After the reassignments of x_t1, the next task in the uncertainty set (i.e., x_t2) is reassigned following the same procedure, until all tasks in the uncertainty set have been processed, i.e., τ consecutive times in total.

The ELS heuristic is activated for a solution X if ρ ≤ Nels, where ρ is a random number in [0, 1]. Nels and Nunc are two predefined parameters of ELS that achieve a trade-off between exploration and exploitation, and they are tuned based on the experiment in Section 5. Let G and F(G) denote the current gBest and its fitness value, respectively. The ELS procedure is given below:

STEP 0. If ρ ≤ Nels, then activate ELS for the solution X = (x1, x2, …, xNtsk) with fitness value F(X), let j = 1 and go to STEP 1. Otherwise, halt.
STEP 1. Let k = 1.
STEP 2. Let X* = X, F(X*) = F(X).
STEP 3. If x*_tj = l_k, go to STEP 5. Otherwise, let x*_tj = l_k, where l_k ∈ L_tj, and calculate F(X*).
STEP 4. If F(X*) < F(X), let X = X*, F(X) = F(X*), j = j + 1 and go to STEP 1. Otherwise, go to STEP 5.
STEP 5. If k < Nprs, let k = k + 1 and go to STEP 2. Otherwise, go to STEP 6.
STEP 6. If j < τ, let j = j + 1 and go to STEP 1. Otherwise, if F(X) < F(G), let G = X, F(G) = F(X), and halt.

The ELS procedure in ESSO is demonstrated in the following example:

Example 3. Consider the problem in Example 2. Suppose that G = (1, 2, 3, 1, 3) with fitness value F(G) = 318 and Nunc = 0.6; then τ = round(0.6 × 5) = 3, so the uncertainty set is T = {3, 1, 2}. The lists of potential processors for the selected tasks are shown in Table 2 (i.e., L3, L1 and L2). Let X1 = (2, 3, 2, 1, 3) with fitness value F(X1) = 376, and suppose ρ ≤ Nels is satisfied:

STEP 0. Activate ELS, let j = 1 and go to STEP 1.
STEP 1. Let k = 1.
STEP 2. Let X* = X1 = (2, 3, 2, 1, 3), F(X*) = F(X1) = 376.
STEP 3. Because x*_t1 = x*_3 = 2 ≠ l_31 = 3, let x*_3 = 3, X* = (2, 3, 3, 1, 3), and calculate F(X*) = 323.
STEP 4. Because F(X*) = 323 < F(X1) = 376, let X1 = X* = (2, 3, 3, 1, 3), F(X1) = F(X*) = 323, j = 2 and go to STEP 1.
STEP 1. Let k = 1.
STEP 2. Let X* = X1 = (2, 3, 3, 1, 3), F(X*) = F(X1) = 323.
STEP 3. Because x*_t2 = x*_1 = 2 ≠ l_11 = 1, let x*_1 = 1, X* = (1, 3, 3, 1, 3), and calculate F(X*) = 268.
STEP 4. Because F(X*) = 268 < F(X1) = 323, let X1 = X* = (1, 3, 3, 1, 3), F(X1) = F(X*) = 268, j = 3 and go to STEP 1.
STEP 1. Let k = 1.
STEP 2. Let X* = X1 = (1, 3, 3, 1, 3), F(X*) = F(X1) = 268.
STEP 3. Because x*_t3 = x*_2 = 3 = l_21 = 3, go to STEP 5.
STEP 5. Let k = 2 and go to STEP 2.
STEP 2. Let X* = X1 = (1, 3, 3, 1, 3), F(X*) = F(X1) = 268.
STEP 3. Because x*_t3 = x*_2 = 3 ≠ l_22 = 2, let x*_2 = 2, X* = (1, 2, 3, 1, 3), and calculate F(X*) = 318.
STEP 4. Because F(X*) > F(X1), go to STEP 5.
STEP 5. Let k = 3 and go to STEP 2.
STEP 2. Let X* = X1 = (1, 3, 3, 1, 3), F(X*) = F(X1) = 268.
STEP 3. Because x*_t3 = x*_2 = 3 ≠ l_23 = 1, let x*_2 = 1, X* = (1, 1, 3, 1, 3), and calculate F(X*) = 299.
STEP 4. Because F(X*) > F(X1), go to STEP 5.
STEP 5. Because k = 3 = Nprs, go to STEP 6.
STEP 6. Because j = 3 = τ and F(X1) = 268 < F(G) = 318, let G = X1 = (1, 3, 3, 1, 3), F(G) = F(X1) = 268 and halt.
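The walkthrough above follows directly from STEPs 0-6. A compact Python sketch of the same loop is given below (0-based indices; all names ours); updating the gBest afterwards is left to the caller, as in STEP 6.

```python
def els(X, FX, T, L, evaluate):
    # T: task indices sorted by descending entropy (the uncertainty set);
    # L[t]: potential processors of task t, ordered by descending p_jk.
    X = list(X)
    for t in T:
        for proc in L[t]:
            if X[t] == proc:          # STEP 3: skip the current assignment
                continue
            X_new = X.copy()
            X_new[t] = proc
            F_new = evaluate(X_new)
            if F_new < FX:            # STEP 4: accept the improvement and
                X, FX = X_new, F_new  # move on to the next uncertain task
                break
    return X, FX
```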

4.4 The Overall Procedure of ESSO

Based on the discussions in Sections 3 and 4, the flowchart of the proposed ESSO is illustrated in Fig. 2, and the overall procedure is summarized as follows:

STEP 1. Generate X_i^0 randomly and calculate F(X_i^0) using Eq. (12) for i = 1, 2, …, Nsol; find the gBest G among all solutions and let t = 1.
STEP 2. Let i = 1.
STEP 3. If the stopping criterion is met, halt.
STEP 4. Execute UMf, as shown in Eq. (11), to update X_i^t.
STEP 5. If ρ ≤ Nels, execute ELS to update X_i^t. Otherwise, go to STEP 6.
STEP 6. If i < Nsol, let i = i + 1 and go to STEP 3. Otherwise, let t = t + 1 and go to STEP 2.
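Combining the earlier sketches gives the following skeleton of the overall procedure, with the Ncfe of Section 5.2 as the stopping criterion. It is one reasonable reading of Fig. 2 (evaluations consumed inside ELS are not counted here, for brevity) and not a line-by-line transcription of the authors' MATLAB code; it is runnable given the update_umf, task_statistics and els sketches above.

```python
import random

def esso(evaluate, ntsk, nprs, nsol=50, cg=0.6, cw=0.85,
         nels=0.1, nunc=1.0, ncfe=None):
    # STEP 1: random initialization and initial gBest.
    pop = [[random.randint(1, nprs) for _ in range(ntsk)] for _ in range(nsol)]
    fit = [evaluate(X) for X in pop]
    best = min(range(nsol), key=lambda i: fit[i])
    gbest, fg = pop[best][:], fit[best]
    budget, calls = ncfe or ntsk * 5000, nsol
    tau = round(nunc * ntsk)
    while calls < budget:                          # STEP 3: stopping criterion
        for i in range(nsol):                      # STEP 4: UMf update
            pop[i] = update_umf(pop[i], gbest, cg, cw, nprs)
            fit[i] = evaluate(pop[i]); calls += 1
            if fit[i] < fg:
                gbest, fg = pop[i][:], fit[i]
        H, L = task_statistics(pop, ntsk, nprs)    # entropy ranking after UMf
        T = sorted(range(ntsk), key=lambda j: -H[j])[:tau]
        for i in range(nsol):                      # STEP 5: activate ELS
            if random.random() <= nels:
                pop[i], fit[i] = els(pop[i], fit[i], T, L, evaluate)
                if fit[i] < fg:
                    gbest, fg = pop[i][:], fit[i]
    return gbest, fg
```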

5. Experiment Results and Discussion

Two experiments, Ex-1 and Ex-2, are conducted in this section. Ex-1 aims to verify the effects of Nels and Nunc and to find their best setting using 12 designed treatments. Then, in Ex-2, ESSO with the best setting is compared with existing algorithms, including HPSO [7], IDE [5], NGHS [6] and SSO with UMo, in order to validate the quality and performance of the proposed method. All compared algorithms are coded and implemented in MATLAB R2015a on an Intel Core i7 4-GHz PC with 32 GB of memory. The runtime unit is CPU seconds. In all experiments, the two SSO-based methods, ESSO and SSO, adopt the same parameter setting: Cg = 0.6, Cp = 0.8 (not used in ESSO) and Cw = 0.85. For HPSO, IDE and NGHS, all parameters are adopted from [7], [5] and [6], respectively.

5.1 Problem Set

In order to test the proposed method on different problem scales, the problem set is generated in terms of the three key factors of TAP: Ntsk, Nprs and d. The value of (Ntsk, Nprs) is set to (5, 3), (10, 6), (20, 12), (30, 18), (50, 30), (75, 45), (100, 60) and (125, 75). For each pair of (Ntsk, Nprs), three different TIGs are generated at random with d = (0.3, 0.5, 0.8). Thus, 24 combinations of (Ntsk, Nprs, d) are constructed in the problem set. For each combination, the values of the other parameters are also generated randomly, including the execution cost, the communication cost, the memory and processing requirements of each task, and the memory and processing capacity of each processor. The ranges used for generating these parameters are shown in Table 3.

5.2 The Stopping Criterion

Normally, in most evolutionary computation methods, each candidate solution calculates the fitness function only once per iteration, so the number of iterations is a common stopping criterion. However, this is not true for methods using a local search, e.g., ESSO and HPSO. Thus, the number of fitness-function evaluations (Ncfe) is adopted here as the stopping criterion in order to make a fair comparison of ESSO with its competitors. For each pair of (Ntsk, Nprs), both in Ex-1 and Ex-2, the stopping criterion is Ncfe = Ntsk × 5000.

5.3 Results of Ex-1

Twelve designed treatments, with three levels (0.1, 0.4 and 0.7) for Nels and four levels (0.1, 0.4, 0.7 and 1) for Nunc, are tested on four pairs of (Ntsk, Nprs) under d = (0.3, 0.5, 0.8), for a total of 12 combinations selected from the problem set: (10, 6, 0.3-0.8), (30, 18, 0.3-0.8), (75, 45, 0.3-0.8) and (125, 75, 0.3-0.8). ESSO conducts 10 independent runs with a population size of 50 for each treatment on the above 12 TAPs. To put the results on the same scale and for ease of observation, the fitness value Fij(X) and CPU time Tij(X) obtained by ESSO in the jth run of the ith treatment are normalized by min-max normalization, as shown in Eq. (19):

Y'_ij(X) = (Y_ij(X) − min(Y_i(X))) / (max(Y_i(X)) − min(Y_i(X)))        (19)
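Eq. (19) is ordinary min-max scaling; a one-line Python sketch (name ours):

```python
def minmax(values):
    # Eq. (19); a constant series maps to all zeros to avoid division by zero.
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]
```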

The experiment results are summarized in Table 4. Favg and Tavg denote the normalized averages of the fitness value and CPU time over 10 runs, respectively; Fstd and Tstd represent the corresponding standard deviations. The results lead to the following interpretation:

1. The effect of Nels: Table 4 shows that the value of Nels negatively correlates with both solution quality and CPU time. This result arises because a higher Nels incurs a higher chance of activating ELS; as a result, the global search is weakened and the probability of the gBest becoming trapped at a local optimum increases. Furthermore, ELS consumes Ncfe more efficiently than UMf; that is, the more frequently ELS is activated, the less time ESSO requires.

2. The effect of Nunc: The value of Nunc negatively correlates with Favg, Tavg, Fstd and Tstd. This demonstrates that the proposed ELS enhances the local search ability of ESSO, and a higher Nunc yields a better solution with less CPU time.

3. The effect of Nels and Nunc: The ANOVA shown in Table 5 reveals that both Nels and Nunc significantly affect the effectiveness and efficiency of ESSO. Furthermore, as shown in Table 4, a setting with a smaller Nels and a larger Nunc enhances the chance of ESSO obtaining higher quality solutions with less running time, compared to the other settings. This can also be observed in Table 6, where the best values among all settings are shown in bold. As can be seen, the quality of the solutions yielded by the setting (Nels, Nunc) = (0.1, 1) on each selected problem is better than the others. Therefore, ESSO is implemented with Nels = 0.1 and Nunc = 1 in Ex-2.

5.4 Results of Ex-2

In Ex-2, the focus shifts to comparing the performance of the proposed ESSO with that of four existing algorithms: HPSO, IDE, NGHS and SSO with UMo. All methods conduct 30 independent runs with a population size of 50 on each tested combination in the problem set. The corresponding results, including the best values (Fmin), the average values (Favg), the worst values (Fmax) and the standard deviations (Fstd) of the obtained solutions, and the average CPU times (Tavg), for d = 0.3, 0.5 and 0.8, are listed in Tables 7, 8 and 9, respectively. The best values obtained among all methods are highlighted in boldface. To help readers quickly and easily understand the results of all compared methods, the corresponding box plots in Figs. 3 and 4 graphically depict the full range of results. The results lead to the following interpretation:

1. General observations: HPSO is more efficient than IDE and NGHS when the problems are larger because, as with ESSO, the local search used in HPSO is more efficient than its update mechanism. IDE yields good quality solutions for smaller problems, but struggles with larger ones and consumes more CPU time. NGHS is faster for smaller problems; however, this advantage wears off as problems increase in size.

2. ESSO vs. SSO: With the exception of the TAPs with (5, 3, 0.3-0.8) and (10, 6, 0.5), ESSO obtains higher or equivalent quality solutions with less CPU time than SSO on all of the considered problems. This demonstrates that ELS empirically enhances the performance of ESSO to solve TAPs more efficiently and effectively.

3. ESSO vs. HPSO, IDE and NGHS: ESSO obtains the best Fmin, Favg and Fmax values with minimal CPU time on problems larger than the TAPs with (20, 12), for each d. The standard deviations also show that the solutions obtained by ESSO are more robust than those of IDE and NGHS. Note that HPSO has smaller deviations on larger problems, which superficially makes it appear much more robust than ESSO, but this is because it is comparatively trapped in local optima.

5.5 The Statistical Analysis

The upper part of Table 10 presents the averages of Favg obtained by each method over all combinations under each d; the best values are shown in bold. Overall, these measures indicate that ESSO is the best performing algorithm, followed by HPSO, IDE, SSO and NGHS, in that order. To further compare these algorithms, the sign test, a common statistical method for comparing the overall performance of pairs of methods, is conducted with a significance level of α = 0.05 using Favg as the target value. The results of the sign test in Table 10 are reported as win-loss (W-L) counts, in which the two values are the numbers of problems on which ESSO obtains a better or worse Favg than the compared algorithm. The p-value, denoted by p, indicates whether the difference between the two algorithms is significant, based on W-L. According to the results in Table 10, in which p values in boldface indicate statistically significant differences, ESSO achieves overwhelming win counts against the other methods. As a result, the performance of ESSO is significantly better than that of its competitors on the generated TAPs.
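Assuming the usual two-sided exact binomial form of the sign test (ties discarded beforehand), the reported p-values can be reproduced as follows; the function name is ours.

```python
from math import comb

def sign_test_p(wins, losses):
    # Under H0, each method wins an (untied) problem with probability 0.5.
    n, k = wins + losses, max(wins, losses)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

print(round(sign_test_p(19, 2), 4))  # 0.0002, as for ESSO vs. IDE in Table 10
```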


6. Conclusions

This work proposed a novel local search, known as ELS, inspired by information theory, to facilitate ESSO in solving TAP, an NP-hard problem. In ELS, the uncertainty of each task is measured, and then the uncertainty set and the corresponding lists of potential processors are constructed for the local search. This encourages solutions to exploit more promising neighbors, since the search is not merely random. An extensive experimental study on 24 TAPs was conducted herein. The results show that ELS enhances the exploitation ability of ESSO, achieving better quality solutions in a more efficient way than SSO. Furthermore, comparisons with four state-of-the-art methods show that ESSO outperforms its competitors in both solution quality and efficiency. In addition, the sign test confirms the significance of the results obtained by ESSO compared with its competitors on all generated problems. Thus, the proposed ESSO is a promising alternative for dealing with TAPs.

Acknowledgement

This research was supported by the National Science Council of Taiwan, R.O.C. under grant MOST 106-2218-E-606-001.


References

[1] H.S. Stone, Multiprocessor scheduling with the aid of network flow algorithms, IEEE Transactions on Software Engineering, (1977) 85-93.
[2] M. Qiu, E.H.M. Sha, Cost minimization while satisfying hard/soft timing constraints for heterogeneous embedded systems, ACM Transactions on Design Automation of Electronic Systems (TODAES), 14 (2009) 25.
[3] Z. Shao, Q. Zhuge, C. Xue, E.M. Sha, Efficient assignment and scheduling for heterogeneous DSP systems, IEEE Transactions on Parallel and Distributed Systems, 16 (2005) 516-525.
[4] B. Ucar, C. Aykanat, K. Kaya, M. Ikinci, Task assignment in heterogeneous computing systems, Journal of Parallel and Distributed Computing, 66 (2006) 32-46.
[5] D. Zou, H. Liu, L. Gao, S. Li, An improved differential evolution algorithm for the task assignment problem, Engineering Applications of Artificial Intelligence, 24 (2011) 616-624.
[6] D. Zou, L. Gao, S. Li, J. Wu, X. Wang, A novel global harmony search algorithm for task assignment problem, Journal of Systems and Software, 83 (2010) 1678-1688.
[7] P.Y. Yin, S.S. Yu, P.P. Wang, Y.T. Wang, A hybrid particle swarm optimization algorithm for optimal task assignment in distributed systems, Computer Standards & Interfaces, 28 (2006) 441-450.
[8] G. Attiya, Y. Hamam, Task allocation for minimizing programs completion time in multicomputer systems, International Conference on Computational Science and Its Applications, Springer, 2004, pp. 97-106.
[9] P.Y. Yin, S.S. Yu, P.P. Wang, Y.T. Wang, Task allocation for maximizing reliability of a distributed system using hybrid particle swarm optimization, Journal of Systems and Software, 80 (2007) 724-735.
[10] A. Dogan, F. Ozguner, Matching and scheduling algorithms for minimizing execution time and failure probability of applications in heterogeneous computing, IEEE Transactions on Parallel and Distributed Systems, 13 (2002) 308-323.
[11] S.M. Shatz, J.P. Wang, M. Goto, Task allocation for maximizing reliability of distributed computer systems, IEEE Transactions on Computers, 41 (1992) 1156-1168.
[12] C.H. Lee, K.G. Shin, Optimal task assignment in homogeneous networks, IEEE Transactions on Parallel and Distributed Systems, 8 (1997) 119-129.
[13] Q. Kang, H. He, H. Song, Task assignment in heterogeneous computing systems using an effective iterated greedy algorithm, Journal of Systems and Software, 84 (2011) 985-992.
[14] K. Efe, Heuristic models of task assignment scheduling in distributed systems, Computer, 15 (1982) 50-56.
[15] M.-S. Chern, G.H. Chen, P. Liu, An LC branch-and-bound algorithm for the module assignment problem, Information Processing Letters, 32 (1989) 61-71.
[16] S.H. Bokhari, Assignment Problems in Parallel and Distributed Computing, Springer Science & Business Media, 2012.
[17] V. Chaudhary, J.K. Aggarwal, A generalized scheme for mapping parallel algorithms, IEEE Transactions on Parallel and Distributed Systems, 4 (1993) 328-346.
[18] C.M. Woodside, G.G. Monforton, Fast allocation of processes in distributed and parallel systems, IEEE Transactions on Parallel and Distributed Systems, 4 (1993) 164-174.
[19] W.W. Chu, L.J. Holloway, M.T. Lan, K. Efe, Task allocation in distributed data processing, Computer, 13 (1980) 57-69.
[20] M. Kafil, I. Ahmad, Optimal task assignment in heterogeneous distributed computing systems, IEEE Concurrency, 6 (1998) 42-50.
[21] T. Chockalingam, S. Arunkumar, Genetic algorithm based heuristics for the mapping problem, Computers & Operations Research, 22 (1995) 55-64.
[22] S. Salcedo-Sanz, Y. Xu, X. Yao, Hybrid meta-heuristics algorithms for task assignment in heterogeneous computing systems, Computers & Operations Research, 33 (2006) 820-835.
[23] E.S. Hou, N. Ansari, H. Ren, A genetic algorithm for multiprocessor scheduling, IEEE Transactions on Parallel and Distributed Systems, 5 (1994) 113-120.
[24] G. Attiya, Y. Hamam, Task allocation for maximizing reliability of distributed systems: a simulated annealing approach, Journal of Parallel and Distributed Computing, 66 (2006) 1259-1266.
[25] Y. Hamam, K.S. Hindi, Assignment of program modules to processors: A simulated annealing approach, European Journal of Operational Research, 122 (2000) 509-513.
[26] S.Y. Ho, H.S. Lin, W.H. Liauh, S.J. Ho, OPSO: Orthogonal particle swarm optimization and its application to task assignment problems, IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 38 (2008) 288-298.
[27] C.E. Shannon, W. Weaver, The Mathematical Theory of Communication, University of Illinois Press, 2015.
[28] M. Beenamol, S. Prabavathy, J. Mohanalin, Wavelet based seismic signal de-noising using Shannon and Tsallis entropy, Computers & Mathematics with Applications, 64 (2012) 3580-3593.
[29] N. Aquino, A. Flores-Riveros, J. Rivas-Silva, Shannon and Fisher entropies for a hydrogen atom under soft spherical confinement, Physics Letters A, 377 (2013) 2062-2068.
[30] H.H. Bafroui, A. Ohadi, Application of wavelet energy and Shannon entropy for feature extraction in gearbox fault detection under varying speed conditions, Neurocomputing, 133 (2014) 437-445.
[31] C.H. Lin, Y.K. Ho, Shannon information entropy in position space for two-electron atomic systems, Chemical Physics Letters, 633 (2015) 261-264.
[32] V. Aguiar, I. Guedes, Shannon entropy, Fisher information and uncertainty relations for log-periodic oscillators, Physica A: Statistical Mechanics and its Applications, 423 (2015) 72-79.
[33] V.d.P.R. da Silva, A.F. Belo Filho, R.S.R. Almeida, R.M. de Holanda, J.H.B. da Cunha Campos, Shannon information entropy for assessing space-time variability of rainfall and streamflow in semiarid region, Science of The Total Environment, 544 (2016) 330-338.
[34] C.M. Lai, W.C. Yeh, C.Y. Chang, Gene selection using information gain and improved simplified swarm optimization, Neurocomputing, 218 (2016) 331-338.
[35] W.C. Yeh, A two-stage discrete particle swarm optimization for the problem of multiple multi-level redundancy allocation in series systems, Expert Systems with Applications, 36 (2009) 9192-9200.
[36] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS '95), IEEE, 1995, pp. 39-43.
[37] C.M. Lai, W.C. Yeh, Two-stage simplified swarm optimization for the redundancy allocation problem in a multi-state bridge system, Reliability Engineering & System Safety, 156 (2016) 148-158.
[38] W.C. Yeh, C.M. Lai, K.H. Chang, A novel hybrid clustering approach based on K-harmonic means using robust design, Neurocomputing, 173 (2016) 1720-1732.
[39] W.C. Yeh, C.M. Lai, Accelerated simplified swarm optimization with exploitation search scheme for data clustering, PLoS ONE, 10 (2015) e0137246.
[40] W.C. Yeh, An improved simplified swarm optimization, Knowledge-Based Systems, 82 (2015) 60-69.
[41] W.C. Yeh, Orthogonal simplified swarm optimization for the series-parallel redundancy allocation problem with a mix of components, Knowledge-Based Systems, 64 (2014) 1-12.

Biography

Chyh-Ming Lai is an assistant professor at the Institute of Resources Management and Decision Science, Management College, National Defense University, Taipei, Taiwan. He received his Ph.D. degree from the Department of Industrial Engineering and Engineering Management at National Tsing Hua University. His research interests are evolutionary computation, data mining and network reliability theory.

Wei-Chang Yeh is a professor in the Department of Industrial Engineering and Engineering Management at National Tsing Hua University (NTHU), Hsinchu, Taiwan. He received his M.S. and Ph.D. from the Department of Industrial Engineering at the University of Texas at Arlington. His research interests include network reliability theory, graph theory, the deadlock problem, and scheduling. Dr. Yeh is a member of IEEE and INFORMS and has received awards for his research achievements from the National Science Council.

Yen-Cheng Huang completed his M.S. degree in the Department of Industrial Engineering and Engineering Management at National Tsing Hua University (NTHU), Hsinchu, Taiwan. He received his B.S. degree from National Cheng Kung University. His research interest is evolutionary computation.


Fig. 1. The interaction graph for Example 1

Fig. 2. Flowchart of the proposed ESSO


Fig. 3. The box plots of the fitness values for all compared methods in Ex-2 on Ntsk = 5-30

Fig. 4. The box plots of the fitness values for all compared methods in Ex-2 on Ntsk = 50-125


Table 1. Data for Example 1

task i   wi   ui   eij (processor j)        cik (task k)
                   j=1    j=2    j=3        k=1   k=2   k=3   k=4   k=5
1        10   7    24     67     121        0     10    0*    12    43
2        21   33   80     46     3          0     0     7     43    0*
3        38   17   70     108    99         0     0     0     0*    37
4        45   29   14     127    184        0     0     0     0     11
5        20   20   120    65     21         0     0     0     0     0

* There is no communication need between the two tasks.

Table 2. The calculation results for Example 2

Solutions Xi                 Task j   p_j1     p_j2     p_j3     Hj       Lj
X1 = (2, 3, 2, 1, 3)         1        0.5000   0.5000   0        1.0000   L1 = {1, 2, 3}
X2 = (2, 3, 1, 1, 3)         2        0        0.1667   0.8333   0.6500   L2 = {3, 2, 1}
X3 = (2, 3, 3, 1, 2)         3        0.1667   0.3333   0.5000   1.4591   L3 = {3, 2, 1}
X4 = (1, 3, 3, 2, 3)         4        0.8333   0.1667   0        0.6500   L4 = {1, 2, 3}
X5 = (1, 2, 3, 1, 3)         5        0        0.1667   0.8333   0.6500   L5 = {3, 2, 1}
X6 = (1, 3, 2, 1, 3)

Table 3. The value settings for the problem set

(Ntsk, Nprs): (5, 3), (10, 6), (20, 12), (30, 18), (50, 30), (75, 45), (100, 60), (125, 75)
d: (0.3, 0.5, 0.8)
[min, max] ranges: eij ∈ [1, 200]; cik ∈ [1, 50]; wi, ui ∈ [1, 50]; Mk, Rk ∈ [50, 250]


Table 4. The main effects of the different treatments for Nels and Nunc

Statistic   Nels      Nunc = 0.1   Nunc = 0.4   Nunc = 0.7   Nunc = 1     Average
Favg        0.1       0.5223b      0.2079b      0.1056b      0.0555ab     0.2228
            0.4       0.5653       0.2464       0.1254       0.0717a      0.2522
            0.7       0.6113       0.2960       0.1609       0.0847a      0.2882
            Average   0.5663       0.2501       0.1306       0.0706
Tavg        0.1       0.8839       0.3724       0.2105       0.1272a      0.3985
            0.4       0.4194       0.2041       0.1156b      0.0645a      0.2009
            0.7       0.3451b      0.1879b      0.1159       0.0634ab     0.1781
            Average   0.5495       0.2548       0.1473       0.0850
Fstd        0.1       0.0437       0.0303       0.0274       0.0158a      0.0293
            0.4       0.0367b      0.0195b      0.0109b      0.0093ab     0.0191
            0.7       0.0557       0.0279       0.0156       0.0132a      0.0281
            Average   0.0454       0.0259       0.0180       0.0128
Tstd        0.1       0.0218       0.0083       0.0209       0.0177a      0.0172
            0.4       0.0154       0.0153b      0.0254       0.0133ab     0.0173
            0.7       0.0127b      0.0172       0.0116ab     0.0152       0.0142
            Average   0.0166       0.0136       0.0193       0.0154

a, b: the best value among the values of the same row and column, respectively.

Table 5. The ANOVA for Ex-1

Group   Source      DF    SS        MS        F value   P value
F       Nels        2     12.351    6.176     51.15     0.000
        Nunc        3     632.388   210.796   1745.89   0.000
        Nels*Nunc   6     1.839     0.307     2.54      0.024
        Error       108   13.040    0.121
        Total       119   659.618
        S = 0.3475, R2 = 98.02%, R2adj = 97.82%
T       Nels        2     169.319   84.660    2068.88   0.000
        Nunc        3     549.185   183.062   4473.60   0.000
        Nels*Nunc   6     118.837   19.806    484.02    0.000
        Error       108   4.419     0.041
        Total       119   841.761
        S = 0.2023, R2 = 99.47%, R2adj = 99.42%

Table 6. The performance of each designed treatment on the 12 selected problems

                      Nels = 0.1                             Nels = 0.4                             Nels = 0.7
Problem   Criterion   Nunc=0.1  0.4      0.7      1          0.1      0.4      0.7      1          0.1      0.4      0.7      1
10        Favg        0.0531    0.0266   0.0009   0.0002     0.0590   0.0000   0.0000   0.0000     0.0930   0.0000   0.0000   0.0000
          Tavg        0.9337    0.4742   0.2972   0.2176     0.4662   0.1458   0.0724   0.0508     0.3174   0.0912   0.0386   0.0275
30        Favg        0.5123    0.3258   0.2153   0.1303     0.5088   0.2972   0.1764   0.1447     0.5980   0.3358   0.1940   0.1528
          Tavg        0.8433    0.2513   0.1531   0.1007     0.2815   0.0911   0.0569   0.0407     0.1972   0.0738   0.0377   0.0327
75        Favg        0.7231    0.2500   0.1261   0.0669     0.8351   0.3562   0.1758   0.0895     0.8649   0.4393   0.2379   0.1109
          Tavg        0.9312    0.3597   0.1689   0.0847     0.4747   0.2604   0.1300   0.0621     0.4272   0.2560   0.1544   0.0861
125       Favg        0.8008    0.2291   0.0812   0.0246     0.8573   0.3324   0.1492   0.0525     0.8890   0.4087   0.2118   0.0749
          Tavg        0.8277    0.4046   0.2228   0.1060     0.4550   0.3191   0.2029   0.1043     0.4385   0.3308   0.2326   0.1072
Average   Favg        0.5223    0.2079   0.1056   0.0555     0.5653   0.2464   0.1254   0.0717     0.6113   0.2960   0.1609   0.0847
          Tavg        0.8839    0.3724   0.2105   0.1272     0.4194   0.2041   0.1156   0.0645     0.3451   0.1879   0.1159   0.0634

Table 7. Simulation results of all algorithms when d = 0.3

(Ntsk, Nprs)  Criterion  HPSO         IDE          NGHS         SSO          ESSO
(5, 3)        Fmin       237.0000     237.0000     237.0000     237.0000     237.0000
              Favg       237.0000     237.0000     237.0000     237.0000     237.0000
              Fmax       237.0000     237.0000     237.0000     237.0000     237.0000
              Fstd       0.0000       0.0000       0.0000       0.0000       0.0000
              Tavg       0.5833       1.1391       0.4688       0.6698       0.5339
(10, 6)       Fmin       644.0000     644.0000     644.0000     644.0000     644.0000
              Favg       644.5000     645.8000     644.0000     645.6333     644.0000
              Fmax       645.0000     653.0000     644.0000     693.0000     644.0000
              Fstd       0.5085       2.4410       0.0000       8.9461       0.0000
              Tavg       1.1714       2.9146       1.5469       2.3432       1.1188
(20, 12)      Fmin       1433.0000    1392.0000    1392.0000    1392.0000    1392.0000
              Favg       1451.3000    1398.8000    1455.6000    1437.0000    1408.5333
              Fmax       1487.0000    1448.0000    1561.0000    1482.0000    1423.0000
              Fstd       15.7286      46.6909      12.3019      21.3283      15.7299
              Tavg       2.3823       9.0276       2.3281       6.3146       2.2396
(30, 18)      Fmin       3106.0000    3007.0000    2996.0000    2990.0000    2982.0000
              Favg       3164.1333    3083.1667    3091.3667    3093.8333    3000.4667
              Fmax       3229.0000    3212.0000    3377.0000    3378.0000    3046.0000
              Fstd       29.5130      57.3922      92.8824      85.3904      19.5391
              Tavg       4.2927       18.5063      4.1094       13.6589      3.8188
(50, 30)      Fmin       9244.0000    8865.0000    8955.0000    9403.0000    8785.0000
              Favg       9390.1333    9075.3667    10134.4667   9601.6667    8888.8000
              Fmax       9487.0000    9262.0000    11302.0000   9990.0000    9024.0000
              Fstd       101.7012     573.6514     64.6192      148.7048     66.7416
              Tavg       9.5964       49.0063      10.2969      37.5370      8.8693
(75, 45)      Fmin       21521.0000   20974.0000   22873.0000   23019.0000   20304.0000
              Favg       21700.7667   22069.1000   24188.7000   23359.8000   20662.8000
              Fmax       21849.0000   22830.0000   26998.0000   23785.0000   21013.0000
              Fstd       517.0526     892.3030     91.4103      183.4513     133.2246
              Tavg       23.2568      111.5307     25.2813      86.1427      21.7896
(100, 60)     Fmin       35735.0000   35796.0000   38886.0000   38622.0000   34564.0000
              Favg       35979.1000   37741.9333   42667.3000   39254.2333   34737.5000
              Fmax       36177.0000   39349.0000   66301.0000   39686.0000   34951.0000
              Fstd       741.8001     4874.7101    106.4996     266.3728     112.9265
              Tavg       49.1505      207.3531     52.5781      160.1344     46.2370
(125, 75)     Fmin       55124.0000   58428.0000   60010.0000   60361.0000   53710.0000
              Favg       55370.7000   59200.9000   64191.0000   60883.3000   54005.9667
              Fmax       55618.0000   60251.0000   83650.0000   61488.0000   54357.0000
              Fstd       504.4035     4049.6518    113.7220     299.8921     166.5091
              Tavg       88.8646      327.9646     93.8438      264.1661     86.4630

Table 8. Simulation results of all algorithms when d = 0.5

(Ntsk, Nprs)  Criterion  HPSO         IDE          NGHS         SSO          ESSO
(5, 3)        Fmin       256.0000     256.0000     256.0000     256.0000     256.0000
              Favg       256.0000     256.0000     256.0000     256.0000     256.0000
              Fmax       256.0000     256.0000     256.0000     256.0000     256.0000
              Fstd       0.0000       0.0000       0.0000       0.0000       0.0000
              Tavg       0.5755       1.1344       0.4531       0.6755       0.5427
(10, 6)       Fmin       659.0000     659.0000     659.0000     659.0000     659.0000
              Favg       664.3333     661.6667     659.0000     659.0000     659.0000
              Fmax       675.0000     675.0000     659.0000     659.0000     659.0000
              Fstd       7.6714       6.0648       0.0000       0.0000       0.0000
              Tavg       1.1156       2.9365       1.0313       1.9542       1.0875
(20, 12)      Fmin       2247.0000    2220.0000    2216.0000    2216.0000    2216.0000
              Favg       2288.1667    2293.1333    2305.3667    2292.2333    2227.9333
              Fmax       2312.0000    2364.0000    2398.0000    2380.0000    2298.0000
              Fstd       16.1759      30.4741      51.8343      44.0554      21.3217
              Tavg       2.3422       9.1255       2.3906       6.4083       2.2516
(30, 18)      Fmin       4787.0000    4825.0000    4694.0000    4765.0000    4684.0000
              Favg       4943.5667    4889.0000    5096.6000    4896.0333    4707.6333
              Fmax       5016.0000    5053.0000    5715.0000    5161.0000    4750.0000
              Fstd       49.9349      53.4835      259.7940     89.7700      22.1927
              Tavg       4.3120       18.6198      4.6094       13.8234      3.9323
(50, 30)      Fmin       13990.0000   13725.0000   13970.0000   14277.0000   13342.0000
              Favg       14240.6000   14039.0333   14568.8333   14588.6333   13678.5333
              Fmax       14390.0000   14353.0000   15870.0000   14925.0000   14056.0000
              Fstd       167.5327     544.0389     97.1151      164.8380     173.9805
              Tavg       9.8104       50.2297      10.2656      37.7615      9.0969
(75, 45)      Fmin       33163.0000   33295.0000   33501.0000   34835.0000   32015.0000
              Favg       33370.7000   34017.4667   36131.1667   35186.7000   32362.3000
              Fmax       33527.0000   34830.0000   39376.0000   35640.0000   32772.0000
              Fstd       443.8050     1218.7032    111.1756     223.6836     145.3569
              Tavg       25.7578      114.9135     27.7344      88.7958      23.5583
(100, 60)     Fmin       58729.0000   59123.0000   61744.0000   62140.0000   57349.0000
              Favg       59142.6667   60912.0667   63598.5667   62694.4000   57878.4667
              Fmax       59353.0000   62064.0000   66741.0000   63156.0000   58289.0000
              Fstd       653.7717     1298.8976    114.1488     228.9084     220.6809
              Tavg       53.6792      208.9719     56.5625      166.4224     51.8620
(125, 75)     Fmin       95135.0000   97832.0000   102578.0000  100641.0000  93267.0000
              Favg       95346.6000   99894.1333   129858.3667  101410.3000  93786.9333
              Fmax       95628.0000   101233.0000  175685.0000  102138.0000  94312.0000
              Fstd       746.1394     22996.2712   136.6275     372.7445     297.5666
              Tavg       100.7833     344.5052     105.8906     277.0172     97.9120

Table 9. Simulation results of all algorithms when d = 0.8

(Ntsk, Nprs)  Criterion  HPSO         IDE          NGHS         SSO          ESSO
(5, 3)        Fmin       268.0000     268.0000     268.0000     268.0000     268.0000
              Favg       268.0000     268.0000     268.0000     268.0000     268.0000
              Fmax       268.0000     268.0000     268.0000     268.0000     268.0000
              Fstd       0.0000       0.0000       0.0000       0.0000       0.0000
              Tavg       0.5964       1.1620       0.4844       0.6880       0.5625
(10, 6)       Fmin       929.0000     929.0000     929.0000     929.0000     929.0000
              Favg       952.6667     937.8333     929.0000     943.0333     929.0000
              Fmax       980.0000     993.0000     929.0000     993.0000     929.0000
              Fstd       23.1611      20.5763      0.0000       23.7770      0.0000
              Tavg       1.1521       2.9870       1.0469       1.9792       1.1177
(20, 12)      Fmin       3340.0000    3367.0000    3311.0000    3311.0000    3311.0000
              Favg       3467.6667    3516.1000    3548.6000    3523.0667    3447.3000
              Fmax       3549.0000    3634.0000    3802.0000    3735.0000    3521.0000
              Fstd       64.2068      100.1904     57.6609      98.3330      88.2884
              Tavg       2.4260       9.1141       2.4063       6.4823       2.2776
(30, 18)      Fmin       7982.0000    7790.0000    7883.0000    7899.0000    7787.0000
              Favg       8168.2667    8078.1000    7863.9000    8117.0333    7875.2667
              Fmax       8273.0000    8056.0000    8330.0000    8488.0000    8010.0000
              Fstd       70.7499      106.3765     56.0953      132.1724     62.5845
              Tavg       4.4526       18.9396      4.3594       13.9849      4.0516
(50, 30)      Fmin       23225.0000   22629.0000   22967.0000   22997.0000   22412.0000
              Favg       23549.9000   23066.0333   24418.4000   23709.9667   22695.5333
              Fmax       23749.0000   23547.0000   26547.0000   24213.0000   22977.0000
              Fstd       215.2480     796.6093     139.9695     269.7437     149.2325
              Tavg       10.2141      50.2125      11.1719      37.9203      9.3260
(75, 45)      Fmin       52115.0000   51514.0000   52305.0000   53337.0000   50967.0000
              Favg       52509.8667   53147.3333   54970.9000   54341.1667   51315.1000
              Fmax       52894.0000   54080.0000   57154.0000   54901.0000   51726.0000
              Fstd       606.5252     1372.1769    168.0160     338.2841     195.2974
              Tavg       24.3031      113.5568     26.7813      88.9964      22.7859
(100, 60)     Fmin       98111.0000   99621.0000   101825.0000  102245.0000  96484.0000
              Favg       98642.2000   101296.4000  104682.7333  102860.8333  97156.4333
              Fmax       99078.0000   102668.0000  110398.0000  103582.0000  97604.0000
              Fstd       688.5623     2086.9055    230.4174     327.1607     276.5948
              Tavg       50.7333      208.9656     54.5781      163.8635     48.6417
(125, 75)     Fmin       154580.0000  158706.0000  164270.0000  161291.0000  152588.0000
              Favg       155033.2667  160694.9000  167323.0667  162226.5000  153434.4667
              Fmax       155255.0000  162054.0000  197906.0000  162904.0000  153955.0000
              Fstd       845.1686     5906.6129    192.7251     393.8159     294.1465
              Tavg       92.8286      337.2255     98.0000      269.1057     89.6453

Table 10. The results of the sign test

                             ESSO         HPSO         IDE          NGHS         SSO
Average    d = 0.3           15448.1333   15992.2042   16681.5083   18326.1792   17314.0583
Favg       d = 0.5           25694.6000   26281.5792   27120.3125   31559.2375   27747.9125
           d = 0.8           42140.1375   42821.0208   43850.6667   45528.4542   44498.7000
Sign test  W-L               -            20-0         19-2         19-0         20-0
(ESSO)     p                 -            0.0000       0.0002       0.0000       0.0000

39