Neurocomputing 341 (2019) 41–59
Opposition-based multi-objective whale optimization algorithm with global grid ranking

Wan Liang Wang a,∗, Wei Kun Li a, Zheng Wang a, Li Li a,b
a School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, PR China
b Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology, Guilin 541004, PR China
Article info

Article history: Received 8 April 2017; Revised 28 July 2018; Accepted 28 February 2019; Available online 4 March 2019. Communicated by Prof. Zidong Wang.

Keywords: Evolutionary algorithms; Multi-objective optimization; Opposition-based learning; Global grid ranking; Engineering optimization
Abstract

Nature-inspired computing has attracted a lot of research effort, especially for addressing real-world multi-objective optimization problems (MOPs). This paper proposes a new nature-inspired optimization algorithm named the opposition-based multi-objective whale optimization algorithm with global grid ranking (MOWOA). The proposed approach combines several components to enhance optimization performance. First, the efficient evolution process is inherited from the single-objective whale optimization algorithm (WOA). Second, opposition-based learning (OBL) is applied in the algorithm. Meanwhile, a novel mechanism called global grid ranking (GGR), inspired by grid mechanisms, is incorporated into the proposed algorithm. To show the significance of the proposed algorithm, MOWOA is tested on a diverse set of benchmarks against a series of well-known evolutionary algorithms, and the influence of each individual strategy is also verified on 14 benchmarks. Moreover, the proposed algorithm is applied to a simple data clustering problem and a real-world water optimization problem in China. The results demonstrate that MOWOA not only performs well on benchmark problems but can also be expected to find wider application in real-world engineering problems. © 2019 Elsevier B.V. All rights reserved.
1. Introduction

In recent years, nature-inspired computing has gained considerable attention owing to its high efficiency and adaptability in solving optimization problems that are computationally difficult for conventional mathematical programming methods. Borrowing inspiration from natural evolutionary processes, these algorithms have found a variety of applications across science and engineering due to their simplicity and flexibility. Successful methodologies such as genetic algorithms (GAs) [1], evolutionary programming (EP) [2], evolution strategies (ES) [3], particle swarm optimization (PSO) [4], simulated annealing (SA) [5], ant colony optimization (ACO) [6], invasive weed optimization (IWO) [7], cuckoo search (CS) [8], biogeography-based optimization (BBO) [9], artificial bee colony (ABC) [10], the fireworks algorithm (FA) [11], the bat algorithm (BA) [12], water wave optimization (WWO) [13], the gravitational search algorithm (GSA) [14] and the harmony search algorithm (HSA) [15] have been extensively studied, and variants of these algorithms have been developed. However, real-world optimization problems usually involve multiple objectives, giving rise to so-called multi-objective optimization,
which means that there is no single solution when multiple objectives are considered as the goal of the optimization process [16]. In this case, the solutions of a multi-objective problem, which are the main focus of such algorithms, represent the trade-offs between the objectives arising from the nature of these problems [17]. Some of the most well-known stochastic optimization techniques proposed so far to address multiple objectives are the improved strength Pareto evolutionary algorithm (SPEA2) [18], the non-dominated sorting genetic algorithm version 2 (NSGA-II) [19], multi-objective particle swarm optimization (MOPSO) [20], the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [21] and the Pareto archived evolution strategy (PAES) [22]. Moreover, many novel multi-objective optimization algorithms have been proposed, such as hybrid fireworks optimization [23], hybrid teaching-learning-based multi-objective particle swarm optimization [24], a non-dominated sorting based multi-objective artificial bee colony algorithm [25], the multi-objective differential evolution algorithm (MODEA) [26], the multi-objective grey wolf algorithm (MOGWO) [27], the fast approximation-guided evolutionary multi-objective algorithm (AGE-II) [28], the multi-objective evolutionary algorithm using Gaussian process-based inverse modeling (IM-MOEA) [29], the new local search-based multi-objective optimization algorithm (NSLS) [30] and the grid-based evolutionary algorithm (GrEA) [31]. In addition, the No Free Lunch (NFL) theorem
[32], which proves that there is no single optimization technique capable of solving all optimization problems, has also become the foundation and motivation for researchers to propose new algorithms or improve existing methods. This paper proposes an opposition-based multi-objective whale optimization algorithm with global grid ranking (MOWOA) to better solve existing and new problems. The proposed algorithm inherits the good performance of WOA [33], and two mechanisms called opposition-based learning (OBL) and global grid ranking (GGR) are also integrated into this work. Furthermore, the algorithm is evaluated on a diverse set of benchmarks and on engineering problems against well-known algorithms to give objective evaluations.

The remaining part of the paper is organized as follows: Section 2 gives the definitions and preliminaries of multi-objective optimization, together with the description of the OBL and GGR techniques and the algorithmic details of MOWOA. In Section 3, numerical simulations and analysis on various benchmarks against a series of well-known algorithms are presented, followed by a real-world engineering problem of hydropower scheduling and a simple data clustering problem in Section 4. Finally, Section 5 concludes with a discussion and presents directions for future research.

2. The proposed algorithm

2.1. Multi-objective optimization problems

Without loss of generality, multi-objective optimization problems (MOPs), which involve more than one conflicting objective, are first described mathematically as follows:
min F(X) = (f1(X), f2(X), . . . , fM(X))^T
s.t. X = (x1, . . . , xn) ∈ X ⊂ R^n, Y = F(X) ∈ R^M    (1)
where the vector X belongs to the decision space X, the objective function vector F(X) includes M (M ≥ 2) objectives, Y ⊂ R^M represents the objective space, and f: R^n → R^M is the objective mapping function.

Pareto dominance: Given two vectors x, y ∈ R^n and their corresponding objective vectors F(x), F(y) ∈ R^M, x dominates y (denoted as x ≺ y) if and only if ∀i ∈ (1, 2, . . . , m), fi(x) ≤ fi(y) and ∃j ∈ (1, 2, . . . , m), fj(x) < fj(y).

Pareto optimal solution: A decision vector x ∈ R^n is said to be Pareto optimal if and only if there is no y ∈ R^n such that y ≺ x.

Pareto optimal set: The set of Pareto optimal solutions (PS) is called the Pareto optimal set: PS = {x ∈ R^n | ¬∃y ∈ R^n : y ≺ x}.

Pareto optimal front: The Pareto optimal front (PF) is defined as PF = {F(x) | x ∈ PS}.

2.2. Opposition-based learning

Random initialization, in the absence of a priori knowledge, lowers the chance of sampling better regions in population-based algorithms. Utilizing OBL, however, can produce fitter starting candidates even when there is no a priori knowledge, and enhances the probability of detecting better regions. OBL is an efficient mechanism in the optimization field due to its promising potential to improve the performance of various metaheuristic algorithms, including CS [34], MOA [35], DE [36–38] and PSO [39]. The definitions of the opposite number and the opposite point are given below.

Opposite number: Let x be a real number in an interval [a, b] (x ∈ [a, b]). The opposite number of x, denoted by x̆, is defined as follows [37]:
x̆ = a + b − x    (2)
Opposite point: Let P(x1, x2, . . . , xD) be a point in a D-dimensional space, where x1, x2, . . . , xD are real numbers and xi ∈ [ai, bi], i = 1, 2, . . . , D. The opposite point of P is denoted by P̆(x̆1, x̆2, . . . , x̆D) [37], where

x̆i = ai + bi − xi    (3)
Opposition-based optimization: Let P(x1, x2, . . . , xD), a point in a D-dimensional space with xi ∈ [ai, bi] (i = 1, 2, . . . , D), be a candidate solution. Assume f(x) is a fitness function used to measure candidate optimality. According to the opposite point definition, P̆(x̆1, x̆2, . . . , x̆D) is the opposite of P(x1, x2, . . . , xD). If f(P̆) is better than f(P), then point P can be replaced with P̆; otherwise we continue with P. Hence, the point and its opposite point are evaluated simultaneously so as to continue with the fitter one [37].

Borrowing the idea from Wang et al. [34], we conduct a novel initialization method in MOWOA with OBL, which differs from the survival selection of previous OBL-based algorithms (in which the OBL strategy chooses the Np best individuals among the original Np individuals and their Np opposites during initialization). The main steps of the procedure, illustrated in Fig. 1, are as follows:

Step 1: Divide the population P(Np) into two parts; the first half P1 is generated from a random distribution.
Step 2: The remaining half P2 is initialized in terms of the OBL described in Section 2.2:

P2 = ai + bi − P1    (4)

Step 3: The set P1 ∪ P2 is restructured as the initial population of size Np.
The OBL strategy employed in our proposed algorithm differs from that of traditional OBL-based algorithms. For initialization, the traditional OBL strategy first initializes the whole population randomly and then calculates the opposite population, while the OBL operation in our proposed algorithm first divides the population into two parts, generates one half randomly and calculates the opposites of that half, which can obtain fitter starting candidates when there is no a priori knowledge about the solution. Moreover, after the above generation and opposite-calculation steps, the two subpopulations are combined into one population, which keeps the population size unchanged during the optimization process and helps the algorithm operate efficiently. As for employing OBL in the evolutionary phase, traditional OBL-based algorithms may apply OBL with a jumping rate or jumping probability; the OBL of the proposed MOWOA instead calculates the opposite directly without a jumping rate, which enhances the probability of detecting better regions and reduces complex parameter settings, especially for real-world problems. In a word, aiming to find the global optimum with high probability, the OBL strategy, designed around the joint use of original individuals and their opposites, has been verified to be efficient in many algorithms. This strategy is employed with the purpose of accelerating convergence when there is no a priori knowledge about the solutions, thus reaching better solutions more swiftly.
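For illustration, a minimal sketch of this opposition-based initialization in Python with NumPy follows; the function name obl_initialize and the assumption of an even population size Np are ours:

import numpy as np

def obl_initialize(n_pop, dim, a, b, rng=None):
    # Opposition-based initialization (Section 2.2): half random, half opposite.
    # a, b are the lower and upper bounds of the search space.
    rng = np.random.default_rng() if rng is None else rng
    half = n_pop // 2                            # assumes an even population size Np
    p1 = rng.uniform(a, b, size=(half, dim))     # Step 1: random half P1
    p2 = a + b - p1                              # Step 2: opposite half P2, Eq. (4)
    return np.vstack((p1, p2))                   # Step 3: P1 ∪ P2 as the initial population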
2.3. Global grid ranking strategy

Borrowing ideas from Knowles and Corne [40] and Yang et al. [31], here we introduce a novel grid mechanism called global grid ranking, which has been proposed in our previous work [41]. In this mechanism, a grid is used as a frame to determine the location of individuals in the objective space. Moreover, it has the inherent property of reflecting the information of convergence and diversity
Fig. 1. A pictorial example to show the OBL operator in MOWOA.
simultaneously. The definitions of the global grid ranking strategy are given below.

Grid boundaries: The minimum and maximum values regarding the dth objective are denoted as min fd(x) and max fd(x), respectively. Then, the upper and lower boundaries ud and ld are determined as follows:

ud = max fd(x) + (max fd(x) − min fd(x)) / (2h)    (5)

ld = min fd(x) − (max fd(x) − min fd(x)) / (2h)    (6)

where h ∈ Z, h ≥ 2 is a constant parameter, the number of divisions of the objective space in each dimension, set by the user.

Grid location: The grid location of an individual in the dth objective can be denoted as:

Gd(x) = ⌈(fd(x) − ld) / dd⌉    (7)

where dd = (ud − ld)/h represents the hyperbox width in the dth objective, and ⌈·⌉ is the rounding-up (ceiling) function.

Global grid ranking: The global grid ranking is defined as the summation, over each objective, of the number of individuals whose grid location is inferior to that of the given individual:

GGR(xi) = Σ_{d=1}^{D} find(Gd(xi) < Gd(xj)),  i, j ∈ P, i ≠ j    (8)

where GGR(xi) is the global grid ranking of individual xi, D is the number of objectives, Gd(xi) is the grid coordinate of individual xi in the dth objective, and find(·) is the function counting the number of cases meeting the condition (·). The larger GGR(xi) is, the more individuals are dominated by xi. For example, in Fig. 2a, the GGR of individual p4 is 11, which is superior to (or dominates) the others.

Hybrid grid distance: The normalized Euclidean distance between an individual and the minimal boundary point of its hyperbox, called HGD, is denoted as:

HGD(xi) = [ Σ_{d=1}^{M} ( fd(xi) − (ld + (Gd(xi) − 1) · dd) )² ]^{1/2}    (9)
Due to the fact that both the grid coordinates and the GGR take integral values, some individuals may have the same grid coordinate and GGR values; in this case, the one with the smaller HGD value should be selected preferentially. For example, in Fig. 2b, the GGR values of individuals p5 and p6 are the same; however, the HGD of p5 is smaller than that of p6, which means p6 should be omitted and p5 preferentially selected. Compared with GrEA, both algorithms share the idea of constructing a grid mechanism; however, the GGR strategy uses the rounding-up function to establish the grid location, while the grid location in GrEA is built with the floor function. Therefore, the calculation of the Euclidean distance between an individual and the minimal boundary point is totally different. Furthermore, the GGR in MOWOA considers global information; to be specific, the GGR is established from the global density of the individuals, which can effectively evaluate the distribution of the grid at the global level. On the contrary, the grid mechanism in GrEA only considers the information of neighbors. Generally speaking, the GGR strategy is designed with the purpose of assisting the algorithm to obtain good convergence and distribution. In particular, through the adoption of GGR, the algorithm is expected to filter individuals efficiently and promote the efficiency of obtaining non-dominated solutions.
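A compact sketch of the GGR and HGD computations of Eqs. (5)–(9) follows (Python with NumPy; the function and variable names are ours, and minimization of every objective is assumed):

import numpy as np

def ggr_and_hgd(F, h):
    # F: (N, D) objective values of the population; h: number of grid divisions.
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    margin = (fmax - fmin) / (2 * h)
    lb, ub = fmin - margin, fmax + margin       # grid boundaries, Eqs. (5)-(6)
    dd = (ub - lb) / h                          # hyperbox width per objective
    G = np.ceil((F - lb) / dd).astype(int)      # grid location, Eq. (7)
    N, D = F.shape
    ggr = np.zeros(N, dtype=int)
    for d in range(D):
        for i in range(N):
            # number of individuals j whose grid coordinate is worse in objective d
            ggr[i] += int(np.sum(G[i, d] < G[:, d]))   # Eq. (8)
    # distance to the minimal boundary point of each individual's hyperbox, Eq. (9)
    hgd = np.sqrt(np.sum((F - (lb + (G - 1) * dd)) ** 2, axis=1))
    return ggr, hgd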
Fig. 2. A case of global grid ranking (a) and hybrid grid distance (b).
2.4. Whale optimization operator

In this section, the mathematical model of the whale operator is first provided. Inspired by the hunting behavior of whales, the operator is divided into three parts: encircling prey, the spiral bubble-net feeding maneuver, and the search for prey [33,42]. The mathematical model of each step is presented as follows:

Encircling prey: Whales can always recognize the location of prey and encircle them. Here we assume that the current best candidate solution is the target prey or close to it; after the best whale is defined, the other whales (search agents) try to update their positions towards it. The model is as follows:

C = |A · X*(t) − X(t)|    (10)

X(t + 1) = X*(t) − B · C    (11)

A = 2 · r
B = 2r · a − a

where X(t) is the position vector and X*(t) is the position vector of the best solution obtained so far; a is linearly decreased from 2 to 0, r is a random vector in [0, 1], and · denotes element-by-element multiplication.

Bubble-net attacking: To model the simultaneous behavior in which whales swim around the prey within a shrinking circle and along a spiral-shaped path at the same time, we assume that there is a probability of 0.5 of choosing between either the shrinking encircling mechanism or the spiral model to update the position [33]. The mathematical model is as follows:

C = |X*(t) − X(t)|    (12)

X(t + 1) = X*(t) − α · C,               if p < 0.5
X(t + 1) = C · e^α · cos(2πα) + X*(t),  if p ≥ 0.5    (13)

where C is the distance of the ith whale to the prey (the best solution obtained so far), α is a random number in [−1, 1], and p is a random number in [0, 1].

Prey search: Here we use α with random values greater than 1 or less than −1 to force a search agent to move far away from a reference whale [33]. This emphasizes exploration and allows the MOWOA algorithm to perform a global search. The details are as follows:

C = |A · Xrand(t) − X(t)|    (14)

X(t + 1) = Xrand(t) − α · C    (15)

where Xrand(t) is a random position vector (a random whale) chosen from the current population, and A is the same as in Eq. (10).
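A sketch of one position update following Eqs. (10)–(15) is given below (Python with NumPy). Drawing α in the manner of the WOA coefficient, so that |α| ≥ 1 can occur early in the run, is our assumption, since the paper leaves the sampling implicit:

import numpy as np

def whale_update(x, x_best, x_rand, a, rng=None):
    # x: current agent; x_best: best solution obtained so far;
    # x_rand: a random whale; a: scalar linearly decreased from 2 to 0.
    rng = np.random.default_rng() if rng is None else rng
    dim = x.shape[0]
    p = rng.random()                       # choose shrinking vs. spiral move
    alpha = 2 * a * rng.random() - a       # assumption: sampled as in WOA, so |alpha| >= 1 early on
    A = 2 * rng.random(dim)                # A = 2·r
    B = 2 * rng.random(dim) * a - a        # B = 2r·a − a
    if p < 0.5:
        if abs(alpha) < 1:                 # encircling prey, Eqs. (10)-(11)
            C = np.abs(A * x_best - x)
            return x_best - B * C
        C = np.abs(A * x_rand - x)         # search for prey, Eqs. (14)-(15)
        return x_rand - alpha * C
    ell = rng.uniform(-1.0, 1.0)           # spiral parameter in [-1, 1]
    C = np.abs(x_best - x)                 # spiral bubble-net attack, Eqs. (12)-(13)
    return C * np.exp(ell) * np.cos(2 * np.pi * ell) + x_best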
2.5. Archive strategy

It is important to preserve and maintain non-dominated solutions in MOWOA, which means that solutions should be chosen and omitted in an effective way to improve the distribution of the external archive. The general approach is to establish an external archive to lodge the non-dominated solutions. When the number of non-inferior solutions reaches a predetermined value, the archive has to be trimmed to improve the search efficiency of the algorithm and maintain the diversity of solutions, so as to offer a greater search space [43]. Such approaches include clustering-based schemes [44,45] and niching-based (or crowding) techniques [46]. Here we set the size of the external archive to Np, which is also the size of the population, and fetch the top 50% of solutions, ranked by the GGR strategy of Section 2.3, into the external archive. Since the distribution of the entire population changes within the external archive, it is necessary to re-calculate the global grid ranking over the Np members to obtain the optimal front. Combining the OBL strategy, the GGR method and the whale operator, the detailed steps of the MOWOA algorithm are given in Algorithm 1:
Algorithm 1: Opposition-based multi-objective whale optimization algorithm with global grid ranking
Data: population size Np, external archive size Np, objective number M.
Result: optimal front.
begin
    Initialization: generate Np search agents with the OBL as shown in Section 2.2
    initialize algorithm parameters
    initialize the external archive ← null
    while termination criterion not fulfilled do
        for each search agent do
            apply the GGR strategy of Section 2.3
            if p < 0.5 then
                if |α| < 1 then
                    update the position by Eqs. (10) and (11)
                else if |α| ≥ 1 then
                    select a random search agent
                    update the position by Eqs. (14) and (15)
                end
            else if p ≥ 0.5 then
                update the position by Eq. (13)
            end
        end
        calculate the opposite search agents according to the current search agents with OBL
        calculate the objective values of all search agents and find the non-inferior solutions
        use the strategy of Section 2.5 to maintain and update the archive
    end
    return the external archive
end
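As a companion to Algorithm 1, a sketch of the archive-maintenance step of Section 2.5 follows, reusing the ggr_and_hgd sketch from Section 2.3; the helper name and the descending-GGR ordering are our assumptions:

import numpy as np

def truncate_archive(F_archive, capacity, h):
    # Rank archive members by GGR (larger is better) and break ties
    # by smaller HGD, then keep the best `capacity` members.
    ggr, hgd = ggr_and_hgd(F_archive, h)
    order = np.lexsort((hgd, -ggr))   # primary key: GGR descending; secondary: HGD ascending
    return order[:capacity]           # indices of the retained archive members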
2.6. Convergence and complexity

The MOWOA algorithm inherits all the characteristics of WOA; therefore, the convergence of MOWOA is guaranteed, as it utilizes the same mathematical model that WOA uses to search for optimal solutions, whose convergence has been guaranteed according to [33]. The main difference is that MOWOA utilizes the GGR and OBL strategies to search around a set of external archive members, while WOA only saves and improves the best solutions. The computational complexity of MOWOA is O(MN²), where M is the number of objectives and N is the number of individuals in the population. This complexity is identical to that of well-known algorithms including MOPSO, NSGA-II, PAES, and SPEA2.

3. Experimental results and discussion
Table 1
Parameters for NSGA-II, SPEA2, MOEA/D, MOPSO, AGE-II, IM-MOEA, NSLS, MOGWO, MODEA, GrEA and MOWOA.

Algorithm        Parameter                     Value
NSGA-II, SPEA2   Crossover probability         1
                 Mutation probability          0.1
MOEA/D           Probability                   1
                 Differential weight           0.5
                 Mutation distribution index   20
                 Neighborhood size             30
MOPSO            nGrid                         30
                 Inertia weight                0.5
                 c1                            1
                 c2                            2
AGE-II           epsilon                       0.1
IM-MOEA          K                             10
NSLS             μ                             0.5
                 σ                             0.1
MOGWO            alpha                         0.1
                 nGrid                         10
                 beta                          4
                 gamma                         2
MODEA            alpha                         0.1
                 f                             0.5
                 Cr                            0.3
GrEA             alpha                         0.1
                 div                           10
MOWOA            h                             10
3.1. Parameters and instances

In this section, experiments with four well-known and six state-of-the-art MOEAs, namely NSGA-II, SPEA2, MOEA/D, MOPSO, AGE-II, IM-MOEA, NSLS, MOGWO, MODEA and GrEA, are conducted in order to evaluate the performance of the proposed MOWOA on standard benchmark test problems, including the ZDT{1,2,3,4,6} suite [47], the UF{4,5,6,7,8,9,10} suite [48] and the DTLZ{1,2} suite [49]. The selected test problems cover linear, convex, concave, and disconnected PFs with two or three optimization objectives. The parameter settings are shown in Table 1.

3.2. Performance metrics

In order to assess both how closely the Pareto front produced by MOWOA approaches the global Pareto front and how widely the found solutions spread, three metrics are selected for the quantitative assessment of the optimization algorithms:

a. Inverted generational distance (IGD): Let P* denote a set of uniformly distributed solutions in the objective space along the Pareto front, and let P be the approximation to the PF obtained by the algorithm. The IGD is defined as [50]:

IGD(P, P*) = ( Σ_{i=1}^{|P*|} dist(Pi*, P) ) / |P*|    (16)

where dist(Pi*, P) is the Euclidean distance between a point Pi* ∈ P* and its nearest neighbor in P, and |P*| is the cardinality of P*. It follows from the definition that the lower the IGD value, the better P approximates the whole Pareto front; a small IGD value indicates good convergence and distribution, and an IGD of 0 indicates that the obtained Pareto front contains every point of the true Pareto front.

b. Spacing metric (SP): measures the range variance of neighboring solutions in the non-dominated set by comparison with the solutions converged to the true Pareto front. This metric is defined as [51]:
S = [ (1/(n−1)) Σ_{i=1}^{n} (d̄ − di)² ]^{1/2}    (17)

where di = min_j ( |f1(Xi) − f1(Xj)| + |f2(Xi) − f2(Xj)| ) is the minimal distance between two solutions, d̄ = (1/n) Σ_{i=1}^{n} di is the mean value of all di, and n is the number of vectors in the Pareto front. A smaller value of SP suggests a uniform distribution of solutions in the non-dominated set, as the solutions will be equally spaced; SP equal to 0 means that all solutions on the Pareto front are equidistantly placed.
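For reference, a direct transcription of Eqs. (16) and (17) is sketched below (Python with NumPy; the function names are ours):

import numpy as np

def igd(P_star, P):
    # Eq. (16): mean distance from each reference point in P* to its
    # nearest neighbor in the obtained approximation P.
    d = np.linalg.norm(P_star[:, None, :] - P[None, :, :], axis=2)
    return d.min(axis=1).mean()

def spacing(P):
    # Eq. (17), with d_i the L1 distance between objective vectors
    # as in the definition above.
    d = np.abs(P[:, None, :] - P[None, :, :]).sum(axis=2)
    np.fill_diagonal(d, np.inf)       # exclude a solution's distance to itself
    di = d.min(axis=1)
    return np.sqrt(((di.mean() - di) ** 2).sum() / (len(P) - 1))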
Table 2
IGD metric results on bi-objective instances (mean/std.; "+", "=" and "−" indicate that MOWOA is better than, similar to, or worse than the competitor; w/t/l: wins/ties/losses of MOWOA).

Problem  MOWOA              MOPSO                  MOEA/D                 SPEA2                  NSGA-II
ZDT1     1.01E−03/1.30E−03  7.34E−02/1.72E−03 (=)  8.27E−02/2.32E−02 (+)  1.67E−02/2.75E−03 (=)  1.89E−02/3.48E−03 (+)
ZDT2     1.13E−03/2.12E−03  7.19E−03/8.05E−04 (+)  4.12E−03/2.17E−03 (=)  2.77E−02/1.31E−02 (=)  3.14E−02/1.93E−02 (+)
ZDT3     1.57E−03/3.01E−04  3.71E−03/2.63E−03 (+)  1.90E−03/1.62E−03 (=)  6.23E−03/8.83E−03 (=)  3.26E−03/4.40E−03 (=)
ZDT4     1.74E−03/1.31E−03  7.26E−03/6.23E−03 (+)  1.32E−03/6.34E−03 (=)  6.18E−03/4.24E−03 (+)  5.27E−03/2.14E−03 (+)
ZDT6     1.67E−03/4.22E−03  1.55E−03/1.08E−03 (=)  1.52E−03/1.67E−03 (=)  3.87E−02/5.39E−02 (+)  4.98E−02/6.54E−02 (+)
UF4      1.44E−02/8.10E−03  6.70E−02/2.51E−02 (=)  4.26E−02/3.06E−02 (=)  3.39E−02/2.89E−02 (=)  7.59E−02/3.80E−02 (=)
UF5      1.92E−02/4.52E−02  3.97E−01/5.04E−01 (+)  2.26E−01/2.86E−01 (+)  6.78E−01/2.01E−01 (+)  6.89E−01/2.07E−01 (+)
UF6      6.33E−02/2.24E−02  9.25E−01/3.99E−01 (+)  5.13E−01/1.14E−01 (=)  4.18E−01/1.14E−01 (+)  3.51E−01/9.34E−02 (+)
UF7      3.81E−02/1.17E−01  3.59E−01/4.30E−01 (+)  4.83E−01/6.30E−01 (=)  1.94E−01/4.10E−01 (+)  2.36E−01/3.70E−01 (+)
w/t/l    —                  6/3/0                  2/7/0                  5/4/0                  7/2/0
c. Generalized spread (GSpread): measures the extent of spread achieved among the obtained solutions. As the original spread metric [19] works only for bi-objective problems, here we use an extension to tri-objective problems [52]:

GSpread(G, G*) = ( Σ_{i=1}^{m} d(gi, G) + Σ_{X∈G*} |d(X, G) − d̄| ) / ( Σ_{i=1}^{m} d(gi, G) + |G*| · d̄ )    (18)

where d̄ = (1/|G*|) Σ_{X∈G*} d(X, G), d(X, G) = min_{Y∈G, Y≠X} ||F(X) − F(Y)||, and {g1, g2, . . . , gm} are the extreme solutions in G*. This metric directly measures the spread of the solutions obtained by an algorithm. Clearly, a lower GSpread value is preferable; for the most widely and uniformly spread-out set of non-dominated solutions, the GSpread metric tends to zero.
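A sketch of Eq. (18) follows (Python with NumPy); taking the per-objective extreme points of G* as the extreme solutions gi is our assumption:

import numpy as np

def generalized_spread(G_star, G):
    # Eq. (18): spread of the obtained set G with respect to the reference set G*.
    def d(x, S):
        dist = np.linalg.norm(S - x, axis=1)
        return dist[dist > 0].min()   # nearest neighbor, excluding x itself (Y != X)
    extremes = G_star[np.argmax(G_star, axis=0)]      # assumption: one extreme point per objective
    d_ext = sum(d(g, G) for g in extremes)
    dists = np.array([d(x, G) for x in G_star])
    dbar = dists.mean()
    return (d_ext + np.abs(dists - dbar).sum()) / (d_ext + len(G_star) * dbar)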
3.3. Discussion and analysis

3.3.1. Comparison with well-known algorithms

This section provides the results obtained with the IGD, GSpread and SP metrics described in Section 3.2. Tables 2–5 and Fig. 4 present the results obtained by the candidates on the test instances. Thirty independent runs are executed for each test problem to avoid randomness. Moreover, the Wilcoxon rank sum test is adopted to compare the results obtained by MOWOA and the four compared algorithms at a significance level of 0.05. In the tables, "+", "=" and "−" indicate that the proposed algorithm is better than, similar to or worse than its competitor, respectively, and the results are summarized as "w/t/l", which denotes that MOWOA wins on w functions, ties on t functions, and loses on l functions compared with the corresponding competitor.

As per the results of the algorithms on ZDT1 and ZDT2 in Table 2, MOWOA is able to outperform the others on average. Although the best GSpread results on ZDT1 and ZDT2 are achieved by SPEA2 and MOEA/D, respectively, the Wilcoxon rank sum test
results show that the performance of the MOWOA algorithm is similar to them. The shapes of the best Pareto optimal fronts obtained by the five algorithms on ZDT3 are illustrated in Fig. 3. From the figure, it may be observed that SPEA2 and NSGA-II show weak convergence, whereas MOWOA, MOPSO and MOEA/D provide very good convergence toward the true Pareto optimal fronts. Moreover, as the box plot is helpful for identifying outliers as well as comparing distributions of discrete data, it can specifically present the spacing metric: in Fig. 4, the box plot of SP on ZDT3 shows that MOWOA has the better distribution of obtained solutions along the approximated Pareto front. The second and third rows in Fig. 3 show the best Pareto optimal fronts achieved by the five algorithms on the last two ZDT benchmark functions, ZDT4 and ZDT6, in which the shapes of the fronts are convex and disconnected. NSGA-II, MOPSO and SPEA2 still tend to perform worse than the others on ZDT4. The interesting part is that the convergence of SPEA2 and NSGA-II is poor on ZDT4, while their distribution is much better than that of MOPSO. Comparing the results of MOWOA, MOPSO and MOEA/D on the ZDT6 test function, it can be observed that the convergence of these algorithms is almost similar. Furthermore, the distributions on ZDT4 and ZDT6 in Fig. 4 and Table 4 show that MOWOA is much better than SPEA2 and NSGA-II. On the ZDT test suite, each algorithm can obtain a front close to the PF from an overall perspective, but differences between them remain; compared with the other algorithms, MOWOA has advantages in both distribution and convergence. As for the statistical results of the IGD and GSpread metrics on UF4, MOWOA tends to obtain good convergence and coverage. Noting that the best mean GSpread on UF4 belongs to MOEA/D, MOWOA still outperforms the other algorithms. Fig. 4 shows the SP statistical results for the algorithms on the test problems, which indicate that the superiority of MOWOA in distribution is not significant on UF4; however, the result of the MOWOA algorithm in convergence is better than the others. The UF5 and
Table 3
IGD metric results on tri-objective instances (mean/std.; symbols as in Table 2).

Problem  MOWOA              MOPSO                  MOEA/D                 SPEA2                  NSGA-II
UF8      3.64E−03/1.18E−03  6.53E−02/3.17E−03 (+)  3.21E−03/4.08E−03 (=)  2.07E−03/7.5E−03 (=)   4.55E−03/5.36E−03 (+)
UF9      1.16E−03/5.22E−02  4.42E−02/4.86E−02 (+)  5.44E−03/4.17E−02 (+)  3.38E−03/4.48E−02 (+)  8.01E−03/5.12E−02 (+)
UF10     2.32E−03/1.36E−03  4.92E−03/3.30E−03 (+)  5.81E−03/2.24E−03 (=)  7.48E−03/3.05E−03 (+)  6.93E−03/6.93E−03 (+)
DTLZ1    6.29E−03/1.33E−03  6.14E−01/4.94E−02 (+)  7.89E−03/5.12E−03 (=)  8.09E−03/5.60E−03 (+)  5.45E−01/8.89E−02 (+)
DTLZ2    6.56E−03/1.61E−04  7.01E−03/5.44E−03 (=)  7.06E−03/8.53E−03 (=)  5.32E−02/7.53E−03 (=)  7.17E−02/7.11E−03 (=)
w/t/l    —                  4/1/0                  1/4/0                  3/2/0                  4/1/0
Table 4
GSpread metric results on bi-objective instances (mean/std.; symbols as in Table 2).

Problem  MOWOA              MOPSO                  MOEA/D                 SPEA2                  NSGA-II
ZDT1     7.56E−01/3.44E−02  8.18E−01/8.71E−02 (=)  8.23E−01/1.31E−01 (=)  7.39E−01/8.16E−02 (=)  8.01E−01/1.01E−01 (+)
ZDT2     7.53E−01/5.32E−02  7.62E−01/1.55E−01 (+)  7.08E−01/9.80E−02 (=)  7.85E−01/3.57E−02 (=)  8.08E−01/3.70E−02 (=)
ZDT3     4.03E−01/4.33E−02  6.13E−01/7.72E−02 (+)  4.23E−01/6.30E−02 (+)  5.04E−01/7.51E−02 (=)  5.99E−01/8.97E−02 (+)
ZDT4     3.65E−01/4.65E−02  8.92E−01/1.14E−01 (+)  3.11E−01/5.74E−02 (=)  5.48E−01/1.67E−01 (=)  5.05E−01/1.60E−01 (=)
ZDT6     1.23E−01/2.23E−02  1.20E−01/2.09E−02 (=)  1.02E−01/1.87E−02 (=)  7.17E−01/9.77E−02 (+)  7.90E−01/1.32E−01 (+)
UF4      4.66E−01/4.67E−02  6.61E−01/3.66E−02 (+)  3.18E−01/4.47E−02 (−)  5.82E−01/6.32E−02 (+)  4.66E−01/3.36E−02 (=)
UF5      9.66E−01/8.66E−02  1.00E+00/1.04E−01 (+)  1.00E+00/8.46E−02 (+)  1.05E+00/9.34E−02 (+)  1.04E+00/5.36E−02 (+)
UF6      7.69E−01/5.65E−02  9.64E−01/5.85E−02 (+)  7.01E−01/1.94E−02 (=)  9.94E−01/7.57E−02 (+)  9.71E−01/5.60E−02 (+)
UF7      6.84E−01/5.67E−02  8.61E−01/9.80E−02 (+)  6.00E−01/4.71E−02 (=)  7.96E−01/2.49E−01 (+)  8.53E−01/2.38E−01 (+)
w/t/l    —                  7/2/0                  2/6/1                  5/4/0                  6/3/0
Table 5
GSpread metric results on tri-objective instances (mean/std.; symbols as in Table 2).

Problem  MOWOA              MOPSO                  MOEA/D                 SPEA2                  NSGA-II
UF8      6.02E−02/4.56E−02  8.92E−01/4.96E−02 (+)  8.11E−02/1.75E−01 (=)  3.99E−01/7.26E−02 (=)  8.99E−01/5.02E−02 (+)
UF9      4.55E−01/5.66E−02  6.87E−01/5.92E−02 (+)  9.05E−01/6.75E−02 (+)  4.91E−01/7.25E−02 (=)  7.49E−01/6.60E−02 (+)
UF10     6.45E−01/6.77E−02  9.36E−01/8.61E−02 (+)  7.01E−01/1.62E−02 (=)  7.25E−01/1.35E−01 (+)  6.96E−01/1.07E−01 (=)
DTLZ1    4.33E−01/2.99E−01  8.97E−01/1.14E−01 (+)  4.52E−01/2.70E−01 (=)  6.98E−01/2.56E−01 (+)  9.03E−01/1.76E−01 (+)
DTLZ2    1.04E−01/1.33E−03  2.02E−01/3.26E−02 (=)  1.72E−01/3.66E−03 (=)  1.33E−01/1.05E−02 (=)  5.16E−01/5.21E−02 (+)
w/t/l    —                  4/1/0                  1/4/0                  2/3/0                  4/1/0
Fig. 3. Pareto fronts of some peer algorithms on some relatively hard test instances.
Fig. 4. Spacing metric on ZDT{1,2,3,4,6}, UF{4,5,6,7,8,9,10}, DTLZ{1,2} problems.
Table 6
IGD metric results obtained by MOWOA, AGE-II, IM-MOEA, NSLS, MOGWO, MODEA and GrEA on the ZDT test problems (mean/std.; symbols as in Table 2).

Problem  MOWOA              AGE-II                 IM-MOEA                NSLS                   MOGWO                  MODEA                  GrEA
ZDT1     1.01E−03/1.30E−03  3.40E−02/1.62E−01 (+)  3.51E−02/2.54E−02 (+)  4.29E−03/5.12E−05 (=)  1.42E−02/4.70E−03 (+)  4.41E−03/3.10E−03 (=)  1.94E−03/8.31E−02 (=)
ZDT2     1.13E−03/2.12E−03  6.82E−02/9.76E−02 (+)  5.65E−02/3.58E−02 (+)  3.90E−03/1.91E−05 (=)  1.66E−02/1.53E−01 (+)  3.94E−03/3.16E−02 (=)  4.85E−03/1.56E−01 (=)
ZDT3     1.57E−03/3.01E−04  3.16E−02/1.31E−01 (+)  3.55E−02/2.87E−02 (+)  4.99E−03/4.61E−05 (=)  1.26E−02/3.50E−03 (+)  4.63E−03/2.10E−03 (=)  1.77E−03/7.82E−02 (=)
ZDT4     1.74E−03/1.31E−03  3.42E−02/2.42E−01 (=)  1.02E−02/7.49E−04 (=)  1.59E−01/9.79E−02 (+)  9.79E−02/4.00E−03 (+)  5.16E−02/3.44E−03 (+)  2.97E−02/2.11E−01 (+)
ZDT6     1.67E−03/4.22E−03  4.80E−03/3.45E−02 (=)  2.52E−02/1.55E−01 (+)  2.07E−03/1.16E−04 (=)  1.73E−02/9.34E−02 (+)  2.59E−03/6.51E−02 (=)  1.12E−03/5.05E−02 (=)
w/t/l    —                  3/2/0                  4/1/0                  1/4/0                  5/0/0                  1/4/0                  1/4/0
UF6 test problems have discontinuous search spaces, which makes them challenging for the algorithms. The obtained Pareto optimal solutions in Fig. 3 and the statistical results in Tables 2 and 4 show poor convergence and coverage for all of the algorithms; however, the results of MOWOA are better than the others. As may be observed in Fig. 3, MOWOA tends to find the multiple disconnected regions of the true Pareto optimal front, whereas MOPSO, MOEA/D, SPEA2 and NSGA-II only move toward one part. This proves that the proposed MOWOA algorithm has the potential to provide superior results on UF5 and UF6. As UF7 has a linear Pareto optimal front, it is easier for the algorithms to converge towards the true Pareto optimal front. Although MOEA/D is slightly superior to the proposed algorithm in distribution, the statistical results suggest that MOWOA has better performance on UF7. The tri-objective UF{8,9,10} test suites, which have complex Pareto fronts, have been applied to validate the proposed algorithm. Tables 3 and 5 and Figs. 3 and 4 show that all the well-known algorithms MOWOA, MOPSO, MOEA/D, SPEA2 and NSGA-II can obtain convergence and coverage; however, the performance of these algorithms is much worse than on the ZDT test suite, especially for MOPSO and NSGA-II. The results of the IGD and GSpread metrics for the UF{8,9,10} problems show that MOWOA performs better than the others. As for the DTLZ{1,2} suite, which has multiple local PFs and traps, the results are a little different compared with the UF suite. It is obvious that MOWOA is able to find a uniform distribution of solutions on each DTLZ test problem; in Figs. 3 and 4, MOWOA clearly outperforms the other algorithms, and Tables 3 and 5 show that MOWOA has promising convergence performance as well as a good distribution on DTLZ1 and DTLZ2. Since the DTLZ test problems can be used to examine the ability to converge to the global Pareto front, MOWOA has validated its potential to find the Pareto optimal front.
3.3.2. Comparison with state-of-the-art algorithms

To further evaluate the performance of the proposed MOWOA, this section provides results obtained against the state-of-the-art algorithms AGE-II, IM-MOEA, NSLS, MOGWO, MODEA and GrEA. Thirty independent runs are executed for these algorithms on the 14 test problems, with the Wilcoxon rank sum test at a significance level of 0.05. The statistical results of the IGD metric are shown in Tables 6 and 7. As Table 6 shows, MOWOA, NSLS, MODEA and GrEA obtain relatively good performance compared with the other three algorithms on the ZDT test problems; however, the proposed MOWOA still outperforms NSLS, MODEA and GrEA, especially on ZDT4, as the Wilcoxon rank sum test shows. As for Table 7, MODEA and NSLS obtain the best IGD values on the UF6 and UF7 test problems, respectively; however, the proposed MOWOA still outperforms the other algorithms on most test problems. Furthermore, although MOGWO integrates a grid-based mechanism, MOWOA tends to be superior to it on these test instances, which may be attributed to the good performance and construction of the GGR and OBL strategies in our MOWOA. As for the statistical results of GSpread on the 14 test problems in Tables 8 and 9, the performance of MODEA is rather good, especially on ZDT1 and ZDT2, where MODEA obtains the best values. Although the performance of NSLS on GSpread is not as good as on the IGD metric, the best GSpread value on UF5 is still obtained by NSLS. In particular, the best value on UF10 is obtained by MOGWO, which indicates that the performance of MOGWO in Table 9 is much better than in Table 6. However, MOWOA obtains the best values on most test instances and is superior to most algorithms. Furthermore, the Wilcoxon rank sum tests in these four tables give objective statistical results which further indicate that the proposed algorithm can obtain good convergence and distribution.

Since the spacing metric is designed to measure the range variance of neighboring solutions in the non-dominated set by comparison with the solutions converged to the true Pareto front, the performance score [53] is introduced to rank the algorithms and quantify how well each algorithm performs with respect to SP overall. Given a specific test problem, suppose there are k algorithms Alg1, Alg2, . . . , Algk involved in the comparison. δ_{i,j} is set to 1 if Algj is significantly better than Algi in terms of SP, and 0 otherwise. Thereafter, for each algorithm Algi, the performance score P(Algi) is determined as follows:

P(Algi) = Σ_{j=1, j≠i}^{k} δ_{i,j}    (19)

This value represents the number of algorithms which perform significantly better than the corresponding algorithm on the entire set of tested instances. Clearly, the smaller the score, the better the algorithm, and zero means that no other algorithm is significantly better in terms of the spacing indicator. Fig. 5 shows the average performance score over all 14 test problems among the eleven algorithms, together with the rank of each algorithm according to the score. As the figure shows, MOWOA is in the first place and NSLS is in the last place among the eleven algorithms.
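A one-line transcription of Eq. (19) is sketched below (Python with NumPy); the delta matrix is assumed to be precomputed from the pairwise significance tests:

import numpy as np

def performance_scores(delta):
    # delta[i, j] = 1 if Alg_j is significantly better than Alg_i in terms of SP.
    d = np.array(delta, dtype=int)
    np.fill_diagonal(d, 0)     # the sum in Eq. (19) excludes j = i
    return d.sum(axis=1)       # P(Alg_i); smaller is better, 0 means no algorithm beats Alg_i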
Table 7
IGD metric results obtained by MOWOA, AGE-II, IM-MOEA, NSLS, MOGWO, MODEA and GrEA on the UF, DTLZ1 and DTLZ2 test problems (mean/std.; symbols as in Table 2).

Problem  MOWOA              AGE-II                 IM-MOEA                NSLS                   MOGWO                  MODEA                  GrEA
UF4      1.44E−02/8.10E−03  9.44E−02/4.42E−03 (=)  1.18E−01/4.53E−03 (+)  2.95E−02/1.51E−03 (=)  6.39E−02/7.74E−02 (=)  4.19E−02/2.13E−03 (=)  7.51E−02/3.70E−03 (=)
UF5      1.92E−02/4.52E−02  1.02E−01/3.48E−01 (=)  1.72E−01/2.60E−01 (=)  7.00E−02/5.10E−03 (=)  7.21E−01/3.79E−01 (+)  1.47E−01/3.11E−03 (+)  5.46E−01/1.20E−01 (+)
UF6      6.33E−02/2.24E−02  5.32E−01/2.06E−01 (+)  7.64E−01/1.27E−01 (+)  4.50E−02/1.24E−02 (=)  7.77E−01/1.51E−01 (+)  2.43E−02/1.94E−02 (=)  4.25E−01/1.10E−01 (+)
UF7      3.81E−02/1.17E−01  3.22E−01/1.61E−01 (=)  2.57E−01/7.99E−02 (=)  7.21E−03/1.38E−03 (−)  6.38E−02/7.72E−02 (=)  1.69E−02/4.31E−02 (=)  1.51E−01/1.11E−02 (+)
UF8      3.64E−03/1.18E−03  4.88E−02/7.12E−02 (+)  2.88E−02/5.95E−03 (=)  6.40E−02/5.79E−02 (+)  4.66E−01/1.51E+00 (+)  1.14E−01/5.68E−03 (+)  4.99E−02/6.67E−02 (+)
UF9      1.16E−03/5.22E−02  4.87E−02/8.68E−02 (+)  5.12E−02/2.45E−02 (+)  3.76E−02/1.88E−03 (=)  2.04E−01/1.18E−01 (+)  7.92E−02/6.52E−02 (+)  3.94E−02/9.28E−02 (=)
UF10     2.32E−03/1.36E−03  1.58E−02/6.82E−02 (=)  6.18E−02/3.17E−02 (=)  2.24E−01/4.14E−02 (+)  1.94E+00/6.10E−01 (+)  3.12E−01/1.69E−02 (+)  8.37E−02/2.75E−02 (=)
DTLZ1    6.29E−03/1.33E−03  5.49E−02/4.42E−02 (=)  6.71E−02/2.33E−02 (+)  3.56E−02/8.87E−02 (=)  5.66E−01/4.39E−02 (+)  4.55E−02/7.44E−02 (+)  1.55E−02/1.68E−02 (=)
DTLZ2    6.56E−03/1.61E−04  9.75E−02/3.93E−03 (+)  1.03E−02/3.64E−03 (=)  7.47E−02/3.44E−03 (+)  7.35E−02/5.19E−02 (+)  3.55E−02/1.94E−03 (+)  6.60E−03/1.72E−03 (=)
w/t/l    —                  4/5/0                  4/5/0                  3/5/1                  7/2/0                  6/3/0                  4/5/0
Table 8
GSpread metric results on bi-objective problems by MOWOA, AGE-II, IM-MOEA, NSLS, MOGWO, MODEA and GrEA (mean/std.; symbols as in Table 2).

Problem  MOWOA              AGE-II                 IM-MOEA                NSLS                   MOGWO                  MODEA                  GrEA
ZDT1     7.56E−01/3.44E−02  8.77E−01/8.76E−02 (=)  1.23E+00/3.50E−01 (+)  1.07E+00/4.21E−02 (+)  1.29E+00/1.71E−01 (+)  3.33E−01/4.16E−02 (=)  7.91E−01/7.86E−02 (=)
ZDT2     7.53E−01/5.23E−02  1.31E+00/4.11E−01 (+)  1.16E+00/2.53E−01 (+)  1.03E+00/6.77E−02 (+)  1.14E+00/1.82E−01 (+)  3.22E−01/7.33E−02 (−)  9.94E−01/5.80E−02 (=)
ZDT3     4.03E−01/4.33E−02  8.76E−01/6.25E−02 (=)  9.34E−01/2.16E−01 (+)  1.01E+00/7.81E−02 (+)  9.67E−01/1.21E−01 (+)  7.30E−01/8.15E−02 (+)  8.02E−01/9.77E−02 (=)
ZDT4     3.65E−01/4.65E−02  1.28E+00/3.78E−01 (+)  8.30E−01/5.34E−01 (=)  1.34E+00/3.66E−02 (+)  8.43E−01/3.12E−02 (=)  3.73E−01/5.26E−02 (=)  9.23E−01/2.08E−01 (+)
ZDT6     1.23E−01/2.23E−02  1.12E+00/3.94E−01 (+)  9.90E−01/6.78E−02 (=)  1.21E+00/3.69E−01 (+)  1.20E+00/2.32E−01 (+)  3.03E−01/6.77E−01 (=)  7.69E−01/1.21E−01 (+)
UF4      4.66E−01/4.67E−02  9.44E−01/1.33E−01 (+)  6.27E−01/5.80E−02 (=)  6.20E−01/6.69E−02 (=)  8.20E−01/1.12E−01 (+)  6.77E−01/7.49E−02 (=)  7.58E−01/6.08E−02 (=)
UF5      9.66E−01/8.66E−02  1.06E+00/6.81E−02 (=)  9.48E−01/4.04E−02 (=)  8.72E−01/6.40E−02 (−)  1.00E+00/2.22E−01 (+)  9.89E−01/8.43E−02 (=)  1.05E+00/7.78E−02 (=)
UF6      7.69E−01/5.65E−02  1.01E+00/7.49E−02 (+)  9.21E−01/4.06E−02 (=)  9.11E−01/1.28E−01 (+)  9.57E−01/1.78E−01 (+)  9.71E−01/4.28E−01 (+)  9.61E−01/6.15E−02 (=)
UF7      6.84E−01/5.67E−02  9.48E−01/1.70E−01 (+)  9.43E−01/6.54E−02 (=)  8.10E−01/9.99E−02 (=)  8.71E−01/1.16E−01 (=)  9.79E−01/5.69E−02 (+)  8.44E−01/1.25E−01 (+)
w/t/l    —                  6/3/0                  3/6/0                  6/2/1                  7/2/0                  3/5/1                  3/6/0
Furthermore, in order to better visualize the performance, the average performance score summarized for the different test problems in terms of spacing is presented in Fig. 6. As the figure shows, the proposed MOWOA works well on nearly all the considered test problems. For the majority of the ZDT test problems, MOWOA obtains the best SP value among the eleven algorithms, while for the UF4 and UF7 test problems the performance of MOWOA is not as satisfactory as on the ZDT test problems. Moreover, although MOWOA is slightly worse than part of the algorithms on the DTLZ1 and DTLZ2 test problems, it still outperforms most algorithms, such as NSLS, AGE-II, IM-MOEA, SPEA2, MOPSO and MOEA/D. In summary, the whale operator allows the whales to re-position themselves towards the best solutions obtained so far; moreover, the exploration ability and convergence are emphasized by the employed OBL and GGR mechanisms, and the results of this section reveal the high efficiency of the proposed MOWOA algorithm.
Table 9
GSpread metric results on tri-objective problems by MOWOA, AGE-II, IM-MOEA, NSLS, MOGWO, MODEA and GrEA (mean/std.; symbols as in Table 2).

Problem  MOWOA              AGE-II                 IM-MOEA                NSLS                   MOGWO                  MODEA                  GrEA
UF8      6.02E−02/4.56E−02  8.89E−01/8.26E−02 (=)  7.74E−01/8.44E−02 (=)  7.06E−01/1.19E−01 (+)  8.21E−01/1.35E−01 (+)  6.54E−01/4.39E−01 (+)  9.37E−01/4.85E−02 (=)
UF9      4.55E−01/5.66E−02  9.70E−01/8.36E−02 (=)  7.86E−01/7.89E−02 (=)  8.18E−01/1.54E−01 (+)  6.77E−01/8.21E−02 (=)  8.55E−01/3.39E−01 (+)  8.03E−01/6.29E−02 (=)
UF10     6.45E−01/6.77E−02  7.31E−01/9.31E−02 (=)  6.35E−01/8.08E−02 (=)  8.00E−01/8.17E−02 (=)  6.25E−01/7.92E−02 (=)  7.99E−01/3.17E−02 (=)  7.32E−01/7.51E−02 (=)
DTLZ1    4.33E−01/2.99E−01  1.42E+00/4.47E−01 (+)  8.36E−01/9.46E−02 (=)  6.34E−01/1.03E−01 (=)  5.69E−01/1.07E−01 (=)  6.31E−01/7.46E−01 (=)  8.10E−01/2.95E−01 (+)
DTLZ2    1.04E−01/1.33E−03  7.07E−01/5.86E−02 (+)  4.98E−01/4.22E−02 (=)  2.53E−01/4.21E−02 (=)  8.76E−01/2.59E−02 (+)  6.92E−01/5.62E−02 (+)  4.68E−01/3.75E−02 (=)
w/t/l    —                  2/3/0                  0/5/0                  2/3/0                  2/3/0                  3/2/0                  1/4/0
Fig. 5. The ranking and score of the average performance obtained by each compared algorithm in terms of SP.
3.3.3. Parameter study

In MOWOA, the grid division parameter h is introduced to partition the grid environment. Here, we present results for the ZDT3, UF4, UF10 and DTLZ2 test problems to investigate the effect of h and to provide a fitting setting for the user. To study the sensitivity of h, we repeat the experiments with h ∈ [5, 30]. Moreover, we generate a series of random values for the other control parameters and keep them unchanged throughout the test. Fig. 7 shows that the IGD value varies regularly with the number of divisions: the performance trajectory decreases rapidly from 5 divisions to around 9 and then rises towards the boundary. The sensitivity of the algorithm on bi-objective and tri-objective problems is similar; however, a variation of h may still change the performance of the algorithm, which indicates that an appropriate setting of the divisions should be chosen for the problem at hand. On the other hand, the best values are distinct but similar in general: MOWOA performs best at h ∈ [9, 11] for both bi-objective and tri-objective problems.
3.3.4. The influence of OBL and GGR

As described in Section 2, the two strategies OBL and GGR are combined with the whale optimization method. In order to investigate the influence of each strategy, three variants are generated as follows: MOWOA1 (MOWOA without the OBL strategy), MOWOA2 (MOWOA without the GGR strategy) and MOWOA3 (MOWOA without both the OBL and GGR strategies). Their performance on the 14 benchmarks is shown in Tables 10 and 11. It shows that both the OBL and GGR strategies have the ability to assist the algorithm in obtaining better convergence and distribution. From Tables 10 and 11, MOWOA1 and MOWOA2 are well capable of finding the best IGD values in most cases; however, the performance of MOWOA1 is slightly better than that of MOWOA2 under the Wilcoxon rank sum test, which indicates that the GGR strategy has a stronger influence on convergence than the OBL strategy. This is attributed to the well-designed structure of the grid mechanism and the effectiveness of global grid
ranking. As for the GSpread of the three variants on the bi-objective and tri-objective test instances, the results in Tables 10 and 11 are similar on most instances according to the Wilcoxon rank sum test; however, the OBL strategy still shows advantages over the GGR strategy in helping the algorithm obtain a good distribution. More specifically, owing to the opposite operation, the OBL strategy is able to help the algorithm explore better regions and obtain good diversity more efficiently. Moreover, although the performance of MOWOA3 is not as good as the others, its IGD and GSpread values still show trends towards better convergence and distribution on part of the test instances, which can be attributed to the good potential of the whale operator. Above all, with the help of the OBL and GGR strategies and the potential of the whale operator, the proposed MOWOA is able to obtain good performance on different kinds of problems.
Fig. 6. Average performance scores obtained by the eleven algorithms on the different test problems in terms of SP; Zx stands for ZDT, Ux for UF and Dx for DTLZ. The values of the proposed MOWOA are connected by a red solid line.
Fig. 7. IGD of MOWOA with different numbers of divisions h on ZDT3, UF4, UF10 and DTLZ2.
Table 10
Mean function values of IGD and GSpread obtained by the MOWOA variants on bi-objective instances (mean/std.; "+", "=" and "−" indicate that MOWOA is better than, similar to, or worse than the variant).

IGD
Problem  MOWOA              MOWOA1                 MOWOA2                 MOWOA3
ZDT1     1.01E−03/1.30E−03  6.94E−02/6.18E−03 (=)  2.42E−02/2.69E−02 (+)  7.08E−02/1.40E−03 (+)
ZDT2     1.13E−03/2.12E−03  6.19E−02/5.52E−03 (+)  1.63E−02/1.69E−02 (=)  7.06E−02/1.20E−03 (+)
ZDT3     1.57E−03/3.01E−04  5.97E−02/4.45E−02 (+)  8.31E−02/1.25E−02 (+)  6.62E−02/1.62E−02 (+)
ZDT4     1.74E−03/1.31E−03  2.67E−03/3.47E−03 (=)  1.24E−01/1.08E−03 (+)  2.11E−02/1.50E−03 (+)
ZDT6     1.67E−03/4.22E−03  6.57E−02/1.51E−02 (+)  9.19E−02/7.68E−02 (+)  1.74E−01/8.70E−03 (+)
UF4      1.44E−02/8.10E−03  1.01E−01/7.89E−03 (+)  6.01E−02/7.39E−02 (=)  6.53E−02/1.40E−03 (=)
UF5      1.92E−02/4.52E−02  4.20E−02/3.68E−02 (=)  4.83E−02/8.58E−04 (=)  4.45E−01/1.03E−01 (+)
UF6      6.33E−02/2.24E−02  1.13E−02/1.24E−02 (=)  1.57E−02/1.73E−02 (=)  7.02E−01/1.66E−02 (=)
UF7      3.81E−02/1.17E−01  6.24E−02/2.05E−02 (=)  1.27E−01/7.67E−02 (+)  4.01E−01/9.60E−03 (=)

GSpread
Problem  MOWOA              MOWOA1                 MOWOA2                 MOWOA3
ZDT1     7.56E−01/3.44E−02  8.62E−01/1.19E−02 (+)  7.65E−01/6.03E−02 (=)  7.72E−01/7.20E−03 (=)
ZDT2     7.53E−01/5.32E−02  7.60E−01/6.20E−03 (=)  8.53E−01/5.80E−02 (=)  7.78E−01/7.70E−03 (=)
ZDT3     4.03E−01/4.33E−02  5.05E−01/2.89E−02 (=)  4.13E−01/6.74E−04 (=)  5.19E−01/2.19E−02 (=)
ZDT4     3.65E−01/4.65E−02  4.65E−01/7.87E−03 (=)  3.73E−01/3.43E−03 (=)  7.47E−01/8.50E−03 (+)
ZDT6     1.23E−01/2.23E−02  1.32E−01/7.02E−03 (=)  2.31E−01/1.36E−03 (=)  4.05E−01/3.36E−02 (+)
UF4      4.66E−01/4.67E−02  4.75E−01/5.67E−02 (=)  5.70E−01/1.59E−02 (=)  6.39E−01/1.10E−02 (+)
UF5      9.66E−01/8.66E−02  1.07E+00/4.42E−03 (+)  9.68E−01/1.37E−03 (=)  1.61E+00/1.10E−01 (+)
UF6      7.69E−01/5.65E−02  8.79E−01/6.44E−04 (=)  7.69E−01/9.02E−04 (=)  1.32E+00/1.21E−02 (+)
UF7      6.84E−01/5.66E−02  7.86E−01/7.67E−02 (+)  6.88E−01/4.97E−02 (=)  7.73E−01/3.35E−02 (=)
Table 11
Mean function values of IGD and GSpread obtained by the MOWOA variants on tri-objective instances (mean/std.; symbols as in Table 10).

IGD
Problem  MOWOA              MOWOA1                 MOWOA2                 MOWOA3
UF8      3.64E−03/1.18E−03  5.66E−03/5.07E−03 (=)  1.69E−02/7.83E−02 (=)  2.55E−01/3.83E−02 (=)
UF9      1.16E−03/5.22E−02  2.30E−02/3.11E−02 (+)  9.14E−02/5.35E−02 (+)  3.44E−01/5.87E−02 (+)
UF10     2.32E−03/1.36E−03  7.33E−02/6.96E−02 (=)  8.64E−02/7.10E−03 (=)  2.34E+00/4.74E−01 (+)
DTLZ1    6.29E−03/1.33E−03  8.13E−02/6.45E−03 (=)  9.74E−02/4.35E−02 (+)  9.77E−02/7.30E−03 (=)
DTLZ2    6.56E−03/1.61E−04  2.63E−02/6.42E−02 (=)  5.35E−02/2.42E−02 (+)  5.61E−02/8.30E−03 (+)

GSpread
Problem  MOWOA              MOWOA1                 MOWOA2                 MOWOA3
UF8      6.02E−02/4.56E−02  1.63E−01/6.84E−02 (+)  6.87E−02/4.61E−03 (=)  1.07E+00/6.73E−02 (+)
UF9      4.55E−01/5.66E−02  4.60E−01/2.87E−03 (=)  5.56E−01/1.07E−03 (=)  1.58E+00/5.98E−02 (+)
UF10     6.45E−01/6.77E−02  7.62E−01/4.96E−02 (=)  6.58E−01/9.58E−02 (+)  1.94E+00/2.94E−01 (+)
DTLZ1    4.33E−01/2.99E−01  5.40E−01/7.49E−02 (=)  4.39E−01/1.85E−02 (=)  6.65E−01/5.20E−03 (=)
DTLZ2    1.04E−01/1.33E−03  2.11E−01/2.95E−02 (+)  1.09E−01/2.41E−02 (+)  3.56E−01/1.14E−02 (+)
4. Application study

4.1. Application in hydropower scheduling problem

4.1.1. Model description

In this section, MOWOA is applied to a hydropower scheduling problem, since the optimization of water resources utilization plays an important role in hydropower. Here, real data from the Jiugong Mountain basin in Hubei Province, China, are used to verify the performance of the proposed algorithm on a real engineering problem. Aiming to satisfy the water diversion demand and the downstream water requirements as well as the industry requirements, hydropower generation and the ecological water requirement, the multi-objective model is defined as follows:

Maximization of hydropower production: Power generation is a priority for the reservoir system. Taking the actual situation into consideration and fully utilizing the water resource, the objective function can be described as follows:

F = max Σ_{t=1}^{T} K Qt H̄t Δt    (20)
where F indicates the annual hydropower output, K reflects the energy loss in the process of converting potential energy into electricity, Qt and H̄t indicate the power flow and the average net head of the station during period t, respectively, and T indicates the total number of periods in one year.

Minimization of water shortage: Here, the ecological water requirement of the river is converted into an ecological water shortage to facilitate the multi-objective optimization. Therefore, the minimization of water shortage is adopted as an objective function:
F = min WQ = { (Qx − Qs) · Δt,  Qx ≤ Qs
             { 0,               Qx > Qs    (21)
Table 12
Statistical results of the median values for the hydropower scheduling problem obtained by MOWOA, MOGWO, AGE-II, MOPSO and NSGA-II over 30 different runs (PG: power generation; WS: water shortage).

         MOWOA                 MOGWO                 AGE-II                MOPSO                 NSGA-II
         PG        WS          PG        WS          PG        WS          PG        WS          PG        WS
Best     10659.14  −23422.41   10673.95  −23289.86   10623.60  −23109.08   10633.96  −23030.47   10732.20  −23290.87
Worst    10622.77  −23572.85   10601.96  −23595.01   10579.65  −23482.22   10540.64  −23426.56   10608.60  −24111.84
Median   10651.60  −23529.39   10650.08  −23558.66   10612.56  −23351.95   10616.05  −23305.59   10643.99  −23543.67
Std.     14.16     58.06       26.33     111.85      16.35     131.95      33.27     131.64      45.06     270.80
where WQ indicates the water shortage, and Qx and Qs indicate the discharged flow and the ecological flow, respectively.
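A sketch evaluating the two objectives of Eqs. (20) and (21) for a candidate schedule follows (Python with NumPy; the array layout over the T periods is our assumption):

import numpy as np

def hydropower_objectives(K, Q, H, Qx, Qs, dt):
    # Q, H, Qx, Qs: arrays over the T scheduling periods.
    power = np.sum(K * Q * H * dt)                        # annual output F, Eq. (20)
    # shortage accumulates only when the discharged flow Qx falls below
    # the ecological flow Qs, following the piecewise definition of Eq. (21)
    shortage = np.sum(np.where(Qx <= Qs, (Qx - Qs) * dt, 0.0))
    return power, shortage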
Constraints: The details can be described as follows:

V_{t+1} = Vt + (Q̄t − Qt − Qloss) · Δt
Vmin ≤ Vt ≤ Vmax
Qmin ≤ Qt ≤ Qmax
Nmin ≤ Nt ≤ Nmax
X ≥ 0

where V_{t+1} indicates the capacity of the hydropower station at the end of period t, Vt the capacity at the beginning of period t, Δt the unit of optimization time in months, Q̄t the average reservoir inflow during period t, Qt the power generation inflow during period t, and Qloss the total flow loss, comprising the discharged flow of the reservoir and the flow lost to evaporation and seepage; Vmin indicates the dead water level (the minimum level) and Vmax the maximum level in period t; Qmin and Qmax indicate the minimum and maximum discharge flows of the reservoir in period t, respectively; Nmin, Nt and Nmax indicate the firm output, average output and installed capacity of the hydropower station.

4.1.2. Scheduling results

A maximum of 1,000 iterations and a population size of 100 are chosen for MOWOA and the other competitors. Moreover, h is set to 10 and every algorithm is given 30 runs on the problem. To show the solution values obtained by the five algorithms on this engineering problem, the statistical results of the median values over 30 different runs are listed in Table 12. To obtain the results in this table, we first calculate the median values of the power generation and water shortage for each algorithm in every single run, and then compute the statistics of these median values over the 30 runs. Clearly, a larger value is preferable. From the table, the solutions obtained by MOWOA show good performance among the five algorithms. More specifically, the fluctuation of the values obtained by MOWOA is clearly smaller than for the other algorithms. Moreover, the worst power generation values obtained by MOWOA (the minimum power generation) are also better (larger) than the values obtained by the other algorithms according to the statistical results in Table 12. Furthermore, the distribution of the solutions among the five algorithms in a single run is shown in Fig. 8. To aid visualization, the water shortage is plotted as an absolute value (which means a small value of the water shortage is preferable). Each column (marked with a different color) in the figure stands for the median value of the power generation or water shortage obtained by the corresponding algorithm, and the black line on top of each column is generated from the standard deviation of the solutions obtained by that algorithm in this single run (100 solutions). As the figure shows, the median value of MOWOA in power generation
Fig. 8. The visualization of the solutions obtained by each compared algorithm in one single run.
is similar to the results obtained by NSGA-II and MOGWO, but still larger than the values obtained by MOPSO and AGE-II. As for the water shortage, MOWOA is clearly better than most algorithms, with minor fluctuation, according to the columns and the black lines on them. To further verify the good performance of MOWOA in addressing the hydropower problem, two well-known metrics named Coverage and Maximum Spread are taken into account. The definitions of these metrics are as follows:

Coverage metric (CM): The coverage of two sets of obtained solutions A, B ⊆ X is defined as follows [54]:
CM(A, B) = |{b ∈ B | ∃a ∈ A : a ≺ b}| / |B|    (22)
where CM(A, B) = 1 means that all solutions in B are dominated by A, which indicates that algorithm A outperforms algorithm B; furthermore, CM(A, B) = 0 indicates that none of the solutions in B is dominated by A.

Maximum spread (MS): The MS metric denotes the spread of the non-dominated front solutions and can be defined as follows [54]:
MS = [ Σ_{i=1}^{M} max(||xi, yi||) ]^{1/2}    (23)
where || · || computes the Euclidean distance, M is the number of objectives, xi is the maximum value of the ith objective and yi is the minimum value of the ith objective. Clearly, a larger value indicates a better coverage of the solutions. The statistical results in Table 13 report the median and the standard deviation of the MS and CM metrics obtained by these five algorithms with the Wilcoxon rank sum test, and verify that the MOWOA algorithm outperforms most of the algorithms on this hydropower problem.
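A sketch of the two metrics follows (Python with NumPy); Pareto dominance for minimization and the usual extreme-value reading of Eq. (23) are our assumptions:

import numpy as np

def dominates(a, b):
    # Pareto dominance a ≺ b, assuming minimization of every objective.
    return bool(np.all(a <= b) and np.any(a < b))

def coverage(A, B):
    # CM(A, B), Eq. (22): fraction of solutions in B dominated by at least one in A.
    return np.mean([any(dominates(a, b) for a in A) for b in B])

def maximum_spread(P):
    # MS, Eq. (23): extent between the extreme values found in each objective.
    return np.sqrt(np.sum((P.max(axis=0) - P.min(axis=0)) ** 2))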
Table 13
Results of the performance metrics obtained by MOWOA, MOGWO, AGE-II, MOPSO and NSGA-II ("+" indicates that MOWOA is significantly better; the CM values are reported against MOWOA).

Metric       MOWOA    MOGWO        AGE-II       MOPSO        NSGA-II
MS  Median   95.0893  87.5581 (+)  83.9537 (+)  84.4632 (+)  86.6381 (+)
    std.     1.4544   2.7286       2.266        6.2409       11.8317
CM  Median   \        0.7667       0.8267       0.84         0.8267
    std.     \        0.0268       0.0602       0.0439       0.0332
In addition, the high values of the CM metric also demonstrate the good performance of MOWOA in obtaining non-dominated solutions for the water resource utilization problem. To sum up, MOWOA outperforms the other algorithms on the Jiugong Mountain basin hydropower optimization problem. Despite all that, the result of the optimization, which represents the trade-off between power generation and water shortage, can only serve as a reference; in other words, the decision maker should select the solution according to the actual situation so as to maximize the comprehensive benefits.
Table 14
Description of the data sets.

Name      Number of points   Number of clusters   Number of dimensions
Iris      150                3                    4
Wine      178                3                    13
WBC       683                2                    9
Glass     214                6                    9
Long1     1000               2                    2
Sizes5    1000               4                    2
Square1   1000               4                    2
Square4   1000               4                    2
Twenty    1000               20                   2
Forty     1000               40                   2
4.2. Application in simple data clustering

4.2.1. General description
In this section, MOWOA is applied to data clustering, which further explores the performance and potential of the proposed algorithm. Moreover, four algorithms that have been applied to clustering in previous studies are adopted for comparison and analysis. Without loss of generality, a brief description of clustering is given first. Data clustering is an important technique; to be more specific, it is the process of partitioning a given set of n points into a number of groups (C_1, C_2, \ldots, C_K) based on some metric. The main objective of any data clustering technique is to produce a K \times n partition matrix of the given dataset X consisting of n patterns, where X = \{x_1, x_2, \ldots, x_n\}. The partition matrix in hard partitioning can be represented as follows:
u_{kj} = \begin{cases} 1 & \text{if } x_j \in C_k \\ 0 & \text{if } x_j \notin C_k \end{cases} \qquad (24)
where u_{kj} is the membership of pattern x_j to cluster C_k and k = 1, 2, \ldots, K. For the fuzzy partitioning of the data, the membership values satisfy the following conditions:
\forall k \in \{1, 2, \ldots, K\}: \quad 0 < \sum_{j=1}^{n} u_{kj} < n \qquad (25)

\forall j \in \{1, 2, \ldots, n\}: \quad \sum_{k=1}^{K} u_{kj} = 1 \qquad (26)

\sum_{k=1}^{K} \sum_{j=1}^{n} u_{kj} = n \qquad (27)
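To make conditions (24)-(27) concrete, the following minimal NumPy-based sketch (the helper names are illustrative, not from the paper) builds a hard partition matrix from a cluster label vector and verifies the three conditions:

```python
import numpy as np

def hard_partition_matrix(labels, K):
    """Build the K x n matrix U of Eq. (24) from a cluster label vector."""
    labels = np.asarray(labels)
    U = np.zeros((K, labels.size))
    U[labels, np.arange(labels.size)] = 1.0
    return U

def is_valid_partition(U):
    """Check conditions (25)-(27): every cluster is non-empty and not
    all-embracing, each column sums to one, and total membership is n."""
    K, n = U.shape
    rows = U.sum(axis=1)
    ok25 = bool(np.all((rows > 0) & (rows < n)))    # Eq. (25)
    ok26 = bool(np.allclose(U.sum(axis=0), 1.0))    # Eq. (26)
    ok27 = bool(np.isclose(U.sum(), n))             # Eq. (27)
    return ok25 and ok26 and ok27

U = hard_partition_matrix([0, 0, 1, 2, 1], K=3)
print(is_valid_partition(U))   # True
```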
4.2.2. Fitness functions
To investigate the data clustering quality of the algorithms, two conflicting objectives, compactness and connectedness, are considered in this problem. For the first objective, we calculate the overall deviation of a partitioning, which can be computed as follows [55]:
Dev(C) = \sum_{C_k \in C} \sum_{i \in C_k} \delta(i, \mu_k) \qquad (28)
where C is the set of all clusters, \mu_k is the centroid of cluster C_k, and \delta(\cdot) is the Euclidean distance function. Moreover, the other objective is designed to reflect cluster connectedness; for this purpose the connectivity, which evaluates the degree to which neighboring data points have been placed in the same cluster, is adopted [55]. It can be defined as follows:
Conn(C) = \sum_{i=1}^{N} \sum_{j=1}^{L} x_{i,\, nn_{ij}} \qquad (29)

where x_{a,b} = 1/j if there is no cluster C_k such that a \in C_k and b \in C_k, and x_{a,b} = 0 otherwise; nn_{ij} is the jth nearest neighbor of data point i, N is the size of the data set, and L is a parameter determining the number of neighbors that contribute to the connectivity measure.
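A minimal sketch of these two fitness functions, assuming NumPy and scikit-learn's NearestNeighbors for the neighbor lists (the function names are illustrative and not the authors' implementation):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def overall_deviation(X, labels):
    """Dev(C) of Eq. (28): summed Euclidean distance of every point
    to the centroid of its cluster (to be minimized)."""
    dev = 0.0
    for k in np.unique(labels):
        pts = X[labels == k]
        dev += np.linalg.norm(pts - pts.mean(axis=0), axis=1).sum()
    return dev

def connectivity(X, labels, L=10):
    """Conn(C) of Eq. (29): add a penalty of 1/j whenever point i and
    its j-th nearest neighbor are assigned to different clusters."""
    _, idx = NearestNeighbors(n_neighbors=L + 1).fit(X).kneighbors(X)
    conn = 0.0
    for i, neigh in enumerate(idx[:, 1:]):      # drop the point itself
        for j, b in enumerate(neigh, start=1):
            if labels[i] != labels[b]:
                conn += 1.0 / j
    return conn
```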
4.2.3. Performance measures
In addition to the GSpread metric described in Section 3.2, a simplified F-measure has also been adopted to evaluate the performance of all the data clustering algorithms quantitatively. The F-measure [56] is a balanced measure that combines precision and recall as a harmonic mean. Precision is the fraction of retrieved objects that are relevant, while recall is the fraction of relevant objects that are retrieved [57]. It can be defined as follows:

F_{k,l} = 2 \times \frac{precision(k,l) \times recall(k,l)}{precision(k,l) + recall(k,l)} \qquad (30)

where precision(k, l) = n_{kl}/n_k, n_{kl} is the number of points that belong to cluster C_k and class l, and n_k is the total number of points in cluster C_k. Moreover, the recall of cluster k with respect to class l is recall(k, l) = n_{kl}/n_l, where n_l is the number of objects in class l. For the F-measure, the optimum score is 1, and higher scores indicate better performance of the algorithm.
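For illustration, the following sketch computes such a simplified F-measure from cluster and class label vectors. Since this excerpt does not spell out how the pairwise scores F_{k,l} are aggregated, a common class-size-weighted best-match aggregation is assumed here:

```python
import numpy as np

def f_measure(cluster_labels, class_labels):
    """Simplified F-measure built from Eq. (30): for each true class l,
    take the best F_{k,l} over clusters k, then average weighted by
    class size (an assumed aggregation, not stated in the excerpt)."""
    cl = np.asarray(cluster_labels)
    cs = np.asarray(class_labels)
    total = 0.0
    for l in np.unique(cs):
        n_l = np.sum(cs == l)
        best = 0.0
        for k in np.unique(cl):
            n_k = np.sum(cl == k)
            n_kl = np.sum((cl == k) & (cs == l))
            if n_kl == 0:
                continue
            p, r = n_kl / n_k, n_kl / n_l
            best = max(best, 2 * p * r / (p + r))
        total += (n_l / cs.size) * best
    return total   # optimum is 1; higher is better

print(f_measure([0, 0, 1, 1], [0, 0, 1, 1]))   # 1.0
```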
Table 15
F-measure values on the different data sets obtained by MOWOA, MOCK, VGAPS, Average-link and K-means. For each competitor, the symbol in parentheses gives the Wilcoxon rank sum test result against MOWOA (+: MOWOA significantly better; =: no significant difference; −: MOWOA significantly worse), and the w/t/l row counts the corresponding wins/ties/losses.

Data set   Metric   MOWOA      MOCK           VGAPS          Average-link   K-means
Long1      Mean     8.12E−01   1.00E+00 (−)   4.87E−01 (+)   5.00E−01 (+)   1.00E+00 (−)
           Std.     3.32E−02   0.00E+00       2.10E−02       1.10E+01       0.00E+00
Sizes5     Mean     8.22E−01   7.91E−01 (+)   8.16E−01 (=)   2.26E−01 (+)   1.81E−01 (+)
           Std.     6.23E−02   1.20E−02       1.30E−02       2.10E−02       1.10E−02
Square1    Mean     9.65E−01   9.99E−01 (=)   9.99E−01 (=)   7.32E−01 (+)   3.68E−01 (+)
           Std.     3.10E−03   1.20E−02       1.40E−02       2.10E−02       6.00E−03
Square4    Mean     9.14E−01   8.95E−01 (=)   9.25E−01 (=)   7.15E−01 (+)   3.68E−01 (+)
           Std.     2.14E−02   1.10E−02       1.30E−02       1.50E−02       1.60E−02
Twenty     Mean     6.72E−01   1.00E+00 (−)   4.79E−01 (+)   8.09E−01 (−)   9.47E−01 (−)
           Std.     6.28E−02   0.00E+00       2.20E−02       3.00E−03       9.00E−03
Forty      Mean     9.22E−02   1.00E+00 (=)   9.50E−02 (=)   7.98E−01 (+)   9.09E−01 (=)
           Std.     3.14E−03   0.00E+00       6.00E−03       1.80E−02       2.30E−02
Iris       Mean     8.97E−01   7.75E−01 (+)   7.54E−01 (+)   8.87E−01 (=)   7.64E−01 (+)
           Std.     4.54E−03   2.20E−02       1.30E−02       1.00E−03       9.00E−03
WBC        Mean     9.75E−01   8.19E−01 (+)   9.53E−01 (=)   9.61E−01 (=)   6.88E−01 (+)
           Std.     6.96E−03   1.40E−02       1.20E−02       1.30E−02       8.00E−03
Wine       Mean     7.36E−01   7.26E−01 (=)   6.17E−01 (+)   7.09E−01 (=)   5.02E−01 (+)
           Std.     1.45E−02   2.00E−03       8.00E−03       1.10E−02       1.30E−02
Glass      Mean     5.50E−01   5.34E−01 (=)   5.34E−01 (=)   4.92E−01 (=)   4.22E−01 (=)
           Std.     3.58E−02   6.00E−03       8.00E−03       1.40E−02       7.00E−03
w/t/l               \          3/5/2          4/6/0          5/4/1          6/3/1
Fig. 9. The ranking and score of the average performance obtained by each compared algorithm in terms of the F-measure.
4.2.4. Experimental results and discussions
To further investigate the performance of MOWOA, four algorithms covering evolutionary and non-evolutionary clustering approaches that have already been applied successfully to data clustering are compared with the proposed algorithm. For the evolutionary clustering algorithms, the default parameter settings of the variable string length genetic algorithm with PS-distance-based clustering (VGAPS) [58] and the multi-objective clustering algorithm (MOCK) [59] are used as in their original papers. Furthermore, h in MOWOA is set to 10 and the population size for the three evolutionary algorithms is 100. For the non-evolutionary clustering algorithms, average-link agglomerative clustering is implemented following Voorhees (1985), and the standard implementation of the K-means algorithm is also used [60]. To compare and analyze the performance of the selected algorithms, four real-life data sets, Iris, Wine, Wisconsin Breast Cancer (WBC) and Glass, and six artificial data sets, Long1, Sizes5, Square1, Square4, Twenty and Forty, are used in the experiments. The artificial data sets contain well-separated clusters of different shapes, sizes and convexities, while the four real-life data sets, taken from the UCI machine learning repository, contain multivariate, real and integer values. A brief description of the data sets in terms of the number of points, the number of clusters and the dimensionality is presented in Table 14.
Thirty independent runs are performed for each algorithm, and the neighborhood size L for the connectedness objective is set to 10. The statistical results of the five algorithms on these ten data sets in terms of the F-measure are given in Table 15, which reports the mean and standard deviation of the F-measure obtained by each algorithm; the Wilcoxon rank sum test is also integrated into the results. As Table 15 shows, MOWOA, MOCK and VGAPS obtain all of the best values among these data sets.
For the artificial data sets, the performance of MOWOA is slightly worse than that of MOCK, especially on the Long1 and Twenty data sets. However, MOWOA still outperforms the Average-link and K-means algorithms on most artificial data sets. On the real-life data sets, MOCK and VGAPS do not perform as well as they do on the artificial data sets; moreover, the best values on these four data sets are all obtained by MOWOA. Generally speaking, the proposed MOWOA tends to outperform most of the algorithms on these data sets, especially on Sizes5 and Iris, according to the Wilcoxon rank sum test. Furthermore, to reflect the performance of the algorithms comprehensively, the performance score described in Section 3.3.2 is adopted to rank the algorithms and quantify how well each algorithm performs. Here, the performance score of the F-measure on these ten data sets is used to generate the final ranking of the algorithms, and the result is shown in Fig. 9. As the figure shows, the proposed MOWOA ranks first, which indicates its good performance and potential in simple data clustering. Although VGAPS and MOCK perform well on parts of the data sets, the score of VGAPS is slightly worse than that of MOCK. Among the non-evolutionary clustering algorithms, although Average-link and K-means achieve good performance on parts of the data sets, their ranks are slightly worse than those of the evolutionary clustering algorithms. Nevertheless, these two algorithms remain effective for the data clustering problem.
5. Conclusion
This study proposed a multi-objective optimization algorithm inspired by humpback whales. The proposed MOWOA algorithm inherits three operators, search for prey, encircling prey, and bubble-net foraging, from the original WOA algorithm. Moreover, two mechanisms named OBL and GGR, together with an appropriate external archive strategy, are incorporated into it; OBL, GGR and HGD jointly increase the selection pressure toward the optimal front. Systematic experiments on 14 mathematical benchmark functions were performed to make an extensive comparison of MOWOA with a series of well-known multi-objective algorithms, and MOWOA proved competitive with the other methods. In addition, the effect of the number of grid divisions on MOWOA was experimentally investigated; the results showed that MOWOA can achieve a good tradeoff between convergence and diversity under a proper setting. Furthermore, to verify the influence of the OBL and GGR strategies, several MOWOA variants were generated. The results show that although the OBL and GGR strategies focus on different aspects of the optimization, both effectively help the algorithm obtain the best performance across different problems. We have also used MOWOA to solve a hydropower scheduling problem and a simple data clustering problem, which demonstrates the feasibility and effectiveness of the method in real-world applications. Future work will investigate the proposed algorithm on more realistic engineering problems with characteristics different from those in our previous study; a deeper insight into reducing the algorithm's random parameters is also a focus of our future research.
Acknowledgments
This work was supported by the National Science & Technology Pillar Program during the Twelfth Five-Year Plan Period under Grant 2012BAD10B01, the National Natural Science Foundation of China under Grants 61379123 and 61572438, and the Open Fund of the Key Laboratory for Metallurgical Equipment and Control of the Ministry of Education at Wuhan University of Science and Technology.
References [1] J. Holland, Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control and artificial intelligence, Q. Rev. Biol. 6 (1) (1994) 126–137. [2] L.J. Fogel, Artificial intelligence through a simulation of evolution, Proc. of the 2nd Cybernetics Science Symp., 1965, p. 1965. [3] H.G. Beyer, H.P. Schwefel, Evolution strategies a comprehensive introduction, Natural Comput. 1 (1) (2002) 3–52. [4] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, 1995, pp. 1942–1948vol.4. [5] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, et al., Optimization by simmulated annealing, Science 220 (4598) (1983) 671–680. [6] M. Dorigo, M. Birattari, T. Stutzle, Ant colony optimization, IEEE Comput. Intell. Mag. 1 (4) (2006) 28–39. [7] A.R. Mehrabian, C. Lucas, A novel numerical optimization algorithm inspired from weed colonization, Ecol. Inf. 1 (4) (2006) 355–366. [8] X.-S. Yang, S. Deb, Cuckoo search via Lévy flights, in: Proceedings of the World Congress on Nature & Biologically Inspired Computing, NaBIC, IEEE, 2009, pp. 210–214. [9] D. Simon, Biogeography-based optimization, IEEE Trans. Evol. Comput. 12 (6) (2008) 702–713. [10] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (abc) algorithm, J. Global Optim. 39 (3) (2007) 459–471. [11] Y. Tan, Y. Zhu, Fireworks algorithm for optimization, in: Proceedings of the International Conference in Swarm Intelligence, Springer, 2010, pp. 355–364. [12] X.-S. Yang, A. Hossein Gandomi, Bat algorithm: a novel approach for global engineering optimization, Eng. Comput. 29 (5) (2012) 464–483. [13] Y.-J. Zheng, Water wave optimization: a new nature-inspired metaheuristic, Comput. Oper. Res. 55 (2015) 1–11. [14] E. Rashedi, H. Nezamabadi-Pour, S. Saryazdi, Gsa: a gravitational search algorithm, Inf. Sci. 179 (13) (2009) 2232–2248. [15] Z.W. Geem, J.H. Kim, G. Loganathan, A new heuristic optimization algorithm: harmony search, Simulation 76 (2) (2001) 60–68. [16] K. Deb, K. Sindhya, J. Hakanen, Multi-objective optimization, in: Decision Sciences: Theory and Practice, CRC Press, 2016, pp. 145–184. [17] C.A.C. Coello, G.B. Lamont, D.A. Van Veldhuizen, et al., Evolutionary Algorithms for Solving Multi-objective Problems, 5, Springer, 2007. [18] E. Zitzler, M. Laumanns, L. Thiele, SPEA2: Improving the strength Pareto evolutionary algorithm, TIK-report (2001) 103. [19] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evolut. Comput. 6 (2) (2002) 182–197. [20] C.A.C. Coello, G.T. Pulido, M.S. Lechuga, Handling multiple objectives with particle swarm optimization, IEEE Trans. Evolut. Comput. 8 (3) (2004) 256–279. [21] Q. Zhang, H. Li, MOEA/D: a multiobjective evolutionary algorithm based on decomposition, IEEE Trans. Evolut. Comput. 11 (6) (2007) 712–731. [22] J. Knowles, D. Corne, The Pareto archived evolution strategy: a new baseline algorithm for Pareto multiobjective optimisation, in: Proceedings of the Congress on Evolutionary Computation, CEC, 1, IEEE, 1999, pp. 98–105. [23] Y.-J. Zheng, X.-L. Xu, H.-F. Ling, S.-Y. Chen, A hybrid fireworks optimization method with differential evolution operators, Neurocomputing 148 (2015) 75–82. [24] T. Cheng, M. Chen, P.J. Fleming, Z. Yang, S. Gan, A novel hybrid teaching learning based multi-objective particle swarm optimization, Neurocomputing 222 (2017) 11–25. [25] A. 
Kishor, P.K. Singh, J. Prakash, NSABC: non-dominated sorting based multi-objective artificial bee colony algorithm and its application in data clustering, Neurocomputing 216 (2016) 514–533. [26] M. Ali, P. Siarry, M. Pant, An efficient differential evolution based algorithm for solving multi-objective optimization problems, Eur. J. Oper. Res. 217 (2) (2012) 404–416. [27] S. Mirjalili, S. Saremi, S.M. Mirjalili, L.d. S. Coelho, Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization, Expert Syst. Appl. 47 (2016) 106–119. [28] M. Wagner, F. Neumann, A fast approximation-guided evolutionary multi-objective algorithm, in: Proceedings of the Conference on Genetic and Evolutionary Computation, 2013, pp. 687–694. [29] R. Cheng, Y. Jin, K. Narukawa, B. Sendhoff, A multiobjective evolutionary algorithm using gaussian process-based inverse modeling, IEEE Trans. Evolut. Comput. 19 (6) (2015) 838–856. [30] B. Chen, W. Zeng, Y. Lin, D. Zhang, A new local search-based multiobjective optimization algorithm, IEEE Trans. Evolut. Comput. 19 (1) (2015) 50–73. [31] S. Yang, M. Li, X. Liu, J. Zheng, A grid-based evolutionary algorithm for many-objective optimization, IEEE Trans. Evolut. Comput. 17 (5) (2013) 721–736. [32] D.H. Wolpert, W.G. Macready, No free lunch theorems for optimization, IEEE Trans. Evolut. Comput. 1 (1) (1997) 67–82. [33] S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Softw. 95 (2016) 51–67. [34] G.-G. Wang, S. Deb, A.H. Gandomi, A.H. Alavi, Opposition-based krill herd algorithm with cauchy mutation and position clamping, Neurocomputing 177 (2016) 147–157. [35] M. Aziz, M.-H. Tayarani-N, Opposition-based magnetic optimization algorithm with parameter adaptation strategy, Swarm Evolut. Comput. 26 (2016) 97–119.
[36] S.-Y. Park, J.-J. Lee, Stochastic opposition-based learning using a beta distribution in differential evolution, IEEE Trans. Cybern. 46 (10) (2016) 2184–2194. [37] S. Rahnamayan, H.R. Tizhoosh, M.M. Salama, Opposition-based differential evolution, IEEE Trans. Evolut. Comput. 12 (1) (2008) 64–79. [38] X. Li, M. Yin, An opposition-based differential evolution algorithm for permutation flow shop scheduling based on diversity measure, Adv. Eng. Softw. 55 (2013) 10–31. [39] H. Wang, Z. Wu, S. Rahnamayan, Y. Liu, M. Ventresca, Enhancing particle swarm optimization using generalized opposition-based learning, Inf. Sci. 181 (20) (2011) 4699–4714. [40] J. Knowles, D. Corne, Properties of an adaptive archiving algorithm for storing nondominated vectors, IEEE Trans. Evol. Comput. 7 (2) (2003) 100–116. [41] L. Li, W. Wang, X. Xu, W. Li, Multi-objective particle swarm optimization based on grid ranking, J. Comput. Res. Dev. 54 (5) (2017) 1012. [42] J.A. Goldbogen, A.S. Friedlaender, J. Calambokidis, M.F. Mckenna, M. Simon, D.P. Nowacek, Integrative approaches to the study of baleen whale diving behavior, feeding performance, and foraging ecology, BioScience 63 (2) (2013) 90–100. [43] L. Li, W. Wang, X. Xu, Multi-objective particle swarm optimization based on global margin ranking, Inf. Sci. 375 (2017) 30–47. [44] G.T. Pulido, C.A.C. Coello, Using clustering techniques to improve the performance of a multi-objective particle swarm optimizer, in: Proceedings of the Genetic and Evolutionary Computation Conference, Springer, 2004, pp. 225–237. [45] U. Halder, S. Das, D. Maity, A cluster-based differential evolution algorithm with external archive for optimization in dynamic environments, IEEE Trans. Cybern. 43 (3) (2013) 881–897. [46] G.-G. Wang, A.H. Gandomi, A.H. Alavi, Study of Lagrangian and evolutionary parameters in krill herd algorithm, in: Adaptation and Hybridization in Computational Intelligence, Springer, 2015, pp. 111–128. [47] E. Zitzler, K. Deb, L. Thiele, Comparison of multiobjective evolutionary algorithms: empirical results, Evol. Comput. 8 (2) (2000) 173–195. [48] Q. Zhang, A. Zhou, S. Zhao, P.N. Suganthan, W. Liu, S. Tiwari, Multiobjective optimization test instances for the CEC 2009 special session and competition, Technical Report 264, University of Essex, Colchester, UK and Nanyang Technological University, Singapore, 2008. [49] K. Deb, L. Thiele, M. Laumanns, E. Zitzler, Scalable multi-objective optimization test problems, in: Proceedings of the 2002 Congress on Evolutionary Computation, CEC, 1, IEEE, 2002, pp. 825–830. [50] C.A.C. Coello, M.R. Sierra, A study of the parallelization of a coevolutionary multi-objective evolutionary algorithm, in: Proceedings of the Mexican International Conference on Artificial Intelligence, 2004, pp. 688–697. [51] J.R. Schott, Fault Tolerant Design Using Single and Multicriteria Genetic Algorithm Optimization, Technical Report, DTIC Document, 1995. [52] A. Zhou, Y. Jin, Q. Zhang, B. Sendhoff, E. Tsang, Combining model-based and genetics-based offspring generation for multi-objective optimization using a convergence criterion, in: Proceedings of the IEEE Congress on Evolutionary Computation, CEC, IEEE, 2006, pp. 892–899. [53] J. Bader, E. Zitzler, HypE: an algorithm for fast hypervolume-based many-objective optimization, Evolut. Comput. 19 (1) (2011) 45–76. [54] E.
Zitzler, Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications, Shaker, Ithaca, 1999. [55] J. Handl, J. Knowles, An evolutionary approach to multiobjective clustering, IEEE Trans. Evolut. Comput. 11 (1) (2007) 56–76. [56] S. Saha, S. Bandyopadhyay, A generalized automatic clustering algorithm in a multiobjective framework, Appl. Soft Comput. 13 (1) (2013) 89–108. [57] A. Kishor, P.K. Singh, J. Prakash, NSABC: non-dominated sorting based multi-objective artificial bee colony algorithm and its application in data clustering, Neurocomputing 216 (2016) 514–533. [58] S. Bandyopadhyay, S. Saha, A point symmetry-based clustering technique for automatic evolution of clusters, IEEE Trans. Knowl. Data Eng. 20 (11) (2008) 1441–1457.
[59] J. Handl, J. Knowles, An evolutionary approach to multiobjective clustering, IEEE Trans. Evolut. Comput. 11 (1) (2007) 56–76. [60] J. MacQueen, et al., Some methods for classification and analysis of multivariate observations, in: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, 1, Oakland, CA, USA, 1967, pp. 281–297.

Wanliang Wang received the Ph.D. degree in control theory and control engineering from Tongji University, Shanghai, China. Since July 1997, he has been a Professor with the School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China. He is currently the dean of the School of Computer Science and Technology of Zhejiang University of Technology, dean of the Software Institute, and director of the Key Laboratory of Visual Media Intelligent Processing Technology in Zhejiang Province. He is also a member of the Teaching Steering Committee of Computer Science and Technology of the Ministry of Education (2013-2017), a member of the Teaching Guidance Sub-Committee of the Department of Electrical Engineering and Automation, and a review expert for the National Science and Technology Award.

Weikun Li received the B.Sc. degree in computer science from the Zhejiang University of Technology, Hangzhou, China. He is now working toward the Ph.D. degree in control science and engineering at Zhejiang University of Technology, Hangzhou, China. His research interests include evolutionary computation, multi-objective optimization, dynamic optimization, and data-driven scheduling.
Zheng Wang received the M.Sc. degree in computational biology from the University of East Anglia, UK, in 2011. He is now working toward the Ph.D. degree in control science and engineering at Zhejiang University of Technology, Hangzhou, China. He is also a lecturer in the School of Computer Science & Technology, Zhejiang Institute of Mechanical & Electrical Engineering, P.R. China. His research interests include optimal scheduling of intelligent systems, intelligent control algorithms for robots, and applications of Internet of Things technology.
Li Li received the Ph.D. degree in control science and engineering from Zhejiang University of Technology, Hangzhou, China. He is now a lecturer in the School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, P.R. China. His research interests include evolutionary computation, scheduling optimization, dynamic optimization, and multi-objective optimization.