A multiple search strategies based grey wolf optimizer for solving multi-objective optimization problems




Journal Pre-proof

Junfeng Liu, Zhe Yang, Dingfang Li

PII: S0957-4174(19)30851-6
DOI: https://doi.org/10.1016/j.eswa.2019.113134
Reference: ESWA 113134

To appear in: Expert Systems With Applications

Received date: 13 July 2019
Revised date: 12 December 2019
Accepted date: 12 December 2019

Please cite this article as: Junfeng Liu, Zhe Yang, Dingfang Li, A multiple search strategies based grey wolf optimizer for solving multi-objective optimization problems, Expert Systems With Applications (2019), doi: https://doi.org/10.1016/j.eswa.2019.113134

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. © 2019 Published by Elsevier Ltd.

Highlights

• A multi-objective grey wolf optimizer algorithm with multiple search strategies is developed.
• The developed algorithm is evaluated on a series of benchmark functions.
• The performance of the algorithm is compared with well-known algorithms using various metrics.
• A novel constraint handling method for the optimal scheduling problem of cascade hydropower stations is designed.
• The algorithm is applied for the first time to the multi-objective optimal scheduling problem of cascade hydropower stations.

A multiple search strategies based grey wolf optimizer for solving multi-objective optimization problems

Junfeng Liu¹*, Zhe Yang², Dingfang Li¹

¹ School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China
² School of Computer Science, The University of Manchester, Manchester M13 9PL, UK

E-mail addresses: [email protected] (J.F. Liu), [email protected] (Z. Yang), [email protected] (D.F. Li).

Abstract: In this paper, a novel multi-objective grey wolf optimizer (MOGWO) based on multiple search strategies (i.e., adaptive chaotic mutation strategy, boundary mutation strategy, and elitism strategy), which we shall call MMOGWO, is proposed to solve multi-objective optimization problems (MOPs). The algorithm uses a fixed-sized external archive, adaptively maintained by a grid-based approach, to preserve the non-dominated solutions found during the search process. The archive is then used to define the social hierarchy and simulate the hunting behavior of grey wolves. In the proposed algorithm, an adaptive chaotic mutation strategy based on a chaotic cubic map and a modified generational distance (GD) is applied to the archive to dynamically adjust the convergence speed and balance exploration and exploitation. To prevent loss of population diversity, a boundary mutation strategy based on the concept of multi-level parallelism is employed to manage boundary constraint violations. Moreover, a non-dominated sorting and crowding distance-based elitism strategy is incorporated into the algorithm to exploit more potential Pareto optimal solutions and preserve the diversity of solutions in the approximated set. The proposed algorithm is evaluated on a wide range of MOPs and compared with other state-of-the-art multi-objective optimization algorithms in terms of commonly used performance metrics, supported by statistical analysis, the average ranks test and the Wilcoxon Signed-Rank Test (WSRT). The experimental results reveal that the algorithm is highly competitive and significantly outperforms other well-known algorithms on most of the test problems.
Having obtained satisfactory performance on the test problems, MMOGWO is further applied to the multi-objective optimal scheduling problem (MOOSP) of cascade hydropower stations (CHSs), based on a novel constraint handling method designed in this paper, in order to investigate its performance on real-world optimization problems with various constraints. Simulation results indicate that, compared with other algorithms, MMOGWO produces better quality solutions, and that it can be considered a promising alternative tool for multi-objective real-life engineering problems with complex constraints when equipped with constraint handling methods.

Keywords: Multi-objective grey wolf optimizer, Multiple search strategies, Multi-objective optimal scheduling problem, Cascade hydropower stations, Constraint handling methods.

Corresponding author: [email protected] (J.F. Liu).

1. Introduction

Optimization problems that need to optimize more than one objective simultaneously are ubiquitous in many real-world engineering applications, and they are usually called MOPs (Deb, Pratap, Agarwal, & Meyarivan, 2002). Unlike single-objective optimization problems (SOPs), however, MOPs are usually more difficult to solve. Since the objectives in MOPs are often conflicting and/or incommensurable with each other, the improvement of one objective may deteriorate other objectives. Thus, there is no single solution that can optimize all objectives simultaneously. Instead, finding a set of equally optimal solutions that represent a trade-off among all the objectives, called the Pareto optimal set (PS), becomes the alternative (Akbari, Hedayatzadeh, Ziarati, & Hassanizadeh, 2012; Zhou, Qu, Li, Zhao, Suganthan, & Zhang, 2011). Due to their population-based nature, evolutionary algorithms (EAs) can approximate the whole PS of a MOP in a single run. The use of nature-inspired meta-heuristic algorithms for tackling MOPs that are characterized by multi-modality, non-linearity and discontinuity has grown significantly in the last few years (Coello, Pulido, & Lechuga, 2004; Jones, Mirrazavi, & Tamiz, 2002; Hasançebi & Azad, 2015; Azad, 2019). Since the Vector Evaluated Genetic Algorithm (VEGA) was proposed by Schaffer in 1985 (Schaffer, 1985), thousands of research papers and dozens of books about EAs have been published, and these EAs are called Multi-objective Evolutionary Algorithms (MOEAs). Statistical data indicate that, by April 2019, 12,112 references on MOEAs for solving MOPs had been published. Among these references, 95.2% were published in the last 20 years; 47.9% are articles, 36.3% are in proceedings, and 11.5% are in collections. For details of the statistical data, please refer to the paper repository on the EMOO web site, http://delta.cs.cinvestav.mx/~ccoello/EMOO/, which is maintained by Professor C. A. Coello Coello.
Some of the representative MOEAs proposed so far are: Weight-based Genetic Algorithm (WBGA) (Hajela, & Lin, 1992), Multiple Objective Genetic Algorithm (MOGA) (Fonseca, & Fleming, 1993), Non-dominated Sorting Genetic Algorithm (NSGA) (Srinivas, & Deb, 1994), Strength Pareto Evolutionary Algorithm (SPEA) (Zitzler, & Thiele, 1999), Pareto Archived Evolution Strategy (PAES) (Knowles, & Corne, 2000), Pareto-frontier Differential Evolution (PDE) (Abbass, Sarker, & Newton, 2001), Strength Pareto Evolutionary Algorithm 2 (SPEA2) (Zitzler, Laumanns, & Thiele, 2001), Non-dominated Sorting Genetic Algorithm II (NSGA-II) (Deb, Pratap, Agarwal, & Meyarivan, 2002), Multi-objective Particle Swarm Optimization (MOPSO) (Coello Coello, Pulido, & Lechuga, 2004), and Multi-objective Evolutionary Algorithm based on Decomposition (MOEA/D) (Zhang, & Li, 2007). All of the above multi-objective optimization algorithms can effectively approximate the true Pareto optimal solutions of MOPs.

Fig. 1. Social hierarchy of grey wolves.

The grey wolf optimizer (GWO) algorithm, proposed by Mirjalili, Mirjalili, and Lewis (2014), is one of the most recent swarm intelligence algorithms. GWO consists of four main steps: (1) Population (or grey wolf population) initialization. In this step, settings of the control parameters, such as the population size and stopping criterion, are given. Subsequently, an initial population is randomly generated in the decision space. (2) Social (or leadership) hierarchy. To mathematically model the social (or leadership) hierarchy of grey wolves, the fitness of each solution (grey wolf) in the population is calculated first. Then, at each generation, the population is categorized into four different groups (i.e., alpha, beta, delta, and omega) based on fitness. In detail, the solution with the best fitness is considered the alpha (α) wolf, while the second- and third-best solutions are named the beta (β) and delta (δ) wolves, respectively. Consequently, the rest of the candidate solutions are assumed to be omega (ω) wolves. The social (or leadership) hierarchy is shown in Fig. 1. (3) Encircling prey. In this procedure, a candidate solution (grey wolf) updates its position according to the position of the prey. To avoid premature convergence, the candidate solution (grey wolf) might occasionally update its position within the space between its original position and the prey position by adjusting the values of the correlative parameters. (4) Hunting. Grey wolves can recognize the location of prey and encircle it. The hunt is usually guided by the alpha (α), beta (β) and delta (δ) wolves. However, the position of the prey (the optimum) is unknown in an abstract search space. To mathematically simulate the hunting behavior of grey wolves, we assume that the alpha (α), beta (β) and delta (δ) wolves have better knowledge about the potential location of prey.
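Steps (3) and (4) are governed by the standard GWO update equations of Mirjalili, Mirjalili, and Lewis (2014): A = 2a·r1 − a, C = 2·r2, D = |C·X_leader − X| and X ← X_leader − A·D, averaged over the three leaders. A minimal NumPy sketch (the function names, the sphere test objective and the parameter values are our illustrative choices, not from the paper):

```python
import numpy as np

def gwo_step(pack, alpha, beta, delta, a, rng):
    """One iteration of steps (3)-(4): every wolf moves toward the mean
    of three prey estimates built from the alpha/beta/delta wolves."""
    new_pack = np.empty_like(pack)
    for i, x in enumerate(pack):
        estimates = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            A = 2 * a * r1 - a            # |A| > 1 favours exploration
            C = 2 * r2                    # random emphasis on the leader
            D = np.abs(C * leader - x)    # encircling distance
            estimates.append(leader - A * D)
        new_pack[i] = np.mean(estimates, axis=0)
    return new_pack

# toy single-objective demo on the sphere function
rng = np.random.default_rng(0)
pack = rng.uniform(-5.0, 5.0, size=(12, 2))
max_iter = 60
for t in range(max_iter):
    a = 2.0 * (1 - t / max_iter)          # 'a' decays linearly from 2 to 0
    order = np.argsort((pack ** 2).sum(axis=1))
    alpha, beta, delta = pack[order[:3]].copy()
    pack = gwo_step(pack, alpha, beta, delta, a, rng)
best = float(((pack ** 2).sum(axis=1)).min())
```

With `a` decaying from 2 to 0, early iterations explore (|A| may exceed 1) and later ones exploit around the three leaders.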
Therefore, the best three solutions obtained so far are saved and oblige the other candidate solutions (grey wolves) to move towards the potential location of prey according to the positions of the three best wolves. For more details on GWO, interested readers can refer to Mirjalili, Mirjalili, and Lewis (2014). The GWO algorithm seems particularly competitive with, or superior to, other classical meta-heuristic algorithms for MOPs because it provides better quality solutions and fast convergence

speed for SOPs, as well as having fewer control parameters and not requiring derivative information in the initial search (Faris, Aljarah, Al-Betar, & Mirjalili, 2018; Lu, Gao, Li, & Xiao, 2017). In recent years, extending the original GWO for tackling MOPs has received growing attention from scientists and researchers, and some representative works have been published. Mirjalili et al. (2016) designed a MOGWO by adapting the original GWO algorithm to optimize MOPs for the first time (Mirjalili, Saremi, Mirjalili, & Coelho, 2016). In this algorithm, a fixed-sized external archive was used to save the non-dominated solutions found during the optimization process, while a grid-based approach was employed to maintain and adaptively assess the Pareto front. The MOGWO was compared with MOPSO and MOEA/D on the CEC 2009 benchmark functions in terms of three performance metrics: Inverted Generational Distance (IGD) (Sierra, & Coello Coello, 2005), Spacing (SP) (Schott, 1995) and Maximum Spread (MS) (Zitzler, 1999). Mohamed et al. (2016) presented a modified GWO algorithm for solving environmental pollution emissions and economic cost problems (Mohamed, El-Gaafary, Mohamed, & Hemeida, 2016). In this algorithm, an external memory, called a "repository", was employed to store the non-dominated solutions, and a fuzzy ranking strategy was applied to the Pareto optimal solutions to find the best compromise solutions from the trade-off curve. Lu et al. (2016) proposed a multi-objective discrete grey wolf optimizer (MODGWO) for solving the welding scheduling problem (WSP) (Lu, Xiao, Li, & Gao, 2016). In the MODGWO, an appropriate encoding and decoding scheme was designed to construct a bridge between the solution space and the search space. Moreover, they used a non-domination rank technique to define the social (or leadership) hierarchy and adopted a replacement scheme based on ranking and crowding distance to select the next generation. Later, Lu et al.
(2017) developed a hybrid multi-objective grey wolf optimizer (HMOGWO) by combining GWO and GA to solve multi-objective dynamic welding scheduling problems (MODWSP), following the framework of MODGWO (Lu, Gao, Li & Xiao, 2017). To improve exploitation and exploration, they designed a modified social hierarchy. To address the hybrid flow-shop scheduling problem (HFSP) considering noise pollution, Lu et al. (2019) also presented a multi-objective cellular grey wolf optimizer (MOCGWO) by integrating the merits of cellular automata (CA) for diversification and variable neighborhood search (VNS) for intensification (Lu, Gao, Pan, Li, & Zheng, 2019). In this algorithm, they used a permutation-based scheme to randomly initialize the population and divided the population into subpopulations using a given cellular lattice structure. To construct a social hierarchy, each subpopulation was classified into four social hierarchies according to the fitness values. An external archive was adopted and maintained by using a truncation operator (Zitzler, Laumanns, & Thiele, 2001). Moreover, a variable neighborhood search (VNS) strategy was utilized to further improve the quality of solutions (Mladenović, & Hansen, 1997). Dilip et al. (2018) suggested a MOGWO algorithm for handling optimal power flow (OPF) problems. In this algorithm, the Pareto concept, an external archive strategy, and a leader selection strategy were employed to make the algorithm converge to the true Pareto optimal front (Dilip, Bhesdadiya, Trivedi, & Jangir, 2018). Zapotecas-Martínez et al. (2019) reported an algorithm called Multi-objective Grey Wolf Optimizer based on Decomposition (MOGWO/D), which was the first attempt to employ a decomposition method in MOGWO (Zapotecas-Martínez, García-Nájera, & López-Jaimes, 2019). In this algorithm, a MOP was decomposed into a set of scalar optimization sub-problems, and these sub-problems were simultaneously optimized by the algorithm.
However, each sub-problem was optimized by only utilizing the information from its several neighboring sub-problems, which was beneficial for lowering the computational complexity (Zhang, & Li, 2007). Instead of considering a hierarchy order among the grey wolves, MOGWO/D randomly selected the alpha (α) wolf, the beta (β) wolf and the delta (δ) wolf from the subpopulation. To enhance the search ability, a polynomial-based mutation (PBM) was designed (Deb, & Agarwal, 1999). Their proposed algorithm was validated on DTLZ (DTLZ1-DTLZ7) and UF (UF1-UF10) test functions, and compared with five well-known MOEAs: MOGWO, MOEA/D, MOPSO based on Decomposition (MOPSO/D) (Peng, & Zhang, 2008), Multi-objective Artificial Bee Colony Algorithm based on Decomposition (MOABC/D) (Medina, Das, Coello Coello, & Ramírez, 2014), Multi-objective Teaching-Learning Algorithm based on Decomposition (MOTLA/D) (Medina, Das, Coello Coello, & Ramírez, 2014) and decomposition-based Multi-objective Particle Swarm Optimizer (dMOPSO) (Zapotecas-Martínez, & Coello Coello, 2011) by considering two performance metrics: Normalized Hypervolume (Hn) (Zitzler, & Thiele, 1998; Zitzler, Thiele, Laumanns, Fonseca, & da Fonseca, 2003) and Inverted Generational Distance plus (IGD+) (Ishibuchi, Masuda, Tanigaki, & Nojima, 2015). Apart from the aforementioned studies, other researchers also have published prominent works (Emary, Yamany, Hassanien, & Snasel, 2015; Li, Wang, & Chen, 2019; Nguyen, Thom, & Dao, 2017; Nuaekaew, Artrit, Pholdee, & Bureerat, 2017; Sahoo, & Chandra, 2017; Xia, Ji, Li, Xue, Wang, & Zhang, 2019; Tsai, Nguyen, & Dao, 2017). 
From the above, it becomes clear that employing GWO for solving MOPs has shown rapid development, and the common factor among all these algorithms is the use of a fixed-sized external archive to store the Pareto optimal solutions found during the optimization process, with the archive updated by applying different archiving strategies and density measures (Got, Moussaoui, & Zouache, 2020; Hu, & Yen, 2015). However, few MOGWO variants focus on applying a search strategy to the archive according to the feedback information from the non-dominated solutions preserved in it, which is a crucial operation that affects the convergence of the algorithm. At the same time, each MOP has many boundary constraints that limit the search space; when a candidate solution moves outside the search space, an approach should be followed to manage the violation of the boundary constraints. Traditionally, with per-element re-initialization, if a candidate solution moves outside the search space, each dimension of the individual's position that violates the boundary constraint is re-initialized to the violated boundary value of the search space, while the valid dimensions of the position remain the same (Helbig, & Engelbrecht, 2013b; Pampará, Engelbrecht, & Cloete, 2008). Even though MOGWO has achieved very competitive results against other well-known multi-objective optimizers, there is no systematic work in the literature investigating the best constraint handling methods for this algorithm. The majority of current works use the death penalty function, which is not always effective when the search space has a large number of infeasible regions (Faris, Aljarah, Al-Betar, & Mirjalili, 2018). The multi-objective optimal scheduling problem (MOOSP) of cascade hydropower stations (CHSs) is a high-dimensional, non-linear, multi-stage and stringently constrained optimization problem (Barros, Tsai, Yang, Lopes, & Yeh, 2003).
The operation decision of the current stage has an important effect on the future outcomes throughout the system, and the computational complexity of the solution process increases as the number of plants and constraints grows. An efficient and qualitative method that meets the demand for optimal operation is required so that the optimization results can be applied to a practical hydro system (Chau, Chuntian, & Li, 2002). However, the conventional methods still have some drawbacks, i.e., the curse of dimensionality, massive memory demand, increasing computation time and infeasible solutions. Therefore, there is a strong demand for a general and practical optimization method to deal with these problems during the optimization process (Zhang, Zhou, Ouyang, Wang, & Zhang, 2013). Also, to the best of the authors' knowledge, no paper has been reported that addresses a real-world MOOSP of CHSs using MOGWO based on a novel constraint handling method. To narrow the gap between theoretical research and practical application, in this paper we suggest a simple yet effective MOGWO using an improved GWO algorithm with multiple search strategies (i.e., adaptive chaotic mutation strategy, boundary mutation strategy, and elitism strategy), abbreviated as MMOGWO. The main contributions of the proposed MMOGWO are summarized as follows:

• An adaptive chaotic mutation strategy based on a chaotic cubic map and a modified generational distance (GD) is applied to the archive to dynamically adjust the convergence speed and balance exploration and exploitation. The effect of the adaptive chaotic mutation strategy is experimentally proven in Section 5.

• A boundary mutation strategy based on the concept of multi-level parallelism is introduced into MMOGWO to prevent loss of population diversity. The effect of the boundary mutation strategy is experimentally proven in Section 5.
• An elitism strategy is also incorporated into MMOGWO to exploit more potential Pareto optimal solutions and preserve the diversity of solutions in the approximated set. The effect of the elitism strategy is experimentally proven in Section 5.

• A novel constraint handling method is incorporated into MMOGWO to guide the population search towards the feasible regions of the search space before optimizing the MOOSP of CHSs.

The rest of this paper is organized as follows: In Section 2, we describe the MMOGWO algorithm in detail, including the framework of MMOGWO and the multiple search strategies, i.e., the adaptive chaotic mutation strategy, boundary mutation strategy, and elitism strategy. In Section 3, test problems, performance metrics and experimental comparisons between MMOGWO and some classical MOEAs are presented. Thereafter, in Section 4, the simulation results of MMOGWO in comparison with other state-of-the-art MOEAs are analyzed. In Section 5, the advantages of the multiple search strategies are studied through a simulation experiment. Based on the designed constraint handling technique, the proposed MMOGWO is applied to the MOOSP of CHSs in Section 6. Finally, we outline the conclusions of the paper in Section 7.

2. The proposed MMOGWO algorithm

The MMOGWO algorithm is designed based on the basic GWO algorithm and multiple search strategies (i.e., adaptive chaotic mutation strategy, boundary mutation strategy, and elitism strategy). It uses a fixed-sized external archive, which is responsible for maintaining the Pareto optimal solutions visited by the grey wolves. Once the archive is full, a grid mechanism is invoked to re-arrange the segmentation of the objective space, and a solution within the most crowded segment is omitted. Moreover, a leader selection mechanism is designed to select the alpha (α), beta (β) and delta (δ) wolves from the archive as the leaders of the hunting process (Mirjalili, Saremi, Mirjalili, & Coelho, 2016). The pseudo code of the MMOGWO is shown in Fig. 2. The MMOGWO algorithm consists of the following main procedures: initialization, update and maintenance of the external archive, social (leadership) hierarchy, adaptive chaotic mutation strategy, boundary mutation strategy, encircling prey, hunting prey, and elitism strategy. In the following subsections, we give a detailed explanation of each.

Algorithm: MMOGWO
Input:
  Pop, the size of the grey wolf pack;
  AS, the size of the external archive;
  maxIteration, the termination criterion.
Output:
  external archive.
1. Initialization:
  1.1 Initialize the grey wolf population with the parameter Pop;
  1.2 Initialize the external archive with the parameter AS;
  1.3 Initialize the correlative coefficient vectors a, A and C;
  1.4 Calculate the fitness value of each grey wolf; // Evaluate fitness
  1.5 Add the non-dominated solutions within the initial population to the external archive;
  1.6 Select the alpha (α), beta (β) and delta (δ) wolves from the external archive using the "roulette-wheel method"; // Perform social (leadership) hierarchy
2.
Main loop: While (t < maxIteration)
  2.1 For individual = 1 to Pop
        new individual ← update operation (position(individual, alpha, beta, delta)) with the boundary mutation strategy;
        new population.add(new individual); // Update population
      End for
  2.2 Calculate the fitness value of each grey wolf in the new population and find the Pareto optimal solutions; // Evaluate fitness
  2.3 Update the external archive with the obtained non-dominated solutions;
  2.4 Execute the adaptive chaotic mutation strategy and boundary mutation strategy on the external archive;
  2.5 Update the external archive with the obtained non-dominated solutions;
  2.6 Maintain the external archive using the archive controller and the grid;
  2.7 Update the alpha (α), beta (β) and delta (δ) wolves using the "roulette-wheel" method; the rest of the population are assumed to be omega (ω) wolves; // Perform social (leadership) hierarchy
  2.8 Update the correlative coefficient vectors a, A and C;
  2.9 Combine the population and the new population, and select the next population using the elitism strategy;
  2.10 t = t + 1.
End while
3. Return the external archive.

Fig. 2. The pseudo code of the MMOGWO algorithm.
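Several steps of the pseudo code (1.5, 2.2, 2.3 and 2.5) reduce to filtering a set of objective vectors down to its non-dominated subset. A minimal sketch of that filter for minimization (function names are our own, not from the paper):

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization):
    u is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def non_dominated(points):
    """Filter a list of objective vectors down to its Pareto front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

front = non_dominated([(1, 5), (2, 2), (5, 1), (3, 3), (6, 6)])
# (3, 3) and (6, 6) are dominated by (2, 2) and drop out
```

This quadratic scan is the simplest correct version; production implementations usually use fast non-dominated sorting as in NSGA-II.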

2.1. Initialization

In the MMOGWO algorithm, each grey wolf represents a potential solution to the considered problem, and the fitness possessed by the grey wolf corresponds to the quality of the associated solution (Kang, Li, & Ma, 2011). For an n-dimensional m-objective MOP, an initial population containing Pop solutions (grey wolves) is randomly generated in the search space [xmin,d, xmax,d], d = 1, 2, …, n, in the initialization phase. In this way, each random n-dimensional vector xi = (xi1, xi2, …, xin) that represents the state of grey wolf i is generated through the following equation:

xid = xmin,d + rand(0,1) × (xmax,d − xmin,d)   (1)

where i = 1, 2, …, Pop and d = 1, 2, …, n; rand(0,1) is a uniformly distributed random number in the range [0,1]; and xmin,d and xmax,d are the lower and upper bounds of the d-th dimension, respectively. At this point, the initialization of the grey wolf pack is accomplished (Jain, Singh, & Rani, 2019; Xiang, Zhou, & Liu, 2015). Thereafter, the fitness of each grey wolf's position is calculated by putting the values of the decision variables (solution vector) into a user-defined fitness function, and the corresponding fitness values can be expressed as:

fitij = fj(xi)   (2)

where i = 1, 2, …, Pop and j = 1, 2, …, m; fitij is the fitness value of the i-th grey wolf on the j-th objective. Next, an external archive of size AS is prepared for storing the non-dominated solutions visited by the algorithm. In the initialization stage, all non-dominated grey wolves in the pack are added to the archive, whose update and maintenance approach is presented in Section 2.2.

2.2. Update and maintenance of the external archive

Different from single-objective optimization algorithms, multi-objective optimization algorithms usually produce a set of non-dominated solutions. In the absence of preference information, none of the non-dominated solutions can be said to outperform the others. Hence, in our algorithm, an external archive is introduced as a storage unit to keep a history of the Pareto optimal solutions obtained along the search process (Coello Coello, Pulido, & Lechuga, 2004; Xiang, Zhou, & Liu, 2015; Zou, Zhu, Chen, & Zhang, 2011). This method is used in many MOEAs, such as PAES, SPEA2, NSGA-II, SMPSO (Nebro, Durillo, Garcia-Nieto, Coello Coello, Luna, & Alba, 2009a), etc. In our algorithm, the archive is updated and maintained by using the grid mechanism employed in MOPSO. The grid mechanism consists of two key modules: the archive controller and the grid.

(1) The archive controller.
The function of the archive controller is to decide whether a certain non-dominated solution should be added to the archive or not. During the course of the iteration, the non-dominated solutions obtained from the primary population are compared with the individuals in the archive. Note that in the initialization phase, the archive is empty. Now consider adding a new non-dominated solution i to the archive.

Fig. 3. Possible cases for the archive controller.

Case 1: If the external archive is empty, then the non-dominated solution provided by the algorithm is added directly (see case 1 in Fig. 3).
Case 2: If i is dominated by or equal to any individual within the archive, then it is automatically discarded (see case 2 in Fig. 3).
Case 3: If i is non-dominated with respect to every individual within the archive and the archive is not full, then i is directly added into the archive (see case 3 in Fig. 3).
Case 4: If there are individuals within the archive that are dominated by i, then all dominated individuals (solutions) are removed from the archive and i is added (see case 4 in Fig. 3).
Case 5: If the archive is full and i is to be added, then we first invoke the adaptive grid procedure to re-arrange the recursive subdivision of the objective space and obtain the grid location of each individual (solution) within the archive. After that, one of the individuals (solutions) within the most crowded segment is removed from the archive. Then, the new solution i is inserted into the least crowded segment in order to keep a good spread of solutions in the archive (see case 5 in Fig. 3).
(2) The grid. To produce well-distributed Pareto fronts, an approach using a variation of the adaptive grid proposed by Knowles and Corne (2000) is introduced here. The basic idea is to adopt an external archive to maintain all the non-dominated solutions together with the related grid information. Within the archive, the objective space is divided into regions, as presented in Fig. 4. Note that the probability of omitting an individual (solution) increases in proportion to the number of individuals (solutions) in its hypercube (subdivision). When the archive is full and a solution must be deleted, an individual (solution) is first removed from one of the most crowded segments to provide space for the new individual (solution).
However, there is a special case, i.e., the individual (solution) inserted into the archive lies outside the current bounds of the grid. In this case, the grid has to be recalculated and each individual (solution) within it has to be relocated in order to cover the new individual (solution) (see Fig. 5) (Coello Coello, Pulido, & Lechuga, 2004; Mirjalili, Saremi, Mirjalili, & Coelho, 2016).
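The five cases above can be condensed into a small archive-update routine. The sketch below is our own illustration, assuming minimization and using a placeholder crowding measure in place of the paper's adaptive grid:

```python
def dominates(u, v):
    """Pareto dominance for minimization."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def archive_add(archive, cand, capacity, crowding=None):
    """Archive controller, cases 1-5: discard a dominated or duplicate
    candidate, purge members the candidate dominates, and when the
    archive is full evict one member from the most crowded region."""
    if any(dominates(m, cand) or m == cand for m in archive):
        return archive                                        # case 2
    archive = [m for m in archive if not dominates(cand, m)]  # case 4
    if len(archive) >= capacity:                              # case 5
        crowding = crowding or (lambda m: 0)  # stand-in for the grid density
        archive.remove(max(archive, key=crowding))
    archive.append(cand)                                      # cases 1 and 3
    return archive

arch = []
for point in [(4, 4), (2, 5), (5, 2), (3, 3), (1, 6)]:
    arch = archive_add(arch, point, capacity=3)
```

With a real `crowding` function (e.g. occupancy of the member's hypercube), the eviction in case 5 targets the most crowded segment, as the text describes.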

Fig. 4. Graphical representation of the insertion of a new element in the adaptive grid when the individual lies within the current boundaries of the grid.

Fig. 5. Graphical representation of the insertion of a new element in the adaptive grid when the individual lies outside the previous boundaries of the grid.
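The grid bookkeeping behind Figs. 4 and 5 amounts to mapping each archive member to hypercube coordinates and counting occupancy; the most crowded cell supplies eviction candidates, and (as Section 2.3 details) sparse cells are favored for leader selection with weight c/N_i. A toy sketch (the helper names, the example points and the value c = 2 are our assumptions, not from the paper):

```python
from collections import Counter

def grid_index(point, lo, hi, n_grid=10):
    """Map an objective vector to hypercube coordinates in an
    n_grid-per-dimension partition of the box [lo, hi]."""
    coords = []
    for p, l, h in zip(point, lo, hi):
        width = (h - l) / n_grid
        coords.append(min(int((p - l) / width), n_grid - 1))
    return tuple(coords)

archive = [(0.05, 0.95), (0.07, 0.92), (0.55, 0.45), (0.95, 0.05)]
occupancy = Counter(grid_index(p, (0.0, 0.0), (1.0, 1.0)) for p in archive)
most_crowded = max(occupancy, key=occupancy.get)   # eviction target when full
# roulette weights for leader selection: P_i = c / N_i, with c > 1
c = 2.0
weights = {cell: c / n for cell, n in occupancy.items()}
```

When a new solution falls outside the current grid bounds (Fig. 5), `lo`/`hi` must be recomputed and every member re-indexed, which is exactly the relocation step described above.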

It should be noted that the grid inflation parameter, the leader selection pressure parameter, the number of grids per dimension and the extra (to be deleted) repository member selection pressure parameter used for MOGWO and MMOGWO are set as α = 0.1, β = 4 and nGrid = 10 (Mirjalili, Saremi, Mirjalili, & Coelho, 2016).

2.3. Social (leadership) hierarchy

The core idea of the basic GWO is that the three best wolves (solutions) visited so far are used as leaders (the alpha, beta and delta wolves) to guide the other wolves (omega wolves) toward promising regions of the search space, with the hope of finding a solution close to the global optimum. However, due to the Pareto dominance nature of MOPs, the optimal result is usually not a single solution but a set of non-dominated solutions, also named "trade-off solutions". That is, for a trade-off solution, the improvement of one objective may cause the deterioration of at least one other objective (Lu, Gao, Li & Xiao, 2017; Lu, Xiao, Li, & Gao, 2016). Thus, a leader selection mechanism is designed to address this issue. The leader selection component chooses the least crowded segments of the search space and selects non-dominated individuals (solutions) from them as the alpha, beta and delta wolves. The selection is done by a "roulette-wheel method" with a certain probability. In detail, the probability for each hypercube i is defined as follows (Mirjalili, Saremi, Mirjalili, & Coelho, 2016):

Pi = c / Ni   (3)

where c is a constant number greater than one and Ni is the number of non-dominated individuals (solutions) obtained so far in the i-th segment. In this research, the three best individuals (solutions) are selected under the following assumptions: 1. If there are three non-dominated individuals (solutions) in the first least crowded hypercube (subdivision), the three of them are randomly assigned as the alpha, beta and delta wolves. 2.
If there are less than three non-dominated individuals (solutions) in the first least crowded hypercube (subdivision), the second least crowded hypercube (subdivision) is also found to select other leaders from. This scenario is the same if the second least crowded hypercube (subdivision) has only one individual (solution), in this case, the delta wolf should be selected from the third crowded hypercube (subdivision). With this approach, picking similar leaders for alpha, beta and delta wolves cannot occur. Consequently, the search is always toward the sparsest zone of the search space since the leader selection mechanism favors the least crowded hypercubes (subdivisions) and offers leaders from different segments if there is not enough number of leaders (less than 3) in the least crowded hypercubes (subdivisions) (Mirjalili, Saremi, Mirjalili, & Coelho, 2016). 2.4. Adaptive chaotic mutation strategy 5

With the development of non-linear dynamics, chaos theory has been widely considered in various applications (Pecora & Carroll, 1990). In this context, the introduction of chaos theory into optimization algorithms is one of its most famous applications (Yang, Li, & Cheng, 2007). In recent years, chaos theory has been successfully combined with several EAs, such as the chaotic krill herd algorithm (Wang, Guo, Gandomi, Hao, & Wang, 2014), the chaotic bat algorithm (Gandomi & Yang, 2014), the chaotic whale optimization algorithm (Kaur & Arora, 2018) and the chaotic grey wolf optimizer (Kohli & Arora, 2018). Considering that all current non-dominated solutions are preserved in an external archive, applying a local perturbation to the external archive may raise the probability of obtaining better non-dominated solutions. In the present study, an adaptive chaotic mutation strategy, based on the cubic chaotic map (which has an exquisite internal structure) and a modified GD, is designed to dynamically adjust the convergence speed and balance the exploration and exploitation of MMOGWO. This model can be expressed as follows (Feng, Liu, & He, 2013; Zhang & Qin, 2016):

x_i^{t,new} = x_i^t · (1 + η · z_i),  i = 1, 2, …, k,  k = 1, 2, …, Q(t)    (4)

where x_i^{t,new} and x_i^t represent the state of the i-th non-dominated solution after and before mutation, respectively; Q(t) is the number of non-dominated solutions that undergo mutation at the current iteration t; and η is the control factor of the mutation, defined by

η = 1 − (t − 1) / (T − 1)    (5)

where t indicates the current iteration and T is the maximum number of iterations. From Eq. (5), we can conclude that the values of η form an arithmetic progression with 1 as the first term, 0 as the last term, and 1/(T − 1) as the common difference. η = 1 denotes that mutation plays the most vital role, while η = 0 denotes that mutation plays the least vital role. z_i (i = 1, 2, …, k) represents the chaotic sequence, which is defined by

z_1 = 1 − 2 · rand(1, n)    (6)

z_{i+1} = 4 · z_i^3 − 3 · z_i,  i = 1, 2, …, k,  k = 1, 2, …, Q(t),  −1 ≤ z_i ≤ 1    (7)
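As a minimal sketch of this mutation operator, the cubic chaotic map of Eqs. (4)-(7) can be implemented as follows; the function and variable names are illustrative, not taken from the paper's code, and the archive is assumed to be a NumPy array of decision vectors.

```python
import numpy as np

def chaotic_mutation(archive, t, T, rng=None):
    """Perturb archive members with a cubic chaotic sequence (Eqs. 4-7).

    archive: (k, n) array of non-dominated solutions
    t: current iteration (1-based); T: maximum number of iterations
    """
    rng = rng or np.random.default_rng()
    k, n = archive.shape
    eta = 1.0 - (t - 1) / (T - 1)        # control factor, Eq. (5)
    z = 1.0 - 2.0 * rng.random(n)        # z_1 in (-1, 1), Eq. (6)
    mutated = np.empty_like(archive)
    for i in range(k):
        mutated[i] = archive[i] * (1.0 + eta * z)   # Eq. (4)
        z = 4.0 * z**3 - 3.0 * z                    # cubic map, Eq. (7)
    return mutated
```

Note that the cubic map keeps z within [−1, 1], and that η shrinks linearly to 0 at t = T, so the perturbation vanishes in the last iteration.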

After the mutation is complete, a set of new solutions is generated. For each new solution, the update and maintenance of the external archive, as presented in Section 2.2, is performed, and the non-dominated solutions are preserved. In the early stage of the search process, the convergence speed of the algorithm is fast; in this context, the size of the mutation can be reduced appropriately, which is beneficial for improving the search efficiency of the algorithm. In the late stage of the search process, the convergence speed of the algorithm slows down, and the probability of being trapped in a local optimum is higher; in this context, the size of the mutation can be enlarged to make the population jump out of the local optimum. Here, we employ a modified GD (denoted as GD*) to dynamically adjust the size of the mutation. The expression of GD* is defined below (Yang, Ma, Che, Xu, & Guo, 2015; Van Veldhuizen & Lamont, 1998):

GD* = √( Σ_{i=1}^{s} d_i² ) / s    (8)

where s is the number of non-dominated solutions being analyzed and d_i is the Euclidean distance (measured in the objective space) from the i-th solution of the current external archive to its nearest reference solution of the previous external archive. The difference in GD* between the current external archive and the previous external archive is defined as:

ΔGD*(t) = GD*(t) − GD*(t − 1),  t ≥ 3    (9)

The indicator ΔGD* measures the convergence speed of the algorithm. If ΔGD*(t) < 0, the convergence speed of the algorithm is fast, and the size of the mutation can be reduced to enhance the efficiency of the algorithm. If ΔGD*(t) > 0, the convergence speed of the algorithm slows down, and the size of the mutation can be increased to improve the convergence and diversity of the algorithm. Based on the aforementioned analysis, the size of the mutation is adjusted as follows:

integer  0.75  Q  t   ,  GD  t   0;  Q  t  1  Q  t  ,  GD  t   0; integer 1.25  Q  t   ,   t   0. GD 

(10)

where Q  t  is the size of the external archive at the current iteration t and 0  Q  t   L , L is the size of the current external archive; if iteration t  1, 2,3 ,

Q t   L . 2.5. Boundary mutation strategy Each MOP has various boundary constraints that limit the search space. When the individuals of an algorithm move outside the search space, an approach should be adopted to manage the violation of boundary constraints (Helbig, & Engelbrecht, 2013b). Traditionally, any individual of an algorithm that violates a specific boundary of the search space is placed on the violated boundary of the search space. Mathematically, this approach can be expressed as follows: if xit  xmax , xit  xmax

(11)

if xit  xmin , xit  xmin

(12)

where x_min = (x_min,1, x_min,2, …, x_min,n) and x_max = (x_max,1, x_max,2, …, x_max,n) are the lower and upper bounds of the search space, respectively. Through this management of boundary constraints, all individuals that move outside the search space will gather at the boundary. If there is a local optimum at the boundary, the individuals will be prone to stagnation at this local optimum and cannot find the true Pareto optimal solutions. What's more, as the number of individuals gathered at the boundary increases, the population diversity decreases, which is unfavorable for the exploration of the algorithm. At the same time, when individuals move outside the predefined bounds of a multi-dimensional asymmetric search space, using a unified boundary constraint approach to manage boundary constraint violations may lead to new boundary constraint violations or weaken the search ability of the algorithm. To solve this problem, a specific boundary mutation strategy, based on a hierarchical parallel mode and the idea of asymmetrical mutation used by the genetic algorithm (GA), is designed to manage boundary constraint violations. The details of this approach are as follows: if an individual's position violates the asymmetrical boundary in a specific dimension, it mutates near the corresponding boundary, which keeps the individual within the effective search space. Mathematically, the boundary mutation is defined as follows:

if x_id^t > x_max,d, then x_id^t = x_max,d − c · rand()    (13)

or

if x_id^t < x_min,d, then x_id^t = x_min,d + c · rand()    (14)

where c ∈ [10⁻⁴, 10⁻²]. Through this boundary mutation approach, all individuals that move outside the search space will not gather at the boundary but spread within the effective search space, at most c · rand() away from the boundary. This approach can not only improve the diversity of the population but also enhance the global search ability of the algorithm.

2.6. Encircling prey

As discussed previously, in addition to the social (leadership) hierarchy, grey wolves encircle prey during the hunt. The encircling behavior can be mathematically modeled as follows (Mirjalili, Mirjalili, & Lewis, 2014):

Dt  C t  X tp  X t X

t 1

 X  A D t p

t

(16) t p

t

t

(15) t

t

where t indicates the current iteration, A and C are coefficient vectors at t-th iteration, X and X indicate the position vector of the prey and the grey wolf at tth iteration, respectively. The vectors At and C t are formulated as follows:

At = 2a t  r1  a t

(17)

C = 2  r2 t

(18)

where a t is a vector that linearly decreased from 2 to 0 over the course of iterations and a t  2  2   t / T  , r1 and r2 are random vectors in the range [0,1]. To illustrate the effects of Eqs. (15) and (16), an example of possible updated positions of a grey wolf in 2-D space is shown in Fig. 6(a). As can been seen from this figure, a grey wolf in the position of ( X , Y ) can update its position according to the position of the prey ( X  , Y  ). Moreover, with respect to the current position by adjusting the value of vectors A and C , any position around the best agent can be reached. For example, ( X   X , Y  ) can be reached by setting A = 1,0  and C = 1,1 . The possible updated positions of a grey wolf in 3-D space are shown in Fig. 6(b). Note that the same concept can be extended to an n-D space. In this case, the grey wolves will relocate in a circle (in a 2-D space), a sphere (in a 3-D space), or a hypersphere (in an n-D space) around the best position visited so far by using Eqs. (15) and (16) (Mirjalili, Mirjalili, and Lewis, 2014).

Fig. 6. 2-D (a) and 3-D (b) position vectors and the possible next positions of grey wolves.
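The encircling step of Eqs. (15)-(18) can be sketched as follows; the function name and the per-component random draws are illustrative, not from the paper's code.

```python
import numpy as np

def encircle(prey, wolf, t, T, rng=None):
    """One encircling update (Eqs. 15-18): the wolf moves around the prey
    position, with a^t decreasing linearly from 2 to 0 over the run."""
    rng = rng or np.random.default_rng()
    n = wolf.size
    a = 2.0 - 2.0 * t / T                  # a^t = 2 - 2(t/T)
    A = 2.0 * a * rng.random(n) - a        # Eq. (17), components in [-a, a]
    C = 2.0 * rng.random(n)                # Eq. (18), components in [0, 2]
    D = np.abs(C * prey - wolf)            # Eq. (15)
    return prey - A * D                    # Eq. (16)
```

Since a^t reaches 0 at the final iteration, the update then collapses onto the prey position, which reflects the shift from exploration to exploitation.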

2.7. Hunting prey

Grey wolves can recognize the position of prey and encircle it. The hunt is usually guided by the alpha wolf; the beta and delta wolves might also participate occasionally. However, the position of the prey is usually unknown in an abstract search space. To mathematically simulate the hunting behavior of grey wolves, it is assumed that the alpha, beta and delta wolves have better knowledge about the potential position of the prey. Therefore, the three best solutions (wolves) obtained so far are taken as the alpha, beta and delta wolves, respectively. The other solutions (wolves) are treated as omega wolves and are obliged to update their positions according to the alpha, beta and delta wolves. In this case, the new positions of the grey wolves are obtained as follows (Mirjalili, Mirjalili, & Lewis, 2014):

Dt  C1t  Xt  X t , Dt  C2t  X t  X t , Dt  C3t  Xt  X t

(19)

X  X  A  D , X  X   A  D , X  X  A  D

(20)

t 1

t

t 1

t 2

t

t 1 1

X t

t

t

t 2

t

t 3

t

t 3

t

X  X + X /3 t 1

t 2

t 3

t

(21) t

t

t

where X  , X  and X  represent the positions of the alpha, beta and delta at t-th iteration, respectively. D , D and D denote the approximate distances between the current position and the alpha, beta and delta, respectively. After calculating the approximate distances, the final position of the candidate solution (wolf) can be calculated by using Eqs. (21). How a candidate solution (wolf) updates its position according to the alpha, beta and delta wolves in a 2-D search space is shown in Fig. 7. It can be observed that the alpha, beta and delta wolves estimate the potential position of prey, and other wolves update their positions randomly within a circle defined by the positions of the alpha, beta and delta wolves in the search space. To sum up, the search process is guided by the three best wolves (alpha, beta and delta wolves). When A  1 , they diverge from each other to search for prey, while when A  1 they converge to attack prey (Mirjalili, Mirjalili, and Lewis, 2014). The process is plotted in Fig. 8.

Fig. 7. Position updating of wolves.

Fig. 8. Attacking prey versus searching for prey.
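The three-leader update of Eqs. (19)-(21) can be sketched as follows; names are illustrative, and independent A and C vectors are drawn per leader as in the equations.

```python
import numpy as np

def hunt_update(wolf, alpha, beta, delta, t, T, rng=None):
    """Position update guided by the alpha, beta and delta wolves
    (Eqs. 19-21): average of three encircling moves, one per leader."""
    rng = rng or np.random.default_rng()
    a = 2.0 - 2.0 * t / T                  # a^t, linearly decreasing
    total = np.zeros_like(wolf)
    for leader in (alpha, beta, delta):
        A = 2.0 * a * rng.random(wolf.size) - a   # Eq. (17)
        C = 2.0 * rng.random(wolf.size)           # Eq. (18)
        D = np.abs(C * leader - wolf)             # Eq. (19)
        total += leader - A * D                   # Eq. (20)
    return total / 3.0                            # Eq. (21)
```

At the final iteration (a^t = 0), the update reduces to the centroid of the three leaders, which matches the attacking behavior described above.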

2.8. Elitism strategy

The fast non-dominated sort and the crowding distance assignment used in NSGA-II are employed to obtain the next population. The details of this phase are as follows. First, combine the parent population P_t and the offspring population Q_t to form a combined population R_t, i.e., R_t = P_t ∪ Q_t; the size of the population R_t is 2Pop. Second, sort the combined population according to non-domination. Each solution within the population R_t is now assigned a rank denoting its non-domination level (1 represents the first-best level, 2 represents the second-best level, and so on); thus, minimization of the rank is assumed. Since R_t includes all solutions, both previous and current population members, elitism is ensured. R_t is then divided into different sets based on rank, i.e., R_t = F_1 ∪ F_2 ∪ … ∪ F_l. Note that F_1 is the first-best non-dominated set and F_l is the last non-dominated set beyond which no further set can be accommodated. Third, select intermediate solutions from the different non-dominated sets to form the next population. In detail, if the size of the set F_1 is smaller than Pop, all members of F_1 are selected for the next population P_{t+1}. The remaining members of the next population P_{t+1} are selected from subsequent sets in the order of their rank: members of the set F_2 are selected next, followed by members of the set F_3, and so on. This procedure continues until no more complete sets can be accommodated, that is, until the total number of members in the sets from F_1 to F_l exceeds Pop. To select exactly Pop population members, we sort the members of the last set F_l according to the crowding distance in descending order of magnitude and select the best (least crowded) members needed to fill all remaining population slots. The selection procedure is depicted in Fig. 9.

Fig. 9. Update strategy for next population.
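The elitist selection described above can be sketched as follows; this is a simplified (non-optimized) rendering of NSGA-II's fast non-dominated sort and crowding-distance fill, with illustrative function names rather than the paper's code.

```python
import numpy as np

def dominates(p, q):
    """True if objective vector p Pareto-dominates q (minimization)."""
    return np.all(p <= q) and np.any(p < q)

def nondominated_sort(F):
    """Split objective vectors F (N, m) into fronts F1, F2, ... (index lists)."""
    fronts, remaining = [], set(range(len(F)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

def crowding_distance(F):
    """Per-solution crowding distance within one front (boundary = inf)."""
    N, m = F.shape
    dist = np.zeros(N)
    for k in range(m):
        order = np.argsort(F[:, k])
        dist[order[0]] = dist[order[-1]] = np.inf
        span = F[order[-1], k] - F[order[0], k] or 1.0
        dist[order[1:-1]] += (F[order[2:], k] - F[order[:-2], k]) / span
    return dist

def environmental_selection(F, pop_size):
    """Pick pop_size indices from the combined population R_t = P_t U Q_t:
    whole fronts by rank, last front filled by descending crowding distance."""
    chosen = []
    for front in nondominated_sort(F):
        if len(chosen) + len(front) <= pop_size:
            chosen.extend(front)
        else:
            d = crowding_distance(F[front])
            best = np.argsort(-d)[: pop_size - len(chosen)]
            chosen.extend(np.asarray(front)[best])
            break
    return chosen
```

The sketch is O(N^2) per front, which is fine for the population sizes used here; the original NSGA-II bookkeeping is faster but more verbose.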

3. Experimental results and analysis

3.1. Benchmark functions

In this study, 28 benchmark functions are used to evaluate the performance of MMOGWO. These functions can be classified into five problem families. The first class consists of low-dimensional bi-objective problems without any equality or inequality constraints, such as Schaffer's function (SCH) (Schaffer, 1985), Fonseca and Fleming's function (FON) (Deb, Pratap, Agarwal, & Meyarivan, 2002; Fonseca & Fleming, 1998), Poloni's function (POL) (Poloni, 1997; Poloni, Giurgevich, Onesti, & Pediroda, 2000), and Kursawe's function (KUR) (Kursawe, 1990); they are abbreviated as the SFPK benchmark functions. The second kind consists of high-dimensional bi-objective problems without any equality or inequality constraints, including the ZDT benchmark functions (ZDT1-ZDT4 and ZDT6) (Deb & Agarwal, 1999; Deb, Pratap, Agarwal, & Meyarivan, 2002; Zitzler, Deb, & Thiele, 2000). The third category consists of scalable-objective problems without any equality or inequality constraints, covering the DTLZ benchmark functions (DTLZ1-DTLZ7) (Deb, Thiele, Laumanns, & Zitzler, 2001); in our experimental studies, we focus the analysis on the tri-objective versions of the DTLZ family. The fourth sort comprises the CEC2009 competition benchmark functions (UF1-UF10) (Zhang, Zhou, Zhao, Suganthan, Liu, & Tiwari, 2008); of these, 7 functions (UF1-UF7) are bi-objective problems and the rest (UF8-UF10) are tri-objective problems. The fifth type consists of low-dimensional bi-objective problems with inequality constraints, Srinivas's function (SRN) (Srinivas & Deb, 1994) and Tanaka's function (TNK) (Tanaka, Watanabe, Furukawa, & Tanino, 1995); they are denoted as the ST benchmark functions.

3.2. Performance metrics and ranking method

In general, a quantitative assessment of the performance of an MOEA involves three objectives (Coello Coello, Pulido, & Lechuga, 2004; Zitzler, Deb, & Thiele, 2000): (1) the distance of the non-dominated set provided by the target algorithm to the PF of the MOP should be minimized; (2) a good and uniform spread of the solutions along the PF of the MOP should be achieved; and (3) the extent to which the non-dominated solutions obtained by the target algorithm cover the PF of the MOP should be maximized. Based on this notion, five widely used performance metrics are employed in our experimental studies to assess the performance of the MMOGWO algorithm.

3.2.1. Convergence metric

The convergence metric (Υ) introduced by Deb, Pratap, Agarwal, & Meyarivan (2002) is used to investigate how far the non-dominated solutions provided by the target algorithm are from those in the PF. Mathematically, the metric is defined as:

=



N0 i 1

di

(22) N0 where N0 is the number of non-dominated solutions obtained by target algorithm; di denotes the Euclidean distance (measured in objective space) between the ith member of non-dominated solutions and the nearest member of PF. The calculation procedure of  is shown in Figure.10. It should be noted that the smaller the value of this metric, the better the convergence toward the PF, and   0 means that all the non-dominated solutions obtained by the target algorithm are in PF (Deb, Pratap, Agarwal, & Meyarivan, 2002). The first issue listed above can be addressed by this metric.

Fig. 10. Convergence metric.
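A minimal sketch of Eq. (22), assuming the obtained solutions and a sampled reference PF are given as NumPy arrays of objective vectors; names are illustrative.

```python
import numpy as np

def convergence_metric(solutions, pareto_front):
    """Mean Euclidean distance from each obtained solution to its nearest
    point of the reference PF (Eq. 22): smaller is better, 0 means on-front."""
    # Pairwise distances: (N0, |PF|) matrix, then nearest-PF-point per solution.
    d = np.linalg.norm(solutions[:, None, :] - pareto_front[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

The same pairwise-distance matrix can be reused (transposed) for GD-style and IGD-style metrics, which only change which axis is minimized and how the distances are aggregated.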

3.2.2. Diversity metric

A range (distance) metric (denoted as Δ) is employed to measure the extent of spread achieved among the non-dominated solutions obtained by the target algorithm (Deb, Pratap, Agarwal, & Meyarivan, 2002). This metric is defined as:

Δ = ( d_f + d_l + Σ_{i=1}^{N_0 − 1} |d_i − d̄| ) / ( d_f + d_l + (N_0 − 1) · d̄ )    (23)

where d_i = min_j ( Σ_{k=1}^{m} |f_k^i − f_k^j| ), (i, j = 1, 2, …, N_0); d̄ is the average of all d_i; N_0 is the number of non-dominated solutions obtained by the target algorithm; m is the number of objectives; f_k^i and f_k^j are the values of the k-th objective function for the i-th and j-th members of the non-dominated solution set, respectively; and d_f and d_l are the Euclidean distances (measured in the objective space) between the extreme solutions of the PF and the boundary solutions of the obtained non-dominated solution set, as shown in Fig. 11. It should be noted that the smaller the value of this metric, the better the distribution of the non-dominated solutions within the PF; Δ = 0 means that all the non-dominated solutions obtained by the target algorithm are equidistantly spaced (Coello Coello, Pulido, & Lechuga, 2004). The second issue listed above is addressed by this metric.

Fig. 11. Diversity metric.
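A bi-objective sketch of Eq. (23); as a simplification, consecutive Euclidean gaps along the sorted front stand in for the nearest-neighbor distances d_i, and the true-PF extreme points are assumed to be supplied by the caller. Names are illustrative.

```python
import numpy as np

def diversity_metric(solutions, pf_extremes):
    """Spread metric Δ of Eq. (23) for a bi-objective front.

    solutions: (N, 2) obtained objective vectors (any order)
    pf_extremes: (2, 2) extreme points of the true PF (assumed known)
    """
    S = solutions[np.argsort(solutions[:, 0])]        # sort along 1st objective
    d = np.linalg.norm(np.diff(S, axis=0), axis=1)    # consecutive gaps d_i
    d_f = np.linalg.norm(S[0] - pf_extremes[0])       # gap to first PF extreme
    d_l = np.linalg.norm(S[-1] - pf_extremes[1])      # gap to last PF extreme
    d_bar = d.mean()
    return (d_f + d_l + np.abs(d - d_bar).sum()) / (
        d_f + d_l + (len(S) - 1) * d_bar)
```

For a perfectly equidistant front whose endpoints coincide with the PF extremes, the numerator vanishes and Δ = 0, matching the interpretation above.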

3.2.3. Generational distance

Generational distance (GD), proposed by Van Veldhuizen and Lamont (1998), is employed in this paper to examine the performance of an algorithm. This performance metric is defined as follows:

GD = √( Σ_{i=1}^{N_0} d_i² ) / N_0    (24)

where N_0 is the number of non-dominated solutions provided by the target algorithm and d_i indicates the Euclidean distance between the i-th obtained Pareto optimal solution and the closest true Pareto optimal solution in the reference set.

3.2.4. Inverted generational distance

Inverted generational distance (IGD) (Zitzler & Thiele, 1999) is a combined and comprehensive metric that can simultaneously measure both the convergence and the diversity of an algorithm. It is also employed in this paper to examine the performance of an algorithm. Let P* be a set of uniformly distributed points along the PF (in the objective space), and A be an approximation set to the PF. The IGD metric is formulated as follows (Zhang, Zhou, Zhao, Suganthan, Liu, & Tiwari, 2008):

IGD(A, P*) = ( Σ_{v∈P*} d(v, A) ) / |P*|    (25)

where d(v, A) is the minimum Euclidean distance between v and the points in A. If P* is large enough to represent the PF very well, both the convergence and the diversity of the approximation set A can be measured by IGD(A, P*). To obtain a low value of IGD(A, P*), the set A must be very close to the PF and cannot miss any part of the whole PF. This metric can measure all the issues listed above.

As mentioned above, it should be noted that all metrics are to be minimized; that is to say, the smaller the better. To ease the analysis of the result tables, some cells have a gray colored background in each row; in particular, there are two different gray levels: a darker one, pointing out the algorithm obtaining the best value of the metric, and a lighter one, highlighting the algorithm obtaining the second-best value of the metric (Durillo & Nebro, 2011). In MOEAs, an additional parameter that is commonly used is maxIteration, the termination criterion. The value of this parameter is tuned according to the maximal number of function evaluations (maxFEs) and the population size (Pop), that is, maxIteration = maxFEs / Pop (Xiang, Zhou, & Liu, 2015).

3.2.5. Ranking method

To measure the performance of an MOEA in a more general manner, lexicographic ordering and average ranking are used to compare the different MOEAs. In CEC 2009, lexicographic ordering was employed to obtain a final ranking for all the MOEAs, which can show how good an MOEA is but cannot show how bad it is (Yu, Hong, Zhang, & Niu, 2018; Zhang & Suganthan, 2009). The average ranking method can provide an integrative reflection of an algorithm in terms of accuracy as well as stability simultaneously (Chow & Yuen, 2012). Let us assume that R = {R_1, R_2, …, R_n} is the rank set of an algorithm A, where n is the total number of benchmark functions. Then the mean rank μ_r, used for measuring the accuracy of A, and the standard deviation of the ranks δ_r, used for measuring the stability of A, are calculated as follows:

μ_r = ( Σ_{i=1}^{n} R_i ) / n    (26)

δ_r = √( Σ_{i=1}^{n} (R_i − μ_r)² / n )    (27)

It should be noted that the lower the corresponding μ_r and δ_r are, the better the algorithm is. In this paper, the average ranking is adopted to reflect the performance of a specific algorithm. Moreover, plots of μ_r against δ_r for the benchmark functions are also presented.

3.3. Simulation results for benchmark functions

In order to evaluate how competitive our proposed MMOGWO algorithm is, we compare it against SPEA2, NSGA-II, MOPSO, MOGWO, the multi-objective ant lion optimizer (MOALO) (Mirjalili, Jangir, & Saremi, 2017), the multi-objective multi-verse optimization (MOMVO) algorithm (Mirjalili, Jangir, Saremi, & Saremi, 2017), the multi-objective grasshopper optimization algorithm (MOGOA) (Mirjalili, Mirjalili, Saremi, & Faris, 2018) and the multi-objective artificial bee colony (MOABC) algorithm (Akbari, Hedayatzadeh, Ziarati, & Hassanizadeh, 2012) on all 28 test problems in terms of the metrics Υ, Δ, GD and IGD. The initial population size (Pop) is set to 100. The maximal number of solutions in the approximation set produced by each algorithm for computing the metrics (e.g., AS used in MMOGWO) is set to 100 for all test problems, and maxFEs is set to 25,000 for bi-objective benchmark functions and 50,000 for tri-objective benchmark functions. Thus, maxIteration is 25,000/100 = 250 for bi-objective benchmark functions and 50,000/100 = 500 for tri-objective benchmark functions. The results of MMOGWO, SPEA2, NSGA-II, MOPSO, MOGWO, MOALO, MOMVO, MOGOA, and MOABC are obtained by running our Matlab code. For details of the default parameter settings for SPEA2, NSGA-II, MOPSO, MOGWO, MOALO, MOMVO, MOGOA, and MOABC, please refer to the original works in the literature. For each benchmark function, the default parameter settings for MMOGWO are kept consistent with those of MOGWO. All algorithms are independently run 30 times, and the four metrics are computed for each run. The mean (Mean) and standard deviation (Std) values of the metrics Υ, Δ, GD and IGD are shown in Tables 1-4. In these tables, the rank of each algorithm based on the mean value of each metric for each benchmark function is shown in curly brackets.


Fig. 12. Pareto fronts obtained by MMOGWO.


Fig. 13. Box plot graphs of metric IGD on all test problems.

Table 1
Mean and Std values of metric Υ for all test problems (rank in curly brackets).

| Algorithms | SCH | FON | POL | KUR |
|---|---|---|---|---|
| MMOGWO | 8.18E-03 {06} ± 6.85E-04 | 1.06E-02 {05} ± 8.29E-04 | 1.58E+01 {09} ± 4.46E-01 | 3.50E-02 {03} ± 3.00E-02 |
| SPEA2 | 8.60E-03 {09} ± 1.27E-03 | 1.22E-02 {07} ± 7.06E-04 | 1.30E+01 {01} ± 1.01E+00 | 3.87E-02 {05} ± 1.42E-02 |
| NSGA-II | 8.03E-03 {03} ± 5.41E-04 | 9.97E-03 {01} ± 3.87E-04 | 1.46E+01 {07} ± 3.57E-01 | 5.28E-01 {08} ± 3.52E-01 |
| MOPSO | 7.80E-03 {02} ± 4.60E-04 | 1.02E-02 {03} ± 3.23E-04 | 1.45E+01 {05} ± 2.93E-01 | 3.34E-02 {02} ± 5.58E-03 |
| MOGWO | 8.07E-03 {04} ± 5.85E-04 | 1.03E-02 {04} ± 7.58E-04 | 1.43E+01 {03} ± 1.34E+00 | 3.77E-02 {04} ± 8.78E-03 |
| MOALO | 7.40E-03 {01} ± 1.27E-03 | 1.11E-02 {06} ± 2.08E-03 | 1.45E+01 {05} ± 3.09E+00 | 5.78E-02 {07} ± 3.12E-02 |
| MOMVO | 8.13E-03 {05} ± 3.22E-04 | 1.01E-02 {02} ± 2.17E-03 | 1.46E+01 {07} ± 1.56E+00 | 1.93E-02 {01} ± 4.56E-03 |
| MOGOA | 8.28E-03 {07} ± 6.96E-04 | 9.23E-02 {09} ± 1.12E-02 | 1.43E+01 {03} ± 1.31E+00 | 1.24E+00 {09} ± 2.70E-02 |
| MOABC | 8.38E-03 {08} ± 4.72E-04 | 3.65E-02 {08} ± 8.89E-03 | 1.36E+01 {02} ± 7.39E-01 | 4.64E-02 {06} ± 3.67E-02 |

| Algorithms | ZDT1 | ZDT2 | ZDT3 | ZDT4 |
|---|---|---|---|---|
| MMOGWO | 1.23E-03 {02} ± 4.01E-04 | 8.52E-04 {03} ± 1.06E-04 | 4.69E-04 {01} ± 6.19E-04 | 4.25E+00 {02} ± 4.15E+00 |
| SPEA2 | 1.04E-01 {08} ± 2.50E-02 | 1.62E-01 {08} ± 4.02E-02 | 7.44E-02 {08} ± 2.31E-02 | 9.73E-02 {01} ± 1.49E-01 |
| NSGA-II | 4.61E-02 {06} ± 4.33E-02 | 7.52E-02 {07} ± 4.28E-02 | 5.31E-02 {07} ± 5.42E-02 | 7.08E+00 {05} ± 2.85E+00 |
| MOPSO | 2.87E-03 {03} ± 1.77E-03 | 2.02E-03 {04} ± 1.54E-03 | 5.23E-03 {02} ± 1.11E-03 | 1.65E+01 {08} ± 1.36E+01 |
| MOGWO | 8.27E-04 {01} ± 2.10E-04 | 7.99E-04 {02} ± 9.56E-05 | 5.41E-03 {03} ± 2.70E-03 | 5.67E+00 {04} ± 5.84E+00 |
| MOALO | 5.04E-03 {04} ± 9.67E-03 | 5.40E-04 {01} ± 7.52E-05 | 7.67E-03 {04} ± 3.27E-03 | 2.01E+01 {09} ± 5.24E+00 |
| MOMVO | 3.92E-02 {05} ± 1.65E-02 | 4.50E-03 {06} ± 3.70E-03 | 2.77E-02 {05} ± 1.41E-02 | 1.26E+01 {06} ± 4.90E-02 |
| MOGOA | 7.79E-02 {07} ± 2.33E-01 | 4.02E-03 {05} ± 6.95E-03 | 3.83E-02 {06} ± 6.39E-02 | 1.53E+01 {07} ± 3.37E-01 |
| MOABC | 2.94E-01 {09} ± 5.59E-02 | 3.05E-01 {09} ± 7.19E-02 | 1.87E-01 {09} ± 5.94E-02 | 2.25E+00 {03} ± 8.90E-01 |

| Algorithms | ZDT6 | DTLZ1 | DTLZ2 | DTLZ3 |
|---|---|---|---|---|
| MMOGWO | 1.92E-01 {03} ± 1.02E-02 | 8.55E+01 {09} ± 2.47E+01 | 3.08E-02 {04} ± 1.49E-02 | 5.15E+02 {09} ± 9.91E+01 |
| SPEA2 | 1.31E-01 {02} ± 2.92E-01 | 8.03E+00 {03} ± 5.88E+00 | 5.79E-02 {07} ± 1.49E-02 | 4.30E+01 {02} ± 2.19E+01 |
| NSGA-II | 2.30E-01 {05} ± 3.18E-01 | 1.28E+01 {05} ± 3.45E+00 | 5.10E-02 {06} ± 6.15E-03 | 7.24E+01 {04} ± 1.87E+01 |
| MOPSO | 3.06E-02 {01} ± 6.16E-02 | 2.69E+01 {06} ± 4.63E+00 | 7.85E-02 {08} ± 1.06E-02 | 4.10E+02 {07} ± 2.77E+01 |
| MOGWO | 2.75E-01 {07} ± 1.17E-01 | 3.43E+01 {08} ± 5.64E+00 | 2.23E-02 {03} ± 9.60E-03 | 4.24E+02 {08} ± 2.45E+01 |
| MOALO | 3.14E-01 {08} ± 1.98E-01 | 1.20E+01 {04} ± 2.98E+00 | 5.03E-02 {05} ± 1.99E-02 | 1.05E+02 {06} ± 1.21E+02 |
| MOMVO | 2.24E-01 {04} ± 4.52E-02 | 4.91E+00 {02} ± 7.27E-01 | 1.26E-02 {01} ± 2.62E-03 | 1.24E+01 {01} ± 6.14E+00 |
| MOGOA | 2.55E-01 {06} ± 6.86E-02 | 3.09E+01 {07} ± 7.90E-01 | 1.52E-02 {02} ± 8.63E-03 | 8.25E+01 {05} ± 2.97E+00 |
| MOABC | 3.95E-01 {09} ± 1.16E-01 | 3.64E+00 {01} ± 2.73E+00 | 9.05E-02 {09} ± 1.85E-01 | 6.60E+01 {03} ± 9.50E+01 |

| Algorithms | DTLZ4 | DTLZ5 | DTLZ6 | DTLZ7 |
|---|---|---|---|---|
| MMOGWO | 6.16E-02 {06} ± 3.34E-02 | 1.54E+00 {02} ± 1.95E-01 | 6.40E+00 {03} ± 4.20E-01 | 2.54E-02 {03} ± 2.00E-02 |
| SPEA2 | 6.63E-02 {07} ± 1.62E-02 | 1.66E+00 {04} ± 1.02E-01 | 6.59E+00 {04} ± 3.00E-01 | 2.39E-01 {07} ± 6.33E-02 |
| NSGA-II | 2.60E-02 {03} ± 7.98E-03 | 1.79E+00 {06} ± 7.62E-02 | 7.07E+00 {05} ± 2.73E-01 | 3.33E-02 {05} ± 1.24E-02 |
| MOPSO | 8.78E-02 {08} ± 1.17E-02 | 1.68E+00 {05} ± 4.36E-02 | 6.07E+00 {02} ± 1.89E-01 | 3.08E-02 {04} ± 6.77E-03 |
| MOGWO | 3.12E-02 {04} ± 5.04E-02 | 1.91E+00 {07} ± 1.25E-01 | 5.93E+00 {01} ± 5.08E-01 | 2.00E-02 {02} ± 1.60E-02 |
| MOALO | 2.11E-01 {09} ± 1.83E-01 | 1.62E+00 {03} ± 2.85E-01 | 7.93E+00 {07} ± 8.69E-01 | 3.42E-03 {01} ± 2.25E-03 |
| MOMVO | 2.35E-02 {02} ± 6.50E-03 | 1.97E+00 {08} ± 2.33E-01 | 8.10E+00 {08} ± 4.30E-01 | 1.20E-01 {06} ± 5.27E-02 |
| MOGOA | 3.93E-02 {05} ± 2.88E-02 | 2.06E+00 {09} ± 2.34E-01 | 7.75E+00 {06} ± 5.49E-01 | 2.99E-01 {08} ± 7.34E-01 |
| MOABC | 2.34E-02 {01} ± 1.35E-02 | 1.40E+00 {01} ± 1.42E-01 | 9.59E+00 {09} ± 5.55E-02 | 5.23E-01 {09} ± 9.28E-01 |

| Algorithms | UF1 | UF2 | UF3 | UF4 |
|---|---|---|---|---|
| MMOGWO | 4.43E-02 {02} ± 3.80E-02 | 5.13E-02 {04} ± 1.44E-02 | 2.54E-01 {04} ± 6.05E-02 | 5.76E-02 {02} ± 4.52E-03 |
| SPEA2 | 3.98E-02 {01} ± 3.39E-02 | 5.33E-02 {05} ± 1.05E-02 | 3.79E-01 {08} ± 4.68E-02 | 1.89E-01 {09} ± 1.91E-02 |
| NSGA-II | 2.22E-01 {09} ± 9.24E-02 | 7.92E-02 {07} ± 2.51E-02 | 3.11E-01 {06} ± 8.20E-02 | 1.19E-01 {08} ± 4.79E-03 |
| MOPSO | 1.19E-01 {08} ± 4.25E-02 | 1.19E-01 {08} ± 2.52E-02 | 4.84E-01 {09} ± 1.01E-01 | 9.04E-02 {05} ± 7.47E-03 |
| MOGWO | 5.89E-02 {04} ± 5.83E-02 | 7.05E-02 {06} ± 1.95E-02 | 3.06E-01 {05} ± 1.15E-01 | 6.30E-02 {03} ± 7.13E-03 |
| MOALO | 6.76E-02 {05} ± 5.15E-02 | 1.23E-01 {09} ± 4.25E-02 | 2.15E-01 {02} ± 8.63E-02 | 5.67E-02 {01} ± 6.81E-03 |
| MOMVO | 4.89E-02 {03} ± 1.89E-02 | 3.50E-02 {02} ± 1.14E-02 | 2.36E-01 {03} ± 8.48E-02 | 9.31E-02 {07} ± 9.70E-03 |
| MOGOA | 9.04E-02 {07} ± 3.65E-02 | 2.21E-02 {01} ± 2.47E-02 | 1.72E-01 {01} ± 4.74E-02 | 9.29E-02 {06} ± 1.12E-02 |
| MOABC | 7.95E-02 {06} ± 2.10E-02 | 4.12E-02 {03} ± 7.31E-03 | 3.39E-01 {07} ± 6.92E-02 | 6.63E-02 {04} ± 1.84E-03 |

| Algorithms | UF5 | UF6 | UF7 | UF8 |
|---|---|---|---|---|
| MMOGWO | 1.32E+00 {05} ± 5.91E-01 | 4.21E-01 {04} ± 2.55E-02 | 2.15E-02 {01} ± 5.04E-03 | 1.96E+00 {08} ± 7.10E-01 |
| SPEA2 | 1.23E+00 {04} ± 1.53E-01 | 5.84E-01 {06} ± 5.48E-02 | 6.04E-02 {06} ± 1.95E-02 | 1.32E-01 {02} ± 6.42E-02 |
| NSGA-II | 3.22E+00 {09} ± 3.19E-01 | 1.38E+00 {09} ± 6.91E-01 | 2.54E-01 {09} ± 1.55E-01 | 4.65E+00 {09} ± 9.59E-01 |
| MOPSO | 1.71E+00 {08} ± 6.32E-01 | 7.74E-01 {08} ± 4.02E-01 | 1.11E-01 {08} ± 5.03E-02 | 1.53E+00 {07} ± 3.24E-01 |
| MOGWO | 1.42E+00 {06} ± 7.51E-01 | 4.55E-01 {05} ± 1.47E-01 | 3.44E-02 {04} ± 3.11E-02 | 1.07E+00 {06} ± 8.73E-01 |
| MOALO | 1.45E+00 {07} ± 2.56E-01 | 3.40E-01 {03} ± 1.24E-01 | 5.46E-02 {05} ± 4.69E-02 | 1.91E-01 {03} ± 1.42E-01 |
| MOMVO | 1.04E+00 {03} ± 9.05E-02 | 1.56E-01 {01} ± 4.58E-02 | 3.01E-02 {02} ± 1.39E-02 | 6.15E-01 {05} ± 2.90E-01 |
| MOGOA | 5.53E-01 {01} ± 8.09E-02 | 2.64E-01 {02} ± 2.31E-01 | 3.33E-02 {03} ± 1.91E-02 | 4.53E-01 {04} ± 5.97E-01 |
| MOABC | 1.02E+00 {02} ± 1.21E-01 | 6.00E-01 {07} ± 9.27E-02 | 7.08E-02 {07} ± 2.30E-02 | 2.59E-02 {01} ± 1.83E-02 |

| Algorithms | UF9 | UF10 | SRN | TNK |
|---|---|---|---|---|
| MMOGWO | 1.97E-01 {02} ± 1.20E-01 | 4.79E+00 {07} ± 1.86E+00 | 1.86E+00 {04} ± 5.40E-01 | 3.64E-03 {06} ± 9.32E-04 |
| SPEA2 | 2.57E-01 {05} ± 7.91E-02 | 1.57E+00 {03} ± 6.51E-01 | 1.09E+00 {02} ± 6.63E-01 | 1.49E-02 {09} ± 5.38E-03 |
| NSGA-II | 7.30E+00 {09} ± 1.06E+00 | 1.27E+01 {09} ± 2.62E+00 | 2.51E+00 {07} ± 5.98E-01 | 3.06E-03 {04} ± 4.78E-04 |
| MOPSO | 2.09E+00 {08} ± 7.08E-01 | 1.01E+01 {08} ± 1.15E+00 | 2.66E+00 {08} ± 3.49E-01 | 5.59E-03 {08} ± 6.58E-04 |
| MOGWO | 2.33E-01 {03} ± 2.32E-01 | 3.87E+00 {06} ± 1.38E+00 | 1.38E+00 {03} ± 6.61E-01 | 5.14E-03 {07} ± 2.61E-03 |
| MOALO | 2.47E-01 {04} ± 1.09E-01 | 3.46E+00 {05} ± 7.75E-01 | 7.03E-01 {01} ± 1.13E+00 | 3.36E-03 {05} ± 1.08E-03 |
| MOMVO | 1.01E+00 {07} ± 2.69E-01 | 1.46E+00 {02} ± 5.40E-01 | 2.50E+00 {06} ± 1.85E+00 | 2.52E-03 {03} ± 4.20E-04 |
| MOGOA | 6.11E-01 {06} ± 4.76E-01 | 2.84E+00 {04} ± 4.30E-01 | 2.47E+00 {05} ± 1.71E+00 | 2.36E-03 {02} ± 1.04E-03 |
| MOABC | 2.97E-02 {01} ± 2.84E-02 | 6.68E-01 {01} ± 3.40E-01 | 2.87E+00 {09} ± 9.39E-01 | 9.94E-04 {01} ± 1.69E-04 |

Firstly, we analyze the values obtained for the Υ metric. It is apparent from Table 1 that MOMVO is the best algorithm, obtaining the best or second-best values on 11 out of the 28 test problems; it is followed by the MOABC algorithm, which computed the best or second-best values for this metric on 9 out of the 28 test problems. MMOGWO, SPEA2, MOALO, MOGOA, MOPSO, and MOGWO perform similarly, providing the best or second-best values for this metric in 8, 7, 6, 6, and 4 cases, respectively. Finally, NSGA-II is the algorithm performing worst on this metric.

Table 2
Mean and Std values of metric Δ for all test problems (rank in curly brackets).

| Algorithms | SCH | FON | POL | KUR |
|---|---|---|---|---|
| MMOGWO | 9.68E-01 {06} ± 1.35E-01 | 8.97E-01 {05} ± 8.04E-02 | 9.78E-01 {04} ± 1.08E-02 | 1.03E+00 {06} ± 9.50E-02 |
| SPEA2 | 8.69E-01 {04} ± 9.37E-02 | 7.26E-01 {04} ± 6.77E-02 | 9.82E-01 {05} ± 3.79E-02 | 7.95E-01 {04} ± 8.17E-02 |
| NSGA-II | 4.14E-01 {02} ± 4.43E-02 | 3.92E-01 {02} ± 4.40E-02 | 8.87E-01 {02} ± 1.16E-02 | 5.54E-01 {02} ± 7.13E-02 |
| MOPSO | 3.70E-01 {01} ± 4.11E-02 | 2.53E-01 {01} ± 3.22E-02 | 8.77E-01 {01} ± 1.42E-02 | 3.89E-01 {01} ± 4.84E-02 |
| MOGWO | 9.99E-01 {07} ± 1.42E-01 | 9.20E-01 {06} ± 1.21E-01 | 9.85E-01 {06} ± 1.87E-02 | 7.03E-01 {03} ± 6.50E-02 |
| MOALO | 1.53E+00 {09} ± 1.11E-01 | 1.36E+00 {08} ± 1.16E-01 | 1.06E+00 {09} ± 6.51E-02 | 1.37E+00 {09} ± 1.62E-01 |
| MOMVO | 7.35E-01 {03} ± 6.16E-02 | 1.22E+00 {07} ± 7.51E-02 | 1.02E+00 {08} ± 9.69E-03 | 1.07E+00 {07} ± 6.62E-02 |
| MOGOA | 1.05E+00 {08} ± 6.94E-02 | 1.44E+00 {09} ± 1.37E-01 | 1.01E+00 {07} ± 9.87E-03 | 1.07E+00 {07} ± 2.82E-02 |
| MOABC | 9.35E-01 {05} ± 7.03E-02 | 7.19E-01 {03} ± 1.14E-01 | 9.61E-01 {03} ± 1.88E-02 | 8.55E-01 {05} ± 1.24E-01 |

| Algorithms | ZDT1 | ZDT2 | ZDT3 | ZDT4 |
|---|---|---|---|---|
| MMOGWO | 1.09E+00 {06} ± 1.24E-01 | 9.89E-01 {05} ± 1.44E-01 | 9.78E-01 {05} ± 1.06E-01 | 1.04E+00 {06} ± 6.61E-02 |
| SPEA2 | 9.11E-01 {04} ± 1.29E-01 | 9.57E-01 {04} ± 1.16E-01 | 9.18E-01 {04} ± 9.22E-02 | 1.18E+00 {09} ± 2.55E-01 |
| NSGA-II | 4.56E-01 {02} ± 5.10E-02 | 5.01E-01 {02} ± 6.90E-02 | 5.28E-01 {02} ± 1.02E-01 | 9.36E-01 {01} ± 3.25E-02 |
| MOPSO | 3.20E-01 {01} ± 4.79E-02 | 3.42E-01 {01} ± 5.46E-02 | 3.51E-01 {01} ± 4.56E-02 | 9.91E-01 {03} ± 3.72E-02 |
| MOGWO | 1.12E+00 {08} ± 1.43E-01 | 1.07E+00 {09} ± 1.41E-01 | 9.78E-01 {05} ± 1.04E-01 | 1.08E+00 {08} ± 1.18E-01 |
| MOALO | 1.11E+00 {07} ± 4.71E-02 | 1.02E+00 {08} ± 7.40E-03 | 1.30E+00 {09} ± 1.09E-01 | 1.04E+00 {06} ± 4.24E-02 |
| MOMVO | 9.77E-01 {05} ± 1.62E-01 | 1.01E+00 {07} ± 1.08E-02 | 1.09E+00 {06} ± 1.21E-01 | 9.98E-01 {04} ± 8.83E-04 |
| MOGOA | 1.20E+00 {09} ± 7.41E-02 | 1.00E+00 {06} ± 2.30E-04 | 1.28E+00 {08} ± 1.19E-01 | 9.81E-01 {02} ± 0.00E+00 |
| MOABC | 8.08E-01 {03} ± 8.04E-02 | 8.50E-01 {03} ± 9.17E-02 | 8.16E-01 {03} ± 9.78E-02 | 1.01E+00 {05} ± 1.51E-01 |

| Algorithms | ZDT6 | DTLZ1 | DTLZ2 | DTLZ3 |
|---|---|---|---|---|
| MMOGWO | 9.61E-01 {04} ± 7.41E-02 | 1.04E+00 {05} ± 1.80E-01 | 8.18E-01 {04} ± 3.79E-02 | 9.54E-01 {04} ± 1.59E-01 |
| SPEA2 | 9.57E-01 {03} ± 1.89E-01 | 1.39E+00 {09} ± 3.23E-01 | 5.82E-01 {02} ± 5.61E-02 | 1.32E+00 {09} ± 3.61E-01 |
| NSGA-II | 9.96E-01 {05} ± 1.59E-01 | 8.52E-01 {03} ± 4.85E-02 | 6.81E-01 {03} ± 8.76E-02 | 9.62E-01 {05} ± 1.13E-01 |
| MOPSO | 6.91E-01 {01} ± 4.27E-01 | 6.98E-01 {01} ± 8.31E-02 | 4.71E-01 {01} ± 6.82E-02 | 7.63E-01 {02} ± 8.93E-02 |
| MOGWO | 1.12E+00 {07} ± 1.43E-01 | 7.18E-01 {02} ± 6.72E-02 | 8.60E-01 {05} ± 3.02E-02 | 7.01E-01 {01} ± 5.35E-02 |
| MOALO | 1.15E+00 {09} ± 1.03E-01 | 1.30E+00 {08} ± 2.19E-01 | 1.17E+00 {09} ± 5.33E-02 | 1.16E+00 {08} ± 1.74E-01 |
| MOMVO | 1.14E+00 {08} ± 7.91E-02 | 1.06E+00 {07} ± 6.99E-02 | 1.16E+00 {08} ± 6.73E-02 | 1.06E+00 {07} ± 9.92E-02 |
| MOGOA | 1.03E+00 {06} ± 9.94E-02 | 1.04E+00 {05} ± 2.62E-02 | 1.12E+00 {07} ± 8.32E-02 | 1.00E+00 {06} ± 6.16E-03 |
| MOABC | 9.31E-01 {02} ± 1.53E-01 | 9.20E-01 {04} ± 1.70E-01 | 1.02E+00 {06} ± 9.08E-02 | 8.47E-01 {03} ± 1.77E-01 |

| Algorithms | DTLZ4 | DTLZ5 | DTLZ6 | DTLZ7 |
|---|---|---|---|---|
| MMOGWO | 5.68E-01 {02} ± 6.19E-02 | 7.80E-01 {04} ± 5.62E-02 | 8.77E-01 {05} ± 8.98E-02 | 9.41E-01 {06} ± 7.91E-02 |
| SPEA2 | 6.93E-01 {05} ± 1.38E-01 | 6.99E-01 {02} ± 4.55E-02 | 7.34E-01 {03} ± 4.87E-02 | 7.43E-01 {02} ± 9.01E-02 |
| NSGA-II | 6.85E-01 {04} ± 6.26E-02 | 7.41E-01 {03} ± 1.88E-02 | 7.17E-01 {02} ± 3.52E-02 | 8.49E-01 {04} ± 4.88E-02 |
| MOPSO | 4.97E-01 {01} ± 4.65E-02 | 6.58E-01 {01} ± 3.77E-02 | 6.09E-01 {01} ± 3.42E-02 | 4.61E-01 {01} ± 5.66E-02 |
| MOGWO | 6.07E-01 {03} ± 1.29E-01 | 8.00E-01 {05} ± 2.05E-02 | 7.64E-01 {04} ± 6.86E-02 | 8.27E-01 {03} ± 1.48E-01 |
| MOALO | 1.30E+00 {09} ± 8.33E-02 | 1.15E+00 {09} ± 5.02E-02 | 1.17E+00 {09} ± 6.73E-02 | 1.01E+00 {07} ± 7.80E-03 |
| MOMVO | 7.50E-01 {06} ± 4.87E-02 | 1.08E+00 {08} ± 3.77E-02 | 1.10E+00 {08} ± 3.80E-02 | 1.07E+00 {08} ± 1.34E-01 |
| MOGOA | 1.06E+00 {08} ± 3.89E-02 | 1.05E+00 {07} ± 6.56E-02 | 1.06E+00 {07} ± 2.38E-02 | 1.15E+00 {09} ± 9.69E-02 |
| MOABC | 7.59E-01 {07} ± 1.62E-01 | 9.95E-01 {06} ± 8.99E-02 | 9.48E-01 {06} ± 7.53E-02 | 9.36E-01 {05} ± 6.91E-02 |

| Algorithms | UF1 | UF2 | UF3 | UF4 |
|---|---|---|---|---|
| MMOGWO | 9.48E-01 {04} ± 9.99E-02 | 1.00E-01 {01} ± 9.03E-02 | 1.19E+00 {08} ± 2.52E-01 | 9.49E-01 {04} ± 1.07E-01 |
| SPEA2 | 1.38E+00 {09} ± 2.86E-01 | 8.16E-01 {05} ± 7.83E-02 | 9.84E-01 {04} ± 3.41E-02 | 9.68E-01 {05} ± 1.59E-01 |
KUR Mean ± Std 1.03E+00{06} ± 9.50E-02 7.95E-01{04} ± 8.17E-02 5.54E-01{02} ± 7.13E-02 3.89E-01{01} ± 4.84E-02 7.03E-01{03} ± 6.50E-02 1.37E+00{09} ± 1.62E-01 1.07E+00{07} ± 6.62E-02 1.07E+00{07} ± 2.82E-02 8.55E-01{05} ± 1.24E-01 ZDT4 Mean ± Std 1.04E+00{06} ± 6.61E-02 1.18E+00{09} ± 2.55E-01 9.36E-01{01} ± 3.25E-02 9.91E-01{03} ± 3.72E-02 1.08E+00{08} ± 1.18E-01 1.04E+00{06} ± 4.24E-02 9.98E-01{04} ± 8.83E-04 9.81E-01{02} ± 0.00E+00 1.01E+00{05} ± 1.51E-01 DTLZ3 Mean ± Std 9.54E-01{04} ± 1.59E-01 1.32E+00{09} ± 3.61E-01 9.62E-01{05} ± 1.13E-01 7.63E-01{02} ± 8.93E-02 7.01E-01{01} ± 5.35E-02 1.16E+00{08} ± 1.74E-01 1.06E+00{07} ± 9.92E-02 1.00E+00{06} ± 6.16E-03 8.47E-01{03} ± 1.77E-01 DTLZ7 Mean ± Std 9.41E-01{06} ± 7.91E-02 7.43E-01{02} ± 9.01E-02 8.49E-01{04} ± 4.88E-02 4.61E-01{01} ± 5.66E-02 8.27E-01{03} ± 1.48E-01 1.01E+00{07} ± 7.80E-03 1.07E+00{08} ± 1.34E-01 1.15E+00{09} ± 9.69E-02 9.36E-01{05} ± 6.91E-02 UF4 Mean ± Std 9.49E-01{04} ± 1.07E-01 9.68E-01{05} ± 1.59E-01

NSGA-II MOPSO MOGWO MOALO MOMVO MOGOA MOABC Algorithms MMOGWO SPEA2 NSGA-II MOPSO MOGWO MOALO MOMVO MOGOA MOABC Algorithms MMOGWO SPEA2 NSGA-II MOPSO MOGWO MOALO MOMVO MOGOA MOABC

8.11E-01{02} ± 7.58E-02 8.73E-01{03} ± 1.72E-01 9.60E-01{05} ± 1.55E-01 1.14E+00{07} ± 1.31E-01 1.18E+00{08} ± 1.12E-01 1.07E+00{06} ± 5.99E-02 7.16E-01{01} ± 1.05E-01 UF5 Mean ± Std 9.87E-01{04} ± 5.54E-02 1.04E+00{08} ± 1.00E-01 9.59E-01{03} ± 3.21E-02 9.22E-01{02} ± 8.19E-02 1.02E+00{07} ± 1.64E-01 1.04E+00{08} ± 3.27E-02 9.88E-01{05} ± 1.12E-02 1.00E+00{06} ± 6.99E-03 8.47E-01{01} ± 8.85E-02 UF9 Mean ± Std 8.94E-01{05} ± 1.49E-01 7.54E-01{02} ± 7.09E-02 7.60E-01{03} ± 3.40E-02 7.36E-01{01} ± 7.15E-02 8.51E-01{04} ± 9.97E-02 1.11E+00{08} ± 6.01E-02 1.15E+00{09} ± 4.39E-02 1.10E+00{07} ± 7.67E-02 9.58E-01{06} ± 6.96E-02

5.92E-01{02} ± 6.26E-02 7.84E-01{04} ± 8.71E-02 8.57E-01{06} ± 1.26E-01 1.34E+00{09} ± 1.24E-01 1.10E+00{08} ± 6.48E-02 1.05E+00{07} ± 2.98E-02 6.49E-01{03} ± 9.13E-02 UF6 Mean ± Std 1.00E+00{05} ± 5.56E-02 1.05E+00{07} ± 1.02E-01 9.50E-01{02} ± 4.77E-02 9.85E-01{03} ± 8.52E-02 1.06E+00{08} ± 1.44E-01 1.09E+00{09} ± 1.06E-01 9.79E-01{04} ± 2.69E-02 1.02E+00{06} ± 3.70E-02 8.71E-01{01} ± 1.33E-01 UF10 Mean ± Std 8.99E-01{06} ± 3.74E-02 6.87E-01{02} ± 7.79E-02 7.39E-01{04} ± 4.49E-02 7.09E-01{01} ± 6.48E-02 7.25E-01{03} ± 1.16E-01 1.06E+00{08} ± 6.67E-02 9.25E-01{07} ± 5.60E-02 1.09E+00{09} ± 4.23E-02 8.60E-01{05} ± 1.20E-01

8.61E-01{01} ± 8.62E-02 1.04E+00{05} ± 2.26E-01 1.08E+00{06} ± 1.80E-01 1.50E+00{09} ± 1.77E-01 9.62E-01{03} ± 3.69E-02 1.08E+00{06} ± 2.76E-02 8.76E-01{02} ± 1.12E-01 UF7 Mean ± Std 1.11E+00{05} ± 1.68E-01 1.13E+00{07} ± 1.63E-01 8.87E-01{04} ± 7.93E-02 7.89E-01{01} ± 1.02E-01 8.35E-01{02} ± 1.09E-01 1.38E+00{09} ± 1.81E-01 1.12E+00{06} ± 7.69E-02 1.18E+00{08} ± 8.02E-02 8.85E-01{03} ± 1.01E-01 SRN Mean ± Std 9.01E-01{05} ± 7.30E-02 7.73E-01{04} ± 6.91E-02 4.79E-01{02} ± 4.19E-02 3.91E-01{01} ± 3.25E-02 9.29E-01{06} ± 1.15E-01 1.34E+00{09} ± 1.62E-01 1.27E+00{07} ± 7.58E-02 1.32E+00{08} ± 1.75E-01 7.23E-01{03} ± 9.91E-02

5.27E-01{01} ± 4.41E-02 5.84E-01{03} ± 6.66E-02 1.14E+00{06} ± 2.37E-01 1.31E+00{09} ± 1.46E-01 1.14E+00{06} ± 8.10E-02 1.18E+00{07} ± 8.46E-02 5.60E-01{02} ± 5.95E-02 UF8 Mean ± Std 8.40E-01{04} ± 4.15E-02 6.05E-01{01} ± 7.63E-02 7.36E-01{03} ± 3.96E-02 6.51E-01{02} ± 6.90E-02 8.86E-01{05} ± 4.43E-02 1.15E+00{08} ± 7.79E-02 1.16E+00{09} ± 3.89E-02 1.07E+00{07} ± 6.88E-02 1.01E+00{06} ± 1.18E-01 TNK Mean ± Std 7.23E-01{03} ± 9.43E-02 8.10E-01{05} ± 1.34E-01 4.20E-01{01} ± 5.27E-02 5.65E-01{02} ± 5.55E-02 7.85E-01{04} ± 1.66E-01 1.03E+00{08} ± 9.22E-02 8.58E-01{06} ± 7.69E-02 1.23E+00{09} ± 1.39E-01 9.57E-01{07} ± 9.38E-02

Next, we analyze the second metric (Table 2). In this case, the most competitive algorithm is MOPSO, which obtains the best or second-best values on 19 test problems, followed by NSGA-II, which provides the best or second-best fronts on 16 test problems. Roughly, SPEA2, MOABC, MOGWO, MMOGWO, and MOGOA present competitive results on this metric, while MOALO and MOMVO give poor results.
Table 3 Mean and SD values of the GD metric for all test problems.

Entries are Mean{rank} ± SD.

| Problem | MMOGWO | SPEA2 | NSGA-II | MOPSO | MOGWO | MOALO | MOMVO | MOGOA | MOABC |
| SCH | 9.44E-04{05} ± 6.86E-05 | 1.32E-03{09} ± 9.39E-04 | 9.41E-04{04} ± 5.13E-05 | 9.19E-04{02} ± 4.21E-05 | 9.35E-04{03} ± 5.25E-05 | 8.78E-04{01} ± 1.16E-04 | 9.48E-04{06} ± 2.86E-05 | 9.63E-04{07} ± 6.85E-05 | 9.94E-04{08} ± 4.73E-05 |
| FON | 1.23E-03{06} ± 7.99E-05 | 1.87E-03{03} ± 1.23E-04 | 1.18E-03{01} ± 3.75E-05 | 1.20E-03{04} ± 2.93E-05 | 1.20E-03{04} ± 7.21E-05 | 1.28E-03{07} ± 2.10E-04 | 1.19E-03{02} ± 2.04E-04 | 1.01E-02{08} ± 1.03E-03 | 1.06E-02{09} ± 1.96E-03 |
| POL | 1.71E+00{07} ± 4.38E-02 | 1.84E+00{08} ± 2.43E-01 | 1.59E+00{06} ± 3.83E-02 | 1.58E+00{04} ± 2.89E-02 | 1.53E+00{01} ± 1.43E-01 | 1.53E+00{01} ± 3.29E-01 | 1.58E+00{04} ± 1.65E-01 | 1.55E+00{03} ± 1.48E-01 | 2.45E+00{09} ± 1.82E-01 |
| KUR | 7.14E-03{04} ± 6.25E-03 | 1.17E-02{06} ± 5.02E-03 | 7.15E-02{08} ± 4.69E-02 | 4.86E-03{02} ± 8.83E-04 | 5.24E-03{03} ± 1.37E-03 | 9.65E-03{05} ± 5.05E-03 | 2.99E-03{01} ± 1.05E-03 | 8.30E-01{09} ± 6.18E-03 | 1.64E-02{07} ± 1.48E-02 |
| ZDT1 | 2.34E-04{02} ± 1.37E-04 | 1.23E-02{08} ± 3.69E-03 | 4.78E-03{05} ± 4.47E-03 | 3.62E-04{03} ± 1.83E-04 | 1.24E-04{01} ± 5.79E-05 | 6.70E-04{04} ± 1.32E-03 | 9.10E-03{07} ± 1.29E-02 | 8.55E-03{06} ± 2.65E-02 | 9.73E-02{09} ± 2.37E-02 |
| ZDT2 | 9.79E-05{03} ± 1.09E-05 | 2.60E-02{08} ± 2.25E-02 | 7.58E-03{07} ± 4.26E-03 | 2.26E-04{04} ± 1.73E-04 | 9.32E-05{02} ± 1.03E-05 | 6.12E-05{01} ± 6.83E-06 | 1.57E-03{05} ± 1.46E-03 | 2.25E-03{06} ± 5.75E-03 | 1.21E-01{09} ± 3.52E-02 |
| ZDT3 | 6.16E-04{01} ± 9.65E-05 | 1.65E-02{08} ± 5.59E-03 | 6.98E-03{07} ± 5.77E-03 | 6.69E-04{02} ± 1.06E-04 | 7.08E-04{03} ± 4.24E-04 | 1.22E-03{04} ± 6.85E-04 | 6.86E-03{06} ± 1.08E-02 | 4.70E-03{05} ± 6.78E-03 | 6.56E-02{09} ± 2.21E-02 |
| ZDT4 | 6.10E-01{02} ± 6.53E-01 | 8.91E-02{01} ± 1.30E-01 | 7.13E-01{03} ± 2.84E-01 | 8.71E+00{07} ± 6.98E+00 | 1.71E+00{04} ± 3.45E+00 | 2.06E+00{06} ± 6.65E-01 | 1.95E+00{05} ± 5.87E-01 | 1.48E+01{09} ± 2.11E+00 | 1.19E+01{08} ± 5.59E-01 |
| ZDT6 | 2.13E-02{02} ± 8.35E-04 | 3.90E-02{05} ± 5.58E-02 | 6.21E-02{08} ± 3.95E-02 | 1.78E-02{01} ± 3.01E-02 | 4.53E-02{06} ± 2.90E-02 | 5.39E-02{07} ± 4.99E-02 | 3.60E-02{03} ± 2.93E-02 | 3.66E-02{04} ± 1.94E-02 | 1.70E-01{09} ± 6.62E-03 |
| DTLZ1 | 1.35E+01{09} ± 3.49E+00 | 3.81E+00{06} ± 2.75E+00 | 1.33E+00{02} ± 3.56E-01 | 6.10E+00{07} ± 7.79E-01 | 7.58E+00{08} ± 1.42E+00 | 1.82E+00{03} ± 8.15E-01 | 6.57E-01{01} ± 3.65E-01 | 3.10E+00{05} ± 7.76E-02 | 2.41E+00{04} ± 2.20E+00 |
| DTLZ2 | 3.23E-03{04} ± 1.52E-03 | 1.17E-02{08} ± 3.71E-03 | 5.93E-03{06} ± 7.42E-04 | 1.15E-02{07} ± 3.60E-03 | 2.41E-03{02} ± 1.07E-03 | 5.19E-03{05} ± 2.00E-03 | 1.37E-03{01} ± 2.96E-04 | 2.87E-03{03} ± 1.99E-03 | 7.68E-02{09} ± 1.74E-01 |
| DTLZ3 | 6.73E+01{08} ± 1.20E+01 | 1.77E+01{05} ± 9.52E+00 | 7.36E+00{02} ± 1.89E+00 | 7.94E+01{09} ± 8.15E+00 | 6.67E+01{07} ± 5.98E+00 | 1.23E+01{04} ± 1.47E+01 | 3.02E+00{01} ± 1.70E+00 | 1.14E+01{03} ± 1.01E+00 | 5.59E+01{06} ± 9.68E+01 |
| DTLZ4 | 8.72E-03{05} ± 5.77E-03 | 1.43E-02{07} ± 3.54E-03 | 3.12E-03{02} ± 1.30E-03 | 1.27E-02{06} ± 3.70E-03 | 3.98E-03{03} ± 7.13E-03 | 2.85E-02{09} ± 2.33E-02 | 2.42E-03{01} ± 6.34E-04 | 6.83E-03{04} ± 7.29E-03 | 1.79E-02{08} ± 1.03E-02 |
| DTLZ5 | 1.58E-01{01} ± 1.98E-02 | 2.28E-01{08} ± 1.60E-02 | 1.96E-01{05} ± 8.08E-03 | 1.84E-01{03} ± 3.73E-03 | 1.92E-01{04} ± 1.22E-02 | 1.66E-01{02} ± 2.68E-02 | 2.03E-01{06} ± 2.16E-02 | 2.08E-01{07} ± 2.28E-02 | 7.49E-01{09} ± 2.31E-01 |
| DTLZ6 | 6.70E-01{03} ± 3.77E-02 | 9.56E-01{08} ± 5.93E-02 | 7.48E-01{04} ± 2.26E-02 | 6.62E-01{02} ± 1.77E-02 | 6.05E-01{01} ± 5.16E-02 | 8.06E-01{06} ± 7.41E-02 | 8.18E-01{07} ± 4.11E-02 | 7.80E-01{05} ± 5.33E-02 | 2.23E+00{09} ± 1.63E-01 |
| DTLZ7 | 4.96E-03{05} ± 3.98E-03 | 5.21E-02{08} ± 2.48E-02 | 4.48E-03{04} ± 1.74E-03 | 4.33E-03{03} ± 9.74E-04 | 3.29E-03{02} ± 2.86E-03 | 3.60E-04{01} ± 2.28E-04 | 2.30E-02{06} ± 3.25E-02 | 4.00E-02{07} ± 8.08E-02 | 3.24E-01{09} ± 6.42E-01 |
| UF1 | 5.42E-03{01} ± 4.17E-03 | 1.52E-02{06} ± 2.96E-02 | 3.21E-02{09} ± 1.46E-02 | 2.86E-02{08} ± 1.85E-02 | 1.20E-02{05} ± 1.35E-02 | 8.32E-03{03} ± 5.29E-03 | 7.27E-03{02} ± 2.53E-03 | 1.14E-02{04} ± 6.19E-03 | 2.72E-02{07} ± 9.75E-03 |
| UF2 | 7.37E-03{03} ± 2.35E-03 | 1.49E-02{08} ± 3.28E-03 | 1.42E-02{06} ± 5.33E-03 | 2.26E-02{09} ± 5.87E-03 | 1.00E-02{05} ± 2.96E-03 | 1.43E-02{07} ± 4.77E-03 | 4.95E-03{02} ± 1.54E-03 | 2.66E-03{01} ± 2.76E-03 | 9.60E-03{04} ± 2.80E-03 |
| UF3 | 3.24E-02{05} ± 3.46E-03 | 5.66E-02{07} ± 1.40E-02 | 3.20E-02{04} ± 8.99E-03 | 1.04E-01{08} ± 2.18E-02 | 5.37E-02{06} ± 2.42E-02 | 2.94E-02{03} ± 6.64E-03 | 2.52E-02{02} ± 8.45E-03 | 1.87E-02{01} ± 4.53E-03 | 1.39E-01{09} ± 3.07E-01 |
| UF4 | 6.10E-03{02} ± 5.18E-04 | 4.10E-02{09} ± 3.27E-03 | 1.21E-02{08} ± 4.89E-04 | 9.49E-03{05} ± 7.24E-04 | 7.86E-03{03} ± 1.87E-03 | 6.05E-03{01} ± 9.29E-04 | 9.81E-03{06} ± 9.52E-04 | 9.44E-03{04} ± 1.12E-03 | 9.82E-03{07} ± 4.10E-04 |
| UF5 | 2.09E-01{04} ± 9.05E-02 | 2.69E-01{05} ± 4.35E-02 | 3.26E-01{06} ± 3.23E-02 | 5.05E-01{09} ± 2.08E-01 | 4.44E-01{08} ± 2.99E-01 | 1.52E-01{03} ± 2.96E-02 | 1.34E-01{02} ± 4.22E-02 | 5.90E-02{01} ± 8.20E-03 | 3.96E-01{07} ± 5.95E-02 |
| UF6 | 8.13E-02{05} ± 1.19E-02 | 1.26E-01{07} ± 2.99E-02 | 1.50E-01{08} ± 8.54E-02 | 2.27E-01{09} ± 1.22E-01 | 1.19E-01{06} ± 6.94E-02 | 3.45E-02{04} ± 1.22E-02 | 2.23E-02{01} ± 8.80E-03 | 2.93E-02{03} ± 2.39E-02 | 2.24E-01{02} ± 6.34E-02 |
| UF7 | 2.53E-03{01} ± 6.57E-04 | 7.46E-03{06} ± 2.49E-03 | 2.87E-02{09} ± 1.76E-02 | 2.06E-02{07} ± 1.78E-02 | 4.99E-03{03} ± 5.70E-03 | 5.84E-03{05} ± 5.01E-03 | 4.94E-03{02} ± 1.88E-03 | 5.37E-03{04} ± 1.85E-03 | 2.75E-02{08} ± 1.18E-02 |
| UF8 | 2.03E-01{08} ± 7.12E-02 | 1.96E-02{02} ± 1.08E-02 | 5.26E-01{09} ± 1.14E-01 | 1.83E-01{07} ± 3.94E-02 | 1.08E-01{06} ± 8.86E-02 | 1.99E-02{03} ± 1.45E-02 | 7.75E-02{05} ± 3.92E-02 | 5.70E-02{04} ± 7.03E-02 | 1.85E-02{01} ± 1.52E-02 |
| UF9 | 2.33E-02{02} ± 1.39E-02 | 4.39E-02{05} ± 1.38E-02 | 8.18E-01{09} ± 1.18E-01 | 2.63E-01{08} ± 9.62E-02 | 2.61E-02{03} ± 2.42E-02 | 2.67E-02{04} ± 1.16E-02 | 1.17E-01{07} ± 3.00E-02 | 7.28E-02{06} ± 6.42E-02 | 2.29E-02{01} ± 2.60E-02 |
| UF10 | 4.93E-01{07} ± 1.89E-01 | 3.07E-01{03} ± 1.29E-01 | 1.34E+00{09} ± 2.95E-01 | 1.30E+00{08} ± 1.66E-01 | 4.26E-01{06} ± 1.70E-01 | 3.49E-01{04} ± 7.70E-02 | 1.68E-01{01} ± 7.05E-02 | 2.88E-01{02} ± 4.31E-02 | 3.85E-01{05} ± 2.25E-01 |
| SRN | 5.54E-01{04} ± 1.21E-01 | 3.84E-01{02} ± 2.03E-01 | 7.55E-01{07} ± 1.36E-01 | 7.89E-01{08} ± 7.09E-02 | 4.13E-01{03} ± 1.68E-01 | 1.67E-01{01} ± 2.41E-01 | 6.07E-01{05} ± 3.06E-01 | 6.15E-01{06} ± 3.61E-01 | 9.18E-01{09} ± 3.17E-01 |
| TNK | 5.98E-04{05} ± 1.95E-04 | 5.53E-03{09} ± 2.30E-03 | 5.62E-04{04} ± 9.00E-05 | 8.21E-04{07} ± 1.05E-04 | 9.59E-04{08} ± 6.76E-04 | 6.02E-04{06} ± 2.03E-04 | 4.35E-04{02} ± 1.01E-04 | 4.79E-04{03} ± 3.00E-04 | 1.86E-04{01} ± 7.03E-05 |

We now turn to the computed results for the GD metric (Table 3). On this metric, MOMVO, MMOGWO, and MOALO are the three most competitive algorithms, giving the best or second-best values on 14, 7, and 7 test problems, respectively. Other promising algorithms are MOGWO, MOPSO, NSGA-II, MOGOA, and MOABC. Finally, SPEA2 provides the worst results on this metric.
Table 4 Mean and SD values of the IGD metric for all test problems.

Entries are Mean{rank} ± SD.

| Problem | MMOGWO | SPEA2 | NSGA-II | MOPSO | MOGWO | MOALO | MOMVO | MOGOA | MOABC |
| SCH | 2.02E-03{04} ± 1.71E-04 | 2.07E-03{07} ± 3.18E-04 | 2.00E-03{03} ± 1.35E-04 | 1.95E-03{02} ± 1.15E-04 | 2.03E-03{05} ± 1.46E-04 | 1.83E-03{01} ± 3.18E-04 | 2.04E-03{06} ± 8.05E-05 | 2.10E-03{09} ± 1.74E-04 | 2.07E-03{07} ± 1.18E-04 |
| FON | 1.08E-02{05} ± 8.48E-04 | 1.25E-02{07} ± 7.22E-04 | 1.01E-02{02} ± 3.96E-04 | 1.04E-02{03} ± 3.30E-04 | 1.05E-02{04} ± 7.75E-04 | 1.11E-02{06} ± 2.13E-03 | 9.68E-03{01} ± 2.22E-03 | 9.70E-02{09} ± 1.14E-02 | 3.57E-02{08} ± 9.09E-03 |
| POL | 8.25E-01{09} ± 1.77E-02 | 7.39E-01{01} ± 5.34E-02 | 7.91E-01{06} ± 1.28E-02 | 7.86E-01{04} ± 1.41E-02 | 7.71E-01{03} ± 4.76E-02 | 8.21E-01{08} ± 1.17E-01 | 7.90E-01{05} ± 6.62E-02 | 7.96E-01{07} ± 5.58E-02 | 7.47E-01{02} ± 3.70E-02 |
| KUR | 2.76E-03{02} ± 4.14E-03 | 5.35E-03{06} ± 1.86E-03 | 8.31E-02{08} ± 5.41E-02 | 4.63E-03{03} ± 8.11E-04 | 5.22E-03{05} ± 1.30E-03 | 7.05E-03{07} ± 4.12E-03 | 2.61E-03{01} ± 5.78E-04 | 1.24E+00{09} ± 2.70E-02 | 5.13E-03{04} ± 5.03E-03 |
| ZDT1 | 1.18E-03{02} ± 4.01E-04 | 9.98E-02{08} ± 2.50E-02 | 3.72E-02{07} ± 4.33E-02 | 2.57E-03{04} ± 1.77E-03 | 8.17E-04{01} ± 2.10E-04 | 1.57E-03{03} ± 9.67E-03 | 3.14E-02{06} ± 1.65E-02 | 2.58E-02{05} ± 2.33E-01 | 3.02E-01{09} ± 5.59E-02 |
| ZDT2 | 8.52E-04{03} ± 1.06E-04 | 1.59E-01{08} ± 4.02E-02 | 8.31E-02{07} ± 4.28E-02 | 1.64E-03{05} ± 1.54E-03 | 7.99E-04{02} ± 9.56E-05 | 5.40E-04{01} ± 7.52E-05 | 2.81E-03{06} ± 3.70E-03 | 8.79E-04{04} ± 6.95E-03 | 3.02E-01{09} ± 5.17E-03 |
| ZDT3 | 2.74E-03{01} ± 3.71E-04 | 5.17E-02{08} ± 1.46E-02 | 2.91E-02{07} ± 3.86E-02 | 3.18E-03{03} ± 1.15E-03 | 2.93E-03{02} ± 1.52E-03 | 4.40E-03{04} ± 2.80E-03 | 1.74E-02{06} ± 8.02E-03 | 1.46E-02{05} ± 4.68E-02 | 1.21E-01{09} ± 3.43E-02 |
| ZDT4 | 4.04E+00{04} ± 4.16E+00 | 1.81E-02{01} ± 1.46E-01 | 6.22E+00{05} ± 2.85E+00 | 1.04E+01{06} ± 1.36E+01 | 3.89E+00{03} ± 5.86E+00 | 1.85E+01{09} ± 5.25E+00 | 1.26E+01{07} ± 4.91E-02 | 1.53E+01{08} ± 3.38E-01 | 2.17E+00{02} ± 8.92E-01 |
| ZDT6 | 2.34E-01{04} ± 1.28E-02 | 4.57E-02{02} ± 3.25E-01 | 1.63E-01{03} ± 3.53E-01 | 4.74E-03{01} ± 6.68E-02 | 2.74E-01{06} ± 1.27E-01 | 3.18E-01{08} ± 2.17E-01 | 2.59E-01{05} ± 5.01E-02 | 2.88E-01{07} ± 7.38E-02 | 4.27E-01{09} ± 1.25E-01 |
| DTLZ1 | 1.90E+02{09} ± 5.11E+01 | 1.40E+01{03} ± 1.21E+01 | 2.64E+01{05} ± 7.06E+00 | 5.39E+01{06} ± 9.49E+00 | 6.90E+01{08} ± 1.16E+01 | 2.39E+01{04} ± 6.11E+00 | 9.71E+00{02} ± 1.48E+00 | 6.39E+01{07} ± 1.66E+00 | 5.22E+00{01} ± 5.57E+00 |
| DTLZ2 | 2.78E-02{05} ± 1.49E-02 | 5.94E-02{08} ± 1.49E-02 | 5.00E-02{06} ± 6.15E-03 | 7.68E-02{09} ± 1.06E-02 | 1.99E-02{04} ± 9.60E-03 | 5.01E-02{07} ± 1.99E-02 | 1.24E-02{02} ± 2.62E-03 | 1.51E-02{03} ± 8.63E-03 | 2.88E-03{01} ± 1.85E-01 |
| DTLZ3 | 5.10E+02{09} ± 9.93E+01 | 3.98E+01{03} ± 2.20E+01 | 6.80E+01{07} ± 1.88E+01 | 4.12E+02{04} ± 2.77E+01 | 4.28E+02{05} ± 2.45E+01 | 4.68E+01{06} ± 1.22E+02 | 1.09E+01{01} ± 6.15E+00 | 8.23E+01{08} ± 2.99E+00 | 3.20E+01{02} ± 9.52E+01 |
| DTLZ4 | 4.73E-02{06} ± 3.35E-02 | 6.77E-02{07} ± 1.63E-02 | 2.27E-02{04} ± 8.00E-03 | 8.84E-02{08} ± 1.17E-02 | 1.48E-02{01} ± 5.05E-02 | 1.78E-01{09} ± 1.84E-01 | 2.18E-02{03} ± 6.52E-03 | 2.62E-02{05} ± 2.89E-02 | 1.94E-02{02} ± 1.35E-02 |
| DTLZ5 | 1.88E+00{03} ± 2.66E-01 | 2.07E+00{04} ± 1.55E-01 | 2.29E+00{07} ± 1.09E-01 | 2.11E+00{05} ± 5.77E-02 | 2.18E+00{06} ± 2.02E-01 | 1.84E+00{02} ± 3.90E-01 | 2.51E+00{08} ± 3.48E+02 | 2.71E+00{09} ± 3.40E-01 | 1.69E+00{01} ± 2.19E-01 |
| DTLZ6 | 8.47E+00{03} ± 5.11E-01 | 8.63E+00{04} ± 3.80E-01 | 9.20E+00{05} ± 3.56E-01 | 7.83E+00{02} ± 2.53E-01 | 7.64E+00{01} ± 6.72E-01 | 1.03E+01{07} ± 1.22E+00 | 1.06E+01{08} ± 6.46E-01 | 1.02E+01{06} ± 1.01E+00 | 1.20E+01{09} ± 1.58E-01 |
| DTLZ7 | 1.74E-02{03} ± 1.58E-02 | 1.32E-01{09} ± 2.22E-02 | 2.17E-02{04} ± 8.82E-03 | 2.71E-02{05} ± 5.24E-03 | 1.47E-02{02} ± 1.54E-02 | 2.41E-03{01} ± 1.52E-03 | 7.30E-02{06} ± 1.68E-02 | 8.05E-02{07} ± 2.23E-01 | 1.09E-01{08} ± 2.81E-01 |
| UF1 | 2.42E-02{01} ± 3.80E-02 | 3.01E-02{02} ± 3.39E-02 | 2.11E-01{09} ± 9.24E-02 | 1.05E-01{08} ± 4.25E-02 | 3.33E-02{03} ± 5.83E-02 | 5.12E-02{04} ± 5.15E-02 | 5.17E-02{05} ± 1.89E-02 | 7.82E-02{07} ± 3.65E-02 | 7.65E-02{06} ± 2.10E-02 |
| UF2 | 4.70E-02{05} ± 1.44E-02 | 5.22E-02{06} ± 1.05E-02 | 7.14E-02{08} ± 2.51E-02 | 1.18E-02{01} ± 2.52E-02 | 6.86E-02{07} ± 1.95E-02 | 1.07E-01{09} ± 4.25E-02 | 3.33E-02{03} ± 1.14E-02 | 1.28E-02{02} ± 2.47E-02 | 3.92E-02{04} ± 7.31E-03 |
| UF3 | 2.53E-01{04} ± 6.05E-02 | 3.90E-01{08} ± 4.68E-02 | 3.02E-01{05} ± 8.20E-02 | 4.93E-01{09} ± 1.01E-01 | 3.05E-01{06} ± 1.15E-01 | 1.91E-01{02} ± 8.63E-02 | 2.12E-01{03} ± 8.48E-02 | 1.56E-01{01} ± 4.74E-02 | 3.31E-01{07} ± 6.92E-02 |
| UF4 | 5.64E-02{02} ± 4.52E-03 | 1.92E-01{09} ± 1.91E-02 | 1.20E-01{08} ± 4.79E-03 | 9.10E-02{06} ± 7.47E-03 | 6.16E-02{03} ± 7.13E-03 | 5.46E-02{01} ± 6.81E-03 | 9.12E-02{07} ± 9.70E-03 | 8.93E-02{05} ± 1.12E-02 | 6.59E-02{04} ± 1.84E-03 |
| UF5 | 1.38E+00{06} ± 5.91E-01 | 1.21E+00{04} ± 1.53E-01 | 3.30E+00{09} ± 3.19E-01 | 1.59E+00{08} ± 6.32E-01 | 1.35E+00{05} ± 7.51E-01 | 1.46E+00{07} ± 2.56E-01 | 1.00E+00{02} ± 9.05E-02 | 5.36E-01{01} ± 8.09E-02 | 1.02E+00{03} ± 1.21E-01 |
| UF6 | 4.25E-01{04} ± 2.55E-02 | 5.73E-01{07} ± 5.48E-02 | 1.35E+00{09} ± 6.91E-01 | 5.67E-01{06} ± 4.02E-01 | 4.27E-01{05} ± 1.47E-01 | 2.92E-01{03} ± 1.24E-01 | 1.33E-01{01} ± 4.58E-02 | 1.50E-01{02} ± 2.31E-01 | 5.75E-01{08} ± 9.27E-02 |
| UF7 | 2.11E-02{01} ± 5.04E-03 | 5.85E-02{06} ± 1.95E-02 | 2.54E-01{09} ± 1.55E-01 | 9.16E-02{08} ± 5.03E-02 | 2.18E-02{02} ± 3.11E-02 | 3.03E-02{05} ± 4.69E-02 | 2.67E-02{04} ± 1.39E-02 | 2.66E-02{03} ± 1.91E-02 | 7.20E-02{07} ± 2.30E-02 |
| UF8 | 1.95E+00{08} ± 7.10E-01 | 1.15E-01{02} ± 6.42E-02 | 4.64E+00{09} ± 9.59E-01 | 1.57E+00{07} ± 3.24E-01 | 7.87E-01{06} ± 8.73E-01 | 1.50E-01{03} ± 1.42E-01 | 5.73E-01{05} ± 2.90E-01 | 2.11E-01{04} ± 5.97E-01 | 1.89E-02{01} ± 1.83E-02 |
| UF9 | 1.62E-01{02} ± 1.20E-01 | 2.49E-01{04} ± 7.91E-02 | 7.46E+00{09} ± 1.06E+00 | 2.16E+00{08} ± 7.08E-01 | 1.78E-01{03} ± 2.32E-01 | 2.13E-01{05} ± 1.09E-01 | 1.02E+00{07} ± 2.69E-01 | 5.17E-01{06} ± 4.76E-01 | 2.03E-02{01} ± 2.84E-02 |
| UF10 | 4.93E+00{07} ± 1.86E+00 | 1.36E+00{02} ± 6.51E-01 | 1.17E+01{09} ± 2.62E+00 | 1.00E+01{08} ± 1.15E+00 | 4.16E+00{06} ± 1.38E+00 | 3.28E+00{05} ± 7.75E-01 | 1.52E+00{03} ± 5.40E-01 | 2.71E+00{04} ± 4.30E-01 | 5.63E-01{01} ± 3.40E-01 |
| SRN | 8.33E-03{04} ± 2.47E-03 | 4.30E-03{02} ± 3.03E-03 | 1.16E-02{07} ± 2.74E-03 | 1.23E-02{08} ± 1.60E-03 | 5.89E-03{03} ± 3.02E-03 | 2.21E-03{01} ± 5.14E-03 | 8.57E-03{05} ± 8.49E-03 | 1.04E-02{06} ± 7.84E-03 | 1.25E-02{09} ± 4.33E-03 |
| TNK | 3.55E-03{06} ± 9.37E-04 | 1.50E-02{09} ± 5.41E-03 | 3.18E-03{05} ± 4.80E-04 | 5.61E-03{08} ± 6.61E-04 | 4.24E-03{07} ± 2.62E-03 | 3.03E-03{04} ± 1.09E-03 | 2.48E-03{03} ± 4.22E-04 | 2.19E-03{02} ± 1.05E-03 | 9.53E-04{01} ± 1.70E-04 |
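For reference, the GD and IGD values tabulated above can be computed in a few lines of NumPy. This is a minimal sketch of one common formulation (mean nearest-neighbor Euclidean distance; some papers instead use a root-mean-square variant), with made-up arrays standing in for an obtained front and a sampled true Pareto front:

```python
import numpy as np

def gd(obtained, true_front):
    # Generational Distance: for each obtained point, distance to the
    # nearest point of the sampled true front, averaged over the front.
    d = np.linalg.norm(obtained[:, None, :] - true_front[None, :, :], axis=2)
    return d.min(axis=1).mean()

def igd(obtained, true_front):
    # Inverted Generational Distance: for each true-front point, distance
    # to the nearest obtained point. Low IGD requires both convergence
    # and good coverage, which is why IGD is a combined metric.
    d = np.linalg.norm(true_front[:, None, :] - obtained[None, :, :], axis=2)
    return d.min(axis=1).mean()

true_front = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
obtained = np.array([[0.0, 1.0], [1.0, 0.0]])
```

With these toy arrays, GD is zero (every obtained point lies on the true front), while IGD is nonzero because the middle true-front point is not covered.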

We then compare the algorithms with respect to the IGD metric (Table 4). In this case, MOABC is the most competitive algorithm, obtaining the best or second-best results on 11 test problems, followed by MMOGWO, SPEA2, MOGWO, MOALO, and MOMVO, each performing best or second-best on 7 test problems, while MOGOA, MOPSO, and NSGA-II are the three worst algorithms on this metric. To visually show the optimization performance, the fronts with the best IGD values obtained over the 30 runs of MMOGWO on all test problems are plotted in Fig. 12. In addition, for an intuitive comparison and more detailed information about the numerical results, box plots of the IGD metric on all test problems over 30 runs are displayed in Fig. 13. The order of the algorithms in Fig. 13 is MMOGWO, SPEA2, NSGA-II, MOPSO, MOGWO, MOALO, MOMVO, MOGOA, and MOABC, represented by 1, 2, 3, 4, 5, 6, 7, 8, and 9, respectively. The IGD metric is chosen because it measures both the diversity and the convergence of the obtained PF and is therefore a combined, comprehensive metric. Looking first at Fig. 12, it is observed that MMOGWO provides remarkable convergence and coverage in solving SCH, FON, POL, KUR, ZDT1, ZDT2, ZDT3, ZDT4, DTLZ4, UF2, UF7, UF8, UF9, SRN, and TNK. It can also be seen from Fig. 12 that although MMOGWO shows poor performance on DTLZ1, DTLZ3, UF5, UF6, and UF10, the Pareto optimal solutions obtained by MMOGWO are very close to the true PF on ZDT6, DTLZ2, DTLZ7, UF1, UF3, UF4, UF6, and UF10. Next, observing Fig. 13, we find that MMOGWO maintains lower values and a narrower distribution of the IGD metric in most cases, implying its superiority over the compared state-of-the-art MOEAs.
Table 5 Final rankings of each algorithm on each metric.

| Metric | Statistic | MMOGWO | SPEA2 | NSGA-II | MOPSO | MOGWO | MOALO | MOMVO | MOGOA | MOABC |
| Metric of Table 1 | µr | 4.2500 | 5.1071 | 6.3571 | 5.8214 | 4.3929 | 4.6429 | 4.0357 | 5.1071 | 5.1786 |
| | δr | 5.6875 | 7.0242 | 4.5867 | 6.2895 | 3.7385 | 6.1582 | 5.1059 | 5.9528 | 10.790 |
| | AR | 2 | 5 | 9 | 8 | 3 | 4 | 1 | 5 | 7 |
| Metric of Table 2 | µr | 4.6786 | 4.7143 | 2.5714 | 1.6786 | 5.1429 | 8.3571 | 6.6071 | 7.0357 | 3.8929 |
| | δr | 1.7895 | 5.5612 | 1.2449 | 1.1467 | 4.0510 | 0.6582 | 2.6671 | 2.1773 | 3.2385 |
| | AR | 4 | 5 | 2 | 1 | 6 | 9 | 7 | 8 | 3 |
| GD | µr | 4.0714 | 6.2500 | 5.7857 | 5.6786 | 4.1429 | 3.9286 | 3.5357 | 4.6429 | 6.8571 |
| | δr | 5.1378 | 5.1161 | 5.9541 | 6.7181 | 4.4796 | 4.6378 | 4.8916 | 4.8724 | 7.4082 |
| | AR | 3 | 8 | 7 | 6 | 4 | 2 | 1 | 5 | 9 |
| IGD | µr | 4.3571 | 5.2500 | 6.5000 | 5.5357 | 4.0714 | 4.7143 | 4.3214 | 5.3929 | 4.8214 |
| | δr | 5.6582 | 7.1161 | 4.3214 | 6.0344 | 3.8520 | 6.9898 | 4.7895 | 6.0957 | 10.361 |
| | AR | 3 | 6 | 9 | 8 | 1 | 4 | 2 | 7 | 5 |
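The average ranking (AR) procedure behind Table 5 is straightforward: rank the algorithms on each problem, average the ranks over all problems to get µr, then rank the averages. A minimal sketch with a made-up 3-problem, 3-algorithm score matrix (lower is better; note that this simple argsort-based ranking does not average tied ranks, unlike the table above):

```python
import numpy as np

# Hypothetical per-problem mean metric values: rows = problems,
# columns = algorithms; lower is better (as for GD or IGD).
scores = np.array([
    [0.12, 0.30, 0.25],
    [0.40, 0.10, 0.20],
    [0.05, 0.07, 0.06],
])

# Rank within each problem (1 = best), then average over problems (µr).
ranks = scores.argsort(axis=1).argsort(axis=1) + 1
mu_r = ranks.mean(axis=0)

# Final average ranking (AR): rank the algorithms by their µr values.
ar = mu_r.argsort().argsort() + 1
```

Here the first algorithm wins two of three problems, so it receives the lowest µr and final AR 1.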

Table 5 shows the final rankings of all algorithms, computed with the average ranking (AR) method, where the best values are bolded and underlined and the second-best values are bolded only. The test results show that MOMVO is the most competitive algorithm, obtaining the best or second-best values on three metrics. Notably, our MMOGWO ranks second on the metric of Table 1 and third on two metrics, GD and IGD, indicating a strong overall showing against its competitors. Moreover, the µr and δr values on the four metrics provided by all the compared algorithms, shown in Fig. 14, indicate that MMOGWO has better robustness and convergence in overall performance.

Fig. 14. Plot of Standard deviation (δr) against Average (µr) on all test problems.

To reach a statistical conclusion, a pair-wise statistical test, the Wilcoxon Signed-Ranks Test (WSRT) (Alcalá-Fdez, Fernández, Luengo, Derrac, García, Sánchez, & Herrera, 2011; Derrac, García, Molina, & Herrera, 2011), is employed here to better compare the overall performance of the algorithms. WSRT is a nonparametric test that checks for a statistically significant difference between two algorithms. The null hypothesis H0 for a two-sided test is: "there is no difference between the median of the solutions provided by algorithm A and the median of the solutions provided by algorithm B for the same test function" (Zhang, Li, Li, Lai, & Zhang, 2016; Zhao, Wang, & Zhang, 2019). To determine whether algorithm A achieves a statistically better solution than algorithm B, or, if not, whether the alternative hypothesis is valid, the sum of ranks produced by WSRT is examined. When using WSRT, the rank sums R+ and R− for the comparison between the two algorithms are computed first; once R+ and R− have been obtained, the p-values can be computed. In this part, WSRT at a significance level of α = 0.05 is used for multi-problem statistical analysis.
Table 6 Statistical results obtained by the multi-problem-based WSRT for MMOGWO vs. its competitors.

| Metric | MMOGWO vs. | R+ | R− | p-value | Sig. sign |
| Metric of Table 1 | SPEA2 | 204 | 202 | 0.982 | = |
| | NSGA-II | 89 | 317 | 0.009 | + |
| | MOPSO | 140 | 266 | 0.151 | = |
| | MOGWO | 213.5 | 192.5 | 0.811 | = |
| | MOALO | 197 | 209 | 0.891 | = |
| | MOMVO | 215 | 191 | 0.785 | = |
| | MOGOA | 196 | 210 | 0.873 | = |
| | MOABC | 199 | 207 | 0.937 | = |
| Metric of Table 2 | SPEA2 | 242 | 164 | 0.374 | = |
| | NSGA-II | 367 | 39 | 0.000 | − |
| | MOPSO | 379 | 27 | 0.000 | − |
| | MOGWO | 185 | 193 | 0.923 | = |
| | MOALO | 0 | 378 | 0.000 | + |
| | MOMVO | 65 | 341 | 0.002 | + |
| | MOGOA | 18.5 | 359.5 | 0.000 | + |
| | MOABC | 142 | 264 | 0.169 | = |
| GD | SPEA2 | 150 | 256 | 0.227 | = |
| | NSGA-II | 100 | 306 | 0.019 | + |
| | MOPSO | 102 | 304 | 0.021 | + |
| | MOGWO | 214 | 192 | 0.802 | = |
| | MOALO | 223 | 182 | 0.641 | = |
| | MOMVO | 218 | 188 | 0.733 | = |
| | MOGOA | 193 | 213 | 0.820 | = |
| | MOABC | 313 | 93 | 0.011 | + |
| IGD | SPEA2 | 189 | 217 | 0.750 | = |
| | NSGA-II | 93 | 313 | 0.012 | + |
| | MOPSO | 145 | 261 | 0.187 | = |
| | MOGWO | 252 | 154 | 0.265 | = |
| | MOALO | 203 | 203 | 1.000 | = |
| | MOMVO | 216 | 190 | 0.767 | = |
| | MOGOA | 200 | 206 | 0.946 | = |
| | MOABC | 184.5 | 221.5 | 0.682 | = |
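The R+ and R− columns of Table 6 are the Wilcoxon signed-rank sums. A minimal, self-contained sketch of how they are obtained from paired per-problem results (made-up integer-scaled values; in practice the p-value would then come from a table of critical values or a routine such as scipy.stats.wilcoxon):

```python
import numpy as np

def rank_sums(a, b):
    # Differences of the paired samples; zero differences are discarded.
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    d = d[d != 0]
    # Rank the absolute differences (1 = smallest)...
    order = np.abs(d).argsort()
    ranks = np.empty(len(d))
    ranks[order] = np.arange(1, len(d) + 1)
    # ...and average the ranks of tied absolute differences.
    for v in np.unique(np.abs(d)):
        tie = np.abs(d) == v
        ranks[tie] = ranks[tie].mean()
    # R+ sums the ranks of positive differences, R− those of negative ones.
    return ranks[d > 0].sum(), ranks[d < 0].sum()

# Hypothetical per-problem mean metric values for two algorithms
# (scaled to integers so that ties are exact).
algo_a = [21, 34, 18, 50, 27, 33, 41, 25]
algo_b = [30, 31, 45, 71, 29, 60, 55, 30]
```

With these toy samples, R+ + R− equals n(n+1)/2 = 36, and a heavily lopsided split between the two sums is what drives a small p-value.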

For the multi-problem statistical analysis, the mean values of the above four metrics over 30 runs for each test problem are used as the sample data. The multi-problem-based statistical comparisons between the MMOGWO algorithm and each of the referred algorithms by WSRT, together with the corresponding statistical results over 30 runs, are shown in Table 6. In this table, '+' marks a case in which the null hypothesis is rejected and MMOGWO shows better performance at the 95% confidence level (α = 0.05); '−' marks a case in which the null hypothesis is rejected and MMOGWO shows worse performance; and '=' marks a case in which no statistically significant difference exists between MMOGWO and the other algorithm (Zhao, Wang, & Zhang, 2019). From Table 6, we find that our MMOGWO achieves results comparable to SPEA2, MOPSO, MOGWO, MOALO, MOMVO, MOGOA, and MOABC on the metric of Table 1 and on IGD, and significantly improves on NSGA-II on these two metrics. As for the metric of Table 2, MMOGWO shows an improvement over MOALO, MOMVO, and MOGOA and provides results comparable to SPEA2, MOGWO, and MOABC, although NSGA-II and MOPSO outperform MMOGWO on this metric. From the results on the GD metric, we can see that MMOGWO performs better than NSGA-II, MOPSO, and MOABC and is equivalent to SPEA2, MOGWO, MOALO, MOMVO, and MOGOA. According to the above analysis, we can conclude that MMOGWO shows strong and competitive performance when compared with the other MOEAs.
Table 7 Mean values of the RUNTIME metric for all the algorithms.

| Problem | MMOGWO | SPEA2 | NSGA-II | MOPSO | MOGWO | MOALO | MOMVO | MOGOA | MOABC |
| SCH | 9.686E+03 | 3.161E+03 | 1.551E+02 | 3.501E+02 | 7.394E+03 | 3.703E+02 | 2.226E+03 | 6.058E+02 | 6.612E+02 |
| FON | 9.014E+03 | 3.172E+03 | 1.340E+02 | 3.142E+02 | 6.264E+03 | 3.961E+02 | 2.336E+02 | 9.520E+02 | 1.392E+02 |
| POL | 8.473E+03 | 3.287E+03 | 2.932E+02 | 5.535E+02 | 5.976E+03 | 5.449E+02 | 4.659E+02 | 8.750E+02 | 1.211E+02 |
| KUR | 8.251E+03 | 3.258E+03 | 3.174E+02 | 4.236E+02 | 5.739E+03 | 4.721E+02 | 3.637E+02 | 1.063E+03 | 9.719E+01 |
| ZDT1 | 9.845E+03 | 3.297E+03 | 2.941E+02 | 4.200E+02 | 7.600E+03 | 7.350E+02 | 3.223E+02 | 5.374E+03 | 7.310E+01 |
| ZDT2 | 8.864E+03 | 3.291E+03 | 3.193E+02 | 3.930E+02 | 2.571E+03 | 4.673E+02 | 1.155E+02 | 4.998E+03 | 6.535E+01 |
| ZDT3 | 8.313E+03 | 3.187E+03 | 2.810E+02 | 1.383E+02 | 6.127E+03 | 6.675E+02 | 2.643E+02 | 5.212E+03 | 6.745E+01 |
| ZDT4 | 4.069E+03 | 3.127E+03 | 3.134E+02 | 9.169E+01 | 2.174E+03 | 4.703E+02 | 1.216E+02 | 1.261E+03 | 5.906E+01 |
| ZDT6 | 7.583E+03 | 3.291E+03 | 7.819E+02 | 7.761E+02 | 5.584E+03 | 7.636E+02 | 5.613E+02 | 2.315E+03 | 1.001E+02 |
| DTLZ1 | 1.119E+04 | 7.133E+03 | 1.524E+03 | 6.112E+02 | 3.498E+03 | 1.494E+03 | 1.014E+03 | 4.202E+03 | 3.224E+02 |
| DTLZ2 | 2.549E+04 | 1.656E+04 | 1.284E+04 | 1.387E+04 | 1.629E+04 | 1.404E+04 | 1.394E+04 | 1.657E+04 | 1.890E+03 |
| DTLZ3 | 1.173E+04 | 7.629E+03 | 2.027E+03 | 9.968E+02 | 5.218E+03 | 2.367E+03 | 9.197E+02 | 4.969E+03 | 4.736E+02 |
| DTLZ4 | 1.775E+04 | 7.943E+03 | 1.965E+03 | 2.452E+03 | 1.207E+04 | 2.756E+03 | 2.618E+03 | 5.654E+03 | 4.643E+02 |
| DTLZ5 | 1.982E+04 | 8.959E+03 | 3.764E+03 | 5.154E+03 | 1.622E+04 | 4.615E+03 | 4.480E+03 | 7.835E+03 | 7.994E+02 |
| DTLZ6 | 1.940E+04 | 8.465E+03 | 4.719E+03 | 5.129E+03 | 1.556E+04 | 4.400E+03 | 4.433E+03 | 8.206E+03 | 1.142E+03 |
| DTLZ7 | 2.212E+04 | 8.615E+03 | 3.575E+03 | 4.182E+03 | 1.654E+04 | 4.636E+03 | 3.981E+03 | 1.126E+04 | 9.053E+02 |
| UF1 | 4.866E+03 | 3.252E+03 | 5.615E+02 | 2.119E+02 | 2.218E+03 | 7.150E+02 | 4.426E+02 | 5.523E+03 | 9.010E+01 |
| UF2 | 8.265E+03 | 3.171E+03 | 5.029E+02 | 2.927E+02 | 5.700E+03 | 9.066E+02 | 5.476E+02 | 5.469E+03 | 1.115E+02 |
| UF3 | 7.671E+03 | 3.189E+03 | 5.436E+02 | 2.060E+02 | 3.048E+03 | 8.966E+02 | 3.565E+02 | 5.261E+03 | 9.952E+01 |
| UF4 | 7.944E+03 | 3.185E+03 | 4.764E+02 | 4.392E+02 | 4.350E+03 | 7.172E+02 | 5.327E+02 | 5.015E+03 | 1.719E+02 |
| UF5 | 2.693E+03 | 3.061E+03 | 4.629E+02 | 1.183E+02 | 1.122E+03 | 4.999E+02 | 2.006E+02 | 4.761E+03 | 7.000E+01 |
| UF6 | 2.327E+03 | 3.483E+03 | 5.101E+02 | 1.587E+02 | 1.216E+03 | 7.822E+02 | 2.859E+02 | 4.921E+03 | 9.365E+01 |
| UF7 | 6.482E+03 | 3.160E+03 | 5.102E+02 | 2.476E+02 | 4.454E+03 | 9.188E+02 | 4.642E+02 | 4.814E+03 | 8.749E+01 |
| UF8 | 2.208E+04 | 8.570E+03 | 4.163E+03 | 3.936E+03 | 1.801E+04 | 5.445E+03 | 4.154E+03 | 1.298E+04 | 9.095E+02 |
| UF9 | 1.948E+04 | 7.246E+03 | 4.185E+03 | 3.722E+03 | 1.498E+04 | 5.576E+03 | 4.267E+03 | 1.203E+04 | 9.351E+02 |
| UF10 | 2.203E+04 | 7.035E+03 | 4.183E+03 | 3.198E+03 | 1.283E+04 | 1.910E+03 | 4.194E+03 | 1.192E+04 | 9.129E+02 |
| SRN | 1.030E+04 | 2.782E+03 | 9.764E+02 | 1.090E+03 | 7.765E+03 | 1.167E+03 | 9.800E+02 | 9.370E+02 | 2.078E+02 |
| TNK | 7.412E+03 | 2.934E+03 | 1.399E+03 | 2.282E+02 | 4.934E+03 | 3.312E+02 | 2.287E+02 | 4.979E+02 | 2.375E+02 |

To evaluate the speed of the algorithms, the mean runtime over 30 runs is recorded in Table 7. It shows that MOABC is the fastest algorithm, followed by MOPSO, NSGA-II, MOMVO, MOALO, and MOGOA. Although these six algorithms outperform MMOGWO in terms of runtime, our algorithm provides results similar to or much better than those of its rivals on the four quality metrics (Table 6). On the whole, MMOGWO is a promising and effective algorithm with an acceptable running speed.

4. Component-wise experiment and analysis 4.1. Comparison between MMOGWO and several variants of MOPSO In this section, the results obtained by the MMOGWO algorithm are compared with the results of several variants of MOPSO (OMOPSO (Sierra, & Coello Coello, 2005), SMPSO (Nebro, Durillo, Garcia-Nieto, Coello Coello, Luna, & Alba, 2009a), dMOPSO (Martinez, & Coello Coello, 2011), CMPSO (Zhan, Li, Cao, Zhang, Chung, & Shi, 2013), D2MOPSO (AI Moubayed, Pertovski, & McCall, 2012; AI Moubayed, Pertovski, & McCall, 2014) and MMPSO (Lin, Li, Du, Chen, & Ming, 2015)) on three families test problems: SFK (SCH, FON and KUR) benchmark functions, ZDT (ZDT1-ZDT4 and ZDT6) benchmark functions and DTLZ (DTLZ1-DTLZ7) benchmark functions in terms of IGD metric. We run the MMOGWO by using our Matlab code. Specifically, the settings of population size and external archive size are kept to 200 for bi-objective problems and 595 for tri-objective problems, respectively. The maxIteration is set to 300. Thus, the maximal numbers of functions evaluations (FEs) are set to 60,000 and 178,500 for bi-objective and tri-objective problems, respectively. The MMOGWO algorithm is run independently 30 times for each test problem, and the Mean values, as well as the Std values of IGD are summarized for comparison in Table 8. In this table, the results of the algorithms (OMOPSO, SMPSO, dMOPSO, CMPSO, D 2MOPSO, and MMOPSO) are provided by Lin, Li, Du, Chen, & Ming, (2015). For comparison purposes, the results of NSGA-II and MOEA/D reported by Lin, Li, Du, Chen, & Ming, (2015) are also taken herein. From Table 8, we can observe that the MMOPSO and the SMPSO are the most competitive algorithms obtaining the best or second-best values on 6 out of 15 test problems, and then they are MMOGWO, dMOPSO and MOEA/D algorithms give competitive results on 4 out of 15 test problems. 
CMPSO may be the third-best algorithm, while OMOPSO, D2MOPSO, and NSGA-II are the three worst algorithms, since they obtain competitive results on only one test problem each. Although MMOGWO does not rank first on all test problems, it obtains the best results on 5 problems, which outperforms all other algorithms except MMOPSO. The final rankings of all algorithms computed by the average ranking (AR) method are recorded in Table 9 and plotted in Fig. 15. In Table 9, the best results are bolded and underlined, and the second-best results are bolded only. As seen from Table 9, the most competitive algorithm is MMOPSO, followed by SMPSO and OMOPSO. Other promising algorithms are MOEA/D, dMOPSO, CMPSO, NSGA-II, and MMOGWO; D2MOPSO performs the worst under this method. To further distinguish MMOGWO from its competitors, the statistical results obtained by the multi-problem-based Wilcoxon signed-rank test (WSRT) at a significance level of α = 0.05, listed in Table 10, demonstrate that MMOGWO is as competitive as OMOPSO, dMOPSO, CMPSO, D2MOPSO, NSGA-II and MOEA/D, although SMPSO and MMOPSO significantly outperform MMOGWO (p = 0.041 < 0.05). Thus, we can conclude that MMOGWO is very competitive compared with its counterparts.
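The AR statistics in Table 9 are reproducible from the per-problem ranks in Table 8: µr is the mean of an algorithm's ranks across the problems, and δr is their population variance. A minimal sketch, using MMOGWO's fifteen IGD ranks as read from Table 8:

```python
def ar_statistics(ranks):
    """Mean (mu_r) and population variance (delta_r) of per-problem ranks."""
    n = len(ranks)
    mu = sum(ranks) / n
    var = sum((r - mu) ** 2 for r in ranks) / n
    return mu, var

# MMOGWO's IGD ranks on the 15 problems of Table 8 (SCH, FON, ..., DTLZ7)
mmogwo_ranks = [1, 9, 1, 1, 1, 5, 8, 9, 9, 9, 9, 8, 9, 9, 1]
mu, var = ar_statistics(mmogwo_ranks)
print(round(mu, 4), round(var, 4))  # 5.9333 13.1289, matching Table 9
```

These two values match MMOGWO's µr and δr entries in Table 9 exactly, which confirms the reading of δr as the variance of the per-problem ranks.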

Fig. 15. Plot of Standard deviation (δr) against Average (µr) on all test problems.


Table 8 Mean values and Std values of metric IGD for all algorithms (each cell: Mean{rank} ± Std).

Part (a):
Problem | MMOGWO | OMOPSO | SMPSO | dMOPSO | CMPSO
SCH | 2.02E-03{01} ± 1.04E-04 | 7.97E-03{02} ± 7.89E-04 | 8.33E-03{04} ± 5.55E-04 | 1.67E-01{07} ± 3.09E-05 | 5.15E-02{06} ± 9.06E-03
FON | 1.05E-02{09} ± 5.05E-04 | 1.85E-03{03} ± 1.87E-05 | 1.85E-03{03} ± 1.50E-05 | 1.82E-03{02} ± 1.30E-05 | 2.43E-03{07} ± 7.17E-05
KUR | 1.26E-03{01} ± 1.32E-03 | 1.97E-02{06} ± 6.65E-04 | 1.82E-02{04} ± 4.98E-04 | 2.49E-02{09} ± 6.70E-04 | 1.95E-02{05} ± 5.83E-04
ZDT1 | 1.13E-03{01} ± 3.42E-04 | 1.86E-03{03} ± 2.11E-05 | 1.82E-03{02} ± 1.53E-05 | 2.25E-03{05} ± 9.58E-06 | 2.03E-03{05} ± 3.94E-05
ZDT2 | 7.46E-04{01} ± 4.08E-04 | 1.92E-03{04} ± 1.76E-05 | 1.89E-03{02} ± 1.47E-05 | 1.96E-03{05} ± 1.75E-05 | 2.14E-03{06} ± 3.67E-05
ZDT3 | 2.69E-03{05} ± 1.75E-04 | 2.24E-03{03} ± 8.61E-05 | 2.14E-03{02} ± 7.86E-05 | 6.64E-03{09} ± 9.90E-05 | 4.64E-03{06} ± 8.46E-04
ZDT4 | 3.09E+00{08} ± 2.40E+00 | 4.44E+00{09} ± 2.02E+00 | 1.87E-03{02} ± 2.18E-05 | 2.27E-03{03} ± 1.41E-05 | 5.06E-02{06} ± 3.94E-02
ZDT6 | 2.39E-01{09} ± 7.65E-02 | 1.56E-03{04} ± 8.72E-05 | 1.46E-03{03} ± 7.92E-05 | 1.18E-03{02} ± 4.64E-07 | 1.80E-03{05} ± 1.84E-04
DTLZ1 | 2.13E+02{09} ± 4.14E+01 | 2.47E+01{08} ± 8.01E+00 | 1.16E-02{04} ± 2.26E-04 | 1.78E-02{05} ± 3.89E-03 | 5.83E-02{06} ± 1.99E-02
DTLZ2 | 3.66E-02{09} ± 1.84E-02 | 2.81E-02{06} ± 4.88E-04 | 2.84E-02{07} ± 7.22E-04 | 2.24E-02{01} ± 2.00E-04 | 2.56E-02{02} ± 4.51E-04
DTLZ3 | 5.56E+02{09} ± 5.00E+01 | 8.65E+01{08} ± 2.15E+01 | 2.83E-02{03} ± 4.97E-04 | 4.32E-02{05} ± 5.59E-03 | 1.18E-01{06} ± 3.28E-02
DTLZ4 | 6.60E-02{08} ± 5.77E-04 | 2.78E-02{03} ± 4.93E-03 | 2.91E-02{06} ± 4.32E-03 | 1.73E-02{01} ± 8.02E-04 | 6.78E-02{09} ± 1.24E-01
DTLZ5 | 2.39E+00{09} ± 1.37E-01 | 6.81E-04{03} ± 2.18E-05 | 6.76E-04{02} ± 2.39E-05 | 7.31E-03{08} ± 2.45E-04 | 9.66E-04{05} ± 3.72E-05
DTLZ6 | 9.85E+00{09} ± 2.66E-01 | 6.62E-04{03} ± 3.47E-05 | 6.52E-04{02} ± 3.46E-05 | 1.25E-02{08} ± 1.11E-05 | 7.62E-04{04} ± 4.02E-05
DTLZ7 | 7.73E-03{01} ± 1.02E-03 | 2.93E-02{05} ± 8.02E-04 | 3.02E-02{06} ± 1.21E-03 | 5.21E-02{09} ± 1.65E-04 | 2.72E-02{02} ± 7.43E-04

Part (b):
Problem | D2MOPSO | MMOPSO | NSGA-II | MOEA/D
SCH | 8.33E-03{04} ± 5.89E-04 | 8.00E-03{03} ± 6.29E-04 | 4.46E-01{08} ± 1.46E-01 | 4.84E-01{09} ± 2.03E-01
FON | 1.88E-03{06} ± 5.42E-05 | 1.86E-03{05} ± 1.37E-05 | 2.78E-03{08} ± 6.30E-05 | 1.77E-03{01} ± 3.65E-06
KUR | 1.95E-02{02} ± 1.29E-03 | 1.63E-02{03} ± 3.00E-04 | 2.16E-02{08} ± 8.40E-04 | 2.07E-02{07} ± 1.73E-04
ZDT1 | 3.42E-03{08} ± 7.26E-04 | 1.87E-03{04} ± 1.38E-05 | 2.33E-03{07} ± 6.14E-05 | 5.33E-03{09} ± 1.16E-03
ZDT2 | 2.26E-01{07} ± 2.98E-01 | 1.91E-03{03} ± 2.10E-05 | 2.39E-03{08} ± 7.71E-05 | 3.98E-03{09} ± 8.76E-04
ZDT3 | 6.06E-03{07} ± 1.78E-03 | 2.10E-03{01} ± 4.49E-05 | 2.60E-03{04} ± 8.07E-05 | 6.16E-03{08} ± 6.64E-04
ZDT4 | 1.66E+00{07} ± 1.33E+00 | 1.84E-03{01} ± 1.87E-05 | 2.48E-03{04} ± 2.57E-04 | 4.94E-02{05} ± 6.02E-02
ZDT6 | 3.75E-03{07} ± 2.16E-03 | 1.56E-03{04} ± 4.72E-05 | 2.57E-03{06} ± 1.93E-04 | 1.17E-03{01} ± 3.02E-06
DTLZ1 | 3.22E+00{07} ± 2.95E+00 | 1.01E-02{01} ± 2.02E-04 | 1.07E-02{02} ± 3.57E-04 | 1.11E-02{03} ± 2.50E-04
DTLZ2 | 2.63E-02{04} ± 3.91E-04 | 2.74E-02{05} ± 4.27E-04 | 2.81E-02{06} ± 5.64E-04 | 2.61E-02{03} ± 1.12E-04
DTLZ3 | 1.26E+01{07} ± 1.19E+01 | 2.75E-02{02} ± 5.08E-04 | 2.85E-02{04} ± 9.97E-04 | 2.68E-02{01} ± 3.27E-04
DTLZ4 | 3.01E-02{07} ± 2.56E-03 | 2.85E-02{04} ± 1.81E-03 | 2.87E-02{05} ± 2.05E-03 | 1.98E-02{02} ± 8.61E-04
DTLZ5 | 1.07E-03{06} ± 1.53E-04 | 6.61E-04{01} ± 2.27E-05 | 8.81E-04{04} ± 3.87E-05 | 2.64E-03{07} ± 9.92E-06
DTLZ6 | 4.08E-03{07} ± 8.75E-04 | 6.44E-04{01} ± 3.27E-05 | 9.35E-04{05} ± 5.84E-05 | 2.37E-03{06} ± 4.26E-06
DTLZ7 | 3.64E-02{07} ± 2.08E-03 | 2.86E-02{04} ± 9.63E-04 | 2.77E-02{03} ± 6.16E-04 | 3.64E-02{07} ± 2.08E-03

Table 9 Final rankings of each algorithm on IGD.

Algorithm | µr | δr | AR
MMOGWO | 5.9333 | 13.1289 | 8
OMOPSO | 4.6667 | 4.6222 | 3
SMPSO | 3.4667 | 2.6489 | 2
dMOPSO | 5.2667 | 8.1956 | 5
CMPSO | 5.3333 | 2.8889 | 6
D2MOPSO | 6.2000 | 2.4267 | 9
MMOPSO | 2.8000 | 2.1600 | 1
NSGA-II | 5.4667 | 3.7156 | 7
MOEA/D | 5.2000 | 8.9600 | 4

Table 10 Statistical results obtained by the multi-problem-based WSRT for MMOGWO vs. its competitors on IGD.

MMOGWO vs. | R+ | R- | p-value | Sig.
OMOPSO | 85 | 35 | 0.156 | =
SMPSO | 96 | 24 | 0.041 | -
dMOPSO | 92 | 28 | 0.069 | =
CMPSO | 86 | 34 | 0.140 | =
D2MOPSO | 92 | 28 | 0.069 | =
MMOPSO | 96 | 24 | 0.041 | -
NSGA-II | 92 | 28 | 0.069 | =
MOEA/D | 91 | 29 | 0.078 | =

4.2. Comparison between MMOGWO and several improved MOEAs

In this part, MMOGWO is compared with some state-of-the-art MOEAs (PAES, SPEA2, NSGA-II, IBEA (Zitzler, & Künzli, 2004), OMOPSO, AbYSS (Nebro, Luna, Alba, Dorronsoro, Durillo, & Beham, 2008), CellDE (Durillo, Nebro, Luna, & Alba, 2008), MOEA/D, SMPSO, MOCell (Nebro, Durillo, Luna, Dorronsoro, & Alba, 2009b), GDE3 (Kukkonen, & Lampinen, 2009) and eMOABC (Xiang, Zhou, & Liu, 2015)) on the ZDT function family (ZDT1-ZDT4, ZDT6), the DTLZ function family (DTLZ1-DTLZ7), and the CEC2009 competition unconstrained function family (UF1-UF10). In this experiment, only the comprehensive metric IGD is used to assess the performance of the algorithms. For all algorithms, the population size and external archive size are set to 20 and 100, respectively, and maxFEs is set to 50,000 for all test problems; maxIteration is therefore 50,000/20 = 2,500. MMOGWO is run independently 30 times for each test problem using our Matlab codes, and the Mean values of metric IGD are presented in Table 11. For the other algorithms, the results are those reported by Xiang, Zhou, & Liu, (2015). From Table 11, we can see that MMOGWO obtains the best results on 8 test problems (ZDT1, ZDT2, ZDT3, DTLZ2, DTLZ4, DTLZ7, UF1, and UF7). SPEA2, OMOPSO, and GDE3 each rank first on one test problem (DTLZ1, DTLZ6, and UF6, respectively); in addition, SPEA2 achieves the second-best results on 2 test problems (DTLZ2 and DTLZ7) and GDE3 on 3 test problems (UF4, UF5, and UF9). MOEA/D and SMPSO both obtain the best results in 2 cases, and the second-best results in 2 and 6 cases (UF2 and UF7; ZDT1, ZDT2, ZDT3, ZDT4, ZDT6, and DTLZ3), respectively. If we consider the combined number of best and second-best results, the best algorithm is eMOABC, since it achieves the best results in 7 cases and ranks second-best on 7 problems.

PAES, NSGA-II, IBEA, AbYSS, CellDE, and MOCell are the six worst algorithms, providing poorer results than the others. Also, although MMOGWO does not provide the largest combined number of best and second-best results, it provides the largest number of best results (8). Again, the results of the average ranking (AR) method are recorded in Table 12 and plotted in Fig. 16. In Table 12, the best results are bolded and underlined, and the second-best results are bolded only. Table 12 shows that eMOABC is the best algorithm, followed by SPEA2. Roughly, GDE3, SMPSO, NSGA-II, MOEA/D, and MMOGWO give competitive results regarding this metric, which can also be seen from Fig. 16. The remaining algorithms (MOCell, OMOPSO, AbYSS, CellDE, IBEA, and PAES) obtain poor results on this indicator. Furthermore, the statistical results of the multi-problem-based WSRT at a significance level of α = 0.05, presented in Table 13, illustrate that MMOGWO achieves results comparable to PAES, SPEA2, NSGA-II, IBEA, OMOPSO, AbYSS, CellDE, MOEA/D, SMPSO and MOCell, although GDE3 and eMOABC significantly outperform MMOGWO. However, MMOGWO employs fewer control parameters than its competitors, which is beneficial for practical applications. In summary, we can conclude that MMOGWO is competitive and can be a prime choice when dealing with practical problems.
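The R+ and R- columns of the WSRT tables are the Wilcoxon signed-rank sums over the per-problem differences between MMOGWO and each competitor. A self-contained sketch of that rank-sum computation follows; the p-value step is omitted (a library routine such as scipy.stats.wilcoxon would normally supply it), and the toy vectors are illustrative, not the paper's data:

```python
def wsrt_rank_sums(a, b):
    """Wilcoxon signed-rank sums for paired samples (one value per problem).
    Returns (R_plus, R_minus): the rank sums of the positive and negative
    differences d = a - b; ties in |d| receive average ranks."""
    diffs = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j + 2) / 2  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    r_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    r_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return r_plus, r_minus

rp, rm = wsrt_rank_sums([1, 2, 3, 10], [2, 1, 1, 2])
print(rp, rm)  # 8.5 1.5
```

A quick sanity check on the published tables: for n problems with no zero differences, R+ + R- must equal n(n+1)/2 (120 for the 15 problems of Table 10, 253 for the 22 problems of Table 13), which the reported rank sums satisfy.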

Fig. 16. Plot of Standard deviation (δr) against Average (µr) on all test problems.


Table 11 Mean values of metric IGD for all the algorithms (each cell: Mean{rank}).

Part (a):
Problem | MMOGWO | PAES | SPEA2 | NSGA-II | IBEA
ZDT1 | 8.011E-04{01} | 1.175E-02{12} | 3.921E-03{08} | 4.838E-03{11} | 4.109E-03{09}
ZDT2 | 7.410E-04{01} | 1.463E-02{12} | 3.895E-03{06} | 4.898E-03{09} | 9.413E-03{11}
ZDT3 | 2.654E-03{01} | 5.611E-02{13} | 4.848E-03{05} | 5.389E-03{07} | 2.974E-02{12}
ZDT4 | 9.188E-01{11} | 7.346E-03{07} | 4.075E-03{04} | 4.933E-03{06} | 6.263E-01{10}
ZDT6 | 2.329E-01{13} | 7.074E-03{12} | 3.172E-03{07} | 4.765E-03{10} | 5.167E-03{11}
DTLZ1 | 6.290E+01{13} | 5.860E-02{09} | 2.021E-02{01} | 2.612E-02{05} | 1.818E-01{11}
DTLZ2 | 1.810E-02{01} | 3.151E-01{13} | 5.420E-02{02} | 6.887E-02{09} | 1.222E-01{12}
DTLZ3 | 2.580E+02{13} | 1.919E-01{03} | 3.387E-01{05} | 2.937E-01{04} | 5.116E-01{07}
DTLZ4 | 3.536E-02{01} | 3.991E-01{13} | 1.379E-01{11} | 6.390E-02{05} | 2.107E-01{12}
DTLZ5 | 1.795E+00{13} | 6.835E-03{09} | 4.332E-03{07} | 5.420E-03{08} | 1.939E-02{12}
DTLZ6 | 7.834E+00{13} | 7.135E-03{06} | 1.254E-02{08} | 1.357E-02{09} | 5.754E-02{10}
DTLZ7 | 1.580E-02{01} | 8.878E-01{13} | 6.961E-02{02} | 7.647E-02{04} | 3.998E-01{12}
UF1 | 1.465E-02{01} | 3.647E-01{13} | 1.248E-01{09} | 1.038E-01{06} | 1.460E-01{11}
UF2 | 3.441E-02{03} | 1.792E-01{12} | 4.522E-02{08} | 3.858E-02{05} | 7.514E-01{13}
UF3 | 1.883E-01{05} | 3.450E-01{12} | 1.942E-01{06} | 1.581E-01{03} | 2.901E-01{11}
UF4 | 5.880E-02{08} | 2.181E-01{13} | 5.205E-02{04} | 5.123E-02{03} | 6.839E-02{11}
UF5 | 7.304E-01{09} | 5.296E-01{08} | 3.802E-01{04} | 3.801E-01{03} | 3.995E-01{05}
UF6 | 3.442E-01{07} | 7.262E-01{13} | 2.749E-01{05} | 2.580E-01{04} | 3.770E-01{08}
UF7 | 1.100E-02{01} | 5.057E-01{13} | 1.699E-01{08} | 1.854E-01{09} | 2.848E-01{10}
UF8 | 2.216E+00{13} | 4.133E-01{11} | 1.995E-01{03} | 2.213E-01{04} | 4.436E-01{12}
UF9 | 2.552E-01{05} | 2.717E-01{06} | 2.040E-01{04} | 3.159E-01{09} | 2.836E-01{08}
UF10 | 1.068E+01{13} | 5.101E-01{06} | 3.248E-01{03} | 3.855E-01{04} | 6.013E-01{07}

Part (b):
Problem | OMOPSO | AbYSS | CellDE | MOEA/D
ZDT1 | 3.711E-03{04} | 3.725E-03{05} | 4.831E-03{10} | 1.258E-02{13}
ZDT2 | 3.832E-03{04} | 3.826E-03{03} | 4.367E-03{08} | 9.135E-03{10}
ZDT3 | 4.358E-03{03} | 1.500E-02{10} | 1.020E-02{09} | 1.722E-02{11}
ZDT4 | 4.926E+00{13} | 4.412E-03{05} | 4.240E+00{12} | 1.433E-01{08}
ZDT6 | 3.019E-03{03} | 3.058E-03{05} | 3.436E-03{08} | 4.167E-03{09}
DTLZ1 | 1.184E+01{12} | 2.733E-02{06} | 1.605E-01{10} | 2.543E-02{04}
DTLZ2 | 6.886E-02{08} | 6.888E-02{10} | 6.611E-02{05} | 6.714E-02{07}
DTLZ3 | 1.152E+02{12} | 3.940E-01{06} | 8.518E+00{11} | 1.179E+00{09}
DTLZ4 | 6.487E-02{06} | 6.053E-02{03} | 7.711E-02{09} | 5.490E-02{02}
DTLZ5 | 4.134E-03{05} | 4.088E-03{04} | 8.561E-03{10} | 1.040E-02{11}
DTLZ6 | 3.899E-03{01} | 7.892E-02{11} | 4.542E-03{05} | 9.369E-03{07}
DTLZ7 | 8.687E-02{07} | 3.946E-01{11} | 1.234E-01{08} | 1.902E-01{09}
UF1 | 1.086E-01{07} | 1.340E-01{10} | 6.246E-02{04} | 3.114E-02{03}
UF2 | 4.057E-02{06} | 6.518E-02{10} | 3.854E-02{04} | 2.478E-02{02}
UF3 | 1.784E-01{04} | 2.800E-01{09} | 2.657E-01{08} | 7.855E-02{01}
UF4 | 6.437E-02{10} | 6.269E-02{09} | 5.516E-02{06} | 7.966E-02{12}
UF5 | 1.195E+00{12} | 5.071E-01{07} | 7.720E-01{11} | 7.612E-01{10}
UF6 | 5.341E-01{11} | 4.168E-01{09} | 3.165E-01{06} | 2.560E-01{03}
UF7 | 6.487E-02{07} | 3.258E-01{11} | 5.674E-02{05} | 1.528E-02{02}
UF8 | 3.667E-01{09} | 2.807E-01{07} | 4.054E-01{10} | 1.150E-01{01}
UF9 | 5.596E-01{12} | 3.812E-01{10} | 6.578E-01{13} | 1.576E-01{03}
UF10 | 2.201E+00{11} | 6.696E-01{08} | 2.589E+00{12} | 8.738E-01{09}

Part (c):
Problem | SMPSO | MOCell | GDE3 | eMOABC
ZDT1 | 3.672E-03{02} | 3.688E-03{03} | 3.777E-03{07} | 3.737E-03{06}
ZDT2 | 3.796E-03{02} | 3.796E-03{02} | 3.913E-03{07} | 3.850E-03{05}
ZDT3 | 4.285E-03{02} | 6.176E-03{08} | 4.369E-03{04} | 5.164E-03{06}
ZDT4 | 3.717E-03{02} | 3.843E-03{03} | 4.722E-01{09} | 3.710E-03{01}
ZDT6 | 3.003E-03{02} | 3.000E-03{01} | 3.124E-03{06} | 3.025E-03{04}
DTLZ1 | 2.828E-02{07} | 2.869E-02{08} | 2.339E-02{03} | 2.258E-02{02}
DTLZ2 | 7.178E-02{11} | 6.681E-02{06} | 6.288E-02{04} | 6.050E-02{03}
DTLZ3 | 1.155E-01{02} | 7.556E-01{08} | 2.259E+00{10} | 6.253E-02{01}
DTLZ4 | 6.805E-02{08} | 1.356E-01{10} | 6.571E-02{07} | 6.262E-02{04}
DTLZ5 | 4.009E-03{01} | 4.053E-03{03} | 4.199E-03{06} | 4.017E-03{02}
DTLZ6 | 3.937E-03{03} | 7.551E-01{12} | 4.153E-03{04} | 3.934E-03{02}
DTLZ7 | 8.520E-02{05} | 2.451E-01{10} | 7.477E-02{03} | 8.659E-02{06}
UF1 | 1.264E-01{08} | 1.649E-01{12} | 8.602E-02{05} | 1.480E-02{02}
UF2 | 4.668E-02{09} | 6.823E-02{11} | 4.306E-02{07} | 1.407E-02{01}
UF3 | 2.260E-01{07} | 2.832E-01{10} | 3.526E-01{13} | 1.075E-01{02}
UF4 | 5.498E-02{05} | 5.691E-02{07} | 4.709E-02{02} | 4.123E-02{01}
UF5 | 1.861E+00{13} | 4.596E-01{06} | 3.319E-01{02} | 1.625E-01{01}
UF6 | 6.853E-01{12} | 4.300E-01{10} | 1.567E-01{01} | 2.174E-01{02}
UF7 | 6.198E-02{06} | 3.455E-01{12} | 4.118E-02{04} | 1.872E-02{03}
UF8 | 2.264E-01{05} | 2.578E-01{06} | 3.323E-01{08} | 1.218E-01{02}
UF9 | 3.976E-01{11} | 2.872E-01{07} | 2.016E-01{02} | 6.641E-02{01}
UF10 | 2.929E-01{02} | 4.437E-01{05} | 1.552E+00{10} | 2.950E-01{01}

Table 12 Final rankings of each algorithm on IGD.

Algorithm | µr | δr | AR
MMOGWO | 6.6818 | 26.4897 | 7
PAES | 10.4091 | 9.1508 | 13
SPEA2 | 5.4545 | 6.3388 | 2
NSGA-II | 6.2273 | 6.3574 | 5
IBEA | 10.2273 | 4.0847 | 12
OMOPSO | 7.5909 | 12.5145 | 9
AbYSS | 7.6818 | 6.8533 | 10
CellDE | 8.3636 | 7.1405 | 11
MOEA/D | 6.6364 | 14.9587 | 6
SMPSO | 5.6818 | 13.6715 | 4
MOCell | 7.2727 | 11.1074 | 8
GDE3 | 5.6364 | 9.2314 | 3
eMOABC | 2.6364 | 2.9587 | 1

Table 13 Statistical results obtained by the multi-problem-based WSRT for MMOGWO vs. its competitors on IGD.

MMOGWO vs. | R+ | R- | p-value | Sig.
PAES | 150 | 103 | 0.445 | =
SPEA2 | 185 | 68 | 0.058 | =
NSGA-II | 185 | 68 | 0.058 | =
IBEA | 155 | 98 | 0.355 | =
OMOPSO | 134 | 119 | 0.808 | =
AbYSS | 158 | 95 | 0.306 | =
CellDE | 138 | 115 | 0.726 | =
MOEA/D | 187 | 66 | 0.050 | =
SMPSO | 149 | 104 | 0.465 | =
MOCell | 161 | 92 | 0.263 | =
GDE3 | 189 | 64 | 0.042 | -
eMOABC | 211 | 42 | 0.006 | -


4.3. Further comparison on benchmark functions in the CEC2009 competition

In this section, the results provided by MMOGWO are further compared with some well-known algorithms on the benchmark functions of the CEC2009 competition (UF1-UF10). We run MMOGWO using our Matlab codes. Specifically, the maximal number of solutions in the approximated set for computing IGD is 100 for bi-objective problems and 150 for tri-objective problems, and maxFEs is set to 300,000 for all test problems. MMOGWO is run independently 30 times for each test problem, and the Mean values of metric IGD are shown in Table 14. In this table, the results for the algorithms MOEA/D (Zhang, Liu, & Li, 2009), MTS (Tseng, & Chen, 2009), GDE3 (Kukkonen, & Lampinen, 2009), DMOEADD (Liu, Zou, Chen, & Zhi, 2009), LiuLiAlgorithm (Liu, & Li, 2009), DECMOSA-SQP (Zamuda, Brest, Bošković, & Žumer, 2009), AMGA (Tiwari, Fadel, Koch, & Deb, 2009), MOEP (Qu, & Suganthan, 2009), OMOEA-II (Gao, Zeng, Xiao, Zhang, & Shi, 2009), MOEADGM (Chen, Chen, & Zhang, 2009), OW-MOSaDE (Huang, Zhao, Mallipeddi, & Suganthan, 2009), NSGA-IILS (Sindhya, Sinha, Deb, & Miettinen, 2009) and ClusteringMOEA (Wang, Dang, Li, Han, & Wei, 2009) are taken directly from the final report on the CEC2009 MOEA competition (Zhang, & Suganthan, 2009), while the results for MOSaDE (Huang, Qin, Suganthan, & Tasgetiren, 2007) are provided by Huang, Zhao, Mallipeddi, & Suganthan, (2009). The data for A-MOABC/PD, A-MOABC/NS, and S-MOABC/NS are provided by Akay, (2013). The results of dMOABC (Zhong, Xiang, & Liu, 2014), NSGA-II (Deb, Pratap, Agarwal, & Meyarivan, 2002) and eMOABC (Xiang, Zhou, & Liu, 2015) are taken from Xiang, Zhou, & Liu, (2015). The results for NSABC are those obtained by Kishor, Singh, & Prakash, (2016), and those for KnEA (Zhang, Tian, & Jin, 2015) and MOEA/D-DRA (Khan, & Zhang, 2010) are obtained by running PlatEMO, a Matlab platform for evolutionary multi-objective optimization (Tian, Cheng, Zhang, & Jin, 2017).

It is apparent from Table 14 that MOEA/D is the best algorithm, obtaining the best or second-best values on 5 out of the 10 test problems, followed by the MTS, eMOABC, and NSABC algorithms, which achieve the best or second-best results on 4, 4 and 3 test problems, respectively. GDE3, DMOEADD, dMOABC, and MOEA/D-DRA provide competitive results on one test problem each; the rest of the algorithms achieve poor results compared to the others. Although MOEA/D, MTS, eMOABC, NSABC, GDE3, DMOEADD, dMOABC, and MOEA/D-DRA perform better than MMOGWO in terms of the number of best and second-best results, the results provided by MMOGWO are competitive with most of its rivals (see Table 14). Again, the results of the average ranking (AR) method are listed in Table 15 and depicted in Fig. 17. In Table 15, the best results are bolded and underlined, and the second-best results are bolded only. According to Table 15, eMOABC and MOEA/D are two promising algorithms, ranking first and second on this indicator, while MMOGWO is better than NSGA-II, MOEP, A-MOABC/PD, KnEA and MOSaDE. It can also be seen from Fig. 17 that the results obtained by MMOGWO are competitive; in particular, the performance of MMOGWO is similar or significantly superior to most of its competitors. Moreover, the statistical results of the multi-problem-based WSRT at a significance level of α = 0.05, listed in Table 16, show that MOEA/D, MTS, DMOEADD, LiuLiAlgorithm, dMOABC, and eMOABC significantly outperform MMOGWO in terms of the p-value.
However, Table 16 also shows that MMOGWO achieves results comparable to GDE3, DECMOSA-SQP, AMGA, MOEP, OMOEA-II, MOEADGM, OW-MOSaDE, NSGA-IILS, ClusteringMOEA, MOSaDE, A-MOABC/PD, A-MOABC/NS, S-MOABC/NS, NSGA-II, NSABC, KnEA and MOEA/D-DRA. On the whole, MMOGWO is a competitive algorithm.

Table 14 Mean values of metric IGD for MMOGWO and the well-known algorithms on the benchmark functions of the CEC2009 competition (UF1-UF10) (each cell: Mean{rank}).

Part (a):
Algorithm | UF1 | UF2 | UF3 | UF4 | UF5
MMOGWO | 0.010558{11} | 0.030042{19} | 0.23359{19} | 0.057439{16} | 0.62058{19}
MOEA/D | 0.00435{01} | 0.00679{05} | 0.00742{01} | 0.06385{20} | 0.18071{10}
MTS | 0.00646722{07} | 0.00615756{03} | 0.05310720{08} | 0.02356120{02} | 0.01489430{01}
GDE3 | 0.005342{04} | 0.011953{10} | 0.106395{15} | 0.026506{03} | 0.039281{02}
DMOEADD | 0.010384{10} | 0.006791{06} | 0.033370{04} | 0.042686{10} | 0.314543{12}
LiuLiAlgorithm | 0.007850{08} | 0.012300{11} | 0.014975{03} | 0.043501{11} | 0.161867{07}
DECMOSA-SQP | 0.0770281{19} | 0.0283427{18} | 0.0935006{12} | 0.0339266{04} | 0.167139{08}
AMGA | 0.035886{15} | 0.016236{14} | 0.069980{11} | 0.040621{08} | 0.094057{04}
MOEP | 0.0588{18} | 0.0516{22} | 0.1910{18} | 0.0624{19} | 0.7608{21}
OMOEA-II | 0.08564624{20} | 0.03057246{20} | 0.27141506{21} | 0.04624691{12} | 0.16920102{09}
MOEADGM | 0.0062{06} | 0.0064{04} | 0.0429{05} | 0.0476{13} | 1.7919{23}
OW-MOSaDE | 0.0122{13} | 0.0081{08} | 0.1030{13} | 0.0513{15} | 0.4303{16}
NSGA-IILS | 0.01153{12} | 0.01237{12} | 0.10603{14} | 0.0584{17} | 0.5657{18}
ClusteringMOEA | 0.0299{14} | 0.0228{16} | 0.0549{09} | 0.0585{18} | 0.2473{11}
MOSaDE | 0.0983{21} | 0.0607{23} | 0.3248{23} | 0.0977{23} | 0.6963{20}
A-MOABC/PD | 0.038333{16} | 0.024845{17} | 0.283436{22} | 0.075264{22} | 1.258446{22}
A-MOABC/NS | 0.052928{17} | 0.019434{15} | 0.173872{17} | 0.041493{09} | 0.416111{15}
S-MOABC/NS | 0.25373{23} | 0.008963{09} | 0.267995{20} | 0.038286{07} | 0.396417{14}
dMOABC | 0.005897{05} | 0.005714{02} | 0.05256{07} | 0.03531{06} | 0.1046{05}
NSGA-II | 0.1038{22} | 0.03858{21} | 0.1581{16} | 0.05123{14} | 0.0801{03}
eMOABC | 0.00494{02} | 0.004324{01} | 0.0655{10} | 0.03424{05} | 0.1052{06}
NSABC | 0.0085{09} | 0.0070{07} | 0.0482{06} | 0.0220{01} | 1.8896{24}
KnEA | 0.50185{24} | 0.40929{24} | 0.32930{24} | 0.24788{24} | 0.34102{13}
MOEA/D-DRA | 0.0052222{03} | 0.012387{13} | 0.0094327{02} | 0.070843{21} | 0.44803{17}

Part (b):
Algorithm | UF6 | UF7 | UF8 | UF9 | UF10
MMOGWO | 0.24339{14} | 0.0080658{05} | 2.7138{24} | 0.072412{05} | 11.9902{24}
MOEA/D | 0.00587{01} | 0.00444{01} | 0.05840{02} | 0.07896{06} | 0.47415{12}
MTS | 0.05917810{02} | 0.04079490{16} | 0.11251700{08} | 0.11442300{11} | 0.15306500{02}
GDE3 | 0.250913{15} | 0.025228{13} | 0.248556{19} | 0.082482{07} | 0.433261{10}
DMOEADD | 0.066732{03} | 0.010324{06} | 0.068419{03} | 0.048965{02} | 0.322118{05}
LiuLiAlgorithm | 0.175553{10} | 0.007301{03} | 0.082353{05} | 0.093915{08} | 0.446914{11}
DECMOSA-SQP | 0.126042{07} | 0.024163{12} | 0.215834{15} | 0.14111{12} | 0.369857{07}
AMGA | 0.129425{08} | 0.057076{19} | 0.171251{11} | 0.188610{14} | 0.324186{06}
MOEP | 0.3606{21} | 0.0408{17} | 0.6512{23} | 0.2744{19} | 2.4987{22}
OMOEA-II | 0.07338196{04} | 0.03354878{15} | 0.19200591{14} | 0.23179599{17} | 0.62754414{18}
MOEADGM | 0.5563{24} | 0.0076{04} | 0.2446{18} | 0.1878{13} | 0.5646{17}
OW-MOSaDE | 0.1918{12} | 0.0585{20} | 0.0945{07} | 0.0983{09} | 0.7430{19}
NSGA-IILS | 0.31032{17} | 0.02132{10} | 0.0863{06} | 0.0719{04} | 0.84468{20}
ClusteringMOEA | 0.0871{05} | 0.0223{11} | 0.2383{17} | 0.2934{20} | 0.4111{09}
MOSaDE | 0.3640{22} | 0.1916{23} | 0.4019{21} | 0.3984{22} | 2.9313{23}
A-MOABC/PD | 0.347716{20} | 0.149848{21} | 0.278321{20} | 0.356294{23} | 2.275790{21}
A-MOABC/NS | 0.224721{13} | 0.033151{14} | 0.177667{12} | 0.251694{18} | 0.536442{15}
S-MOABC/NS | 0.334695{19} | 0.016391{09} | 0.144444{10} | 0.218798{15} | 0.502193{14}
dMOABC | 0.1141{06} | 0.01043{07} | 0.1378{09} | 0.1014{10} | 0.225{03}
NSGA-II | 0.258{16} | 0.1854{22} | 0.2213{16} | 0.3159{22} | 0.3855{08}
eMOABC | 0.1436{09} | 0.006161{02} | 0.07508{04} | 0.03694{01} | 0.2486{04}
NSABC | 0.1836{11} | 0.0146{08} | 0.0502{01} | 0.0608{03} | 0.1167{01}
KnEA | 0.40179{23} | 0.34734{24} | 0.40638{22} | 0.29381{21} | 0.50125{13}
MOEA/D-DRA | 0.31534{18} | 0.045232{18} | 0.19133{13} | 0.21949{16} | 0.54176{16}

Table 15 Final rankings of each algorithm on IGD.

Algorithm | µr | δr | AR
MMOGWO | 15.6 | 42.44 | 19
MOEA/D | 5.9 | 36.49 | 2
MTS | 6.0 | 21.60 | 3
GDE3 | 8.8 | 20.36 | 8
DMOEADD | 6.1 | 10.69 | 5
LiuLiAlgorithm | 7.7 | 9.01 | 7
DECMOSA-SQP | 11.4 | 22.04 | 10
AMGA | 11.0 | 19.00 | 9
MOEP | 20.0 | 3.80 | 21
OMOEA-II | 15.0 | 26.60 | 18
MOEADGM | 12.7 | 53.61 | 11
OW-MOSaDE | 13.2 | 17.56 | 14
NSGA-IILS | 13.0 | 24.80 | 12
ClusteringMOEA | 13.0 | 20.40 | 12
MOSaDE | 22.1 | 1.09 | 24
A-MOABC/PD | 20.4 | 4.64 | 22
A-MOABC/NS | 14.5 | 6.45 | 17
S-MOABC/NS | 14.0 | 25.80 | 16
dMOABC | 6.0 | 5.40 | 3
NSGA-II | 16.0 | 37.00 | 20
eMOABC | 4.4 | 9.04 | 1
NSABC | 7.1 | 43.49 | 6
KnEA | 21.2 | 17.76 | 23
MOEA/D-DRA | 13.7 | 36.41 | 15

Table 16 Statistical results obtained by the multi-problem-based WSRT for MMOGWO vs. its competitors on IGD.

MMOGWO vs. | R+ | R- | p-value | Sig.
MOEA/D | 48 | 7 | 0.037 | -
MTS | 47 | 8 | 0.047 | -
GDE3 | 46 | 9 | 0.059 | =
DMOEADD | 53 | 2 | 0.009 | -
LiuLiAlgorithm | 50 | 5 | 0.022 | -
DECMOSA-SQP | 44 | 11 | 0.093 | =
AMGA | 42 | 13 | 0.139 | =
MOEP | 23 | 32 | 0.646 | =
OMOEA-II | 36 | 19 | 0.386 | =
MOEADGM | 35 | 20 | 0.445 | =
OW-MOSaDE | 45 | 10 | 0.074 | =
NSGA-IILS | 39 | 16 | 0.241 | =
ClusteringMOEA | 40 | 15 | 0.203 | =
MOSaDE | 19 | 36 | 0.386 | =
A-MOABC/PD | 20 | 35 | 0.445 | =
A-MOABC/NS | 39 | 16 | 0.241 | =
S-MOABC/NS | 31 | 24 | 0.721 | =
dMOABC | 49 | 6 | 0.028 | -
NSGA-II | 32 | 23 | 0.695 | =
eMOABC | 55 | 0 | 0.005 | -
NSABC | 45 | 10 | 0.074 | =
KnEA | 31 | 24 | 0.770 | =
MOEA/D-DRA | 17 | 38 | 0.322 | =

Fig. 17. Plot of Standard deviation (δr) against Average (µr) on all test problems.

5. Advantages of the multiple search strategies

In this section, we investigate the advantages of the multiple search strategies introduced in the MMOGWO algorithm through a simulation experiment. As mentioned above, MMOGWO selects some non-dominated solutions from the archive, defined as the intermediate solutions within the most crowded segments, and uses them to generate new solutions via an adaptive chaotic mutation strategy. Next, the update and maintenance of the external archive described in Section 2.2 is invoked to select the new non-dominated solutions, after which the elitism strategy is adopted to select the next population. Moreover, if the position of an individual moves outside the search space, the boundary mutation strategy is used to handle boundary constraint violations. These procedures are repeated until the termination criteria are satisfied.

From the flowchart of MMOGWO, we can see that the multiple search strategies are executed in each cycle. They ensure more thorough exploitation and guide the whole grey wolf pack toward the sparsest region. Their advantages are that MMOGWO can: (i) exploit more potential non-dominated solutions within less crowded zones, and (ii) improve the diversity of the solutions in the approximated set. Hence, owing to these strategies, MMOGWO not only converges well but also achieves an appropriate spread over the Pareto front in the objective space. Now, we carry out a simulation experiment to test this analysis.
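Before turning to the experiment, the two mutation operators discussed above can be illustrated in miniature. This is only a sketch: the logistic map as the chaos source and resampling-based boundary repair are assumptions made for illustration, and the paper's exact operators (defined in its earlier sections) may differ:

```python
import random

def logistic_chaos(x):
    """One step of the logistic map, a common source of chaotic sequences."""
    return 4.0 * x * (1.0 - x)

def chaotic_mutation(solution, lower, upper, chaos, scale=0.1):
    """Perturb a solution with a chaos-driven step (illustrative only)."""
    mutated = []
    for v, lo, hi in zip(solution, lower, upper):
        chaos = logistic_chaos(chaos)
        # map the chaotic value in (0, 1) to a signed step within the range
        mutated.append(v + scale * (hi - lo) * (2.0 * chaos - 1.0))
    return mutated, chaos

def boundary_mutation(solution, lower, upper):
    """Repair out-of-range components by resampling them inside the bounds
    (a stand-in for the paper's boundary mutation strategy)."""
    return [v if lo <= v <= hi else random.uniform(lo, hi)
            for v, lo, hi in zip(solution, lower, upper)]

lo, hi = [0.0] * 3, [1.0] * 3
cand, _ = chaotic_mutation([0.5, 0.9, 0.1], lo, hi, chaos=0.37)
repaired = boundary_mutation(cand, lo, hi)
print(all(l <= v <= h for v, l, h in zip(repaired, lo, hi)))
```

The chaotic step supplies exploitation pressure around archive members, while the boundary repair (unlike simple clamping to the bounds) avoids piling repaired individuals onto the box edges.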
Four variants of MMOGWO are included for comparison: (1) the original MOGWO, which uses none of the strategies; (2) acMOGWO, which uses the adaptive chaotic mutation strategy; (3) bMOGWO, which replaces the traditional management of boundary constraints used in MOGWO with the boundary mutation strategy; and (4) eMOGWO, which introduces the elitism strategy to select the next population. For all algorithms, the population size and external archive size are set to 100, and maxFEs is set to 30,000 for bi-objective benchmark functions and 50,000 for tri-objective benchmark functions; maxIteration is therefore 30,000/100 = 300 and 50,000/100 = 500, respectively. All algorithms are run independently 30 times for each test problem using our Matlab codes, and the simulation results of the five algorithms on the convergence, diversity, GD and IGD metrics under the same algorithm configurations are presented in Tables 17-20.

Table 17 Mean values and Std values of MMOGWO and its variants on the convergence metric (each cell: Mean (Std) {rank}).

Problem | MMOGWO | MOGWO | acMOGWO | bMOGWO | eMOGWO
SCH | 8.022E-03 (6.82E-04) {04} | 7.921E-03 (6.85E-04) {02} | 7.983E-03 (6.34E-04) {03} | 7.897E-03 (6.79E-04) {01} | 8.140E-03 (7.66E-04) {05}
FON | 1.035E-02 (8.14E-04) {01} | 1.043E-02 (9.15E-04) {03} | 1.061E-02 (1.25E-03) {04} | 1.039E-02 (6.93E-04) {02} | 1.064E-02 (8.77E-04) {05}
POL | 1.591E+01 (6.21E-01) {04} | 1.467E+01 (1.38E+00) {01} | 1.592E+01 (5.88E-01) {05} | 1.475E+01 (1.52E+00) {03} | 1.470E+01 (1.58E+00) {02}
KUR | 3.095E-02 (2.36E-02) {03} | 3.214E-02 (8.09E-03) {04} | 3.016E-02 (2.31E-02) {02} | 3.537E-02 (1.15E-02) {05} | 2.935E-02 (1.17E-02) {01}
ZDT1 | 1.071E-03 (4.21E-04) {04} | 9.537E-04 (5.14E-04) {03} | 1.288E-03 (4.38E-04) {05} | 8.649E-04 (1.45E-04) {01} | 9.506E-04 (3.06E-04) {02}
ZDT2 | 7.769E-04 (7.76E-05) {03} | 8.046E-04 (8.70E-05) {04} | 7.610E-04 (8.32E-05) {01} | 7.633E-04 (8.01E-05) {02} | 8.370E-04 (1.18E-04) {05}
ZDT3 | 3.553E-03 (7.74E-04) {02} | 4.512E-03 (6.33E-04) {03} | 4.789E-03 (5.52E-04) {04} | 4.798E-03 (7.21E-04) {05} | 2.878E-03 (4.21E-04) {01}
ZDT4 | 4.214E+00 (3.46E+00) {02} | 7.594E+00 (9.05E+00) {05} | 3.367E+00 (2.39E+00) {01} | 6.334E+00 (7.36E+00) {04} | 6.140E+00 (6.16E+00) {03}
ZDT6 | 1.896E-01 (1.14E-02) {01} | 2.370E-01 (6.00E-02) {04} | 1.949E-01 (3.23E-02) {02} | 2.313E-01 (5.70E-02) {03} | 2.719E-01 (1.34E-01) {05}
DTLZ1 | 9.521E+01 (2.54E+01) {04} | 3.394E+01 (4.97E+00) {03} | 2.466E+01 (9.08E+00) {01} | 1.110E+02 (1.36E+01) {05} | 3.005E+01 (5.53E+00) {02}
DTLZ2 | 2.840E-02 (1.07E-02) {04} | 2.105E-02 (9.98E-03) {03} | 2.983E-02 (1.26E-02) {05} | 1.988E-02 (1.17E-02) {01} | 2.057E-02 (1.29E-02) {02}
DTLZ3 | 4.594E+02 (1.00E+02) {04} | 4.230E+02 (3.33E+01) {03} | 2.960E+02 (8.06E+01) {01} | 5.935E+02 (4.03E+01) {05} | 4.121E+02 (2.32E+01) {02}
DTLZ4 | 4.782E-02 (3.38E-02) {05} | 1.732E-02 (9.37E-03) {01} | 2.534E-02 (1.76E-02) {03} | 4.275E-02 (1.01E-02) {04} | 2.048E-02 (1.88E-02) {02}
DTLZ5 | 1.545E+00 (1.61E-01) {02} | 1.935E+00 (9.84E-02) {04} | 1.530E+00 (1.62E-01) {01} | 1.868E+00 (9.82E-02) {03} | 1.942E+00 (8.53E-02) {05}
DTLZ6 | 6.439E+00 (4.69E-01) {03} | 5.779E+00 (4.35E-01) {01} | 6.446E+00 (5.24E-01) {04} | 6.011E+00 (5.48E-01) {02} | 6.447E+00 (4.39E-01) {05}
DTLZ7 | 3.050E-02 (2.30E-02) {04} | 1.952E-02 (1.45E-02) {02} | 3.263E-02 (2.07E-02) {05} | 2.150E-02 (1.37E-02) {03} | 1.310E-02 (1.02E-02) {01}
UF1 | 2.689E-02 (2.22E-02) {01} | 4.834E-02 (5.31E-02) {03} | 3.310E-02 (2.53E-02) {02} | 7.002E-02 (7.82E-02) {05} | 5.525E-02 (6.45E-02) {04}
UF2 | 4.192E-02 (1.19E-02) {01} | 6.770E-02 (1.98E-02) {05} | 4.814E-02 (1.22E-02) {02} | 6.598E-02 (1.86E-02) {04} | 6.543E-02 (1.96E-02) {03}
UF3 | 2.259E-01 (7.29E-02) {02} | 3.186E-01 (1.19E-01) {05} | 2.736E-01 (6.06E-02) {03} | 3.160E-01 (1.19E-01) {04} | 2.224E-01 (9.38E-02) {01}
UF4 | 5.636E-02 (3.88E-03) {01} | 6.035E-02 (4.59E-03) {05} | 6.010E-02 (7.48E-03) {03} | 6.023E-02 (4.86E-03) {04} | 5.647E-02 (2.71E-03) {02}
UF5 | 1.255E+00 (5.14E-01) {02} | 1.437E+00 (7.51E-01) {05} | 1.132E+00 (5.43E-01) {01} | 1.371E+00 (6.09E-01) {04} | 1.268E+00 (6.07E-01) {03}
UF6 | 4.091E-01 (1.47E-02) {02} | 4.019E-01 (6.07E-02) {01} | 4.147E-01 (2.68E-02) {04} | 4.244E-01 (1.86E-02) {05} | 4.127E-01 (8.23E-02) {03}
UF7 | 2.133E-02 (8.95E-03) {01} | 2.804E-02 (1.93E-02) {04} | 2.267E-02 (1.03E-02) {02} | 3.114E-02 (3.40E-02) {05} | 2.311E-02 (1.13E-02) {03}
UF8 | 1.784E+00 (5.63E-01) {05} | 1.086E+00 (7.88E-01) {02} | 1.605E+00 (4.97E-01) {04} | 9.968E-01 (7.17E-01) {01} | 1.275E+00 (1.07E+00) {03}
UF9 | 1.797E-01 (1.03E-01) {01} | 3.016E-01 (2.87E-01) {04} | 2.074E-01 (1.02E-01) {03} | 3.420E-01 (3.30E-01) {05} | 1.915E-01 (1.26E-01) {02}
UF10 | 4.095E+00 (2.89E+00) {04} | 3.852E+00 (1.06E+00) {03} | 4.829E+00 (2.20E+00) {05} | 3.847E+00 (1.64E+00) {02} | 3.573E+00 (1.16E+00) {01}
SRN | 2.120E+00 (5.16E-01) {04} | 1.287E+00 (8.98E-01) {01} | 2.149E+00 (6.40E-01) {05} | 1.537E+00 (1.24E+00) {03} | 1.529E+00 (1.10E+00) {02}
TNK | 3.531E-03 (7.92E-04) {01} | 4.726E-03 (1.12E-03) {05} | 3.803E-03 (9.25E-04) {02} | 4.662E-03 (1.39E-03) {04} | 4.501E-03 (1.27E-03) {03}

Table 18 Mean values and Std values of MMOGWO and its variants on the diversity metric (each cell: Mean (Std) {rank}).

Problem | MMOGWO | MOGWO | acMOGWO | bMOGWO | eMOGWO
SCH | 9.603E-01 (1.12E-01) {01} | 1.030E+00 (1.53E-01) {04} | 9.834E-01 (1.36E-01) {02} | 1.046E+00 (1.63E-01) {05} | 9.993E-01 (1.46E-01) {03}
FON | 9.098E-01 (8.36E-02) {03} | 8.918E-01 (9.49E-02) {01} | 9.347E-01 (1.04E-01) {05} | 9.083E-01 (1.16E-01) {02} | 9.115E-01 (1.21E-01) {04}
POL | 9.778E-01 (1.06E-02) {03} | 9.757E-01 (1.79E-02) {02} | 9.853E-01 (1.48E-02) {05} | 9.695E-01 (1.50E-02) {01} | 9.835E-01 (1.78E-02) {04}
KUR | 1.045E+00 (8.01E-02) {05} | 7.494E-01 (8.58E-02) {02} | 1.026E+00 (7.85E-02) {04} | 7.277E-01 (9.14E-02) {01} | 7.801E-01 (1.32E-01) {03}
ZDT1 | 1.060E+00 (1.13E-01) {02} | 1.097E+00 (1.29E-01) {04} | 1.090E+00 (1.14E-01) {03} | 1.120E+00 (1.67E-01) {05} | 1.006E+00 (1.47E-01) {01}
ZDT2 | 9.960E-01 (1.07E-01) {01} | 1.019E+00 (1.69E-01) {02} | 1.021E+00 (1.61E-01) {03} | 1.052E+00 (1.64E-01) {05} | 1.032E+00 (1.02E-01) {04}
ZDT3 | 8.328E-01 (7.85E-02) {01} | 9.842E-01 (1.31E-01) {05} | 9.595E-01 (9.09E-02) {03} | 9.569E-01 (1.37E-02) {02} | 9.623E-01 (1.38E-01) {04}
ZDT4 | 1.009E+00 (1.07E-02) {02} | 9.931E-01 (6.12E-02) {01} | 1.056E+00 (5.88E-02) {05} | 1.042E+00 (1.41E-01) {04} | 1.031E+00 (1.05E+01) {03}
ZDT6 | 9.646E-01 (1.35E-01) {01} | 1.094E+00 (1.51E-01) {04} | 9.736E-01 (1.41E-01) {02} | 1.096E+00 (1.27E-01) {05} | 1.066E+00 (1.54E-01) {03}
DTLZ1 | 1.013E+00 (1.73E-01) {05} | 7.270E-01 (5.05E-02) {01} | 8.870E-01 (1.48E-01) {04} | 8.637E-01 (6.66E-02) {03} | 7.383E-01 (6.58E-02) {02}
DTLZ2 | 8.065E-01 (4.20E-02) {01} | 8.674E-01 (3.28E-02) {04} | 8.140E-01 (4.21E-02) {02} | 8.752E-01 (3.25E-02) {05} | 8.427E-01 (3.82E-02) {03}
DTLZ3 | 9.984E-01 (1.55E-01) {05} | 6.685E-01 (5.28E-02) {01} | 9.559E-01 (1.34E-01) {04} | 7.137E-01 (6.31E-02) {03} | 6.874E-01 (7.05E-02) {02}
DTLZ4 | 5.459E-01 (5.38E-02) {01} | 5.885E-01 (8.35E-02) {05} | 5.794E-01 (1.03E-01) {03} | 5.740E-01 (7.83E-02) {02} | 5.868E-01 (1.04E-01) {04}
DTLZ5 | 8.332E-01 (6.30E-02) {04} | 7.857E-01 (2.43E-02) {02} | 8.337E-01 (7.43E-02) {05} | 7.975E-01 (2.90E-02) {03} | 7.794E-01 (2.43E-02) {01}
DTLZ6 | 8.785E-01 (8.49E-02) {04} | 7.772E-01 (7.77E-02) {03} | 9.387E-01 (7.94E-02) {05} | 7.314E-01 (7.07E-02) {02} | 7.016E-01 (5.11E-02) {01}
DTLZ7 | 9.367E-01 (6.67E-02) {05} | 7.815E-01 (1.42E-01) {02} | 9.344E-01 (7.15E-02) {04} | 7.806E-01 (1.35E-01) {01} | 8.369E-01 (1.35E-01) {03}
UF1 | 1.013E+00 (8.38E-02) {04} | 1.009E+00 (2.01E-01) {03} | 1.027E+00 (1.64E-01) {05} | 9.775E-01 (1.99E-01) {01} | 9.775E-01 (1.68E-01) {01}
UF2 | 9.952E-01 (9.82E-02) {05} | 8.473E-01 (1.01E-01) {01} | 9.929E-01 (8.77E-02) {04} | 8.766E-01 (9.41E-02) {02} | 8.819E-01 (9.99E-02) {03}
UF3 | 1.096E+00 (2.88E-01) {01} | 1.164E+00 (1.75E-01) {04} | 1.314E+00 (2.35E-01) {05} | 1.096E+00 (2.29E-01) {01} | 1.102E+00 (2.41E-01) {03}
UF4 | 9.619E-01 (1.39E-01) {02} | 1.192E+00 (1.79E-01) {04} | 1.227E+00 (1.70E-01) {05} | 8.141E-01 (1.10E-01) {01} | 1.178E+00 (2.01E-01) {03}
UF5 | 1.004E+00 (9.18E-02) {02} | 9.864E-01 (1.82E-01) {01} | 1.006E+00 (1.59E-01) {03} | 1.067E+00 (1.71E-01) {05} | 1.028E+00 (1.81E-01) {04}
UF6 | 9.782E-01 (5.59E-01) {01} | 1.073E+00 (1.40E-01) {04} | 9.864E-01 (3.63E-01) {02} | 1.054E+00 (1.15E-01) {03} | 1.111E+00 (1.54E-01) {05}
UF7 | 1.141E+00 (1.79E-01) {04} | 8.301E-01 (1.18E-01) {02} | 1.169E+00 (1.45E-01) {05} | 8.179E-01 (9.95E-02) {01} | 8.592E-01 (1.35E-01) {03}
UF8 | 8.526E-01 (3.26E-02) {02} | 8.707E-01 (6.44E-02) {04} | 8.387E-01 (3.86E-02) {01} | 8.840E-01 (5.66E-02) {05} | 8.624E-01 (4.44E-02) {03}
UF9 | 9.097E-01 (1.10E-01) {04} | 8.389E-01 (6.75E-02) {02} | 9.490E-01 (9.04E-02) {05} | 8.459E-01 (8.15E-02) {03} | 8.296E-01 (5.21E-02) {01}
UF10 | 9.073E-01 (1.10E-01) {05} | 7.331E-01 (1.05E-01) {03} | 8.907E-01 (2.99E-02) {04} | 7.079E-01 (9.67E-02) {01} | 7.328E-01 (9.87E-02) {02}
SRN | 9.071E-01 (5.37E-02) {01} | 1.018E+00 (1.43E-01) {05} | 9.425E-01 (9.23E-01) {02} | 1.012E+00 (1.61E-01) {04} | 9.940E-01 (1.74E-01) {03}
TNK | 7.684E-01 (8.75E-02) {05} | 7.186E-01 (1.16E-01) {02} | 7.501E-01 (8.74E-02) {04} | 7.063E-01 (8.58E-02) {01} | 7.249E-01 (1.48E-01) {03}

First, we describe the results in terms of the γ metric. As observed from Table 17, MMOGWO and eMOGWO are the most competitive algorithms, each achieving the best or second-best values on 14 of the 28 test problems, verifying that the elitism strategy adopted in MMOGWO can effectively improve its performance. acMOGWO follows, providing the best or second-best values on 11 of the 28 test problems, implying the positive effect of introducing an adaptive chaotic mutation strategy into our MOGWO framework. MOGWO and bMOGWO perform similarly, achieving the best or second-best values on only 8 test problems. Based on the above analysis, it can be concluded that the elitism strategy and the adaptive chaotic mutation strategy employed in MMOGWO indeed improve the convergence of the algorithm. Next, we turn to the Δ metric. Regarding this metric (Table 18), MMOGWO, MOGWO, and bMOGWO are the three best algorithms, each obtaining the best or second-best values on 14 of the 28 test problems, followed by eMOGWO, which obtains the best or second-best values in 8 cases. Overall, acMOGWO obtains worse values than the other algorithms, giving competitive results on only 6 test problems. Thus, the results verify that the boundary mutation strategy has a positive effect on diversity.
Table 19 Mean values and Std values of MMOGWO and its variants on metric GD.
Problems SCH FON POL KUR ZDT1 ZDT2 ZDT3 ZDT4 ZDT6 DTLZ1 DTLZ2 DTLZ3 DTLZ4 DTLZ5 DTLZ6 DTLZ7 UF1 UF2 UF3 UF4 UF5 UF6 UF7 UF8 UF9 UF10 SRN TNK

Algorithms MMOGWO 9.347E-04 (6.33E-05) {04} 1.199E-03 (7.89E-05) {01} 1.718E+00 (6.08E-02) {05} 6.628E-03 (6.03E-03) {05} 1.931E-04 (1.24E-04) {04} 9.198E-05 (6.10E-06) {03} 6.538E-04 (1.35E-04) {05} 5.384E-01 (4.05E-01) {02} 2.158E-02 (3.02E-03) {02} 1.452E+01 (3.74E+00) {05} 3.047E-03 (1.08E-03) {04} 6.279E+01 (1.30E+01) {02} 6.316E-03 (5.40E-03) {05} 1.599E-01 (1.57E-02) {02} 6.722E-01 (4.17E-02) {05} 5.800E-03 (4.50E-03) {04} 3.516E-03 (2.69E-03) {01} 5.903E-03 (1.89E-03) {01} 3.059E-02 (6.59E-03) {01} 5.999E-03 (4.95E-04) {01} 2.016E-01 (9.30E-02) {02} 7.905E-02 (1.34E-02) {01} 2.752E-03 (1.25E-03) {01} 1.837E-01 (5.73E-02) {05} 2.138E-02 (1.37E-02) {02} 8.515E-02 (2.89E-01) {01} 5.967E-01 (1.18E-01) {05} 5.787E-04 (1.55E-04) {01}

MOGWO 9.393E-04 (1.17E-04) {05} 1.210E-03 (9.30E-05) {03} 1.581E+00 (1.49E-01) {03} 4.412E-03 (1.33E-03) {02} 1.641E-04 (1.32E-04) {03} 9.373E-05 (7.66E-06) {04} 5.788E-04 (8.39E-05) {01} 2.300E+00 (4.11E+00) {05} 3.532E-02 (1.94E-02) {04} 7.589E+00 (1.57E+00) {04} 2.248E-03 (1.08E-03) {03} 6.611E+01 (6.58E+00) {04} 1.911E-03 (1.38E-03) {01} 1.945E-01 (9.80E-03) {04} 5.899E-01 (4.20E-02) {01} 3.137E-03 (2.46E-03) {02} 1.042E-02 (1.25E-02) {03} 9.854E-03 (3.08E-03) {05} 5.568E-02 (2.18E-02) {05} 7.009E-03 (1.15E-03) {04} 4.542E-01 (2.84E-01) {05} 1.011E-01 (4.67E-02) {04} 3.694E-03 (2.82E-03) {04} 1.112E-01 (7.91E-02) {02} 3.286E-02 (3.10E-02) {04} 4.174E-01 (1.20E-01) {03} 4.049E-01 (1.79E-01) {01} 8.396E-04 (4.43E-04) {05}

acMOGWO 9.328E-04 (7.32E-05) {02} 1.224E-03 (1.19E-04) {04} 1.713E+00 (5.75E-02) {04} 6.291E-03 (5.48E-03) {04} 2.767E-04 (1.47E-04) {05} 8.979E-05 (6.12E-06) {01} 6.400E-04 (7.93E-05) {04} 4.491E-01 (6.06E-01) {01} 2.227E-02 (6.34E-03) {03} 4.350E+00 (1.84E+00) {02} 3.167E-03 (1.31E-03) {05} 4.193E+01 (8.09E+00) {01} 2.739E-03 (2.03E-03) {03} 1.588E-01 (1.55E-02) {01} 6.576E-01 (4.92E-02) {03} 6.218E-03 (3.77E-03) {05} 4.417E-03 (3.05E-03) {02} 6.998E-03 (1.96E-03) {02} 3.312E-02 (4.54E-03) {03} 6.418E-03 (9.19E-04) {03} 1.818E-01 (9.51E-02) {01} 8.236E-02 (3.17E-02) {02} 2.797E-03 (1.36E-03) {02} 1.668E-01 (5.00E-02) {04} 2.543E-02 (1.38E-02) {03} 4.950E-01 (2.21E-01) {05} 5.945E-01 (1.18E-01) {04} 6.407E-04 (1.86E-04) {02}

bMOGWO 9.177E-04 (6.81E-05) {01} 1.205E-03 (7.25E-05) {02} 7.857E-01 (4.22E-02) {01} 4.878E-03 (1.80E-03) {03} 1.301E-04 (4.87E-05) {01} 9.021E-05 (7.36E-06) {02} 6.205E-04 (1.11E-04) {02} 2.230E+00 (4.15E+00) {04} 3.584E-02 (1.73E-02) {05} 1.820E+00 (3.58E+00) {01} 2.115E-03 (1.25E-03) {01} 7.431E+01 (9.39E+00) {05} 4.355E-03 (1.30E-03) {04} 1.878E-01 (9.63E-03) {03} 6.172E-01 (5.37E-02) {02} 3.600E-03 (2.70E-03) {03} 1.450E-02 (1.77E-02) {05} 9.290E-03 (2.65E-03) {03} 5.299E-02 (2.18E-02) {04} 7.064E-03 (1.39E-03) {05} 3.672E-01 (2.57E-01) {03} 8.765E-02 (2.68E-02) {03} 4.199E-03 (5.73E-03) {05} 1.022E-01 (7.09E-02) {01} 3.730E-02 (3.53E-02) {05} 4.219E-01 (1.88E-01) {04} 4.309E-01 (2.29E-01) {02} 7.813E-04 (3.91E-04) {03}

eMOGWO 9.378E-04 (7.79E-05) {03} 1.229E-03 (8.50E-05) {05} 1.579E+00 (1.71E-01) {02} 4.134E-03 (2.59E-03) {01} 1.613E-04 (1.01E-04) {02} 9.743E-05 (1.16E-05) {05} 6.394E-04 (1.38E-04) {03} 1.931E+00 (3.81E+00) {03} 1.901E-02 (2.94E-02) {01} 6.787E+00 (1.15E+00) {03} 2.147E-03 (1.40E-03) {02} 6.491E+01 (6.20E+00) {03} 2.400E-03 (3.01E-03) {02} 1.952E-01 (8.53E-03) {05} 6.670E-01 (4.12E-02) {04} 2.200E-03 (2.00E-03) {01} 1.183E-02 (1.49E-02) {04} 9.432E-03 (2.92E-03) {04} 3.304E-02 (1.12E-02) {02} 6.057E-03 (3.99E-04) {02} 4.456E-01 (2.86E-01) {04} 1.034E-01 (6.97E-02) {05} 2.963E-03 (1.51E-03) {03} 1.290E-01 (1.08E-01) {03} 2.046E-02 (1.32E-01) {01} 3.824E-01 (1.28E-01) {02} 4.350E-01 (2.19E-01) {03} 7.868E-04 (4.32E-04) {04}

We now analyze the results for the GD metric (Table 19). In this case, MMOGWO is the best algorithm, obtaining the best or second-best values on 15 test problems, followed by acMOGWO, bMOGWO, eMOGWO and MOGWO, which obtain the best or second-best values on 11, 10, 10 and 7 test problems, respectively, indicating that the strategies introduced into our MOGWO framework can effectively alleviate premature convergence and significantly improve the performance of the algorithm.
Table 20 Mean values and Std values of MMOGWO and its variants on metric IGD.
Problems SCH FON POL KUR ZDT1 ZDT2 ZDT3 ZDT4 ZDT6 DTLZ1

Algorithms MMOGWO 1.986E-03 (1.71E-04) {02} 1.065E-02 (8.32E-04) {01} 8.217E-01 (1.68E-02) {04} 2.287E-03 (3.24E-03) {01} 8.986E-04 (4.21E-04) {04} 7.769E-04 (7.76E-05) {03} 2.126E-03 (4.36E-04) {01} 3.200E+00 (3.47E+00) {01} 2.337E-01 (1.32E-02) {01} 2.035E+02 (5.27E+01) {04}

MOGWO 1.992E-03 (1.71E-04) {04} 1.081E-02 (9.35E-04) {05} 7.836E-01 (4.50E-02) {01} 4.491E-03 (1.11E-03) {04} 8.392E-04 (5.14E-04) {01} 8.046E-04 (8.70E-05) {04} 2.777E-03 (3.94E-04) {02} 3.374E+00 (9.07E+00) {03} 2.577E-01 (6.41E-02) {04} 7.288E+01 (1.03E+01) {03}

acMOGWO 1.967E-03 (1.59E-04) {01} 1.075E-02 (1.28E-03) {02} 8.281E-01 (2.13E-02) {05} 2.926E-03 (3.02E-03) {02} 1.331E-03 (4.38E-04) {05} 7.610E-04 (8.32E-05) {01} 2.804E-03 (3.20E-04) {03} 3.371E+00 (2.39E+00) {02} 2.350E-01 (3.38E-02) {02} 5.281E+01 (1.86E+01) {01}

bMOGWO 1.990E-03 (1.70E-04) {03} 1.079E-02 (7.09E-04) {04} 7.857E-01 (4.22E-02) {03} 4.570E-03 (1.68E-03) {05} 8.757E-04 (1.45E-04) {03} 7.633E-04 (8.01E-05) {02} 2.870E-03 (4.75E-04) {04} 6.350E+00 (7.38E+00) {05} 2.601E-01 (6.09E-02) {05} 2.311E+02 (2.82E+01) {05}

eMOGWO 2.016E-03 (1.91E-04) {05} 1.076E-02 (8.96E-04) {03} 7.839E-01 (5.17E-02) {02} 3.633E-03 (1.68E-03) {03} 8.409E-04 (3.06E-04) {02} 8.370E-04 (1.18E-04) {05} 2.878E-03 (4.21E-04) {05} 4.307E+00 (6.18E+00) {04} 2.541E-01 (1.46E-01) {03} 6.117E+01 (1.13E+01) {02}

Table 20 (continued)
Problems DTLZ2 DTLZ3 DTLZ4 DTLZ5 DTLZ6 DTLZ7 UF1 UF2 UF3 UF4 UF5 UF6 UF7 UF8 UF9 UF10 SRN TNK

MMOGWO 2.894E-02 (1.07E-02) {05} 4.632E+02 (1.01E+02) {04} 3.868E-02 (3.39E-02) {04} 1.889E+00 (2.17E-01) {01} 8.554E+00 (6.48E-01) {05} 2.140E-02 (1.65E-02) {04} 2.014E-02 (2.22E-02) {01} 4.069E-02 (1.19E-02) {01} 2.035E-01 (7.29E-02) {01} 5.689E-02 (3.88E-03) {02} 1.129E+00 (5.14E-01) {02} 4.080E-01 (1.47E-02) {02} 1.954E-02 (8.95E-03) {01} 1.801E+00 (5.63E-01) {05} 1.407E-01 (1.03E-01) {01} 3.516E+00 (2.89E+00) {01} 9.758E-03 (2.36E-03) {05} 3.445E-03 (7.96E-04) {01}

MOGWO 1.991E-02 (9.98E-03) {03} 4.220E+02 (3.33E+01) {03} 1.451E-02 (9.39E-03) {01} 2.178E+00 (1.71E-01) {04} 7.638E+00 (5.36E-01) {01} 1.850E-02 (1.39E-02) {02} 2.678E-02 (5.31E-02) {04} 6.660E-02 (1.98E-02) {05} 3.270E-01 (1.19E-01) {04} 5.974E-02 (4.59E-03) {05} 1.307E+00 (7.51E-01) {05} 4.127E-01 (6.07E-02) {04} 2.180E-02 (1.93E-02) {04} 1.112E+00 (7.88E-01) {03} 2.073E-01 (2.87E-01) {04} 3.561E+00 (1.06E+00) {02} 5.158E-03 (4.10E-03) {01} 4.684E-03 (1.13E-03) {05}

acMOGWO 2.863E-02 (1.26E-02) {04} 2.825E+02 (8.08E+01) {01} 1.866E-02 (1.76E-02) {03} 1.903E+00 (2.06E-01) {02} 8.535E+00 (7.46E-01) {04} 2.692E-02 (1.58E-02) {05} 2.557E-02 (2.53E-02) {03} 4.877E-02 (1.22E-02) {02} 2.763E-01 (6.06E-02) {03} 5.831E-02 (7.48E-03) {03} 1.122E+00 (5.42E-01) {01} 4.096E-01 (2.68E-01) {03} 2.120E-02 (1.03E-02) {03} 1.502E+00 (4.97E-01) {04} 1.661E-01 (1.02E-01) {03} 4.832E+00 (2.20E+00) {05} 9.370E-03 (2.93E-03) {04} 3.821E-03 (9.30E-04) {02}

bMOGWO 1.807E-02 (1.17E-02) {02} 5.912E+02 (4.04E+01) {05} 4.063E-02 (1.01E-02) {05} 2.060E+00 (1.16E-01) {03} 8.038E+00 (7.02E-01) {02} 1.870E-02 (1.30E-02) {03} 2.918E-02 (7.82E-02) {05} 6.012E-02 (1.86E-02) {04} 3.391E-01 (1.19E-01) {05} 5.895E-02 (4.86E-02) {04} 1.211E+00 (6.09E-01) {04} 4.256E-01 (1.86E-02) {05} 2.407E-02 (3.40E-02) {05} 9.467E-01 (7.17E-01) {01} 2.234E-01 (3.30E-01) {05} 4.054E+00 (1.64E+00) {04} 5.759E-03 (5.64E-03) {02} 4.444E-03 (1.40E-03) {04}

eMOGWO 1.799E-02 (1.29E-02) {01} 4.106E+02 (2.33E+01) {02} 1.501E-02 (1.89E-02) {02} 2.263E+00 (1.37E-01) {05} 8.299E+00 (6.97E-01) {03} 1.040E-02 (1.00E-02) {01} 2.545E-02 (6.45E-02) {02} 5.974E-02 (1.96E-02) {03} 2.190E-01 (9.38E-02) {02} 5.647E-02 (2.71E-03) {01} 1.130E+00 (6.07E-01) {03} 4.047E-01 (8.23E-02) {01} 2.065E-02 (1.13E-02) {02} 1.047E+00 (1.07E+00) {02} 1.516E-01 (1.26E-01) {02} 3.655E+00 (1.16E+00) {03} 7.088E-03 (5.00E-03) {03} 4.255E-03 (1.27E-03) {03}

Finally, we compare the algorithms on the IGD metric. It is apparent from Table 20 that MMOGWO is again the best algorithm. The other competitive algorithms are eMOGWO and acMOGWO, while MOGWO and bMOGWO give poor results on this metric. Therefore, the multiple search strategies (i.e., the adaptive chaotic mutation strategy, the boundary mutation strategy, and the elitism strategy) introduced into MOGWO indeed improve its performance.

Table 21 Final rankings of each algorithm on each metric.

Metric       MMOGWO   MOGWO    acMOGWO  bMOGWO   eMOGWO
γ    µr      2.6786   3.1786   2.9643   3.3929   2.7857
     δr      1.8610   1.8610   2.1059   1.9528   1.8827
     AR      1        4        3        5        2
Δ    µr      2.8571   2.7857   3.7143   2.7500   2.8214
     δr      2.6224   1.8112   1.4898   2.4732   1.1467
     AR      4        2        5        1        3
GD   µr      2.8571   3.3571   2.8929   2.9643   2.9286
     δr      2.7653   1.7296   1.7385   2.0344   1.5663
     AR      1        5        2        4        3
IGD  µr      2.4286   3.2500   2.8214   3.8214   2.6786
     δr      2.4592   1.8304   1.6467   1.4324   1.4324
     AR      1        4        3        5        2
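The average ranking (AR) procedure summarized in Table 21 can be sketched as follows: on each test problem the algorithms are ranked by their mean metric value (1 = best), µr and δr are the mean and standard deviation of each algorithm's ranks over all problems, and the final AR orders the algorithms by µr. A minimal sketch in Python (toy scores, not the paper's data; ties are broken by column order):

```python
import numpy as np

def average_ranking(scores):
    """scores: (n_problems, n_algorithms) mean metric values, smaller is better.
    Returns (mu_r, delta_r, final_rank) per algorithm."""
    # double argsort turns each row of values into ranks 1..n (1 = smallest)
    ranks = np.argsort(np.argsort(scores, axis=1), axis=1) + 1
    mu_r = ranks.mean(axis=0)              # average rank per algorithm
    delta_r = ranks.std(axis=0)            # dispersion of the ranks
    final = np.argsort(np.argsort(mu_r)) + 1  # AR: order the algorithms by mu_r
    return mu_r, delta_r, final

# toy example: 3 problems x 3 algorithms
scores = np.array([[0.1, 0.3, 0.2],
                   [0.2, 0.5, 0.4],
                   [0.3, 0.1, 0.6]])
mu_r, delta_r, final = average_ranking(scores)
```

With these toy scores, the first algorithm has ranks (1, 1, 2), so its µr is 4/3 and it obtains the best final AR.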

Fig. 18. Plot of Standard deviation (δr) against Average (µr) on all test problems.

Table 22 Statistical results obtained by multi-problem-based WSRT for MMOGWO vs. its competitors.

Metric γ
MMOGWO vs.   R+    R-    p-value  Sig. sign
MOGWO        224   182   0.633    =
acMOGWO      174   232   0.509    =
bMOGWO       152   254   0.246    =
eMOGWO       223   183   0.649    =

Metric Δ
MMOGWO vs.   R+    R-    p-value  Sig. sign
MOGWO        239   167   0.412    =
acMOGWO      84    322   0.007    +
bMOGWO       243   135   0.195    =
eMOGWO       248   158   0.305    =

Metric GD
MMOGWO vs.   R+    R-    p-value  Sig. sign
MOGWO        164   242   0.386    =
acMOGWO      235   171   0.479    =
bMOGWO       170   236   0.465    =
eMOGWO       194   212   0.849    =

Metric IGD
MMOGWO vs.   R+    R-    p-value  Sig. sign
MOGWO        178   228   0.569    =
acMOGWO      143   263   0.172    =
bMOGWO       106   300   0.027    +
eMOGWO       205   201   0.964    =

The final rankings of MMOGWO and its variants, calculated by the average ranking (AR) method, are listed in Table 21, where the best results are bolded and underlined and the second-best results are bolded only. From Table 21, we find that MMOGWO is the most competitive algorithm, obtaining the best rankings on three metrics, which validates its advantages. Also, eMOGWO obtains better average rankings on the γ metric and the IGD metric than the other variants, implying the positive effect of introducing an elitism strategy into our MOGWO framework. Observing Table 21 further, acMOGWO outperforms MOGWO in terms of the γ metric, the GD metric, and the IGD metric, while bMOGWO outperforms MOGWO regarding the Δ metric and the GD metric, indicating that the adaptive chaotic mutation strategy can effectively alleviate premature convergence and the boundary mutation strategy can significantly enhance the diversity of the algorithm. It can also be seen from Fig. 18 that the µr and δr values provided by MMOGWO are lower than those of all MOGWO variants, demonstrating that MMOGWO has better robustness and convergence. Moreover, the statistical results obtained by the multi-problem-based WSRT at a significance level of α = 0.05, summarized in Table 22, show that our MMOGWO significantly improves over acMOGWO in terms of the Δ metric and over bMOGWO on the IGD metric. That is to say, the elitism strategy employed in MMOGWO indeed improves the performance of the algorithm and achieves better effects than the other strategies (the adaptive chaotic mutation strategy and the boundary mutation strategy).
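The multi-problem-based Wilcoxon signed-rank test of Table 22 pairs the two algorithms' per-problem metric values, ranks the absolute differences, and accumulates the ranks of positive and negative differences into R+ and R−. A minimal sketch (illustrative numbers, not the paper's data; normal approximation without tie correction):

```python
import math

def wsrt(a, b):
    """Wilcoxon signed-rank sketch: returns (R+, R-, two-sided p) using the
    normal approximation without tie correction (illustrative only)."""
    diffs = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    n = len(diffs)
    # rank |d| from 1..n, assigning average ranks to tied magnitudes
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j + 2) / 2.0  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    r_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    r_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    t_stat = min(r_plus, r_minus)
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (t_stat - mu) / sigma
    p = 1.0 + math.erf(z / math.sqrt(2.0))  # 2 * Phi(z), valid since z <= 0
    return r_plus, r_minus, p

# illustrative per-problem metric values for two algorithms (scaled integers
# keep tie detection exact; NOT the paper's data)
algo_a = [12, 10, 33, 5, 21, 18, 40, 9]
algo_b = [15, 14, 30, 8, 25, 22, 44, 13]
r_plus, r_minus, p = wsrt(algo_a, algo_b)
```

A "+" sign in Table 22 corresponds to p below the significance level (here α = 0.05).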

6. Application for optimal operation of cascade hydropower stations
In this section, we investigate the potential of MMOGWO for real-world optimization problems. The case considered here is the multi-objective optimal scheduling problem (MOOSP) of cascade hydropower stations (CHSs). The curse of dimensionality and constraint handling are two great challenges for optimization methods applied to the MOOSP of CHSs, because there are many equality and inequality constraints and the decision variables of the optimization model are usually interdependent. To deal with the complicated constraints of the MOOSP of CHSs, a novel constraint handling technique is designed, and a case study of the Jinsha River CHSs in China is then considered to demonstrate the efficiency of the MMOGWO algorithm for handling such problems.
6.1. Application background
The Jinsha River, the upper reach of the Yangtze River, descends from NW to SE through deep valleys in the mountains east of the Qinghai-Tibet Plateau and is one of the 13 hydropower bases in China. The river flows through five major landforms, i.e. the eastern Qinghai-Tibet plateau, the western Sichuan plateau, the Hengduan Mountains, the Yunnan-Guizhou plateau and the mountainous area of southwest Sichuan (Tayyab, Zhou, Dong, Ahmad, & Sun, 2019). The overall length of the Jinsha River is approximately 3481 km and the drainage area is 502000 km2, which accounts for 26% of the total drainage area of the Yangtze River. The fall of the Jinsha River is about 3300 m over the 3481 km, and the deepest gorge has a very steep slope (~14 m/km). The annual mean precipitation varies dramatically with elevation, below 600 mm for the reaches from Zhimenda to Shigu, and 900-1300 mm for the reaches from Huiyi to Yibin (Wu, Yang, Xu, & Yin, 2008).
As the largest tributary, the Yalong River, merges into the Jinsha River at Panzhihua, the discharge increases significantly and the annual average flow reaches 4750 m3/s. The Jinsha River has attracted extensive attention both nationally and internationally due to its abundant water and electricity resources; its economically exploitable capacity has been determined to be approximately 88,910 MW, ranking first among all hydropower bases, which is about 40% of that of the Yangtze River and about one-sixth of that of China (Cheng, Wang, Chau, & Wu, 2014; Liu, Cheng, Wang, Liao, Chau, Wu, & Li, 2018). According to China's State Power Planning, about twenty hydropower plants have been planned on the mainstream of the Jinsha River, with a total installed capacity of approximately 71,660 MW. Moreover, a total of 1.5 billion m3 of water will be transferred to the northern area of China from the Jinsha River in the first phase of the Western Line of the South-North Water Transmission Project. Speeding up the development of the Jinsha River and realizing electricity transmission from west to east is a strategic choice for balancing China's primary energy sources in the early 21st century. Finally, the Jinsha River also contributes to flood control, irrigation, shipping, and tourism. Above all, the Jinsha River plays an important role in both regional and national economic development (Wang & Zhang, 2012). The cascaded hydropower system studied in this paper is located on the lower stream of the Jinsha River, as shown in Fig. 19, where four hydropower stations, Wudongde (WDD), Baihetan (BHT), Xiluodu (XLD) and Xiangjiaba (XJB), are selected as the case study; this is a typical multi-reservoir system in China. Their important characteristics are presented in Table 23.

Fig. 19. Location of Jinsha River and the cascaded hydropower system studied in this paper.

Table 23 Characteristics of the hydropower stations located on the lower stream of the Jinsha River.

Reservoir items      Units          WDD       BHT         XLD                    XJB
Normal water level   m              975       825         600                    380
Dead water level     m              945       765         540                    370
Total storage        billion m3     76.00     206         126.7                  51.63
Regulation storage   billion m3     26.00     104.00      64.60                  9.03
Regulation ability   -              seasonal  multi-year  incomplete multi-year  incomplete seasonal
Installed capacity   MW             10200     16000       13860                  6400
Annual generation    billion kW·h   389.10    640.95      571.20                 307.47
Firm power           MW             3150      5500        3850                   2009
Power coefficient    -              8.5       8.5         8.5                    8.5

6.2. Problem formulation
The MOOSP of CHSs aims to maximize the benefits of the water resources as much as possible by determining the optimal release of each reservoir over the whole schedule horizon, while satisfying a series of complicated equality and inequality constraints (Lu, Zhou, Qin, Wang, & Zhang, 2011; Ming, Chang, Huang, Wang, & Huang, 2015; Qin, Zhou, Lu, Wang, & Zhang, 2010). The objectives and constraints of the MOOSP of CHSs are formulated in the following sections.
6.2.1. Objectives
The goal in this paper is formulated as an MOP that attempts to maximize annual power generation and firm power simultaneously, with the upstream water level selected as the decision variable. Since the reservoir storage volume is reflected by the water level, the variation of the outflow can be represented by the fluctuation of the water level. Moreover, the power output is determined by the turbine release and the hydraulic head, which are dynamically determined by the water level in each period. Once the upstream water level is determined, the outflow can be calculated through the water balance equation shown in Eq. (33). At the same time, the downstream water level is related to the outflow of the reservoir: increasing the outflow raises the downstream water level and thus results in a lower hydraulic head (Ming, Chang, Huang, Wang, & Huang, 2015). The objective functions can be expressed as follows:
(1) Maximization of power generation

\max f_1 = \max E = \max \sum_{i=1}^{S_{sum}} \sum_{t=1}^{T} K_i Q_{i,t} H_{i,t} \Delta t    (28)

where E is the total power generation of the CHSs (10^8 kW·h); i and S_{sum} are the hydropower station index and the number of CHSs, respectively; t and T are the period index and the number of periods, respectively; K_i is the power coefficient of the i-th hydropower station; Q_{i,t} is the water discharge rate passing the turbines of the i-th hydropower station in the t-th schedule period (m3/s); H_{i,t} is the average hydraulic head of the i-th hydropower station in the t-th schedule period (m); and \Delta t is the schedule interval (h).
(2) Maximization of firm power

\max f_2 = \max N = \max \min_{t} \sum_{i=1}^{S_{sum}} N_{i,t} = \max \min_{t} \sum_{i=1}^{S_{sum}} K_i Q_{i,t} H_{i,t}, \quad t = 1, 2, \ldots, T    (29)

where N is the firm power of the CHSs (10^4 kW) and N_{i,t} is the power output of the i-th hydropower station in the t-th schedule period (10^4 kW).
6.2.2. Constraints
(1) Upstream water level limits

Z^{min}_{i,t} \le Z_{i,t} \le Z^{max}_{i,t}, \quad i = 1, 2, \ldots, S_{sum}; \; t = 1, 2, \ldots, T    (30)

where Z_{i,t} is the water level of the i-th hydropower station in the t-th schedule period (m); Z^{min}_{i,t} and Z^{max}_{i,t} are the minimum and maximum upstream water level limits of the i-th hydropower station in the t-th schedule period, respectively (m).
(2) Power output limits

N^{min}_{i,t} \le N_{i,t} \le N^{max}_{i,t}, \quad i = 1, 2, \ldots, S_{sum}; \; t = 1, 2, \ldots, T    (31)

where N^{min}_{i,t} and N^{max}_{i,t} are the minimum and maximum power output limits of the i-th hydropower station in the t-th schedule period, respectively (10^4 kW).
(3) Water discharge rate limits

Q^{min}_{i,t} \le Q_{i,t} \le Q^{max}_{i,t}, \quad i = 1, 2, \ldots, S_{sum}; \; t = 1, 2, \ldots, T    (32)

where Q^{min}_{i,t} and Q^{max}_{i,t} are the minimum and maximum water discharge rate limits of the i-th hydropower station in the t-th schedule period, respectively (m3/s).
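With the decision arrays in place, the two objectives (28)-(29) and the box constraints (30)-(32) reduce to simple array operations. A minimal sketch (hypothetical array shapes and toy numbers; in practice K, Q, H and the limits come from the model data):

```python
import numpy as np

def objectives(K, Q, H, dt):
    """Eqs. (28)-(29): K has shape (S_sum,), Q and H have shape (S_sum, T),
    dt is the schedule interval length (h)."""
    power = K[:, None] * Q * H      # N_{i,t}: power output per station and period
    f1 = (power * dt).sum()         # total generation E (Eq. 28)
    f2 = power.sum(axis=0).min()    # firm power N: worst-period total output (Eq. 29)
    return f1, f2

def box_feasible(x, lo, hi):
    """Eqs. (30)-(32): elementwise limit check for levels/outputs/discharges."""
    return bool(np.all((lo <= x) & (x <= hi)))

# toy data: 2 stations, 3 periods, power coefficient 8.5 as in Table 23
K = np.array([8.5, 8.5])
Q = np.array([[100.0, 120.0, 90.0], [200.0, 210.0, 190.0]])
H = np.array([[50.0, 48.0, 52.0], [30.0, 31.0, 29.0]])
f1, f2 = objectives(K, Q, H, dt=730.0)  # roughly 730 h in a month
```

The firm power f2 is taken over the period totals, matching the inner min over t in Eq. (29).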

(4) Water balance equation

V_{i,t} = V_{i,t-1} + \left( I_{i,t} - Q_{i,t} - S_{i,t} + \sum_{k=1}^{N_i} \left( Q_{k,t-\tau_{ki}} + S_{k,t-\tau_{ki}} \right) \right) \Delta t, \quad i = 1, 2, \ldots, S_{sum}; \; t = 1, 2, \ldots, T    (33)

where Vi ,t 1 and Vi ,t are the initial and terminal reservoir storage volume of the i-th hydropower station at the t-th schedule period, respectively (m3). I i ,t and Si ,t are the natural inflow rate and the spillage of the i-th hydropower station at the t-th schedule period, respectively (m3/s). N i is the number of upstream hydropower stations directly above the i-th hydropower station.  ki is the water transport delay from reservoir k to i. (5) Hydraulic connection equation

Ii1,t  Qi ,t  qi ,t , i  1, 2,

, Ssum  1 ; t  1, 2,

(34)

,T

where q_{i,t} is the interval inflow into the i-th hydropower station in the t-th schedule period (m3/s).
(6) Initial and terminal upstream water level limits

Z_{i,0} = Z^{begin}_{i}, \quad Z_{i,T} = Z^{end}_{i}, \quad i = 1, 2, \ldots, S_{sum}    (35)

where Z^{begin}_{i} and Z^{end}_{i} are the initial and terminal water levels of the i-th hydropower station, respectively (m).
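Because the upstream water level is the decision variable, the releases follow from Eq. (33) once the storages are read off the level-storage curve. A minimal single-reservoir sketch (hypothetical units; upstream stations omitted, i.e. N_i = 0):

```python
def outflow_from_levels(V, inflow, spill, dt):
    """Invert Eq. (33) for one reservoir with no upstream stations:
    V[t] - V[t-1] = (I[t] - Q[t] - S[t]) * dt
      =>  Q[t] = I[t] - S[t] - (V[t] - V[t-1]) / dt.
    V has length T+1 (storages at period boundaries); inflow/spill have length T."""
    T = len(inflow)
    return [inflow[t] - spill[t] - (V[t + 1] - V[t]) / dt for t in range(T)]

# toy example: storage in m3, rates in m3/s, dt in seconds (one month ~ 2.63e6 s)
dt = 2.63e6
V = [1.0e9, 1.1e9, 1.05e9]  # refill in period 1, drawdown in period 2
inflow = [3000.0, 2500.0]
spill = [0.0, 0.0]
Q = outflow_from_levels(V, inflow, spill, dt)
```

Refilling the reservoir reduces the release below the inflow, and drawing it down adds the released storage to the outflow, exactly as the balance in Eq. (33) dictates.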

Water levels, power outputs, and water discharge rates are all variables to be calculated, whereas the historical monthly inflow sequences of the reservoirs, the reservoir upstream water level-storage volume curves, the water discharge rate-downstream water level curves, and the values of the control parameters used for the reservoirs and hydropower stations are all known input data. Furthermore, without loss of generality, we assume that the evaporation loss of water is canceled out by rainfall (Ming, Chang, Huang, Wang, & Huang, 2015).
6.3. Constraint handling method
The key step in successfully solving the MOOSP of CHSs is handling the constraints (both equality and inequality constraints) properly (Lu, Zhou, Qin, Wang, & Zhang, 2011; Qin, Zhou, Lu, Wang, & Zhang, 2010). At present, the most popular approach is the penalty function method, which punishes infeasible solutions with penalty functions and keeps the priority of feasible solutions during the selection operation. However, for problems with a series of complex constraints, it is very hard to find a set of appropriate penalty parameters to guide the search towards the constrained optimum. This approach may also degrade the efficiency of the algorithm remarkably, because multiple iterations are required to tune the penalty parameters (Lu, Zhou, Qin, Wang, & Zhang, 2010). Therefore, to handle the complicated constraints of the MOOSP of CHSs effectively without degrading the computational efficiency of our MMOGWO algorithm, we propose an efficient constraint handling strategy that translates the power output limits and the water discharge rate limits into upstream water level limits based on the water balance equation. The strategy is described in detail in the following subsections.
(1) Constraint handling method for upstream water level limits
In this paper, the upstream water level is selected as the decision variable; hence, the upstream water level limits can be satisfied automatically. In this case, the upstream water level limits are denoted as:

Z1 = \left[ Z1^{min}_{i,t}, Z1^{max}_{i,t} \right], \quad i = 1, 2, \ldots, S_{sum}; \; t = 0, 1, 2, \ldots, T    (36)

where Z1^{min}_{i,0} = Z1^{max}_{i,0} = Z^{begin}_{i}, Z1^{min}_{i,T} = Z1^{max}_{i,T} = Z^{end}_{i}, Z1^{min}_{i,t} = Z^{min}_{i,t} and Z1^{max}_{i,t} = Z^{max}_{i,t} (i = 1, 2, \ldots, S_{sum}; t = 1, 2, \ldots, T).

(2) Constraint handling method for power output limits
For the power output limits shown in Eq. (31), the handling method is carried out as follows:
Step 1: In the positive direction. Starting from the initial upstream water levels Z_{i,0} = Z^{begin}_{i} (i = 1, 2, \ldots, S_{sum}), a series of maximum terminal upstream water levels Z2^{max}_{i,t} (i = 1, 2, \ldots, S_{sum}; t = 1, 2, \ldots, T-1) can be calculated according to the initial upstream water levels, the minimum power output limits N^{min}_{i,t}, the water balance equation shown in Eq. (33), and the hydraulic connection equation shown in Eq. (34). Through this procedure, a corridor composed of a set of maximum terminal upstream water levels in the admissible domain is created.
Step 2: In the negative direction. Starting from the terminal upstream water levels Z_{i,T} = Z^{end}_{i} (i = 1, 2, \ldots, S_{sum}), a series of minimum initial upstream water levels Z2^{min}_{i,t} (i = 1, 2, \ldots, S_{sum}; t = T-1, T-2, \ldots, 1) can be calculated according to the terminal upstream water levels, the minimum power output limits N^{min}_{i,t}, the water balance equation shown in Eq. (33), and the hydraulic connection equation shown in Eq. (34). Through this procedure, a corridor composed of a set of minimum initial upstream water levels in the admissible domain is created.
In this case, the upstream water level limits are denoted as:

Z2 = \left[ Z2^{min}_{i,t}, Z2^{max}_{i,t} \right], \quad i = 1, 2, \ldots, S_{sum}; \; t = 0, 1, 2, \ldots, T    (37)

where Z2^{min}_{i,0} = Z2^{max}_{i,0} = Z^{begin}_{i} and Z2^{min}_{i,T} = Z2^{max}_{i,T} = Z^{end}_{i} (i = 1, 2, \ldots, S_{sum}).
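Step 1 above can be sketched as a forward recursion: releasing only the discharge needed to meet the minimum power output keeps the most water in the reservoir, which yields the largest reachable level in each period. A much-simplified single-reservoir sketch (hypothetical linear level-storage curve and constant head; clipping against Z^max omitted):

```python
def forward_corridor(z0, inflow, n_min, k, head, dt, vol_of, lvl_of):
    """Sketch of Step 1: the release needed for N_min is q_min = N_min / (k * head);
    releasing only q_min gives the max reachable level Z2_max[t] via Eq. (33)."""
    z2_max = [z0]
    for t in range(len(inflow)):
        q_min = n_min[t] / (k * head)                           # discharge for N_min
        v_next = vol_of(z2_max[-1]) + (inflow[t] - q_min) * dt  # water balance, no spill
        z2_max.append(lvl_of(v_next))
    return z2_max

# hypothetical linear level-storage curve: V = 1e7 * (Z - 900)  (purely illustrative)
vol_of = lambda z: 1e7 * (z - 900.0)
lvl_of = lambda v: 900.0 + v / 1e7
z = forward_corridor(z0=945.0, inflow=[2400.0, 2300.0],
                     n_min=[2.0e6, 2.0e6], k=8.5, head=100.0,
                     dt=2.63e6, vol_of=vol_of, lvl_of=lvl_of)
```

The backward pass of Step 2 proceeds symmetrically from Z^end, yielding the minimum initial levels.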

(3) Constraint handling method for water discharge rate limits
For the water discharge rate limits shown in Eq. (32), the handling method is carried out as follows:
Step 1: the effect of the initial upstream water level Z_{i,t-1} (i = 1, 2, \ldots, S_{sum}; t = 1, 2, \ldots, T-1) on the terminal upstream water level Z_{i,t}.
According to the water discharge rate limits shown in Eq. (32), the water balance equation shown in Eq. (33), and the hydraulic connection equation shown in Eq. (34), the corresponding maximum and minimum reservoir storage volumes can be calculated (the calculation is expressed by Eqs. (38) and (39)). Then, a corridor composed of a set of maximum and minimum terminal upstream water levels in the admissible domain is created based on the reservoir storage volume-upstream water level curve.

V3^{min}_{i,t} = V3_{i,t-1}\left( Z_{i,t-1} \right) + \left( I_{i,t} - Q^{max}_{i,t} - S_{i,t} + \sum_{k=1}^{N_i} \left( Q_{k,t-\tau_{ki}} + S_{k,t-\tau_{ki}} \right) \right) \Delta t_t, \quad i = 1, 2, \ldots, S_{sum}; \; t = 1, 2, \ldots, T-1    (38)

V3^{max}_{i,t} = V3_{i,t-1}\left( Z_{i,t-1} \right) + \left( I_{i,t} - Q^{min}_{i,t} - S_{i,t} + \sum_{k=1}^{N_i} \left( Q_{k,t-\tau_{ki}} + S_{k,t-\tau_{ki}} \right) \right) \Delta t_t, \quad i = 1, 2, \ldots, S_{sum}; \; t = 1, 2, \ldots, T-1    (39)

In this case, the upstream water level limits are denoted as:

Z3 = \left[ Z3^{min}_{i,t}, Z3^{max}_{i,t} \right], \quad i = 1, 2, \ldots, S_{sum}; \; t = 1, 2, \ldots, T    (40)

where Z3^{min}_{i,0} = Z3^{max}_{i,0} = Z^{begin}_{i} and Z3^{min}_{i,T} = Z3^{max}_{i,T} = Z^{end}_{i} (i = 1, 2, \ldots, S_{sum}).

Step 2: the effect of the terminal upstream water level Z_{i,t+1} (i = 1, 2, \ldots, S_{sum}; t = T-1, T-2, \ldots, 1) on the initial upstream water level Z_{i,t}.
According to the water discharge rate limits shown in Eq. (32), the water balance equation shown in Eq. (33), and the hydraulic connection equation shown in Eq. (34), the corresponding maximum and minimum reservoir storage volumes can be calculated (the calculation is expressed by Eqs. (41) and (42)). Then, a corridor composed of a set of maximum and minimum initial upstream water levels in the admissible domain is created based on the reservoir storage volume-upstream water level curve.

V4^{min}_{i,t} = V4_{i,t+1}\left( Z_{i,t+1} \right) - \left( I_{i,t+1} - Q^{max}_{i,t+1} - S_{i,t+1} + \sum_{k=1}^{N_i} \left( Q_{k,t+1-\tau_{ki}} + S_{k,t+1-\tau_{ki}} \right) \right) \Delta t_{t+1}, \quad i = 1, 2, \ldots, S_{sum}; \; t = T-1, T-2, \ldots, 1    (41)

V4^{max}_{i,t} = V4_{i,t+1}\left( Z_{i,t+1} \right) - \left( I_{i,t+1} - Q^{min}_{i,t+1} - S_{i,t+1} + \sum_{k=1}^{N_i} \left( Q_{k,t+1-\tau_{ki}} + S_{k,t+1-\tau_{ki}} \right) \right) \Delta t_{t+1}, \quad i = 1, 2, \ldots, S_{sum}; \; t = T-1, T-2, \ldots, 1    (42)

In this case, the upstream water level limits are denoted as:

Z4 = \left[ Z4^{min}_{i,t}, Z4^{max}_{i,t} \right], \quad i = 1, 2, \ldots, S_{sum}; \; t = 1, 2, \ldots, T    (43)

where Z4^{min}_{i,0} = Z4^{max}_{i,0} = Z^{begin}_{i} and Z4^{min}_{i,T} = Z4^{max}_{i,T} = Z^{end}_{i} (i = 1, 2, \ldots, S_{sum}).

(4) The intersection of upstream water level limits
Based on the aforementioned upstream water level limits, a new corridor, composed of a restricted set of discrete values of the decision variables (upstream water levels) in the admissible domain, is created. The new upstream water level limits can be expressed as follows:

Z_{i,t} \in Z1 \cap Z2 \cap Z3 \cap Z4 = \left[ Z^{min}_{i,t}, Z^{max}_{i,t} \right], \quad i = 1, 2, \ldots, S_{sum}; \; t = 1, 2, \ldots, T    (44)

where Z^{min}_{i,t} = \max \left\{ Z1^{min}_{i,t}, Z2^{min}_{i,t}, Z3^{min}_{i,t}, Z4^{min}_{i,t} \right\} and Z^{max}_{i,t} = \min \left\{ Z1^{max}_{i,t}, Z2^{max}_{i,t}, Z3^{max}_{i,t}, Z4^{max}_{i,t} \right\} (i = 1, 2, \ldots, S_{sum}; t = 1, 2, \ldots, T).

Specifically, when the intersection of the upstream water level limits for a certain t is a null set, considering that the decision variable is the upstream water level, we employ the original upstream water level limits (shown in Eq. (36)) as the final upstream water level limits. By this approach, the MOOSP of CHSs is converted into an unconstrained MOP.

6.4. Selection strategy considering violation

Generally, all the constraints of the MOOSP of CHSs can be satisfied after applying the constraint handling methods introduced above. However, in the simulation phase we noticed that these methods still cannot guarantee that every newly generated individual satisfies all the complicated constraints, because the intersection of Z1 and Z4 is sometimes a null set, which means that the upstream water level limits shown in Eq. (30) and the power output limits shown in Eq. (31) cannot be satisfied at the same time. In this case, considering that the priority of the upstream water level is higher than that of the power output, we allow the individual to violate the power output limits. Moreover, to strike a balance between computational efficiency and constraint handling, a strategy is developed in the following to determine the priority of two individuals by synthetically considering the dominance relationship, the crowding distance, and the constraint violations (Lu, Zhou, Qin, Wang, & Zhang, 2010). The total violation of the i-th individual, TotalVio, is calculated as follows:

$TotalVio = \sum_{t=1}^{T} N_{i,t} \tag{45}$

Based on the TotalVio calculated above, the details of the selection strategy adopted in our study are as follows (Lu, Zhou, Qin, Wang, & Zhang, 2011):
(1) If one individual is feasible and the other is infeasible, the feasible one is favored.
(2) If both individuals are feasible and one dominates the other, the Pareto dominance concept is employed to decide which of them is favored.
(3) If both individuals are feasible but non-dominated with each other, the one with the larger crowding distance is favored.
(4) If both individuals are infeasible, the one with the smaller total constraint violation is favored.

6.5. Simulation scenarios and parameter settings

(1) Simulation scenarios
In this section, three scenarios of the Jinsha River cascaded hydropower system, which consists of four hydropower stations (WDD, BHT, XLD, and XJB), are implemented to verify the effectiveness of the MMOGWO algorithm for solving the MOOSP of CHSs. The schedule horizon is chosen as one year with 12 schedule intervals of one month each, ranging from June of the current year to May of the following year. Since the schedule interval is one month, the water transport delay between connected reservoirs is not taken into account. Three typical years are selected as the input for solving the MOOSP of CHSs based on the hydrological frequency analysis of the historical monthly inflow sequence data: the wet year (1966.6-1967.5), the normal year (2002.6-2003.5), and the dry year (1992.6-1993.5). The corresponding empirical probabilities of the water storage volumes are nearly 10%, 50%, and 90%, respectively.
(2) Parameter settings

When the proposed MMOGWO algorithm is implemented to deal with the MOOSP of CHSs mentioned above, its parameter settings are as follows: the population size and the size of the archive are set to 200 and 30, respectively, and the maximum number of iterations is set to 50,000. For the default parameter settings of MMOGWO, please refer to Section 3.3. Meanwhile, to verify the effectiveness of our MMOGWO, two MOEAs, NSGA-II and MOGWO, are also implemented to solve the same problems for the convenience of comparing the simulation results. For MOGWO, the parameter settings are kept the same as those for MMOGWO. The parameter settings for NSGA-II are as follows: the population size and the size of the archive are both set to 30; the mutation probability and the crossover probability are kept the same as in the original literature; and the maximum number of iterations is set to 50,000.

6.6. Computation results and comparison

With the parameter settings presented in Section 6.5, the MOOSP of CHSs is solved by directly comparing the performance of the MMOGWO algorithm with that of the NSGA-II and MOGWO algorithms. The distribution of the Pareto front formed by the non-dominated schemes obtained by MMOGWO, NSGA-II, and MOGWO is shown in Fig. 20. To verify the effectiveness of the proposed MMOGWO algorithm for tackling the MOOSP of CHSs, the objective function values of the schemes obtained by the different algorithms are listed in Tables 24-26 for the convenience of comparison. Meanwhile, the monthly water level processes, the monthly power output processes, the monthly power generation processes, the discharge and power output processes, the monthly reservoir storages of each reservoir, as well as the stacked monthly power output of each reservoir, for three representative schedule results (i.e., scheme 1, scheme 15, and scheme 30) obtained by MMOGWO are shown in Figs. 21-28 to check whether the constraints of the problem are satisfied.

Fig. 20. Pareto front obtained by MMOGWO, NSGA-II and MOGWO.

Table 24. The details of the non-dominated scheduling schemes obtained by different algorithms for the wet year (E: annual power generation, 10^8 kW·h; N: firm power, 10^4 kW).

Index | MMOGWO (E, N)  | NSGA-II (E, N) | MOGWO (E, N)
    1 | 2628.9, 1595.2 | 2675.2, 1579.5 | 2629.9, 1569.6
    2 | 2628.9, 1595.2 | 2676.4, 1564.6 | 2651.0, 1561.6
    3 | 2629.8, 1594.9 | 2678.7, 1561.8 | 2661.1, 1555.2
    4 | 2638.8, 1569.7 | 2679.8, 1546.8 | 2673.3, 1537.6
    5 | 2651.4, 1560.6 | 2683.2, 1545.7 | 2675.2, 1453.8
    6 | 2659.2, 1555.3 | 2683.5, 1543.9 | 2677.0, 1450.2
    7 | 2660.3, 1519.4 | 2684.8, 1530.2 | 2681.9, 1434.5
    8 | 2670.1, 1487.8 | 2684.9, 1527.4 | 2682.2, 1356.8
    9 | 2677.8, 1454.0 | 2687.1, 1518.6 | 2684.8, 1318.9
   10 | 2681.8, 1433.9 | 2687.4, 1509.3 | 2688.4, 1316.3
   11 | 2683.8, 1340.1 | 2688.9, 1497.2 | 2690.0, 1257.6
   12 | 2684.0, 1314.4 | 2690.6, 1479.4 | 2691.7, 1191.2
   13 | 2689.3, 1314.0 | 2690.8, 1464.3 | 2693.8, 1129.1
   14 | 2692.6, 1286.3 | 2693.4, 1463.5 | 2693.9, 1096.8
   15 | 2693.9, 1253.6 | 2694.0, 1451.6 | 2694.9, 1095.9
   16 | 2695.1, 1222.1 | 2694.7, 1360.3 | 2695.1, 1083.5
   17 | 2695.3, 1213.9 | 2694.9, 1338.8 | 2696.7, 1079.3
   18 | 2695.4, 1189.7 | 2695.9, 1301.2 | 2696.9, 1005.4
   19 | 2696.0, 1163.0 | 2696.1, 1290.0 | 2697.0, 979.30
   20 | 2696.1, 1135.8 | 2696.7, 1227.3 | 2697.4, 961.10
   21 | 2696.6, 1108.1 | 2696.7, 1223.1 | 2697.4, 946.90
   22 | 2696.6, 1085.2 | 2696.9, 1169.4 | 2697.5, 932.70
   23 | 2697.2, 977.00 | 2697.1, 1139.0 | 2697.6, 922.50
   24 | 2697.2, 955.20 | 2697.2, 1066.6 | 2697.7, 922.10
   25 | 2697.4, 935.50 | 2697.5, 1021.3 | 2697.7, 899.20
   26 | 2697.4, 915.30 | 2697.7, 991.40 | 2697.9, 864.80
   27 | 2697.5, 886.00 | 2697.7, 927.20 | 2698.0, 852.80
   28 | 2697.6, 864.50 | 2697.9, 874.60 | 2698.1, 832.40
   29 | 2697.6, 834.90 | 2698.2, 795.80 | 2698.1, 804.80
   30 | 2697.6, 834.90 | 2698.3, 747.20 | 2698.2, 781.50

Table 25. The details of the non-dominated scheduling schemes obtained by different algorithms for the normal year (E: annual power generation, 10^8 kW·h; N: firm power, 10^4 kW).

Index | MMOGWO (E, N)  | NSGA-II (E, N) | MOGWO (E, N)
    1 | 2108.2, 1341.0 | 2132.3, 1354.1 | 1999.4, 1285.3
    2 | 2108.2, 1341.0 | 2132.8, 1354.1 | 2042.8, 1257.9
    3 | 2110.3, 1328.7 | 2135.2, 1354.1 | 2072.9, 1234.0
    4 | 2116.2, 1320.1 | 2136.2, 1351.8 | 2074.8, 1228.0
    5 | 2122.4, 1285.1 | 2137.7, 1342.8 | 2080.5, 1210.5
    6 | 2128.0, 1266.5 | 2137.8, 1342.1 | 2089.3, 1174.3
    7 | 2128.9, 1232.1 | 2138.7, 1326.6 | 2101.1, 1208.3
    8 | 2131.1, 1221.5 | 2140.0, 1306.4 | 2108.8, 1191.5
    9 | 2131.2, 1152.3 | 2140.9, 1299.2 | 2112.7, 1182.0
   10 | 2136.0, 1139.0 | 2141.2, 1286.2 | 2116.8, 1149.7
   11 | 2141.6, 1116.1 | 2143.0, 1269.9 | 2121.4, 1148.8
   12 | 2141.6, 1116.1 | 2143.1, 1269.4 | 2125.8, 1131.4
   13 | 2145.0, 1069.4 | 2144.5, 1255.0 | 2134.5, 1113.9
   14 | 2146.7, 1055.2 | 2144.5, 1253.7 | 2138.6, 1081.2
   15 | 2146.7, 1055.2 | 2145.9, 1237.6 | 2140.5, 1067.2
   16 | 2158.2, 1009.9 | 2146.5, 1232.9 | 2147.0, 1056.8
   17 | 2158.2, 1009.9 | 2147.9, 1220.6 | 2153.7, 1047.5
   18 | 2159.6, 991.00 | 2148.7, 1196.3 | 2155.5, 1034.6
   19 | 2163.0, 966.30 | 2152.0, 1155.1 | 2156.4, 982.50
   20 | 2163.3, 944.20 | 2152.3, 1151.0 | 2160.4, 980.10
   21 | 2163.3, 944.20 | 2153.2, 1126.6 | 2161.3, 918.70
   22 | 2167.9, 870.40 | 2153.4, 1111.3 | 2164.7, 899.40
   23 | 2168.4, 870.40 | 2153.7, 1061.9 | 2164.9, 885.80
   24 | 2174.2, 868.50 | 2154.6, 1045.2 | 2165.5, 879.20
   25 | 2174.2, 868.50 | 2154.9, 1016.6 | 2165.8, 877.90
   26 | 2179.6, 848.30 | 2155.5, 962.10 | 2169.7, 847.70
   27 | 2179.6, 848.30 | 2156.4, 896.20 | 2171.6, 846.40
   28 | 2187.0, 845.70 | 2156.6, 873.10 | 2172.5, 846.30
   29 | 2190.1, 842.40 | 2157.0, 837.80 | 2173.1, 795.20
   30 | 2190.1, 842.40 | 2157.0, 837.60 | 2177.2, 791.80

From Fig. 20, it can be observed that MMOGWO and NSGA-II deal with this case study with good performance while MOGWO does not. As shown in Fig. 20, MMOGWO is superior to NSGA-II and MOGWO in terms of diversity, although the Pareto optimal solutions obtained by MMOGWO are not better than those obtained by NSGA-II. Moreover, MMOGWO is superior to MOGWO in terms of convergence. Fig. 20 also indicates that the distribution range of the Pareto optimal solutions obtained by the algorithms for the wet year is wider than that for the normal year and the dry year, implying that the cascaded hydropower system has more room to coordinate the relationship between the annual power generation and the firm power when the water supply is abundant. Meanwhile, it can easily be seen from Fig. 20 that there is an obvious conflict between the annual power generation and the firm power. For example (see Tables 24-26), the maximum annual power generation (i.e., 2697.6×10^8 kW·h, 2190.1×10^8 kW·h, and 1721.4×10^8 kW·h) is found at scheme 30, in which the firm power (i.e., 834.90×10^4 kW, 842.40×10^4 kW, and 760.90×10^4 kW) is minimum for the wet year, the normal year, and the dry year, respectively (obtained by MMOGWO). On the contrary, the maximum firm power (i.e., 1595.2×10^4 kW, 1341.0×10^4 kW, and 1265.5×10^4 kW) occurs at scheme 1, in which the annual power generation (i.e., 2628.9×10^8 kW·h, 2108.2×10^8 kW·h, and 1612.2×10^8 kW·h) is minimum for the wet year, the normal year, and the dry year, respectively (obtained by MMOGWO). At the same time, as shown in Tables 24-26, when the best annual power generation is considered, our MMOGWO changes the annual power generation by -0.7×10^8 kW·h, 33.1×10^8 kW·h, and 11.8×10^8 kW·h relative to NSGA-II and by -0.6×10^8 kW·h, 12.9×10^8 kW·h, and 2.5×10^8 kW·h relative to MOGWO for the wet year, the normal year, and the dry year, respectively.
On the other hand, when the best firm power is considered, MMOGWO changes the firm power by 15.7×10^4 kW, -13.1×10^4 kW, and -142.1×10^4 kW relative to NSGA-II and by 25.6×10^4 kW, 55.7×10^4 kW, and 134.5×10^4 kW relative to MOGWO for the wet year, the normal year, and the dry year, respectively. The scheduling results

comparisons indicate that MMOGWO can present Pareto optimal results with better distribution and convergence properties when applied to the MOOSP of CHSs. Hence, we can conclude that the proposed MMOGWO can be used as a promising alternative tool to solve the MOOSP of CHSs.

Table 26. The details of the non-dominated scheduling schemes obtained by different algorithms for the dry year (E: annual power generation, 10^8 kW·h; N: firm power, 10^4 kW).

Index | MMOGWO (E, N)  | NSGA-II (E, N) | MOGWO (E, N)
    1 | 1612.2, 1265.5 | 1674.3, 1407.6 | 1571.7, 1131.0
    2 | 1612.2, 1265.5 | 1674.8, 1389.0 | 1579.8, 1040.1
    3 | 1614.5, 1240.7 | 1676.4, 1377.6 | 1597.5, 1131.8
    4 | 1620.8, 1221.9 | 1678.9, 1370.6 | 1604.4, 1113.3
    5 | 1627.4, 1204.2 | 1679.2, 1368.1 | 1608.2, 1068.6
    6 | 1627.4, 1204.2 | 1681.0, 1353.6 | 1609.9, 1039.5
    7 | 1661.6, 1155.1 | 1681.4, 1345.4 | 1619.8, 1034.2
    8 | 1661.6, 1155.1 | 1682.4, 1325.9 | 1627.5, 1037.5
    9 | 1662.3, 1152.1 | 1684.2, 1290.5 | 1636.5, 1023.4
   10 | 1669.3, 1133.0 | 1687.6, 1288.7 | 1636.8, 978.60
   11 | 1670.6, 1098.4 | 1688.6, 1277.3 | 1640.3, 974.50
   12 | 1670.6, 1098.1 | 1689.9, 1255.3 | 1651.3, 1006.6
   13 | 1670.6, 1098.1 | 1691.2, 1211.0 | 1666.1, 955.80
   14 | 1683.4, 1030.8 | 1693.0, 1196.8 | 1667.5, 981.40
   15 | 1683.4, 1030.8 | 1694.1, 1141.4 | 1672.9, 910.90
   16 | 1684.6, 1028.7 | 1696.1, 1130.8 | 1673.7, 909.20
   17 | 1684.6, 1028.7 | 1697.0, 1106.4 | 1674.9, 879.40
   18 | 1693.2, 998.20 | 1697.1, 1093.1 | 1682.2, 906.80
   19 | 1698.5, 968.40 | 1698.6, 1043.0 | 1687.5, 858.50
   20 | 1699.9, 924.30 | 1699.3, 1005.1 | 1688.7, 868.30
   21 | 1702.6, 903.00 | 1700.0, 965.20 | 1692.9, 801.30
   22 | 1702.6, 903.00 | 1700.5, 937.70 | 1693.6, 795.30
   23 | 1706.0, 878.40 | 1701.6, 906.70 | 1701.4, 796.10
   24 | 1706.0, 878.40 | 1701.6, 862.60 | 1710.2, 779.10
   25 | 1708.1, 858.10 | 1702.0, 824.50 | 1710.5, 779.10
   26 | 1708.9, 819.70 | 1702.2, 809.10 | 1714.1, 722.10
   27 | 1710.1, 792.20 | 1707.4, 808.70 | 1715.9, 711.80
   28 | 1719.0, 780.80 | 1707.5, 806.20 | 1716.5, 699.50
   29 | 1721.4, 760.90 | 1709.2, 781.20 | 1716.9, 696.40
   30 | 1721.4, 760.90 | 1709.6, 776.80 | 1718.9, 698.80

Fig. 21. Monthly water level processes of the Jinsha River cascaded hydropower system (Wet Year, Normal Year and Dry Year).

Firstly, we analyze the monthly water level processes shown in Fig. 21. It is apparent from Fig. 21 that the monthly water levels are all within the constraint boundaries, which means the designed constraint handling method can satisfy the practical requirements of the multi-reservoir system. It can also be seen from Fig. 21 that the water levels of the four reservoirs are nearly identical for the wet year and the normal year, while the obvious differences mainly appear from January to May. The reason may be that June to December is the wet season, when the water supply is very abundant, which is beneficial for optimization, whereas January to May is the dry season, when the water supply is short, which is difficult to optimize. Meanwhile, it is worth mentioning that the variation of the water levels is obviously different for the dry year, since the short water supply makes it more difficult to allocate the water resources than in the wet year and the normal year.


Fig. 22. Monthly power output processes of the Jinsha River cascaded hydropower system (Wet Year, Normal Year and Dry Year).


Fig. 23. Monthly generation processes of the Jinsha River cascaded hydropower system (Wet Year, Normal Year and Dry Year).

Next, we pay attention to the power output processes and the power generation processes (Figs. 22 and 23). The optimal results show that the power output and the power generation from May to December are all obviously greater than those from January to April. The reason may be that May to December is the wet season and the water supply is very abundant; in these months, the reservoirs can increase the water discharge to enhance the power generation. By contrast, January to April is the dry season and the water supply is very short; in these months, the reservoirs should decrease the water discharge to meet the water demand.

Fig. 24. Discharge processes and power output processes of the Jinsha River cascaded hydropower system (Wet Year).

Fig. 25. Discharge processes and power output processes of the Jinsha River cascaded hydropower system (Normal Year)


Fig. 26. Discharge processes and power output processes of the Jinsha River cascaded hydropower system (Dry Year).

We now analyze the discharge processes and power output processes of each reservoir (Figs. 24, 25, and 26). It can be seen from these figures that the water discharge rate and the power output are all within the constraint boundaries, demonstrating that the proposed constraint handling method can satisfy the practical requirements of the multi-reservoir system. Meanwhile, the changes in the discharge processes and power output processes all follow the changes in the water levels.

Fig. 27. Monthly reservoir storages for each reservoir of the Jinsha River cascaded hydropower system (Wet Year, Normal Year and Dry Year).


Fig. 28. Stack of colors of monthly power output for each reservoir of the Jinsha River cascaded hydropower system (Wet Year, Normal Year and Dry Year).

Finally, we describe the results of the monthly reservoir storages and the stacked monthly power output for each reservoir (Figs. 27 and 28). It is observed that the monthly reservoir storages of the MMOGWO schemes for the three scenarios are all within the constraint boundaries. Meanwhile, the results shown in Fig. 28 indicate that the monthly power output always satisfies the firm power requirement. Thus, the effectiveness of the constraint handling method proposed in this paper is verified.

7. Conclusions

In this paper, a new multi-objective grey wolf optimizer, denoted as MMOGWO, is proposed. The proposed MMOGWO incorporates a fixed-sized external archive that is adaptively maintained according to a grid-based approach to preserve the non-dominated solutions found during the search process. An adaptive chaotic mutation strategy based on a chaotic cubic map and a modified generational distance (GD) is applied to the archive to dynamically adjust the convergence speed and balance exploration and exploitation. To maintain population diversity, a boundary mutation strategy based on the concept of multi-level parallelism is employed to manage boundary constraint violations. Moreover, a non-dominated sorting and crowding distance-based elitism strategy is incorporated into the algorithm to exploit more potential Pareto optimal solutions and preserve the diversity of solutions in the approximated set. The MMOGWO is evaluated on 28 benchmark functions and compared with a large number of state-of-the-art algorithms in terms of five often-used metrics, including GD, IGD, and RUNTIME. The numerical and graphical results clearly show that our MMOGWO is very competitive with the other algorithms, since it provides approximated Pareto fronts with good convergence and appropriate distribution in an acceptable runtime.
Additionally, a component-wise experiment is conducted to show the advantages of the multiple search strategies (including the adaptive chaotic mutation strategy, the boundary mutation strategy, and the elitism strategy). Afterward, MMOGWO is applied to solve the MOOSP of CHSs to further investigate its feasibility and effectiveness. Meanwhile, to handle the complex constraints of the MOOSP of CHSs effectively, a novel constraint handling method for dealing with various kinds of constraints is designed in this paper. The experimental results indicate that MMOGWO provides better schedule results, with better convergence and diversification properties, than NSGA-II and MOGWO. Moreover, the analysis of the experimental results shows that the proposed constraint handling method can satisfy the practical requirements of the MOOSP of CHSs. According to this comprehensive study, we conclude that the proposed MMOGWO, equipped with suitable constraint handling methods, can be considered a viable alternative for solving multi-objective optimization problems with complex constraints. In future work, we will focus on further enhancing the performance of MMOGWO for many-objective optimization problems. Equipping MMOGWO with other constraint handling techniques for other practical engineering problems is also meaningful work.

Conflict of Interest

The authors declare that they have no conflicts of interest regarding this work. We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.

Credit Author Statement

Manuscript title: A multiple search strategies based grey wolf optimizer for solving multi-objective optimization problems. All persons who meet authorship criteria are listed as authors, and all authors certify that they have participated sufficiently in the work to take public responsibility for the content, including participation in the concept, design, analysis, writing, or revision of the manuscript. Furthermore, each author certifies that this material or similar material has not been and will not be submitted to or published in any other publication before its appearance in Expert Systems with Applications.

Acknowledgements

All persons who have made substantial contributions to the work reported in the manuscript (e.g., technical help, writing and editing assistance, general support), but who do not meet the criteria for authorship, are named in the Acknowledgements and have given us their written permission to be named. If we have not included an Acknowledgements section, that indicates that we have not received substantial contributions from non-authors.

This research is supported by the National Program on Key Research Project of China (2016YFC0402207, 2017YFC0405901) and the National Natural Science Foundation (51679143). Sincere gratitude is extended to the editor and the anonymous reviewers for their valuable comments, which greatly improved the presentation of the paper.

References Abbass, H. A., Sarker, R., & Newton, C. (2001). PDE: a Pareto-frontier differential evolution approach for multi-objective optimization problems. In: Proceedings of the 2001 Congress on Evolutionary Computation (pp. 971-978). Al Moubayed, N., Petrovski, A., & McCall, J. (2012). D2MOPSO: Multi-objective particle swarm optimizer based on decomposition and dominance. Evolutionary Computation in Combinatorial Optimization, Lecture Notes in Computer Science, 7245, 75-86. Al Moubayed, N., Petrovski, A., & McCall, J. (2014). D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces. Evolutionary Computation, 22(1), 47-77. Alcalá-Fdez, J., Fernández, A., Luengo, J., Derrac, J., García, S., Sánchez, L., & Herrera, F. (2011). KEEL data-mining software tool: Data set repository, integration of algorithms and experimental analysis framework. Journal of Multiple-Valued Logic and Soft Computing, 17(2-3), 255-287. Akay, B. (2013). Synchronous and asynchronous Pareto-based multi-objective artificial bee colony algorithms. Journal of Global Optimization, 57(2), 415-445. Akbari, R., Hedayatzadeh, R., Ziarati, K., & Hassanizadeh, B. (2012). A multi-objective artificial bee colony algorithm. Swarm and Evolutionary Computation, 2(1), 39-52. Azad, S. K. (2019). Monitored convergence curve: a new framework for metaheuristic structural optimization algorithms. Structural and Multidisciplinary Optimization, 60(2), 481-499. Barros, M. T. L., Tsai, F. T. C., Yang, S. L., Lopes, J. E. G., & Yeh, W. W. G. (2003). Optimization of large-scale hydropower system operations. Journal of Water Resources Planning and Management, 129(3), 178-188. Chau, K. W., Chuntian, C., & Li, C. W. (2002). Knowledge management system on flow and water quality modeling. Expert Systems With Applications, 22, 321-330. Chen, C. M., Chen, Y. P., & Zhang, Q. F. (2009).
Enhancing MOEA/D with guided mutation and priority update for multi-objective optimization. In: Proceedings of 2009 IEEE Congress on Evolutionary Computation (pp. 209-216). Cheng, C. T., Wang, S. Chau, K. W., & Wu, X. Y. (2014). Parallel discrete differential dynamic programming for multireservoir operation. Environmental Modeling & Software, 57, 152-164. Chow, C. K., & Yuen, S. Y. (2012). A multiobjective evolutionary algorithm that diversifies population by its density. IEEE Transactions on Evolutionary Computation, 16 (2), 149-172. Coello Coello, C. A., Pulido, G. T., & Lechuga, M. S. (2004). Handling multiple objectives with particle swarm optimization. IEEE Transactions on Evolutionary Computation, 8(3), 256-279. Deb, K., & Agarwal, S. (1999). A niched-penalty approach for constraint handling in genetic algorithms. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna, (pp. 235-243). Deb, K., Pratap, A., Agarwal, S., & Meyarivan, T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2), 182-197. Deb, K., Thiele, L., Laumanns, M., & Zitzler, E. (2001). Scalable Test Problems for Evolutionary Multi-objective Optimization. Evolutionary Multiobjective Optimization (pp. 105-145). Springer. Derrac, J., García, S., Molina, D., & Herrera, F. (2011). A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation, 1(7), 3-18. Dilip, L., Bhesdadiya, R., Trivedi, I., & Jangir, P. (2018). Optimal Power Flow Problem Solution Using Multi-objective Grey Wolf Optimizer Algorithm. In: Intelligent Communication and Computational Technologies. Lecture Notes in Networks and Systems, vol 19. Springer, Singapore, (pp. 191-201). Durillo, J. J., & Nebro, A. J. (2011). jMetal: A java framework for multi-objective optimization. Advances in Engineering Software, 42(10), 760–771. Durillo, J. 
J., Nebro, A. J., Luna, F., & Alba, E. (2008). Solving three-objective optimization problems using a new hybrid cellular genetic algorithm. In: Parallel problem solving from nature (PPSN x). Lecture Notes in Computer Science, vol. 5199 (pp.661–670). Springer. Emary, E., Yamany, W., Hassanien, A. E., & Snasel, V. (2015). Multi-objective gray-wolf optimization for attribute reduction. Procedia Computer Science 65, 623-632. Faris, H., Aljarah, L., Al-Betar, M. A., & Mirjalili, S. (2018). Grey wolf optimizer: a review of recent variants and applications. Neural Computing and Applications, 30 (2), 413-435. Feng, Y. H., Liu, J. Q., & He, Y. C. (2013). Chaos-based dynamic population firefly algorithm. Journal of Computer Applications, 33(3), 796-799. Fonseca, C. M., & Fleming, P. J. (1993). Genetic algorithms for multiobjective optimization: formulation discussion and generalization. In: Proceedings of the fifth international conference on genetic algorithms. Morgan Kauffman Publishers (pp. 34-44). Fonseca, C. M., & Fleming, P. J. (1998). Multiobjective optimization and multiple constraint handing with evolutionary algorithms-Part I: a unified formulation. IEEE Transactions on System, Man, and Cybernetics-Part A: Systems and Humans, 28(1), 26-37. Gandomi, A. H., & Yang, X. S. (2014). Chaotic bat algorithm. Journal of Computational Science, 5(2), 224-232. Gao, S., Zeng, S. Y., Xiao, B., Zhang, L., & Shi, Y. L. (2009). An orthogonal multi-objective evolutionary algorithm with lower-dimensional crossover. In: Proceedings of 2009 IEEE Congress on Evolutionary Computation (pp. 1713-1726). Got, A., Moussaoui, A., & Zouache, D. (2020). A guided population archive whale optimization algorithm for solving multiobjective optimization problems. Expert Systems with Applications, 141, 112972. Hajela, H., & Lin, C.-Y. (1992). Genetic search strategies in multi-criterion optimal design. Structural and Multidisciplinary Optimization, 4(2), 99-107. Hasançebi O., & Azad S. K. (2015). 
Adaptive dimensional search: a new metaheuristic algorithm for discrete truss sizing optimization. Computers and Structures, 154 (2015), 1-16. Helbig, M., & Engelbrecht, A. P. (2013b). Dynamic multi-objective optimization using PSO. Metaheuristics for dynamic optimization: 433 (pp. 147-188). Berlin Heidelberg: Springer. Hu, W., & Yen, G. G. (2015). Adaptive multiobjective particle swarm optimization based on parallel cell coordinate system. IEEE Transactions on Evolutionary Computation, 19 (1), 1-18. Huang, V. L., Qin, A. K., Suganthan, P. N., & Tasgetiren, M. F. (2007). Multi-objective optimization based on self-adaptive differential evolution algorithm. In: Proceedings of 2007 IEEE Congress on Evolutionary Computation (pp. 3601-3608). Huang, V. L., Zhao, S. Z., Mallipeddi, R., & Suganthan, P. N. (2009). Multi-objective optimization using self-adaptive differential evolution algorithm. In: Proceedings of 2009 IEEE Congress on Evolutionary Computation (pp. 190-194). Ishibuchi, H., Masuda, H., Tanigaki, Y., & Nojima, Y. (2015). Modified distance calculation in generational distance and inverted generational distance. In: Evolutionary Multi-Criterion Optimization. EMO 2015. Lecture Notes in Computer Science, vol 9019. Springer, Cham (pp.110-125). Jain, M., Singh, V., & Rani, A. (2019). A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm and Evolutionary Computation, 44, 148-175. Jones, D. F., Mirrazavi, S. K., & Tamiz, M. (2002). Multi-objective meta-heuristics: An overview of the current state-of-the-art. European Journal of Operational Research, 137(1), 1-9. Kang, F., Li, J., & Ma, Z. (2011). Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions. Information Sciences, 181 (16), 3508-3531. Kaur, G., & Arora, S. (2018). Chaotic whale optimization algorithm. Journal of Computational Design and Engineering, 5(3), 275-284. Khan, W., & Zhang, Q. F. (2010). 
MOEA/D-DRA with two crossover operators. In: Proceedings of 2010 UK Workshop on Computational Intelligence (UKCI). Kishor, A., Singh, P. K., & Prakash, J. (2016). NSABC: Non-dominated sorting based multi-objective artificial bee colony algorithm and its application in data clustering. Neurocomputing, 216, 514-533. Kohli, M., & Arora, S. (2018). Chaotic grey wolf optimization algorithm for constrained optimization problems. Journal of Computational Design and Engineering, 5(4), 458-472. Knowles, J. D., & Corne, D. W. (2000). Approximating the non-dominated front using the Pareto archived evolution strategy. Evolutionary Computation, 8(2), 149-172. Kukkonen, S., & Lampinen, J. (2009). Performance assessment of generalized differential evolution 3 with a given set of constrained multi-objective test problems. In: Proceedings of 2009 IEEE Congress on Evolutionary Computation (pp. 1943-1950). Kursawe, F. (1990). A variant of evolution strategies for vector optimization. Parallel Problem Solving from Nature 1st Workshop, PPSN I, Lecture Notes in Computer Science, 496, 193-197. Li, C. S., Wang, W. X., & Chen, D. S. (2019). Multi-objective complementary scheduling of hydro-thermal-RE power system via a multi-objective hybrid grey wolf optimizer. Energy, 171, 241-255. Lin, Q. Z., Li, J. Q., Du, Z. H., Chen, J. Y., & Ming, Z. (2015). A novel multi-objective particle swarm optimization with multiple search strategies. European Journal of Operational Research, 247(3), 732-744. Liu, B. X., Cheng, C. T., Wang, S., Liao, S. L., Chau, K. W., Wu, X. Y., & Li, W. D. (2018). Parallel chance-constrained dynamic programming for cascade hydropower system operation. Energy, 165(A), 752-767. Liu, H. B., & Li, X. Q. (2009). The multiobjective evolutionary algorithm based on determined weight and sub-regional search. In: Proceedings of 2009 IEEE Congress on Evolutionary Computation (pp. 1928-1934). Liu, M. Z., Zou, X. F., Chen, Y., & Zhi, J. W. (2009).
Performance assessment of DMOEA-DD with CEC 2009 MOEA competition test instances. In: Proceedings of 2009 IEEE Congress on Evolutionary Computation (pp. 2913-2918). Lu, C., Gao, L., Li, X. Y., & Xiao, S. Q. (2017). A hybrid multi-objective grey wolf optimizer for dynamic scheduling in a real-world welding industry. Engineering Applications of Artificial Intelligence, 57(C), 61-79. Lu, C., Gao, L., Pan, Q. K., Li, X. Y., & Zheng, J. (2019). A multi-objective cellular grey wolf optimizer for hybrid flowshop scheduling problem considering noise pollution. Applied Soft Computing, 75, 728-749. Lu, C., Xiao, S. Q., Li, X. Y., & Gao, L. (2016). An effective multi-objective discrete grey wolf optimizer for a real-world scheduling problem in welding production. Advances in Engineering Software, 99, 161-176. Lu, Y. L., Zhou, J. Z., Qin, H., Wang, Y., & Zhang, Y. C. (2010). An adaptive chaotic differential evolution for the short-term hydrothermal generation scheduling problem. Energy Conversion and Management, 51(7), 1481-1490. Lu, Y. L., Zhou, J. Z., Qin, H., Wang, Y., & Zhang, Y. C. (2011). A hybrid multi-objective cultural algorithm for short-term environmental/economic hydrothermal scheduling. Energy Conversion and Management, 52(5), 2121-2134. Martinez, S. Z., & Coello Coello, C. A. (2011). A multi-objective particle swarm optimizer based on decomposition. In: Proceedings of the 13th annual genetic and evolutionary computation conference (pp. 69-76). Medina, M. A., Das, S., Coello Coello, C. A., & Ramírez, J. M. (2014). Decomposition-based modern metaheuristic algorithms for multi-objective optimal power flow-A comparative study. Engineering Applications of Artificial Intelligence, 32, 10-20. Ming, B., Chang, J., Huang, Q., Wang, Y., & Huang, S. (2015). Optimal operation of multi-reservoir system based on Cuckoo Search Algorithm. Water Resources Management, 29(15), 5671-5678. Mirjalili, S., Jangir, P., & Saremi, S. (2017).
Multi-objective ant lion optimizer: a multi-objective optimization algorithm for solving engineering problems. Applied Intelligence, 46 (1), 79-95. Mirjalili, S., Jangir, P., Saremi, S., & Saremi, S. (2017). Optimization of problems with multiple objectives using the multi-verse optimization algorithm. Knowledge-Based Systems, 134 (2017), 50-71. Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46-61. Mirjalili, S., Saremi, S., Mirjalili, S. M., & Coelho, L. D. S. (2016). Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization. Expert Systems with Applications, 47, 106-119. Mirjalili, S. Z., Mirjalili, S., Saremi, S., & Faris, H. (2018). Grasshopper optimization algorithm for multi-objective optimization problems. Applied Intelligence, 48 (4), 805-820. Mladenović, N., & Hansen, P. (1997). Variable neighborhood search. Computers & Operations Research, 24(11), 1097-1100. Mohamed, A. A. A., El-Gaafary, A. A. M., Mohamed, Y. S., & Hemeida, A. M. (2016). Multi-objective modified grey wolf optimizer for optimal power flow. In: Proceedings of 2016 Eighteenth International Middle East Power Systems Conference (MEPCON). Nebro, A. J., Luna, F., Alba, E., Dorronsoro, B., Durillo, J. J., & Beham, A. (2008). AbYSS: Adapting scatter search to multiobjective optimization. IEEE Transactions on Evolutionary Computation,

12(4), 439-457. Nebro, A. J., Durillo, J. J., Garcia-Nieto, J., Coello Coello, C. A., Luna, F., & Alba, E. (2009a). SMPSO: A new pso-based metaheuristic for multi-objective optimization. In: Proceedings of IEEE symposium on computational intelligence in multi-criteria decision-making (pp.66–73). Nebro, A. J., Durillo, J. J., Luna, F., Dorronsoro, B., & Alba, E. (2009b). MOcell: A cellular genetic algorithm for multiobjective optimization. International Journal of Intelligent Systems, 24(7), 726–746. Nguyen, T. T., Thom, H. T. H., & Dao, T. K. (2017). Estimation localization in wireless sensor network based on multi-objective grey wolf optimizer. In: Advances in Information and Communication Technology. ICTA 2016. Advances in Intelligent Systems and Computing, vol 538, (pp. 228-237). Springer, Cham. Nuaekaew, K., Artrit, P., Pholdee, N., & Bureerat, S. (2017). Optimal reactive power dispatch problem using a two-archive multi-objective grey wolf optimizer. Expert Systems with Applications, 87 (2017), 79-89. Pampará, G., Engelbrecht, A. P., & Cloete, T. (2008). Cilib: a collaborative framework for computational intelligence algorithms-Part I. In: Proceedings of 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence) (pp. 1750-1751). Pecora, L. M., & Carroll, T. L. (1990). Synchronization in Chaotic systems. Physical Review Letters, 64 (8), 821-824. Peng, W., & Zhang, Q. F. (2008). A decomposition-based multi-objective particle swarm optimization algorithm for continuous optimization problems. In: Proceedings of 2008 IEEE International Conference on Granular Computing (pp. 534-537). Poloni, C. (1997). Hybrid GA for multiobjective aerodynamic shape optimization. Genetic Algorithm in Engineering and Computer Science, New York: Wiley (pp. 397-414). Poloni, C., Giurgevich, A., Onesti, L., & Pediroda, V. (2000). 
Hybridization of a multi-objective genetic algorithm, a neural network and a classical optimizer for a complex design problem in fluid dynamics. Computer Methods in Applied Mechanics and Engineering, 186(2-4), 403-420.
Qin, H., Zhou, J. Z., Lu, Y. L., Wang, Y., & Zhang, Y. C. (2010). Multi-objective differential evolution with adaptive Cauchy mutation for short-term multi-objective optimal hydro-thermal scheduling. Energy Conversion and Management, 51(4), 788-794.
Qu, B. Y., & Suganthan, P. N. (2009). Multi-objective evolutionary programming without non-domination sorting is up to twenty times faster. In: Proceedings of 2009 IEEE Congress on Evolutionary Computation (pp. 2934-2939).
Sahoo, A., & Chandra, S. (2017). Multi-objective grey wolf optimizer for improved cervix lesion classification. Applied Soft Computing, 52, 64-80.
Schaffer, J. D. (1985). Multiple objective optimization with vector evaluated genetic algorithms. In: Proceedings of the First International Conference on Genetic Algorithms (pp. 93-100). Hillsdale, New Jersey: Lawrence Erlbaum.
Schott, J. R. (1995). Fault tolerant design using single and multicriteria genetic algorithm optimization. Master's thesis, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology.
Sierra, M. R., & Coello Coello, C. A. (2005). Improving PSO-based multi-objective optimization using crowding, mutation and ϵ-dominance. In: Proceedings of the Evolutionary Multi-Criterion Optimization, Springer (pp. 505-519).
Sindhya, K., Sinha, A., Deb, K., & Miettinen, K. (2009). Local search based evolutionary multi-objective optimization algorithm for constrained and unconstrained problems. In: Proceedings of 2009 IEEE Congress on Evolutionary Computation (pp. 275-278).
Srinivas, N., & Deb, K. (1994). Multiobjective optimization using non-dominated sorting in genetic algorithms. Evolutionary Computation, 2(3), 221-248.
Tanaka, M., Watanabe, H., Furukawa, Y., & Tanino, T. (1995).
GA-based decision support system for multicriteria optimization. In: Proceedings of IEEE International Conference on Systems, Man and Cybernetics. Intelligent Systems for the 21st Century, 2, 1556-1561.
Tayyab, M., Zhou, J. Z., Dong, X. H., Ahmad, I., & Sun, N. (2019). Rainfall-runoff modeling at Jinsha River basin by integrated neural network with discrete wavelet transform. Meteorology and Atmospheric Physics, 131(1), 115-125.
Tian, Y., Cheng, R., Zhang, X. Y., & Jin, Y. C. (2017). PlatEMO: a MATLAB platform for evolutionary multi-objective optimization. IEEE Computational Intelligence Magazine, 12(4), 73-87.
Tiwari, S., Fadel, G., Koch, P., & Deb, K. (2009). Performance assessment of the hybrid archive-based Micro Genetic Algorithm (AMGA) on the CEC09 test problems. In: Proceedings of 2009 IEEE Congress on Evolutionary Computation (pp. 1935-1942).
Tsai, P. W., Nguyen, T. T., & Dao, T. K. (2017). Robot path planning optimization based on multiobjective grey wolf optimizer. In: Advances in Intelligent Systems and Computing, vol 536 (pp. 166-173). Springer, Cham.
Tseng, L. Y., & Chen, C. (2009). Multiple trajectory search for unconstrained/constrained multi-objective optimization. In: Proceedings of 2009 IEEE Congress on Evolutionary Computation (pp. 1951-1958).
Van Veldhuizen, D. A., & Lamont, G. B. (1998). Multiobjective evolutionary algorithm research: A history and analysis. Technical Report TR-98-03, Department of Electrical and Computer Engineering, Graduate School of Engineering, Air Force Institute of Technology, Wright-Patterson AFB, OH.
Wang, G. G., Guo, L. H., Gandomi, A. H., Hao, G. S., & Wang, H. Q. (2014). Chaotic krill herd algorithm. Information Sciences, 274, 17-34.
Wang, S. J., & Zhang, X. L. (2012). Long-term trend analysis for temperature in the Jinsha River Basin in China. Theoretical and Applied Climatology, 109(3-4), 591-603.
Wang, Y. P., Dang, C. Y., Li, H. C., Han, L. X., & Wei, J. X. (2009).
A clustering multi-objective evolutionary algorithm based on orthogonal and uniform design. In: Proceedings of 2009 IEEE Congress on Evolutionary Computation (pp. 2927-2933).
Wu, W. H., Yang, J. D., Xu, S. J., & Yin, H. W. (2008). Geochemistry of the headwaters of the Yangtze River, Tongtian He and Jinsha Jiang: Silicate weathering and CO2 consumption. Applied Geochemistry, 23(12), 3712-3727.
Xia, X., Ji, J., Li, C. S., Xue, X. M., Wang, X. L., & Zhang, C. (2019). Multiobjective optimal control for hydraulic turbine governing system based on an improved MOGWO algorithm. Complexity, 2019, 1-14.
Xiang, Y., Zhou, Y., & Liu, H. (2015). An elitism based multi-objective artificial bee colony algorithm. European Journal of Operational Research, 245(1), 168-193.
Yang, D. X., Li, G., & Cheng, G. D. (2007). On the efficiency of chaos optimization algorithms for global optimization. Chaos, Solitons and Fractals, 34(4), 1366-1375.
Yang, J. M., Ma, M. M., Che, H. J., Xu, D. S., & Guo, Q. C. (2015). Multi-objective adaptive chaotic particle swarm optimization algorithm. Control and Decision, 30(12), 2168-2174.
Yu, D., Hong, J., Zhang, J. H., & Niu, Q. B. (2018). Multi-objective individualized-instruction teaching-learning-based optimization algorithm. Applied Soft Computing, 62, 288-314.
Zamuda, A., Brest, J., Bošković, B., & Žumer, V. (2009). Differential evolution with self-adaptation and local search for constrained multiobjective optimization. In: Proceedings of 2009 IEEE Congress on Evolutionary Computation (pp. 3617-3624).
Zapotecas-Martínez, S., & Coello Coello, C. A. (2011). A multi-objective particle swarm optimizer based on decomposition. In: Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, GECCO'11 (pp. 69-76). New York, NY, USA: ACM.
Zapotecas-Martínez, S., García-Nájera, A., & López-Jaimes, A. (2019). Multi-objective grey wolf optimizer based on decomposition. Expert Systems with Applications, 120, 357-371.
Zhan, Z. H., Li, J.
J., Cao, J. N., Zhang, J., Chung, H. S. H., & Shi, Y. H. (2013). Multiple populations for multiple objectives: A coevolutionary technique for solving multiobjective optimization problems. IEEE Transactions on Cybernetics, 43, 445-463.
Zhang, N., Li, C. S., Li, R. H., Lai, X. J., & Zhang, Y. C. (2016). A mixed-strategy based gravitational search algorithm for parameter identification of hydraulic turbine governing system. Knowledge-Based Systems, 109, 218-237.
Zhang, Q. F., & Li, H. (2007). MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on Evolutionary Computation, 11(6), 712-731.
Zhang, Q. F., Liu, W. D., & Li, H. (2009). The performance of a new version of MOEA/D on CEC09 unconstrained MOP test instances. In: Proceedings of 2009 IEEE Congress on Evolutionary Computation (pp. 18-21).
Zhang, Q. F., & Suganthan, P. N. (2009). Final report on CEC2009 MOEA competition. School of CS & EE, University of Essex, UK; School of EEE, Nanyang Technological University, Singapore.
Zhang, Q. F., Zhou, A. M., Zhao, S. Z., Suganthan, P. N., Liu, W. D., & Tiwari, S. (2008). Multiobjective optimization test instances for the CEC 2009 special session and competition. Technical Report, University of Essex, Colchester, UK and Nanyang Technological University, Singapore, Special Session on Performance Assessment of Multi-Objective Optimization Algorithms.
Zhang, R., Zhou, J. Z., Ouyang, S., Wang, X. M., & Zhang, H. F. (2013). Optimal operation of multi-reservoir system by multi-elite guide particle swarm optimization. Electrical Power and Energy Systems, 48, 58-68.
Zhang, X. Q., & Qin, L. X. (2016). Glowworm swarm optimization based on hybrid mutation. Computer Application and Software, 33(2), 272-275.
Zhang, X. Y., Tian, Y., & Jin, Y. C. (2015). A knee point-driven evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation, 19(6), 761-776.
Zhao, W. G., Wang, L. Y., & Zhang, Z. X. (2019). Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowledge-Based Systems, 163, 283-304.
Zhong, Y. B., Xiang, Y., & Liu, H. L. (2014). A multi-objective artificial bee colony algorithm based on division of the searching space. Applied Intelligence, 41(4), 987-1011.
Zhou, A. M., Qu, B. Y., Li, H., Zhao, S. Z., Suganthan, P. N., & Zhang, Q. F. (2011). Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm and Evolutionary Computation, 1(1), 32-49.
Zitzler, E. (1999). Evolutionary algorithms for multiobjective optimization: Methods and applications. Doctoral dissertation, Swiss Federal Institute of Technology Zurich.
Zitzler, E., Deb, K., & Thiele, L. (2000). Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary Computation, 8(2), 173-195.
Zitzler, E., & Künzli, S. (2004). Indicator-based selection in multiobjective search. In: Proceedings of the 8th International Conference on Parallel Problem Solving from Nature, PPSN VIII (pp. 832-842). Springer.
Zitzler, E., & Thiele, L. (1998). Multiobjective optimization using evolutionary algorithms-A comparative case study. In: Parallel Problem Solving from Nature-PPSN V. PPSN 1998. Lecture Notes in Computer Science, vol 1498 (pp. 292-301). Springer, Berlin, Heidelberg.
Zitzler, E., & Thiele, L. (1999). Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation, 3(4), 257-271.
Zitzler, E., Laumanns, M., & Thiele, L. (2001). SPEA2: Improving the strength Pareto evolutionary algorithm. Technical Report, Computer Engineering and Networks Laboratory, Department of Electrical Engineering, Swiss Federal Institute of Technology (ETH) Zurich, Switzerland (pp. 95-100).
Zitzler, E., Thiele, L., Laumanns, M., Fonseca, C. M., & da Fonseca, V. G. (2003). Performance assessment of multiobjective optimizers: an analysis and review. IEEE Transactions on Evolutionary Computation, 7(2), 117-132.
Zou, W. P., Zhu, Y. L., Chen, H. N., & Zhang, B. B. (2011). Solving multiobjective optimization problems using artificial bee colony algorithm. Discrete Dynamics in Nature and Society, 2011(2), 1-37.

Credit Author Statement
The first author, Junfeng Liu, is a Ph.D. student advised by the corresponding author, Prof. Dingfang Li. The second author suggested the idea of developing a multi-objective grey wolf optimizer algorithm with multiple search strategies. The first author reviewed the literature and confirmed that a multi-objective grey wolf optimizer algorithm with multiple search strategies had not yet been designed. The first author conceptualized the algorithm, then designed and coded it. He also designed the computational experiments and conducted the testing and comparisons with other well-known algorithms, consulting regularly with the second author, who validated his work.
