Grey wolf optimizer with cellular topological structure

Grey wolf optimizer with cellular topological structure

Accepted Manuscript Grey Wolf Optimizer with Cellular Topological Structure Chao Lu , Liang Gao , Jin Yi PII: DOI: Reference: S0957-4174(18)30243-4 ...

9MB Sizes 0 Downloads 93 Views

Accepted Manuscript

Grey Wolf Optimizer with Cellular Topological Structure Chao Lu , Liang Gao , Jin Yi PII: DOI: Reference:

S0957-4174(18)30243-4 10.1016/j.eswa.2018.04.012 ESWA 11926

To appear in:

Expert Systems With Applications

Received date: Revised date: Accepted date:

15 December 2017 21 March 2018 9 April 2018

Please cite this article as: Chao Lu , Liang Gao , Jin Yi , Grey Wolf Optimizer with Cellular Topological Structure, Expert Systems With Applications (2018), doi: 10.1016/j.eswa.2018.04.012

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

ACCEPTED MANUSCRIPT

AC

CE

PT

ED

M

AN US

CR IP T

Highlights  The cellular automata concept is embedded into the GWO  CGWO with a topological structure can help to improve diversity of population  The proposed CGWO can solve multimodal problems well  The CGWO outperforms the other state-of-the-art algorithms on function and engineering problems

ACCEPTED MANUSCRIPT

Grey Wolf Optimizer with Cellular Topological Structure Chao Lua, Liang Gaob, Jin Yic Hubei Key Laboratory of Intelligent Geo-Information Processing (China University of Geosciences (Wuhan)), Wuhan 430074, China b. State Key Lab of Digital Manufacturing Equipment & Technology, Huazhong University of Science and Technology, Wuhan, China c. Department of Industrial Systems Engineering and Management, National University of Singapore, Singapore

CR IP T

a.

Abstract: Grey wolf optimizer (GWO) is a newly developed metaheuristic inspired by hunting mechanism of grey wolves. The paramount challenge in GWO is that it is prone to stagnation in local optima. This paper proposes a cellular grey wolf optimizer with a topological structure (CGWO). The proposed CGWO has two characteristics. Firstly, each wolf has its own topological neighbors, and interactions among wolves are restricted to their neighbors, which favors exploitation of CGWO. Secondly, information diffusion mechanism by overlap among neighbors can allow to maintain the population diversity for

AN US

longer, usually contributing to exploration. Empirical studies are conducted to compare the proposed algorithm with different metaheuristics such as success-history based adaptive differential evolution with linear population size reduction (LSHADE), teaching-learning based optimization algorithm (TLBO), effective butterfly optimizer with covariance matrix adapted retreat phase (EBOwithCMAR), novel dynamic harmony search (NDHS), bat-inspired algorithm (BA), comprehensive learning particle swarm optimizer (CLPSO), evolutionary algorithm based on decomposition (EAD), ring topology PSO (RPSO), crowding-based differential evolution (CDE), neighborhood based crowding differential evolution (NCDE), locally informed

M

particle swarm (LIPS), some improved variants of GWO and GWO. Experimental results show that the proposed method performs better than the other algorithms on most benchmarks and engineering problems.

1.

ED

Keywords: Grey wolf optimizer; Cellular automata; Metaheuristics; Engineering optimization; Global optimization Introduction

Metaheuristics have received widespread attention over the last two decades due to their simplicity, flexibility and

PT

derivation-free mechanism. A variety of metaheuristics, such as Genetic Algorithm (GA) (Goldberg & Holland, 1988), Particle Swarm Optimization (PSO) (J & R, 1995), Differential Evolution (DE) (Storn & Price, 1997), Cognitive Behavior Optimization Algorithm (CBO) (M. Li, Zhao, Weng, & Han, 2016) and Moth-flame Optimization Algorithm (MFO)

CE

(Seyedali Mirjalili, 2015), have been proposed and successfully applied in many engineering fields. Metaheuristics can usually be classified into two categories: (1) single solution-based algorithms like Simulated Annealing (SA) (Kirkpatrick, Gelatt, & Vecchi, 1983). It begins with a candidate solution, and then the quality of this candidate solution is improved during

AC

the search progress. (2) population-based metaheuristics such as Biogeography-Based Optimizer (BBO) (Simon, 2008), Teaching-Learning Based Optimization algorithm (TLBO) (R. V. Rao, Savsani, & Balic, 2012; R. V. Rao, Savsani, & Vakharia, 2012), Grey Wolf Optimizer (GWO) (S. Mirjalili, Mirjalili, & Lewis, 2014), Water Evaporation Optimization (WEO) (Kaveh & Bakhshpoori, 2016), Multi-Verse Optimizer (MVO) (S. Mirjalili, Mirjalili, & Hatamlou, 2016), and Yin-Yang-Pair Optimization (YYPO) (Punnathanam & Kotecha, 2016). The characteristic of population-based metaheuristics is that the optimization search is performed on a set of solutions. Compared with single solution-based metaheuristics, population-based metaheuristics have some advantages as follows:



Corresponding author.

E-mail address: [email protected] (Chao Lu)

ACCEPTED MANUSCRIPT 

A set of trial solutions can share information about the search space which guides the trial solutions toward the promising areas of the search space.



A set of trial solutions can help each other to avoid local optimum (S. Mirjalili, et al., 2014).



Population-based metaheuristics usually have a greater exploration ability than single solution-based metaheuristics. Although different metaheuristics have various search manners, most of them are based on the common conceptualization

which balances diversification (exploration of the search space) and intensification (exploitation of already found approximate solutions). Thus, exploration and exploitation are two cornerstones of metaheuristics. Exploration is the process of visiting new regions of the search space, whilst exploitation is the process of searching those areas of the search space

CR IP T

within the neighborhood of previously visited points (C. Lu, Li, Gao, Liao, & Yi, 2017; Matej, Črepinšek, Liu, & Mernik, 2013).

GWO (S. Mirjalili, et al., 2014) is one recently developed population-based algorithm inspired by hunting mechanism of grey wolves. Compared with other population-based algorithms such as PSO and GA, GWO presents a powerful search ability (Chao Lu, Gao, Li, & Xiao, 2017). Some efforts on the GWO have been made in terms of application and theory. From the applicable perspective, GWO has been utilized to address human recognition (Sanchez, Melin, & Castillo, 2017),

AN US

and unmanned combat aerial vehicle path planning (Zhang, Zhou, Li, & Pan, 2016). From a theoretical perspective, Rodriguez et al. (2016; 2017; 2017) proposed a new GWO with a hierarchical operator and an improved GWO with fuzzy logic, respectively. Joshi and Arora (2017) proposed an enhanced grey wolf optimizer with a better hunting mechanism to balance between exploration and exploitation. Heidari and Pahlavani (2017) proposed an efficient GWO where Lévy flight (LF) and greedy selection strategies are integrated with the modified hunting phases of GWO. In GWO, the search process is guided by the three best wolves at each iteration. This search scheme promotes exploitation since all candidate wolves (solutions) are attracted toward the three best wolves, thereby converging faster toward these wolves. However, as a result of

M

such a strong exploitation effect, the search diversity would be hampered in a sense. Finally, the GWO is prone to stagnation in local optimum (Chao Lu, Xiao, Li, & Gao, 2016). Meanwhile, according to the previous experiments (Qu, Liang, Wang, Chen, & Suganthan, 2016), many metaheuristics like GWO is not suitable for solving multimodal optimization problems

ED

where there exist multiple global optimum solutions because it is originally designed to address single global optimization problems. To address the above issues, cellular automata (CA) is embedded into GWO to maintain population diversity and locate multiple optimal peaks. The main motivations for the GWO combined with CA are as follows: (1) CA provides a

PT

neighborhood structure for GWO. In CA, all the individuals (solutions) are arranged in a toroidal mesh and each individual has its own neighborhood. Due to the isolation by using neighborhoods, the population diversity can be preserved well. (2)

CE

The shared information from overlap areas between neighborhoods can contribute to exploration. The information in different neighborhoods can transit from one area to another by diffusing good solutions in overlap areas between neighborhoods. Moreover, since individual’s interaction is restricted to its neighborhood, each neighbor has a good local search ability.

AC

Whereas all the individuals in different neighborhoods can find promising solutions in a parallel search way, which improves global search ability and helps to locate multiple optimal peaks. Therefore, GWO with CA can effectively balance exploitation and exploration. Some efforts on metaheuristics with a topological structure have been made in recent years. For example, Shi et al. (2011) proposed a cellular PSO by incorporating the cellular topological structure into the PSO to improve the performance of PSO. Gao et al. (2012) successfully applied the cellular PSO to parameters optimization of a multi-pass milling process. Yi et al. (2016) proposed a modified harmony search algorithm with a cellular local search to enhance the exploitation capability. Alba and Dorronsoro et al. (2005) developed a cellular GA by combining GA with cellular automata in order to enhance the performance of the basic GA. Li (2010) proposed a simple yet effective niching PSO with a ring neighborhood topology, which showed a more effective performance than some existing PSO niching algorithms. Das et al. (2009) developed an effective DE using a neighborhood-based mutation operator, which is competitive or superior to several existing DEs.

ACCEPTED MANUSCRIPT

Piotrowski (2013) extended the work of Das et al. (2009) by considering the additional strategies. Obviously, metaheuristics with a topological structure have a significant improvement in the search ability. CA can provide such a topological structure. Therefore, we propose a hybrid GWO with CA called CGWO for continuous optimization problems. This paper uses three groups of benchmarks and engineering application problems to evaluate the behavior of the proposed CGWO by comparing with other algorithms including success-history based adaptive differential evolution with linear population size reduction (LSHADE) (Tanabe & Fukunaga, 2014), effective butterfly optimizer with covariance matrix adapted retreat phase (EBOwithCMAR) (Kumar, Misra, & Singh, 2017), teaching-learning based optimization algorithm (TLBO) (Črepinšek, Liu, & Mernik, 2012), novel dynamic harmony search (NDHS) (J. Chen, Pan, & Li, 2012), bat-inspired algorithm (BA) (Yang & Gandomi, 2012), comprehensive learning particle swarm optimizer (CLPSO) (Liang, Qin,

CR IP T

Suganthan, & Baskar, 2006), evolutionary algorithm based on decomposition (EAD) (Gu, Cheung, & Luo, 2015), ring topology PSO (RPSO) (X. D. Li, 2010), crowding-based differential evolution (CDE) (Thomsen, 2004), neighborhood based crowding differential evolution (NCDE) (Qu, Suganthan, & Liang, 2012), locally informed particle swarm (LIPS) (Qu, et al., 2016), island-based harmony search (iHS) (Al-Betar, Awadallah, Khader, & Abdalkareem, 2015), some improved versions of GWO (Joshi & Arora, 2017; Malik, Mohideen, & Ali, 2016; Yu, Liu, Wang, & Gao, 2017), and grey wolf optimizer (GWO) (S. Mirjalili, et al., 2014). Experimental results show the CGWO can improve the performance of the basic

AN US

GWO.

Additionally, there are three crucial differences between our proposal and the previous researches on optimization approaches with a special topological structure. 

In general, various cellular structures have different impacts on the behavior of the algorithms. Thus, this paper incorporates six different cellular structures into the GWO to test the performance of algorithms with different structures. The most appropriate one is chosen as our proposed cellular GWO as in Section 4.3. Actually, metaheuristics with the CA are a kind of fine-grained structure algorithms. In this case of the algorithm, the

M



population is divided into many subpopulations that arranged in a given structure. Then, the search process is performed

ED

within the neighborhood of the current individual. Metaheuristics with CA are similar to niching metaheuristics, but traditional niching methods usually require extra parameters. One important advantage of the proposed CGWO algorithm is that there is no need to specify any parameters. 

The previous cellular algorithms were primarily adopted to locate a single global optimum, rather than multiple global

PT

optima. However, the proposed CGWO can locate all global optima in Section 4.4.2.

CE

The rest of this paper is organized as follows. Firstly, a brief overview of the basic GWO algorithm and cellular automata (CA) is given in Section 2. The detailed presentation of the CGWO approach is provided in Section 3. Section 4 gives the experiment and results. Section 5 presents the conclusions and future work. Overview of GWO and CA

AC

2.

Before we introduce our proposed CGWO algorithm, we give an overview of the basic GWO and CA, respectively. 2.1 GWO

In this subsection, we describe GWO. GWO is a new metaheuristic inspired by the grey wolf hunting for the prey. The main steps of GWO are provided in the following subsections (S. Mirjalili, et al., 2014).

2.1.1 Social hierarchy To establish a social hierarchy of wolves, all the grey wolves are classified into four kinds of wolf according to the fitness value. The best wolf (solution) in GWO is denoted as the alpha (α). Similarly, the second and third best wolves are called beta

ACCEPTED MANUSCRIPT

(β) and delta (δ) respectively. The rest of wolves are considered to be omega (ω). In GWO the search process is mainly guided by α, β and δ. The ω wolves obey these three wolves (S. Mirjalili, et al., 2014). 2.1.2 Encircling prey Grey wolves encircle the prey when the hunt. To mathematically model encircling behavior, the equation is defined as follows:

𝑫 = 𝑪 ∘ 𝑿𝑝 (𝑡) − 𝑿(𝑡) 𝑿(𝑡 + 1) = 𝑿𝑝 (𝑡) − 𝑨 ∘ 𝑫

CR IP T

(1) (2)

where 𝑡 is the current generation number, ∘ is the hadamard product operation, 𝑿𝑝 and 𝑿 represents the position vector of the prey and a grey wolf. The vectors 𝑨 and 𝑪 are formulated as follows (S. Mirjalili, et al., 2014): 𝑨 = 2𝒂 ∘ 𝒓1 − 𝒂 𝑪 = 2𝒓2

(3) (4)

AN US

Where the value of 𝒂 is linearly decreased from 2 to 0 during the optimization process, and it is used to emphasize exploration and exploitation respectively. 𝒓1 and 𝒓2 are random vectors in range [0,1]. 2.1.3 Hunting

The hunt is often guided by the alpha. The beta and delta might also join in the hunting. However, the position of prey (optimum) is unknown in an abstract search space. To simulate the hunting behavior of grey wolves, assume that the alpha, beta, and delta have better knowledge about the potential location of the prey. Therefore, the three best solutions found so far

M

are maintained and guide the other wolves towards the potential location of the prey. The equation of hunting is calculated as follows (S. Mirjalili, et al., 2014):

ED

𝑫𝛼 = 𝑪1 ∘ 𝑿𝛼 − 𝑿, 𝑫𝛽 = 𝑪2 ∘ 𝑿𝛽 − 𝑿, 𝑫𝛿 = 𝑪3 ∘ 𝑿𝛿 − 𝑿 𝑿1 = 𝑿𝛼 − 𝑨1 ∘ 𝑫𝛼 , 𝑿2 = 𝑿𝛽 − 𝑨2 ∘ 𝑫𝛽 , 𝑿3 = 𝑿𝛿 − 𝑨3 ∘ 𝑫𝛿

PT

𝑿(𝑡 + 1) =

𝑿1 +𝑿2 +𝑿3 3

(5) (6) (7)

Fig. 1 presents a search process of updating a candidate’s position based on alpha, beta, and delta in 2-D search space. It

CE

can be observed that the three best wolves (alpha, beta, and delta) can estimate the position of the prey, and other wolves

AC

update the positions randomly around the prey.

ACCEPTED MANUSCRIPT

α

α wolf δ

β wolf

R

the prey

CR IP T

δ wolf

move β



AN US

Position of candidate wolf

Fig. 1. Position updating in GWO

M

2.1.4 Attacking prey

To mathematically model attacking the prey, the value of a vector 𝒂 is decreased. 𝑨 is also decreased by 𝒂. When random values of 𝑨 are in [-1, 1], the next position of a search agent can be in any position between its current position and

ED

the position of the prey. With the operators proposed so far, the GWO algorithm allows its search agents to update their positions based on the α, β, δ, and ω; and attack toward the prey. However, the GWO algorithm tends to premature with these

2.1.5 Search for prey

PT

operators (S. Mirjalili, et al., 2014).

CE

As stated previously, the search direction of grey wolves mainly depends on the position of the alpha, beta, and delta. They diverge from each other to search for prey and converge to attack prey. To simulate divergence, 𝑨 is used to oblige the search agent to diverge from the prey. This emphasizes exploration and allows the GWO algorithm to search globally.

AC

Another element of GWO that favors exploration is 𝑪 in Eq.(4). The random value of 𝑪 is in [0, 2]. It provides random weights for prey to stochastically emphasize or deemphasize the effect of prey in defining the distance in Eq.(1), which can help to improve exploration and avoid local optimum. It is noted that 𝑪 is not linearly decreased in relation to 𝑨 but random values over the course of iteration which is helpful in case of local optima stagnation in the latter phase of search.

2.2 CA In this subsection, we describe CA. CA has become a popular tool in the scientific research since the concept of CA was first proposed by Von Neumann and Ulam (Neumann, 1966). One of the most well-known CA rules, the “game of life” was conceived by Conway in the late 1960s (E. R. Berlekamp, J. H. Conway, & Guy, 1982). Furthermore, CA has also been widely applied in a variety of fields such as physics, biology, computer science and traffic. CA is a set of cells distributed in a

ACCEPTED MANUSCRIPT

special topological structure. Each cell in the topological structure has its own state, which is associated with its surrounding cells in discrete time steps. The state of each cell at the next time step is determined by the current states of a surrounding neighborhood of cells. In general, CA consists of five key elements as follows: cell, cell state, cell space, neighborhood, and transition rule. As previously stated, cell state is information about the current cell which can determine the next state of the cell. Cell space represents a set of cells. It often has one-dimensional, two-dimensional, and three-dimensional space structure in real applications. It is noted that the boundary of cell space needs to be defined since we usually simulate real-world systems by finite grids. The boundary is usually a ring grid. More precisely, the left boundary connects to the right boundary and the top boundary connects to the bottom boundary. Neighborhood is a set of cells surrounding a center cell. It plays a states of its neighborhood. Some definitions of CA can be described as follows.

CR IP T

critical role in the next state of a center cell. Transition rule is used to determine the next state of a given cell based on the A d-dimensional CA contains a d-dimensional grid of cells, each of which can take on a value. The cells update their states automatically according to a given rule. Formally, a cellular automaton 𝑸 is a quadruple as follows: 𝑸 = (𝑆, 𝐺, 𝑑, 𝑓) where 𝑆 is a finite set of states, 𝐺 is the neighborhood, 𝑑 ∈ 𝑍

+

(8)

is the dimension of 𝑸, and 𝑓 is the interaction rule, also

referred to as the transition rule.

Given the position of a cell 𝑖 ∈ 𝑍 𝑑 in a d-dimensional lattice or grid, its neighborhood 𝐺 is defined by:

AN US

𝐺𝑖 = (𝑖, 𝑖 + 𝒓1 , ⋯ , 𝑖 + 𝒓𝑛 )

(9)

where 𝑛 represents the neighborhood size, and 𝒓𝑗 is a fixed vector in the d-dimensional space. Six kinds of neighborhood shapes in CA are presented in Fig. 2. Please note the names of these neighborhoods: the label Ln represents neighborhoods composed of the n nearest neighbors in a given axial direction (north, south, west and east), while the label Cn (compact) denotes the neighborhoods consisting of the n-1 nearest cell to the center one (in horizontal, vertical, and diagonal directions). Two most commonly used neighborhoods are called Von Neumann neighborhood (C5) and Moore neighborhood (C9),

Fig. 2. Structure of neighborhood

AC

CE

PT

ED

M

respectively.

Transition rule can determine the next state of a given cell according to its current state and states of its neighbors. The transition rule 𝑓 is written as follow: 𝑓: 𝑆𝑛 → 𝑆

(10)

This transition rule maps the state 𝑠𝑖 ∈ 𝑆 of a given cell 𝑖 into another state from the set 𝑆, as a function of the states of the cells in the neighborhood 𝐺𝑖 . In addition, some features of CA are stated as follows: (1) Homogeneity and regularity: homogeneity means the change of each cell in cell space obeys the same rule (i.e., CA rule or called transformation function). However, regularity indicates the identical distribution pattern, size, shape as

ACCEPTED MANUSCRIPT

well as the orderly distribution regulation of all the cells. (2) Space discretization: space discretization denotes that cells are arranged in a discrete cell grids defined by specific rules. (3) Synchronous calculation (parallelism): the state change of each cell at time step t+1 is independent behavior. If the configuration change of CA is considered as computation or processing of data or information, the disposing process of CA will be synchronous, which is suitable for parallel computation.

3.

The proposed CGWO

3.1

The framework of the CGWO algorithm

CR IP T

In this section, we first give an overview of the proposed algorithm, and then explain the improvement strategy in detail.

The goal of this work is to develop a cellular GWO called CGWO for continuous optimization problems. A flow chart of the proposed CGWO is shown in Fig. 3. One crucial strategy is that CA is utilized to improve the performance of the GWO. The improvement strategy is motivated by the following three advantages.

AN US

(1) CA can help to improve the local search ability since cell in CA only interacts with its neighbors for exploitation. Meanwhile, information diffusion mechanism contributes to exploration. Finally, a collection of such cells congregates to solve the problem.

(2) We can take the advantages of the two approaches. The candidate solutions are attracted toward the good solutions in GWO, therefore, the convergence speed of the GWO is very fast but it is also easy to trap into local optima. While CA provides slow diffusion through the population regarding the good solutions of each neighborhood. Subsequently, the attraction to the good points is weaker, which avoids local optima. Thus, we can integrate these

M

different techniques to strengthen their advantages and make up the respective shortcomings.

ED

(3) In addition, CA and GWO are easy to be implemented due to their simple mechanism. The proposed CGWO algorithm incorporates the concept of CA. It consists of five steps summarized as follows.

PT

Input. SOP (1);



A stopping condition;



N: size of wolves;



NS: neighborhood size;



Input other parameters;

CE



AC

Output. The optimal (or near-optimal) results found so far. Step 1) Initialization. Generate an initial population (wolves) of N solutions X1,…,XN ∈ Ω. Step 2) Evaluation. Once the wolf population is initialized, compute the fitness value of the solutions (wolves). Step 3) Create the neighborhood. For i=1,…,N, define a set of indexes Bi={i(1),…,i(NS)}, where {Xi(1),…, Xi(NS)} are the wolves with size NS closest to xi in the topological structure. Step 4) Stopping condition. If the stopping criterion is satisfied, then output the alpha wolf (the optimal solution found so far). Otherwise, go to step 5. Step 5) Update. Step 5.1) Selection. First, compute the fitness of solutions, and then select the three best fitness solutions from the current neighbors to guide other solutions within neighborhood to perform the search optimization.

ACCEPTED MANUSCRIPT

Step 5.2) Hunter operation. A new solution is generated from three rules. (a) encircling prey, (b) hunting and (c) attacking, as described in Section 2.1. Note that the search optimization is only implemented in its neighborhood. Step 5.3) Replacement strategy. The replacement method is based on fitness values of solutions. In detail, replace the current solution if the current solution is worse than the new solution after updating, and vice versa. Note that we have incorporated a constraint handling mechanism in CGWO when dealing with constrained problems. This mechanism is the same as in (Deb, 2000). Overlap area

yes Stop condition is met?

no

Evaluate function

α β δ

CR IP T

Select the first best three wolves from its current neighbors

end

Search the prey by the above three wolves

AN US

Update position of wolves

the current wolf is replaced if the position of the wolf is worse than that of the current one after update

Wolves and the positions

Fig. 3. Flow chart of the proposed CGWO algorithm

M

The proposed CGWO is discussed in the following parts. In CGWO, all information inherits inside the cell. Six typical cell structures are employed to construct cell space in Fig. 2. One wolf (solution) is defined as a cell, and all cells represent

ED

wolves in CGWO. Consequently, the size of cell structure is the same with that of the wolves. In the initialization phase, grey wolf population is randomly created; subsequently, each wolf is randomly assigned to one grid of the lattice structure one by one. Note that the index of each wolf in the lattice structure remains unchanged during the search process. Neighborhood

PT

could be defined based on the lattice structures. Taking C9 structure in Fig. 3 as an example, suppose that wolf size is 49, and they are randomly generated in the initialization, then 49 grids are created to construct a lattice structure. Each wolf is assigned to each cell. The gray cells surrounding around the current cell are its neighbors, and deep gray grids represent the

CE

overlapping neighbors belonging to two consecutive cell structures. The interaction among cells is restricted to the neighborhood. This could help each cell to perform exploitation inside their neighbors. Meanwhile, the overlapping neighbors provide a migration mechanism from one neighborhood to another one. It could enable each cell information to diffuse to the

AC

whole wolves, which is favorable for exploring the search space (Shi, et al., 2011). In CGWO, the center cell state is determined by its surrounding neighbors. The state of neighbors’ best position 𝑋𝑖 in the current wolf is denoted by 𝑆𝑖𝑡 (𝑋𝑛 ). For simplification, the transition rule is defined as: 𝑆𝑖𝑡+1 (𝑋𝑛 ) = 𝑓 (𝑆𝑖𝑡 (𝑋𝑖 ), 𝑆𝑖𝑡 (𝑋𝑖+𝒓1 ), ⋯ , 𝑆𝑖𝑡 (𝑋𝑖+𝒓𝑛 )) = 𝑚𝑖𝑛 (𝑓𝑖𝑡𝑛𝑒𝑠𝑠(𝑆𝑖𝑡 (𝑋𝑖 )), 𝑓𝑖𝑡𝑛𝑒𝑠𝑠 (𝑆𝑖𝑡 (𝑋𝑖+𝒓1 )) ⋯ , 𝑓𝑖𝑡𝑛𝑒𝑠𝑠 (𝑆𝑖𝑡 (𝑋𝑖+𝒓𝑛 ))) (11) where 𝑓𝑖𝑡𝑛𝑒𝑠𝑠(𝑆𝑖𝑡 (𝑋𝑖 )) is the fitness of the current solution 𝑋𝑖 . Eq. (11) defines that the neighbor with the best fitness value is chosen for updating cell state.

ACCEPTED MANUSCRIPT 3.2

CA of the CGWO algorithm

Two important factors affect the performance of metaheuristics when solving complex problems: (1) communication structures. (2) information inheriting and diffusing scheme. As stated above, CA has also been successfully applied in many fields due to its unique topology structures and communication interaction. Although CA and GWO stem from different domains, some similar characteristics of the two techniques can be observed. First, both of them contain a set of individuals, which are called cells in CA and wolves in GWO. Second, every individual interacts with each other to transmit certain intrinsic information. In CA, each cell communicates with its neighbors and updates its cell state. Similarly, each wolf communicates with other wolves to update such intrinsic information as position,

CR IP T

fitness, α, β and δ wolf in GWO. Third, a transition rule is used to manage the evolution of cells in CA, while some search operators are used to update information of wolf in GWO. Finally, CA and GWO both run in a discrete time step. To enhance the behavior of the interaction, we utilize the idea of CA with the communication structure to investigate information diffusing mechanism of GWO. Wolves can only interact with their neighbors in the CA. It could help to exploit every cell’s local information inside the neighborhood and explore the search space due to the slow information diffusion

AN US

through the entire wolf population. Based on the view above, we introduce the concept of CA into the GWO to enhance the performance of GWO. The CA model for GWO is defined as follows: (a) cell: A candidate wolf or solution. (b) cell space: A set of all solutions.

(c) cell state: The wolf’s information such as the neighbor’s best position Xn at time t. It can be denoted by 𝑆𝑖𝑡 . (d) neighborhood: A set of solutions surrounding the current solution.

M

(e) transition rule: According to fitness value, if the new position of the current wolf is better than the old one of the current wolf before update, the current wolf moves to the new position and the information of wolf is also updated.

ED

Then, the local best wolf is chosen from the neighborhood, and guides its neighbors converge to its local optimal areas.

PT

(f) discrete time step: Iteration in GWO.

AC

CE

Based on the above definitions, the pseudo code of CGWO is given in Fig. 4.

ACCEPTED MANUSCRIPT

AN US

CR IP T

1. Input Parameters(a); // Parameters of the CGWO 2. wolf population Xi(i=1,...N) initialization(wolves_Size); 3. evaluate (wolf population) ; 4. while (evaluation_number < max_evaluation_number) 5. for i  1 to wolves_Size 6. neighbors  calculateNeighborhood(wolves(i)); 7. Xα, Xβ, Xδ  select the first best three wolves(neighbors); 8. new position of the wolf  update(the current wolf);// update the position based on Eq.(7) 9. update(a, A and C); 10. evaluate (new position of the wolf) ; 11. if new position of the wolf is better than that of the current wolf 12. replacement(new position, position of the current wolf); 13. end if 14. evaluation_number++; 15. end for 16. end while 17. Xα select the best wolf(the entire wolves); 18. return Xα Fig. 4. Pseudo code of the proposed CGWO algorithm

Fig. 4. shows the pseudo code of the proposed CGWO algorithm. The detailed steps of the CGWO are as follows. Firstly, input the parameters like wolf population size and neighborhood structure. The initial population is usually composed of randomly generated wolves in the line 2 of Fig.4. The population is distributed in n (n=2 in this paper) dimensional lattice grid according to the indices of the population members (as obtained during initialization). After initialization, the fitness

M

values of wolf population are calculated as in the line 3, and then the CGWO algorithm starts an update loop. This loop step is to generate new and promising wolf population through update operators as mentioned in Section 2.1. The first best three wolves (alpha (α), beta (β) and delta (δ)) from the current wolf’s neighbors are selected out in the line 7 of Fig.4 due to

ED

adoption of CA. Fig. 3 also presents the process of the update loop of an individual in CGWO (the considered neighborhood is Moore neighborhood in Fig. 3). Note that the overlap among the neighborhoods provides an implicit migration mechanism to the CGWO. Since the best solutions move smoothly through the whole population, diversity in the population is kept

PT

longer than that in non-structured GWO. This soft diffusion of the best solution through the population is one of the main reasons of the good balance between exploration and exploitation. Another important character of the CGWO is that these

CE

update operators are limited within the neighborhood of the current individual, therefore individuals belonging to other neighborhoods are not allowed to interact. Fig. 5 further illustrates the search process of a candidate wolf in wolf population (population size = 57). The solution

AC

indices are sorted only randomly in order to maintain the diversity of population. The solutions are located in the lattice gird with respect to their indices and each solution has its own neighborhood. For a candidate solution 𝑋⃗3,4 with index (3, 4), its Moore structure neighbors contain solutions {𝑋⃗2,3 , 𝑋⃗2,4 , 𝑋⃗2,5 , 𝑋⃗3,3 , 𝑋⃗3,5, 𝑋⃗4,3, 𝑋⃗4,4 , 𝑋⃗4,5 } besides itself. The candidate solution or wolf is guided by the three best wolves found so far in its neighbors but not in the entire population. The circle represents the range of the current wolf’s neighbors and the wolves only communicate with each other in its neighborhood. In other words, the whole population is divided into many subpopulations and the update operation is performed independently on a set of subpopulations. But these independent subpopulations can occasionally transmit information by overlap areas among neighborhoods. The overlap area, containing the solutions 𝑋⃗3,3 and 𝑋⃗4,3 , is constructed by the neighborhoods of 𝑋⃗3,4 and 𝑋⃗4,2 . Since the search process among neighborhoods is based on sequence, there is a delay in the information spread through the population regarding the best position of each neighborhood. Therefore, the attraction towards good solutions is weaker, which prevents the population from getting trapped in local minima. If stop condition is satisfied, the algorithm returns the

ACCEPTED MANUSCRIPT best wolf (α) obtained over the course of search.

1

2

3

4

5

6

7

α 1

δ 2

α wolf

C3 β wolf

R

3

move

δ wolf

β C2

CR IP T

4

5

Overlap area

the prey

Circle represents the range of the neighborhood of the current wolf

AN US

Neighborhood of

Fig. 5. Position updating in CGWO

4.

Experiments

This section is devoted to measuring the performance of the proposed CGWO. In this section the experimental studies

M

contain the following five aspects:

The best choice of neighborhoods (six kinds of neighborhoods are shown in Fig. 2) in Section 4.3.

2.

Performance comparisons with other metaheuristics on benchmarks in Section 4.4.

3.

Statistical test on results obtained by different algorithms in Section 4.5.

4.

CPU-time cost study on CGWO and GWO in Section 4.6.

5.

Real-world applications of the CGWO in engineering optimization in Section 4.7.

PT

ED

1.

In the following subsections, the benchmarks and parameter settings are described at first, and then the experimental

CE

studies are further investigated step by step.

4.1 Benchmark functions

AC

Three test suites are used to evaluate the success of CGWO and the comparison algorithms. Table 1 records the common unimodal functions (Yao, Liu, & Lin, 1999). Table 2 lists the common multimodal functions (Yao, et al., 1999). Table 3 involves 15 benchmark instances used in CEC2015. Details about these problems are provided in technical report (Qu, Liang, Suganthan, & Chen, 2015). Table 1 Unimodal benchmark functions Name Sphere Schwefel’s problem 2.22

Test Functions F1 x  

n



xi

[-100,100]

i 1

F2 x  

n

 i 1

n

xi

f min

S 2

  xi i 1

30

[-10,10]30

0 0

ACCEPTED MANUSCRIPT  i  xj   i 1  j 1  max xi , 1 

F3 x  

Schwefel’s problem 1.2 F4 x 

Schwefel’s problem 2.21

F5 x  

Rosenbrock

n



i

 100x n 1

F6 x  

Step

 xi2

i 1

i 1

 x i 1

f 7 x  

Noise

 ix i 1



x



 1    2

i

 0.5

n

n



2

2

    i  n

2

i

0

[-100,100]30

0

[-30,30]30

0

[-100,100]

 random 0, 1

4 i

[-100,100]30

30

[-1.28,1.28]30

0 0

Table 2 Multimodal benchmark functions Test Functions i 1

 x

F9 x  

Rastrigin

n



F12 x  

i 1 n



xi2 

i 1

 xi     1  i 

n



i

 u( x ,10,100,4) i

 F13 x   0.1sin 2 3x1    

n

 u( x ,5,100,4) i

i 1



i

25



ED F15 x  



11

j 1

ai  i 

1





Shekel 2

0

[-50,50]30

0

[-50,50]30

0



  2  j  i  1 ( xi  a ij ) 

[-65,65]2

1

[-5,5]4

0.0003

-1

1

x 1 bi2  bi x 2    bi2  bi x 3  x 4 

2

1 6 x1  x1 x2  4 x22  4 x24 [-5,5]2 3 2    5.1 2 5 1   cos x1  10 F17 x    x 2  x1  x1  6   101  2 [-5,10], [0,15]  8  4   



  27 x )

F18 x   1  ( x1  x2  1)2(19  14 x1  3 x12  14 x2  6 x1 x2  3 x22 )



 30  (2 x1  3 x2 )  (18  32 x1  12 x  48 x2  36 x1 x2 2

2 1

4  F19 x   - ci exp  i 1  4  F20 x   - ci exp  i 1 

3

j 1

ij

j

 x   - X - a X - a  x   - X - a X - a 

  c  c

F21 x   - X - ai X - ai   ci T

i 1

7

F22

T

i

i

7

F23

i 1

2 2

2  pij    6 2   a x  p   ij j ij j 1 

 a x

5

i 1

Shekel 3

[-600,600]30



PT

Shekel 1

0

F16 x   4 x12  2.1x14 

CE

AC

Hartman 2

[-32,32]30



  1)2 1  sin 2(3xi  1)  ( xi  1)2 1  sin 2(2xn )  

n

 (x

 1 F14 x       500 

Kowalik’s Function

Hartman 1

0

k( xi  a) xi  a  u xi , a, k , m   0  a  xi  a  m xi  a k( xi  a)

Shekel’s Foxholes Function

Goldstein-Price Function

[-5.12,5.12]30

m

i 1

Six-hump camel back

-12569.5

  1)2 1  10 sin 2(yi  1 )  ( yi  1)2  

( y

n

i 1

    

 cos i 1

n 1 i 1

i 1

2 i

   20  e cos2xi  

M

Generalized Penalized Function 2



1 4000

  10 sin y1   n 

Generalized Penalized Function 1

n

n

x



[-500,500]30

AN US

F11 x  

Griewank



xi

 1 F10 x   -20 exp  0.2  n  1  exp n

f min

S

 10 cos2xi   10

2 i

i 1

Ackley

Branin



F8 x   - xi sin n

CR IP T

Name Generalized Schwefel’s problem

T

i

i

-1.0316 0.39788

[-2,2]2

3

[0,1]3

-3.86

[0,1]3

-3.32

[0,10]4

-10.1532

[0,10]4

-10.4028

[0,10]4

-10.5363

1

1

i

1

i

Table 3 CEC2015 benchmark functions Name Shifted and Rotated Expanded two-Peak Trap

Test Functions

F24( x)  f1( x)  f M 1( x  o1 )  f f = Expanded Two-Peak Trap function

* 1

S

f min

[-100,100]10

100

ACCEPTED MANUSCRIPT F25( x)  f 2( x)  f M 2( x  o2 )  f 2*

Shifted and Rotated Expanded Five-Uneven-Peak Trap

[-100,100]8

200

[-100,100]4

300

[-100,100]10

400

[-100,100]4

500

f = Expanded Five-Uneven-Peak Trap

  x  o3      f 3* F26( x)  f 3( x)  f  M 3  20   

Shifted and Rotated Expanded Equal Minima

f = Expanded Equal Minima

  x  o4 F27( x)  f 4( x)  f  M 4   20 

Shifted and Rotated Expanded Decreasing Minima

    f 4* 

f = Expanded Decreasing Minima

f = Expanded Uneven Minima

CR IP T

  x  o5      f 5* F28( x)  f 5( x)  f  M 5  20   

Shifted and Rotated Expanded Uneven Minima

  x  o6      f 6* F29( x)  f 6( x)  f  M 6  5   

Shifted and Rotated Expanded Himmelblau’s Function

f = Expanded Himmelblau’s Function

F30( x)  f 7( x)  f M 7 x  o7   f7*

Shifted and Rotated Expanded Six-Hump Camel Back

f = Expanded Six-Hump Camel Back

AN US

  x  o8      f 8* F31( x)  f 8( x)  f  M 8  5   

Shifted and Rotated Modified Vincent function

[-100,100]8

600

[-100,100]10

700

[-100,100]4

800

[-100,100]10

900

[-100,100]10

1000

[-100,100]10

1100

f = Modified Vincent function

  N

F32( x)  f 9( x) 

i 1

i

N  10

* i g i( x)  bias i   f 9*

  [10,20,10,20,10,20,10,20,10,20]   [1,1,1e  6, ,1e  4, ,1e  5,1e  5] bias  [0, ,0]  f 9*

M

Composition Function 1

ED

g1 2( x): Rotated Sphere Function g3  4( x): Rotated High Conditioned Elliptic Function g5 6( x): Rotated Bent Cigar Function g7 8( x): Rotated Discus Function g910(x): Rotated Different Powers Function

PT

F33( x)  f10( x) 

  N

i 1

i

* i g i( x)  bias i   f10*

CE

N  10   [10,20,30,40,50,60,70,80,90,100]

  [1e  5,1e - 5,1e  6, ,1e  4,1,1]

AC

Composition Function 2

bias  [0,10,20,30,40,50,60,70,80,90]  f10*

g1 2( x): Rotated High Conditioned Elliptic Function g3  4( x): Rotated Different Powers Function g5  6( x): Rotated Bent Cigar Function g7 8( x): Rotated Discus Function g910(x): Rotated Sphere Function N

F34( x)  f11( x) 

  *  g (x)  bias   f i

i i

i

* 11

i 1

Composition Function 3

N  10   [10, ,10]   [0.1,0.1,10,10,10,10,1e  3,1e - 3,1,1] bias  [0,0,0,0,0,0,0,0,0,0]  f11*

g12( x): Rotated Rosenbrock’s Function

ACCEPTED MANUSCRIPT g3  4( x): Rotated Rastrigin’s Function g5  6( x): Rotated HappyCat Function g 7  8(x): Rotated Scaffer’s Function g9 10( x): Rotated Expanded Modified Schwefel’s Function

F35( x)  f12( x) 

  N

i 1

i

* i g i( x)  bias i   f12*

N  10   [10,10,20,20,30,30,40,40,50,50]

  [0.1,0.1,10,10,10,10,1e  3,1e - 3,1,1] bias  [0,0,0,0,0,0,0,0,0,0]  f12* g1 2( x): Rotated Rosenbrock’s Function g3  4( x): Rotated Rastrigin’s Function g5 6( x): Rotated HappyCat Function g7 8( x): Rotated Scaffer’s F6 Function g910(x): Rotated Expanded Modified Schwefel’s Function

  i 1

1200

* i g i( x)  bias i   f13*

N

i

AN US

F36( x)  f13( x) 

[-100,100]10

CR IP T

Composition Function 4

N  10   [10,20,30,40,50,60,70,80,90,100]

  [0.1,10,10,0.1,2.5,1e  3,1e - 3,1e - 3,2.5,10] bias  [0,0,0,0,0,0,0,0,0,0]  f13*

M

[-100,100]10

1300

[-100,100]10

1400

[-100,100]10

1500

PT

ED

Composition Function 5

g 1(x ): Rotated Rosenbrock’s Function g2( x): Rotated HGBat Function g3( x): Rotated Rastrigin’s Function g4(x): Rotated Ackley’s Function g5( x): Rotated Weierstrass Function g6( x): Rotated Katsuura Function g7( x): Rotated Scaffer’s F6 Function g8( x): Rotated Expanded Griewank’s plus Rosenbrock’s Function g9( x): Rotated HappyCat Function g10( x): Rotated Expanded Modified Schwefel’s Function

CE

F37( x)  f14( x) 

AC

Composition Function 6

  N

i 1

* i g i( x)  bias i   f14*

N  10   [10,10,20,20,30,30,40,40,50,50]

  [10,1,10,1,10,1,10,1,10,1]

bias  [0,20,40,60,80,100,120,140,160,180]  f14* g1,3,5,7,9( x): Rotated Rastrigin’s Function

g2,4,6,8,10( x): Rotated Expanded Modified Schwefel’s Function

F38( x)  f15( x) 

  N

i 1

Composition Function 7

i

i

* i g i( x)  bias i   f15*

N  10   [10,20,30,40,50,60,70,80,90,100]

  [0.1,10,10,0.1,2.5,1e - 3,1e  3,1e  3,2.5,10] bias  [0,0,0,0,0,0,0,0,0,0]  f15*

ACCEPTED MANUSCRIPT

CR IP T

g1( x): Rotated Rosenbrock’s Function g2( x): Rotated HGBat Function g3( x): Rotated Rastrigin’s Function g4(x): Rotated Ackley’s Function g5( x): Rotated Weierstrass Function g6( x): Rotated Katsuura Function g7( x): Rotated Scaffer’s F6 Function g8( x): Rotated Expanded Griewank’s plus Rosenbrock’s Function g 9(x): Rotated HappyCat Function g10( x): Rotated Expanded Modified Schwefel’s Function

4.2 Parameter settings

To make a fair comparison, all algorithms are implemented in java on jMetal software (Durillo & Nebro, 2011). Experimental studies are conducted on Intel Core i5-4210U CPU @1.70GHz, 4GB RAM, with Microsoft Windows 8 operating system. Table 4 presents the initial values of the relevant control parameters for the different metaheuristics

AN US

including LSHADE, TLBO, EBOwithCMAR, NDHS, BA, CLPSO, GWO, and CGWO. The parameter settings of these metaheuristics are obtained from the original literature. The relevant common parameter settings for the algorithms are as follows:

The population size is 30 (except for LSHADE and EBOwithCMAR).



The maximal number of function evaluation (NFEmax) is 100,000.



30 independent runs are conducted for each algorithm on each test problem.

M



Table 4 Parameter setting of algorithms Control parameters =1.8; 𝑟 𝑎𝑟𝑐 = 2.6; 𝑝=0.11; 𝐻=6 No control parameters 𝑃𝑆1,𝑚𝑎𝑥 =18D, 𝑃𝑆1,𝑚𝑖𝑛 =4, 𝑃𝑆2,𝑚𝑎𝑥 =46.8D, 𝑃𝑆2,𝑚𝑖𝑛 =10, 𝐻=6, 𝑃𝑆3 =4+(3log(D)), 𝑝𝑟𝑜𝑏𝑙𝑠 =0.1, 𝑐𝑓𝑒𝑙𝑠 =0.25* NFEmax HMCR=0.99, PAR_max=0.99, PAR_min = 0.01, tournament size =2 Loudness: A=0.25, plus rate: r=0.5, minimum frequency: Qmin=0, maximum frequency: Qmax=2

CLPSO

ED

𝑖𝑛𝑖𝑡

c= 1.49445; m=0; 𝑝𝑐 = 0.5

𝑒 𝑡 −𝑒 𝑡(1) 𝑒 𝑡(𝑝𝑠 )−𝑒 𝑡(1)

where, 𝑡 = 0.5 × (0:

1 𝑝𝑠 −1

: 1)

a decreases linearly from 2 to 0 Neighborhood style:C25, a decreases linearly from 2 to 0

CE

GWO CGWO

𝑟𝑁

PT

Algorithms LSHADE TLBO EBOwithCMAR NDHS BA

4.3 The performance of CGWO with different neighborhood structures

AC

In this subsection we generate six variations of the CGWO by adopting six different neighborhood structures in Fig. 2 and compare the performance of each variation on the benchmarks from Tables 1-2. Tables 5-6 show the statistical results (best, worst, mean, and standard deviation values) on different variations over all runs. We must point out that the marks L9, C9, C13, C21, and C24 in the Tables 5-6 are short for the six corresponding neighborhoods in the Section 2.2. More precisely, CGWO (L9) denotes CGWO with L9 neighborhood. Similarly, CGWO (C9) represents CGWO using C9 neighborhood. From Table 5, it can be observed that CGWO (C25) has a better performance for the unimodal benchmarks except F7 with comparison to the other versions of CGWO since CGWO (C25) obtains the best mean values on 6 out of 7 unimodal benchmarks. For multimodal benchmarks in Table 6, it is also found that none of the neighborhood structures has a dominant performance for all test problems, which implies the neighborhood size is sensitive to the behavior of the CGWO and the setting of neighborhood size depends on specific problems. However, on the whole, CGWO with C25 are slightly better than its other versions on these multimodal benchmarks. Consequently, C25 is regarded as an appropriate neighborhood structure

ACCEPTED MANUSCRIPT

to form the best CGWO that will be studied in more detail in the following subsections. Table 5 Results by CGWO with different neighborhoods on unimodal benchmark functions Problem

Statistics

CGWO(L5)

CGWO(L9)

CGWO(C9)

CGWO(C13)

CGWO(C21)

CGWO(C25)

Best

1.2E-171

6.4E-210

0

0

0

0

Worse

3.2E-165

3.8E-204

0

0

0

0

Mean

1.9E-166

2.8E-205

0

0

0

0

0

0

F1 0

0

0

0

1.02E-96

3.0E-120

1.5E-277

3.5E-239

2.4E-306

0

Worse

4.57E-95

1E-116

9.4E-271

4.1E-234

6.6E-299

0

Mean

1.76E-95

1.7E-117

5.7E-272

4.6E-235

3.5E-300

0

std

1.50E-95

2.5E-117

0

0

Best

1.83E-14

7.59E-23

4.37E-61

2.68E-47

8.42E-65

3.56E-74

Worse

7.66E-09

4.34E-14

9.75E-43

9.9E-32

2.25E-45

6.21E-55

Mean

5.69E-10

1.82E-15

4.41E-44

4.16E-33

7.79E-47

4.18E-56

std

1.59E-09

7.91E-15

1.82E-43

1.81E-32

4.11E-46

1.31E-55

Best

7.26E-25

1.02E-33

2.72E-70

3.62E-73

Worse

3.95E-22

5.95E-31

Mean

3.23E-23

1.3E-31

std

7.22E-23

1.7E-31

Best

25.05527

24.81441

Worse

27.08967

27.00348

Mean

25.89631

25.43051

std

4.91E-01

Best

0

F2

1.08E-61

1.85E-51

4.6E-64

5.17E-65

7.75E-63

1.46E-52

2.63E-65

4.38E-66

2.32E-62

3.79E-52

8.58E-65

1.32E-65

24.24317

24.86651

24.66267

24.26925

27.06747

27.044

28.71735

26.2198

25.64893

25.46054

25.55428

25.42201

5.30E-01

7.68E-01

5.63E-01

8.03E-01

5.26E-01

0

0

0

0

0

ED

Mean

Best Worse F7

CE

Mean std

M

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

0

PT

std

0

2.53E-58

F5

Worse

0

3.46E-67

F4

F6

AN US

F3

CR IP T

std Best

1.34E-04

7.73E-05

1.56E-04

9.66E-05

1.21E-04

1.21E-04

7.01E-04

7.52E-04

9.21E-04

6.93E-04

9.25E-04

8.07E-04

3.33E-04

3.84E-04

4.48E-04

3.69E-04

4.40E-04

4.30E-04

1.32E-04

1.82E-04

1.86E-04

1.41E-04

1.92E-04

1.83E-04

AC

Table 6 Results by CGWO with different neighborhood on multimodal benchmark functions

Problem

Statistics

CGWO(L5)

CGWO(L9)

CGWO(C9)

CGWO(C13)

CGWO(C21)

CGWO(C25)

Best

-5704.235

-7359.57

-7447.62

-7066.52

-6794.18

-6702.907

Worse

-3131.247

-3349.67

-3386.02

-3255.19

-3330.91

-3256.599

Mean

-3591.132

-4319.03

-4772.63

-4956.15

-4685.34

-5074.777 1226.6235

F8

std

442.26618

1104.816

1288.144

1261.855

1274.428

Best

0

0

0

0

0

Worse

15.40262

4.64472

11.30515

7.471893

5.27715

16.699364

Mean

1.3744344

0.292568

1.968907

1.159881

2.873349

4.2002956

std

3.5298336

0.974567

3.190097

2.315817

4.413376

5.4555579

Best

3.997E-15

4E-15

4E-15

4E-15

4E-15

4E-15

0

F9

F10

ACCEPTED MANUSCRIPT

Worse

3.997E-15

4E-15

7.55E-15

4E-15

4E-15

4E-15

Mean

3.997E-15

4E-15

6.96E-15

4E-15

4E-15

4E-15

std

0

0

1.35E-15

0

0

0

Best

0

0

0

0

0

0

Worse

0

0

1.81E-02

2.28E-02

4.01E-02

5.44E-02

Mean

0

0

3.13E-03

1.92E-03

3.15E-03

5.43E-03

std

0

0

5.89E-03

5.50E-03

8.47E-03

1.20E-02

Best

7.899E-08

5.57E-08

6.01E-08

6.13E-08

3.46E-08

6.01E-08

Worse

6.71E-03

6.47E-03

1.98E-02

1.33E-02

1.34E-02

1.98E-02

Mean

4.39E-04

2.16E-04

5.55E-03

1.73E-03

3.21E-03

5.55E-03

std

1.66E-03

1.18E-03

6.23E-03

3.40E-03

4.39E-03

6.23E-03

Best

7.045E-07

1.08E-06

1.49E-06

8.13E-07

6.01E-07

9.26E-07

Worse

3.64E-01

1.21E-01

3.64E-01

2.59E-01

4.85E-01

6.06E-01

Mean

2.56E-02

1.62E-02

1.33E-01

6.24E-02

1.29E-01

1.88E-01

std

7.61E-02

4.19E-02

1.07E-01

8.52E-02

1.46E-01

1.69E-01

9.98E-01

9.98E-01

F11

F13

Best

9.98E-01

9.98E-01

Worse

9.98E-01

9.98E-01

Mean

9.98E-01

9.98E-01

std

3.08E-14

2.26E-14

Best

3.07E-04

3.07E-04

Worse

4.12E-04

1.22E-03

Mean

3.11E-04

3.38E-04

std

1.90E-05

1.67E-04

Best

-1.031628

Worse

-1.031628

9.98E-01

9.98E-01

9.98E-01

9.98E-01

9.98E-01

9.98E-01

9.98E-01

9.98E-01

1.55E-14

4.00E-14

1.35E-14

0

3.07E-04

3.07E-04

3.07E-04

3.07E-04

2.04E-02

1.59E-03

2.04E-02

2.04E-02

2.34E-03

3.50E-04

3.05E-03

1.71E-03

6.11E-03

2.35E-04

6.91E-03

5.08E-03

-1.03163

3.07E-04

-1.03163

-1.03163

-1.03163

-1.03163

2.04E-02

-1.03163

-1.03163

-1.03163

ED

-1.031628

-1.03163

2.34E-03

-1.03163

-1.03163

-1.031623

std

1.312E-10

1.27E-10

6.11E-03

2.43E-10

9.75E-11

8.47E-11

Best

3.98E-01

3.98E-01

3.98E-01

3.98E-01

3.98E-01

3.98E-01

Worse

3.98E-01

3.98E-01

3.98E-01

3.98E-01

3.98E-01

3.98E-01

3.98E-01

3.98E-01

3.98E-01

3.98E-01

3.98E-01

3.98E-01

2.37E-09

1.96E-09

2.22E-09

1.71E-09

1.59E-09

0

Best

3

3

3

3

3

3

Worse

3.0000008

3.000001

3.000001

3.000001

3.000001

3.0000006

Mean

3.0000001

3

3

3

3

3

std

1.661E-07

7.16E-08

1.3E-07

3.19E-07

2.79E-07

2.26E-15

Mean

AC

CE

std

PT

Mean

F17

F18

M

F15

AN US

9.98E-01

9.98E-01

F14

F16

CR IP T

F12

Best

-3.862782

-3.86278

-3.86278

-3.86278

-3.86278

-3.862782

Worse

-3.86278

-3.86278

-3.86278

-3.86277

-3.86278

-3.86278

Mean

-3.862782

-3.86278

-3.86278

-3.86278

-3.86278

-3.862781

std

5.41E-07

6.63E-07

1.09E-06

2.41E-06

1.05E-06

1.09E-16

Best

-3.301995

-3.322

-3.322

-3.322

-3.322

-3.322

Worse

-3.301994

-3.32199

-3.20275

-3.20274

-3.20273

-3.201948

Mean

-3.301995

-3.322

-3.29026

-3.2982

-3.29026

-3.302106

std

2.856E-07

4.21E-08

5.35E-02

4.84E-02

5.35E-02

4.52E-02

Best

-10.1532

-10.1532

-10.1532

-10.1532

-10.1532

-10.1532

F19

F20

F21

ACCEPTED MANUSCRIPT

Worse

-10.15319

-10.1532

-5.0552

-10.1532

-10.1532

-5.10077

Mean

-10.1532

-10.1532

-9.98326

-10.1532

-10.1532

-9.984783

std

2.485E-06

3.52E-06

9.31E-01

1.11E-06

2.38E-06

9.22E-01

Best

-10.4029

-10.40294

-10.4029

-10.4029

-10.4029

-10.40294

Worse

-10.40292

-10.4029

-5.08767

-5.08767

-5.08767

-5.087672

Mean

-10.4029

-10.40294

-10.2258

-10.2258

-10.2258

-10.22576

std

3.92E-06

3.72E-06

9.70E-01

9.70E-01

9.70E-01

9.70E-01

Best

-10.53641

-10.5364

-10.5364

-10.5364

-10.5364

-10.53641

Worse

-10.5364

-10.5364

-5.12848

-10.5364

-10.5364

-10.5364

Mean

-10.5364

-10.5364

-10.3561

-10.5364

-10.5364

-10.53641

std

3.41E-06

3.42E-06

9.87E-01

1.85E-06

2.05E-06

1.39E-06

F22

CR IP T

F23

Table 7 Wilcoxon sign rank test on the solution by CGWO with different structures for benchmarks in Tables 1-2 (a level of significance α=0.05). CGWO(C25) vs.

CGWO(C25) vs.

R

+

R

-

CGWO(L9)

p-value

R

+

R

-

CGWO(C9)

p-value

R

CGWO(C25) vs.

+

R

-

CGWO(C25) vs.

CGWO(C13)

AN US

CGWO(L5)

case

CGWO(C25) vs.

p-value

R

+

R

-

CGWO(C21)

p-value

R

+

R-

p-value

465

0

1.7E-06 +

465

0

1.7E-06 +

232.5

232.5

1=

232.5

232.5

1=

232.5

232.5

1=

F2

465

0

1.7E-06 +

465

0

1.7E-06 +

465

0

1.7E-06 +

465

0

1.7E-06 +

465

0

1.7E-06 +

F3

465

0

1.7E-06 +

465

0

1.7E-06 +

457

8

3.9E-06+

465

0

1.7E-06 +

432

33

4.1E-05 +

F4

465

0

1.7E-06 +

465

0

1.7E-06 +

465

0

1.7E-06 +

465

0

1.7E-06 +

347

118

1.9E-02 +

F5

411

54

2.4E-04 +

340

125

3.4E-02 +

397

68

7.2E-04 +

345

120

2.1E-02 +

360

105

8.7E-03 +

F6

232.5

232.5

1=

232.5

232.5

1=

232.5

232.5

1=

232.5

232.5

1=

232.5

232.5

1=

F7

129

336

3.3E-02 -

178

287

2.6E-01 =

345

120

3.3E-02 +

177

288

2.5E-01 =

350

115

1.6E-02 +

F8

447

18

1.0E-05 +

341

124

2.6E-02 +

351

114

1.5E-02 +

271

194

4.3E-02 +

359

106

9.3E-03 +

125

340

3.9E-02 -

77

232.5

232.5

1=

232.5

F11

126.5

338.5

7.8E-03 -

126.5

F12

50

415

1.7E-04 -

41

F13

42

423

8.9E-05 -

F14

270

195

4.2E-01 =

F15

313

152

9.8E-02 =

F16

362

103

F17

268

197

1.8E-04 -

157.5

307.5

9.3E-02 =

108

357

5.0E-02 =

181

284

2.3E-01 =

1=

457.5

7.5

5.7E-07 +

232.5

232.5

1=

232.5

232.5

1=

338.5

7.8E-03 -

226

239

7.2E-01 =

171

294

2.4E-01 =

196.5

268.5

4.9E-01 =

424

8.2E-05 -

233

232

9.9E-01 =

42

423

8.9E-05 -

115

350

1.6E-02 -

35

430

4.9E-05 -

155

310

1.1E-01 =

76

389

1.3E-03 -

161

304

1.4E-01=

258

207

5.6E-01 =

237

228

9.1E-01 =

244.5

220.5

8.6E-01 =

247

218

7.1E-01 =

150

315

8.9E-02 =

352

113

2.9E-02 +

99

366

6.0E-03 -

325

140

2.1E-02 +

7.7E-03 +

322

143

6.6E-02 =

269

196

4.5E-01 =

285

180

2.8E-01 =

257

208

6.1E-01 =

4.7E-01 =

259

206

5.9E-01 =

281

184

3.2E-01 =

278

187

3.5E-01 =

290

175

2.7E-01 =

CE

PT

388

232.5

AC

F18

ED

F9 F10

M

F1

299

166

1.7E-01 =

334

131

3.7E-02 +

237

228

9.3E-01 =

232

233

9.9E-01 =

265

200

5.0E-01 =

190

275

3.8E-01 =

325

140

5.7E-02 +

234

231

9.8E-01 =

255

210

6.4E-01 =

284

181

2.9E-01 =

254

211

6.6E-01 =

176

289

2.4E-01 =

345

120

2.1E-02 +

360

105

8.9E-03 +

345

120

2.1E-02 +

F21

366

99

6.0E-03 +

393

72

9.6E-04 +

244

221

8.1E-01 =

282

183

3.1E-01 =

255

210

6.4E-01 =

F22

365

100

6.4E-03 +

321

144

6.9E-02 =

199

266

4.9E-01 =

226

239

8.9E-01 =

179

286

2.7E-01 =

F23

442

23

1.6E-05 +

408

57

3.1E-04 +

325

138

1.6E-02 +

238

227

9.1E-01 =

365

100

6.4E-03 +

F19 F20

+/=/-

10/8/5

10/9/4

10/13/0

6/14/3

9/13/1

ACCEPTED MANUSCRIPT

4.4 Comparison with other algorithms This subsection is divided into three parts. The first part presents the comparison experiment between CGWO and other metaheuristic algorithms such as LSHADE, TLBO, EBOwithCMAR, NDHS, BA, CLPSO on unimodal functions and multimodal functions. The second part shows the comparison results between CGWO and other metaheuristics on CEC2015 test suite with multiple global peaks. The third part gives a comparison experiment using some improved versions of the GWO and multi-niche metaheuristics on multi-niche problems from CEC2015 test suite. 4.4.1 Experiment one To assess the performance of CGWO, it is compared with seven other metaheuristics on benchmark problems. These

CR IP T

algorithms in the study are LSHADE (winner in CEC2014 competition on Real-Parameter Single Objective Optimization Benchmarks), TLBO (it is a very efficient optimization algorithm), EBOwithCMAR (winner in CEC 2017 competition on bound constrained benchmark set), NDHS (it is a novel HS approach, and performs better than HS and its other variants), BA (BA is recent metaheuristic and shows better performance than basic PSO, GA, etc.), CLPSO (it is popular PSO variant and is usually used to compare performance of one algorithm), and GWO. Tables 8-9 report the statistical results for each algorithm on benchmarks.

AN US

According to the results in Table 8, CGWO can provide very competitive results compared to the other algorithms. In detail, both of TLBO and CGWO can achieve the best average results on 4 out of 7 problems, whereas LSHADE, EBOwithCMAR, NDHS, BA, CLPSO, and GWO can provide the best average ones on 2, 1, 2, 0, 2, and 1 test functions, respectively. Note that the unimodal benchmarks are suitable for testing exploitation ability of the metaheuristics (S. Mirjalili, et al., 2014). Therefore, regarding exploiting the optimum, CGWO and TLBO are in the first rank; LSHADE, NDHS and CLPSO come in the second place; EBOwithCMAR and GWO are in the third place; the BA ranks at the rear. In addition, CGWO outperforms the basic GWO on 6 of 7 unimodal benchmarks in terms of the average value. The reason seems

M

straightforward: each wolf in CGWO has its own neighbors and only interacts with its neighbors. The three best wolves in its neighbors can be used to guide the movement of other candidate wolves within neighborhood toward their good areas.

ED

Obviously, CA can contribute to exploitation or local search of CGWO. Table 8 Results by different algorithms on unimodal benchmark functions EBOwithC

TLBO

NDHS

BA

CLPSO

GWO

CGWO

MAR

Best

8.56E-30

0

1.42E-24

0

1.22E-04

5.76E-12

1.5E-206

0

Worst

5.28E-26

0

2.87E-22

0

4.91E-01

6.05E-11

7.3E-201

0

2.94E-27

0

6.67E-23

0

8.07E-02

3.37E-11

3.7E-202

0

9.50E-27

0

6.79E-23

0

1.03E-01

1.40E-11

1.40E-89

0

3.11E-14

0

2.67E-12

0

1.61E-02

6.41E-08

1.68E-45

0

F1 Mean std

AC

Best F2

LSHADE

PT

Statistics

CE

Problem

Worst

1.26E-12

0

4.97E-11

0

10.12344

3.14E-07

1.41E-34

0

Mean

3.26E-13

0

1.63E-11

0

5.1E-01

1.95E-07

4.96E-36

0

std

3.41E-13

0

1.45E-11

0

1.822092

5.70E-08

2.58E-35

0

Best

1.00002

0

4.65748

842.1160

6.3E-02

57.4581

1.68E-45

3.57E-74

Worst

1.00993

0

33.85209

4805.8723

10196.4934

319.3870

1.41E-34

6.24E-55

Mean

1.00145

0

16.91642

2036.922

1090.884

165.6838

4.96E-36

4.18E-56

std

2.25E-03

0

8.814452

893.3847

2552.026

65.5480

2.58E-35

1.82E-43

Best

1.24E-07

9.34E-47

6.89E-06

2.01E-57

1.20E-02

8.1276

3.36E-41

3.62E-73

Worst

1.73E-06

4.72E-46

2.18E-04

2.11E-46

5.89E-01

14.0159

2.43E-36

5.17E-65

Mean

6.61E-07

3.12E-46

4.06E-05

7.02E-48

1.43E-01

10.3689

2.39E-37

4.38E-66

F3

F4

ACCEPTED MANUSCRIPT

std

4.08E-07

4.27E-45

3.90E-05

3.85E-47

1.18E-01

1.3238

5.66E-37

1.32E-65

Best

8.46335

28.72192

12.63520

23.57316

28.72252

2.2130

24.38665

24.24317

Worst

11.8763

28.94249

15.43227

123.4207

90029.3932

72.4817

27.0883

27.06747

Mean

10.2642

28.87576

14.21070

27.52616

3039.519

23.2816

25.75554

25.64893

std

9.87E-01

6.04E-02

6.49E-01

18.11298

16429.8

16.8490

7.32E-01

7.68E-01

F5

Best

0

0

0

0

0

0

0

0

Worst

0

0

0

1

2

0

0

0

Mean

0

0

0

1.67E-01

1.00E-01

0

0

0

std

0

0

0

3.79E-01

4.03E-01

0

0

0

Best

1.00E-03

2.20E-04

5.28E-04

4.76E-04

1.32E-03

1.25E-24

1.90E-04

1.21E-04

Worst

2.41E-03

1.48E-01

3.18E-03

8.59E-04

1.95E-01

4.69E-23

1.20E-03

8.07E-04

Mean

1.61E-03

3.39E-02

1.91E-03

4.68E-04

7.48E-02

1.03E-23

5.49E-04

4.30E-04

std

3.46E-04

3.69E-02

6.17E-04

3.57E-04

5.06E-02

9.78E-24

2.31E-04

1.83E-04

F7

CR IP T

F6

Table 9 presents the statistical values obtained by different algorithms on multimodal benchmarks from Table 2. In

AN US

contrast to unimodal benchmarks, multimodal benchmarks are suitable for measuring exploration ability of the algorithm (S. Mirjalili, et al., 2014). It can be obviously seen from this Table 9 that the LSHADE performs better than other seven methods on 9 out of 16 test functions (i.e., F11-13, F16-18, F20, and F22-23). Thus, the LSHADE is the most effective one among these compared algorithms. The CGWO is the second most effective approach, as it can obtain the best average result on 5 benchmarks (i.e., F10-11, F14, F17, and F19). EBOwithCMAR is the third most effective, since it generates the optimal average result on 4 problems (i.e., F11, F15, F17, and F21). TLBO can obtain the best average one on F9 and F11, consequently, TLBO is the fourth most effective algorithm. CLPSO obtains the best result on only one problem (i.e., F8). The

M

rest metaheuristics including NDHS, BA and GWO cannot get any one the best result on these benchmarks. Therefore, concerning exploration, CGWO is competitive with LSHADE and superior to the compared algorithms for most multimodal

ED

benchmarks. It means that CGWO with CA model has a better global search ability than its counterparts except for LSHADE on most multimodal functions. The major reason for the good performance of CGWO is that the overlapping neighbors provide an information diffusion scheme, and they enable each cells information to diffuse toward the whole wolf population,

PT

which is favorable for exploring the search space. Meanwhile, CA provides a niching mechanism for GWO, where wolf population is divided into many subpopulations and update operation is executed independently on these subpopulations. That is, different subpopulation may have different search directions in solution space and help to explore new promising

Statistics

LSHADE

AC

Problem

CE

areas of the search space.

Table 9 Results by different algorithms on multimodal benchmark functions EBOwithC TLBO

NDHS

BA

CLPSO

GWO

CGWO

MAR

Best

-1.26E+04

-5.29E+04

-9.93E+03

-1.26E+04

-1.07E+04

-1.26E+04

-8.35E+03

-6.70E+03

Worst

-1.26E+04

-2.63E+03

-7.77E+03

-1.26E+04

-4.20E+03

-1.26E+04

-3.86E+03

-3.26E+03

Mean

-1.26E+04

-3.88E+03

-9.18E+03

-1.26E+04

-8.28+03

-1.26E+04

-6.86E+03

-5.07E+03

std

9.63E-02

5.71E+02

7.98E-01

5.06E-06

1.58E+03

6.84E-08

1.02E+03

1.23E+03

Best

6.19E-03

0

3.87E-02

0

2.23E+01

4.44E-05

0

Worst

1.2211

0

3.04331

1.10E-01

1.41E+02

1.32E-03

2.50E+01

1.66E+01

Mean

9.78E-02

0

7.98E-01

1.80E-02

5.65E+01

3.65E-04

5.68E+00

4.20E+00

std

2.26E-01

0

8.08E-01

2.15E-02

2.25E+01

2.81E-04

6.42E+00

5.46E+00

Best

2.84E-14

4.44E-15

4.22E-13

4.44E-15

8.75E-03

1.56E-06

7.55E-15

4.00E-15

F8

0

F9

F10

ACCEPTED MANUSCRIPT

Worst

1.45E-13

4.46E-15

1.08E-11

4.44E-15

4.66E-01

5.53E-06

1.47E-14

4.00E-15

Mean

5.35E-14

4.44E-15

2.14E-12

4.44E-15

9.22E-02

3.15E-06

8.62E-15

4.00E-15

std

2.54E-14

0

1.91E-12

0

9.36E-02

1.07E-06

1.90E-15

0

Best

0

0

0

0

1.52E-03

5.88E-09

0

0

Worst

0

0

0

1.91E-01

7.04E-01

1.09E-06

1.79E-02

0

Mean

0

0

0

7.45E-02

2.78E-01

1.17E-07

1.96E-03

0

std

0

0

0

5.07E-02

2.18E-01

2.18E-07

4.69E-03

0

Best

0

3.45E-01

9.96E-01

5.26E-07

5.99E-03

3.28E-13

1.39E-07

6.01E-08

Worst

0

1.11E+00

9.96E-01

1.18E-06

9.49E-01

9.57E-12

2.01E-02

1.98E-02

Mean

0

5.99E-01

9.96E-01

8.73E-07

1.15E-01

1.90E-12

6.86E-03

5.55E-03

std

0

2.06E-01

5.82E-16

1.60E-07

2.08E-01

1.93E-12

6.38E-03

6.23E-03

Best

7.18E-30

2.21E+00

1.15E-24

2.29E-05

4.49E-01

8.59E-12

2.15E-06

9.26E-07

Worst

9.36E-27

3.68E+00

1.22E-21

1.07E-01

2.55E+00

6.76E-11

4.85E-01

6.06E-01

Mean

1.40E-27

2.92E+00

9.80E-23

2.97E-02

1.48E+00

2.63E-11

2.07E-01

1.88E-01

std

2.21E-27

3.67E-01

2.39E-22

2.82E-02

4.64E-01

1.24E-11

1.41E-01

1.69E-01

9.98E-01

9.98E-01

F11

CR IP T

F12

F13

1

9.98E-01

N.A

1

4.98E+02

N.A

9.98E-01

9.98E-01

9.98E-01

9.98E-01

9.98E-01

9.98E-01

9.98E-01

9.98E-01

Mean

1

4.51E+01

N.A

9.98E-01

9.98E-01

9.98E-01

9.98E-01

9.98E-01

std

0

1.23E+02

N.A

2.06E-16

2.89E-16

2.15E-22

1.28E-13

0

Best

9.04E-04

3.55E-04

3.07E-04

3.07E-04

3.07E-04

3.85E-04

3.07E-04

3.07E-04

Worst

9.04E-04

5.26E-03

3.21E-04

2.04E-02

2.97E-02

2.04E-02

2.04E-02

2.04E-02

Mean

9.04E-04

1.63E-03

3.17E-04

5.76E-03

1.03E-02

1.71E-03

3.04E-03

5.71E-04

std

2.81E-19

1.50E-03

3.57E-06

8.96E-03

1.02E-02

7.08E-03

6.91E-03

7.75E-05

Best

-1.0316

-1.0316

1.63E+08

-1.0316

-1.0316

-1.0316

-1.0316

-1.03163

Worst

-1.0316

-1.0219

2.09E+08

-1.0316

-1.0316

-1.0316

-1.0316

-1.03163

F14

-1.0316

-1.0319

1.99E+08

-1.0316

-1.0316

-1.0316

-1.0316

-1.03163

std

6.66E-16

1.99E-03

1.12E+07

2.50E-10

2.81E-10

6.72E-10

7.38E-10

8.46E-11

Best

0.397887

0.397887

0.397887

3.98E-01

3.98E-01

3.98E-01

0.397887

0.397887

Worst

0.397887

0.729213

0.397887

3.98E-01

3.98E-01

3.98E-01

0.397887

0.397887

Mean

0.397887

0.423606

0.397887

3.98E-01

3.98E-01

3.98E-01

0.397887

0.397887

std

0

7.80E-02

0

1.60E-11

1.01E-14

0

1.59E-08

0

CE

PT

Mean

F17

Best

3

3

3

3

3

3

3

3

Worst

3

33.28821

3.000

30

84

3.0000

3.0000

3

Mean

3

7.5885

3.000

5.7

5.7

3.0000

3.0000

3

std

0

9.1882

1.44E-14

8.2385

1.48E+01

3.24E-15

3.77E-07

2.26E-15

AC

F18

ED

F16

M

F15

AN US

Best Worst

Best

-3.86

-3.8624

N.A

-3.8627

-3.8627

-3.8627

-3.8627

-3.8628

Worst

-3.86

-3.0819

N.A

-3.8627

-3.0898

-3.8627

-3.8551

-3.8628

Mean

-3.86

-3.7604

N.A

-3.8627

-3.8113

-3.8627

-3.8624

-3.8628

std

0

1.99E-02

N.A

1.63E-09

1.96E-01

2.71E-15

1.64E-03

1.09E-16

Best

-3.3220

-3.3016

N.A

-3.3220

-3.3220

-3.3220

-3.3220

-3.3220

Worst

-3.3220

-2.4316

N.A

-3.2031

-1.7061

-3.3220

-3.0867

-3.2020

Mean

-3.3220

-3.0465

N.A

-3.2943

-3.1976

-3.3220

-3.2759

-3.3021

std

2.22E-15

2.03E-01

N.A

5.11E-02

3.06E-01

1.77E-12

6.99E-02

4.52E-02

Best

-10.1532

-5.0535

-10.1532

-10.1532

-10.1532

-10.1532

-10.1532

-10.1532

F19

F20

F21

ACCEPTED MANUSCRIPT

Worst

-10.1532

-3.8736

-10.1532

-2.6305

-2.6305

-10.1419

-5.0552

-5.1007

Mean

-10.1532

-4.6398

-10.1532

-4.4102

-5.6498

-10.1526

-9.3096

-9.9848

std

3.55E-15

3.58E-01

0

3.2225

3.5591

2.10E-03

1.9185

9.22E-01

Best

-10.4029

-9.6104

-10.0634

-10.4029

-10.4029

-10.4029

-10.4029

-10.4029

Worst

-10.4029

-3.7112

-10.0634

-2.7519

-2.7519

-10.4029

-10.4029

-5.0877

F22 Mean

-10.4029

-4.9199

-10.0634

-8.3653

-5.0491

-10.4029

-10.4029

-10.2258

std

8.88E-15

9.67E-01

0

3.4373

3.1047

4.41E-06

2.29E-05

9.70E-01

Best

-10.5364

-10.5345

-10.0747

-10.5364

-10.5364

-10.5364

-10.5364

-10.5364

Worst

-10.5364

-2.3923

-10.0747

-2.4217

-1.8595

-10.5361

-10.5363

-10.5364

Mean

-10.5364

-4.7705

-10.0747

-8.6883

-5.0996

-10.5362

-10.5363

-10.5364

std

0

1.5789

8.70E-16

3.4090

3.2158

1.23E-05

2.16E-05

1.39E-06

CR IP T

F23

Table 10 Wilcoxon sign rank test on the solutions by different algorithms for benchmarks in Tables 1-2 (a level of significance α=0.05). CGWO vs. CGWO vs. LSHADE

CGWO vs. TLBO

0

R+

R-

232.5

232.5

0

232.5

F5

465

0

232

+ 1.7E-06

0

0

1.7E-6 465

1

1 232.5

=

=

0

232

232

465

360

.5

0

465

F14

0

465

465

465

0

=

1.7E-06

465

1.7E-06

+

1.7E-06

+

1.7E-06 465

0

+

1.7E-06 465

0

+

1.7E-06

0

0

+

1.7E-06 465

0

p-value

1.7E-06 465

0

+

+

+

+

2.2E-04

1.7E-06

1.7E-06

1.7E-06

412

465

0

465

0

465

0

+

+

+

1.7E-06

3.4E-05

1.7E-06

1.2E-02

5.8E-01

434

31

465

0

111

354

-

+

+

1

5.7E-04

1.7E-06

232

232

+

.5

.5

0

465

400

65

=

1.7E-06 465

0

+ 465

192

1

232

232

1

=

.5

.5

=

324

141

=

397

5.9E-01

-

7.2E-04 68

206

1.7E-06

+

1.7E-06 0

259 -

0

3.7E-05 32

1.7E-6

465

465

+ 433

465

=

1.7E-06 0

465

2.8E-05 29

436

-

-

-

1.1E-01

1.7E-06

1.7E-06

364

100

273

465

0

0

2.8E-05

465 .5

+

1.7E-6

1.7E-06

1.7E-06

1.7E-06

1.7E-06

379

85,

4.9E-04

+

.5

5

+

433

32

465

0

465

232

232

.5

.5

465

0

1

+ 1.7E-6 0

360

465

0

2.1E-02 120

345

+ 0

465

111

+

1.7E-06

418

-

+

-

+

232

232

1

1.7E-06

1.6E-02

2.0E-05

.5

.5

=

465

0

0

327

+

465

1.2E-02

47

NA

0

1.2E-02 354

-

1.7E-06 465

+

1.7E-06 0

+

1.4E-04

3.7E-05

-

1.7E-06 465

-

NA

0

1.7E-06 465

8.7E-03 105

465

+

+

1.7E-06

NA

+

= +

0

0

1.9E-06 1

1.7E-06

1.7E-6

465

+ 464

0

0

+

+

465

.5

465

1.7E-06

+

0

R-

.5

+

.5

0

R+

-

1.7E-06 0

1

+

1.7E-6

465

232

465

1.7E-06

0

p-value

+

0

1.7E-06 F13

232

R-

=

1.6E-2

-

=

R+

-

-

465

.5

p-value

-

327

1.7E-06 F12

R-

1

.5

1.7E-06

+

=

232

1.7E-6 0

R+

CGWO vs. GWO

-

0

1

138

.5

0

232

53

0

+

+

F11

.5

1.2E-4

AC

465

465

107

1.7E-06

F10

.5

232

9.8E-3

105

=

232

p-value

CGWO vs. CLPSO

+

465

+

358

225

0

CE

465

0

1.7E-6

465

210

0

+

6.4E-01 F9

0

-

1.7E-6 0

465

1.7E-06

+ F8

465

-

232.5

465

1.7E-06 0

-

1.7E-06 F7

+

1.7E-6

+

R-

465

+

F6 .5

0

1.7E-06

465

.5

1.7E-06

465

R+

465

+ 465

465

p-value 1.7E-06

1.7E-6 0

0

232

0

PT

F4

0

465

=

1.7E-06 465

R-

1 232.5

+ F3

R+

=

1.7E-06 465

p-value

1

+ F2

CGWO vs. BA

AN US

465

p-value

M

R-

ED

R+

1.7E-06 F1

CGWO vs. NDHS

EBOwithCMAR

case

355

138

440

+

110

25

+

ACCEPTED MANUSCRIPT

2.8E-3 378

88

+

+

-

232

232

1

5.3E-5

.5

.5

=

+

232

232

1

1.8E-5

232

232

1

.5

.5

=

+

.5

.5

=

0

465

420

45

F16

429

F17

441

87

0

36

465

465

0

435

461

4

1

1.2E-05

.5

=

+

.5

.5

=

+

1.9E-06

1.7E-06

1.4E-04

4.3E-06

1

465

101

465

465

0

146

465

1.7E-06 NA

NA

465

0

+ 0

2.6E-05

465

437

28

465

+

-

+

3.1E-05

1.7E-6

1.0E-03

1.7E-06

465

0

392

73

465

+ 1.7E-6

435

465

0

465

+

1.7E-06 465

0

+

16/3/4

276

232

11/7/5

9

1

+

.5

.5

=

1.7E-06

232

232

1

1.5E-02

.5

.5

351

114

114

+ 2.8E-03 378

87

=

+

1.5E-02

1.7E-06 465

0

+

+

+

1.7E-06

8.7E-03

2.8E-05

0

105

360

436

29

+

-

+

1.7E-06

5.3E-05

3.1E-05

0

36

429

+

456

20

+ 351

16

435

30

-

4.3E-06

9

=

11/3/6

232

0

3.7E-01

189

+

10/7/6

0

+

456

2

+

8.5E-06

449

1.7E-6

47

0

+

=

30

1.7E-06 465

7.7E-02 319

418

+

+

NA

445

0

6.8E-03 364

463

0

+

+

-

+

3.7E-05 433

32

3.1E-05 435

+

30

+

22/0/1

10/4/9

+ 19/3/1

To investigate the convergence rate of the CGWO algorithm, some convergence curves of benchmark functions selected at random are plotted in Fig. 6. To evaluate the quality of the solution obtained by each algorithm in our experiment, we employ fitness value to assess the performance of each algorithm and use the number of function evaluation (NFE) to track the trend

ED

of fitness value in a random run.

F3

F1

10

10

LSHADE TLBO EBOwithCMAR NDHS BA CLPSO GWO CGWO

fitnes value

-100

PT

10

0

CE

fitnes value

10

-200

AC

+/=/-

232

465

NA

1.7E-6

3.1E-05 30

+

232

+

NA

+ F23

+

1.7E-06

75

M

F22

.5

2.6E-6

212

= 0

+

2.1E-06

0

+

1.7E-6 F21

1

1.2E-04

NA

6.7E-01 253

232

1.7E-6

0

+ F20

+

232

1.7E-06 465

390

AN US

465

-

1.7E-06

+

1.7E-06 F19

58

464

1

1.2E-03

407

0

1.9E-6 464

3.3E-04

465

24

1.7E-6 F18

1.7E-06

377

CR IP T

3.0E-3 F15

0

2

4 6 8 number of function evaluation

10

10

10 x 10

4

0

LSHADE TLBO EBOwithCMAR NDHS BA CLPSO GWO CGWO

-100

-200

0

2

4 6 8 number of function evaluation

10 x 10

4

ACCEPTED MANUSCRIPT

10

10

10

fitnes value

10

10

10

10

6

4

10

10

10

2

0

0

2

4 6 8 number of function evaluation

10

10 x 10

0

-5

-10

-15

10 LSHADE TLBO EBOwithCMAR NDHS BA CLPSO GWO CGWO

0

-5

-10

2

4 6 8 number of function evaluation

2

4 6 8 number of function evaluation

10

x 10

10

10

x 10

4

F15

2

0

LSHADE TLBO EBOwithCMAR NDHS BA CLPSO GWO CGWO

-2

-4

4

0

2

4 6 8 number of function evaluation

10 x 10

4

M

0

10

10

-15

0

4

F11

5

LSHADE TLBO EBOwithCMAR NDHS BA CLPSO GWO CGWO

CR IP T

10

8

fitnes value

10

F9

LSHADE TLBO EBOwithCMAR NDHS BA CLPSO GWO CGWO

fitnes value

fitnes value

10

F5

10

AN US

10

Fig. 6. Convergence curve of these algorithms when solving benchmarks

ED

Fig. 6 presents the evolution of fitness value with the increase of NFE on these problems for each algorithm. X axis represents NFE and Y axis denotes the fitness value. These curves indicate the convergence speed of each algorithm. In terms of the NFE, the convergence speed of CGWO’s curve is much faster than those of its counterparts in minimizing the fitness

PT

value for F1 and F5. CGWO is slightly faster than its compared algorithms for F3, F9, F11 and F15. CGWO is a bit slower than the other algorithms for the rest of benchmarks but still shows strong competitiveness. The reasons for the good

CE

performance of the CGWO have two main aspects as follows. Firstly, each solution in CGWO has its own topological neighbors. Thus, the interaction among solutions is restricted to their neighborhood, which favors exploitation performance of CGWO. Secondly, information diffusion mechanism by overlap among neighborhoods can maintain the population diversity

AC

for longer, usually contributing to exploration performance. We can conclude from this observation that the result of plots is consistent with our view that CGWO with CA effectively improves the convergence of the algorithm. The following subsection reveals that CGWO can locate the whole global optimums.

4.4.2 Experiment two Empirical study on benchmarks with multiple global peaks is conducted in this experiment. Success rate is used to test performance of algorithm for CEC2015 benchmarks with multiple global optimums. Note that the success rate is the percentage of runs in which all the desired peaks are successfully located in this experiment, whereas in the experiment one the convergence only denotes that the algorithm may find a single global solution rather than all desired peaks. The level of accuracy is set to 0.1 in this competition. This parameter is used to measure how close the obtained solutions are to the known global/local peaks. If the distance between an obtained solution and the true optimal solution is smaller than a

ACCEPTED MANUSCRIPT

tolerance value (level of accuracy), it will be accepted that the optimal solution has been found. When all the desired peaks are found in one single run, this run is deemed to be successful. In addition, when all the algorithms satisfy stop condition in this experiment, they return the final population in the last iteration rather than the only one best solution. Table 11 gives the corresponding parameters of problems and summarizes the success rate on CEC2015 with multiple global peaks (i.e., F25, F26, F28, F29, F30, F31, F32, F34, F35 and F36). 30 independent runs are conducted in this experiment. Table 12 presents that CGWO algorithm gives a significantly better performance than the other algorithms in terms of success rate. Fig. 7 shows that CGWO can locate all 4 global peaks on F25 (decision dimension=2) at number of function evaluation (NFE) 40,000 in a single run. Multiple emerged niches are clearly visible at NFE equal to 80,000. Figs.

CR IP T

8-10 show global peaks on F26, F28 and F30 (decision dimension=2) in a single run. We can observe that CGWO is able to find all 25 global optimums on the F26 and F28, and 2 global peaks on the F30, respectively. It means that CGWO can successfully develop stable niches on majority of the global peaks without niching radius parameters. It also implies that CGWO has good diversity of population.

Table 11 Success rate on CEC2015 with multiple global optimum maximum Population

EBOwithC

evaluation dimension

CLPS

AN US

Decision Problem

LSHADE

size

TLBO

NDHS

BA

MAR

function number

GWO

CGWO

O

2

400

100,000

0%

0%

0%

0%

0%

0%

53.3%

100%

F26

2

400

100,000

0%

0%

0%

0%

0%

33.3%

16.7%

96.7%

F28

2

400

100,000

0%

0%

0%

0%

0%

0%

26%

80%

F29

2

526

100,000

0%

0%

0%

3.3%

3.3%

0%

6.7%

10%

F30

2

400

100,000

0%

0%

0%

0%

0%

3%

16%

100%

F31

2

400

500,000

0%

0%

0%

0%

0%

0%

3.3%

6.7%

F32

10

100

300,000

0%

0%

0%

0%

0%

0%

0%

0%

F34

10

100

300,000

0%

0%

0%

0%

0%

0%

0%

10%

F35

10

400

500,000

0%

0%

0%

0%

0%

0%

0%

23.3%

F36

10

400

300,000

0%

0%

0%

0%

0%

0%

0%

0%

PT

ED

M

F25

Table 12 Wilcoxon sign rank test on the solution by different algorithms for CEC2015 with multiple global optimum (a level of significance α=0.05). CGWO vs.

case R+

R-

p-value

465

R+

0

R-

465

0

+

F28

F29

464

455

276

1

464

262

CGWO vs. GWO

465

0

p-value

R+

R-

465

0

1.7E-06

R-

465

0

2

R+

R-

465

0

1

R+

R-

397

68

+

4.1E-05 432

33

p-value 1.2E-04

+

1.9E-06 464

p-value 1.7E-06

+

2.1E-06 462

p-value 1.7E-06

+

2.1E-06 2

R+

1.7E-06

+ 463

p-value

4.7E-06 455

10

+

+

+

+

+

+

4.7E-06

4.7E-06

4.7E-06

4.7E-06

4.7E-06

1.6E-02

10

455

10

455

10

455

10

455

10

455

10

348

117

2.5E-01

247

217

1

=

.5

.5

=

385

80

+

+

+

+

+

+

2.5E-01

2.5E-01

2.5E-01

5.0E-01

5.0E-01

189

276

0

203

CGWO vs. CLPSO

+

189

189

0

203

5.0E-01

203

0

203

5.0E-01

203

0

0

203

5.0E-01

1.7E-06 465

0

+ 262

203

5.0E-01

+

189

1.7E-06 465

+ 262

276 =

1.7E-06 465

+ 262

262

=

1.7E-06 465

+ 262

262

=

1.7E-06 465

5.0E-01

276

=

+ F31

CGWO vs. BA

4.7E-06

1.7E-06 465

R-

1.9E-06 1

= F30

R+

+

1.9E-06

F26

p-value

CGWO vs. NDHS

EBOwithCMAR

1.7E-06

AC

1.7E-06 F25

CGWO vs. TLBO

CE

CGWO vs. LSHADE

5.7E-06

+ 262

203

5.0E-01

+ 247

217

1

ACCEPTED MANUSCRIPT

=

=

2.5E-01

2.5E-01

189

276

2.5E-01

189

276

276

189

2.5E-01 276

2.5E-01

189

=

=

=

=

=

=

1.6E-02

1.6E-02

1.6E-02

1.6E-02

1.6E-02

325

138

325

138

+

325

138

325

+

5/3/0

5/3/0

80

70 60

-50

30 -100

-40

(a) number of function evaluation =4,000

40

-90

-80

-70

-60

-50

-40

(b) number of function evaluation =40,000

30 -100

-90

-80

-70

-60

-50

-40

(c) number of function evaluation =80,000

AN US

-60

global optimum CGWO

50

40

-70

5/3/0

60

50

Fig. 7 Niching behavior of CGWO (with a population of 400) on F25 over a run

5

10 global optimum CGWO

5

global optimum CGWO

0

0

-5 -10

-10

-20

-15 -20

-40 0

10

20

30

40

50

-25 20

(a) number of function evaluation =4,000

global optimum CGWO

-5

-10

-30

0

25

-15 -20

30

35

40

-25 20

(b) number of function evaluation =20,000

25

30

35

40

(c) number of function evaluation =40,000

PT

Fig. 8 Niching behavior of CGWO (with a population of 400) on F26 in a single run

-40

-40

global optimum CGWO

-60

AC

-70

-30

-20

-10

global optimum CGWO -45

-50 -50 -55 -55 -60

-80

-90 -40

-40 gobal optimum CGWO

-45

CE

-50

-60

-65

0

10

20

(a) number of function evaluation =4,000

-70 -30

-25

-20

-15

-10

-5

(b) number of function evaluation =20,000

-65 -30

138

+

5/3/0

70

50

-80

325

+

90 global optimum CGWO

60

30

138

5/3/0

70

40

325

+

90 80

-90

138

+

5/3/0

global optimum CGWO

20 -100

189

=

1.6E-02

5/3/0

80

276

.5

=

+

90

2.5E-01

189

.5

1.6E-02 138

+/=/-

2.5E-01

189

=

CR IP T

325

276

=

M

F35

276

=

ED

F34

=

-25

-20

-15

-10

-5

(c) number of function evaluation =80,000

Fig. 9 Niching behavior of CGWO (with a population of 400) on F28 in a single run

ACCEPTED MANUSCRIPT

40

35 global optimum CGWO

30

30

12 global optimum CGWO

global optimum CGWO 10

25

20

20

8

10 15 0

6

10 -10

5

4 -20

0

-30 40

50

60

70

80

90

-5 50

100

(a) number of function evaluation =4,000

60

70

80

90

2 50

100

(b) number of function evaluation =8,000

60

70

80

90

(c) number of function evaluation =80,000

CR IP T

Fig. 10 Niching behavior of CGWO (with a population of 400) on F30 in a single run

Therefore, CGWO shows a good performance over other metaheuristics. It is also obvious that CA has a positive effect on the performance improvement of the proposed CGWO. The reason behind is that the CGWO makes fully use of the cellular structure in CA to update solutions only within its neighborhood for local search and adopts an implicit migration mechanism implemented by overlap of neighborhoods to enhance information diffusion to improve the search diversity. More precisely,

AN US

CA provides different neighborhoods for the whole population by using a cellular topological structure. The population is divided into many subpopulations. Each solution is arranged in a given cellular grid, and each subpopulation is form of an independent neighborhood. Consequently, each subpopulation is updated independently in its own neighborhood. That is, different subpopulation may search for promising solutions in different search directions. Obviously, it has a stronger search ability to locate all the peaks compared to the other algorithms. Meanwhile, the migration mechanism implemented by overlap of neighborhoods can help to share valuable information among different independent subpopulations. This can improve the search efficiency. Thus, we can draw a conclusion that CA has a good effect on the behavior of the GWO

M

algorithm.

ED

4.4.3 Experiment three

To further investigate the performance of the proposed CGWO, it is compared to other improved variants of GWO and multiswarm metaheuristic, which include GWOCLS, EGWO, WdGWO, and iHS. This comparison experiment is conducted

PT

on benchmarks in Tables 1 and 2. These parameter settings of the compared algorithms are used as in their original literature (Al-Betar, et al., 2015; Joshi & Arora, 2017; Malik, et al., 2016; Yu, et al., 2017). In this subsection, we also compare the proposed CGWO to the well-known multi-niche metaheuristics including EAD,

CE

RPSO, CDE, NCDE, and LIPS for multi-niche optimization problems from CEC2015. These parameter settings of the compared algorithms are also used as in their original literature (Gu, et al., 2015; X. D. Li, 2010; Qu, et al., 2016; Qu, et al.,

AC

2012; Thomsen, 2004).

Problem

F1

F2

Table 13 Results by different improved versions of GWO and iHS on benchmarks in Tables 1 and 2

Statistics

GWOCLS

EGWO

WdGWO

iHS

CGWO

Best Worst Mean std Best Worst Mean

7.96E-94 2.35E-90 2.85E-91 5.97E-91 2.69E-54 1.13E-51 1.14Ee-52

3.01E-71 1.39E-64 1.21E-65 3.29E-65 9.94E-47 4.36E-43 3.66E-44

5.32E-70 3.91E-66 3.10E-67 7.21E-67 6.36E-40 1.93E-38 3.99E-39

2.93E-03 9.77E-03 5.27E-03 1.87E-03 1.41E-02 2.86E-02 2.18E-02

0 0 0 0 0 0 0

ACCEPTED MANUSCRIPT

F7

F8

F9

AC

CE

F10

F11

F12

F13

4.09E-03 1.07E-12 1.14E-07 4.74E-09 2.07E-08 3.58E-01 6.44E-01 5.15E-01 6.84E-02 10.1053 161.7552 78.7063 33.2843 0 0 0 0 1.97E-04 1.85E-03 8.82E-04 4.75E-04 -8.55E+03 -4.13E+03 -4.85E+03 1.03E+03 1.74E-03 8.36E-03 3.27E-03 1.48E-03 1.41E-02 2.91E-02 1.98E-02 4.44E-03 0 1.76E-02 2.45E-03 4.89E-03 4.46E-06 1.31E-04 2.75E-05 2.82E-05 2.05E-04 2.40E-03 4.89E-04

0 3.57E-74 6.24E-55 4.18E-56 1.82E-43 3.62E-73 5.17E-65 4.38E-66 1.32E-65 24.2431 27.0674 25.6489 7.68E-01 0 0 0 0 1.21E-04 8.07E-04 4.30E-04 1.83E-04 -6.70E+03 -3.26E+03 -5.07E+03 1.23E+03 0 1.66E+01 4.20E+00 5.46E+00 4.00E-15 4.00E-15 4.00E-15 0 0 5.44E-02 5.43E-03 1.20E-02 6.01E-08 1.98E-02 5.55E-03 6.23E-03 9.26E-07 6.06E-01 1.88E-01

CR IP T

4.51E-39 7.16E-13 1.59E-08 1.01E-09 3.21E-09 1.53E-14 2.21E-12 4.40E-13 6.14E-13 24.1463 26.0961 25.0443 3.54E-01 0 0 0 0 2.59E-04 1.81E-03 9.57E-04 4.16E-04 -8.22E+03 -4.12E+03 -4.85E+03 9.35E+02 2.00E+00 1.63E+02 2.08E+01 2.81E+01 7.54E-15 2.03E+01 1.35E+01 9.71E+00 0 1.75E-02 1.43E-03 4.00E-03 4.88E-08 2.86E-06 1.16E-06 4.75E-07 1.38E-05 2.42E-01 1.58E-02

AN US

F6

9.05E-44 3.08E-20 1.06E-13 7.70E-15 1.95E-14 1.84E-13 4.42E-09 4.65E-10 1.05E-09 24.13 26.18 25.29 5.82E-01 0 0 0 0 5.78E-04 2.80E-03 1.52E-03 6.23E-04 -9.94E+03 -6.77E+03 -7.64E+03 6.86E+02 0 4.20E+00 1.40E-01 7.68E-01 7.54E-15 2.01E+01 2.67E+01 6.93E+00 0 2.98E-02 1.90E-03 6.38E-03 4.68E-06 2.62E-02 9.06E-03 8.36E-03 1.22E-04 4.80E-01 1.47E-01

M

F5

ED

F4

2.28E-52 1.66E-25 2.32E-19 1.65E-20 5.41E-20 1.12E-20 1.34E-17 6.91E-19 2.44E-18 24.24 26.18 25.26 5.53E-01 0 0 0 0 1.61E-04 1.27E-03 5.95E-04 2.88E-04 -9.01E+03 -7.26E+03 -8.95E+03 3.19E+02 0 1.98E+01 7.70E+00 5.87E+00 7.54E-15 1.46E-14 1.22E-14 3.14E-15 0 2.29E-02 4.63E-03 7.67E-03 1.70E-07 1.68E-02 2.96E-03 5.10E-03 3.31E-06 3.63E-01 6.18E-02

PT

F3

std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean

ACCEPTED MANUSCRIPT

F18

F19

F20

AC

CE

F21

F22

F23

4.01E-04 9.98E-01 9.98E-01 9.98E-01 1.55E-13 4.21E-04 1.31E-03 6.93E-04 1.75E-04 -1.03163 -1.03163 -1.03163 5.24E-09 3.97E-01 3.97E-01 3.97E-01 3.91E-09 3.000 3.000 3.000 5.95E-08 -3.8627 -3.8627 -3.8627 4.06E-09 -3.3220 -3.2301 -3.3022 4.51E-02 -10.1532 -5.0552 -9.1396 2.06 -10.4029 -3.7243 -9.0304 2.55 -10.5364 -2.8711 -9.7432 2.0894

1.69E-01 9.98E-01 9.98E-01 9.98E-01 0 3.07E-04 2.04E-02 5.71E-04 7.75E-05 -1.03163 -1.03163 -1.03163 8.46E-11 3.97E-01 3.97E-01 3.97E-01 0 3.000 3.000 3.000 2.26E-15 -3.8627 -3.8627 -3.8627 1.09E-16 -3.3220 -3.2020 -3.3021 4.52E-02 -10.1532 -5.1007 -9.9848 9.22E-01 -10.4029 -5.0877 -10.2258 9.70E-01 -10.5364 -10.5364 -10.5364 1.39E-06

CR IP T

5.40E-02 9.98E-01 9.98E-01 9.98E-01 1.53E-13 3.07E-04 1.22E-03 4.29E-04 3.16E-04 -1.03163 -1.03163 -1.03163 1.92E-06 3.97E-01 3.97E-01 3.97E-01 1.42E-08 3.000 3.000 3.000 1.02E-08 -3.8627 -3.8627 -3.8627 2.95E-07 -3.3220 -3.1361 -3.2017 3.96E-02 -10.1532 -2.6828 -9.0605 2.26 -10.4029 -5.0876 -10.2256 9.70E-01 -10.5364 -10.5364 -10.5364 7.12E-05

AN US

F17

1.25E-01 9.98E-01 9.98E-01 9.98E-01 2.75E-13 3.07E-04 1.22E-03 4.29E-04 3.16E-04 -1.03163 -1.03163 -1.03163 4.99E-10 3.97E-01 3.97E-01 3.97E-01 2.29E-08 3.000 3.000 3.000 1.77E-08 -3.8627 -3.8627 -3.8627 4.07E-07 -3.3220 -3.0866 -3.2296 8.19E-02 -10.1532 -5.0551 -7.9575 2.55 -10.4029 -5.0876 -9.6968 1.83 -10.5364 -5.1283 -9.2753 2.32

M

F16

ED

F15

9.52E-02 9.98E-01 9.98E-01 9.98E-01 2.32E-13 3.07E-04 2.03E-02 2.46E-03 6.07E-03 -1.03163 -1.03163 -1.03163 9.05E-10 3.97E-01 3.97E-01 3.97E-01 2.24E-08 3.000 3.000 3.000 1.18E-07 -3.8627 -3.8627 -3.8627 1.86E-06 -3.3220 -3.1974 -3.2699 6.06E-02 -10.1532 -10.1530 -10.1531 4.27E-05 -10.4029 -10.4027 -10.4029 4.68E-05 -10.5364 -10.5364 -10.5364 4.00E-05

PT

F14

std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std Best Worst Mean std

Table 13 shows the statistic results (i.e., best, worst, mean, and std) obtained by different improved versions of GWO and iHS on benchmarks in Tables 1 and 2. 30 independent experiments of each metaheuristic are conducted on each problem.

ACCEPTED MANUSCRIPT

The optimal values in these tables are highlighted in boldface. It can be clearly observed that the proposed CGWO can find promising results on most problems compared to other improved versions of GWO and iHS. More precisely, the CGWO performs the best among these compared 5 algorithms. The good performance of CGWO is due to the CA technique. The CA technique can provide many different neighborhoods during the optimization process. As the CGWO can update its own subpopulation in each neighborhood, the search process of the entire population can be updated in different search directions. This will effectively maintain the search diversity. Table 14 Wilcoxon sign rank test on the solution by different improved versions of GWO for benchmarks in Tables 1-2 (a level of significance α=0.05). CGWO vs. EGWO

CGWO vs. WdGWO

CGWO vs. iHS

R+

R-

p-value

R+

R-

p-value

R+

R-

F1

465

0

1.7E-06 (+)

465

0

1.7E-06 (+)

465

0

F2

465

0

1.7E-06 (+)

465

0

1.7E-06 (+)

465

0

F3

465

0

1.7E-06 (+)

465

0

1.7E-06 (+)

465

0

F4

465

0

1.7E-06 (+)

465

0

1.7E-06 (+)

465

0

F5

126

339

2.8E-02 (-)

156

309

1.1E-01 (=)

86

379

CR IP T

CGWO vs. GWOCLS

F6

316

149

8.6E-02 (=)

233

232

1.0E+0 (=)

233

F7

316

149

8.6E-02 (=)

463

2

2.1E-06 (+)

F8

0

465

1.7E-06 (-)

1

464

F9

327

138

1.2E-04 (+)

3

F10

253

212

3.9E-01 (=)

F11

67

398

F12

184

F13

case

R+

R-

p-value

1.7E-06 (+)

465

0

1.7E-06 (+)

1.7E-06 (+)

465

0

1.7E-06 (+)

1.7E-06 (+)

465

0

1.7E-06 (+)

1.7E-06 (+)

465

0

1.7E-06 (+)

2.5E-03 (-)

452

13

6.3E-06 (+)

232

1.0E+0 (=)

233

232

1.0E+0 (=)

444

21

1.3E-05 (+)

465

0

1.7E-06 (+)

1.9E-06 (-)

235

230

9.5E-01 (=)

0

465

1.7E-06 (-)

462

2.3E-06 (-)

461

4

2.6E-06 (+)

0

465

1.7E-06 (-)

420

45

3.9E-05 (+)

406

59

3.7E-06 (+)

465

0

1.7E-06 (+)

2.4E-05 (-)

10

455

4.7E-06 (-)

9

456

4.2E-06 (-)

25

440

1.9E-05 (-)

281

3.1E-01 (=)

312

153

1.0E-01 (=)

105

360

8.7E-02 (=)

105

360

8.7E-03 (=)

162

303

1.4E-01 (=)

266

199

4.9E-01 (=)

65

400

5.7E-04 (-)

36

429

5.30E-05 (-)

F14

460

5

2.8E-06 (+)

447

18

1.0E-05 (+)

415

50

1.9E-05 (+)

284

181

6.4E-02 (=)

F15

292

173

2.2E-01 (=)

275

190

3.8E-01 (=)

319

146

7.5E-02 (=)

353

112

1.3E-02 (+)

F16

456

9

4.2E-06 (+)

446

19

1.1E-05 (+)

461

4

2.6E-06 (+)

456

9

4.2E-06 (+)

F17

456

9

4.2E-06 (+)

462

3

2.3E-06 (+)

461

4

2.6E-06 (+)

221

244

8.1E-01 (=)

F18

246

219

7.8E-01 (=)

284

181

6.4E-02 (=)

235

230

9.5E-01 (=)

278

187

3.5E-01 (=)

F19

232

233

9.9E-01 (=)

168

297

1.8E-01 (=)

103

362

7.7E-03 (-)

0

465

1.7E-06 (-)

F20

346

119

1.9E-02 (+)

433

32

3.7E-05 (+)

444

21

1.3E-05 (+)

69

396

7.7E-06 (-)

F21

30

435

3.1E-05 (-)

439

26

2.1E-05 (+)

440

25

1.9E-05 (+)

226

239

8.9E-01 (=)

F22

30

435

3.1E-05 (-)

437

28

2.6E-05 (+)

436

29

2.8E-05 (+)

263

202

5.3E-01 (=)

3.1E-05 (+)

441

24

1.8E-05 (+)

435

30

3.1E-05 (+)

465

0

1.7E-06 (+)

435

+/=/-

M

ED

PT

CE

AC F23

AN US

p-value

30

10/8/5

13/7/3

14/5/4

10/7/6

Table 15 Results by different algorithms on average number of optima found (F24-F31)

Problem

F24

Statistics

EAD

RPSO

CDE

NCDE

LIPS

CGWO

Best

8

1

0

0

11

0

Worst

1

0

0

0

5

0

Mean

4.17

1.17E-01

0

0

8.24

0

std

1.99

3.22E-01

0

0

1.26

0

ACCEPTED MANUSCRIPT

F29

F30

F31

26

60

Worst

33

16

0

0

14

35

Mean

50.21

15.8

0

0

23

53.2

std

4.25

2.91

0

0

3.04

3.51

Best

77

89

4

13

45

90

Worst

59

72

0

9

31

68

Mean

68.17

74.05

2

10.4

37.92

75.4

std

3.86

4.82

1.25

1.67

3.58

5.05

Best

1

0

0

0

0

0

Worst

0

0

0

0

0

0

Mean

2.15E-01

0

0

0

0

0

std

4.15E-01

0

0

0

0

0

Best

59

130

0

0

0

73

Worst

42

117

0

0

0

57

Mean

48.07

118.35

0

0

0

64.21

std

3.86

4.96

0

0

0

5.36

Best

85

0

63

90

Worst

83

0

Mean

84.66

0

std

5.88E-01

0

Best

8

0

Worst

1

0

Mean

3.66

0

std

1.49

Best

60

Worst

45

Mean std

CR IP T

0

AN US

F28

0

0

2

0

0

50

85

0

0.8

55.96

87.6

0

0.83

3.18

3.32

0

1

15

18

0

0

9

10

0

0.04

12.04

14.31

0

0

0.2

1.45

4.03

9

12

40

96

105

1

4

30

78

80

50.72

4.90

6.9

36.4

88.52

91.56

4.00

2.88

2.30

3.78

4.56

5.86

M

F27

22

ED

F26

57

PT

F25

Best

Table 16 Results by different algorithms (statistic values of 5 best solutions separated at least by 10 on F32-F38)

EAD

PSO (ring)

CDE

NCDE

LIPS

CGWO

Best

900.48

909.97

952.14

900.03

900

901.75

Worst

938.54

931.11

990.52

974.83

920.67

901.75

Mean

911.58

919.62

970.566

935.86

911.95

901.75

std

9.15

7.41

14.25

26.54

7.60

4.58E-05

Best

5081.11

1070.00

2701

1139.20

6287.1

1070

Worst

7607.26

1070.00

12551

12267

1070.4

1070

Mean

6812.02

1070.00

9622.02

7524.30

4079.02

1070

std

757.76

5.30E-04

4062.19

5618.14

2749.6

0

Best

1100.533

1100.81

1104.6

1100.30

1102

1100

Worst

1104.028

1103.32

1107.3

1111.80

1103.3

1100.00

Mean

111.993

1102.12

1105.7

1104.26

1102.74

1100.00

std

7.56E-01

5.09E-01

1.23

4.38

4.72E-01

4.17E-04

Best

1537.06

1511.40

1313.90

1221.80

1201.2

1200

AC

F32

Statistics

CE

Problem

F33

F34

F35

ACCEPTED MANUSCRIPT

F37

2808.51

1992.80

2129.70

1476.6

1201.09

Mean

1660.58

2281.25

1623.24

1652.64

1296.68

1200.07

std

101.96

298.65

286.42

392.61

132.95

2.50E-01

Best

1519.41

5261.5

1488.1

1503

1437.6

1300

Worst

1711.13

225790

1658

1656.60

1539.9

1300

Mean

1595.73

150498.9

1594.62

1597.10

1490.24

1300

std

39.75

1.09+E05

67.83

57.28

45.67

0

Best

2050.04

1440.00

2469

1950.30

1480

1520

Worst

2607.84

2523.49

3041

2959.60

2203.9

1726.77

Mean

2401.33

2044.92

2681.24

2382.08

1820.34

1569.78

std

135.55

286.21

253.17

375.00

236.13

43.29

Best

1640.96

1640.00

1686.40

1640.50

1640

1500

Worst

1758.05

1640.00

2069.70

2073.70

1824.1

2116.28

Mean

1665.74

1640.00

1918.70

1908.20

1742.1

1680.40

std

33.164

1.2E-09

142.28

162.26

68.67

113.29

AN US

F38

1831.72

CR IP T

F36

Worst

The proposed CGWO is also compared to the well-known multi-niche metaheuristics such as EAD, RPSO, CDE, NCDE, and LIPS on multi-niche optimization problems in CEC2015. These multi-niche optimization problems contain 8 extended simple functions (i.e., F24-F31) and 7 composition functions (i.e., F32-F38). For 8 extended simple problems, the performance metric like the average number of optima found (ANOF) is used to test the behavior of these metaheuristics. If the gap between one obtained solution and one known global optimum is below 𝜀 (𝜀 = 0.1), then the peak is deemed to have been found. For the 7 composition problems, a different performance metric is adopted due to its difficulty in finding

M

any good solutions. The statistic results of 5 best solutions of each algorithm are found for the 7 composition problems. Table 15 reports the best, worst, mean and standard deviation (i.e., std) of the number of optima found by different

ED

multi-niche metaheuristics for problems F24-F31. The optimal values in these tables are highlighted in boldface. We can find from this table that the proposed CGWO can provide promising results on most problems. Meanwhile, Table 16 presents statistic results of 5 best solutions found by each algorithm. CGWO is the best one of these compared 6 algorithms.

PT

Compared to other well-known multi-niche metaheuristics, it is obvious to see that CGWO shows the best performance. These results confirm our previous conclusion that the CA can contribute to balance between exploration and exploitation. The superior performance of CGWO is due to its CA mechanism. CA provides an effective topology structure for the whole

CE

population. As solution in CA only interacts with its neighbors for exploitation, the search can be enhanced in the niche. Meanwhile, information diffusion mechanism contributes to exploration. Consequently, a collection of such solutions

AC

congregates to solve the problem.

4.5 Wilcoxon sign-sum test analysis Due to the stochastic nature of these algorithms, the statistical test is necessary for providing confidential comparisons. A nonparametric statistical test called Wilcoxon sign-rank test should be carried out in this paper. This statistical test is used to detect the significant difference between the results obtained by different algorithms. The confidence level for all tests is set to 95% (corresponding to a level of significance α= 0.05). The symbol +(-) indicates that our proposed CGWO algorithm is significantly better (worse) than its counterpart. The sign = denotes that there is no significant difference between the proposed algorithm and its compared algorithm. R+ is the sum of ranks for the problems in which the first algorithm outperformed the second, and R− the sum of ranks for the opposite (Derrac, García, Molina, & Herrera, 2011). Table 7 records the CGWO with C25 is the best choice among six variants using Wilcoxon sign rank test on these results

ACCEPTED MANUSCRIPT

obtained by CGWO with different cell topology structures. Table 10 and Table 14 show that the proposed algorithm provides statistically better results compared with the other metaheuristics for benchmarks in Tables 1-2, based on the Wilcoxon sign rank test. Table 12 presents the multi-problem-based pairwise statistical comparison results using the averages of the global minimum values obtained through 30 runs of CGWO and the comparison algorithms to solve the CEC2015 with multiple global optimum. Wilcoxon sign rank test is not conducted on F32 and F36 as that all algorithms are unable to find all peaks on these problems according to Table 12. These results show that CGWO was statistically more successful than compared algorithms, with a statistical significance value α= 0.05. 4.6 CUP-time cost

CR IP T

This subsection studies CPU-time cost on CGWO and GWO. According to the above experiments, concerning average values CGWO outperforms the basic GWO on 31 of 38 benchmark problems with 29 significant results. The mean CPU-time consumed values by CGWO and GWO algorithms for all test problems are plotted in Fig. 11, and they are divided into three categories in Table 17. The last column of Table 17 provides the increase rates of CPU-time brought by the incorporation of CA model. It is observed from Fig. 11 and Table 17 that good performance achieved by the combination of CA does not come for free. The CPU-time consumption of CGWO is larger than that of the basic GWO for most instances. The main reason for the large time cost of the CGWO can be analyzed as follows. CGWO searches for the promising solution in the

AN US

current solution’s neighbors, whereas this neighborhood structure provides an information diffusion mechanism to CGWO in which population diversity is maintained longer due to the slow information diffusion through the wolf population regarding the best position of each neighborhood. Thus, there exists a delay in the information spread through the population, which escapes from a local optimum. Although CPU-time consumption of CGWO is a little larger than that of the original GWO, CGWO is better than GWO for most instances and it is well worth introducing a CA model to improve the behavior of

AC

CE

PT

ED

M

CGWO.

Fig 11. CPU-Time consumption on all test functions

Table 17 CPU-time costs on three different test problems Problems

GWO

CGWO

Increase rate (%)

Unimodal problems(F1-F7)

7.026

9.96

41.8

Multimodal problems(F8-F23)

10.525

16.574

57.5

Composition problems(F24-F38)

64.484

71.892

11.5

ACCEPTED MANUSCRIPT

4.7 CGWO for engineering problems This subsection is devoted to evaluating the performance of the proposed CGWO on the real-world engineering problems, these constrained problems including tension/compression spring design, overspeed protection of a gas turbine and rolling element bearing are studied in this subsection. Since these problems contain constraints, the constraint handling mechanism as in Section 3.1 is also adopted. Compared with gradient-based optimization approaches, most metaheuristics have a derivation-free mechanism. For metaheuristics, there is no need to calculate the derivative of search spaces to find the optimal solutions. This makes metaheuristics suitable for real applications with expensive or unknown derivative information. Additionally, metaheuristics

CR IP T

have superior abilities to avoid local optima over conventional optimization approaches. This is due to the stochastic nature of metaheuristics which allows them to avoid stagnation in local optima. The search space of these problems is usually unknown and very complex with a number of local optima. Therefore, metaheuristics are good options for solving these challenging real-world problems. Note that the significance test technique is not used in this subsection as that the compared results of these applications are from the existing literatures which just offer simple statistical results. 4.7.1 Tension/compression spring design

AN US

The aim of this problem is to minimize the weight of a tension/compression spring subject to constraints on minimum deflection, shear stress, surge frequency and limits on outside diameter. This problem has three design variables defined as: the wire diameter (0.05≤x1≤2.0), the mean coil diameter (0.25≤x2≤1.3) and the number of active coils (2.00≤x3≤15). The mathematical formulation of the problem can be defined as follows:

min f x   x3  2x2 x12  1

x 23 x3

71785 x14



12566 x 2 x13  x14

ED

 1

 0

4 x 22  x1 x 2

M

  g 1 x     g 2 x   s.t .   g 3 x     g 4 x  

140.45 x1 x 22 x3

(12)





1

5108 x12

-1  0

 0

x1  x 2 1  0 1.5

This problem has been regarded as a classical benchmark for evaluating different optimization algorithms. The best

PT

solutions among several algorithms such as GWO (S. Mirjalili, et al., 2014), HPSO (Q. He & L. Wang, 2007), CDE (Huang, Wang, & He, 2007), CPSO (Qie He & Ling Wang, 2007) and SSO-C (Cuevas & Cienfuegos, 2014) are recorded in Table 18.

CE

We can observe from Table 18 that CGWO, HPSO, and SSO-C can find the best result. However, among them, only CGWO satisfies all the constraints while others violate the constraints to some extent. Table 19 presents statistical results obtained by different algorithms such as CPSO, CDE, SSO-C, G-QPSO (Coelho, 2010), UPSO (Parsopoulos & Vrahatis, 2005), PSO-DE

AC

(Liu, Cai, & Wang, 2010), MBA (Sadollah, Bahreininejad, Eskandar, & Hamdi, 2013), HEAA (Wang, Cai, Zhou, & Fan, 2009), and TLBO. It can be seen from Table 19 that the CGWO shows a good performance and maintains an acceptable stability since it obtains the best result with comparison to most compared algorithms in terms of NFE. The proposed CGWO achieves the best average result except HEAA, PSO-DE, and TLBO metaheuristics. However, CGWO provides competitive mean results in much less NFEs than offered by the HEAA and PSO-DE method. CGWO is also competitive to TLBO for solving this problem. Table 18 Comparison of best solution obtained from various previous studies for tension/compression spring design. variable

CPSO

GWO

HPSO

SSO-C

CDE

CGWO

x1

0.051728

0.05169

0.051706

0.051689

0.051609

0.051689066186

x2

0.357644

0.356737

0.357126

0.35671775

0.354714

0.35671786256

ACCEPTED MANUSCRIPT

x3

11.244543

11.28885

11.265083

11.288965

11.410831

11.28895856

g1

−8.25E−04

-7.9065e-05

−3.06E−06

-4.746E-06

-3.864E-05

-4.51E−10

g2

−2.52E−05

-7.5056e-06

1.39E−06

3.337E-06

-1.8289E-04

−2.234E−11

g3

−4.051306

−4.053383

−4.054583

-4.0537858

-4.048627

−4.05378

g4

−0.727085

−0.7277153

−0.727445

-0.7277287

-0.7291179

−0.727728

F

0.0126747

0.012666

0.0126652

0.0126652

0.0126702

0.0126652

Table 19 Comparison of statistical results given by different optimizers for tension/compression spring design. worst

Mean

Best

std

NFEs

CPSO

0.0129240

0.0127300

0.0126747

5.20E−04

240,000

HPSO

0.0127190

0.0127072

0.0126652

1.58E−05

81,000

G-QPSO

0.017759

0.013524

0.012665

1.26E-03

2,000

DE

0.012790

0.012703

0.0126702

2.7E−05

204,800

HEAA

0.012665240

0.012665234

0.012665233

1.4E−09

24,000

PSO-DE

0.012665304

0.012665244

0.012665233

1.2E−08

24,950

0.012669249

0.012922669

N.A

0.02294

CDE

N.A

0.012703

TLBO

N.A

0.01266576

MBA

0.012900

0.012713

CGWO

0.0127173

0.01267444

5.9E−04

AN US

0.016717272

25,167

0.01312

7.20E−03

100,000

0.01267

N.A

240,000

0.012665

N.A

10,000

0.012665

6.30E−05

7,650

0.0126652

1.26E-05

2,000

M

SC UPSO

CR IP T

Algorithms

4.7.2 Overspeed protection system for a gas turbine

This problem is to maximize reliability of this system and is a mixed-integer nonlinear reliability design optimization issue

ED

(T. C. Chen, 2006). Overspeed detection is continuously provided by the electrical and mechanical systems. When an overspeed happens, it is necessary to cut off the fuel supply using control values (Coelho, 2009). The problem has two types of design variables (i.e. positive integers ni and real number value ri). The mathematical formulation of the problem can be

AC

CE

PT

written below:

max f r,n  

 g 1    s .t .g 2    g 3  

 1  1  r  m

ni

i

i 1

m

 v i ni i

2

(13)

-V  0

1

 C ri ni i m

e

1

m



 w i nie i

0.25 ni

0.25 ni

- C

 0

-W  0

1

C ri    i  T lnri  i 

1  n i  10, n i  Z  ,0.5  ri  1  10  6 , ri  

where V is the upper bound on the sum of the subsystems’ products of volume and weight, C is the upper bound on the cost of the system, C(ri) is the cost of each component with reliability ri at subsystem i, T is the operating time during which the component must not fail, W is the upper limit on the weight of the system. The input parameters of this system are listed in Table 20. Table 20 Data used in overspeed protection system of a gas turbine.

ACCEPTED MANUSCRIPT

stage

𝛼𝑖

𝛽𝑖

𝑣𝑖

𝜔𝑖

𝑉

𝐶

𝑤

𝑇

1

1.0E-05

1.5

1

6

250

400

500

1000

2

2.3E-05

1.5

2

6

3

0.3E-05

1.5

3

8

4

2.3E-05

1.5

2

7

It is noted that any improvement in the objective of this problem is very important for reliability engineering and system safety. Table 21 compared the results found in this work for the complex system with those of other research reported. The

CR IP T

best result obtained by CGWO and CS (Valian & Valian, 2013) can reach at 0.99995468, which has a slight advantage over the other solvers reported. However, CS violates a constraint whereas CGWO satisfies all constraints. Therefore, CGWO outperforms other compared algorithms in terms of the best result. Statistical results for this overspeed protection system are summarized in Table 22. According to the results in Table 22, with regard to the best results, the solutions of CGWO are just slightly better than the solution found by PSO-GC (Coelho, 2009) and GA-PSO (Sheikhalishahi, Ebrahimipour, Shiri, Zaman, & Jeihoonian, 2013) on this problem. Moreover, the standard deviation of the results by CGWO is much smaller than that by

AN US

CS, which denotes CGWO is more stable when solving this problem.

Table 21 Comparison of best solution obtained from various previous studies for overspeed protection system of a gas turbine GA-PSO

CS

CGWO

n1

5

5

5

5

n2

6

5

5

6

n3

4

4

4

4

n4

5

6

6

5

r1

0.902231

0.901628

0.90161460

0.9016347

r2

0.856325

0.888230

0.88822337

0.8499661

r3

0.948145

0.948121

0.94814103

0.94812205

r4

0.883156

0.849921

0.84992090

0.8881983

g1

-55

-55

-55

-55

-0.978964

-0.000006

1.04E-05

-1.8383E-04

g3

CE

F

PT

g2

M

PSO-GC

ED

Variable

-24.80188

-15.363463

-15.363

-24.8018827

0.999953

0.99995467

0.99995468

0.99995468

Table 22 Comparison of statistical results given by different optimizers for the overspeed protection system of a gas turbine Worst

mean

Best

std

NFEs

PSO-GC

0.99993800

0.99990700

0.99995300

1.1E-05

NA

GA-PSO

0.99995467

0.99995467

0.99995467

1.00E−16

NA

CS

0.99991922

0.99995336

0.99995468

4.5576E-06

10,000

CGWO

0.99994674

0.99995419

0.99995468

2.16E−09

5,000

AC

algorithms

4.7.3 Rolling element bearing This problem is proposed in (Gupta, Tiwari, & Nair, 2007; B. R. Rao & Tiwari, 2007) and its objective is to maximize the dynamic load carrying capacity of a rolling element bearing. The design variables are as follows: the ball diameter Dm, pitch diameter Db, the number of balls Z, the inner and outer raceway curvature coefficients fi and f0, KDmin, KDmax, ε, e, and ζ. The latter five parameters appear in constraints and affect the internal geometry. Z is the discrete design variable and the

ACCEPTED MANUSCRIPT

remainder are continuous design variables. According to the literatures (Eskandar, Sadollah, Bahreininejad, & Hamdi, 2012; Sadollah, et al., 2013) , decision variables fi and Db are assumed to be independent variables. Actually there exists a relationship between fi and Db by equations fi = ri/Db and f0=r0/Db where r0 and ri are constant values. Therefore, the decision variables fi and f0 in the optimization procedure can be eliminated. Meanwhile, the upper bound of Db is determined by 0.45(D-d) and minimum value between ri/min(fi) and r0/min(f0), which can result in range of Db from [0.15(D-d), min(0.45(D-d), min(ri/min(fi), r0/min(f0))]. Consequently, the mathematical formulation of the problem can be reconstructed as follows:  Db  fc Z max f Dm , Db , Z , K D min , K D max , , e,     33 . 647 f c Z 2 / 3 Db1.4 

 g1    g 2  g 3 g 4  s.t . g 5 g  6 g 7  g 8  g 9  g 10   g 11

  

1.72

 f i 2 f 0  1     f 0 2 f i  1 

 K D min D  d   2Db  0

 2Db  K D max D  d   0   B w  Db  0

Db  25.4 Db  25.4

(14)

 0

AN US

 0.5D  d   Dm  0

 Dm  0.5  e D  d   0

 Db  0.5D  Dm  Db   0  0.515  f i  0  f i  0.6  0

 0.515  f 0  0  f 0  0.6  0

0.41

10 3

  

ED

  1   f c  37.911  1.04  1    

0

2 sin  1 Db Dm 

if if

M

where

 Z 1

1.8

CR IP T

2/3

   

-0.3

  0.3 1 -  1.39  2 f  0.41 i    13  1     2 f i  1 

 D  d  2  3T 4 2  D 2  T 4  D 2  d 2  T 4 2  b  2D  d  2  3T 4D 2 - T 4 - Db    D r r   b , f i  i , f 0  0 , T  D  d  2Db , D  160, d  90, Bw  30, ri  r0  11.033 Dm Db Db

PT

0  2 - 2 cos 1 

CE

0.5D  d   Dm  0.6D  d , 0.15D  d   Db  min0.45D  d ,min[ r0 min( f 0 ),ri min( f i )], 4  Z  50, 0.4  K D min  0.5, 0.6  K D max  0.7, 0.3    0.4, 0.02  e  0.1, 0.6    0.85.

AC

The CGWO and its competitive algorithms such DE and GWO are run 30 times on this problem and the best values found are recorded in Table 23. Obviously, the results obtained by CGWO are better than that found by the DE and GWO. Furthermore, Table 24 gives statistical results obtained through these algorithms and shows that CGWO outperforms DE and GWO in terms of the mean, best, and standard deviation values. It means CGWO is a very effective method for parameters optimization of a rolling element bearing. Table 23 Comparison of best solution obtained by DE, GWO, and CGWO for rolling element bearing Variable

DE

GWO

CGWO

Dm

125.69547638

125.71678875

125.69088021

Db

21.4174747008

21.4174513046

21.417475728

ACCEPTED MANUSCRIPT

11

11

11

KDmin

0.42433415670

0.48394163540

0.4

KDmax

0.64383088983

0.67494041047

0.7

ε

0.30028179945

0.30028688211

0.30051

e

0.05635192733

0.05297465987

0.02

ζ

0.63988721158

0.62514555525

0.606519856

g1

-1.1732E-03

-2.8946E-03

-4.1907E-10

g2

-13.13155

-8.95898

-14.83495

g3

-2.23321

-4.41093

-6.165049

g4

-2.22086

-2.66308

-3.22188

g5

-0.69547

-0.71678

-13.3925

-12.52688

g7

-0.012246

-1.5003E-03

g8

-2.4704E-08

-5.8728E-07

g9

-0.0850

-0.0850

g10

-2.4704E-08

-5.8728E-07

g11

-0.0850

F

81803.125

-0.68088 -4.31912

-0.014656

-3.7352E-12 -0.0850

-3.7352E-12

AN US

g6

CR IP T

Z

-0.0850

-0.0850

81801.065

81803.647

Table 24 Comparison of statistical results given by DE, GWO, and CGWO for rolling element bearing Worst

Mean

DE

76787.252

78403.648

GWO

76771

79525.009

CGWO

76789.100

81336.829

Best

std

NFEs

81803.125

2343.593

10,000

81801.065

2473.211

10,000

81803. 647

1417.442

10,000

ED

M

algorithms

To sum up, the above empirical studies indicate that the proposed CGWO has superiority over the current meta-heuristics considered. First, the statistical results for the unconstrained benchmarks show the good performance of CGWO in terms of exploitation and exploration, and then regarding convergence CGWO is also superior or competitive to its rivals. Finally, the

Conclusions and future work

CE

5.

PT

results of engineering problems demonstrate that CGWO is a promising approach.

In this paper we have proposed a CGWO by integrating CA model and GWO. The search process in CGWO is guided by

AC

three good wolves (solutions) but this interaction is restricted within its neighborhood, which can keep a good balance between diversity and convergence. CGWO has been compared to the other seven current metaheuristics such as LSHADE, TLBO, EBOwithCMAR, BA, NDHS, CLPSO and GWO on unimodal, multimodal and CEC2015 benchmarks. Furthermore, CGWO has been compared to some recent improved versions of GWO on unimodal and multimodal benchmarks. Then, CGWO is compared to state-of-art multi-niche metaheuristics such as EAD, CDE, NCDE, RPSO, and LIPS on multi-niche benchmarks in CEC2015. A nonparametric statistical test called Wilcoxon sign-rank test is used to detect the significant difference between the results obtained by different algorithms. CGWO is also applied successfully into engineering problems. Empirical results reveal that CGWO is an effective approach for optimization problems. Additionally, the main contributions of this work are as follows. (1) The cellular automata (CA) concept is embedded into the GWO. CA provides a given topological neighborhood

ACCEPTED MANUSCRIPT

structure. The population can be divided into many subpopulations by this given neighborhood structure. Each subpopulation forms an independent niche, which can improve its local search. (2) CGWO with a topological structure can also help to improve diversity of population. Each subpopulation can update itself in its own neighborhood. Therefore, different subpopulations have different search directions, which helps to the search diversity. The overlap between consecutive neighborhoods can provide a recessive migration mechanism without any complex mathematical equation. The information of good solutions can be shared to some extent. It can enhance its convergence of the proposal. (3) The performance of the CGWO is sensitive to neighborhood size for most instances. An appropriate neighborhood size depends on specific problems. It can be found by numerical experiments that the neighborhood size named C25 is the best

CR IP T

choice for CGWO.

(4) Compared with other metaheuristics, the proposed CGWO is able to find all the optimal solutions rather than a single optimal solution for multiple peaks benchmarks, especially for CEC2015 test suit. It thus indicates that CGWO can maintain a good diversity of population.

In this study, the CGWO is proposed to solve single-objective and continuous optimization problems well. However, many

AN US

optimization problems involve multiple objectives and discrete decision variables in real-world application. Additionally, only GWO is incorporated into the CA, while other metaheuristics are not hybridized with CA. With respect to future work, firstly, research can be extended to a multi-objective grey wolf optimization algorithm. Secondly, more metaheuristics and problems are considered to test the performance of the proposed CGWO in the comparison experiment. Thirdly, it is important to understand how to set an adaptive neighborhood size for CGWO, although a recommend neighborhood size is given in our work. Fourthly, it would be valuable to apply the CGWO into many applications such as production scheduling

M

in industry.

Acknowledges

ED

The project was supported by the fundamental research funds for the central universities, China University of Geosciences (Wuhan) (No. CUG170688), National Natural Science Foundation of China (NSFC) under Grant nos. 51775216, 51375004

PT

and 51505439 References

CE

Al-Betar, M. A., Awadallah, M. A., Khader, A. T., & Abdalkareem, Z. A. (2015). Island-based harmony search for optimization problems. Expert Systems with Applications, 42, 2026-2035. Alba, E., & Dorronsoro, B. (2005). The exploration/exploitation tradeoff in dynamic cellular genetic algorithms. Ieee

AC

Transactions on Evolutionary Computation, 9, 126-142. Chen, J., Pan, Q. K., & Li, J. Q. (2012). Harmony search algorithm with dynamic control parameters. Applied Mathematics and Computation, 219, 592-604.

Chen, T. C. (2006). IAs based approach for reliability redundancy allocation problems. Applied Mathematics and Computation, 182, 1556-1567.

Coelho, L. D. (2009). An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications. Reliability Engineering & System Safety, 94, 830-837. Coelho, L. D. (2010). Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Systems with Applications, 37, 1676-1683. Črepinšek, M., Liu, S.-H., & Mernik, L. (2012). A note on teaching–learning-based optimization algorithm. Information Sciences, 212, 79-93.

ACCEPTED MANUSCRIPT

Cuevas, E., & Cienfuegos, M. (2014). A new algorithm inspired in the behavior of the social-spider for constrained optimization. Expert Systems with Applications, 41, 412-425. Das, S., Abraham, A., Chakraborty, U. K., & Konar, A. (2009). Differential Evolution Using a Neighborhood-Based Mutation Operator. Ieee Transactions on Evolutionary Computation, 13, 526-553. Deb, K. (2000). An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, 186, 311-338. Derrac, J., García, S., Molina, D., & Herrera, F. (2011). A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation, 1, 3-18.

CR IP T

Durillo, J. J., & Nebro, A. J. (2011). jMetal: A Java framework for multi-objective optimization. Advances in Engineering Software, 42, 760-771.

E. R. Berlekamp, J. H. Conway, & Guy, R. K. (1982). Winning Ways for your Mathematical Plays (Vol. 2). New York: Academic Press.

Eskandar, H., Sadollah, A., Bahreininejad, A., & Hamdi, M. (2012). Water cycle algorithm - A novel metaheuristic optimization method for solving constrained engineering optimization problems. Computers & Structures, 110,

AN US

151-166.

Gao, L., Huang, J. D., & Li, X. Y. (2012). An effective cellular particle swarm optimization for parameters optimization of a multi-pass milling process. Applied Soft Computing, 12, 3490-3499.

Goldberg, D. E., & Holland, J. H. (1988). Genetic algorithms and machine learning. Machine learning, 3, 95-99. Gu, F., Cheung, Y.-m., & Luo, J. (2015). An evolutionary algorithm based on decomposition for multimodal optimization problems. In

Evolutionary Computation (CEC), 2015 IEEE Congress on (pp. 1091-1097): IEEE.

Gupta, S., Tiwari, R., & Nair, S. B. (2007). Multi-objective design optimisation of rolling bearings using genetic algorithms.

M

Mechanism and Machine Theory, 42, 1418-1443.

He, Q., & Wang, L. (2007). An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Engineering Applications of Artificial Intelligence, 20, 89-99.

ED

He, Q., & Wang, L. (2007). A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Applied Mathematics and Computation, 186, 1407-1422. Heidari, A. A., & Pahlavani, P. (2017). An efficient modified grey wolf optimizer with Lévy flight for optimization tasks.

PT

Applied Soft Computing, 60, 115-134.

Huang, F. Z., Wang, L., & He, Q. (2007). An effective co-evolutionary differential evolution for constrained optimization.

CE

Applied Mathematics and Computation, 186, 340-356. J, K., & R, E. (1995). Particle swarm optimization. In

IEEE international conference on neural networks (pp. 1942-1948).

Joshi, H., & Arora, S. (2017). Enhanced Grey Wolf Optimization Algorithm for Global Optimization. Fundamenta

AC

Informaticae, 153, 235-264. Kaveh, A., & Bakhshpoori, T. (2016). Water Evaporation Optimization: A novel physically inspired optimization algorithm. Computers & Structures, 167, 69-85.

Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. (1983). Optimization by simmulated annealing. Science, 220, 671-680. Kumar, A., Misra, R. K., & Singh, D. (2017). Improving the local search capability of Effective Butterfly Optimizer using Covariance Matrix Adapted Retreat Phase. In

Evolutionary Computation (pp. 1835-1842).

Li, M., Zhao, H., Weng, X., & Han, T. (2016). Cognitive behavior optimization algorithm for solving optimization problems. Applied Soft Computing, 39, 199-222. Li, X. D. (2010). Niching Without Niching Parameters: Particle Swarm Optimization Using a Ring Topology. Ieee Transactions on Evolutionary Computation, 14, 150-169. Liang, J. J., Qin, A. K., Suganthan, P. N., & Baskar, S. (2006). Comprehensive learning particle swarm optimizer for global

ACCEPTED MANUSCRIPT

optimization of multimodal functions. Evolutionary Computation, IEEE Transactions on, 10, 281-295. Liu, H., Cai, Z., & Wang, Y. (2010). Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Applied Soft Computing, 10, 629-640. Lu, C., Gao, L., Li, X., & Xiao, S. (2017). A hybrid multi-objective grey wolf optimizer for dynamic scheduling in a real-world welding industry. Engineering Applications of Artificial Intelligence, 57, 61-79. Lu, C., Li, X. Y., Gao, L., Liao, W., & Yi, J. (2017). An effective multi-objective discrete virus optimization algorithm for flexible job-shop scheduling problem with controllable processing times. Computers & Industrial Engineering, 104, 156-174. Lu, C., Xiao, S., Li, X., & Gao, L. (2016). An effective multi-objective discrete grey wolf optimizer for a real-world

CR IP T

scheduling problem in welding production. Advances in Engineering Software, 99, 161-176.

Malik, M. R. S., Mohideen, E. R., & Ali, L. (2016). Weighted distance Grey wolf optimizer for global optimization problems. In

IEEE International Conference on Computational Intelligence and Computing Research (pp. 1-6).

Matej, Črepinšek, Liu, S.-H., & Mernik, M. (2013). Exploration and exploitation in evolutionary algorithms: A survey. ACM Comput. Surv., 45, 1-33.

Mirjalili, S. (2015). Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowledge-Based

AN US

Systems, 89, 228-249.

Mirjalili, S., Mirjalili, S. M., & Hatamlou, A. (2016). Multi-Verse Optimizer: a nature-inspired algorithm for global optimization. Neural Computing & Applications, 27, 495-513.

Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey Wolf Optimizer. Advances in Engineering Software, 69, 46-61. Neumann, J. v. (1966). Theory of Self-Reproducing Automata. Illinois: University of Illinois Press. Parsopoulos, K. E., & Vrahatis, M. N. (2005). Unified Particle Swarm Optimization for solving constrained engineering optimization problems. Advances in Natural Computation, Pt 3, Proceedings, 3612, 582-591.

M

Piotrowski, A. P. (2013). Adaptive Memetic Differential Evolution with Global and Local neighborhood-based mutation operators. Information Sciences, 241, 164-194.

Punnathanam, V., & Kotecha, P. (2016). Yin-Yang-pair Optimization: A novel lightweight optimization algorithm.

ED

Engineering Applications of Artificial Intelligence, 54, 62-79. Qu, B. Y., Liang, J. J., Suganthan, P. N., & Chen, Q. (2015). Problem Definitions and Evaluation Criteria for the CEC 2015 Competition on Single Objective Multi-Niche Optimization. In. Zhengzhou.

PT

Qu, B. Y., Liang, J. J., Wang, Z. Y., Chen, Q., & Suganthan, P. N. (2016). Novel benchmark functions for continuous multimodal optimization with comparative results. Swarm and Evolutionary Computation, 26, 23-34.

CE

Qu, B. Y., Suganthan, P. N., & Liang, J. J. (2012). Differential Evolution with Neighborhood Mutation for Multimodal Optimization. Ieee Transactions on Evolutionary Computation, 16, 601-614. Rao, B. R., & Tiwari, R. (2007). Optimum design of rolling element bearings using genetic algorithms. Mechanism and

AC

Machine Theory, 42, 233-250. Rao, R. V., Savsani, V. J., & Balic, J. (2012). Teaching-learning-based optimization algorithm for unconstrained and constrained real-parameter optimization problems. Engineering Optimization, 44, 1447-1462.

Rao, R. V., Savsani, V. J., & Vakharia, D. P. (2012). Teaching–Learning-Based Optimization: An optimization method for continuous non-linear large scale problems. Information Sciences, 183, 1-15.

Rodríguez, L., Castillo, O., & Soria, J. (2016). Grey wolf optimizer with dynamic adaptation of parameters using fuzzy logic. In

Evolutionary Computation (pp. 3116-3123).

Rodríguez, L., Castillo, O., Soria, J., Melin, P., Valdez, F., Gonzalez, C. I., Martinez, G. E., & Soto, J. (2017). A Fuzzy Hierarchical Operator in the Grey Wolf Optimizer Algorithm. Applied Soft Computing. Rodriguez, L., Castillo, O., Garcia, M., Soria, J., Valdez, F., & Melin, P. (2017). Dynamic simultaneous adaptation of parameters in the grey wolf optimizer using fuzzy logic. In

IEEE International Conference on Fuzzy Systems (pp.

ACCEPTED MANUSCRIPT

1-6). Sadollah, A., Bahreininejad, A., Eskandar, H., & Hamdi, M. (2013). Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Applied Soft Computing, 13, 2592-2612. Sanchez, D., Melin, P., & Castillo, O. (2017). A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition. Computational Intelligence and Neuroscience, 2017, 26. Sheikhalishahi, M., Ebrahimipour, V., Shiri, H., Zaman, H., & Jeihoonian, M. (2013). A hybrid GA-PSO approach for reliability optimization in redundancy allocation problem. International Journal of Advanced Manufacturing Technology, 68, 317-338. 4460-4493.

CR IP T

Shi, Y., Liu, H. C., Gao, L., & Zhang, G. H. (2011). Cellular particle swarm optimization. Information Sciences, 181, Simon, D. (2008). Biogeography-Based Optimization. Evolutionary Computation, IEEE Transactions on, 12, 702-713. Storn, R., & Price, K. (1997). Differential evolution - A simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11, 341-359.

Tanabe, R., & Fukunaga, A. S. (2014). Improving the search performance of SHADE using linear population size reduction. In

Evolutionary Computation (pp. 1658-1665).

Evolutionary Computation,

AN US

Thomsen, R. (2004). Multimodal optimization using crowding-based differential evolution. In 2004. CEC2004. Congress on (pp. 1382-1389 Vol.1382).

Valian, E., & Valian, E. (2013). A cuckoo search algorithm by Levy flights for solving reliability redundancy allocation problems. Engineering Optimization, 45, 1273-1286.

Wang, Y., Cai, Z. X., Zhou, Y. R., & Fan, Z. (2009). Constrained optimization based on hybrid evolutionary algorithm and adaptive constraint-handling technique. Structural and Multidisciplinary Optimization, 37, 395-413. Computations, 29, 464-483.

M

Yang, X. S., & Gandomi, A. H. (2012). Bat algorithm: a novel approach for global engineering optimization. Engineering Yao, X., Liu, Y., & Lin, G. M. (1999). Evolutionary programming made faster. Ieee Transactions on Evolutionary Computation, 3, 82-102.

ED

Yi, J., Gao, L., Li, X. Y., & Gao, J. (2016). An efficient modified harmony search algorithm with intersect mutation operator and cellular local search for continuous function optimization problems. Applied Intelligence, 44, 725-753. Yu, H., Liu, Y., Wang, Y., & Gao, S. (2017). Chaotic grey wolf optimization. In

International Conference on Progress in

PT

Informatics and Computing (pp. 103-113).

Zhang, S., Zhou, Y. Q., Li, Z. M., & Pan, W. (2016). Grey wolf optimizer for unmanned combat aerial vehicle path planning.

AC

CE

Advances in Engineering Software, 99, 121-136.

Caption lists Fig. 1. Position updating in GWO Fig. 2. Structure of neighborhood Fig. 3. Flow chart of the proposed CGWO algorithm Fig. 4. Pseudo code of the proposed CGWO algorithm

ACCEPTED MANUSCRIPT

Fig. 5. Position updating in CGWO Fig. 6. Convergence curve of these algorithms when solving benchmarks Fig. 7. Niching behavior of CGWO (with a population of 400) on F25 in a single run Fig. 8. Niching behavior of CGWO (with a population of 400) on F26 in a single run Fig. 9. Niching behavior of CGWO (with a population of 400) on F28 in a single run Fig. 10. Niching behavior of CGWO (with a population of 400) on F30 in a single run Fig. 11. CPU-Time consumption on all test functions

Table 2. Multimodal benchmark functions Table 3. CEC2015 benchmark functions Table 4. Parameter setting of algorithms Table 5. Results by CGWO with different neighborhood on unimodal benchmark functions Table 6. Results by CGWO with different neighborhood on multimodal benchmark functions

CR IP T

Table 1. Unimodal benchmark functions

Table 7. Wilcoxon sign rank test on the solution by CGWO with different structures for benchmarks in Tables 1-2 (a level of significance α=0.05)

AN US

Table 8. Results by different algorithms on unimodal benchmark functions Table 9. Results by different algorithms on multimodal benchmark functions

Table 10. Wilcoxon sign rank test on the solution by different algorithms for benchmarks in Tables 1-2 (a level of significance α=0.05) Table 11. Success rate on CEC2015 with multiple global optimum

Table 12. Wilcoxon sign rank test on the solution by different algorithms for CEC2015 with multiple global optimum (a level of significance α=0.05) Table 13. Results by different improved versions of GWO and iHS on benchmarks in Tables 1 and 2

Table 14. Wilcoxon sign rank test on the solution by different improved versions of GWO for benchmarks in Tables 1-2 (a level of significance α=0.05)

M

Table 15. Results by different algorithms on average number of optima found (F24-F31)

Table 16. Results by different algorithms (statistic values of 5 best solutions separated at least by 10 on F32-F38)

ED

Table 17. CPU-time costs on three different test problems

Table 18. Comparison of best solution obtained from various previous studies for tension/compression spring design. Table 19. Comparison of statistical results given by different optimizers for tension/compression spring design Table 20. Data used in overspeed protection system of a gas turbine.

PT

Table 21. Comparison of best solution obtained from various previous studies for overspeed protection system of a gas turbine Table 22. Comparison of statistical results given by different optimizers for the overspeed protection system of a gas turbine

CE

Table 23. Comparison of best solution obtained by DE, GWO, and CGWO for rolling element bearing

AC

Table 24. Comparison of statistical results given by DE, GWO, and CGWO for rolling element bearing