A global-best guided phase based optimization algorithm for scalable optimization problems and its application

Journal of Computational Science 25 (2018) 38–49

Zijian Cao a,b, Lei Wang a,*, Xinhong Hei a

a School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, 710048, China
b School of Computer Science and Engineering, Xi'an Technological University, Xi'an, 710021, China

Article history: Received 10 April 2017; Received in revised form 25 December 2017; Accepted 1 February 2018; Available online 6 February 2018.

Keywords: Stochastic search; Global-best guided search; Large scale optimization; Transmission pricing

Abstract

Large scale optimization problems are more representative of real-world problems and remain one of the most challenging tasks for the design of new types of evolutionary algorithms. Very recently, a new meta-heuristic algorithm named Phase Based Optimization (PBO), inspired by the different motional features of individuals in three different phases (gas phase, liquid phase and solid phase), was proposed. In order to improve PBO for solving large scale optimization problems, an effective search strategy combining complete stochastic search (the diffusion operator) and global-best guided search (the improved perturbation operator) is utilized. The proposed strategy provides a well-balanced compromise between population diversity (diversification) and convergence speed (intensification), especially for large scale optimization problems. We term the improved algorithm Global-best guided PBO (GPBO) to avoid ambiguity. Seven well-known scalable benchmark functions and a real-world large scale transmission pricing problem are used to validate the performance of GPBO against several state-of-the-art algorithms. The experimental results demonstrate that GPBO provides better solution accuracy and convergence ability on both large scale benchmark functions and the real-world optimization problem.

1. Introduction

Large scale optimization problems have attracted increasing attention in evolutionary computation and swarm intelligence [1–3], because real industrial problems may have hundreds or even thousands of variables, and such large scale problems introduce highly complex challenges into the optimization process, such as strong interaction among variables and high multimodality [4–8]. As the number of decision variables increases, and as the characteristics of some problems change with dimensionality [2,9], the solution space of a problem grows exponentially and the performance of most optimization algorithms deteriorates rapidly [10]. That is to say, a previously successful search strategy may become less effective in finding the optimal solution as the dimensionality of the search space increases. All these difficulties motivate us to analyze the scalable features of large scale problems in depth and to develop more effective optimization algorithms for problems with hundreds of variables.

* Corresponding author at: School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, 710048, China. E-mail addresses: [email protected] (Z. Cao), [email protected] (L. Wang). https://doi.org/10.1016/j.jocs.2018.02.001

Several factors cause the difficulties in solving large scale optimization problems listed above. Firstly, the difficulty of a large scale optimization problem lies mainly in its high dimensionality: the solution space of a problem increases exponentially with the problem dimension, i.e. the curse of dimensionality. More effective search strategies are therefore needed to explore all promising regions within the given computational resources, or fitness evaluation budget. Secondly, in a high-dimensional space, the direction of a good individual has a low probability of pointing towards the global optimum, so it is difficult to determine which direction of a good individual is better within a fixed number of fitness evaluations. In the field of evolutionary computation and swarm intelligence, an individual is evaluated on all dimensions, even when it is updated in only one dimension. The update of an individual derives from the combination of several vectors, such as the current individual, the difference between the current individual and the previous best individual, the difference between the current individual and the neighborhood best individual, or the difference between two random individuals. The direction of such a vector combination has a high probability of pointing towards the global optimum in a low-dimensional space [9], but this scheme is not necessarily suitable for high-dimensional spaces. Thirdly, strong interaction among variables increases the difficulty of searching for the global optimum in high-dimensional problems.


Among the above three factors, the most crucial task of an evolutionary algorithm is how to deal with the complex search space resulting from high dimensionality [11]. To address these difficulties, many effective algorithms with particular mechanisms have been proposed. They fall into two main categories: Cooperative Coevolution (CC) based algorithms and non-decomposition-based algorithms.

The first category is CC-based algorithms with specific decomposition strategies, which adopt the divide-and-conquer approach to decompose large scale problems into multiple low-dimensional subcomponents [12]. After Potter and De Jong first incorporated CC into the Genetic Algorithm (GA) by decomposing one n-dimensional problem into n 1-D problems for function optimization [13], many decomposition strategies were developed within the CC framework to decompose a high-dimensional problem into a number of low-dimensional problems, such as Random Grouping [14,33], Delta Grouping [15], Variable Interaction Learning [16], Differential Grouping [17], Multilevel CC [18], and High Dimensional Model Representation [19]. Cooperative coevolution in its different forms has achieved good performance and great success on separable problems. However, if the variables are fully non-separable, all CC-based algorithms lose their effectiveness [20]. Besides, in CC-based algorithms, issues such as the robustness and efficiency of the grouping strategies must also be handled at reasonable computational cost.

Non-decomposition-based algorithms are the second category; they are devised with effective operators, or combined with other optimization algorithms, to solve a large scale problem as a whole [8]. Without a divide-and-conquer strategy, non-decomposition-based algorithms focus on special alteration mechanisms or methods, such as an efficient population utilization strategy [21], a dynamic multi-swarm strategy [22], a competitive mechanism [23], social learning mechanisms [24,25], opposition-based learning [26], sampling operators [27,28], hybridization [29], a restarting strategy [10], local search [30], and hybrid meta-heuristic strategies [31]. These algorithms have also achieved good results on separable and non-separable large scale problems, but there is still no universal algorithm that perfectly solves all large scale optimization problems [32]. In addition, from the standpoint of practical use, many algorithms impose extra burdens such as parameter tuning and population initialization [33], which can be a tedious and trivial task. In this study, we mainly focus on the second category of non-decomposition-based algorithms.

Most recently, a new meta-heuristic termed Phase Based Optimization (PBO) was proposed by the authors [34]. PBO is inspired by the different motional features of individuals in three different phases in nature (gas phase, liquid phase and solid phase) and exhibits good performance on big optimization problems [3]. In this paper, we further investigate the performance of PBO on large scale optimization problems and improve it to better adapt to more complex high-dimensional problems. Based on the original PBO, we propose an improved PBO with an effective strategy combining complete stochastic search and global-best guided search, termed Global-best guided PBO (GPBO), for solving large scale optimization problems.

In this paper, firstly, we propose an effective strategy combining complete stochastic search and global-best guided search, and show its performance on lower-dimensional problems (100-D and 500-D). The basic idea of this strategy is that the higher the dimensionality of the problem, the more effective the global search and scalability of the algorithm need to be. Therefore, under the premise of maintaining good diversity (diversification), once the approximate position of the optimal solution has been located, the strategy should shift towards strengthening local search and fine-tuning (intensification).


Fig. 1. Search process of PBO (the star denotes the global optimum; circles, triangles and squares denote gas-phase, liquid-phase and solid-phase individuals, respectively).

Secondly, the strategy is further validated on seven well-known large scale benchmark functions (1000-D). Finally, the proposed GPBO is applied to the large scale transmission pricing problem. By introducing the effective strategy combining complete stochastic search and global-best guided search into PBO, the algorithm is greatly improved, as shown by experimental studies not only on large scale benchmark functions but also on a real-world complex optimization problem in engineering and science.

The remainder of this paper is organized as follows. Section 2 briefly reviews the phase based optimization algorithm. Section 3 describes the global-best guided phase based optimization algorithm in detail, followed by an analysis of the dynamic search process of GPBO. Section 4 demonstrates the experimental results and comparisons with six other algorithms. A case analysis of large scale transmission pricing using GPBO is given in Section 5. Finally, Section 6 draws the conclusion.

2. A brief review of PBO algorithm

2.1. Basic idea

PBO is a newly developed population-based stochastic search algorithm proposed by the authors [34]. It mainly simulates three types of motional features of individuals in three completely different phases found in nature: gas phase, liquid phase and solid phase. The simplified search process of PBO is shown in Fig. 1, where the star denotes the global optimum to be searched for, the squares in the region nearer the global optimum indicate solid-phase individuals, the circles in the region farther from the global optimum represent gas-phase individuals, and the triangles are liquid-phase individuals. In each iteration, the individuals of the three phases move through the search space according to their corresponding rules. A gas-phase individual moves freely towards a casual position in an arbitrary direction; this motional characteristic is adopted to provide global divergence in PBO. A liquid-phase individual moves within its neighborhood towards an individual with lower temperature; this motional characteristic plays the key role of convergence. In contrast, a solid-phase individual slightly vibrates at its original location in a very regular mode, so this motional characteristic is utilized as a fine-tuning local search.


In nature, it is obvious that an individual is most unstable in the gas phase and most stable in the solid phase; the liquid phase, between gas and solid, is the intermediate phase. The specific definitions of the three phases are as follows.

Definition 1. Gas Phase (GP). The individual of gas phase is in a completely disordered state and can move freely, with large steps, in any direction. In gas phase, the individual is devised to exert the capability of global search.

Definition 2. Liquid Phase (LP). The individual of liquid phase moves according to a certain law, such as towards a lower-temperature point. In liquid phase, the individual is devised to own the capability of local convergence.

Definition 3. Solid Phase (SP). The individual of solid phase moves within a very small range of activity. In solid phase, the individual possesses the capability of fine-tuning.

Table 1 generalizes the motional features of the three different phases.

Table 1
Motional features of individuals in three phases.

| Phase | Main features | Motion |
| GP | disorganized, moving towards an arbitrary direction | stochastic motion |
| LP | a state between disorganized and regular, having a moderate activity | flowing motion |
| SP | regular, moving in a certain rule | vibration motion |

In PBO, the fitness value of an individual corresponds to the temperature of the individual, which is directly associated with the temperature of the natural environment around it. Suppose the location of the lowest temperature corresponds to the best solution. It is then intuitive that the higher-temperature part of the population is regarded as gas-phase individuals, the lower-temperature part as solid-phase individuals, and the remaining middle part as liquid-phase individuals. A simple subpopulation partitioning method based on the level of individual temperature is utilized to divide the whole population into three subpopulations: the gas, liquid and solid subpopulations.

2.2. The main operators

In PBO, there are three main operators: the diffusion operator of gas phase, the flowing operator of liquid phase and the perturbation operator of solid phase. They are briefly introduced below; the reader is referred to [34] for more details about PBO.

Firstly, diffusion is the common phenomenon by which a gas-phase individual moves randomly through the whole search space. The diffusion operator of gas phase is described as follows:

newX_i^j = X_i^j + r_d · (UB_j − LB_j)/2    (1)

where newX_i^j is the new individual after the stochastic diffusion motion of randomly moving in the whole search space, r_d is a random number in (0, 1), and LB_j and UB_j denote the lower and upper boundaries of the j-th dimension, respectively.

Besides, in liquid phase, because the individuals are relatively closer to each other than in gas phase, they attract each other with a certain degree of fluidity. In general, an individual is inclined to be attracted towards any individual with relatively lower temperature; hence, the flowing operator of the liquid individual is implemented as follows:

newX_i^j = X_i^j + r_f · (X_lt^j − X_i^j)    (2)

where newX_i^j is the new individual after the flowing operation, X_lt is an individual with a lower fitness value than X_i, and r_f is a uniformly distributed random number between 0 and 1.

In addition, individuals in solid phase usually merely vibrate up and down about their original location; therefore, the perturbation operator of solid phase is carried out as follows:

newX_i^d1 = X_i^d1 + r_p · (X_i^d1 − X_i^d2)    (3)

where newX_i^d1 is the new position of X_i in dimension d1 after the perturbation operation, and r_p is a random number between −1 and 1. The indices d1 and d2 are mutually exclusive integer values randomly chosen from [1, D].
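To make the operators concrete, the following minimal Python sketch implements Eqs. (1)–(3) together with the temperature-based partition described above. The vectorized form and all names are our illustrative assumptions, not the authors' reference implementation; in particular, drawing r_d and r_f per dimension is one reading of the scalar random numbers in the paper.

```python
import numpy as np

rng = np.random.default_rng()

def partition_phases(fitness, ps=1/3, pl=1/3):
    # Rank by fitness ("temperature", minimization): the coolest ps-fraction
    # forms the solid subpopulation, the next pl-fraction the liquid one,
    # and the hottest remainder the gas one.
    order = np.argsort(fitness)
    n_solid, n_liquid = int(ps * len(fitness)), int(pl * len(fitness))
    solid = order[:n_solid]
    liquid = order[n_solid:n_solid + n_liquid]
    gas = order[n_solid + n_liquid:]
    return gas, liquid, solid

def diffusion(x, lb, ub):
    # Eq. (1): a gas individual takes a large random step scaled by half
    # the search range; rd is drawn in (0, 1).
    return x + rng.random(x.size) * (ub - lb) / 2.0

def flowing(x, x_lt):
    # Eq. (2): a liquid individual moves towards a lower-temperature
    # (better) individual x_lt; rf ~ U(0, 1).
    return x + rng.random(x.size) * (x_lt - x)

def perturbation(x):
    # Eq. (3): a solid individual vibrates in one dimension d1, scaled by
    # the difference to a second, distinct dimension d2; rp ~ U(-1, 1).
    d1, d2 = rng.choice(x.size, size=2, replace=False)
    new_x = x.copy()
    new_x[d1] += rng.uniform(-1.0, 1.0) * (x[d1] - x[d2])
    return new_x
```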

3. Global-best guided phase based optimization algorithm

PBO proposed a basic optimization framework, inspired by the motional features of individuals in three different phases, for tackling basic unimodal and multimodal optimization problems. However, optimization problems in the literature have become more and more complex, e.g. shifted high-dimensional functions. It is well known that meta-heuristic algorithms, especially population-based ones, usually integrate specific laws inspired by natural phenomena and principles into the stochastic search process. Based on this motivation, we attempt to improve the original PBO to make it suitable for solving more complex optimization problems, especially high-dimensional ones.

3.1. Global-best guided PBO (GPBO)

In this Section, we present an effective strategy combining complete stochastic search, which is derived from the diffusion operator of gas phase, and guided search, which is implemented by modifying the perturbation operator of solid phase with a global-best guided approach. It is worth noting that we only modify the perturbation operator of solid phase, without any major conceptual change to the structure and parameters of PBO. Since the main difference between the original PBO and the improved algorithm lies in the global-best guided perturbation operator, we term the improved algorithm Global-best guided PBO (GPBO) to avoid ambiguity.

The basic idea of the new strategy in GPBO is as follows. In solving high-dimensional optimization problems, the higher the dimensionality, the more effective the global search of the algorithm needs to be. In PBO, the diffusion operator of gas phase can be seen as a complete stochastic search strategy. Although the efficiency of this stochastic strategy is relatively low in terms of solution accuracy, its overall global search effect is very good, since the stochastic search can reach any feasible region. However, once the approximate position of the optimal solution has been located, the strategy should quickly shift to strengthen exploitation and fine-tuning and to expedite convergence towards the optimal solution. Fast convergence in local search saves fitness evaluations under the given computational budget, and the saved evaluations help cover a larger search space. In PBO, the perturbation operator of solid phase acts on only two different dimensions of one individual; there is no interaction with other individuals and no guided orientation towards the optimal location.


Fig. 2. Main flowchart of GPBO: after parameter initialization (population size N, dimension D, gen_max) and population initialization, the individuals are evaluated and the intervals of the three phases are determined; the flowing, diffusion and global-best guided perturbation operators are applied to the liquid, gas and solid subpopulations, respectively; a new position newX_i replaces X_i only if f(newX_i) < f(X_i); the loop repeats until the termination criterion is satisfied.

Based on these considerations, the global-best guided perturbation operator is given as follows:

newX_i^d1 = X_i^d1 + r_g · (X_gb^d1 − X_i^d1)    (4)

where newX_i^d1 is the new position of X_i in dimension d1 after the global-best guided perturbation operation, r_g is also a random number between −1 and 1, d1 is an integer randomly chosen from the range between 1 and D, and X_gb is the global best individual in the current population. It is worth pointing out that the global-best guided perturbation operator acts on only one dimension of the individual and of the global best individual; the other dimensions of X_i are not changed. This is completely different from the Particle Swarm Optimization (PSO) algorithm, which also adopts guidance by the global best individual but uses all dimensions. Moreover, unlike canonical PSO, where the velocity is very important to the search performance, there is no velocity term in the global-best guided perturbation operator of GPBO.
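In code, the change relative to Eq. (3) is small. Here is a sketch under the same assumptions as the earlier snippet (reusing its rng); updating all dimensions at once instead of the single index d1 would yield the GPBO-All variant compared in Section 4.3.3.

```python
def global_best_guided_perturbation(x, x_gb):
    # Eq. (4): perturb exactly one randomly chosen dimension d1, guided by
    # the global best individual x_gb; rg ~ U(-1, 1), so the step may move
    # towards or away from x_gb. All other dimensions stay unchanged.
    d1 = rng.integers(x.size)
    new_x = x.copy()
    new_x[d1] += rng.uniform(-1.0, 1.0) * (x_gb[d1] - x[d1])
    return new_x
```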

3.2. The main flowchart of GPBO

As described above, the main flowchart of GPBO is given in Fig. 2.

3.3. The dynamic search process of GPBO

The step-wise optimization process of GPBO is given in this Section. Herein, Rastrigin's function is considered as an example for demonstrating the optimization process. The two-dimensional landscape plot of Rastrigin's function is shown in Fig. 3(a). The Rastrigin formula is described as follows:

F(x) = 10D + Σ_{i=1}^{D} (x_i^2 − 10 cos(2πx_i))    (5)
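For reference, Eq. (5) in vectorized Python (the shifted CEC 2008 variant would first subtract the shift vector from x):

```python
def rastrigin(x):
    # Eq. (5): F(x) = 10*D + sum_i (x_i^2 - 10*cos(2*pi*x_i));
    # global minimum F = 0 at x = 0, surrounded by many local minima.
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + float(np.sum(x * x - 10.0 * np.cos(2.0 * np.pi * x)))
```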

The Rastrigin function is a classic test function in optimization theory whose global minimum is surrounded by a great number of local minima. Converging to the global minimum without getting stuck in one of these local minima is extremely difficult; moreover, some numerical optimizers take a long time to converge to the global optimum. In this experiment, the dynamic search steps of 10 individuals at various generations (gen = 1, 5 and 10) of one evolutionary process are shown in Fig. 3(b)–(d), respectively. In Fig. 3(b)–(d), the red circles denote the optimal solution to be found. Firstly, the individuals are uniformly randomly distributed over the whole search space in Fig. 3(b). Then, in the fifth generation, several individuals have located some local minima under the influence of the global-best guided perturbation operator. To effectively utilize the number of fitness evaluations, fast localization of the local minima is very important when optimizing high-dimensional functions. Next, in the tenth generation, many individuals search further towards the global minimum. It can be observed that the population distribution varies significantly across generations during the optimization process. Hence, the experiments verify that GPBO can adapt to a landscape with many local minima, finding the global minimum with fast convergence while retaining good population diversity.
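Combining the pieces sketched so far, one GPBO generation of the kind visualized in Fig. 3 can be outlined as follows. This is a hypothetical driver: the greedy replacement mirrors the f(newX_i) < f(X_i) test of Fig. 2, while the choice of a solid-subpopulation partner as the "lower temperature" individual for the flowing operator is our own simplifying assumption.

```python
def gpbo_generation(pop, fitness, lb, ub, objective):
    # One generation: partition by fitness, move each subpopulation with
    # its phase operator, and keep a candidate only if it improves.
    gas, liquid, solid = partition_phases(fitness)
    x_gb = pop[np.argmin(fitness)]                       # global best
    for i in range(len(pop)):
        if i in gas:
            cand = diffusion(pop[i], lb, ub)
        elif i in liquid:
            cand = flowing(pop[i], pop[rng.choice(solid)])  # a cooler one
        else:
            cand = global_best_guided_perturbation(pop[i], x_gb)
        cand = np.clip(cand, lb, ub)                     # respect bounds
        f = objective(cand)
        if f < fitness[i]:                               # greedy replacement
            pop[i], fitness[i] = cand, f
    return pop, fitness
```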


Fig. 3. Dynamic search at various generations in an evolutionary process of GPBO.

3.4. Why GPBO can work for high-dimensional optimization problems

As explained in the Introduction, the solution space of a problem increases exponentially with the problem dimension, so the search space of high-dimensional problems is tremendous in size. GPBO addresses this by evolving multiple subpopulations (the gas, liquid and solid subpopulations), thereby maintaining search diversity and trying to cover a larger region of the search space. It is worth noting that the gas subpopulation is completely stochastic, so it can guarantee good global search performance and avoid falling into local optima. Besides, in a high-dimensional space, the direction of a good individual has a low probability of pointing towards the global optimum, so it is difficult to determine which direction of a good individual is better; this is a major obstacle to obtaining good solutions for high-dimensional problems. GPBO handles it with global-best guided search along a single stochastically chosen dimension of the solid subpopulation. If the mutation were carried out along multiple dimensions, the probability of losing the elitist parts would greatly increase; that is to say, mutating the better-fitness subpopulation in one dimension helps retain the elitist parts.

4. Experimental studies

4.1. Test problems

Seven widely used benchmark functions proposed in the CEC 2008 special session on large scale optimization [1] are used to evaluate the performance of GPBO. All seven functions are scalable to any dimension; their details are summarized in Table 2. Each function has three variants, with 100, 500 and 1000 dimensions, giving twenty-one minimization problems in total. Nevertheless, to assess the performance of GPBO on high-dimensional problems, we focus mainly on the 1000-dimensional versions of the CEC 2008 test functions.


Table 2
Test functions of CEC 2008.

| No. | Function name | Type | Low | Up | Separability |
| F1 | Shifted Sphere Function | unimodal | −100 | 100 | separable |
| F2 | Shifted Schwefel's Problem 2.21 | unimodal | −100 | 100 | non-separable |
| F3 | Shifted Rosenbrock's Function | multi-modal | −100 | 100 | non-separable |
| F4 | Shifted Rastrigin's Function | multi-modal | −5 | 5 | separable |
| F5 | Shifted Griewank's Function | multi-modal | −600 | 600 | non-separable |
| F6 | Shifted Ackley's Function | multi-modal | −32 | 32 | separable |
| F7 | Fast Fractal Function | multi-modal | −1 | 1 | non-separable |
4.2. Compared algorithms and parameter settings

The performance of GPBO is compared with two kinds of algorithms: CC-based algorithms, including CCPSO2 [35], DECC-G [37] and MLCC [38], and non-decomposition-based algorithms, including CSO [23], SLPSO [24] and DMS-PSO [36]. These are state-of-the-art algorithms for large scale optimization problems and are briefly described as follows.

1) CCPSO2: the new version of the Cooperative Coevolving PSO algorithm, which employs the CC framework with an effective variable grouping technique (random grouping), adopts a new position update rule relying on Cauchy and Gaussian distributions to sample new points in the search space, and uses a scheme to dynamically determine the sizes of the coevolving subcomponents [35].

2) DECC-G: DECC-G applies the Self-adaptive Differential Evolution with Neighborhood Search (SaNSDE) algorithm [39] under the CC framework. In DECC-G, a random grouping scheme was first introduced to solve separable problems within the CC framework; it has since been widely used in other algorithms [20,35,38].

3) MLCC: to better determine the group size of DECC-G, the Multi-Level Cooperative Coevolution (MLCC) framework was proposed, in which a set of problem decomposers is constructed based on the random grouping scheme with different group sizes. Like other CC-based algorithms, MLCC divides the evolution process into a number of cycles; at the start of each cycle, it selects a decomposer with a self-adaptive mechanism according to its historical performance [38].

4) CSO: the Competitive Swarm Optimizer, fundamentally inspired by PSO but with a novel pairwise competition mechanism in which any particle can be a potential leader. That is to say, neither the personal best position of each particle nor the global best position is involved in the particle updates of CSO [23].

5) SLPSO: the Social Learning PSO, inspired by behavior learning among social animals. Unlike classical PSO variants, where the particles are updated based on the global best and the personal best, each particle in SLPSO learns from any better particle (called a demonstrator) in the current population [24].

6) DMS-PSO: in the Dynamic Multi-Swarm PSO, the whole population is first divided into many small sub-swarms, which are then randomly regrouped according to various regrouping schedules. All sub-swarms execute the same update operator as classical PSO [36].

Although there are many studies on the control parameters of the different algorithms, in the comparison experiments all parameters of the above algorithms follow their original papers. For GPBO and PBO, the population size is set to 30 and the remaining parameters are identical; that is, the proportion of solid phase PS and the proportion of liquid phase PL are both set to 1/3. Besides, the evaluation criteria proposed in the CEC 2008 special session on large scale optimization are adopted.

In all experiments, for each function, a run terminates when a fixed maximum of 5000·D fitness evaluations (max-FEs) is reached. The performance of each algorithm is quantitatively measured by the value of the objective function. All experimental results for each function are obtained over 25 independent runs of each algorithm.
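Expressed as a small driver, the protocol looks as follows; `optimizer` stands for any of the compared algorithms and is a hypothetical callable returning the final optimization error, not an interface defined in the paper.

```python
import numpy as np

def evaluate_protocol(objective, dim, lb, ub, optimizer, runs=25):
    # CEC 2008 protocol: a budget of 5000*D fitness evaluations per run;
    # performance is reported as mean and standard deviation of the error
    # (best found value minus the theoretical optimum) over 25 runs.
    max_fes = 5000 * dim
    errors = np.array([optimizer(objective, dim, lb, ub, max_fes)
                       for _ in range(runs)])
    return errors.mean(), errors.std()
```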

4.3. Numerical experiments and results

4.3.1. Comparing GPBO with PBO and CSO on functions of 100-D, 500-D and 1000-D

Because CSO has been shown to perform considerably better than other algorithms on large scale optimization problems [23], we make a detailed comparison between GPBO and CSO on functions of 100-D, 500-D and 1000-D, also including the original PBO. The solution accuracies of GPBO, PBO and CSO on these functions are summarized in Table 3. All results are given as optimization errors between the obtained function values and the theoretical global optima. The best mean results and standard deviations for each function are highlighted in bold in the original table. In each cell, the mean value (Mean) averaged over 25 independent runs and the standard deviation (Std) are separated by the symbol '±'. To compare results statistically, the p-value of a two-tailed t-test with significance level α = 0.05 is used to evaluate whether the mean fitness values of two sets of results differ significantly from each other. If GPBO significantly outperforms another algorithm, a '+' is appended to the corresponding result of that algorithm; conversely, 'ξ' denotes that GPBO is worse than the compared algorithm, and '∼' denotes no significant difference between GPBO and the compared algorithm. The last rows of the table summarize the total numbers of '+', 'ξ' and '∼'.

As shown by the accuracy results summarized in Table 3, GPBO yields the best results on three of the seven functions (F1, F3 and F4), while being worse on F2. The advantage on F3 and F4 shows that the strategy combining complete stochastic search and global-best guided search is reasonably effective. Across the 100-D, 500-D and 1000-D functions, GPBO performs better than the original PBO, and slightly better than CSO on the 100-D and 1000-D functions. It is worth noting that as the dimension of the functions increases from 100-D to 1000-D, the advantage of GPBO becomes more pronounced, especially on F6. Therefore, from a general point of view, GPBO scales better than CSO over the 100-D, 500-D and 1000-D functions.

Besides solution accuracy, another key indicator of the performance of evolutionary algorithms is convergence speed. The comparison of convergence speed on F1 to F7 of 500-D is shown in Fig. 4.
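The significance labels used in Tables 3 and 4 can be reproduced with a standard two-tailed t-test, for example via SciPy (a sketch; the paper does not state which implementation was used):

```python
import numpy as np
from scipy import stats

def significance_label(gpbo_errors, other_errors, alpha=0.05):
    # Two-tailed t-test over the 25-run error samples of GPBO and another
    # algorithm: '+' = GPBO significantly better, 'ξ' = significantly
    # worse, '∼' = no significant difference at level alpha.
    _, p_value = stats.ttest_ind(gpbo_errors, other_errors)
    if p_value >= alpha:
        return "∼"
    return "+" if np.mean(gpbo_errors) < np.mean(other_errors) else "ξ"
```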


Fig. 4. Convergence performance of the compared algorithms on F1 to F7 of 500-D.


Table 3
Accuracy results of GPBO, PBO and CSO on test functions of 100-D, 500-D, and 1000-D (Mean ± Std over 25 runs; p-values are of the two-tailed t-test against GPBO; 'ξ': GPBO significantly worse, '∼': no significant difference).

| Func. | Dim. | GPBO Mean ± Std | PBO Mean ± Std (p-value) | CSO Mean ± Std (p-value) |
| F1 | 100-D | 0.00E+00 ± 0.00E+00 | 2.28E+00 ± 3.01E-01 (1.95E-37) | 4.04E-30 ± 1.40E-29 (1.55E-01) |
| F1 | 500-D | 1.31E-28 ± 2.46E-28 | 1.46E+01 ± 1.05E+00 (5.71E-50) | 8.81E-11 ± 4.12E-10 (2.91E-01) |
| F1 | 1000-D | 3.93E-27 ± 1.23E-26 | 2.99E+01 ± 1.56E+00 (1.97E-56) | 1.08E-21 ± 4.00E-23 (1.27E-63) |
| F2 | 100-D | 7.26E+01 ± 8.25E+00 | 6.26E+01 ± 3.67E+00 ξ (1.21E-06) | 3.44E+01 ± 5.41E+00 ξ (2.41E-24) |
| F2 | 500-D | 1.14E+02 ± 4.00E+00 | 9.85E+01 ± 2.01E+00 ξ (4.62E-22) | 1.01E+02 ± 4.14E+00 ∼ (5.43E-15) |
| F2 | 1000-D | 1.28E+02 ± 2.52E+00 | 1.11E+02 ± 3.57E+00 ξ (7.30E-25) | 3.21E+01 ± 1.25E+00 ξ (1.72E-68) |
| F3 | 100-D | 2.18E+01 ± 1.67E+01 | 1.22E+03 ± 1.37E+02 (2.83E-40) | 2.13E+02 ± 2.39E+02 (2.33E-04) |
| F3 | 500-D | 2.38E+01 ± 1.97E+01 | 7.22E+03 ± 4.30E+02 (1.26E-53) | 8.46E+02 ± 1.74E+02 (5.43E-28) |
| F3 | 1000-D | 6.51E+00 ± 5.30E+00 | 1.50E+04 ± 6.61E+02 (6.03E-60) | 9.92E+02 ± 2.70E+01 (1.62E-69) |
| F4 | 100-D | 8.55E-02 ± 2.75E-01 | 2.51E+00 ± 5.76E-01 (5.64E-24) | 5.37E+01 ± 1.02E+01 (4.48E-30) |
| F4 | 500-D | 2.09E+00 ± 9.46E-01 | 1.93E+01 ± 1.44E+00 (5.69E-43) | 1.21E+03 ± 6.65E+01 (2.48E-55) |
| F4 | 1000-D | 7.46E+00 ± 1.66E+00 | 4.26E+01 ± 1.81E+00 (2.18E-50) | 7.15E+02 ± 2.66E+01 (2.93E-63) |
| F5 | 100-D | 1.06E-04 ± 3.19E-04 | 7.66E-01 ± 4.64E-02 (2.31E-53) | 3.94E-04 ± 1.97E-03 ∼ (4.73E-01) |
| F5 | 500-D | 1.11E-15 ± 0.00E+00 | 1.05E+00 ± 3.43E-02 (3.69E-66) | 1.50E-01 ± 2.66E-01 (7.07E-03) |
| F5 | 1000-D | 2.13E-13 ± 9.71E-13 | 1.23E+00 ± 2.80E-02 (8.67E-74) | 2.22E-16 ± 0.00E+00 ξ (2.79E-01) |
| F6 | 100-D | 1.75E-01 ± 3.79E-01 | 3.16E-01 ± 3.31E-02 ∼ (7.10E-02) | 1.07E-14 ± 0.00E+00 ξ (2.48E-02) |
| F6 | 500-D | 8.15E-03 ± 2.84E-02 | 3.99E-01 ± 2.18E-02 (7.64E-45) | 4.83E+00 ± 5.80E-01 (2.94E-39) |
| F6 | 1000-D | 4.49E-12 ± 1.29E-12 | 4.27E-01 ± 1.24E-02 (1.10E-68) | 1.21E-12 ± 2.05E-14 ∼ (5.95E-17) |
| F7 | 100-D | −1.45E+03 ± 1.80E+01 | −1.47E+03 ± 1.39E+01 ξ (2.11E-05) | −1.47E+03 ± 1.52E+01 ∼ (4.32E-05) |
| F7 | 500-D | −6.96E+03 ± 1.44E+02 | −7.04E+03 ± 3.35E+01 ξ (1.40E-02) | −6.70E+03 ± 5.11E+01 ∼ (3.93E-11) |
| F7 | 1000-D | −1.39E+04 ± 4.01E+02 | −1.39E+04 ± 3.37E+01 ∼ (2.76E-02) | −1.40E+04 ± 4.82E+01 ∼ (3.51E-04) |
| +/ξ/∼ | 100-D | — | 4/2/1 | 3/2/2 |
| +/ξ/∼ | 500-D | — | 5/2/0 | 5/2/0 |
| +/ξ/∼ | 1000-D | — | 5/1/1 | 3/2/2 |
It can be observed that the convergence speed of GPBO is faster than that of CSO and PBO, except on F2 and F7.

4.3.2. Comparing GPBO with other algorithms on functions of 1000-D

For a more comprehensive comparison with different kinds of algorithms, we compare the performance of GPBO with the algorithms listed in Section 4.2 on functions of 1000-D. The solution accuracies obtained by GPBO and the other seven algorithms on F1–F7 of 1000-D are listed in Table 4. In each cell of Table 4, the first line is the mean value averaged over 25 independent runs, the second line is the standard deviation, and the third line is the p-value of the two-tailed t-test with significance level α = 0.05. The best mean results and standard deviations for each function are highlighted in bold in the original table. The labels '+', 'ξ' and '∼' are used as in Table 3.

As shown by the accuracy results listed in Table 4, GPBO remarkably yields the best result on F3, and MLCC obtains the best results on F4 and F7. Besides, CSO achieves the best result on F2, and SLPSO performs best on F6. Although DMS-PSO has the best results on F1 and F5, its solution accuracy on the other functions is worse than that of the other seven algorithms. It can also be observed that GPBO beats PBO, CCPSO2, DECC-G, MLCC, CSO, SLPSO and DMS-PSO on five, three, six, two, three, three and four functions, respectively. Conversely, PBO, CCPSO2, DECC-G, MLCC, CSO, SLPSO and DMS-PSO outperform GPBO on one, three, one, three, two, two and three functions, respectively. In general, GPBO performs better than PBO, DECC-G and DMS-PSO, and slightly better than CSO and SLPSO. However, it should be noted that GPBO is not fully competitive with CCPSO2 and MLCC, both of which utilize a divide-and-conquer strategy within the CC-based framework. It can be expected that, by introducing more sophisticated mechanisms or utilizing the CC-based framework, GPBO would obtain considerable further performance improvement.

4.3.3. The performance comparison of GPBO in one dimension and in all dimensions

To illustrate the effectiveness of the global-best guided strategy in only one dimension, an experiment is conducted comparing GPBO using one dimension with GPBO using all dimensions. To avoid ambiguity, we name the all-dimensions version GPBO-All. The solution accuracies obtained by GPBO and GPBO-All on the 1000-D versions of F1–F7 are given in Table 5. From Table 5, it can be observed that the solution accuracy of GPBO is remarkably better than that of GPBO-All. This confirms that the global-best guided strategy in one dimension is considerably more effective than the guided strategy in all dimensions. As described above, it is difficult to determine which direction of a good individual is better in a high-dimensional space, so trial-and-error search in one dimension has a larger probability of pointing towards the global optimum than changing all dimensions at once. Thus, we conclude that the single-dimension global-best guided strategy of GPBO plays an important role in the search process in high-dimensional spaces.

4.3.4. The effect of different population sizes on GPBO

To investigate the effect of different population sizes on GPBO, we compare the performance of GPBO on the 100-dimensional F1–F7 functions with population sizes of 30, 50 and 100. For the sake of simplicity, we term the versions with population sizes of 50 and 100 GPBO-50 and GPBO-100, respectively; note that the population size of GPBO itself is 30. The compared results of GPBO under the different population sizes are presented in Table 6. From Table 6, it can be observed that the solution accuracy of GPBO on F1–F4 is clearly better than that of GPBO-50 and GPBO-100. However, interestingly, on the F5–F7


Table 4
Results of solution accuracy on functions of 1000-D (p-values are of the two-tailed t-test against GPBO; '+': GPBO significantly better, 'ξ': GPBO significantly worse, '∼': no significant difference).

| Func. | Algorithm | Mean | Std | p-value |
| F1 | GPBO | 3.93E-27 | 1.23E-26 | – |
| F1 | PBO | 2.99E+01 + | 1.56E+00 | 1.97E-56 |
| F1 | CCPSO2 | 5.18E-13 + | 9.61E-14 | – |
| F1 | DECC-G | 1.17E-22 + | 3.91E-23 | 1.09E-19 |
| F1 | MLCC | 8.46E-13 + | 5.01E-14 | – |
| F1 | CSO | 1.08E-21 + | 4.00E-23 | 1.27E-63 |
| F1 | SLPSO | 7.56E-23 + | 2.06E-24 | 5.95E-70 |
| F1 | DMS-PSO | 0.00E+00 ξ | 0.00E+00 | – |
| F2 | GPBO | 1.28E+02 | 2.52E+00 | – |
| F2 | PBO | 1.11E+02 ξ | 3.57E+00 | 7.30E-25 |
| F2 | CCPSO2 | 7.82E+01 ξ | 4.25E+01 | – |
| F2 | DECC-G | 9.00E+01 ξ | 2.90E-14 | 1.29E-51 |
| F2 | MLCC | 1.09E+02 ∼ | 4.75E+00 | – |
| F2 | CSO | 3.21E+01 ξ | 1.25E+00 | 1.72E-68 |
| F2 | SLPSO | 8.99E+01 ξ | 5.81E+00 | 7.87E-33 |
| F2 | DMS-PSO | 9.15E+01 ξ | 7.14E-01 | – |
| F3 | GPBO | 6.51E+00 | 5.30E+00 | – |
| F3 | PBO | 1.50E+04 + | 6.61E+02 | 6.03E-60 |
| F3 | CCPSO2 | 1.33E+03 + | 2.63E+02 | – |
| F3 | DECC-G | 3.32E+03 + | 2.48E+02 | 5.59E-49 |
| F3 | MLCC | 1.80E+03 + | 1.58E+02 | – |
| F3 | CSO | 9.92E+02 + | 2.70E+01 | 1.62E-69 |
| F3 | SLPSO | 1.00E+03 + | 3.59E+01 | 5.19E-64 |
| F3 | DMS-PSO | 8.98E+09 + | 4.39E+08 | – |
| F4 | GPBO | 7.46E+00 | 1.66E+00 | – |
| F4 | PBO | 4.26E+01 + | 1.81E+00 | 2.18E-50 |
| F4 | CCPSO2 | 1.99E-01 ξ | 4.06E-01 | – |
| F4 | DECC-G | 6.46E+02 + | 4.64E+01 | 1.28E-49 |
| F4 | MLCC | 1.37E-10 ξ | 3.37E-10 | – |
| F4 | CSO | 7.15E+02 + | 2.66E+01 | 2.93E-63 |
| F4 | SLPSO | 5.73E+02 + | 2.40E+01 | 9.64E-61 |
| F4 | DMS-PSO | 3.84E+03 + | 1.71E+02 | – |
| F5 | GPBO | 2.13E-13 | 9.71E-13 | – |
| F5 | PBO | 1.23E+00 + | 2.80E-02 | 8.67E-74 |
| F5 | CCPSO2 | 1.18E-03 + | 3.27E-03 | – |
| F5 | DECC-G | 2.58E+04 + | 0.00E+00 | 0.00E+00 |
| F5 | MLCC | 4.18E-13 ∼ | 2.78E-14 | – |
| F5 | CSO | 2.22E-16 ξ | 0.00E+00 | 2.79E-01 |
| F5 | SLPSO | 5.55E-16 ξ | 3.20E-17 | 2.80E-01 |
| F5 | DMS-PSO | 0.00E+00 ξ | 0.00E+00 | – |
| F6 | GPBO | 4.49E-12 | 1.29E-12 | – |
| F6 | PBO | 4.27E-01 + | 1.24E-02 | 1.10E-68 |
| F6 | CCPSO2 | 1.02E-12 ∼ | 1.68E-13 | – |
| F6 | DECC-G | 1.87E+01 + | 5.16E+00 | 4.49E-23 |
| F6 | MLCC | 1.06E-12 ∼ | 7.68E-14 | – |
| F6 | CSO | 1.21E-12 ∼ | 2.05E-14 | 5.95E-17 |
| F6 | SLPSO | 3.54E-13 ∼ | 5.72E-15 | 7.12E-21 |
| F6 | DMS-PSO | 7.76E+00 + | 8.92E-02 | – |
| F7 | GPBO | −1.39E+04 | 4.01E+02 | – |
| F7 | PBO | −1.39E+04 ∼ | 3.37E+01 | 2.76E-02 |
| F7 | CCPSO2 | −1.43E+04 ξ | 8.27E+01 | – |
| F7 | DECC-G | −1.16E+04 + | 7.63E+01 | 1.63E-29 |
| F7 | MLCC | −1.47E+04 ξ | 1.52E+01 | – |
| F7 | CSO | −1.40E+04 ∼ | 4.82E+01 | 3.51E-04 |
| F7 | SLPSO | −1.40E+04 ∼ | 4.10E+01 | 2.90E-03 |
| F7 | DMS-PSO | −7.51E+03 + | 1.64E+01 | – |

Summary +/ξ/∼: PBO 5/1/1; CCPSO2 3/3/1; DECC-G 6/1/0; MLCC 2/2/3; CSO 3/2/2; SLPSO 3/2/2; DMS-PSO 4/3/0.

Results for CCPSO2, MLCC and DMS-PSO quoted from [35], [37] and [36]; '–' indicates that the result cannot be obtained.

Table 5
Results of solution accuracy of GPBO and GPBO-All on 1000-D functions.

| Func. | GPBO Mean ± Std | GPBO-All Mean ± Std |
| F1 | 3.93E-27 ± 1.23E-26 | 2.57E+04 ± 1.32E+04 |
| F2 | 1.28E+02 ± 2.52E+00 | 1.50E+02 ± 5.69E+00 |
| F3 | 6.51E+00 ± 5.30E+00 | 2.32E+11 ± 8.28E+10 |
| F4 | 7.46E+00 ± 1.66E+00 | 5.33E+03 ± 2.52E+02 |
| F5 | 2.13E-13 ± 9.71E-13 | 2.27E+02 ± 1.37E+02 |
| F6 | 4.49E-12 ± 1.29E-12 | 1.92E+01 ± 1.89E-01 |
| F7 | −1.39E+04 ± 4.01E+02 | −7.23E+03 ± 2.29E+02 |

Table 6
Compared results of GPBO under different population sizes on 100-D functions.

| Func. | GPBO Mean ± Std | GPBO-50 Mean ± Std | GPBO-100 Mean ± Std |
| F1 | 0.00E+00 ± 0.00E+00 | 5.77E-17 ± 4.75E-17 | 1.33E-05 ± 5.10E-06 |
| F2 | 7.26E+01 ± 8.25E+00 | 7.51E+01 ± 2.72E+00 | 8.54E+01 ± 3.22E+00 |
| F3 | 2.18E+01 ± 1.67E+01 | 2.55E+01 ± 1.35E+01 | 2.90E+02 ± 4.41E+01 |
| F4 | 8.55E-02 ± 2.75E-01 | 4.68E+00 ± 1.00E+00 | 2.72E+01 ± 2.16E+00 |
| F5 | 1.06E-04 ± 3.19E-04 | 4.11E-05 ± 1.78E-04 | 9.43E-05 ± 2.71E-04 |
| F6 | 1.75E-01 ± 3.79E-01 | 4.34E-02 ± 2.17E-01 | 8.89E-02 ± 5.18E-02 |
| F7 | −1.45E+03 ± 1.80E+01 | −1.45E+03 ± 7.63E+00 | −1.43E+03 ± 7.98E+00 |

functions, GPBO-50 is slightly better than GPBO and GPBO-100. Overall, GPBO is generally better than GPBO-50 and GPBO-100; that is, a larger population size does not significantly improve the performance of GPBO.

5. Case analysis

In this Section, the large scale transmission pricing problem is considered as a real-world application to further verify the performance of GPBO. As one of the most important optimization problems in electric power systems, transmission pricing tackles the issue of allocating the fixed costs of transmission to the various stakeholders [40]. Many factors affect the choice of a transmission pricing scheme, such as ex-ante versus ex-post pricing, market-based versus non-market methods, methods for centralized or decentralized markets, and whether losses are part of transmission pricing. Evolutionary algorithms (EAs) such as the Genetic Algorithm (GA) [41], PSO [42] and Differential Evolution (DE) [43] have

received great attention from researchers in the power systems field. However, most EAs are mainly concerned with the underlying evolutionary mechanism, and their sometimes expensive computational cost motivates strong interest in their effectiveness, especially when applied to high-dimensional optimization problems. Thus, the development of optimization algorithms that perform successfully within a limited number of function evaluations is still an open and challenging research issue [44].

5.1. Problem formulation

Equivalent Bilateral Exchange (EBE), introduced in [45], is a widely used method for modeling the electric power system; it was developed for a pool market in order to calculate the final transmission charges for each node. The method presents a rule based on the assumption that every generator contributes to every load; in other words, the amount of each contribution is decided in a proportionate manner. The EBE method creates a load–generation


interaction matrix of equivalent bilateral exchanges based on the following proportionate principle [45]:

GD_ij = (P_Gi · P_Dj) / P_D^sys    (6)

where GD_ij is the amount of equivalent bilateral exchange arising between generator P_Gi at bus i and load P_Dj at bus j, and P_D^sys denotes the total load in the whole system. The net flow in line k is the absolute sum of the flows of all transactions, and is calculated as follows:

pf_k = Σ_i Σ_j |γ_ijk| · GD_ij    (7)

where γ_ijk is the sensitivity of line k to a transaction between bus i and bus j, which is calculated as follows:

γ_ijk = (X_mi − X_ni − X_mj + X_nj) / x_mn    (8)

where x_mn is the reactance of the line connecting buses m and n, and X_mi, X_ni, X_mj and X_nj are entries of the reactance matrix X.

As described above, the equivalent bilateral transactions between pool generations and pool demands are calculated by minimizing the usage rate deviations relative to the bilateral transactions, as follows:

min f = Σ_i [ Σ_j GD_ij Σ_k γ_ijk [F(GD)] / (P_gi − P′_gi) − R_ge^i ]^2 + Σ_j [ Σ_i GD_ij Σ_k γ_ijk [F(GD)] / (P_dj − P′_dj) − R_de^j ]^2    (9)

considering that

F(GD) = FC_k / (pf_k + Σ_j Σ_i |BT_ij · γ_ijk|)    (10)

and subject to

Σ_j GD_ij = P_gi − P′_gi  ∀i ∈ N_g    (11)

Σ_i GD_ij = P_dj − P′_dj  ∀j ∈ N_d    (12)

where R_ge^i is the obtained transmission usage rate for a generation at bus i, R_de^j is the obtained transmission usage rate for a demand at bus j, and FC_k is the fixed cost of line k. P_gi and P′_gi are the total generation and the sum of generations over all bilateral transactions at bus i, respectively. P_dj and P′_dj are the total demand and the sum of demands over all bilateral transactions at bus j, respectively. BT_ij represents the bilateral transaction between the generator at bus i and the demand at bus j. It is worth pointing out that an input file, including the bus and line specifications, the fixed cost to be recovered and the bilateral transaction data, is defined beforehand.

In formula (9), each GD_ij is considered an exchange between a generator and a load in the power system, so the size of the optimization problem increases with the system size. In this study, the dimension of the transmission pricing problem is 126-D, with N_g = 6 and N_d = 21 for the IEEE 30 bus system; thus a powerful and effective optimization algorithm is required to find the best feasible solution. In addition to the above description of the problem, the lower bound for each GD_ij is set to zero, and the upper bound is determined as follows [44]:

GD_ij^max = min( P_gi − BT_ij , P_dj − BT_ij )

bus j, and FCk is the fixed cost of a line k. Pgi and Pgi are the total generation and the sum of generations according to all bilateral  transactions at bus i, respectively. Pdj and Pdj are the total demand and the sum of demands according to all bilateral transactions at bus j, respectively. BTij represents bilateral transaction between generator at busiand demand at bus j. It is worth pointing out that an input file, including bus and line specifications, fixed cost to be recovered and bilateral transactions data, is previously defined. In the formula (9), each GD considers as an exchange between a generator and a load in the power system, thus the size of the optimization problem increases with the system size. In this study,

5.2. Simulation results

For the comparative study of the large scale transmission pricing problem, we choose the popular PSO [42], DE [43], CSO [23] and SLPSO [24] as compared algorithms to test the performance of GPBO. The parameters of these four algorithms again follow their original papers, and GPBO and PBO use the same parameters as described in Section 4.2. The termination criterion is also the same as in Section 4.2, and the results for each algorithm are obtained over 25 independent runs to eliminate the influence of statistical errors. Comparative results for the obtained minimum, maximum, median, mean and standard deviation of the objective function value are given in Table 7.

As can be observed from the results in Table 7, although there is no big difference in solution accuracy between GPBO and PBO, GPBO obtains better solution accuracy than PSO, DE, CSO and SLPSO. By mean value, GPBO, PBO, PSO, DE, CSO and SLPSO rank first, second, sixth, fifth, fourth and third, respectively. According to the p-values of the two-tailed t-test with significance level α = 0.05 between GPBO and the five compared algorithms, GPBO significantly outperforms PSO, DE, CSO and SLPSO, but not PBO. In general, the experimental results demonstrate the effectiveness and practicality of GPBO in solving the large scale transmission pricing problem. Besides, the standard deviation obtained by GPBO is the smallest among the compared algorithms, which illustrates that GPBO has the most stable performance.

The convergence speed of the six compared algorithms on the large scale transmission pricing problem is shown in Fig. 5.

Fig. 5. Convergence performance of the compared algorithms on the large scale transmission pricing problem.

Fig. 5 reveals that the convergence speed of GPBO is similar to that of PBO, which implies that the improvements in GPBO do not change the inherent features of PBO. However, the convergence speed of GPBO is faster than that of PSO, DE, CSO and SLPSO. In general, GPBO has good convergence ability on this real-world optimization problem.


Table 7
Results of solution accuracy of six algorithms on the large scale transmission pricing problem.

| Results | GPBO | PBO | PSO | DE | CSO | SLPSO |
| Min | 2.11E+02 | 2.17E+02 | 5.91E+05 | 1.14E+03 | 1.17E+03 | 1.01E+03 |
| Max | 5.06E+02 | 6.41E+02 | 1.75E+06 | 6.53E+04 | 3.30E+03 | 2.83E+03 |
| Median | 3.32E+02 | 3.91E+02 | 1.11E+06 | 3.46E+03 | 2.19E+03 | 1.79E+03 |
| Mean | 3.42E+02 | 3.83E+02 | 1.11E+06 | 1.04E+04 | 2.09E+03 | 1.85E+03 |
| Std | 7.82E+01 | 9.39E+01 | 2.51E+05 | 1.61E+04 | 5.25E+02 | 4.57E+02 |
| p-value | – | 9.95E-02 | 7.83E-27 | 3.04E-03 | 2.37E-21 | 3.60E-21 |

6. Conclusion

In this study, a Global-best guided Phase Based Optimization (GPBO) algorithm is introduced. GPBO extends and improves the original PBO algorithm to solve large scale optimization problems. In GPBO, an effective search strategy combining complete stochastic search and global-best guided search is utilized: the complete stochastic search is derived from the original diffusion operator of gas phase, and the guided search is implemented by modifying the perturbation operator of solid phase with a global-best guided strategy. We modify only the perturbation operator of solid phase, without any major conceptual change to the original structure or any parameter settings.

The performance of GPBO was first evaluated on seven scalable CEC 2008 benchmark functions and compared with six state-of-the-art algorithms. The experimental results demonstrate that GPBO performs better than most of the compared algorithms in terms of solution accuracy and convergence ability. Besides, to further verify the performance of GPBO in a real-world application, we applied it to the large scale transmission pricing problem; the simulation results indicate that the overall performance of GPBO is very competitive among the compared algorithms. The successful application of GPBO to large scale benchmark function optimization and to the real-world large scale transmission pricing problem indicates that GPBO has potential for addressing high-dimensional optimization problems in engineering and science.

Further research will address large scale multi-objective optimization problems. Our future research directions also include devising a new method to adapt the parameters of GPBO, studying different schemes for the starting point of the optimization, and utilizing GPBO within a cooperative co-evolutionary framework. Besides, we will apply GPBO to more complex large scale optimization problems and develop a discrete GPBO version for large scale combinatorial optimization problems.

Acknowledgment

This research work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61773314, U1334211, 61572392, 61672027), the Key Research and Development Plan Project of the Shaanxi Science and Technology Department (Grant No. 2017ZDXM-GY-016) and the Special Science Research Program of the Educational Office of Shaanxi Province (Grant No. 17JK0371).

References

[1] K. Tang, X. Yao, P.N. Suganthan, et al., Benchmark Functions for the CEC'2008 Special Session and Competition on Large Scale Global Optimization, Nature Inspired Computation and Applications Laboratory, USTC, China, 2007.
[2] K. Tang, X. Li, P.N. Suganthan, et al., Benchmark Functions for the CEC'2010 Special Session and Competition on Large-scale Global Optimization, Nature Inspired Computation and Applications Laboratory, USTC, China, 2009.
[3] S.K. Goh, H.A. Abbass, K.C. Tan, et al., Evolutionary big optimization (BigOpt) of signals, IEEE Congr. Evol. Comput., Sendai, Japan, 2015.

[4] M. Locatelli, F. Schoen, Efficient algorithms for large scale global optimization: Lennard-Jones clusters, Comput. Optim. Appl. 26 (2) (2003) 173–190.
[5] K. Slavakis, G.B. Giannakis, G. Mateos, Modeling and optimization for big data analytics, IEEE Signal Process. Mag. 31 (5) (2014) 18–31.
[6] A. Abraham, C. Grosan, V. Ramos, Swarm Intelligence in Data Mining, vol. 34, Springer, Heidelberg, 2006.
[7] Y. Lu, S. Wang, S. Li, et al., Particle swarm optimizer for variable weighting in clustering high-dimensional data, Mach. Learn. 82 (1) (2011) 43–70.
[8] S. Mahdavi, M.E. Shiri, S. Rahnamayan, Metaheuristics in large-scale global continues optimization: a survey, Inform. Sci. 295 (2015) 407–428.
[9] S. Cheng, Y. Shi, Q. Qin, et al., Swarm Intelligence in Big Data Analytics, in: Intelligent Data Engineering and Automated Learning – IDEAL, Springer, Berlin Heidelberg, 2013, pp. 417–426.
[10] J. García-Nieto, E. Alba, Restart particle swarm optimization with velocity modulation: a scalability test, Soft Comput. 15 (11) (2011) 2221–2232.
[11] M. Bhattacharya, R. Islam, J. Abawajy, Evolutionary optimization: a big data perspective, J. Netw. Comput. Appl. 59 (2016) 416–426.
[12] M.A. Potter, K.A. De Jong, A cooperative coevolutionary approach to function optimization, in: Parallel Problem Solving from Nature PPSN III, Springer, 1994, pp. 249–257.
[13] M. Potter, K. De Jong, Cooperative coevolution: an architecture for evolving coadapted subcomponents, Evol. Comput. 8 (1) (2000) 1–29.
[14] Z. Yang, K. Tang, X. Yao, Large scale evolutionary optimization using cooperative coevolution, Inform. Sci. 178 (2008) 2985–2999.
[15] M.N. Omidvar, X. Li, X. Yao, Cooperative co-evolution with delta grouping for large scale non-separable function optimization, Proc. IEEE Congr. Evol. Comput. (2010) 1762–1769.
[16] W. Chen, T. Weise, Z. Yang, K. Tang, Large-scale global optimization using cooperative coevolution with variable interaction learning, PPSN, LNCS 6239 (2011) 300–309.
[17] M.N. Omidvar, X. Li, Y. Mei, X. Yao, Cooperative co-evolution with differential grouping for large scale optimization, IEEE Trans. Evol. Comput. 18 (3) (2014) 378–393.
[18] Z. Yang, K. Tang, X. Yao, Multilevel cooperative coevolution for large scale optimization, IEEE Congr. Evol. Comput. (2008) 1663–1670.
[19] S. Mahdavi, M.E. Shiri, S. Rahnamayan, Cooperative co-evolution with a new decomposition method for large-scale optimization, IEEE Congr. Evol. Comput. (2014) 1285–1292.
[20] Z. Cao, L. Wang, Y. Shi, et al., An effective cooperative coevolution framework integrating global and local search for large scale optimization problems, IEEE Congr. Evol. Comput. (2015) 1986–1993.
[21] S.-T. Hsieh, T.-Y. Sun, C.-C. Liu, et al., Solving large scale global optimization using improved particle swarm optimizer, IEEE Congr. Evol. Comput. (2008) 1777–1784.
[22] J. Liang, P. Suganthan, Dynamic multi-swarm particle swarm optimizer, Proc. IEEE SIS (2005) 124–129.
[23] R. Cheng, Y. Jin, A competitive swarm optimizer for large scale optimization, IEEE Trans. Cybern. 45 (2) (2015) 191–204.
[24] R. Cheng, Y. Jin, A social learning particle swarm optimization algorithm for scalable optimization, Inform. Sci. 291 (2015) 43–60.
[25] M.A. Montes, K.V. Enden, T. Stützle, Incremental particle swarm-guided local search for continuous optimization, in: Hybrid Metaheuristics, Springer, 2008, pp. 72–86.
[26] H. Wang, Z. Wu, S. Rahnamayan, Enhanced opposition-based differential evolution for solving high-dimensional continuous optimization problems, Soft Comput. 15 (11) (2011) 2127–2140.
[27] W. Dong, T. Chen, P. Tino, X. Yao, Scaling up estimation of distribution algorithms for continuous optimization, IEEE Trans. Evol. Comput. 17 (6) (2013) 797–822.
[28] S. Rahnamayan, G.G. Wang, Center-based sampling for population-based algorithms, IEEE Congr. Evol. Comput. (2009) 933–938.
[29] Y. Chu, H. Mi, H. Liao, et al., A fast bacterial swarming algorithm for high-dimensional function optimization, IEEE Congr. Evol. Comput. (2008) 3135–3140.
[30] D. Molina, M. Lozano, A.M. Sánchez, et al., Memetic algorithms based on local search chains for large scale continuous optimisation problems: MA-SSW-Chains, Soft Comput. 15 (11) (2011) 2201–2220.
[31] S.K. Azad, Enhanced hybrid metaheuristic algorithms for optimal sizing of steel truss structures with numerous discrete variables, Struct. Multidiscip. Optim. 55 (6) (2017) 2159–2180.

[32] D.H. Wolpert, W.G. Macready, No free lunch theorems for optimization, IEEE Trans. Evol. Comput. 1 (1) (1997) 67–82.
[33] S.K. Azad, Seeding the initial population with feasible solutions in metaheuristic optimization of steel trusses, Eng. Optim. 50 (1) (2018) 89–105.
[34] Z. Cao, L. Wang, X. Hei, et al., A phase based optimization algorithm for big optimization problems, IEEE Congr. Evol. Comput. (2016) 5209–5214.
[35] X. Li, X. Yao, Cooperatively coevolving particle swarms for large scale optimization, IEEE Trans. Evol. Comput. 16 (2) (2012) 210–224.
[36] J. Liang, P. Suganthan, Dynamic multi-swarm particle swarm optimizer, Proc. IEEE SIS (2005) 124–129.
[37] Z. Yang, K. Tang, X. Yao, Large scale evolutionary optimization using cooperative coevolution, Inform. Sci. 178 (15) (2008) 2985–2999.
[38] Z. Yang, K. Tang, X. Yao, Multilevel cooperative coevolution for large scale optimization, Proc. IEEE Congr. Evol. Comput., Hong Kong (2008) 1663–1670.
[39] Z. Yang, K. Tang, X. Yao, Self-adaptive differential evolution with neighborhood search, Proc. IEEE Congr. Evol. Comput. (2008) 110–116.
[40] S. Das, P.N. Suganthan, Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems, Technical Report, Jadavpur University / Nanyang Technological University, Kolkata, 2011.
[41] S.M. Elsayed, R.A. Sarker, D.L. Essam, A new genetic algorithm for solving optimization problems, Eng. Appl. Artif. Intell. 27 (2014) 57–69.
[42] V.S. Pappala, Application of PSO for Optimization of Power Systems Under Uncertainty, Ph.D. Dissertation, University Duisburg-Essen, Duisburg, Germany, 2009.
[43] S. Das, P.N. Suganthan, Differential evolution: a survey of the state-of-the-art, IEEE Trans. Evol. Comput. 15 (1) (2011) 4–31.
[44] J.L. Rueda, I. Erlich, F.M. Gonzalez-Longatt, Performance assessment of evolutionary algorithms in power system optimization problems, in: IEEE PowerTech 2015, Eindhoven, Netherlands, 2015.
[45] F.D. Galiana, A.J. Conejo, H.A. Gil, Transmission network cost allocation based on equivalent bilateral exchanges, IEEE Trans. Power Syst. 18 (4) (2003).


Zijian Cao received his B.S. and M.S. degrees in computer application from Xi'an University of Technology, Xi'an, China, in 2000 and 2003, respectively. He is currently pursuing the Ph.D. degree in pattern recognition and intelligent systems at the School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China. His current research interests include evolutionary computation and large scale global optimization.

Lei Wang received his B.S. and M.S. degrees in computer science and technology from Xi'an University of Technology, Xi'an, China, in 1994 and 1997, respectively, and his Ph.D. degree in electronic science and technology from Xidian University, Xi'an, China, in 2001. He is currently a professor with the Faculty of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China. His current research interests include evolutionary algorithms, neural networks, and data mining.

Xinhong Hei received his B.S. degree and M.S. degree in computer science and technology from Xi’an University of Technology, Xi’an, China, in 1998 and 2003, respectively, and his Ph.D. degree from Nihon University, Tokyo, Japan, in 2008. He is currently a professor with the Faculty of Computer Science and Engineering, Xi’an University of Technology, Xi’an, China. His current research interests include intelligent systems, safety-critical system, and train control system.