A Novel Multi-Objective Optimization Algorithm Based on Lightning Attachment Procedure Optimization Algorithm

A. Foroughi Nematollahi (a), A. Rahiminejad (b), B. Vahidi (a,1)

a Electrical Engineering Department, Amirkabir University of Technology, Tehran, 1591634311, Iran
b Department of Electrical and Computer Science, Esfarayen University of Technology, Esfarayen, North Khorasan, 9661998195, Iran
Abstract- In this paper, a novel multi-objective optimization method based on a recently introduced algorithm known as Lightning Attachment Procedure Optimization (LAPO) is presented. The proposed algorithm is based on a non-dominated sorting approach in which the best solutions chosen from the Pareto Optimal Front (POF), selected by crowding distance, are stored in a repository called the Archive matrix. The procedure is designed so that the final best solutions are distributed evenly along the optimal POF. The proposed algorithm is then tested on a set of multi-objective benchmark functions and on several classical engineering problems. The results are compared to those of four well-known methods and discussed. The comparison uses four criteria that measure how close the obtained POF is to the true POF, how the results are distributed, and how well the final results cover the true POF. It is shown that the proposed method outperforms the other methods with regard to three criteria and yields comparable results for the remaining one. The superiority of the proposed method in finding the true POF while covering a wide range of possible optimal results is discussed in the results section. It is therefore concluded that the proposed method is well suited to solving a wide range of multi-objective optimization problems.

Keywords: multi-objective optimization method; Lightning Attachment Procedure Optimization; non-dominated sorting; evolutionary multi-objective optimization; benchmark problem; Pareto optimal solutions; meta-heuristic
Nomenclature

X - decision variable vector
F(x) - set of objective functions
Ω - the search space
POF - Pareto Optimal Front
Xmin - lower bounds of the variables
Xmax - upper bounds of the variables
rand - random variable in the range [0, 1]
Xave - the average of all solutions
t - the current iteration number
tmax - the maximum number of iterations
XSmin - the best solution of the population
XSmax - the worst solution of the population
CDi - the crowding distance of solution i
OF_{i+1}^{j} (OF_{i-1}^{j}) - the value of objective function j of solution i+1 (i-1)
OF_{max}^{j} (OF_{min}^{j}) - the maximum (minimum) value of objective function j

1 Corresponding author: Prof. Behrooz Vahidi, Amirkabir University of Technology, 424 Hafez Ave., Tehran, Iran, Postal Code: 1591634311, E-mail: [email protected], Tel: +98-21-64543330, Fax: +98-21-66406469.
A. Foroughi Nematollahi ([email protected]); A. Rahiminejad ([email protected])
1 Introduction
In the last decades, advancements in computer technology have not only enhanced the quality of solutions to complicated problems, but also reduced the time and cost of reaching them. However, human input is still required to select the best of several possible computer-generated solutions. Recently, much effort has been devoted to designing computer systems that can solve problems optimally without any prior knowledge. One of the best ways to achieve this relies on optimization techniques [1]. There are two main groups of optimization techniques: conventional (numerical) methods and stochastic-based methods. The main drawbacks of the conventional methods, in which the optimal variables are found by searching for the point at which the derivative is zero, are getting stuck in local optima, the inability to solve non-linear, non-convex problems with many variables and constraints, and the need for additional complex mathematical operations such as differentiation [2]. To cope with these issues, researchers turned to meta-heuristic optimization techniques, which are classified into two main categories: single-solution-based and population-based methods. Neither requires knowledge unique to the problem; the optimal solution can be derived using general knowledge only [3]. In addition to considering the optimal solution to the problem, one must also consider whether the problem is better solved by tackling only a single objective or whether multiple objectives should be
satisfied. There must be a balance between the two most important parameters of any system, i.e. its quality and its cost. Such problems, called multi-objective optimization problems (MOPs), are more challenging than single-objective ones [4]. It follows that a more useful solution to a practical problem is obtained when several goals, rather than a single one, are taken into account, and meta-heuristic optimization algorithms are an appropriate way to obtain the optimal solutions of a multi-objective problem. In this paper, a meta-heuristic algorithm based on the concept of the non-dominated sorting approach is proposed. Numerous Multi-Objective Evolutionary Algorithms (MOEAs) have been proposed in the literature. However, none of these methods can claim the best performance on all MOPs. Natural questions arise: "Is a given method suitable for all multi-objective problems?" "Are the solutions obtained by this method the best possible solutions?" "Is there still a need for researchers to introduce new methods?" Based on the No-Free-Lunch (NFL) theorem, the answer to the first two questions is "no", which is why new MOEAs are introduced every year in order to achieve better performance. Thus, in this paper, a multi-objective version of a recently introduced evolutionary algorithm known as the Lightning Attachment Procedure Optimization (LAPO) algorithm, which has previously been tested on single-objective optimization problems, is presented. The contributions of the proposed method can be listed as follows:
- The multi-objective version of LAPO is introduced for the first time.
- The proposed multi-objective optimization algorithm is free from any parameter tuning.
- An Archive matrix is used to store the best non-dominated solutions.
- A grid mechanism is employed to enhance the quality of the solutions stored in the Archive matrix.
The proposed method is tested on several benchmark test functions and the results are compared to those of four well-regarded MOEAs: MOPSO [5], NSGA-II [6], MOALO [1], and MOGWO [4]. Moreover, the proposed method is applied to some real multi-objective problems and its performance is again compared to that of MOPSO, NSGA-II, MOALO, and MOGWO. The results demonstrate that the proposed method finds final results very close to the true POF with a suitable distribution. In other words, the aim of this work, a method more suitable than previous ones for solving multi-objective optimization problems, is achieved. The rest of the paper is organized as follows: Section 2 presents a literature review; Section 3 illustrates the multi-objective (MO) version of LAPO, introducing first the single-objective and then the MO version (based on the non-dominated sorting concept); Section 4 presents the results and discussion, using two groups of tests, namely benchmark test functions and classical engineering problems; conclusions are then drawn in the final section.
2 Literature Review
2.1 Solving Multi-Objective Optimization Problems (MOPs)

A multi-objective optimization problem consists of a set of objective functions which should be optimized simultaneously [7]. It is almost impossible to find a single solution for which all the objective functions attain their global best values simultaneously [7], which makes solving these kinds of problems very challenging. The methods used to solve MOPs can generally be categorized into two main groups. The first group, called a priori methods [8], includes the methods by which the MOP is converted into a single-objective one. Weighted coefficients [9] and fuzzy logic [10,11] are two well-known methods of this group. These methods depend strongly on the weighting coefficients or objective priorities [12]. The second group, known as a posteriori methods [13], includes the methods used to obtain a set of optimal final results known as non-dominated solutions. In this situation, a trade-off must be established between the objectives. The trade-off solutions, of which there is certainly more than one [14], are called Pareto optimal solutions (POS) [15]. These results form a set of non-dominated solutions which are not dominated by any other solution in the population. A solution can then be selected and applied to a particular system based on the priority and importance level of the objectives, where the priority of the objectives is determined by experts. It should be mentioned that almost all the possible solutions for different objective priorities are obtained in a single run; thus, a change in objective priorities does not require a new run. The main definitions of MOPs can be found in [15–18]. Since multi-objective problems naturally involve a set of solutions, population-based meta-heuristic optimization methods are more suitable for solving them than numerical methods. In 1984, David Schaffer introduced stochastic optimization techniques for solving multi-objective optimization problems [14]. Population-based meta-heuristic optimization methods, known as Evolutionary Algorithms (EAs), are suitable for solving multi-objective optimization problems [7]. Different EAs have been developed and used for solving both single-objective ([16]) and multi-objective problems. The EA methods used for solving multi-objective problems are called Multi-Objective Evolutionary Algorithms (MOEAs) [14,18–20]. The EAs for single- and multi-objective problems are almost the same, except for the way solutions in a population are sorted [1]. Almost all EAs try to obtain the global best answer by adjusting the given solutions with respect to some special solutions, such as the global best or the global worst. In single-objective problems, finding the best or the worst solution is very simple, because comparing solutions is easy, but this is not the case in multi-objective problems [21]. For multi-objective problems, there is a set of best solutions known as the POF. These solutions are non-dominated with respect to each other and no other solution in the whole population dominates them. The challenging task is to determine which single solution of the POF should be used to modify the solutions generated by the EA [1,22]. Two important aspects which should be considered when selecting a single solution from the POF as the best solution of the population are convergence and coverage [23].
Convergence refers to finding a POF which is very close to the true POF (the true POF being the exact POF of the MOP). Every multi-objective problem has a true POF and a good MOEA must be able to find results which are very close to this front [1]. Coverage is related to the distribution of solutions along the POF; in other words, a wide range of the possible solutions on the POF should be found.

2.2 A brief literature review

For nearly every evolutionary method proposed for single-objective problems, a multi-objective version has also been presented. However, the quality of the final results and their distribution may differ. In other words, it is very important for a method to find not only a set of best solutions, but also a wide range of possible solutions. Moreover, there is always the question "might a new algorithm give a better solution?" The answer is "yes" according to the No-Free-Lunch (NFL) theorem for optimization [24]. This theorem states that there is no single optimization algorithm that can solve all problems and find the global best answers. Thus, researchers are always motivated to introduce new optimization algorithms in the hope of solving a wider range of problems or specific unsolved optimization problems. This situation holds for both single- and multi-objective optimization. The Multi-objective Genetic Algorithm (MOGA), proposed by Goldberg in 1989, can be considered the first attempt at using EAs for solving MOPs [25]. This method, which is based on the concept of the POF, was improved by Fonseca and Fleming in 1993 [26]. Two approaches were introduced for improving MOGA. In the first approach, a rank is assigned to each solution based on its fitness, which increases the speed of solution classification. The second approach employs a niche-formation method, which avoids divergence and does not depend on a specific solution. The Niched Pareto Genetic Algorithm (NPGA) is another MOEA based on GA [27]. This method is fast and can be employed for large populations with a high number of iterations. The Strength Pareto Evolutionary Algorithm (SPEA), proposed by Zitzler et al., integrates elitism and the non-dominated sorting concept [28]. Its best feature is its ability to find the best solutions based on multi-criteria evolutionary optimization; however, it performs poorly in non-convex spaces. In 2002, Deb introduced a multi-objective meta-heuristic optimization algorithm known as the Non-dominated Sorting GA (NSGA-II) [6]. This method, a multi-objective version of the well-regarded GA, is the most popular multi-objective method in the literature, and it introduced a fast non-dominated sorting technique [4]. This concept is also employed in this paper. Many other efforts have been made to solve multi-objective optimization problems; some of the well-known and most popular MOEAs are NSGA-I [19], NSGA-II [6], MOPSO [5,29–32], MODE [32–37], MOTLBO [38], MOSFL [39–43], MOABC [22,44–47], MOALO [1], and MOGWO [4].
Some real-world MOPs include robot routing, system identification [48–51], index determination [52], optimal operation of robots [50], and controller design [53]. In these kinds of problems, known as Dynamic Multi-Objective Problems (DMOPs), the parameters and constraints change with time and ambient conditions; thus, both the solution time and the accuracy of solving these problems are very important. In [54], GA is employed to solve the DMOP: a weighted coefficient is dedicated to each objective function and the problem is solved using GA, with the weighting coefficients determined using fuzzy logic based on the problem conditions [54].
3 MOLAPO
As mentioned before, for each EA proposed for single-objective optimization, a multi-objective version is also introduced. The procedure for updating the solutions is nearly the same for both single- and multi-objective problems; the most challenging task in the multi-objective version is to determine which solution is the best or the worst of the population. In this paper, the multi-objective version of LAPO is presented. First, a brief description of LAPO is given in the following subsection.

3.1 LAPO

The Lightning Attachment Procedure Optimization algorithm is a recently introduced method proposed by Foroughi Nematollahi et al. in 2017 [55]. The method is inspired by the way lightning moves down to the ground. It mimics four important parts of the lightning attachment procedure to find the global best solution: 1) emanation of the lightning from the cloud, 2) downward lightning channel movement, 3) upward leader formation and propagation from the ground or earthed objects, and 4) attachment of the downward and upward leaders [56]. The electric charge of a cloud is collected in three parts: a large portion of positive charge at the top of the cloud, a large portion of negative charge in the lower part of the cloud, and a small positive charge in the lower part of the cloud. When the amount of electric charge increases, breakdown may occur between the large/small positive charges and the large negative charge. In this situation, the voltage gradient at the edge of the cloud increases and the lightning emanates from the cloud. There may be more than one emanating point for lightning formation. In the LAPO algorithm, a number of solutions are considered as the lightning emanating points from the cloud, i.e. as the initial guess. As the lightning moves down toward the ground, opposite charges gather at sharp points on the ground and some upward leaders are formed, which move up. The emanating points of the upward leaders are also considered as initial solutions. Thus, the initial guess is in fact the set of emanating points of the downward and upward leaders. Based on the upper and lower bounds, the initial guess (the test points) is defined as follows:

X_{testpoint}^{i} = X_{min}^{i} + (X_{max}^{i} - X_{min}^{i}) \times rand    (1)

where X_min and X_max are the lower and upper bounds of the variables and rand is a random variable in the range [0, 1]. The fitness value of each solution is obtained by evaluating the objective function at that solution.
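As a minimal illustration (not the authors' MATLAB implementation), Eq. (1) can be realized as a vectorized random initialization; the population size and bounds used below are placeholders chosen only for the example.

```python
import numpy as np

def initialize_population(n_pop, x_min, x_max, rng=None):
    """Random test points within [x_min, x_max], following Eq. (1)."""
    rng = np.random.default_rng() if rng is None else rng
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    # One uniform random number per variable of each test point.
    return x_min + (x_max - x_min) * rng.random((n_pop, x_min.size))

# Example: 100 test points for a two-variable problem bounded by [0, 2].
population = initialize_population(100, [0.0, 0.0], [2.0, 2.0])
```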
To update the solutions, the fitness value of X_ave, the average of all solutions, is required. Mathematically,

X_{ave} = mean(X_{testpoint})    (2)

F_{ave} = obj(X_{ave})    (3)

To update solution i, a random solution j is selected such that i ≠ j, and the fitness value of solution j (F_j) is compared to F_ave. If F_j is better than F_ave (F_j < F_ave), the test point moves toward it:

X_{testpoint\_new}^{i} = X_{testpoint}^{i} + rand \times (X_{ave} + rand \times X_{potentialpoint}^{j})    (4)

If F_ave is better than F_j (F_ave < F_j), the test point moves away from it:

X_{testpoint\_new}^{i} = X_{testpoint}^{i} - rand \times (X_{ave} + rand \times X_{potentialpoint}^{j})    (5)

The new test point is accepted only if it improves the fitness:

X_{testpoint}^{i} = X_{testpoint\_new}^{i}   if F_{testpoint\_new}^{i} < F_{testpoint}^{i};   otherwise X_{testpoint\_new}^{i} = X_{testpoint}^{i}    (6)

For modeling the upward leader of the lightning, an exponent factor S is defined as follows:

S = 1 - \frac{t}{t_{max}} \times \exp\left(-\frac{t}{t_{max}}\right)    (7)
where t is the current iteration number and t_max is the maximum number of iterations. The solutions are also updated based on this factor as follows:

X_{testpoint\_new} = X_{testpoint\_new} + rand \times S \times (X_{min}^{S} - X_{max}^{S})    (8)

where X_{min}^{S} (XSmin) and X_{max}^{S} (XSmax) are the best and worst solutions of the population. To further enhance the performance of the method, in each iteration the average of the whole population is calculated and the fitness of this average solution is obtained; if the fitness of the worst solution is worse than that of the average solution, the worst solution is replaced by the average solution.

3.2 MOLAPO

As can be seen, the best and worst solutions of the population are needed in the LAPO algorithm. As mentioned before, finding the best and worst solutions of an MO problem is much more challenging than in single-objective problems; the other parts of the method (the solution updates) are the same as in the single-objective form. First, the initial population is defined based on Eq. (1) and the fitness functions of each solution are obtained. Then, for each solution i, the number of solutions which dominate it is determined. If this number is zero, solution i is not dominated by any other solution and is placed in the first front. By discarding the solutions of the first front from the population and recomputing the domination counts for the rest of the population, the second front is obtained. This procedure is repeated until every solution has been assigned to a front.
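The front-assignment procedure described above is the standard fast non-dominated sorting; the following sketch (assuming minimization of all objectives, and not taken from the authors' code) illustrates it.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimized)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def non_dominated_sort(objs):
    """Split solutions into Pareto fronts; fronts[0] is the non-dominated set."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]       # indices that solution i dominates
    domination_count = np.zeros(n, dtype=int)   # number of solutions dominating i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                domination_count[i] += 1
    fronts = [[i for i in range(n) if domination_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]   # drop the trailing empty front
```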
The solutions of the first front are saved in the Archive matrix. If the number of solutions of the first front exceeds the capacity of the Archive matrix, the first front is gridded based on [5]; in other words, the crowding distance of each solution in the first front is obtained. For a two-objective-function problem, the crowding distances of the boundary solutions are considered to be infinite. The crowding distance of a non-boundary solution i, which lies in the vicinity of solutions i+1 and i-1 (see Fig. 1), can be calculated using the following equation:

CD_i = \sum_{j=1}^{N_{of}} \frac{OF_{i+1}^{j} - OF_{i-1}^{j}}{OF_{max}^{j} - OF_{min}^{j}}    (9)

where CD_i is the crowding distance of solution i, OF_{i+1}^{j} (OF_{i-1}^{j}) is the value of objective function j of solution i+1 (i-1), and OF_{max}^{j} (OF_{min}^{j}) is the maximum (minimum) value of objective function j in the first front (i.e. the value of the objective functions at the boundary solutions).
Fig. 1. Crowding distance in a two-objective-function problem
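A compact sketch of Eq. (9) and of the crowding-distance-based truncation of the Archive matrix is given below (a simplified illustration, not the authors' code; it assumes the objective values of one front are passed as a NumPy array).

```python
import numpy as np

def crowding_distance(front_objs):
    """Crowding distance of each member of one front (Eq. (9))."""
    m, n_obj = front_objs.shape
    cd = np.zeros(m)
    for j in range(n_obj):
        order = np.argsort(front_objs[:, j])
        f = front_objs[order, j]
        cd[order[0]] = cd[order[-1]] = np.inf    # boundary solutions
        span = f[-1] - f[0]
        if span == 0:
            continue
        cd[order[1:-1]] += (f[2:] - f[:-2]) / span
    return cd

def truncate_to_archive(front_objs, archive_size):
    """Indices of the least-crowded members to keep in the Archive matrix."""
    order = np.argsort(-crowding_distance(front_objs))  # descending crowding distance
    return order[:archive_size]
```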
Solutions with higher crowding-distance values have priority to be saved in the Archive matrix. As depicted in Fig. 1, solutions with a higher crowding distance lie in less crowded areas, which means that the region around these solutions has not been searched well. Thus, by selecting these solutions, a wider area of the search space is explored and more solutions can be obtained on the POF. Finally, the solutions of the first front are sorted in descending order of crowding distance and are saved in the Archive matrix, starting from the first solution of the sorted front, until the Archive matrix is completely filled. The first solution of the Archive matrix is considered the best solution of the whole population. The crowding distances of the solutions in the last front are also calculated and the solutions of this front are sorted in descending order of crowding distance; the last solution of this front is considered the worst solution of the whole population. Having identified the best and worst solutions of the population, the solutions are updated based on LAPO. The stepwise procedure of MOLAPO is as follows:

Step 1- The initial guesses of the solutions (the test points) are defined randomly in the predefined range.

Step 2- The fitness values (the different objective functions) of all solutions are obtained.

Step 3- Non-dominated sorting is performed and the first-front solutions are placed in a matrix named Archive. If the number of solutions in the first front is larger than the size of the Archive matrix, the solutions are stored according to their crowding-distance values.

Step 4- X_ave is calculated and its fitness value (F_ave) is obtained.

Step 5- If X_worst is dominated by X_ave, it is replaced by X_ave; if X_ave is dominated by X_worst, nothing happens; and if these two solutions are non-dominated with respect to each other, X_worst is replaced by X_ave with a probability of 50%.

Step 6- For each individual of the population (solution i, i = 1:Np), a random solution j is selected such that i ≠ j.

If solution j dominates X_ave:

X_{testpoint\_new}^{i} = X_{testpoint}^{i} + rand \times (X_{ave} + rand \times X_{potentialpoint}^{j})    (10)

If solution j is dominated by X_ave:

X_{testpoint\_new}^{i} = X_{testpoint}^{i} - rand \times (X_{ave} + rand \times X_{potentialpoint}^{j})    (11)

If solution j and X_ave are non-dominated with respect to each other:

X_{testpoint\_new}^{i} = X_{testpoint}^{i} + sign(0.5 - rand) \times rand \times (X_{ave} + rand \times X_{potentialpoint}^{j})    (12)
Step 7- If the new test point dominates the old one, the new one is kept and the old one is discarded.

Step 8- After updating all solutions, the Archive matrix is updated as follows: the updated population and the Archive matrix of the previous iteration are combined in one matrix, non-dominated sorting is performed, and the solutions are sorted by front and, within each front, by crowding distance. The Archive matrix is then filled with the first-front Pareto solutions. If the first front is larger than the size of the Archive matrix, the first-front solutions with the highest crowding distances are placed in the Archive matrix.

Step 9- The first solution of the Archive matrix is considered the best solution, X_leader. The solutions of the last front are also sorted by crowding distance and the last solution is considered the worst solution, X_worst.

Step 10- Each solution is updated as

X_{testpoint\_new} = X_{testpoint\_new} + rand \times S \times (X_{Leader} - X_{testpoint\_new})    (13)

where

S = 1 - \frac{t}{t_{max}} \times \exp\left(-\frac{t}{t_{max}}\right)    (14)

If the new test point dominates the old one, the new one is kept and the old one is discarded; if they are non-dominated with respect to each other, one of them is kept with a probability of 50%.

Step 11- If the convergence criterion is satisfied, the algorithm is stopped; otherwise, go to Step 3.

This stepwise procedure is performed until the convergence criterion is satisfied; the stopping criterion is reached when the maximum cycle number (the maximum number of iterations of the algorithm) is reached. As far as constraints are concerned, there are two kinds of constraints in an optimization problem. The first group comprises the constraints on the variables: each variable must lie in a predefined range determined by its lower and upper bounds. The other type of constraint relates to the objective functions. For the former, whenever a variable violates its limit, the common remedy is to set the variable to the violated limit; in this paper, however, whenever a variable violates its limit, the previous (in-range) value of the variable is retained. For the latter, if a solution violates any constraint, a large penalty value is added to the fitness of that solution. In
other words, the fitness value of solutions for which the constraints are not satisfied becomes catastrophic. The pseudo-code of MOLAPO is shown in Fig. 2.
Fig. 2. Pseudo-code of MOLAPO
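To make Steps 6, 7, and 10 concrete, the following sketch updates a single test point, with the dominance relation replacing the fitness comparison of single-objective LAPO as in Eqs. (10)-(14). The handling of the inner random factors and of dominance ties is a simplification and not necessarily identical to the authors' implementation.

```python
import numpy as np

def dominates(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def molapo_update(x_i, f_i, x_j, f_j, x_ave, f_ave, x_leader,
                  t, t_max, objective, rng=None):
    """One MOLAPO-style update of test point x_i (sketch of Eqs. (10)-(14))."""
    rng = np.random.default_rng() if rng is None else rng
    r = rng.random(x_i.shape)
    step = x_ave + rng.random() * x_j                # X_ave + rand * X_potentialpoint^j
    # Move direction decided by the dominance of the random solution j vs. X_ave.
    if dominates(f_j, f_ave):
        x_new = x_i + r * step                       # Eq. (10)
    elif dominates(f_ave, f_j):
        x_new = x_i - r * step                       # Eq. (11)
    else:
        x_new = x_i + np.sign(0.5 - rng.random()) * r * step   # Eq. (12)
    # Greedy acceptance (Step 7): keep the new point only if it dominates the old one.
    f_new = objective(x_new)
    if not dominates(f_new, f_i):
        x_new, f_new = x_i, f_i
    # Move toward the archive leader, scaled by the factor S (Eqs. (13)-(14), Step 10).
    s = 1.0 - (t / t_max) * np.exp(-t / t_max)
    x_new = x_new + rng.random(x_i.shape) * s * (x_leader - x_new)
    return x_new, objective(x_new)
```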
4 Simulation Results

In this section, the proposed MOLAPO is evaluated using benchmark test functions and engineering problems. There are 30 test functions with different characteristics. The results of the proposed method are discussed and compared to those of four well-regarded algorithms: MOPSO, NSGA-II, MOALO, and MOGWO.

4.1 Comparison criteria

The following four comparison criteria are considered: 1) Generational Distance (GD) [57], which measures how "far" the obtained POF is from the true POF; the lower the GD value, the better the performance of the algorithm. 2) Inverted Generational Distance (IGD) [29], which measures both convergence and diversity; again, a lower IGD value indicates better performance. The obtained results should not only be close to the true POF, but should also not miss any part of the POF. 3) The metric of spread [16], which measures how well the true POF is covered by the obtained POF; a higher metric of spread value means better performance. 4) The metric of spacing [58], which measures how evenly the non-dominated solutions of the obtained first Pareto front are distributed [59]; a lower value represents better performance.

4.2 Method parameters

In all simulations, for all the methods, the population size is 100 particles, the total number of iterations of each method is chosen so that the total number of Function Evaluations (FE) is the same, the maximum capacity of the Archive matrix is 50, and each method is executed for 30 independent trials. The number of FE indicates how many times a solution is updated and its fitness evaluated in each iteration. The other parameters of each method are listed in Table I. All methods are implemented in MATLAB 2012a on a Core i5 PC with a 3 GHz CPU and 8 GB of RAM.

Table I- Parameters of the different methods
Algorithm | FE in all iterations | FE in each iteration | Archive capacity | Other parameters
MOLAPO | 20000 | 2 | 50 | --
NSGA-II | 20000 | 1 | 50 | pCrossover = 0.7 (crossover percentage); nCrossover = 2*round(pCrossover*nPop/2) (number of parents/offspring); pMutation = 0.4 (mutation percentage); nMutation = round(pMutation*nPop) (number of mutants); mu = 0.02 (mutation rate); Sigma = 0.1*(VarMax-VarMin) (mutation step size); based on the MATLAB code available in [60]
MOPSO | 20000 | 1 | 50 | C1 = C2 = 2; w = wmax - t*(wmax - wmin)/tmax with wmax = 0.9, wmin = 0.4, tmax the maximum iteration number; based on the MATLAB code available in [61]
MOALO | 20000 | 2 | 50 | stochastic function limit = 0.5; ratio I based on the current iteration and the maximum number of iterations; based on the MATLAB code available in [62]
MOGWO | 20000 | 1 | 50 | based on the MATLAB code available in [63]
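For reference, the GD and IGD criteria of Section 4.1 can be computed as follows; this is one common formulation (mean Euclidean distance to the nearest reference point), written as an illustrative sketch rather than the exact implementation used for the results below.

```python
import numpy as np

def generational_distance(obtained_front, true_front):
    """GD: mean distance from each obtained point to its nearest true-POF point."""
    a = np.asarray(obtained_front, dtype=float)
    b = np.asarray(true_front, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return d.min(axis=1).mean()

def inverted_generational_distance(obtained_front, true_front):
    """IGD: mean distance from each true-POF point to the obtained front."""
    return generational_distance(true_front, obtained_front)
```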
4.3 Test functions

There are 30 test functions in two main groups: benchmark test functions (25 functions) and engineering multi-objective test functions (5 functions). Among the benchmark functions, 22 test functions (F1-F22) were introduced in 2015 [64] as a reference suite for evaluating multi-objective algorithms; using these benchmark functions, the performance of an MOEA can be examined from different aspects. To evaluate the performance of the method on large problems, three famous standard ZDT benchmark functions (ZDT1, ZDT2, and ZDT3), introduced in [65] and [66], are also employed. The 30 test functions are classified as follows:

Class 1- The first class of benchmark test functions has a local and a global front; both fronts are continuous but different, since one is linear and the other non-linear [64].

Class 2- Contains three benchmark test functions which have more than one local front; these functions differ in the shape of their fronts. With this class, the performance of the method is investigated in the presence of several local fronts [64].

Class 3- These test functions have multiple discrete local fronts. In this class there are three test functions with two objective functions and one additional test function with three objective functions [64].

Class 4- The fourth class has two test functions whose fronts vary in a stepwise and continuous manner [64].

Class 5- Investigates the performance of the method on test functions with three objective functions. In this class, there is one test function with a plate-stair-shaped front and three test functions whose fronts are shaped like an inclined plane [64].

Class 6- Investigates the performance of the method on test functions with many variables. In this class, there are two continuous fronts and one discrete front.

Class 7- The last class includes four classical engineering problems and an electrical multi-objective optimization problem. These test functions are highly constrained.

The solution representations of these test functions are given in Tables II and III.

Table II- Solution representation of test functions F1-F25
Function | Number of objective functions | Number of variables | Variable limits
F1-F9 | 2 | 2 | [Low, Up] = [0, 2]
F10-F15 | 2 | 2 | [Low, Up] = [0, 1]
F16 | 3 | 2 | [Low, Up] = [0, 1]
F17-F18 | 2 | 2 | [Low, Up] = [0, 1]
F19-F22 | 3 | 2 | [Low, Up] = [0, 1]
F23-F25 | 2 | 30 | [Low, Up] = [0, 1]
Table III- Solution representation of test functions F26-F30

F26: Four-bar truss design problem
- Objective functions (2): F1: volume of the four-bar truss; F2: joint displacement
- Variables (4): the cross-sectional areas of the four members (x1-x4)
- Variable limits: F/σ ≤ x1 ≤ 3F/σ; √2·F/σ ≤ x2 ≤ 3F/σ; √2·F/σ ≤ x3 ≤ 3F/σ; F/σ ≤ x4 ≤ 3F/σ, with F = 10 kN, E = 2×10^5 kN/cm², L = 200 cm, σ = 10 kN/cm²

F27: Speed reducer design problem
- Objective functions (2): F1: weight; F2: stress
- Variables (7): gear face width (x1), teeth module (x2), number of teeth of the pinion (x3, integer variable), distance between bearings 1 (x4), distance between bearings 2 (x5), diameter of shaft 1 (x6), diameter of shaft 2 (x7)
- Variable limits: 2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.3 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, 5 ≤ x7 ≤ 5.5

F28: Disk brake design problem
- Objective functions (2): F1: stopping time; F2: mass of the brake
- Variables (4): inner radius of the disk (x1), outer radius of the disk (x2), engaging force (x3), number of friction surfaces (x4)
- Variable limits: 55 ≤ x1 ≤ 80, 75 ≤ x2 ≤ 110, 1000 ≤ x3 ≤ 3000, 2 ≤ x4 ≤ 20

F29: Welded beam design problem
- Objective functions (2): F1: fabrication cost; F2: deflection of the beam
- Variables (4): thickness of the weld (x1), length of the clamped bar (x2), height of the bar (x3), thickness of the bar (x4)
- Variable limits: 0.125 ≤ x1 ≤ 5, 0.1 ≤ x2 ≤ 10, 0.1 ≤ x3 ≤ 10, 0.125 ≤ x4 ≤ 5

F30: Optimal Power Flow
- Objective functions (2): F1: operation cost; F2: emission
- Variables (4): the power generation of the four power plants (Pg1-Pg4)
- Limits and constraints: 0 ≤ Pg1 ≤ 1.4, 0 ≤ Pg2 ≤ 1, 0 ≤ Pg3 ≤ 1, 0 ≤ Pg4 ≤ 1; Σ_{i=1}^{4} Pgi = PD = 3.32, where PD is the load demand; Vmin ≤ V ≤ Vmax; and the power transmitted through each line (Pt_i) must be lower than the thermal limit of the line (Pt_i ≤ Limit_i)
4.4 Results and discussion

4.4.1. Class 1. In this class, there are 9 test functions, each with two objective functions and with both a local and a global optimal front. The test functions have the following general form [64]:

f_1(x) = x_1
f_2(x) = H(x_2)\,\{G(x) + S(x_1)\}

where H(x) is built from two exponential terms of the form e^{-0.5((x-1.5)/0.5)^2} and e^{-0.5((x-0.5)/\delta)^2}, which generate the local and the global fronts,

G(x) = 50 \sum_{i=3}^{N} x_i^2

and S(x_1) shapes the front; the complete definitions of H(x) and S(x) are given in [64]. The parameter values that distinguish F1-F9, as listed in [64], are:

F1: 0.1, 1.5, 1, 1
F2: 0.1, 1.5, 1.5, 1.5
F3: 0.1, 1.5, 0.5, 0.5
F4: 0.1, 1.5, 0.5, 1
F5: 0.1, 1.5, 1, 0.5
F6: 0.1, 1.5, 1, 1.5
F7: 0.1, 1.5, 1.5, 1
F8: 0.1, 1.5, 1.5, 0.5
F9: 0.1, 1.5, 0.5, 1.5
The statistical results of the different comparison criteria for the different methods are listed in Table IV. For each criterion, two values are reported: the mean and the standard deviation over 30 independent trials. Based on the results in the table, for all the test functions and comparison criteria, the proposed method outperforms NSGA-II. The proposed method also obtains better results than MOPSO, MOALO, and MOGWO in terms of GD, IGD, and metric of spacing, meaning that the proposed method achieves results closer to the true POF (GD and IGD) with an even spread along the obtained front (metric of spacing). The results also cover most parts of the true POF, as indicated by the IGD criterion. The results of the proposed method are also competitive with MOPSO, MOALO, and MOGWO in terms of the metric of spread, which means that the results cover the true POF adequately. The last column of the table reports the CPU time: the fastest method is MOALO, followed closely by the proposed method, both of which are far quicker than the other methods. As the results in Table IV show, the proposed method performs better than the other methods. The optimal results obtained by the proposed method for the different test functions are also depicted in figures 3, 5 and 7. These figures show the different positions of the Pareto optimal fronts of test functions F1 to F9; the POF of these test functions can be linear, convex, or concave and can be positioned in 9 different situations with respect to each other. In these figures, the local fronts, the global fronts, and the front obtained by the proposed method are depicted. As observed, the true POF is reached by the proposed method very well; in other words, not only is the global front reached, but the obtained results also cover it very well. In another comparison, the statistical analysis of the different methods for these test functions is depicted in figures 4, 6, and 8. Using these figures, the statistical performance of the methods can easily be investigated. The range of the results obtained in the different trials (i.e. the best and the worst solutions) is depicted as a rectangle, through which a red line indicates the median of the results. A better method has three features: 1) its rectangle is placed below those of the other methods (it has the lowest best result); 2) its red line is lower than those of the other methods (it has the lowest median result); and 3) it has the smallest rectangle (the difference between its best and worst results is relatively small). Note that the proposed method has the best performance for the criteria GD, IGD, and metric of spacing. For example, regarding GD for test function F1, the proposed method has the smallest rectangle, placed in the lowest part of the figure, which means it has a robust performance with the lowest median value. The same behaviour can be seen for IGD. For the metric of spacing, although the proposed method does not have the smallest rectangle, its best and mean values are much lower than those of the other methods. As shown in Table IV, the proposed method is not the best in terms of the metric of spread, but its results are competitive. The same observations hold for the other test functions.

Table IV- Statistical results of the different comparison criteria for test functions F1-F9
Function | Algorithm | GD Ave. | GD Std. | Spread Ave. | Spread Std. | Spacing Ave. | Spacing Std. | IGD Ave. | IGD Std. | Time (s)
F1 | MOLAPO | 0.000554 | 2.510E-05 | 0.59138 | 0.05358 | 0.01109 | 0.001085 | 0.00854 | 0.00034 | 24.21
F1 | MOPSO | 0.000928 | 4.27E-05 | 0.7385 | 0.08794 | 0.0180 | 0.001713 | 0.0139 | 0.00149 | 67.3
F1 | NSGA-II | 0.00075 | 1.72E-05 | 0.39 | 0.0296 | 0.012 | 0.000692 | 0.00859 | 0.00048 | 275
F1 | MOALO | 0.000847 | 0.000146 | 0.4784 | 0.09041 | 0.02198 | 0.006437 | 0.0477 | 0.00849 | 23.5
F1 | MOGWO | 0.001934 | 0.0009622 | 0.5663 | 0.07258 | 0.03198 | 0.009714 | 0.02545 | 0.00567 | 96.82
F2 | MOLAPO | 0.000586 | 4.419E-05 | 0.57875 | 0.0455 | 0.0112 | 0.00132 | 0.00898 | 0.00083 | 23.1
F2 | MOPSO | 0.000995 | 3.769E-05 | 0.6944 | 0.04323 | 0.0188 | 0.00144 | 0.0145 | 0.0013 | 69.9
F2 | NSGA-II | 0.0009 | 4.783E-05 | 0.4025 | 0.0414 | 0.0160 | 0.00082 | 0.0103 | 0.00031 | 265
F2 | MOALO | 0.001676 | 0.00064 | 0.564 | 0.0465 | 0.03709 | 0.019787 | 0.0522 | 0.00541 | 20.797
F2 | MOGWO | 0.0011049 | 6.214e-05 | 0.4253 | 0.0217 | 0.03626 | 0.005426 | 0.02712 | 0.005087 | 107.4
F3 | MOLAPO | 0.000681 | 9.668E-05 | 0.6222 | 0.0314 | 0.00945 | 0.000763 | 0.00803 | 0.00106 | 21.25
F3 | MOPSO | 0.000955 | 8.45E-05 | 0.7352 | 0.0284 | 0.01488 | 0.00189 | 0.01158 | 0.000502 | 64.1
F3 | NSGA-II | 0.0010 | 5.55E-05 | 0.4137 | 0.0525 | 0.0119 | 0.00078 | 0.00840 | 0.00048 | 231.1
F3 | MOALO | 0.001141 | 0.000354 | 0.5713 | 0.1171 | 0.03248 | 0.0158 | 0.04072 | 0.01211 | 22.44
F3 | MOGWO | 0.0007817 | 7.514e-05 | 0.5565 | 0.1295 | 0.018124 | 0.00383 | 0.02253 | 0.00667 | 100.07
F4 | MOLAPO | 0.000694 | 4.965E-05 | 0.582 | 0.0762 | 0.00975 | 0.00097 | 0.00777 | 0.00032 | 26.1
F4 | MOPSO | 0.00098 | 0.000113 | 0.75823 | 0.0397 | 0.015025 | 0.0005 | 0.0111 | 0.00102 | 52.57
F4 | NSGA-II | 0.00106 | 4.407E-05 | 0.40855 | 0.0331 | 0.0121 | 0.00130 | 0.00805 | 0.00035 | 242.7
F4 | MOALO | 0.001259 | 0.000596 | 0.4668 | 0.0791 | 0.02367 | 0.004414 | 0.05412 | 0.01429 | 24.51
F4 | MOGWO | 0.0008328 | 0.000120 | 0.64975 | 0.0398 | 0.02284 | 0.004155 | 0.01679 | 0.00131 | 102.36
F5 | MOLAPO | 0.000926 | 4.314E-05 | 0.58574 | 0.0397 | 0.0119 | 0.00140 | 0.01055 | 0.00080 | 14.65
F5 | MOPSO | 0.00124 | 0.000127 | 0.69505 | 0.0731 | 0.0184 | 0.00243 | 0.0145 | 0.00058 | 63.1
F5 | NSGA-II | 0.00112 | 1.05E-05 | 0.3965 | 0.0642 | 0.01335 | 0.0011 | 0.010 | 0.00051 | 192.1
F5 | MOALO | 0.001534 | 0.000560 | 0.4299 | 0.0828 | 0.02132 | 0.008785 | 0.05572 | 0.04042 | 18.78
F5 | MOGWO | 0.0013673 | 0.0003587 | 0.52068 | 0.0775 | 0.02925 | 0.006140 | 0.01973 | 0.00246 | 101.2
F6 | MOLAPO | 0.000574 | 2.243E-05 | 0.5764 | 0.0449 | 0.01096 | 0.00084 | 0.00866 | 0.00020 | 25.24
F6 | MOPSO | 0.00088 | 0.00011 | 0.6567 | 0.061 | 0.01606 | 0.0012 | 0.013 | 0.00201 | 60.1
F6 | NSGA-II | 0.00074 | 2.706E-05 | 0.4418 | 0.0716 | 0.01396 | 0.00141 | 0.00879 | 0.00033 | 212.1
F6 | MOALO | 0.001192 | 0.0004515 | 0.580 | 0.0515 | 0.02785 | 0.009101 | 0.03917 | 0.00694 | 23.55
F6 | MOGWO | 0.0009161 | 8.8403e-5 | 0.5275 | 0.0835 | 0.02732 | 0.009374 | 0.02580 | 0.00726 | 98.12
F7 | MOLAPO | 0.000646 | 2.853E-05 | 0.58144 | 0.0541 | 0.01318 | 0.00086 | 0.01014 | 0.00050 | 27.2
F7 | MOPSO | 0.00102 | 5.39E-05 | 0.6185 | 0.1044 | 0.0190 | 0.0023 | 0.0141 | 0.0014 | 74.57
F7 | NSGA-II | 0.000959 | 4.987E-05 | 0.3738 | 0.055 | 0.01495 | 0.001755 | 0.01035 | 0.000885 | 203.1
F7 | MOALO | 0.0011052 | 0.000216 | 0.6071 | 0.1202 | 0.03748 | 0.008547 | 0.05505 | 0.00938 | 22.09
F7 | MOGWO | 0.000994 | 6.9372e-5 | 0.5506 | 0.0309 | 0.03174 | 0.0005753 | 0.02471 | 0.00098 | 99.84
F8 | MOLAPO | 0.002359 | 5.62E-05 | 0.60145 | 0.0146 | 0.013894 | 0.00123 | 0.02169 | 0.00043 | 23.8
F8 | MOPSO | 0.00284 | 0.00015 | 0.5925 | 0.0624 | 0.01887 | 0.001303 | 0.02439 | 0.00131 | 61.07
F8 | NSGA-II | 0.0029 | 0.000193 | 0.407949 | 0.0260 | 0.01574 | 0.001201 | 0.02238 | 0.0008 | 221.7
F8 | MOALO | 0.00360 | 0.00114 | 0.5397 | 0.1048 | 0.04229 | 0.01207 | 0.05350 | 0.00979 | 18.77
F8 | MOGWO | 0.002405 | 0.0005806 | 0.93677 | 0.2392 | 0.03127 | 0.006724 | 0.0318 | 0.01265 | 98.40
F9 | MOLAPO | 0.000633 | 5.13E-05 | 0.58869 | 0.0192 | 0.00933 | 0.00102 | 0.00738 | 0.00028 | 28.3
F9 | MOPSO | 0.0010 | 4.635E-05 | 0.6733 | 0.0226 | 0.0138 | 0.0011 | 0.0108 | 0.00051 | 58.38
F9 | NSGA-II | 0.00110 | 0.00012 | 0.4372 | 0.0493 | 0.0118 | 0.000798 | 0.00814 | 0.00041 | 193.1
F9 | MOALO | 0.0011615 | 0.00087 | 0.427 | 0.127 | 0.02285 | 0.01174 | 0.06005 | 0.01984 | 22.36
F9 | MOGWO | 0.000914 | 0.00016 | 0.5977 | 0.04366 | 0.02163 | 0.00208 | 0.02149 | 0.00566 | 99.05
Fig. 3. The test functions shape, true POF, and POF obtained by the proposed method for test functions F1-F3
Fig. 4. Statistical analysis of different methods for test functions F1-F3
Fig. 5. The test functions shape, true POF, and POF obtained by the proposed method for test functions F4-F6
Fig. 6. Statistical analysis of different methods for test functions F4-F6
Fig. 7. The test functions shape, true POF, and POF obtained by the proposed method for test functions F7-F9
Fig. 8. Statistical analysis of different methods for test functions F7-F9
4.4.2. Class 2. In this class, there are three test functions which have several local fronts. These test functions also have two objective functions and follow the same general form as Class 1 [64]:

f_1(x) = x_1
f_2(x) = H(x_2)\,\{G(x) + S(x_1)\}

where H(x) is built from a damped oscillating term of the form e^{-x}\cos(2\pi x), which produces the multiple local fronts,

G(x) = 50 \sum_{i=3}^{N} x_i^2

and S(x_1) shapes the front; the complete definitions are given in [64]. The parameter values of the three functions are F10: 3, 4, 1, 0.5; F11: 3, 4, 1, 1; F12: 3, 4, 1, 1.5.
The statistical results of the different comparison criteria and the obtained fronts are shown in Table V and Fig. 9, respectively. Note that the proposed method outperforms NSGA-II for all criteria, and the other three methods with regard to GD, IGD, and metric of spacing. Its results are also competitive with MOPSO, MOGWO, and MOALO in terms of the metric of spread. Table V shows the better performance of the proposed method in finding a POF close to the true POF and covering it very well. The plots of Fig. 9 also illustrate that the proposed MOLAPO finds the true POF very well. The statistical analysis shown in Fig. 10 likewise demonstrates the better performance of the proposed method compared to the other methods. In fact, for the Class 2 test functions, the performance of the proposed method relative to the other methods is the same as for Class 1, so no further discussion is needed.

Table V- Statistical results of the different comparison criteria for test functions F10-F12
Function | Algorithm | GD Ave. | GD Std. | Spread Ave. | Spread Std. | Spacing Ave. | Spacing Std. | IGD Ave. | IGD Std. | Time (s)
F10 | MOLAPO | 0.000352 | 3.93E-05 | 0.7897 | 0.0626 | 0.00621 | 0.000467 | 0.004480 | 0.000183 | 22.5
F10 | MOPSO | 0.000610 | 6.02E-05 | 0.76053 | 0.06485 | 0.0082 | 0.00112 | 0.00633 | 0.000862 | 62.9
F10 | NSGA-II | 0.000749 | 8.50E-05 | 0.423 | 0.0216 | 0.00671 | 0.000244 | 0.00472 | 0.000346 | 162.4
F10 | MOALO | 0.001086 | 0.00053 | 0.7300 | 0.119 | 0.0241 | 0.0211 | 0.0626 | 0.04206 | 28.79
F10 | MOGWO | 0.00045 | 0.0001 | 0.636 | 0.1581 | 0.0157 | 0.004345 | 0.01222 | 0.00408 | 175.6
F11 | MOLAPO | 0.00022 | 6.87E-06 | 0.6942 | 0.0476 | 0.00594 | 0.000493 | 0.004241 | 0.00038 | 25.3
F11 | MOPSO | 0.000474 | 9.24E-05 | 0.7906 | 0.05623 | 0.00902 | 0.00071 | 0.0070 | 0.000842 | 69.03
F11 | NSGA-II | 0.000366 | 2.19E-05 | 0.4754 | 0.05992 | 0.0071 | 0.0003317 | 0.004318 | 0.00023 | 183.2
F11 | MOALO | 0.000417 | 8.943e-5 | 0.542 | 0.1054 | 0.01548 | 0.00751 | 0.02334 | 0.01930 | 28.66
F11 | MOGWO | 0.000385 | 1.95e-05 | 0.7671 | 0.0260 | 0.0107 | 0.00136 | 0.00756 | 0.00068 | 171.19
F12 | MOLAPO | 0.0002033 | 8.471E-06 | 0.774 | 0.05404 | 0.00570 | 0.00025 | 0.00433 | 0.00023 | 33.7
F12 | MOPSO | 0.000385 | 1.601E-05 | 0.7232 | 0.0729 | 0.00772 | 0.00058 | 0.006 | 0.00048 | 72.4
F12 | NSGA-II | 0.000401 | 1.1489E-05 | 0.41520 | 0.05272 | 0.00643 | 0.000891 | 0.00433 | 0.000322 | 214.4
F12 | MOALO | 0.000883 | 0.000676 | 0.6366 | 0.06485 | 0.02461 | 0.01528 | 0.02508 | 0.01672 | 30.08
F12 | MOGWO | 0.00036 | 3.87e-05 | 0.6545 | 0.1689 | 0.01418 | 0.00224 | 0.0121 | 0.00289 | 177.25
Fig. 9. The test functions shape, true POF, and POF obtained by the proposed method for test functions F10-F12
Fig. 10. Statistical analysis of different methods for test functions F10-F12

4.4.3. Class 3. This class includes the test functions which have several discrete, stair-shaped local fronts [64]. In this class, there are three test functions with two objectives and one test function with three objectives: F13-F15 are the two-objective functions and F16 is the three-objective function. From Table VI and Fig. 11, note that the proposed method outperforms the other methods for these test functions. For the GD criterion and the metric of spacing, the proposed method outperforms the other methods. In terms of IGD, both the proposed method and NSGA-II outperform the other methods and their results are almost comparable. For the metric of spread, although the proposed method does not have the best results, its results are competitive with those of the other methods. The statistical analysis for these test functions is presented in Fig. 12, where the superiority of the proposed method can be observed. Clearly, the proposed method performs for this class of test functions as it did for the two previous classes.
The two-objective functions F13-F15 follow the same general form as Class 2,

f_1(x) = x_1
f_2(x) = H(x_2)\,\{G(x) + S(x_1)\}

with H(x) built from e^{-x}\cos(2\pi x), G(x) = 50 \sum_{i=3}^{N} x_i^2, and S(x_1) as defined in [64]; their parameter values are F13: 2, 6, 3, 0.5; F14: 4, 6, 3, 0.5; F15: 8, 6, 3, 0.5. F16 is the three-objective member of this class, with f_1(x) = x_1 and with f_2(x) and f_3(x) constructed from G(x) and a sinusoidal H(x) term; its complete definition is given in [64].
Table VI- Statistical results of the different comparison criteria for test functions F13-F16

Function | Algorithm | GD Ave. | GD Std. | Spread Ave. | Spread Std. | Spacing Ave. | Spacing Std. | IGD Ave. | IGD Std. | Time (s)
F13 | MOLAPO | 0.000844 | 7.426E-05 | 0.8248 | 0.0515 | 0.0053 | 0.000482 | 0.1889 | 0.00049 | 21.6
F13 | MOPSO | 0.00153 | 4.705E-05 | 0.8677 | 0.0386 | 0.0088 | 0.000632 | 0.1918 | 0.00264 | 59.6
F13 | NSGA-II | 0.00165 | 6.476E-05 | 0.654 | 0.0367 | 0.0075 | 0.000491 | 0.1883 | 0.00032 | 109.6
F13 | MOALO | 0.00119 | 0.00018 | 0.7024 | 0.09265 | 0.0382 | 0.026122 | 0.2405 | 0.05326 | 20.09
F13 | MOGWO | 0.002713 | 0.0004216 | 0.8530 | 0.0566 | 0.01391 | 0.004696 | 0.2160 | 0.00568 | 63.12
F14 | MOLAPO | 0.001394 | 7.228E-05 | 0.8182 | 0.05315 | 0.00696 | 0.000525 | 0.1882 | 0.00057 | 21.4
F14 | MOPSO | 0.00220 | 0.000213 | 0.8516 | 0.0610 | 0.00995 | 0.000596 | 0.1902 | 0.00110 | 54.55
F14 | NSGA-II | 0.00243 | 0.000111 | 0.6058 | 0.0767 | 0.0089 | 0.00127 | 0.1870 | 0.00032 | 101.4
F14 | MOALO | 0.001878 | 0.0003460 | 0.7209 | 0.13247 | 0.04507 | 0.03447 | 0.2656 | 0.05645 | 19.52
F14 | MOGWO | 0.000113 | 0.0000335 | 0.7617 | 0.03240 | 0.01250 | 0.000674 | 0.2160 | 0.00600 | 70.1
F15 | MOLAPO | 0.00176 | 0.0001221 | 0.8508 | 0.03729 | 0.007891 | 0.000997 | 0.2037 | 0.00090 | 24.9
F15 | MOPSO | 0.0029 | 0.000275 | 0.880 | 0.0691 | 0.0120 | 0.0010 | 0.207 | 0.00211 | 53.94
F15 | NSGA-II | 0.00304 | 0.000145 | 0.6873 | 0.0636 | 0.0104 | 0.00085 | 0.2029 | 0.00019 | 104.1
F15 | MOALO | 0.002833 | 0.000386 | 0.8182 | 0.0695 | 0.03873 | 0.01211 | 0.2197 | 0.01260 | 18.20
F15 | MOGWO | 0.00329 | 0.000310 | 0.7681 | 0.0769 | 0.01879 | 0.00659 | 0.2277 | 0.00438 | 78.5
F16 | MOLAPO | 0.01459 | 0.0001795 | 0.5912 | 0.02327 | 0.0554 | 0.00380 | 0.3225 | 0.00709 | 25.4
F16 | MOPSO | 0.0354 | 0.001028 | 0.6589 | 0.0274 | 0.07593 | 0.0108 | 0.3066 | 0.00489 | 57.78
F16 | NSGA-II | 0.06142 | 0.00739 | 0.6431 | 0.0381 | 0.0848 | 0.00789 | 0.2947 | 0.00356 | 107
F16 | MOALO | 0.044912 | 0.003151 | 0.5709 | 0.14199 | 0.11588 | 0.007530 | 0.3211 | 0.01128 | 23.14
F16 | MOGWO | 0.036908 | 0.00140 | 0.6346 | 0.0560 | 0.09176 | 0.00 | 0.2985 | 0.00784 | 69.4
Fig. 11. The test functions shape, true POF and obtained POF using the proposed method for test functions F13-F16
Fig. 12. Statistical analysis of different methods for test functions F13-F16

4.4.4. Class 4. In this class, the test functions have continuous stair fronts [64]. The results are shown in Table VII. For test function F17, the proposed method has the worst results in terms of GD, but the best in terms of IGD and metric of spacing, and it ranks second best in terms of the metric of spread. For test function F18, the proposed method has the best results in terms of IGD and metric of spread, and the second best results regarding GD and metric of spacing. Table VII and Fig. 13 demonstrate the excellent performance of the proposed method: the true POF is achieved and the obtained results cover it very well. The statistical analysis is exhibited in Fig. 14.
Table VII- Statistical results of the different comparison criteria for test functions F17-F18
F17
F18
MO OLAPO MO OPSO NS SGA_II MO OALO MO OGWO MO OLAPO
GD D Ave. 0.0603 0.000468 0.00037 0.00137 0.00033 0.00043
MO OPSO NS SGA_II
Std. 0.00124 0.0000228 1.93E E-05 0.0000853 6.12ee-06 0.000014
Metric of sppacing Ave. Std. 0.0053 0.000386 0.00796 0.001042 0.00608 0.000293 0.012404 0.004353 0.008864 0.001618 0.0054 0.000283
IGD Ave. 0.0040 0.0059 0.00401 0.019914 0.00961 0.00404
Std. 0.000693 0.000377 0.000181 0.003609 0.000513 0.000237
0.00061
0.0000251
0.709
0.047
0.0070
0.00146
0.0059
0.000420
23.1 62.41 207.1 27.14 73.1 26.4 57.14
0.00043
2.49E E-05
0.43299
0.03549
0.00634
0.000433
0.00415
0.00036
212
Time(s)
Metricc of spread Ave. Std. 0.69555 0.0531 0.775 0.0536 0.446 0.0520 0.62611 0.11658 0.69300 0.1357 0.679 0.0251
MO OALO MO OGWO
0.8907 0.000398
0.013317 2.58ee-05
0.61355 0.54477
0.009596 0.071704
0.02777 0.0091818
0.025459 0.0012363
9.3888 0.0066913
0.48265 0.0009530
Fig. 13. The test functions shape, true POF, and POF obtained by the proposed method for test functions F17-F18
31.47 75.89
Fig. 14. Statistical analysis of different methods for test functions F17-F18

4.4.5. Class 5. This class includes the test functions with three objectives and stair- or sloped-plate-shaped fronts [64]. For these test functions, based on the results of Table VIII, the proposed method outperforms the other methods in terms of GD and IGD. It also ranks second best in terms of the metric of spacing, after MOGWO, but not in terms of the metric of spread. Fig. 15 also shows how well the proposed method finds an optimal front close to the true POF. From the statistical analysis in Fig. 16, the superior performance of the proposed method can be concluded.
Table VIII- Statistical results of the different comparison criteria for test functions F19-F22
F20
IGD
Time(s)
Std.
Ave.
Std.
Ave.
Stdd.
Ave.
Std.
0.003321 0.00551
7.46E-005 0.0002889
0.40299 0.42538
0.03176 0.02291
0.04565 0.058
0.000158 0.000627
0.0499686 0.06660
0.001855 0.001355
NSG GA_II
0.170027
0.01261
0.56
0.0295
0.102
0.00174
0.1333
0.030622
101.1 385.5
MO OALO
0.0055873
0.000466
0.4452
0.04166
0.0886
0.0009524
0.15997
0.017711
54.3
MO OGWO
0.0066736
0.000177
0.52347
0.052285
0.042496
0.0002364
0.257771
0.051577
103.1
MO OLAPO
0.003306
0.00011
0.3697
0.0212
0.0359
0.000104
0.04661
0.003022
51.2
MO OPSO
0.004442
0.0002005
0.42524
0.05675
0.0473
0.0007978
0.061165
0.0030115
107.4
47.1
NSG GA_II
0.04997
0.002155
0.4642
0.0093
0.0502
0.0003193
0.060048
0.008211
383.7
MO OALO MO OGWO
0.0044426 0.0044666
0.0002999 9.504e--05
0.4421 0.32349
0.05336 0.01592
0.07105 0.035893
0.001221 0.0004349
0.12668 0.175593
0.009788 0.0078996
12.7 117.1
31
Metric of spaacing
Ave.. MO OLAPO MO OPSO F19
Metricc of spread
F21
F22
MOLAPO MOPSO
0.00298 0.0043
2.838E-05 0.00017
0.3664 0.387
0.0030 0.0412
0.0350 0.0422
0.001822 0.00494
0.04450 0.0592
0.00107 0.00373
51.3 105.1
NSGA_II MOALO MOGWO MOLAPO MOPSO NSGA_II MOALO
0.04695 0.004215 0.004796 0.00301 0.00423 0.1665 0.00431
0.00145 0.00054 0.0001487 8.392E-05 8.662E-05 0.0146 0.00031
0.4936 0.4927 0.4649 0.3569 0.40474 0.56067 0.4782
0.0157 0.04342 0.031314 0.03162 0.0404 0.0424 0.06215
0.0467 0.07420 0.03529 0.0341 0.0451 0.0782 0.07495
0.00241 0.00962 0.001729 0.001543 0.00391 0.010 0.01896
0.0553 0.1126 0.18837 0.04695 0.058639 0.17222 0.11889
0.00985 0.007595 0.02032 0.00353 0.00321 0.03819 0.01617
381.4 60.1 107.3 48.9 98.14 372.3 59.47
MOGWO
0.00484
4.167e-05
0.49572
0.05059
0.03454
0.00494
0.22079
0.01802
104.1
Fig. 15. The test functions shape, true POF, and POF obtained by the proposed method for test functions F19-F22
Fig. 16. Statistical analysis of different methods for test functions F19-F22
4.4.6. Class 6. This class contains the test functions with many variables: F23-F25 are functions for which a large number of variables must be determined optimally. These test functions have two objectives (both to be minimized) and 30 variables with 0 ≤ x_i ≤ 1, and are defined as follows [65]:

F23 (ZDT1):
f_1(x) = x_1
f_2(x) = g(x)\,h(f_1(x), g(x))
g(x) = 1 + \frac{9}{N-1}\sum_{i=2}^{30} x_i
h(f_1, g) = 1 - \sqrt{f_1/g}

F24 (ZDT2):
f_1(x) = x_1
f_2(x) = g(x)\,h(f_1(x), g(x))
g(x) = 1 + \frac{9}{N-1}\sum_{i=2}^{30} x_i
h(f_1, g) = 1 - (f_1/g)^2

F25 (ZDT3):
f_1(x) = x_1
f_2(x) = g(x)\,h(f_1(x), g(x))
g(x) = 1 + \frac{9}{N-1}\sum_{i=2}^{30} x_i
h(f_1, g) = 1 - \sqrt{f_1/g} - (f_1/g)\sin(10\pi f_1)
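As an example of this class, F23 (ZDT1) can be evaluated with the short routine below, a straightforward transcription of the equations above; the random test vector is only for illustration.

```python
import numpy as np

def zdt1(x):
    """Objectives (f1, f2) of F23 (ZDT1) for x in [0, 1]^30."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].sum() / (len(x) - 1)
    h = 1.0 - np.sqrt(f1 / g)
    return np.array([f1, g * h])

# Example: evaluate a random 30-variable candidate.
print(zdt1(np.random.rand(30)))
```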
The results shown in Table IX and Fig. 17 prove the acceptable performance of the proposed method in solving test functions with many variables. In terms of GD, the proposed method outperforms the other methods. It also has better results in terms of IGD and metric of spacing, and competitive results regarding the metric of spread.

Table IX- Statistical results of the different comparison criteria for test functions F23-F25
Function | Algorithm | GD Ave. | GD Std. | Spread Ave. | Spread Std. | Spacing Ave. | Spacing Std. | IGD Ave. | IGD Std.
F23 | MOLAPO | 0.000249 | 8.8E-06 | 0.73875 | 0.0196 | 0.00471 | 0.00018 | 0.00384 | 0.00023
F23 | MOPSO | 2.83E+1 | 1.69E+1 | NaN | NaN | NaN | NaN | 3.749E+1 | 7.49E+0
F23 | NSGA-II | 7.071E-3 | 6.26E-3 | 8.8155E-1 | 1.36E-2 | 1.03E-2 | 4.92E-3 | 2.869E-1 | 7.17E-2
F23 | MOALO | 0.00254 | 0.00012 | 0.6835 | 0.11258 | 0.00854 | 0.0004532 | 0.01524 | 0.005022
F23 | MOGWO | 0.01587 | 0.002518 | 0.54587 | 0.1773 | 0.01472 | 0.00751 | 0.017875 | 0.00457
F24 | MOLAPO | 0.00028 | 3.7E-05 | 0.7043 | 0.088 | 0.00506 | 0.00043 | 0.0033 | 0.00046
F24 | MOPSO | 8.991E+0 | 9.78E+0 | NaN | NaN | NaN | NaN | 1.836E+1 | 7.16E+0
F24 | NSGA-II | 2.869E-1 | 7.17E-4 | 8.0556E-1 | 1.85E-2 | 1.9881E-1 | 1.08E-3 | 1.988E-1 | 1.08E-3
F24 | MOALO | 0.01157 | 0.00014 | 0.5531 | 0.0985 | 0.012489 | 0.005478 | 0.009248 | 0.00145
F24 | MOGWO | 0.02457 | 0.00545 | 0.4578 | 0.0141 | 0.00255 | 0.00756 | 0.00867 | 0.000947
F25 | MOLAPO | 0.1245 | 0.0155 | 0.8775 | 0.02604 | 0.0867 | 0.03963 | 4.2784 | 0.52137
F25 | MOPSO | 1.10E+0 | 2.54E+0 | 1.3366E+0 | 4.54E-1 | 6.6199E+3 | 1.22E+4 | 1.911E+2 | 2.30E+2
F25 | NSGA-II | 8.608E-2 | 2.69E-4 | 1 | 0 | 0 | 0 | 6.802E-1 | 2.02E-3
F25 | MOALO | 0.08524 | 0.00362 | 0.7014 | 0.06253 | 0.19824 | 0.057128 | 3.0157 | 0.38265
F25 | MOGWO | 0.14256 | 0.00936 | 0.6854 | 0.09758 | 0.3157 | 0.0874 | 0.9254 | 0.1247
Fig. 17. Comparison of the optimal fronts obtained by the different methods for test functions F23-F25
4.4.7. Class 7. The last group of test functions comprises the engineering problems, which are highly constrained.

4.4.7.1. Mechanical engineering problems

These functions include F26: four-bar truss design problem [67], F27: speed reducer design problem [67,68], F28: disk brake design problem [69], F29: welded beam design problem [69], and F30: optimal power flow [55,70]. The first four test functions are completely illustrated in [1], and F30 is illustrated in [71]. The results are compared to those of MOPSO, NSGA-II, MOALO, and MOGWO in Table X, and the fronts obtained by the proposed method for the different test functions are shown in Fig. 18. Similar to the previous results, the proposed method outperforms all these methods in terms of the different criteria.

Table X- Statistical results of the different comparison criteria for test functions F26-F29
F26
OLAPO MO MO OPSO[65] NS SGA_II[65] MO OALO[1]
MO OGWO
GD Ave. 0.1245
S Std. 00.0014
Meetric of spreadd Avve. Std. 0.3381 0.00115
Metric of spacing Ave. Std. 1.1214 0.143
IGD Ave. 0.0867
Std. 0.0014
0.3741 0.3601 0.1264 0.13542
00.0422 00.0470 00.0327 00.08536
N/A A N/A A 0.370 0.4251
2.5303 2.3635 1.1805 1.4765
N/A N/A 0.1062 0.14978
N/A N/A 1.52E-02 0.0245
N/A N/A 0.002251 0.008847
0.227 0.255 0.144 0.3145
F27
OLAPO MO MO OPSO[65] NS SGA_II[65] MO OALO[1] MO OGWO
F28
MO OLAPO MO OPSO[65] NS SGA_II[65] MO OALO[1] MO OGWO
F29
MO OLAPO MO OPSO[65] NS SGA_II[65] MO OALO[1] MO OGWO
0.4537
00.0987
0.9924
0.11
0.84
0.11
0.4527
0.0954
0.98831 9.8437 1.1767 1.9851
00.1789 77.0810 00.2327 00.1424
N/A A N/A A 0.839 0.9347
N/A N/A 0.12667 0.14885
16.685 2.7654 1.7706 2.4751
2.696 3.534 2.769 3.3244
N/A N/A 0.8672 0.9414
N/A N/A 1.49E-01 0.19147
0.00014
1.12E-2
0.44501
0.031154
0.04158
0.002145
0.0012455
0.00053
0.0244 3.0771 0.0011 0.54889 8.247E-5 N/A N/A 0.00665 0.0175
00.12314 00.10782 00.00245 00.1495 8.32E-6 N N/A N N/A 00.00742 00.004765
0.46041 0.79717 0.44958 0.7122
0.109961 0.066608 0.054427 0.051145
N/A N/A 0.0421 0.0745
N/A N/A 0.0058 0.0247
N/A N/A 1.94E-2 0.00554
N/A N/A 0.78E-3 0.000475
0.665243
0.14337
0.0007441
0.000082
0.00145
0.00085
N/A A N/A A 0.19784 0.4567
N/A N/A 0.079962 0.057786
N/A N/A 0.0426 0.045
N/A N/A 0.0077 0.0758
N/A N/A 1.52E-3 0.00955
N/A N/A 4.65E-3 0.000413
Fig. 18. Optimal fronts obtained by the proposed method for test functions F26-F29
4.9.2. Optimal power flow (OPF)
Electrical engineering has many multi-objective optimization problems which ideally must be solved with excellent performance [72,73]. Optimal power flow is an electrical engineering problem in which the amount of power produced by each generator should be obtained optimally in order to supply the whole electric load in a power system [70,74]. The objective function of this problem might be minimizing power supply cost, reducing emissions, improving the voltage profile, and/or minimizing active power loss [75–77]. In the case of a multi-objective problem, all or some of these goals can be considered simultaneously. This problem is a highly constrained problem. Whenever a solution is sent to the objective function, a demanding program known as load flow is run. By this program, the voltages of the buses and the currents of the transmission lines are obtained. If the voltages and the power flows of the transmission lines are within their predefined ranges, the constraints are satisfied and the fitness of the solution can be calculated. Thus, to check the constraints of this problem, a demanding program must be run to calculate the voltages and the transmission line currents. In addition to these constraints, the limitation on the power production of the generators must also be satisfied. More details of this problem can be found in [70]. In this paper, this problem is solved by the proposed method considering two objective functions, namely power supply cost and emission minimization. The results of the proposed method are compared to those of the four aforementioned algorithms, i.e. NSGA-II, MOPSO, MOALO, and MOGWO. The cost function related to the power supply of the system can be formulated as follows:

$C = \sum_{i=1}^{NG} \left( a P_i^{2} + b P_i + c \right)$    (15)

where C is the total cost of power production, P_i is the amount of power produced by generator i, NG is the number of generators in the power system, and a, b, and c are the coefficients of the cost function. The amount of emissions given off by the generators can be mathematically presented as follows:

$EM = \sum_{i=1}^{NG} e_i(P_{Gi})$    (16)

$e_i(P_{Gi}) = \gamma_i P_{Gi}^{2} + \beta_i P_{Gi} + \alpha_i + \xi_i \exp(\lambda_i P_{Gi})$

where EM is the total emission generated in the power system, e_i(P_{Gi}) is the emission of generator i when P_{Gi} is generated, and γ, β, α, ξ, and λ are the coefficients. The OPF is solved for the IEEE 14-bus test system shown in Fig. 19. All coefficients related to the generators of this test system are listed in Table XI.
Fig. 19. Single-line diagram of the IEEE 14-bus test system
Table XI- The coefficients of the generators in the IEEE 14-bus test system
Coefficient | G1         | G2         | G3       | G4         | G5
Bus number  | 1          | 2          | 3        | 6          | 8
γ           | 0.04586    | 0.006490   | 0.05638  | 0.04586    | 0.03380
β           | -0.050944  | -0.005554  | -0.06047 | -0.050994  | -0.03550
α           | 0.04258    | 0.004091   | 0.02543  | 0.04258    | 0.05326
ξ           | 0.0000011  | 0.000020   | 0.00050  | 0.0000001  | 0.00200
λ           | 8.00000    | 2.885700   | 3.33300  | 8.00000    | 2.00000
a           | 0.04302933 | 0.25       | 0.01     | 0.01       | 0.01
b           | 20         | 20         | 40       | 40         | 40
c           | 0          | 0          | 0        | 0          | 0
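As an illustration of how a candidate dispatch is evaluated against Eqs. (15) and (16), the following sketch computes the two objectives using the coefficients of Table XI. The dispatch values are hypothetical, and the load-flow check of voltage and line-flow constraints described above is deliberately omitted, so this is a minimal sketch rather than the authors' procedure.

```python
import numpy as np

# Generator coefficients from Table XI; tuple order:
# (gamma, beta, alpha, xi, lambda) for emission, then (a, b, c) for cost.
GENS = {
    1: (0.04586, -0.050944, 0.04258, 0.0000011, 8.0, 0.04302933, 20.0, 0.0),
    2: (0.006490, -0.005554, 0.004091, 0.000020, 2.8857, 0.25, 20.0, 0.0),
    3: (0.05638, -0.06047, 0.02543, 0.00050, 3.333, 0.01, 40.0, 0.0),
    6: (0.04586, -0.050994, 0.04258, 0.0000001, 8.0, 0.01, 40.0, 0.0),
    8: (0.03380, -0.03550, 0.05326, 0.00200, 2.0, 0.01, 40.0, 0.0),
}

def total_cost(dispatch):
    """Eq. (15): total quadratic fuel cost of a dispatch {bus: P_i}."""
    return sum(a * dispatch[bus] ** 2 + b * dispatch[bus] + c
               for bus, (_, _, _, _, _, a, b, c) in GENS.items())

def total_emission(dispatch):
    """Eq. (16): total emission of a dispatch {bus: P_Gi}."""
    total = 0.0
    for bus, (gamma, beta, alpha, xi, lam, _a, _b, _c) in GENS.items():
        p = dispatch[bus]
        total += gamma * p ** 2 + beta * p + alpha + xi * np.exp(lam * p)
    return total

# Hypothetical dispatch values (units follow the data convention of Table XI).
# In the actual OPF, a load flow would also be run to verify the voltage and
# line-flow constraints before the fitness of the solution is accepted.
dispatch = {1: 1.9, 2: 0.4, 3: 0.3, 6: 0.2, 8: 0.15}
print(total_cost(dispatch), total_emission(dispatch))
```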
The POFs obtained by the 5 different methods are depicted in Fig. 20. Clearly, the proposed method not only obtains better results with regard to the other methods, but also offers a wider range of possible solutions.
Fig. 20. POF obtained by different methods for the OPF of the IEEE 14-bus test system
5 Conclusion
In this paper, the multi-objective version of a recently introduced optimization algorithm known as the Lightning Attachment Procedure Optimization (LAPO) algorithm is introduced. To obtain the multi-objective LAPO (MOLAPO), two mechanisms are added to the single-objective version of LAPO: 1- non-dominated sorting and storing the best front in an Archive matrix, and 2- selecting the best test point from the Archive matrix for updating the other test points. The Archive matrix is used to guarantee that the best answers are retained during optimization. Moreover, the roulette wheel is used to grid the search space, which enhances the exploration in the search space and improves the convergence behavior of the proposed method. The proposed method is applied to 7 different groups of test functions (29 test functions in total). The test functions differ in terms of the shape of the front, the number of local fronts, the number of objective functions, the dimension of the problem, and the constraints. The results obtained by the proposed method are compared to those of four well-regarded multi-objective optimization methods, namely NSGA-II, MOPSO, MOALO, and MOGWO. The comparison criteria are the Generational Distance (GD), the Inverted Generational Distance (IGD), the metric of spread, and the metric of spacing. The lower GD values of the proposed method show that it obtains a POF closer to the true POF. In addition, regarding the low value of IGD, it can be said that not only is the POF obtained by the proposed method very close to the true one, but diverse results are also achieved. Furthermore, based on the high value of the metric of spread and the low value of the metric of spacing, it can be concluded that the results obtained by the proposed method cover the POF very well and an extended range of feasible solutions is achieved. The excellent performance of the proposed method can also be seen in the figures which show the fronts obtained by the proposed method. After the benchmark test functions, some real multi-objective optimization problems are employed to test the performance of the proposed method. To investigate the performance of the proposed method, the four aforementioned criteria are used. Similar to the benchmark test functions, the proposed method outperforms the other methods in three criteria and has competitive results regarding the fourth criterion. According to the results and comparisons, the proposed method is highly recommended for solving complicated multi-objective problems because of the following features:
1- The proposed method does not have any parameter which needs to be tuned. In other words, the proposed method has a stable performance for different problems.
2- Based on the GD criterion, the proposed method is able to find a set of solutions very close to the true POF. This means that high-quality results can be obtained by the proposed method.
3- Lastly, the proposed method offers a wide range of solutions. This allows experts to envision various states of the system and choose the most suitable one for their particular system.
As future work, the proposed method can be applied to the problems of Unit Commitment and Economic Dispatch. Moreover, the proposed method can be combined with other methods and a hybrid algorithm can be proposed.
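To make the archiving mechanism summarized above concrete, the following is a minimal sketch of a non-dominated Archive update with crowding-distance truncation for a minimization problem. The archive size, the solution representation (a dict carrying the objective vector under the key 'f'), and the omission of the roulette-wheel gridding used for leader selection are simplifying assumptions, so this is an illustrative sketch, not the authors' exact implementation.

```python
import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimization)."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def crowding_distance(objs):
    """Crowding distance of each point in a set of objective vectors."""
    objs = np.asarray(objs, dtype=float)
    n, m = objs.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(objs[:, k])
        dist[order[0]] = dist[order[-1]] = np.inf  # always keep boundary points
        span = objs[order[-1], k] - objs[order[0], k] or 1.0
        for j in range(1, n - 1):
            dist[order[j]] += (objs[order[j + 1], k] - objs[order[j - 1], k]) / span
    return dist

def update_archive(archive, candidates, max_size=100):
    """Keep only non-dominated solutions; truncate by crowding distance."""
    pool = archive + candidates  # each item: {'x': decision vector, 'f': objectives}
    nondom = [p for p in pool
              if not any(dominates(q['f'], p['f']) for q in pool if q is not p)]
    if len(nondom) > max_size:
        d = crowding_distance([p['f'] for p in nondom])
        keep = np.argsort(-d)[:max_size]  # most isolated (largest distance) first
        nondom = [nondom[i] for i in keep]
    return nondom
```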
Acknowledgment
We would like to express our gratitude to Ms. Jacqueline Hodge for her help in reviewing the English language of the paper.
References
[1]
S. Mirjalili, P. Jangir, S. Saremi, Multi-objective ant lion optimizer: a multi-objective optimization algorithm for solving engineering problems, Appl. Intell. 46 (2017) 79–95.
[2]
K. Miettinen, P. Preface By-Neittaanmaki, Evolutionary algorithms in engineering and computer science: recent advances in genetic algorithms, evolution strategies, evolutionary programming, GE, John Wiley & Sons, Inc., 1999.
[3]
I. BoussaïD, J. Lepagnot, P. Siarry, A survey on optimization metaheuristics, Inf. Sci. (Ny). 237 (2013) 82–117.
[4]
S. Mirjalili, S. Saremi, S.M. Mirjalili, L. dos S. Coelho, Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization, Expert Syst. Appl. 47 (2016) 106–119.
[5]
X. Hu, R. Eberhart, Multiobjective optimization using dynamic neighborhood particle swarm optimization, in: Evol. Comput. 2002. CEC'02. Proc. 2002 Congr., IEEE, 2002: pp. 1677–1681.
[6]
K. Deb, S. Agrawal, A. Pratap, T. Meyarivan, A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II, in: Int. Conf. Parallel Probl. Solving From Nat., Springer, 2000: pp. 849–858.
[7]
A. Zhou, B.-Y. Qu, H. Li, S.-Z. Zhao, P.N. Suganthan, Q. Zhang, Multiobjective evolutionary algorithms: A survey of the state of the art, Swarm Evol. Comput. 1 (2011) 32–49.
[8]
J. Branke, T. Kaußler, H. Schmeck, Guidance in evolutionary multi-objective optimization, Adv. Eng. Softw. 32 (2001) 499–507.
[9]
W. Zhang, Y. Liu, Multi-objective reactive power and voltage control based on fuzzy optimization strategy and fuzzy adaptive particle swarm, Int. J. Electr. Power Energy Syst. 30 (2008) 525–532.
[10]
R.E. Bellman, L.A. Zadeh, Decision-making in a fuzzy environment, Manage. Sci. 17 (1970) B-141.
[11]
K. Tomsovic, M.Y. Chow, Tutorial on Fuzzy Logic Application in Power Systems, IEEE Press, 1999.
[12]
C.A.C. Coello, A comprehensive survey of evolutionary-based multiobjective optimization techniques, Knowl. Inf. Syst. 1 (1999) 129–156.
[13]
R.T. Marler, J.S. Arora, Survey of multi-objective optimization methods for engineering, Struct. Multidiscip. Optim. 26 (2004) 369–395.
[14]
C.A.C. Coello, G.B. Lamont, D.A. Van Veldhuizen, Evolutionary algorithms for solving multi-objective problems, Springer, 2007.
[15]
W. Stadler, A survey of multicriteria optimization or the vector maximum problem, part I: 1776–1960, J. Optim. Theory Appl. 29 (1979) 1–52.
[16]
K. Deb, Multi-objective optimization using evolutionary algorithms, John Wiley & Sons, 2001.
[17]
P. Ngatchou, A. Zarei, A. El-Sharkawi, Pareto multi objective optimization, in: Intell. Syst. Appl. to Power Syst. 2005. Proc. 13th Int. Conf., IEEE, 2005: pp. 84–91.
[18]
K. Deb, J. Sundar, Reference point based multi-objective optimization using evolutionary algorithms, in: Proc. 8th Annu. Conf. Genet. Evol. Comput., ACM, 2006: pp. 635–642.
[19]
N. Srinivas, K. Deb, Muiltiobjective optimization using nondominated sorting in genetic algorithms, Evol. Comput. 2 (1994) 221–248.
[20]
C.A.C. Coello, Evolutionary multi-objective optimization: a historical view of the field, IEEE Comput. Intell. Mag. 1 (2006) 28–36.
[21]
J. Branke, K. Deb, Integrating user preferences into evolutionary multi-objective optimization, in: Knowl. Inc. Evol. Comput., Springer, 2005: pp. 461–477.
[22]
J.-Q. Li, Q.-K. Pan, K.-Z. Gao, Pareto-based discrete artificial bee colony algorithm for multi-objective flexible job shop scheduling problems, Int. J. Adv. Manuf. Technol. 55 (2011) 1159–1169.
[23]
L. Cai, S. Qu, G. Cheng, Two-archive Method for Aggregation-based Many-objective Optimization, Inf. Sci. (Ny). (2017).
[24]
D.H. Wolpert, W.G. Macready, No free lunch theorems for optimization, IEEE Trans. Evol. Comput. 1 (1997) 67–82.
[25]
D.E. Goldberg, Genetic algorithms in search, optimization, and machine learning, Addison-Wesley, Reading, 1989.
[26]
C.M. Fonseca, P.J. Fleming, Genetic algorithms for multiobjective optimization: formulation, discussion and generalization, in: ICGA, 1993: pp. 416–423.
[27]
J. Horn, N. Nafpliotis, D.E. Goldberg, A niched Pareto genetic algorithm for multiobjective optimization, in: Evol. Comput. 1994. IEEE World Congr. Comput. Intell. Proc. First IEEE Conf., IEEE, 1994: pp. 82–87.
[28]
E. Zitzler, L. Thiele, Multiobjective optimization using evolutionary algorithms—a comparative case study, in: Int. Conf. Parallel Probl. Solving from Nat., Springer, 1998: pp. 292–301.
[29]
M.R. Sierra, C.A.C. Coello, Improving PSO-based multi-objective optimization using crowding, mutation and e-dominance, in: Evol. Multi-Criterion Optim., Springer, 2005: pp. 505–519.
[30]
A.J. Nebro, J.J. Durillo, J. Garcia-Nieto, C.A.C. Coello, F. Luna, E. Alba, SMPSO: A new PSO-based metaheuristic for multi-objective optimization, in: Comput. Intell. Multi-Criteria Decis. 2009. MCDM'09. IEEE Symp., IEEE, 2009: pp. 66–73.
[31]
S. Mostaghim, J. Teich, Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO), in: Swarm Intell. Symp. 2003. SIS’03. Proc. 2003 IEEE, IEEE, 2003: pp. 26–33.
[32]
K.E. Parsopoulos, M.N. Vrahatis, Particle swarm optimization method in multiobjective problems, in: Proc. 2002 ACM Symp. Appl. Comput., ACM, 2002: pp. 603–607.
[33]
K. Price, R.M. Storn, J.A. Lampinen, Differential evolution: a practical approach to global optimization, Springer Science & Business Media, 2006.
[34]
B. V Babu, M.M.L. Jehan, Differential evolution for multi-objective optimization, in: Evol. Comput. 2003. CEC’03. 2003 Congr., IEEE, 2003: pp. 2696–2703.
[35]
H.A. Abbass, R. Sarker, C. Newton, PDE: a Pareto-frontier differential evolution approach for multiobjective optimization problems, in: Evol. Comput. 2001. Proc. 2001 Congr., IEEE, 2001: pp. 971–978.
[36]
N.K. Madavan, Multiobjective optimization using a Pareto differential evolution approach, in: Evol. Comput. 2002. CEC’02. Proc. 2002 Congr., IEEE, 2002: pp. 1145–1150.
[37]
A.W. Iorio, X. Li, Solving rotated multi-objective optimization problems using differential evolution, in: Australas. Jt. Conf. Artif. Intell., Springer, 2004: pp. 861–872.
[38]
F. Zou, L. Wang, X. Hei, D. Chen, B. Wang, Multi-objective optimization using teaching-learning-based optimization algorithm, Eng. Appl. Artif. Intell. 26 (2013) 1291–1300.
[39]
A. Rahimi-Vahed, A.H. Mirzaei, A hybrid multi-objective shuffled frog-leaping algorithm for a mixed-model assembly line sequencing problem, Comput. Ind. Eng. 53 (2007) 642–666.
[40]
T. Niknam, M. rasoul Narimani, M. Jabbari, A.R. Malekpour, A modified shuffle frog leaping algorithm for multi-objective optimal power flow, Energy. 36 (2011) 6420–6432.
[41]
A. Rahimi-Vahed, M. Dangchi, H. Rafiei, E. Salimi, A novel hybrid multi-objective shuffled frog-leaping algorithm for a bi-criteria permutation flow shop scheduling problem, Int. J. Adv. Manuf. Technol. 41 (2009) 1227–1239.
[42]
T. Niknam, E.A. Farsani, A hybrid self-adaptive particle swarm optimization and modified shuffled frog leaping algorithm for distribution feeder reconfiguration, Eng. Appl. Artif. Intell. 23 (2010) 1340–1349.
[43]
J. Li, Q. Pan, S. Xie, An effective shuffled frog-leaping algorithm for multi-objective flexible job shop
scheduling problems, Appl. Math. Comput. 218 (2012) 9353–9371. [44]
S.N. Omkar, J. Senthilnath, R. Khandelwal, G.N. Naik, S. Gopalakrishnan, Artificial Bee Colony (ABC) for multi-objective design optimization of composite structures, Appl. Soft Comput. 11 (2011) 489–499.
[45]
D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: artificial bee colony (ABC) algorithm and applications, Artif. Intell. Rev. 42 (2014) 21–57.
[46]
R. Akbari, R. Hedayatzadeh, K. Ziarati, B. Hassanizadeh, A multi-objective artificial bee colony algorithm, Swarm Evol. Comput. 2 (2012) 39–52.
[47]
Q.-K. Pan, M.F. Tasgetiren, P.N. Suganthan, T.J. Chua, A discrete artificial bee colony algorithm for the lot-streaming flow shop scheduling problem, Inf. Sci. (Ny). 181 (2011) 2455–2468.
[48]
V.B. Semwal, M. Raj, G.C. Nandi, Biometric gait identification based on a multilayer perceptron, Rob. Auton. Syst. 65 (2015) 65–75.
[49]
V.B. Semwal, S.A. Katiyar, R. Chakraborty, G.C. Nandi, Biologically-inspired push recovery capable bipedal locomotion modeling through hybrid automata, Rob. Auton. Syst. 70 (2015) 181–190.
[50]
V.B. Semwal, J. Singha, P.K. Sharma, A. Chauhan, B. Behera, An optimized feature selection technique based on incremental feature analysis for bio-metric gait data classification, Multimed. Tools Appl. (2016) 1–19.
[51]
V.B. Semwal, G.C. Nandi, Generation of Joint Trajectories Using Hybrid Automate-Based Model: A Rocking Block-Based Approach, IEEE Sens. J. 16 (2016) 5805–5816.
[52]
V.B. Semwal, K. Mondal, G.C. Nandi, Robust and accurate feature selection for humanoid push recovery and classification: deep learning approach, Neural Comput. Appl. 28 (2017) 565–574.
[53]
K. Moosavi, B. Vahidi, H. Askarian Abyaneh, A. Foroughi Nematollahi, Intelligent control of power sharing between parallel-connected boost converters in micro-grids, J. Renew. Sustain. Energy. 9 (2017) 65504.
[54]
Z. Bingul, Adaptive genetic algorithms applied to dynamic multiobjective problems, Appl. Soft Comput. 7 (2007) 791–799.
[55]
A.F. Nematollahi, A. Rahiminejad, B. Vahidi, A novel physical based meta-heuristic optimization method known as Lightning Attachment Procedure Optimization, Appl. Soft Comput. (2017).
[56]
A. Rahiminejad, B. Vahidi, LPM-Based Shielding performance Analysis of High-Voltage Substations against Direct Lightning Strokes, IEEE Trans. Power Deliv. (n.d.).
[57]
D.A. Van Veldhuizen, G.B. Lamont, Multiobjective evolutionary algorithm research: A history and analysis, Technical Report TR-98-03, Department of Electrical and Computer Engineering, Graduate School of Engineering, Air Force Institute of Technology, Wright-Patterson AFB, Ohio, 1998.
[58]
C.T. Kelley, Detection and Remediation of Stagnation in the Nelder–Mead Algorithm Using a Sufficient Decrease Condition, SIAM J. Optim. 10 (1999) 43–55.
[59]
E. Zitzler, K. Deb, L. Thiele, Comparison of multiobjective evolutionary algorithms: Empirical results, Evol. Comput. 8 (2000) 173–195.
[60]
S.M.K. Herris, NSGA-II Matlab code, (n.d.).
[61]
S.M.K. Herris, MOPSO Matlab code, (n.d.).
[62]
S. Mirjalili, MOALO Matlab Code, (2018).
[63]
S. Mirjalili, MOGWO Matlab Code, (2018).
[64]
S. Mirjalili, A. Lewis, Novel frameworks for creating robust multi-objective benchmark problems, Inf. Sci. (Ny). 300 (2015) 158–192.
[65]
A. Sadollah, H. Eskandar, J.H. Kim, Water cycle algorithm for solving constrained multi-objective optimization problems, Appl. Soft Comput. 27 (2015) 279–298.
[66]
S. Mirjalili, Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems, Neural Comput. Appl. 27 (2016) 1053–1073.
[67]
C.A.C. Coello, G.T. Pulido, Multiobjective structural optimization using a microgenetic algorithm, Struct. Multidiscip. Optim. 30 (2005) 388–403.
[68]
A. Kurpati, S. Azarm, J. Wu, Constraint handling improvements for multiobjective genetic algorithms, Struct. Multidiscip. Optim. 23 (2002) 204–213.
[69]
T. Ray, K.M. Liew, A swarm metaphor for multiobjective design optimization, Eng. Optim. 34 (2002) 141–153.
[70]
A. Rahiminejad, A. Alimardani, B. Vahidi, S.H. Hosseinian, Shuffled frog leaping algorithm optimization for AC–DC optimal power flow dispatch, Turkish J. Electr. Eng. Comput. Sci. 22 (2014) 874–892.
[71]
X. Yuan, B. Zhang, P. Wang, J. Liang, Y. Yuan, Y. Huang, X. Lei, Multi-objective optimal power flow based on improved strength Pareto evolutionary algorithm, Energy. 122 (2017) 70–82.
[72]
M.H. Amini, R. Jaddivada, S. Mishra, O. Karabasoglu, Distributed security constrained economic dispatch, in: Innov. Smart Grid Technol. (ISGT ASIA), 2015 IEEE, IEEE, 2015: pp. 1–6.
[73]
S. Bahrami, V.W.S. Wong, J. Huang, An online learning algorithm for demand response in smart grid, IEEE Trans. Smart Grid. (2017).
[74]
A. Mohammadi, M. Mehrtash, A. Kargarian, Diagonal quadratic approximation for decentralized collaborative TSO+DSO optimal power flow, IEEE Trans. Smart Grid. (2018).
[75]
M. Hamzeh, B. Vahidi, A.F. Nematollahi, Optimizing Configuration of Cyber Network Considering Graph Theory Structure and Teaching-Learning-Based Optimization (GT-TLBO), IEEE Trans. Ind. Informatics. (2018).
[76]
A. Foroughi Nematollahi, A. Rahiminejad, B. Vahidi, H. Askarian, A. Safaei, A new evolutionary-analytical two-step optimization method for optimal wind turbine allocation considering maximum capacity, J. Renew. Sustain. Energy. 10 (2018) 43312.
[77]
A. Forooghi Nematollahi, A. Dadkhah, O. Asgari Gashteroodkhani, B. Vahidi, Optimal sizing and siting of DGs for loss reduction using an iterative-analytical method, J. Renew. Sustain. Energy. 8 (2016) 55301.
Highlights
1- In this paper, a novel multi-objective optimization algorithm known as MOLAPO is proposed.
2- The Archive matrix is used to guarantee finding the best answers during the optimization.
3- According to the results, the proposed method is highly recommended for complicated multi-objective problems.