Journal Pre-proof
Orthogonally-designed Adapted Grasshopper Optimization: A Comprehensive Analysis
Zhangze Xu (Writing – Original Draft; Writing – Review & Editing; Software; Visualization; Investigation), Zhongyi Hu (Conceptualization; Methodology; Formal Analysis; Investigation; Writing – Review & Editing; Funding Acquisition), Ali Asghar Heidari (Writing – Original Draft; Writing – Review & Editing; Software; Visualization; Investigation), Mingjing Wang (Writing – Review & Editing; Software; Visualization), Xuehua Zhao (Writing – Review & Editing; Software; Visualization), Huiling Chen (Writing – Review & Editing; Software; Visualization), Xueding Cai (Conceptualization; Methodology; Formal Analysis; Investigation; Writing – Review & Editing; Funding Acquisition)

PII: S0957-4174(20)30107-X
DOI: https://doi.org/10.1016/j.eswa.2020.113282
Reference: ESWA 113282
To appear in: Expert Systems With Applications

Received date: 29 August 2019
Revised date: 21 January 2020
Accepted date: 5 February 2020
Please cite this article as: Z. Xu, Z. Hu, A.A. Heidari, M. Wang, X. Zhao, H. Chen, X. Cai, Orthogonally-designed Adapted Grasshopper Optimization: A Comprehensive Analysis, Expert Systems With Applications (2020), doi: https://doi.org/10.1016/j.eswa.2020.113282
This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. © 2020 Published by Elsevier Ltd.
Highlights

- This paper proposes an improved variant of the grasshopper optimization algorithm.
- Orthogonal learning and chaos-based exploitative search are introduced.
- Extensive comparisons using various datasets and benchmark problems are performed.
- A new feature selection model is established using the proposed method.
Orthogonally-designed Adapted Grasshopper Optimization: A Comprehensive Analysis

Zhangze Xu a, Zhongyi Hu a*#, Ali Asghar Heidari b,c, Mingjing Wang d, Xuehua Zhao e, Huiling Chen a, Xueding Cai f*#

a
College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou 325035, China
[email protected],
[email protected],
[email protected] b School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran, Iran
[email protected],
[email protected] c Department of Computer Science, School of Computing, National University of Singapore, Singapore, Singapore
[email protected],
[email protected] d Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
[email protected] e School of Digital Media, Shenzhen Institute of Information Technology, Shenzhen 518172, China
[email protected] f Division of Pulmonary Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou 325000, Zhejiang, P. R. China
[email protected]
# These authors contributed equally to this work
*Corresponding authors: Zhongyi Hu ([email protected]) and Xueding Cai ([email protected]).
Abstract

The grasshopper optimization algorithm (GOA) is a recently proposed meta-heuristic algorithm that simulates the biological habits of grasshoppers seeking food sources. Nonetheless, some shortcomings exist in the basic version of GOA: it may easily fall into local optima and show slow convergence when facing complex basins. In this work, an improved GOA is proposed to alleviate these core shortcomings and handle continuous optimization problems more efficiently. For this purpose, two strategies, orthogonal learning and chaotic exploitation, are introduced into the conventional GOA to find a more stable trade-off between the exploration and exploitation cores. Adding orthogonal learning to GOA enhances the diversity of agents, whereas the chaotic exploitation strategy updates the position of grasshoppers within a limited local region. To confirm the efficacy of the proposed method, we compared it with a variety of famous classical meta-heuristic algorithms on 30 IEEE CEC2017 benchmark functions. It is also applied to feature selection cases, and three structural design problems are employed to validate its efficacy in terms of different metrics. The experimental results illustrate that the above tactics mitigate the deficiencies of GOA, and the improved variant can reach high-quality solutions for different problems.

Keywords: Grasshopper optimization; Meta-heuristics; Orthogonal learning; Chaotic exploitation
1 Introduction

In recent years, several meta-heuristic algorithms (MAs) have been developed and adapted to deal with various problems (Hao Chen, Heidari, Zhao, Zhang, & Chen, 2019; Huiling Chen, Wang, & Zhao, 2020; Huiling Chen, Zhang, Luo, Xu, & Zhang, 2019; Wu Deng, Xu, Song, & Zhao, 2019; W. Deng, Xu, & Zhao, 2019; Wu Deng, Zhao, Yang, et al., 2017; Wu Deng, Zhao, Zou, et al., 2017; Luo, et al., 2019; M. Wang & Chen, 2019; Xu, Chen, Heidari, et al., 2019; Xu, Chen, Luo, et al., 2019; Yu, Zhao, Wang, Chen, & Li, 2020) owing to their simplicity, effectiveness, and excellent global search ability. MAs have been shown to be more effective than traditional gradient-based algorithms (Zhang, Wang, Zhou, & Ma, 2019). Many novel and traditional methods exist, each of which is better suited to specific kinds of problems. The Harris hawks optimizer (HHO) (H. Chen, Jiao, Wang, Heidari, & Zhao, 2019; Heidari, Mirjalili, et al., 2019) is a new swarm intelligence algorithm in this field. There are also various famous MAs in the literature, for instance: particle swarm optimization (PSO) (Kennedy & Eberhart, 1995), differential evolution (DE) (Storn & Price, 1997), bacterial foraging optimization (BFO) (Passino, 2002), artificial bee colony optimization (ABC) (Karaboga & Basturk, 2007), the firefly algorithm (FA) (Yang, 2009), the bat algorithm (BA) (Yang, 2010), the fruit fly optimization algorithm (FOA) (Pan, 2012), the flower pollination algorithm (FPA) (Yang, 2012), grey wolf optimization (GWO) (S. Mirjalili, Mirjalili, & Lewis, 2014), the moth-flame optimization algorithm (MFO) (S. Mirjalili, 2015b), ant lion optimization (ALO) (S. Mirjalili, 2015a), the sine cosine algorithm (SCA) (S. Mirjalili, 2016b), the whale optimization algorithm (WOA) (S. Mirjalili & Lewis, 2016), the multi-verse optimization algorithm (MVO) (S. Mirjalili, Mirjalili, & Hatamlou, 2016), the dragonfly algorithm (DA) (S. Mirjalili, 2016a), the salp swarm algorithm (SSA) (S. Mirjalili, et al., 2017), the moth search algorithm (MSA) (G.
G. Wang, 2018), and the grasshopper optimization algorithm (GOA) (Saremi, Mirjalili, & Lewis, 2017). Among all these algorithms, GOA has been widely studied and utilized in various fields in recent years owing to its simple implementation and relatively impressive performance on complex problems. Aljarah et al. (Aljarah, et al., 2018) optimized the parameters of support vector machines using GOA. Arora et al. (Arora & Anand, 2018) improved the original GOA with a chaotic map to keep exploration and exploitation in a proper balance. Ewees et al. (Ewees, Abd Elaziz, &
Houssein, 2018) improved GOA through an opposition-based learning strategy and compared it on four engineering cases. Luo et al. (Luo, et al., 2018) equipped GOA with three approaches, namely Lévy flight, opposition-based learning, and Gaussian mutation, and successfully demonstrated its predictive ability on financial stress problems. Mirjalili et al. (S. Z. Mirjalili, Mirjalili, Saremi, Faris, & Aljarah, 2018) put forward a multi-objective GOA based on the original GOA and evaluated it on a group of different standard multi-objective test problems; the performance of the method and the obtained results reveal its substantial superiority and competitiveness. Saxena et al. (Saxena, Shekhawat, & Kumar, 2018) developed a modified GOA based on ten kinds of chaotic maps. Tharwat et al. (Tharwat, Houssein, Ahmed, Hassanien, & Gabel, 2018) designed an enhanced multi-objective GOA for similar problems and observed that its results were better than those of other algorithms. Barik et al. (Barik & Das, 2018) proposed a GOA-based method for coordinating the generation and load demand of a microgrid to cope with the unpredictability of renewable energy and its dependence on nature. Crawford et al. (Crawford, Soto, Peña, & Astorga, 2019) verified strong results in solving combinatorial problems (such as the set covering problem, SCP) with an improved GOA in which the percentile concept was combined with a general binarization mechanism for continuous metaheuristics. El-Fergany (El-Fergany, 2018) showed that it is feasible and effective to optimize the parameters of a fuel-cell stack based on the search phases of GOA. Hazra et al. (Hazra, Pal, & Roy, 2019) presented a comprehensive method demonstrating the superiority of GOA over other algorithms in dealing with wind power availability for the economic operation of a hybrid power system. Jumani et al. (Jumani, et al., 2019) optimized a grid-connected microgrid (MG) controller using GOA.
Based on the performance of the existing controller under MG injection and sudden load change, the superiority of GOA was demonstrated. Mafarja et al. (Mafarja, et al., 2018) adopted GOA as the search strategy in a wrapper-based feature selection method; the experimental results on 22 UCI datasets demonstrated the advantage of the proposed GOA methods over others. Taher et al. (Taher, Kamel, Jurado, & Ebeed, 2019) proposed a modified GOA (MGOA), obtained by modifying the mutation process of the traditional GOA, to optimize the power flow problem. Wu et al. (J. Wu, et al., 2017) came up with an adaptive GOA (AGOA) for finding better cooperative target tracking trajectories; AGOA adds several optimization strategies to the primal GOA, such as a dynamic feedback mechanism, a survival-of-the-fittest mechanism, and a democratic selection strategy. Tumuluru et al. (Tumuluru & Ravi, 2017) proposed GOA-based deep belief neural networks for cancer classification with improved accuracy, using logarithmic transformation and the Bhattacharyya distance. Although the above GOA variants improve the search capability or convergence speed, they still struggle to avoid local optima when faced with complex, high-dimensional optimization tasks. The following conclusions have been drawn from the literature. First, the limited search capability makes the basic GOA prone to falling into local optima and results in slow convergence. Second, a single mutation strategy can hardly achieve the right balance between the exploration and exploitation abilities. To alleviate this situation and enhance the efficacy of the method, a revised variant of GOA named orthogonal
learning and chaotic exploitation-based GOA (OLCGOA) is developed in this work. In OLCGOA, two useful strategies, orthogonal learning (OL) and chaotic exploitation (CLS), are combined into GOA. Orthogonal learning is used to enhance the ability to search the solution space, while the CLS mechanism gives the current best agent more opportunities to execute deeper exploitation in its adjacent area. In other words, the developed OLCGOA enriches the individual diversity of GOA by inserting patterns induced by orthogonal experiments into its exploratory movement, and enhances the local search ability by applying CLS to the position-updating procedure of GOA. The performance of OLCGOA was assessed on 30 classical reference functions from CEC2017 (G. Wu, Mallipeddi, & Suganthan, 2016) against several classical MAs and some advanced optimization methods. The results illustrate that the enhanced OLCGOA is superior to the basic GOA and the other MAs. Besides, OLCGOA has also been successfully validated on some well-known engineering problems and feature selection problems. The experimental results demonstrate that the modified OLCGOA is better than the other methods at generating competitive optimal solutions when dealing with constrained problems. The rest of the paper is organized as follows: a brief description of GOA, OL, and CLS is provided in Part 2. Part 3 illustrates the improved GOA in detail. Part 4 describes the experimental research and simulation results. Finally, Part 5 gives a summary and outlook.
2 Background

2.1 Grasshopper optimization algorithm (GOA)

Saremi et al. (Saremi, et al., 2017) proposed a new heuristic algorithm named GOA, which mimics the aggregation and foraging behavior of grasshoppers in nature. Grasshopper populations establish relationships with each other, and the repulsion and attraction between individuals give each grasshopper an optimal position to move to in search of the food source. Inspired by this behavior, the position of the i-th grasshopper can be mathematically defined by:

X_i = S_i + G_i + A_i \quad (1)

According to Eq. (1), the movement of a grasshopper towards food is driven by three main components: the interaction with other grasshoppers S_i, the gravity factor G_i, and the wind advection A_i, where X_i represents the position of the i-th grasshopper. The most vital element is the interaction between grasshoppers:

S_i = \sum_{j=1,\, j \neq i}^{N} s(d_{ij})\, \hat{d}_{ij} \quad (2)

d_{ij} = |x_j - x_i| \quad (3)

\hat{d}_{ij} = (x_j - x_i) / d_{ij} \quad (4)

s(r) = f e^{-r/l} - e^{-r} \quad (5)
where d_{ij} describes the spatial distance between grasshoppers i and j, and \hat{d}_{ij} is the unit vector pointing from grasshopper i to grasshopper j. The interaction function s generates attraction when its value is positive and repulsion when it is negative; the parameter f sets the intensity of attraction and l the attractive length scale. It should be noted that s cannot exert a strong force when the distance between grasshoppers is large, so to work well the distances are mapped into the interval [1,4]. The gravity factor and wind advection of grasshoppers are depicted as:

G_i = -g \hat{e}_g \quad (6)

A_i = u \hat{e}_w \quad (7)

where g is the gravitational constant, \hat{e}_g is the unit vector towards the center of the earth (used to obtain the gravity factor G_i), u is the wind drift coefficient, and \hat{e}_w is the unit vector of the wind direction. Combining Eqs. (1)-(7), the position of a grasshopper can be updated as follows:

X_i = \sum_{j=1,\, j \neq i}^{N} s(|x_j - x_i|) \frac{x_j - x_i}{d_{ij}} - g \hat{e}_g + u \hat{e}_w \quad (8)
Finally, the mathematical model is established as follows:

X_i^d = \beta \left( \sum_{j=1,\, j \neq i}^{N} \beta \frac{ub_d - lb_d}{2} s(|x_j^d - x_i^d|) \frac{x_j - x_i}{d_{ij}} \right) + \hat{T}_d \quad (9)
In Eq. (9), ub_d and lb_d are the upper and lower bounds of the search space in the d-th dimension, respectively. Furthermore, N is the total number of grasshoppers, and \hat{T}_d represents the best position found so far in the d-th dimension. The parameter \beta is a constriction factor: as the iterations increase, the global search component shrinks and the local precision search is emphasized. It is updated as follows:

\beta = \beta_{max} - p \frac{\beta_{max} - \beta_{min}}{P} \quad (10)
where \beta_{max} = max(\beta), \beta_{min} = min(\beta), p is the current iteration number, and P is the maximum number of iterations. The detailed process of GOA is summarized in Algorithm 1.
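As an illustration of Eqs. (5), (9), and (10), one position update of the basic GOA can be sketched in Python as follows. This is a minimal NumPy sketch; the function names and the default values of f and l are illustrative assumptions, not prescribed by this paper.

```python
import numpy as np

def s_func(r, f=0.5, l=1.5):
    """Social interaction force s(r) = f*exp(-r/l) - exp(-r), Eq. (5).
    The defaults for f and l are assumed values for illustration."""
    return f * np.exp(-r / l) - np.exp(-r)

def goa_update(X, target, lb, ub, p, P, beta_max=1.0, beta_min=4e-5):
    """One GOA position update following Eqs. (9)-(10).

    X      : (n, d) array of grasshopper positions
    target : (d,) best position found so far (T-hat)
    lb, ub : (d,) lower and upper bounds
    p, P   : current and maximum iteration number
    """
    n, d = X.shape
    beta = beta_max - p * (beta_max - beta_min) / P        # Eq. (10)
    new_X = np.empty_like(X)
    for i in range(n):
        social = np.zeros(d)
        for j in range(n):
            if j == i:
                continue
            dist = np.linalg.norm(X[j] - X[i])
            unit = (X[j] - X[i]) / (dist + 1e-14)          # d-hat_ij, Eq. (4)
            # per-dimension force scaled into the search range, Eq. (9)
            social += beta * (ub - lb) / 2.0 * s_func(np.abs(X[j] - X[i])) * unit
        new_X[i] = np.clip(beta * social + target, lb, ub)
    return new_X
```

The outer factor beta shrinks over iterations, so the swarm gradually moves from wide-range exploration towards fine-grained search around the best position found so far.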
2.2 Chaotic Exploitation

Chaotic states of a nonlinear system are irregular: they unite disorder with internal order, and certainty with randomness, in the system behavior. In mathematics, chaotic systems can be regarded as a source of randomness (Alatas, 2010). Within a specified range, chaotic motion traverses all states of the space according to its own law. Compared with probabilistic random traversal search, chaotic exploitation reduces blindness and randomness and can search with higher efficiency (Coelho & Mariani, 2008). It is also very convenient to generate and store chaotic sequences. Therefore, a lot of
different sequences can be acquired easily by modifying the initial conditions. Besides, these sequences are deterministic and repeatable (Jia, Zheng, & Khurram Khan, 2011). The chaotic sequence is generated according to the logistic map:

C_{i+1} = \gamma \times C_i \times (1 - C_i), \quad i = 1, \dots, n-1 \quad (11)

We set \gamma = 4 and choose C_1 \in (0,1) with C_1 \notin \{0.25, 0.5, 0.75\}; when \gamma = 4, the logistic map is fully chaotic. Here n equals the number of grasshoppers. Chaotic exploitation is affordable within a small range; if the search space is too large, its time cost becomes unbearable, which is why chaotic exploitation (Zhan, Zhang, Li, & Shi, 2011) is integrated into other heuristic algorithms. The integration of the CLS mechanism and GOA can not only strengthen the search capability but also better locate the global optimum. The candidate solutions around the target position generated by CLS are:

CS = (1 - s) \times T + s \times C_i', \quad i = 1, \dots, n \quad (12)

where the constriction factor s is:

s = (G - g + 1)/G \quad (13)

C_i' = lb + C_i \times (ub - lb) \quad (14)

Here g denotes the current iteration and G the maximum number of iterations. The chaotic variable C_i in Eq. (14) comes from Eq. (11), and the chaotic vector C_i' is mapped into the range [lb, ub], where lb and ub are the lower and upper bounds of the grasshopper positions. The candidate solution CS is obtained by a linear combination of the chaotic vector C_i' and the target position T.
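A minimal sketch of the CLS candidate generation of Eqs. (11)-(14) might look as follows; the helper name, the number of candidates, and the chaotic initialization are illustrative assumptions.

```python
import numpy as np

def cls_candidates(T, lb, ub, g, G, n_cand=10, seed=0):
    """Generate CLS candidate solutions around the best position T.

    Implements Eqs. (12)-(14): a logistic-map chaotic vector is mapped
    into [lb, ub] and blended with T using the shrinking factor s.
    """
    rng = np.random.default_rng(seed)
    d = len(T)
    s = (G - g + 1) / G                          # Eq. (13): shrinks as g grows
    # initial chaotic state in (0, 1), avoiding the fixed points of the map
    c = rng.uniform(0.01, 0.99, d)
    cands = np.empty((n_cand, d))
    for k in range(n_cand):
        c = 4.0 * c * (1.0 - c)                  # logistic map, Eq. (11)
        c_prime = lb + c * (ub - lb)             # map chaos into [lb, ub], Eq. (14)
        cands[k] = (1.0 - s) * T + s * c_prime   # Eq. (12)
    return cands
```

Early in the run (s near 1) the candidates are almost uniform chaotic samples of the whole box; late in the run (s near 1/G) they hug the current best position T, giving the fine local exploitation described above.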
2.3 Orthogonal learning (OL)

The OL strategy follows orthogonal experimental design (OED) (Alcalá-Fdez, et al., 2009) to obtain the best combination of factors. It achieves near-optimal experimental results with as few tests as possible, based on the chosen factors and levels. Suppose an experiment has Q levels and F factors. To find the best among all combinations of the Q levels and F factors, we would need to test all of them, for a total of Q^F combinations. However, an orthogonal array (OA) can determine a near-optimal combination through a smaller, representative experimental design. An OA of F factors with Q levels per factor is denoted L_M(Q^F), where M is the minimum number of test combinations. L_9(3^4) is shown below:
1 1 1 1
1 2 2 3
1 3 3 2
2 1 2 2
2 2 3 1
2 3 1 3
3 1 3 3
3 2 1 2
3 3 2 1
The orthogonal array has two main characteristics. First, in each column, the different levels appear an equal number of times. For instance, in this matrix, the levels 1, 2, and 3 each occur three times in every column. This property means that every level of each factor is combined with the levels of the other factors with equal probability, so interference from other factors is eliminated to the maximum extent at each level and the tests can be compared effectively. Second, in any two columns, each ordered pair of levels appears an equal number of times. This property ensures that the test points are evenly dispersed over the complete set of factor-level combinations and are therefore highly representative. As shown in Fig. 1, there are 9 ordered pairs (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2), (3,3), and each appears exactly once in any two columns. A full combination test of four factors at three levels requires 3^4 = 81 experiments, whereas L_9(3^4) requires only nine combinations to provide the essential information. Hence, the orthogonal array reduces experimental resource consumption. We summarize the overall flow of orthogonal learning in Algorithm 2.
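The use of L9(3^4) described above can be sketched as follows: run only the nine trials of the array, then use factor analysis (the mean cost of each level of each factor) to predict a good combination. This is a hedged sketch; the function names and the minimization convention are assumptions.

```python
import numpy as np

# L9(3^4) orthogonal array: 9 trials, 4 factors, 3 levels (1-based levels).
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 3], [1, 3, 3, 2],
    [2, 1, 2, 2], [2, 2, 3, 1], [2, 3, 1, 3],
    [3, 1, 3, 3], [3, 2, 1, 2], [3, 3, 2, 1],
])

def oed_best_combination(levels, cost):
    """Pick a promising level combination with only 9 trials instead of 81.

    levels : (4, 3) array, candidate values for each of the 4 factors
    cost   : callable mapping a length-4 vector of factor values to a scalar
    Returns the better of (a) the best tested row and (b) the combination
    predicted by factor analysis (best mean level per factor).
    """
    trials = np.array([[levels[f, L9[m, f] - 1] for f in range(4)]
                       for m in range(len(L9))])
    costs = np.array([cost(t) for t in trials])
    # Factor analysis: for each factor, average the cost over the rows
    # that used each level, then keep the level with the lowest mean.
    predicted = np.empty(4)
    for f in range(4):
        means = [costs[L9[:, f] == q + 1].mean() for q in range(3)]
        predicted[f] = levels[f, int(np.argmin(means))]
    best_row = trials[np.argmin(costs)]
    return min((best_row, predicted), key=lambda v: cost(v))
```

Note that the predicted combination may be one of the 72 combinations the array never tested, which is exactly the leverage OED provides.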
An archive (Arch) of fixed size is used for storing promising solutions. If a better solution is obtained during the orthogonal process, it is kept in Arch. Furthermore, when a new superior individual is ready to be transferred into Arch but the archive is already full, a previously stored individual is randomly selected and removed. This strategy provides an effective search direction for discovering more valuable individual information in the population, making the exploratory search more efficient.
3 Proposed OLCGOA method

In this part, building on the background of the previous section, we describe OLCGOA, which is based on the OL and CLS strategies, in detail. In OLCGOA, the grasshopper search process employs these two strategies to maintain a more stable equilibrium between the exploitation and exploration cores. Increasing the diversity of the population reduces the chance of falling into a local
optimum and improves the exploratory behavior of the evolutionary algorithm. In the proposed method, the original GOA updates its search agents based on the grasshoppers' current positions and behavior pattern. The orthogonal learning strategy, in contrast, starts by creating an appropriate orthogonal array. Then, factor analysis is utilized to obtain the best combination of factors. Finally, the best of the candidate solutions from the orthogonal table and the combined solution of optimal factors constructed by factor analysis is selected. This strategy chooses a better solution to search and exploit, which improves the search efficiency of the original GOA and the quality of its solutions. Orthogonal learning is expressed as follows:

SX_i^t = OED(SX_i^t, gbest, SX_k^t) \quad (15)

where OED denotes the orthogonal experimental design procedure, including constructing an orthogonal array and selecting the best solution from as few experimental combinations as possible. The current best individual gbest provides high-quality guidance information, while the currently selected grasshopper SX_i and a random individual SX_k (distinct from SX_i) are used to gather more search information and enrich the diversity of the population. Under certain circumstances, GOA falls into a local optimum too early, or its convergence is too slow. To further improve the overall performance of the original GOA, we integrate the chaotic exploitation strategy into the algorithm to realize a more efficient search and exploitation within the movement process of the grasshoppers. To combine the original GOA and CLS efficiently, the most suitable search agent should be selected to undergo the CLS process. Therefore, only the fittest grasshopper in the whole population is chosen to seek a better solution.
Accordingly, based on the current optimal location of the grasshoppers, the CLS strategy is used to form the new location of the best grasshopper. The candidate solution for the fittest grasshopper's position is:

CS = (1 - s) \times Fbest + s \times C_i' \quad (16)

where Fbest is the fittest grasshopper of the entire swarm in GOA. Notice that the candidate solution is accepted only when it places the best agent in a stronger position than before, which automatically reduces the stagnation problem. Based on this approach, OLCGOA improves the ability to find the global optimum, and the achieved balance between exploratory and exploitative tendencies helps OLCGOA move towards higher-quality solutions. The OLCGOA consists of seven steps; in conclusion, its flowchart is exhibited in Fig. 1.
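Putting the pieces together, the overall OLCGOA loop can be sketched at a high level as follows. This is not the authors' exact implementation: the GOA move is deliberately simplified to a contraction around the best agent, and all names and parameter values are illustrative assumptions; only the ordering (GOA move, then OL per Eq. (15), then CLS per Eq. (16) on the fittest agent) follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# L9(3^4) orthogonal array (9 trials, 4 factors, 3 levels), as in Section 2.3.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 3], [1, 3, 3, 2],
    [2, 1, 2, 2], [2, 2, 3, 1], [2, 3, 1, 3],
    [3, 1, 3, 3], [3, 2, 1, 2], [3, 3, 2, 1],
])

def ol_step(xi, gbest, xk, cost):
    """Eq. (15): build trial vectors whose dimension groups (4 factors) are
    copied from one of the three source vectors (3 levels) per L9 row,
    keeping the best trial found."""
    src = np.stack([xi, gbest, xk])
    groups = np.array_split(np.arange(len(xi)), 4)
    best, best_c = xi.copy(), cost(xi)
    for row in L9:
        trial = np.empty(len(xi))
        for f, idx in enumerate(groups):
            trial[idx] = src[row[f] - 1, idx]
        c = cost(trial)
        if c < best_c:
            best, best_c = trial.copy(), c
    return best

def olcgoa_sketch(cost, lb, ub, n=20, d=8, G=100):
    """High-level OLCGOA loop: (simplified) GOA move, OL on one agent,
    CLS on the current best agent."""
    X = rng.uniform(lb, ub, (n, d))
    fit = np.array([cost(x) for x in X])
    for g in range(1, G + 1):
        gbest = X[np.argmin(fit)].copy()
        beta = 1.0 - g * (1.0 - 4e-5) / G            # Eq. (10), beta_max = 1
        # (1) stand-in for the full Eq. (9) move: contract the swarm
        #     around the best position (kept deliberately short here)
        X = np.clip(gbest + beta * (X - X.mean(axis=0)), lb, ub)
        # (2) orthogonal learning on a random agent, Eq. (15)
        i, k = rng.choice(n, 2, replace=False)
        X[i] = ol_step(X[i], gbest, X[k], cost)
        fit = np.array([cost(x) for x in X])
        # (3) chaotic exploitation around the fittest agent, Eq. (16),
        #     accepted only on improvement
        b = int(np.argmin(fit))
        s = (G - g + 1) / G
        c = rng.uniform(0.01, 0.99, d)
        c = 4.0 * c * (1.0 - c)
        cand = np.clip((1 - s) * X[b] + s * (lb + c * (ub - lb)), lb, ub)
        if cost(cand) < fit[b]:
            X[b], fit[b] = cand, cost(cand)
    b = int(np.argmin(fit))
    return X[b], float(fit[b])
```

Running this sketch on a simple sphere function illustrates the division of labor: the OL step recombines information from three agents, while the CLS step refines the incumbent best with a shrinking chaotic radius.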
3.1 Complexity analysis

The time complexity of OLCGOA is mainly related to the dimension (m), the swarm size (n), and the number of generations (t). Overall, the cost decomposes into five parts: O(OLCGOA) = O(random initialization) + t × (O(fitness evaluations) + O(grasshopper updating) + O(the orthogonal learning strategy) + O(the CLS strategy)). Considering a group of n grasshoppers, the factors impacting the computational complexity are as follows: the position initialization costs O(n×m + 2n log2 n); the orthogonal learning strategy costs O(t×n²×m); obtaining the new location of each grasshopper costs O(t×n×m); and the CLS mechanism costs O(t×n×m). As a result, the overall running cost of OLCGOA is O(OLCGOA) ≈ O(nm + 2n log2 n) + O(t×(n²m + nm + nm)), which is dominated by the O(t×n²×m) term; for fixed t and m, the time complexity of OLCGOA is therefore O(n²).
4 Experimental research

In this part, the results of the proposed GOA-based technique are compared with those of other algorithms on competition functions, engineering applications, and feature selection problems. The experiments are designed to validate the achieved efficacy compared to other peers. First, the strategies added to GOA were tested on 30 CEC2017 functions to verify the improvement. After that, OLCGOA, the original algorithm, and some advanced MAs are compared under the same test environment. Finally, we evaluated the proposed OLCGOA on a set of engineering problems. All of OLCGOA's experiments are coded in MATLAB R2014a and run on the Windows Server 10 operating system. The hardware configuration is an Intel(R) Core(TM) i5-4200U CPU (2.3 GHz) and 4 GB of RAM.
4.1 Experiments on CEC2017 functions

The results of OLCGOA on the CEC2017 competition problems are presented in detail in this subsection. Table 1 gives the formulas, dimensions, and ranges of all test functions; the test functions consist of four classes: unimodal functions (f1-f3), multimodal functions (f4-f10), hybrid functions (f11-f20), and composite functions (f21-f30). Thus, the performance of the developed OLCGOA can be measured in a variety of ways through the 30 benchmark functions. To record fair results, the initial conditions of all algorithms are kept consistent: the initialization is uniform and random, the population size of all algorithms is 50, and the number of iterations is unified to 1000. Because the benchmark functions differ, their dimensions (Dim) also differ, as shown in Table 1. To reduce the impact of randomness and obtain statistically meaningful results, OLCGOA and the other methods run each function 30 times. Table 2 lists the parameter settings of the algorithms used in all of the following experiments. Tables 3-4 present the parameter sensitivity analysis for different population sizes and iteration numbers. Moreover, Tables 5-7 show the comparison results on the 30 classic benchmark problems, where the Avg index represents the average value on each function and the Std index the standard deviation. Beyond that, the Friedman test (Li, Chen, & Mou, 2005) is adopted to rank the comprehensive ability of OLCGOA against the other comparison methods, where ARV denotes the average ranking value in the experiment. To compare the values of each function more clearly, all OLCGOA results and the best results in each table are shown in bold. We also recorded the wall-clock time costs of OLCGOA and the other algorithms in Tables 8-10.
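The ARV (average ranking value) of the Friedman test mentioned above can be computed as sketched below. This is a small illustrative helper, assuming smaller mean errors are better; ties share the average of the ranks they occupy.

```python
import numpy as np

def average_ranks(results):
    """Friedman-style average ranking value (ARV).

    results : (n_problems, n_algorithms) array of mean errors, smaller is
              better. Returns the mean rank of each algorithm across the
              problems; tied values share the average of their ranks.
    """
    n_prob, n_alg = results.shape
    ranks = np.empty_like(results, dtype=float)
    for p in range(n_prob):
        row = results[p]
        for a in range(n_alg):
            smaller = np.sum(row < row[a])
            equal = np.sum(row == row[a])
            # ties occupy ranks smaller+1 .. smaller+equal; take their mean
            ranks[p, a] = smaller + (equal + 1) / 2.0
    return ranks.mean(axis=0)
```

The algorithm with the lowest ARV is the overall winner across the benchmark suite, which is how the rankings in Tables 5-7 are interpreted.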
4.1.1 Parameter sensitivity analysis

To better analyze the parameter sensitivity of the algorithm, the influence of the population size and the number of evaluations is analyzed by varying a single parameter while keeping the others unchanged. In this part of the experiment, with all other conditions fixed, the three functions f1, f2, and f3 are selected for verification. The number of agents in the population is set to 10, 30, 60, 100, and 200, respectively, to study the impact of population size on algorithm performance. The experimental results in Table 3 show that, for different population sizes, the proposed OLCGOA maintains its advantages, which indicates that OLCGOA is more powerful and less affected by the population size. Table 3 also shows that IGOA and GOA are more affected by the population size, while OLCGOA remains more stable as the population changes. Five maximum iteration numbers (100, 250, 500, 1000, and 2000) were chosen to compare the three functions f1, f2, and f3 under the same conditions, in order to investigate the impact of the number of evaluations on the performance of OLCGOA. The results in Table 4 show that OLCGOA ranks first at all iteration settings on the three functions, while the fitness values of IGOA and GOA fluctuate with the number of iterations. This indicates that OLCGOA is less affected by the number of iterations.
4.1.2 Comparison with CLSGOA and OLGOA

In this part, the proposed OLCGOA is compared with improved GOA variants, namely CLSGOA, OLGOA, and the basic GOA. CLSGOA denotes the original GOA with only the chaotic exploitation mechanism added, and OLGOA denotes the original GOA with only the orthogonal learning mechanism added. The purpose is to examine whether the combination of the CLS and OL mechanisms is more effective than either single mechanism. Table 5 lists the detailed experimental data of the mentioned methods on the f1-f30 test functions. As per the results in Table 5, OLCGOA obtained the best or second-best solution on all three unimodal functions (f1-f3). In general, OLCGOA is superior to the other algorithms in solving unimodal functions. Furthermore, the ranking results on the multimodal functions (f6, f8-f10) also demonstrate that the developed OLCGOA obtains the best results on these four problems in terms of the mean index. It also achieves the second-best solution on f4 and f7. Even in its worst case (f5), it is still better than the original algorithm. On the ten hybrid functions, OLCGOA obtains the best solution on f11, f13-f15, f16, f19, and f20, and sub-optimal solutions on the rest. Although OLCGOA does not perform as well on the composite functions, its overall performance is still better than that of the other algorithms. We also rank the overall ability of OLCGOA and the other algorithms based on the Friedman test. As shown in the last few lines of Table 5, based on the ARV index, the improved OLCGOA ranks first on the four different kinds of functions, followed closely by OLGOA, CLSGOA, and the original GOA. According to the experimental results, the improved OLCGOA is the best method for processing the 30 benchmark tasks over 30
independent runs. This finding also demonstrates that the CLS and OL strategies applied in OLCGOA have improved its efficiency to a high degree. The standard deviation reflects the degree of dispersion of a data set: the smaller the standard deviation, the more stable the optimization. From the comparison of standard deviations on each function in Table 5, we find that OLCGOA performs better on f2-f4, f12-f13, f15-f19, f22, f25, and f28 than the other algorithms. Although the stability of OLCGOA is the worst on f24 and f26-f27, its average fitness values show that it still has excellent optimization ability compared with the other algorithms. On the remaining functions, the standard deviation of OLCGOA is also acceptable. Apart from f20 and the three functions f24 and f26-f27 mentioned before, the standard deviation of OLCGOA is much smaller than that of GOA, which indicates that OLCGOA is much more stable than the original GOA. As far as the symbol "+/=/-" is concerned, the tests also clearly show that OLCGOA is significantly better than CLSGOA and GOA, and slightly better than OLGOA. According to the Friedman test comparison in Table 5, OLCGOA obtained the lowest ARV over all 30 benchmark functions. Therefore, in this case, the improved GOA based on the orthogonal learning mechanism and chaotic exploitation greatly improves the global optimization ability. When a chaotic system is implemented with limited precision in a digital computer, the negative effects of the degradation of digital chaos may affect the optimization [42]. In order to choose the best map, we compared multiple maps in the CLS strategy. As shown in Table 6, the GOA variants using the Tent map, Sinusoidal map, and Singer map are named TGOA, SGOA, and RGOA, respectively.
According to the results in Table 6, the performance of OLCGOA is the best among the four algorithms. OLCGOA ranks first in the optimization results on 𝑓5, 𝑓13, 𝑓17 − 𝑓19, 𝑓24, 𝑓26, 𝑓27, and 𝑓29. Overall, OLCGOA performs relatively weakly on the unimodal functions but performs well on the multimodal, hybrid, and composite functions. In terms of standard deviation, OLCGOA has the smallest fluctuation and the most reliable stability on 𝑓3, 𝑓5, 𝑓7, 𝑓10, 𝑓13, 𝑓15, 𝑓17 − 𝑓18, 𝑓20, 𝑓26, and 𝑓27. Taking all 30 functions into account, the standard deviation values of OLCGOA are also smaller than those of its counterparts. Besides, as far as the "+/=/-" counts are concerned, the results of the four algorithms are close, but we can still conclude that OLCGOA is superior to TGOA, SGOA, and RGOA. According to the Friedman test in Table 6, OLCGOA obtained the lowest ARV among the 30 benchmarks. Therefore, the improved GOA based on the logistic map is superior to the GOA variants based on the Tent map, Sinusoidal map, and Singer map.
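The ARV figures cited from Tables 5–8 come from the Friedman procedure: rank all algorithms on each benchmark (with average ranks for ties), then average the ranks per algorithm. A minimal sketch with purely illustrative numbers:

```python
# Average ranking value (ARV) as used by the Friedman test:
# rank the algorithms on each benchmark (1 = best mean error),
# averaging positions for ties, then average the ranks per algorithm.
def average_rank(results):
    """results[f][a] = mean error of algorithm a on function f."""
    n_funcs = len(results)
    n_algs = len(results[0])
    arv = [0.0] * n_algs
    for row in results:
        order = sorted(range(n_algs), key=lambda a: row[a])
        ranks = [0.0] * n_algs
        i = 0
        while i < n_algs:
            j = i
            while j + 1 < n_algs and row[order[j + 1]] == row[order[i]]:
                j += 1                      # extend the tie group
            mean_rank = (i + j) / 2.0 + 1.0
            for k in range(i, j + 1):
                ranks[order[k]] = mean_rank
            i = j + 1
        for a in range(n_algs):
            arv[a] += ranks[a] / n_funcs
    return arv

# Toy example: 3 benchmarks, 3 algorithms (values are illustrative).
scores = [[1e-8, 2e-3, 5e-1],
          [3e-6, 3e-6, 9e-2],
          [2e-4, 7e-4, 7e-4]]
arv = average_rank(scores)   # lowest ARV = best overall rank
```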
4.1.3 Comparison with other original algorithms

OLCGOA was compared with several classical MAs, including GWO, WOA, MFO, SCA, DA, PSO, GA, and PBIL (Folly, 2006). Table 7 reports the experimental results of the comparison between the developed OLCGOA and these MAs on the 𝑓1 − 𝑓30 problems. As can be seen from Table 7, OLCGOA obtained the optimal solution for all
unimodal functions (𝑓1 − 𝑓3). Therefore, it can be clearly concluded that OLCGOA is superior to the other MAs in processing unimodal functions. On the simple multimodal functions (𝑓4 − 𝑓10), except for the ninth function, on which it ranks third, OLCGOA achieves optimal or sub-optimal results. This proves that OLCGOA performs better on simple multimodal functions than the classical MAs. The proposed OLCGOA also did well on the ten hybrid functions, obtaining the best solution on 𝑓11 − 𝑓15, 𝑓18, and 𝑓19 and sub-optimal solutions on the rest. When dealing with the ten composite functions, half of the results are optimal, and the others also perform well, though not best. This paper also ranks the average performance of OLCGOA and the other original methods through the non-parametric Friedman test. From the last few rows of Table 7, based on the ARV index, the improved OLCGOA ranks first across the four kinds of functions, followed by GWO, PSO, GA, MFO, WOA, SCA, DA, and PBIL. According to the experimental results, the improved OLCGOA is the best algorithm for processing the 30 benchmark tasks over 30 independent runs. This finding also demonstrates that the CLS and OL strategies applied in OLCGOA improve its efficiency to a high degree. As can be seen from Table 7, the standard deviation of the proposed OLCGOA is the smallest among the compared algorithms on 𝑓1, 𝑓2, 𝑓4, 𝑓9, 𝑓12 − 𝑓15, 𝑓18 − 𝑓19, 𝑓25, 𝑓28, and 𝑓30. Moreover, its standard deviation on 𝑓1, 𝑓2, and 𝑓13 is much lower than that of the other algorithms. This proves that OLCGOA has excellent global optimization stability on unimodal, multimodal, hybrid, and composite functions. On the remaining functions, OLCGOA is less stable than some classical optimizers.
Considering the overall performance on the 30 functions, OLCGOA has excellent standard deviation values and high global optimization stability. Moreover, as far as the "+/=/-" counts are concerned, OLCGOA is always superior to WOA, SCA, DA, and PBIL. Besides, OLCGOA performs better than PSO on 26 functions and performs equally on the remaining four. MFO and GA are each better than OLCGOA on only one function. In comparison with GWO, OLCGOA also shows a stronger global optimization capability. According to the Friedman test in Table 7, OLCGOA obtains the lowest ARV value. Therefore, we can conclude that the proposed algorithm has a better and more comprehensive global optimization ability than the original algorithms. Moreover, to facilitate comparison of the optimization performance of OLCGOA with the other methods, the convergence curves of OLCGOA, GA, WOA, SCA, DA, PSO, PBIL, GWO, and MFO are shown in Fig. 2. The convergence curve of OLCGOA is much better than those of the other algorithms: OLCGOA reaches the optimal solutions on the unimodal functions (𝑓1 and 𝑓3) and maintains good global and local search ability, while the other algorithms become trapped in local optima prematurely. On the multimodal functions, OLCGOA converges rapidly and quickly reaches the best solution on 𝑓7. Also, although one of the basic algorithms converges quickly at the beginning on 𝑓10, the final convergence accuracy of OLCGOA is still higher. As depicted in Fig. 2, OLCGOA has the fastest rate of convergence, whereas GWO,
WOA, MFO, SCA, DA, PSO, GA, and PBIL cannot show a comparable trend. On 𝑓12, 𝑓14, 𝑓15, and 𝑓18 − 𝑓20, OLCGOA reaches the best convergence point in the initial stage, while the others are caught in local optima because of their weaker search ability. The experimental results demonstrate that OLCGOA outperforms the other optimizers on the hybrid functions as well. All the methods achieve competitive results, but OLCGOA is the best among all the approaches mentioned above. That is to say, OLCGOA not only has a faster convergence speed but also arrives at a better solution. To sum up, comparing the convergence speed of OLCGOA with the other eight optimizers proves that OLCGOA enhances the ability to search the solution space. As can be seen from Fig. 2, the superiority of OLCGOA over the other original methods is revealed by its convergence speed on 𝑓1, 𝑓3, 𝑓7, 𝑓10, 𝑓12, 𝑓14, 𝑓15, 𝑓18 − 𝑓20, 𝑓26, and 𝑓30. Accordingly, it can be concluded that the orthogonal learning mechanism and the chaotic exploitation strategy help OLCGOA improve its searching capability.
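The "+/=/-" tallies used throughout this comparison count, per benchmark, whether the proposed method is better, equal, or worse than a competitor. The paper derives these from statistical comparisons; the sketch below substitutes a plain comparison of mean errors with a relative tolerance, which is a simplifying assumption:

```python
# Simplified "+/=/-" tally: compare the proposed method's mean error
# with a competitor's on each function. A relative tolerance stands in
# for a formal significance test here.
def win_tie_loss(ours, theirs, tol=1e-12):
    wins = ties = losses = 0
    for a, b in zip(ours, theirs):
        if abs(a - b) <= tol * max(1.0, abs(a), abs(b)):
            ties += 1
        elif a < b:                 # lower error is better
            wins += 1
        else:
            losses += 1
    return wins, ties, losses

tally = win_tie_loss([1e-9, 2e-3, 4e-1], [5e-7, 2e-3, 3e-1])  # -> (1, 1, 1)
```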
4.1.4 Comparison with advanced algorithms

OLCGOA is compared with 13 well-known advanced MAs: IGOA (Luo, et al., 2018), CGSCA (Kumar, Hussain, Singh, & Panigrahi, 2017), OBSCA (Abd Elaziz, Oliva, & Xiong, 2017), SCADE (Nenavath & Jatoth, 2018), CLPSO (Cao, et al., 2018), BLPSO (X. Chen, Xu, & Du, 2018), IWOA (Tubishat, Abushariah, Idris, & Aljarah, 2019), LWOA (Ling, Zhou, & Luo, 2017), BMWOA (Heidari, Aljarah, et al., 2019), CBA (Adarsh, Raghunathan, Jayabarathi, & Yang, 2016), CDLOBA (Yong, He, Li, & Zhou, 2018), RCBA (Liang, Liu, Shen, Li, & Man, 2018), and HGWO (Zhu, Xu, Li, Wu, & Liu, 2015). IGOA combines Lévy flight, Gaussian mutation, and opposition-based learning with the basic GOA. CGSCA is a variant of SCA enhanced by Cauchy and Gaussian strategies, originally applied to single-current-sensor-based optimization. OBSCA adopts opposition-based learning to make the spatial search more effective and obtain better solutions. SCADE combines the differential evolution algorithm with SCA, and its performance is superior to either of the two. CLPSO adopts a comprehensive learning mechanism to preserve the diversity of the swarm and thereby increase global optimization ability; the particles in CLPSO update their velocities based on the best historical information of all individuals. BLPSO adopts a biogeography-based learning strategy similar to CLPSO's: each particle updates itself by finding the best position through biogeography-based optimization. IWOA adds differential evolution and an elite opposition-based learning strategy to WOA to avoid local optima and obtain better convergence. LWOA adopts a Lévy flight trajectory strategy, which makes the original WOA converge faster and more accurately. BMWOA is a recently developed WOA variant whose exploitation strategy is based on hill climbing, which further improves the search procedure.
CBA is an improved bat algorithm (BA) based on a chaotic strategy. CDLOBA is an enhanced BA based on cooperative and dynamic learning of a relative group and challengers, which overcomes the basic BA's shortcoming of falling into local optima. RCBA builds on the bat algorithm by integrating chaotic sequences with the random black hole concept to prevent premature convergence and increase global optimization capability. HGWO adds differential evolution to GWO, which enhances its searching ability and optimization performance. In the experiments with the advanced algorithms, not only are the iteration number and population size kept consistent with the
other basic methods, but the parameters of each algorithm are also set as in the original references for a fair comparison. The specific optimization performance of each algorithm is exhibited in Table 8 and Fig. 3. Table 8 reveals that OLCGOA achieves the best solution on 18 of the 30 test functions, namely 𝑓1 − 𝑓4, 𝑓6 − 𝑓8, 𝑓10 − 𝑓11, 𝑓13 − 𝑓16, 𝑓18 − 𝑓19, 𝑓26, 𝑓28, and 𝑓30. Furthermore, OLCGOA performs consistently well on the 30 benchmark functions except 𝑓22. It can also be seen that OLCGOA is sub-optimal on 𝑓5, 𝑓12, 𝑓17, 𝑓20, 𝑓21, 𝑓25, 𝑓27, and 𝑓29 and ranks third on 𝑓9, 𝑓23, and 𝑓24. In general, OLCGOA performs well on unimodal, multimodal, and hybrid functions. On the composite functions, although its optimization capability is not as good as on the other functions, especially on the 𝑓22 benchmark, it is still better than the other advanced algorithms except IGOA. Both OLCGOA and IGOA show better convergence and optimization ability than CGSCA, OBSCA, SCADE, CLPSO, BLPSO, IWOA, LWOA, BMWOA, CBA, CDLOBA, RCBA, and HGWO. Besides, from the overall ranking in Table 8, we can conclude that OLCGOA ranks first, with the best average rank of 1.7. As can be seen from Table 8, the standard deviation of the proposed OLCGOA is the smallest among the advanced algorithms on 𝑓2, 𝑓3, 𝑓13 − 𝑓15, 𝑓18 − 𝑓19, 𝑓28, and 𝑓30. According to the comparison with the advanced algorithms in Table 8, the advantage of OLCGOA is no longer as apparent as over the basic algorithms, because the advanced algorithms already improve on their basic counterparts to a certain degree. On the whole, OLCGOA still shows apparent stability in function optimization tasks compared with the advanced algorithms.
Furthermore, from the "+/=/-" column, we can conclude that OLCGOA is far superior to CGSCA, OBSCA, SCADE, CLPSO, BLPSO, IWOA, LWOA, BMWOA, CBA, CDLOBA, RCBA, and HGWO. In the test, only IGOA approaches the optimization ability of OLCGOA, and IGOA performs better than OLCGOA on only six functions. According to the Friedman test in Table 8, OLCGOA obtains the lowest ARV on the 30 benchmarks. Therefore, we can conclude that the proposed OLCGOA, based on the orthogonal learning mechanism and chaotic exploitation, improves the global optimization ability of the original GOA to a great extent. Fig. 3 illustrates the convergence patterns of the involved algorithms. It can be seen that the improved OLCGOA obtains better solutions than the other 13 competitors and converges faster on the involved functions. Therefore, according to the experimental results, OLCGOA obtained the most promising results over 30 independent runs. To show the diversity characteristic of the proposed OLCGOA, eight representative functions were selected from the four types of functions: unimodal, multimodal, hybrid, and composite. A diversity diagram of the average Euclidean distance between agents is shown in Fig. 4. As can be seen from the figure, the average Euclidean distance in OLCGOA is much smaller than in the original GOA, which means that the new modification increases the potential of GOA. It reveals that group members can exploit the best information available in the evolving population, and it shows how the proposed OLCGOA guides the population toward the favorable areas of the search
space. Based on the above analysis, we found that the proposed strategy can significantly improve the performance of GOA in most cases.
One of the core reasons is that the proposed elements help GOA maintain a more stable tradeoff between the main search phases when facing sub-optimal or near-optimal solutions. A better exploration during the initial stages, based on OL, is the first reason for the boosted performance. This operator effectively mitigates the core drawback of GOA, which is stagnation in local optima. In this case, stagnated solutions can be re-distributed within a larger area, giving more chances to scan additional regions of the search basins. The OL strategy helps the agents learn more useful information from the experience of the other agents during the search, and it directs the agents with much better guidance than the original GOA; hence, promising areas can be visited faster. The results on the different kinds of cases also show that OL enriches the efficacy of GOA in terms of reliability and stability of performance. Another reason for the enhanced results is the constructive role of the chaos-based exploitation (CLS) within the restricted areas and neighborhoods of the explored solutions. The comparison results indicate that later iterations improve the exploitative tendencies of GOA, which enhances the quality of results when converging to high-quality solutions. When converging to local optima, the proposed GOA-based solver has the capacity to jump out of them based on the chaos-based patterns applied in the last stages. A smooth shift from broad exploration to focused intensification plays a significant role in enhancing the results of GOA. Note that the proposed OLCGOA also inherits the promising core advantages of the basic approach in terms of exploratory and exploitative drifts.
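As a concrete illustration of the OL operator described above, the sketch below uses an L4(2³) orthogonal array to recombine two solutions: each of three contiguous variable groups is taken from either solution, the four OA-prescribed trials are evaluated, and a factor-analysis step predicts one further combination. The grouping into three parts and the array size are illustrative assumptions, not the paper's exact design:

```python
# L4(2^3) orthogonal array: 4 rows cover every pair of factor levels.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def orthogonal_combine(a, b, fitness):
    """OA-guided recombination of solutions a and b (minimization)."""
    n = len(a)
    cuts = [0, n // 3, 2 * n // 3, n]          # three contiguous groups

    def build(row):
        trial = []
        for g, level in enumerate(row):
            src = a if level == 0 else b       # level 0 -> a, 1 -> b
            trial.extend(src[cuts[g]:cuts[g + 1]])
        return trial

    trials = [build(r) for r in L4]
    fits = [fitness(t) for t in trials]
    # Factor analysis: per group, keep the level whose rows have the
    # lower total fitness, then also evaluate that predicted combination.
    predicted = tuple(
        0 if sum(f for r, f in zip(L4, fits) if r[g] == 0)
             <= sum(f for r, f in zip(L4, fits) if r[g] == 1) else 1
        for g in range(3))
    trials.append(build(predicted))
    fits.append(fitness(trials[-1]))
    best = min(range(len(fits)), key=fits.__getitem__)
    return trials[best], fits[best]

# Toy check on the sphere function: b carries the better middle group.
sphere = lambda x: sum(v * v for v in x)
a = [0.1, 0.1, 5.0, 5.0, 0.1, 0.1]
b = [4.0, 4.0, 0.2, 0.2, 4.0, 4.0]
best, fit = orthogonal_combine(a, b, sphere)   # picks a-b-a pieces
```

The factor-analysis step is what lets OL discover a combination (here a-b-a) that none of the four OA rows contains explicitly.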
4.1.5 Wall-clock Time Cost

The wall-clock time of OLCGOA and its competitive counterparts over 30 independent runs is recorded in Tables 9, 10, and 11. Table 9 shows the wall-clock time consumed by OLCGOA, OLGOA, CLSGOA, and GOA on the 30 CEC2017 benchmark cases; for simplicity of representation, each value in Table 9 is divided by 60. The time results are also compared in Fig. 5. Tables 10 and 11 show the wall-clock time consumed by OLCGOA and 21 other algorithms on the 30 CEC2017 benchmark cases, and Fig. 6 shows the portion of time consumed by each method, run serially over all problems. At first glance, it can be seen that the proposed OLCGOA consumes more time than the basic GOA. The main reason for the higher time cost of OLCGOA is the introduction of the two strategies (OL and CLS) into the conventional GOA to achieve a better balance between neighborhood-informed exploitation and comprehensive exploration. It is well known that an OL-based design increases the running time of a procedure. From Table 10, we can see that on the classic 30 benchmark functions, the time costs of DA and GA are higher than that of the basic GOA. Note that the GOA itself consumes more time than most classic algorithms. As per the bar plots in Fig. 5, the time costs of the advanced competitive variants are in a comparable range. The time portions in Fig. 6 show that DA and GOA need more time to complete the search phases. The operators of GOA are inherently more expensive than those of earlier well-known
methods such as DA. Hence, GOA itself is not a very fast optimizer, and when we examine the advanced variants, side effects of the modified exploratory and exploitative cores are expected. For example, as per the results in Table 11, the time cost of IGOA is among the highest of the compared variants and optimizers. We can also find that the computational cost of OLCGOA is the highest, and the cost of IGOA is also high. Although the proposed OLCGOA consumes more computation than its competitors, the experimental results show that OLCGOA is superior to GOA and the other peers in most cases in terms of convergence speed and solution quality. Therefore, when time and resources are not limited and the decision-maker gives higher priority to the quality of solutions, it is valuable and feasible to introduce the two strategies into the conventional GOA, even though the time consumption is somewhat higher. Moreover, techniques such as parallel computing can be used to decrease the computing time of any method. Hence, the proposed method is more favorable for offline applications and for cases where the decision-maker or user gives more weight to solution quality and local optima avoidance. Even a random search can find some solutions (infeasible ones or local optima) very quickly, but the main target is a more stable optimizer able to deliver high-quality solutions within an adequate time. This adequate time is a subjective measure, and it depends on the limitations, computational budget, and resources of the decision-maker or user. Another point is that if an orthogonal variant of any other method were developed, a similar side effect on the computational time would be expected.
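A minimal sketch of how wall-clock figures like those in Tables 9–11 can be collected; the toy workload is purely illustrative, and the division by 60 mirrors the representation used in Table 9:

```python
import time

def time_runs(optimizer, runs=30):
    """Total wall-clock seconds for `runs` independent runs."""
    start = time.perf_counter()     # monotonic performance clock
    for _ in range(runs):
        optimizer()
    return time.perf_counter() - start

# Toy workload standing in for one optimizer run.
seconds = time_runs(lambda: sum(i * i for i in range(10000)), runs=5)
minutes = seconds / 60.0            # as reported in Table 9
```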
4.2 Application to the engineering problems

In this subsection, OLCGOA is utilized to optimize three kinds of constrained engineering design cases: the pressure vessel, welded beam, and tension/compression spring problems. In practice, different mathematical models have different constraints, so an appropriate constraint-handling method is required. As shown in (S. Mirjalili & Lewis, 2016), penalty functions include dynamic, static, co-evolutionary, and adaptive penalties, as well as the death penalty. Among these, the death penalty is a simple way to shape the primary target value of the structured mathematical model: during optimization, the heuristic algorithm automatically discards infeasible solutions. This method has the advantages of simplicity and low computational cost. However, it does not exploit the information carried by infeasible solutions, which may be useful in problems dominated by infeasible regions. In this section, OLCGOA is equipped with a death penalty function to handle the constraints, and its practical feasibility is verified. In the following engineering problems, we adopt 1000 iterations to find the optimal solution.
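The death-penalty scheme described above can be sketched in a few lines: any solution violating a constraint (in the g(y) ≤ 0 convention used by the models below) receives an infinite objective value, so the optimizer discards it automatically. This is a generic sketch, not the paper's exact implementation:

```python
def death_penalty(objective, constraints):
    """Wrap `objective` so that infeasible solutions score infinity."""
    def penalized(y):
        if any(g(y) > 0.0 for g in constraints):
            return float("inf")     # infeasible: automatically discarded
        return objective(y)
    return penalized

# Toy example: minimize y^2 subject to y >= 1, i.e. g(y) = 1 - y <= 0.
f = death_penalty(lambda y: y[0] ** 2, [lambda y: 1.0 - y[0]])
```

Any of the constrained models in Eqs. (17)–(19) can be passed through this wrapper before being handed to the optimizer.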
4.2.1 Welded beam problem

The goal of this engineering design is to minimize the production cost of a welded beam. The optimization involves the bending stress of the beam (𝜃), the end deflection of the beam (𝛿), the buckling load (𝑃𝑐), and the shear stress (𝜏). There are four optimization variables: the weld thickness (ℎ), the height (𝑡), the length (𝑙), and the thickness (𝑏) of the clamped bar. The welded beam design problem can be formulated as:
Consider
    y = [y1, y2, y3, y4] = [h, l, t, b]
Objective
    min f(y) = 1.10471·y1²·y2 + 0.04811·y3·y4·(14.0 + y2)
Subject to
    g1(y) = τ(y) − τmax ≤ 0
    g2(y) = σ(y) − σmax ≤ 0
    g3(y) = δ(y) − δmax ≤ 0
    g4(y) = y1 − y4 ≤ 0
    g5(y) = P − Pc(y) ≤ 0
    g6(y) = 0.125 − y1 ≤ 0
    g7(y) = 1.10471·y1² + 0.04811·y3·y4·(14.0 + y2) − 5.0 ≤ 0        (17)
Variable ranges
    0.1 ≤ y1 ≤ 2, 0.1 ≤ y2 ≤ 10, 0.1 ≤ y3 ≤ 10, 0.1 ≤ y4 ≤ 2
where
    τ(y) = √(τ′² + 2·τ′·τ″·y2/(2R) + τ″²),  τ′ = P/(√2·y1·y2),  τ″ = M·R/J,
    M = P·(L + y2/2),  R = √(y2²/4 + ((y1 + y3)/2)²),
    J = 2·{√2·y1·y2·[y2²/4 + ((y1 + y3)/2)²]},
    σ(y) = 6·P·L/(y4·y3²),  δ(y) = 6·P·L³/(E·y4·y3²),
    Pc(y) = (4.013·E·√(y3²·y4⁶/36)/L²)·(1 − (y3/(2L))·√(E/(4G))),
    P = 6000 lb, L = 14 in., δmax = 0.25 in., τmax = 13600 psi, σmax = 30000 psi,
    E = 30×10⁶ psi, G = 12×10⁶ psi.
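Eq. (17) can be transcribed directly into code; the sketch below follows the symbols and constants above (it is a transcription of the reconstructed model, so any formulation ambiguity in the source carries over):

```python
import math

# Constants of the welded beam model, Eq. (17).
P, L = 6000.0, 14.0
E, G = 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam_cost(y):
    """Fabrication cost; y = [h, l, t, b]."""
    h, l, t, b = y
    return 1.10471 * h * h * l + 0.04811 * t * b * (14.0 + l)

def welded_beam_constraints(y):
    """Constraints g1..g7 of Eq. (17), each required to be <= 0."""
    h, l, t, b = y
    tau_p = P / (math.sqrt(2.0) * h * l)
    M = P * (L + l / 2.0)
    R = math.sqrt(l * l / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * (math.sqrt(2.0) * h * l * (l * l / 4.0 + ((h + t) / 2.0) ** 2))
    tau_pp = M * R / J
    tau = math.sqrt(tau_p ** 2 + 2.0 * tau_p * tau_pp * l / (2.0 * R)
                    + tau_pp ** 2)
    sigma = 6.0 * P * L / (b * t * t)
    delta = 6.0 * P * L ** 3 / (E * b * t * t)
    Pc = (4.013 * E * math.sqrt(t * t * b ** 6 / 36.0) / L ** 2
          * (1.0 - t / (2.0 * L) * math.sqrt(E / (4.0 * G))))
    return [tau - TAU_MAX,
            sigma - SIGMA_MAX,
            delta - DELTA_MAX,
            h - b,
            P - Pc,
            0.125 - h,
            1.10471 * h * h + 0.04811 * t * b * (14.0 + l) - 5.0]
```

Evaluating the cost at the design reported in Table 12 (h ≈ 0.2054, l ≈ 3.2586, t ≈ 9.0365, b ≈ 0.2057) reproduces a cost of about 1.6956.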
In this part, OLCGOA is compared with three new SCA variants and other well-known methods. As shown in Table 12, among the SCA-based algorithms, the SCA combined with the Cauchy and Gaussian mechanisms (CGSCA) has the best optimization effect on the welded beam problem under the same conditions, obtaining an optimal cost of 1.898909723, while SCADE obtains 2.004799008. The GWO algorithm, which mimics the hunting habits of grey wolves, reduces the cost of the welded beam design very well; its final result is 1.709155074. Also, Singh et al. (Singh, Mukherjee, & Ghoshal, 2016) applied ALCPSO, a particle swarm optimizer with an aging leader and challengers, which obtains 1.695814014, second only to OLCGOA. According to the data in Table 12, the result of 1.695567 acquired by OLCGOA is the best among all results, with the lowest cost. This shows that when the parameters are set as ℎ = 0.205425, 𝑙 = 3.258593, 𝑡 = 9.036472, 𝑏 = 0.205736, the manufacturing cost of the welded beam reaches 1.695567. Therefore, the improved OLCGOA makes prominent progress on the welded beam engineering problem.
4.2.2 Pressure vessel problem

As a benchmark problem, pressure vessel design is widely used in structural optimization. The ultimate goal of this design is to minimize the material cost, which is strongly linked to the materials, structure, and welding (Kannan & Kramer, 1994). One end of the pressure vessel is capped, and the head is hemispherical. In this design, the thickness of the shell (𝑇𝑠), the thickness of the head (𝑇ℎ), the inner radius (𝑅), and the length of the cylindrical section without the head (𝐿) are the parameters to be optimized. The mathematical model of the pressure vessel problem is shown below:
Consider
    y = [y1, y2, y3, y4] = [Ts, Th, R, L]
Objective
    min f(y) = 0.6224·y1·y3·y4 + 1.7781·y2·y3² + 3.1661·y1²·y4 + 19.84·y1²·y3
Subject to
    g1(y) = −y1 + 0.0193·y3 ≤ 0
    g2(y) = −y2 + 0.00954·y3 ≤ 0
    g3(y) = −π·y3²·y4 − (4/3)·π·y3³ + 1296000 ≤ 0
    g4(y) = y4 − 240 ≤ 0                                             (18)
Variable ranges
    0 ≤ y1 ≤ 99, 0 ≤ y2 ≤ 99, 10 ≤ y3 ≤ 200, 10 ≤ y4 ≤ 200
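Eq. (18) in code form; as a sanity check, the well-known design (Ts = 0.8125, Th = 0.4375, R = 42.0984, L = 176.6366) cited widely in the pressure vessel literature evaluates to a cost near 6059.7 under this model:

```python
import math

def vessel_cost(y):
    """Objective of Eq. (18); y = [Ts, Th, R, L]."""
    Ts, Th, R, L = y
    return (0.6224 * Ts * R * L + 1.7781 * Th * R * R
            + 3.1661 * Ts * Ts * L + 19.84 * Ts * Ts * R)

def vessel_constraints(y):
    """Constraints g1..g4 of Eq. (18), each required to be <= 0."""
    Ts, Th, R, L = y
    return [-Ts + 0.0193 * R,
            -Th + 0.00954 * R,
            -math.pi * R * R * L - (4.0 / 3.0) * math.pi * R ** 3 + 1296000.0,
            L - 240.0]

known = [0.8125, 0.4375, 42.0984, 176.6366]   # classic literature design
cost = vessel_cost(known)                     # ~6059.7
```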
Many algorithms have been designed to optimize the pressure vessel problem. He et al. (He & Wang, 2007) used the PSO algorithm, obtaining a result of 6061.0777. GA (Coello Coello, 2000) was also applied to this problem, with an optimal cost of 6410.3811. The SCA based on opposition-based learning (OBSCA) fails to obtain a better result, with a value of 6687.895034. We also used CGSCA, GWO, and SCADE on this problem; among them, GWO achieves a result very close to OLCGOA's, with an optimal cost of 5990.070358. Table 13 depicts the comparison of OLCGOA with the above meta-heuristic methods on the pressure vessel problem. Based on the data in Table 13, the outcome of 5922.209425 obtained by OLCGOA is the best among all the methods. This shows that when the parameters are set as 𝑇𝑠 = 2.346243331, 𝑇ℎ = 0.62271985, 𝑅 = 65.27461405, 𝐿 = 10, the manufacturing cost of this problem reaches 5922.209425. Therefore, the improved OLCGOA can be recognized as an effective method for solving this type of problem.
4.2.3 Tension/compression spring problem

The ultimate goal of this design is to minimize the spring weight. The problem must satisfy constraints on deflection, surge frequency, and shear stress. There are three optimization variables: the wire diameter (𝑑), the mean coil diameter (𝐷), and the number of active coils (𝑁). The spring design can be formulated as follows:
Consider
    y = [y1, y2, y3] = [d, D, N]
Minimize
    f(y) = (y3 + 2)·y2·y1²
Subject to
    g1(y) = 1 − y2³·y3/(71785·y1⁴) ≤ 0
    g2(y) = (4·y2² − y1·y2)/(12566·(y2·y1³ − y1⁴)) + 1/(5108·y1²) − 1 ≤ 0
    g3(y) = 1 − 140.45·y1/(y2²·y3) ≤ 0
    g4(y) = (y1 + y2)/1.5 − 1 ≤ 0                                    (19)
Variable ranges
    0.05 ≤ y1 ≤ 2.00, 0.25 ≤ y2 ≤ 1.30, 2.00 ≤ y3 ≤ 15.0
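Eq. (19) in code; evaluating the weight at the design reported in Table 14 (d ≈ 0.05159, D ≈ 0.35426, N ≈ 11.4365) reproduces a value close to 0.012667:

```python
def spring_weight(y):
    """Objective of Eq. (19); y = [d, D, N]."""
    d, D, N = y
    return (N + 2.0) * D * d * d

def spring_constraints(y):
    """Constraints g1..g4 of Eq. (19), each required to be <= 0."""
    d, D, N = y
    return [1.0 - D ** 3 * N / (71785.0 * d ** 4),
            (4.0 * D * D - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
            + 1.0 / (5108.0 * d * d) - 1.0,
            1.0 - 140.45 * d / (D * D * N),
            (d + D) / 1.5 - 1.0]

reported = [0.051586809, 0.354262809, 11.4365114]   # design from Table 14
weight = spring_weight(reported)                    # ~0.012667
```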
Table 14 exhibits the comparison of OLCGOA with other meta-heuristic algorithms, including OBSCA, CGSCA, SCADE, CBA, WOA, PSO, GWO, and BA, on the tension/compression spring design problem. We can conclude that the optimization ability of OLCGOA is superior to that of the other approaches. Based on the data in Table 14, the result of 0.012667456 obtained by OLCGOA is the best of all. This shows that when the parameters are set as 𝑑 = 0.051586809, 𝐷 = 0.354262809, 𝑁 = 11.4365114, the weight of the spring reaches 0.012667456. Therefore, the improved OLCGOA achieves a clear improvement in the design of the tension/compression spring.
4.3 Application to the feature selection problems

Feature selection and search optimization problems have much in common. The purpose of feature selection is to remove unimportant features with low correlation so as to obtain the highest possible classification accuracy. In this segment, we deal with 21 different UCI data sets and compare the proposed OLCGOA with advanced feature selection algorithms. Table 15 shows the numbers of features and samples of the 21 data sets (Emary, Zawbaa, & Hassanien, 2016b).
In the OLCGOA algorithm, a more suitable solution is selected by continually comparing the new grasshopper position with the original one, and feature selection follows the same principle. The first step is to decide whether each feature is selected, so the continuous grasshopper positions must be converted to binary values. The initial position is assigned 0 or 1 with equal probability:

    𝑋𝑖,𝑗 = { 0,  𝑟𝑎𝑛𝑑 < 0.5
             1,  𝑟𝑎𝑛𝑑 ≥ 0.5                                          (20)
where 𝑋𝑖,𝑗 denotes the position of the grasshopper in row 𝑖 and column 𝑗. Moreover, the continuous solution in each dimension is transformed by a sigmoid transfer function, which forces the continuous search into a movement in binary space:

    𝑠 = 1/(1 + 𝑒𝑥𝑝(−𝑥/3))                                            (21)

    𝑥 = { 𝑝𝑜𝑠 = ~𝑝𝑜𝑠𝑖𝑡𝑖𝑜𝑛,  𝑟𝑎𝑛𝑑 < 𝑠
          𝑝𝑜𝑠 = 𝑝𝑜𝑠𝑖𝑡𝑖𝑜𝑛,   𝑟𝑎𝑛𝑑 ≥ 𝑠                                 (22)

where 𝑥 is a continuous value, 𝑝𝑜𝑠 is the binary value after the change, and 𝑝𝑜𝑠𝑖𝑡𝑖𝑜𝑛 denotes the initialized binary value. Feature selection based on OLCGOA is run N times on each data set, with 𝑘-fold cross-validation performed each time. During the cross-validation process, the data samples are divided into a training set, a validation set, and a test set in a fixed proportion. This paper uses the K-nearest-neighbor (KNN) classifier for classification. The classifier first trains on the training set, then compares and verifies against the samples in the validation set, and finally runs the selected features on the test data to obtain the final accuracy. The feature selection results of the proposed method are contrasted with several feature selection algorithms, namely BALO (Emary, Zawbaa, & Hassanien, 2016a), BGWO (Emary, et al., 2016b), BPSO (S. Mirjalili & Lewis, 2013), BBA (S. Mirjalili, Mirjalili, & Yang, 2014), and BSSA (Faris, et al., 2018). In this experiment, features are selected on the binary principle: when a position bit is 0, the corresponding feature is discarded; otherwise, it is selected. The number of features in the training data determines the length of each vector. Each result is compared by its fitness value, computed as follows:

    𝑎𝑐𝑐𝑢𝑟𝑎𝑐𝑦 = 𝑇𝑃/(𝑇𝑃 + 𝐹𝑃)                                         (23)

    𝑇𝑃 = Σ(𝑖=1..𝐷) 𝑇𝑃𝑖                                               (24)

    𝐹𝑃 = Σ(𝑖=1..𝐷) 𝐹𝑃𝑖                                               (25)

    𝑓𝑖𝑡 = 𝛼 × (1 − 𝑎𝑐𝑐𝑢𝑟𝑎𝑐𝑦) + 𝛽 × 𝑁𝑖/𝑁                              (26)

where 𝑎𝑐𝑐𝑢𝑟𝑎𝑐𝑦 is the accuracy obtained by the KNN classifier on the validation set, 𝑇𝑃𝑖 is the number of test documents correctly classified in the 𝑖-th category (𝐷𝑖), and 𝐹𝑃𝑖 is the number of test documents incorrectly classified in 𝐷𝑖. 𝑁 is the total number of features in the current validation data set, and 𝑁𝑖 is the number of features obtained by the 𝑖-th individual after feature selection. The two weights are related: 𝛼 ∈ (0,1) and 𝛽 = 1 − 𝛼. Each individual in the population is evaluated by its fitness value to determine whether to move to a better location, and the whole process iterates until the number of iterations set at the initial stage is reached. 10-fold cross-validation was employed for
the whole experiment, the maximum number of iterations was set to 50, and the parameter K of the KNN classifier was set to 3. To give a more intuitive picture of OLCGOA and the other algorithms on the feature selection task, the optimal fitness values, accuracy, and numbers of selected features for three representative data sets (D4, D5, and D19) are displayed in Fig. 7.
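The binary machinery of Eqs. (20)–(22) and the fitness of Eq. (26) can be sketched as follows; the error rate stands in for 1 − accuracy, and α = 0.95 as in the experiment:

```python
import math
import random

def init_position(dim, rng=random.random):
    """Random 0/1 initialization of a grasshopper position, Eq. (20)."""
    return [0 if rng() < 0.5 else 1 for _ in range(dim)]

def transfer(x, bit, rng=random.random):
    """Sigmoid transfer of a continuous step into a bit flip, Eqs. (21)-(22)."""
    s = 1.0 / (1.0 + math.exp(-x / 3.0))
    return 1 - bit if rng() < s else bit        # flip with probability s

def fitness(error_rate, n_selected, n_total, alpha=0.95):
    """Eq. (26): trade classification error against subset size."""
    beta = 1.0 - alpha
    return alpha * error_rate + beta * n_selected / n_total

fit = fitness(0.05, 8, 30)   # 0.95*0.05 + 0.05*8/30 ~ 0.0608
```

A larger continuous step x yields a transfer probability s closer to 1, making a bit flip more likely, which is how the continuous GOA dynamics drive the binary search.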
Table 16 lists the average fitness values of OLCGOA compared with BGWO, BPSO, BALO, BBA, and BSSA under 50 iterations; the best solution in each row is shown in bold. Compared with the other five feature selection algorithms, OLCGOA performs best on 11 data sets. Among these, we can conclude that OLCGOA not only does well on data sets with few features, such as D3, D5, D7, D10, and D13, but also optimizes well when the number of features and the sample size are large. It is worth noting that BBA is weaker than the proposed OLCGOA on every data set. The other classical algorithms have competitive advantages on individual data sets, but the overall performance shows that OLCGOA has the best average fitness; the other algorithms rank in the order BALO, BGWO, BPSO, BSSA, and BBA. The average fitness values demonstrate that OLCGOA has the most reliable spatial search capability. The optimization capability of OLCGOA is also reflected on the test data, as exhibited in Table 17, which outlines the classification error rates on the 21 data sets. In the experiment, the reliability parameter is 𝛼 = 0.95. Table 17 reveals that the error rate of the improved OLCGOA on the D9, D17, and D19 data sets is 0, achieving 100% classification accuracy. Except for the data sets D3 and D12, OLCGOA is superior to the other feature selection algorithms. Especially on the D11 data set, the error rates of the other algorithms exceed 50%, which indicates that the features in this data set are hard to distinguish; however, OLCGOA still has an error rate of only 0.201. Even on D3 and D12, the gap between OLCGOA and BALO and BPSO is minimal, which reflects that OLCGOA is far better than the other algorithms in terms of accuracy. Table 18 depicts the average number of features selected by all the methods.
On all data sets except D17, OLCGOA selects fewer features than the other algorithms. Even BGWO, which selects the second-fewest features, differs significantly from OLCGOA on most data sets; OLCGOA chooses approximately 1.5 fewer features than BGWO on D18. In terms of the number of selected features, the proposed OLCGOA is therefore the best. Generally, the feature selection application shows that OLCGOA has good search ability, high accuracy, and useful feature selection capability. This is due
to the fact that the orthogonal learning strategy can find the most suitable scheme in less time based on the known information, which enables the grasshoppers to find a more useful search direction to some extent. Also, the chaotic exploitation strategy helps GOA avoid falling into local optima. The overall effect is a more stable balance between the core drifts of the modified GOA. Table 19 reports the running time of feature selection for all methods. It can be seen that OLCGOA does not perform well on time across the data sets, indicating a higher time complexity; this is the price paid for its performance being far better than that of the other algorithms.
5 Conclusion and future works In this study, a new variant of GOA named orthogonal learning and chaotic exploitation-based GOA (OLCGOA) is developed. In OLCGOA, two effective strategies, orthogonal learning (OL) and chaotic local search-based exploitation (CLS), are embedded into GOA. The simulation results reveal that these two mechanisms significantly strengthen the main search trends of GOA and mitigate its premature convergence drawback. Firstly, the effectiveness of the proposed OLCGOA was demonstrated by comparing it with several well-known methods, including GWO, WOA, MFO, SCA, DA, PSO, GA, and PBIL. The comparisons illustrate that OLCGOA obtains more appropriate results and is clearly superior to the other competitors. Secondly, OLCGOA was compared with several enhanced MAs, including IGOA, CGSCA, OBSCA, SCADE, CLPSO, BLPSO, IWOA, LWOA, BMWOA, CBA, CDLOBA, RCBA, and HGWO. These experiments show that OLCGOA has stronger global optimization capability and finds more suitable solutions. Thirdly, OLCGOA was used to optimize the parameters of engineering problems: on three practical engineering design problems, OLCGOA was compared with methods such as RO, HS, BA, GA, WOA, and PSO, and the results illustrate that it outperforms these well-known approaches on the cost evaluation indexes. Finally, in the feature selection application, OLCGOA again shows good search ability, high precision, and satisfactory feature selection capability. Therefore, the results on the structural design problems demonstrate that OLCGOA offers a new way of thinking not only for classic benchmark problems but also for practical applications. In future studies, there are still many aspects worth exploring; for instance, the developed OLCGOA can be combined with other new MAs to further enhance its optimization capability.
Moreover, extending the proposed OLCGOA to multi-objective scenarios and to image segmentation are also interesting topics. In future works, we will also investigate the impact of the degradation of chaotic systems in digital computers on the performance of optimization algorithms.
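The degradation issue mentioned above can be illustrated directly: when a chaotic map is iterated over finitely many representable states, the orbit must eventually fall into a (often short) cycle. A small sketch with the logistic map, using rounding to emulate low precision (the choice of map, seed, and precision here are illustrative assumptions):

```python
def logistic(x):
    # classic fully chaotic logistic map on [0, 1]
    return 4.0 * x * (1.0 - x)

def first_repeat(x0, digits, max_steps=5000):
    """Iterate the logistic map rounded to `digits` decimals and return the
    step at which a previously seen state recurs, plus the cycle length."""
    seen = {}
    x = round(x0, digits)
    for step in range(max_steps):
        if x in seen:
            return step, step - seen[x]
        seen[x] = step
        x = round(logistic(x), digits)
    return None, None

# With 3-decimal states there are at most 1001 values in [0, 1], so by the
# pigeonhole principle a repeat is guaranteed within 1002 steps.
step, cycle = first_repeat(0.123, digits=3)
```

A chaotic local search driven by such a collapsed orbit loses the ergodicity it relies on, which is why the effect matters for optimizers like OLCGOA.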
Acknowledgment This research is supported by National Natural Science Foundation of China (U1809209), Science and Technology Plan Project of Wenzhou, China (ZY2019020, ZG2017019), and Guangdong Natural Science Foundation (2018A030313339), MOE (Ministry of Education in China) Youth Fund Project of Humanities and Social Sciences (17YJCZH261), Scientific Research Team Project of Shenzhen Institute of Information Technology (SZIIT2019KJ022), National Natural Science Foundation of China (71803136,61471133).
Conflict of Interest The authors declare that there is no conflict of interest regarding the publication of this article.
Author Contributions:
Zhangze Xu: Writing – Original Draft, Writing – Review & Editing, Software, Visualization, Investigation.
Zhongyi Hu: Conceptualization, Methodology, Formal Analysis, Investigation, Writing – Review & Editing, Funding Acquisition, Supervision.
Ali Asghar Heidari: Writing – Original Draft, Writing – Review & Editing, Software, Visualization, Investigation.
Mingjing Wang: Writing – Review & Editing, Software, Visualization.
Xuehua Zhao: Writing – Review & Editing, Software, Visualization.
Huiling Chen: Writing – Review & Editing, Software, Visualization.
Xueding Cai: Conceptualization, Methodology, Formal Analysis, Investigation, Writing – Review & Editing, Funding Acquisition, Supervision.
Conceptualization: Ideas; formulation or evolution of overarching research goals and aims.
Methodology: Development or design of methodology; creation of models.
Software: Programming, software development; designing computer programs; implementation of the computer code and supporting algorithms; testing of existing code components.
Validation: Verification, whether as a part of the activity or separate, of the overall replication/reproducibility of results/experiments and other research outputs.
Formal Analysis: Application of statistical, mathematical, computational, or other formal techniques to analyze or synthesize study data.
Investigation: Conducting a research and investigation process, specifically performing the experiments, or data/evidence collection.
Resources: Provision of study materials, reagents, materials, patients, laboratory samples, animals, instrumentation, computing resources, or other analysis tools.
Data Curation: Management activities to annotate (produce metadata), scrub data and maintain research data (including software code, where it is necessary for interpreting the data itself) for initial use and later reuse.
Writing – Original Draft: Preparation, creation and/or presentation of the published work, specifically writing the initial draft (including substantive translation).
Writing – Review & Editing: Preparation, creation and/or presentation of the published work by those from the original research group, specifically critical review, commentary or revision, including pre- or post-publication stages.
Visualization: Preparation, creation and/or presentation of the published work, specifically visualization/data presentation.
Supervision: Oversight and leadership responsibility for the research activity planning and execution, including mentorship external to the core team.
Project Administration: Management and coordination responsibility for the research activity planning and execution.
Funding Acquisition: Acquisition of the financial support for the project leading to this publication.
Fig. 1. Flowchart of the proposed OLCGOA: the parameters (cmax, cmin, Q, F) and the grasshopper swarm X are initialized, the fitness of each agent is calculated, and the best solution Fbest is chosen. In each iteration the grasshopper positions are updated and individuals that go outside the search space are relocated; an orthogonal learning step creates an orthogonal array and generates a trial individual Xi' that replaces Xi when its fitness is better; a chaotic local search (CLS) then generates a candidate position CS that replaces Fbest when it is better. The loop repeats until the stopping condition is satisfied, and Fbest is returned.
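Read as pseudocode, the flow in Fig. 1 can be sketched roughly as follows. The grasshopper movement equation and the exact OL/CLS operators are simplified stand-ins (the mask-based OL combination and the logistic-map CLS below are illustrative assumptions, not the paper's exact operators):

```python
import random

def olcgoa_sketch(obj, dim, lb, ub, n=30, max_iter=200, c_max=1.0, c_min=4e-5):
    """Rough rendering of the Fig. 1 flow for a minimization problem."""
    X = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    fit = [obj(x) for x in X]
    b = min(range(n), key=fit.__getitem__)
    x_best, f_best = X[b][:], fit[b]
    f_init = f_best
    z = 0.7                                   # chaotic variable for the CLS
    for t in range(max_iter):
        c = c_max - t * (c_max - c_min) / max_iter   # shrinking coefficient
        for i in range(n):
            # simplified movement toward the best agent, clipped to bounds
            X[i] = [min(ub, max(lb, xi + c * random.uniform(-1, 1) * (xb - xi)))
                    for xi, xb in zip(X[i], x_best)]
            fit[i] = obj(X[i])
            # orthogonal-learning stand-in: combine X[i] and x_best per dimension
            trial = [xi if random.random() < 0.5 else xb
                     for xi, xb in zip(X[i], x_best)]
            f_trial = obj(trial)
            if f_trial < fit[i]:              # keep Xi' only if it improves Xi
                X[i], fit[i] = trial, f_trial
            if fit[i] < f_best:
                x_best, f_best = X[i][:], fit[i]
        # chaotic local search around the current best
        z = 4.0 * z * (1.0 - z)               # logistic map
        cs = [min(ub, max(lb, xb + (2.0 * z - 1.0) * c * (ub - lb)))
              for xb in x_best]
        f_cs = obj(cs)
        if f_cs < f_best:                     # accept CS only if better than Fbest
            x_best, f_best = cs, f_cs
    return f_init, f_best

random.seed(0)
f0, f1 = olcgoa_sketch(lambda x: sum(v * v for v in x), dim=5, lb=-5, ub=5)
```

Because both the OL and CLS steps are accepted greedily, the best fitness is non-increasing over the run, mirroring the "save Fbest if better" branches of the flowchart.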
Fig. 2. Convergence curves of OLCGOA, GA, WOA, SCA, DA, PSO, PBIL, GWO, and MFO, tested on 12 benchmark functions.
Fig. 3. Convergence curves of OLCGOA, IGOA, CGSCA, OBSCA, SCADE, CLPSO, BLPSO, IWOA, LWOA, BMWOA, CBA, CDLOBA, RCBA, and HGWO, tested on 15 test functions.
Fig. 4. Diversity plot for 8 test functions.
Fig. 5. Comparison of OLCGOA, OLGOA, CLSGOA, and GOA on CEC2017 functions.
Fig. 6. Time consumption of the methods on the problems.
Fig. 7. Comparison results of OLCGOA and the other algorithms on the M-of-n, SpectEW, and CongressEW data sets (D4, D5, and D19).
Table 1. Details of the CEC2017 benchmark functions. Each row of the table gives the function definition, dimension, search range, and global optimum. All functions are evaluated on [-100,100] in every dimension; f1-f10 use dimension 30, while f11-f30 use low dimensions (3-6); the global optimum of fi is 100 × i (from 100 for f1 up to 3000 for f30). Recognizable base functions include a Rosenbrock-type f4, Rastrigin-type f5 and f8, a Schwefel-type f10 (with zi = xi + 4.209687462275036E+02), an Ackley-type f13, a Weierstrass-type f14, and a Griewank-type f15; f21-f30 are shifted and rotated compositions of the earlier functions, of the form fi(M(x - oi)) plus the corresponding bias.
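Several of the base functions in Table 1 are well-known classics. For reference, minimal Python implementations of three of them (Rastrigin-, Ackley-, and Griewank-type, corresponding to the f5/f8, f13, and f15 entries) are sketched below; the CEC2017 versions additionally apply shifts, rotations, and a bias of 100 × i, which are omitted here.

```python
import math

def rastrigin(x):
    # f5/f8-type: sum(x_i^2 - 10*cos(2*pi*x_i) + 10), minimum 0 at the origin
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def ackley(x):
    # f13-type: two exponential terms over the quadratic and cosine sums
    d = len(x)
    s1 = sum(xi * xi for xi in x)
    s2 = sum(math.cos(2.0 * math.pi * xi) for xi in x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(s1 / d))
            - math.exp(s2 / d) + 20.0 + math.e)

def griewank(x):
    # f15-type: quadratic sum minus a product of scaled cosines
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1.0
```

All three are zero at the origin, which after shifting by oi and adding the bias gives the optima listed in the table.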
Table 2. Parameter settings for the algorithms

Method | Population size | Maximum generation | Other parameters
OLCGOA | 30 | 1000 | Q=3; F=4; βMax=1; βMin=0.00004
MFO | 30 | 1000 | b=1; t=[-1 1]; a∈[-1 -2]
GA | 30 | 1000 | cross=1; mutation=0.01
DA | 30 | 1000 | w∈[0.9 0.2]; s=0.1; a=0.1; c=0.7; f=1; e=1
PSO | 30 | 1000 | c1=2; c2=2; vMax=6
SCA | 30 | 1000 | A=2
WOA | 30 | 1000 | a1=[2 0]; a2=[-2 -1]; b=1
PBIL | 30 | 1000 | Rate=0.05; b=1; w=0; F=1; pMutate=1; shiftMutate=0.01
GOA | 30 | 100 | βMax=1; βMin=0.00004
GWO | 30 | 1000 | a=[2,0]
CBA | 30 | 1000 | Qmin=0; Qmax=2
RCBA | 30 | 1000 | Qmin=0; Qmax=2; delta=0.1
CGSCA | 30 | 1000 | cmin=0.2; cmax=0.8
SCADE | 30 | 1000 | pCR=0.8
HGWO | 30 | 1000 | cMin=0.2; cMax=0.8; pCR=0.2
LWOA | 30 | 1000 | a1=[2 0]; a2=[-2 -1]; b=1; h=1.5
IWOA | 30 | 1000 | a1=[2 0]; a2=[-2 -1]; b=1; Cr=0.1
BLPSO | 30 | 1000 | w=[0.2 0.9]; c=1.496; I=1; E=1
CLPSO | 30 | 1000 | w=[0.2 0.9]; c=1.496
BMWOA | 30 | 1000 | a1=[2 0]; a2=[-2 -1]; b=1
OBSCA | 30 | 1000 | A=2
CDLOBA | 30 | 1000 | Qmin=0; Qmax=2
IGOA | 30 | 1000 | βMax=1; βMin=0.00004; f=0.5; l=1.5; h=1.5
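The OLCGOA row above fixes Q = 3 and F = 4, i.e., the OL strategy works on an L9(3^4) orthogonal array. A sketch of the standard basic/non-basic column construction for prime Q is given below; the paper's Table 2 only lists the parameters, so this particular construction procedure is an assumption based on the orthogonal learning literature.

```python
def orthogonal_array(Q, F):
    """Construct an L_M(Q^F) orthogonal array (levels 0..Q-1) for prime Q."""
    J = 1
    while (Q ** J - 1) // (Q - 1) < F:   # smallest J giving enough columns
        J += 1
    M = Q ** J
    n_cols = (Q ** J - 1) // (Q - 1)
    A = [[0] * n_cols for _ in range(M)]
    # basic columns: full factorial over the J "digits" of the row index
    for k in range(1, J + 1):
        j = (Q ** (k - 1) - 1) // (Q - 1)          # 0-based basic column index
        for i in range(M):
            A[i][j] = (i // Q ** (J - k)) % Q
    # non-basic columns: modular combinations of earlier columns
    for k in range(2, J + 1):
        j = (Q ** (k - 1) - 1) // (Q - 1)
        for s in range(1, j + 1):
            for t in range(1, Q):
                col = j + (s - 1) * (Q - 1) + t
                for i in range(M):
                    A[i][col] = (A[i][s - 1] * t + A[i][j]) % Q
    return [row[:F] for row in A]

A = orthogonal_array(3, 4)   # L9(3^4): 9 trial rows instead of 3^4 = 81
```

Each column is level-balanced and every pair of columns covers all Q^2 level combinations, which is what lets the OL step test factor combinations of two solutions with only M trial vectors.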
Table 3. Population size analysis of representative functions (Avg ± Std)

F1 | Sizepop = 10 | 30 | 60 | 100 | 200
OLCGOA | 7.97E+02 ± 2.99E+01 | 6.25E+02 ± 3.99E+01 | 5.89E+02 ± 2.31E+01 | 5.79E+02 ± 2.56E+01 | 5.70E+02 ± 1.68E+01
IGOA | 7.96E+02 ± 6.60E+01 | 6.29E+02 ± 4.02E+01 | 6.11E+02 ± 2.97E+01 | 6.13E+02 ± 3.72E+01 | 6.25E+02 ± 3.07E+01
GOA | 9.03E+02 ± 8.16E+01 | 6.67E+02 ± 4.22E+01 | 6.26E+02 ± 4.57E+01 | 6.37E+02 ± 4.18E+01 | 6.85E+02 ± 5.29E+01

F2 | Sizepop = 10 | 30 | 60 | 100 | 200
OLCGOA | 1.04E+04 ± 2.85E+04 | 7.25E+03 ± 6.18E+03 | 1.59E+04 ± 1.25E+04 | 1.10E+04 ± 8.77E+03 | 1.17E+04 ± 1.25E+04
IGOA | 7.83E+04 ± 4.91E+04 | 1.01E+05 ± 6.59E+04 | 1.37E+05 ± 7.24E+04 | 1.27E+05 ± 9.34E+04 | 1.62E+05 ± 8.70E+04
GOA | 8.71E+04 ± 5.54E+04 | 7.31E+04 ± 3.92E+04 | 1.14E+05 ± 7.24E+04 | 9.83E+04 ± 5.82E+04 | 1.18E+08 ± 3.67E+08

F3 | Sizepop = 10 | 30 | 60 | 100 | 200
OLCGOA | 1.80E+05 ± 2.71E+05 | 2.31E+04 ± 9.00E+03 | 1.61E+04 ± 6.23E+03 | 1.68E+04 ± 9.97E+03 | 1.64E+04 ± 1.42E+04
IGOA | 1.04E+06 ± 6.76E+05 | 8.33E+05 ± 6.95E+05 | 7.16E+05 ± 4.98E+05 | 6.54E+05 ± 4.07E+05 | 5.89E+05 ± 4.12E+05
GOA | 1.30E+07 ± 1.12E+07 | 7.87E+06 ± 4.48E+06 | 6.23E+06 ± 6.58E+06 | 1.09E+07 ± 1.05E+07 | 5.50E+07 ± 1.41E+08
Table 4. Evaluations analysis of representative functions (Avg ± Std)

F1 | Evaluations = 100 | 250 | 500 | 1000 | 2000
OLCGOA | 6.88E+02 ± 3.40E+01 | 6.81E+02 ± 3.66E+01 | 6.75E+02 ± 3.63E+01 | 6.41E+02 ± 3.56E+01 | 6.50E+02 ± 4.11E+01
IGOA | 7.58E+02 ± 7.11E+01 | 6.56E+02 ± 4.82E+01 | 6.54E+02 ± 4.59E+01 | 6.53E+02 ± 1.40E+01 | 6.48E+02 ± 2.53E+01
GOA | 7.55E+02 ± 5.86E+01 | 7.05E+02 ± 3.99E+01 | 6.87E+02 ± 4.98E+01 | 6.66E+02 ± 4.09E+01 | 6.48E+02 ± 4.54E+01

F2 | Evaluations = 100 | 250 | 500 | 1000 | 2000
OLCGOA | 2.74E+04 ± 1.94E+04 | 1.09E+04 ± 1.04E+04 | 1.52E+04 ± 1.43E+04 | 1.51E+04 ± 1.23E+04 | 1.46E+04 ± 1.34E+04
IGOA | 1.07E+05 ± 6.92E+04 | 1.16E+05 ± 8.71E+04 | 1.04E+05 ± 8.62E+04 | 1.01E+05 ± 8.33E+04 | 8.87E+04 ± 4.65E+04
GOA | 1.09E+05 ± 6.72E+04 | 8.27E+04 ± 6.74E+04 | 1.10E+05 ± 7.32E+04 | 1.06E+05 ± 8.41E+04 | 9.66E+04 ± 8.23E+04

F3 | Evaluations = 100 | 250 | 500 | 1000 | 2000
OLCGOA | 5.15E+05 ± 4.33E+05 | 8.28E+04 ± 6.43E+04 | 4.99E+04 ± 5.91E+04 | 3.00E+04 ± 2.60E+04 | 1.70E+04 ± 7.79E+03
IGOA | 3.90E+06 ± 3.03E+06 | 2.03E+06 ± 1.59E+06 | 1.29E+06 ± 6.71E+05 | 1.31E+06 ± 2.19E+06 | 4.41E+05 ± 2.60E+05
GOA | 2.30E+07 ± 1.94E+07 | 1.50E+07 ± 1.56E+07 | 1.09E+07 ± 8.14E+06 | 8.70E+06 ± 1.44E+07 | 8.03E+06 ± 5.83E+06
Table 5. Results of OLCGOA, OLGOA, GOA, and CLSGOA on 30 benchmark functions (Avg ± Std)

F1 | F2 | F3
OLCGOA | 4.26E+03 ± 5.59E+03 | 5.79E+06 ± 1.24E+07 | 3.00E+02 ± 2.46E-04
CLSGOA | 1.02E+07 ± 8.11E+06 | 1.16E+21 ± 3.48E+21 | 1.39E+04 ± 5.62E+03
OLGOA | 3.01E+03 ± 3.46E+03 | 7.03E+06 ± 2.48E+07 | 3.00E+02 ± 3.63E-04
GOA | 1.06E+07 ± 1.09E+07 | 8.56E+21 ± 4.41E+22 | 2.70E+04 ± 1.44E+04

F4 | F5 | F6
OLCGOA | 4.99E+02 ± 1.90E+01 | 6.67E+02 ± 4.78E+01 | 6.30E+02 ± 1.93E+01
CLSGOA | 5.23E+02 ± 3.26E+01 | 6.74E+02 ± 5.04E+01 | 6.44E+02 ± 1.31E+01
OLGOA | 4.91E+02 ± 2.56E+01 | 6.60E+02 ± 5.42E+01 | 6.35E+02 ± 2.04E+01
GOA | 5.21E+02 ± 3.63E+01 | 6.64E+02 ± 4.57E+01 | 6.51E+02 ± 1.72E+01

F7 | F8 | F9
OLCGOA | 8.44E+02 ± 3.61E+01 | 9.39E+02 ± 3.63E+01 | 4.14E+03 ± 1.40E+03
CLSGOA | 9.13E+02 ± 3.77E+01 | 9.40E+02 ± 2.69E+01 | 4.30E+03 ± 1.15E+03
OLGOA | 8.36E+02 ± 2.65E+01 | 9.65E+02 ± 3.93E+01 | 6.72E+03 ± 2.38E+03
GOA | 9.39E+02 ± 5.16E+01 | 9.48E+02 ± 3.56E+01 | 5.68E+03 ± 2.74E+03

F10 | F11 | F12
OLCGOA | 4.41E+03 ± 6.20E+02 | 1.24E+03 ± 5.90E+01 | 1.17E+06 ± 1.03E+06
CLSGOA | 5.34E+03 ± 6.91E+02 | 1.37E+03 ± 7.91E+01 | 2.29E+07 ± 2.25E+07
OLGOA | 4.48E+03 ± 5.87E+02 | 1.27E+03 ± 4.52E+01 | 1.48E+06 ± 1.48E+06
GOA | 5.47E+03 ± 7.51E+02 | 1.46E+03 ± 9.76E+01 | 1.68E+07 ± 1.34E+07

F13 | F14 | F15
OLCGOA | 1.48E+04 ± 1.54E+04 | 6.14E+03 ± 4.04E+03 | 8.16E+03 ± 9.38E+03
CLSGOA | 1.25E+05 ± 6.60E+04 | 4.52E+04 ± 4.50E+04 | 8.37E+04 ± 5.46E+04
OLGOA | 1.95E+04 ± 2.07E+04 | 5.99E+03 ± 3.89E+03 | 1.06E+04 ± 1.12E+04
GOA | 2.50E+06 ± 1.29E+07 | 4.58E+04 ± 3.81E+04 | 9.17E+04 ± 7.64E+04

F16 | F17 | F18
OLCGOA | 2.53E+03 ± 3.04E+02 | 2.26E+03 ± 1.85E+02 | 1.61E+05 ± 8.09E+04
CLSGOA | 3.01E+03 ± 3.13E+02 | 2.30E+03 ± 2.51E+02 | 1.01E+06 ± 7.59E+05
OLGOA | 2.55E+03 ± 3.66E+02 | 2.18E+03 ± 2.07E+02 | 1.42E+05 ± 1.43E+05
GOA | 2.83E+03 ± 3.30E+02 | 2.27E+03 ± 2.52E+02 | 8.49E+05 ± 8.99E+05

F19 | F20 | F21
OLCGOA | 1.01E+04 ± 1.01E+04 | 2.54E+03 ± 2.21E+02 | 2.49E+03 ± 4.54E+01
CLSGOA | 5.08E+06 ± 3.47E+06 | 2.62E+03 ± 1.73E+02 | 2.45E+03 ± 4.17E+01
OLGOA | 1.54E+04 ± 1.55E+04 | 2.56E+03 ± 2.14E+02 | 2.50E+03 ± 7.57E+01
GOA | 4.26E+06 ± 3.49E+06 | 2.64E+03 ± 1.72E+02 | 2.46E+03 ± 4.75E+01

F22 | F23 | F24
OLCGOA | 5.59E+03 ± 1.53E+03 | 2.87E+03 ± 5.58E+01 | 3.09E+03 ± 8.80E+01
CLSGOA | 3.70E+03 ± 2.17E+03 | 2.83E+03 ± 4.70E+01 | 2.98E+03 ± 5.28E+01
OLGOA | 5.76E+03 ± 1.73E+03 | 2.86E+03 ± 7.51E+01 | 3.07E+03 ± 8.13E+01
GOA | 5.97E+03 ± 1.59E+03 | 2.82E+03 ± 5.69E+01 | 2.99E+03 ± 4.62E+01

F25 | F26 | F27
OLCGOA | 2.89E+03 ± 1.15E+01 | 4.59E+03 ± 1.41E+03 | 3.26E+03 ± 3.64E+01
CLSGOA | 2.93E+03 ± 2.70E+01 | 5.04E+03 ± 1.41E+03 | 3.26E+03 ± 2.79E+01
OLGOA | 2.89E+03 ± 1.95E+01 | 5.21E+03 ± 1.11E+03 | 3.26E+03 ± 3.43E+01
GOA | 2.94E+03 ± 3.42E+01 | 5.17E+03 ± 1.12E+03 | 3.25E+03 ± 2.54E+01

F28 | F29 | F30
OLCGOA | 3.21E+03 ± 2.23E+01 | 3.85E+03 ± 2.31E+02 | 3.37E+04 ± 1.79E+04
CLSGOA | 3.28E+03 ± 4.49E+01 | 4.16E+03 ± 2.70E+02 | 8.25E+06 ± 5.74E+06
OLGOA | 3.23E+03 ± 2.76E+01 | 3.86E+03 ± 2.10E+02 | 3.01E+04 ± 1.50E+04
GOA | 3.30E+03 ± 4.90E+01 | 4.16E+03 ± 2.85E+02 | 9.68E+06 ± 5.14E+06

Overall rank
Method | OLCGOA | OLGOA | CLSGOA | GOA
Rank | 1 | 2 | 3 | 4
ARV | 1.7 | 2.1 | 2.9 | 3.3
+/-/= | ~ | 4/0/26 | 19/4/7 | 20/3/7
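The ARV row in Table 5 is an average ranking value: for every function the algorithms are ranked by their average result (1 = best) and the ranks are averaged across all functions. The paper does not spell out its tie-handling, so the sketch below assumes the common convention of assigning tied entries the mean of their ranks.

```python
def average_rank(results):
    """results: {algorithm: [avg_f1, avg_f2, ...]} -> {algorithm: ARV}.
    Lower values are better (minimization); ties share the mean rank."""
    algs = list(results)
    n_fun = len(next(iter(results.values())))
    arv = {a: 0.0 for a in algs}
    for f in range(n_fun):
        vals = sorted((results[a][f], a) for a in algs)
        i = 0
        while i < len(vals):
            j = i
            while j + 1 < len(vals) and vals[j + 1][0] == vals[i][0]:
                j += 1                         # extend over the tied group
            mean_rank = (i + j) / 2 + 1        # mean of ranks i+1 .. j+1
            for k in range(i, j + 1):
                arv[vals[k][1]] += mean_rank
            i = j + 1
    return {a: arv[a] / n_fun for a in algs}

# toy example: A is always best; B and C split second and third place
r = average_rank({"A": [1.0, 1.0], "B": [2.0, 3.0], "C": [3.0, 2.0]})
```

The smaller the ARV, the more consistently an algorithm places near the top, which is how the rank order 1-4 in the table is derived.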
Table 6. Results of OLCGOA, TGOA, SGOA, and RGOA on 30 benchmark functions (Avg ± Std)

F1 | F2 | F3
OLCGOA | 4.71E+03 ± 4.28E+03 | 5.31E+02 ± 1.05E+03 | 3.00E+02 ± 1.32E-05
TGOA | 3.40E+03 ± 3.91E+03 | 5.23E+02 ± 5.82E+02 | 3.00E+02 ± 4.24E-06
SGOA | 4.10E+03 ± 3.95E+03 | 9.67E+02 ± 1.64E+03 | 3.00E+02 ± 2.94E-06
RGOA | 3.72E+03 ± 2.75E+03 | 4.94E+02 ± 9.29E+02 | 3.00E+02 ± 1.38E-05

F4 | F5 | F6
OLCGOA | 4.05E+02 ± 2.05E+00 | 5.27E+02 ± 8.95E+00 | 6.08E+02 ± 8.38E+00
TGOA | 4.04E+02 ± 1.66E+00 | 5.37E+02 ± 1.36E+01 | 6.03E+02 ± 7.07E+00
SGOA | 4.04E+02 ± 2.43E+00 | 5.38E+02 ± 1.39E+01 | 6.05E+02 ± 8.23E+00
RGOA | 4.04E+02 ± 2.03E+00 | 5.28E+02 ± 1.35E+01 | 6.05E+02 ± 7.19E+00

F7 | F8 | F9
OLCGOA | 7.23E+02 ± 5.06E+00 | 8.27E+02 ± 1.03E+01 | 9.00E+02 ± 8.37E-04
TGOA | 7.24E+02 ± 9.09E+00 | 8.24E+02 ± 9.70E+00 | 9.00E+02 ± 3.01E-03
SGOA | 7.22E+02 ± 7.77E+00 | 8.26E+02 ± 9.39E+00 | 9.33E+02 ± 1.06E+02
RGOA | 7.20E+02 ± 8.37E+00 | 8.22E+02 ± 8.28E+00 | 9.00E+02 ± 2.24E-06

F10 | F11 | F12
OLCGOA | 1.55E+03 ± 1.95E+02 | 1.12E+03 ± 1.58E+01 | 1.42E+04 ± 9.16E+03
TGOA | 1.52E+03 ± 2.28E+02 | 2.04E+03 ± 2.90E+03 | 1.82E+04 ± 9.96E+03
SGOA | 1.59E+03 ± 3.21E+02 | 1.11E+03 ± 6.08E+00 | 1.24E+04 ± 8.50E+03
RGOA | 1.75E+03 ± 2.87E+02 | 1.11E+03 ± 7.44E+00 | 1.42E+04 ± 1.04E+04

F13 | F14 | F15
OLCGOA | 7.28E+03 ± 6.33E+03 | 1.68E+03 ± 1.93E+02 | 7.90E+03 ± 3.60E+03
TGOA | 1.28E+04 ± 1.17E+04 | 9.03E+03 ± 9.12E+03 | 1.20E+04 ± 8.74E+03
SGOA | 1.26E+04 ± 1.28E+04 | 1.63E+03 ± 1.08E+02 | 6.04E+03 ± 4.91E+03
RGOA | 1.48E+04 ± 1.17E+04 | 2.58E+03 ± 1.15E+03 | 9.71E+03 ± 7.98E+03

F16 | F17 | F18
OLCGOA | 1.89E+03 ± 1.28E+02 | 1.75E+03 ± 3.38E+01 | 8.00E+03 ± 5.53E+03
TGOA | 1.86E+03 ± 1.21E+02 | 1.81E+03 ± 5.59E+01 | 9.94E+03 ± 9.62E+03
SGOA | 1.85E+03 ± 1.43E+02 | 1.77E+03 ± 4.14E+01 | 9.77E+03 ± 6.99E+03
RGOA | 1.92E+03 ± 1.73E+02 | 1.80E+03 ± 6.29E+01 | 1.77E+04 ± 1.46E+04

F19 | F20 | F21
OLCGOA | 9.46E+03 ± 9.41E+03 | 2.16E+03 ± 1.13E+02 | 2.29E+03 ± 6.31E+01
TGOA | 1.07E+04 ± 8.06E+03 | 2.14E+03 ± 8.65E+01 | 2.34E+03 ± 1.24E+01
SGOA | 1.45E+04 ± 1.01E+04 | 2.13E+03 ± 6.94E+01 | 2.32E+03 ± 6.53E+01
RGOA | 1.47E+04 ± 1.14E+04 | 2.09E+03 ± 7.44E+01 | 2.31E+03 ± 6.10E+01

F22 | F23 | F24
OLCGOA | 2.30E+03 ± 1.32E+00 | 2.63E+03 ± 1.18E+01 | 2.74E+03 ± 8.58E+01
TGOA | 2.44E+03 ± 4.44E+02 | 2.63E+03 ± 1.15E+01 | 2.78E+03 ± 3.27E+01
SGOA | 2.30E+03 ± 6.25E-01 | 2.64E+03 ± 1.25E+01 | 2.75E+03 ± 9.49E+01
RGOA | 2.53E+03 ± 5.11E+02 | 2.63E+03 ± 1.35E+01 | 2.78E+03 ± 2.32E+01

F25 | F26 | F27
OLCGOA | 2.94E+03 ± 2.25E+01 | 3.16E+03 ± 5.70E+02 | 3.10E+03 ± 4.63E+00
TGOA | 2.90E+03 ± 1.12E+02 | 3.40E+03 ± 6.63E+02 | 3.11E+03 ± 3.64E+01
SGOA | 2.93E+03 ± 1.91E+01 | 3.43E+03 ± 6.32E+02 | 3.11E+03 ± 3.17E+01
RGOA | 2.90E+03 ± 1.07E+02 | 3.59E+03 ± 5.73E+02 | 3.17E+03 ± 4.60E+01

F28 | F29 | F30
OLCGOA | 3.41E+03 ± 1.92E+01 | 3.25E+03 ± 6.29E+01 | 4.75E+05 ± 7.53E+05
TGOA | 3.34E+03 ± 1.41E+02 | 3.28E+03 ± 9.99E+01 | 1.45E+05 ± 3.88E+05
SGOA | 3.36E+03 ± 1.40E+02 | 3.31E+03 ± 5.15E+01 | 1.51E+05 ± 4.20E+05
RGOA | 3.30E+03 ± 5.83E-10 | 3.26E+03 ± 8.54E+01 | 2.01E+04 ± 1.51E+04

Overall rank
Method | OLCGOA | TGOA | SGOA | RGOA
Rank | 1 | 4 | 3 | 2
ARV | 2.4 | 2.633333 | 2.5 | 2.46667
+/-/= | ~ | 3/0/27 | 2/0/28 | 4/3/23
Table 7. Results of OLCGOA and 8 basic algorithms on 30 CEC2017 functions F1
F2
F3
Avg
Std
Avg
Std
Avg
Std
OLCGOA
4.15E+03
5.63E+03
6.12E+07
2.79E+08
3.00E+02
2.58E-04
GWO
2.58E+09
1.42E+09
7.24E+32
3.90E+33
5.09E+04
1.17E+04
WOA
1.66E+09
1.03E+09
3.14E+36
1.67E+37
2.35E+05
4.80E+04
MFO
1.05E+10
7.67E+09
2.11E+39
7.83E+39
1.58E+05
6.63E+04
SCA
1.77E+10
3.06E+09
7.12E+36
2.21E+37
6.64E+04
1.04E+04
DA
5.65E+09
3.10E+09
2.04E+39
1.01E+40
1.29E+05
4.19E+04
PSO
1.72E+08
2.38E+07
4.23E+15
5.36E+15
1.51E+04
5.81E+03
GA
1.98E+08
1.12E+08
8.80E+17
2.72E+18
1.76E+05
4.80E+04
PBIL
4.81E+10
5.12E+09
8.10E+38
1.81E+39
1.38E+05
2.11E+04
F4
F5
F6
Avg
Std
Avg
Std
Avg
Std
OLCGOA
4.98E+02
2.11E+01
6.60E+02
4.01E+01
6.24E+02
1.74E+01
GWO
6.61E+02
1.20E+02
6.29E+02
4.77E+01
6.11E+02
3.93E+00
WOA
8.50E+02
1.70E+02
8.31E+02
7.32E+01
6.82E+02
1.25E+01
MFO
1.41E+03
1.00E+03
7.15E+02
5.28E+01
6.40E+02
1.11E+01
SCA
2.36E+03
6.31E+02
8.14E+02
2.01E+01
6.61E+02
7.75E+00
DA
1.29E+03
3.97E+02
8.70E+02
6.32E+01
6.87E+02
1.63E+01
PSO
4.84E+02
3.28E+01
7.71E+02
3.08E+01
6.61E+02
9.32E+00
GA
6.05E+02
4.88E+01
7.57E+02
3.16E+01
6.42E+02
5.95E+00
PBIL
4.30E+03
8.44E+02
9.01E+02
2.49E+01
6.82E+02
5.44E+00
F7
F8
F9
Avg
Std
Avg
Std
Avg
Std
OLCGOA
8.37E+02
3.68E+01
9.50E+02
3.54E+01
4.55E+03
1.49E+03
GWO
9.09E+02
5.71E+01
8.96E+02
2.39E+01
2.45E+03
8.29E+02
WOA
1.28E+03
8.01E+01
1.04E+03
5.41E+01
1.05E+04
3.13E+03
MFO
1.11E+03
1.47E+02
1.01E+03
5.21E+01
7.47E+03
2.40E+03
SCA
1.19E+03
5.12E+01
1.08E+03
1.80E+01
7.81E+03
1.74E+03
DA
1.12E+03
1.25E+02
1.13E+03
6.02E+01
1.35E+04
3.43E+03
PSO
9.70E+02
3.29E+01
1.02E+03
2.29E+01
7.05E+03
2.34E+03
GA
1.08E+03
5.41E+01
1.04E+03
3.13E+01
3.39E+03
1.87E+03
PBIL
2.44E+03
1.05E+02
1.20E+03
2.04E+01
1.62E+04
1.66E+03
F10
F11
F12
Avg
Std
Avg
Std
Avg
Std
OLCGOA
4.42E+03
5.95E+02
1.26E+03
4.78E+01
2.22E+06
1.76E+06
GWO
5.06E+03
1.46E+03
2.30E+03
1.05E+03
1.03E+08
9.02E+07
WOA
7.04E+03
7.88E+02
6.45E+03
3.56E+03
2.09E+08
1.35E+08
MFO
5.46E+03
7.20E+02
4.05E+03
4.51E+03
3.25E+08
5.75E+08
SCA
8.76E+03
3.36E+02
3.32E+03
8.54E+02
2.26E+09
5.07E+08
DA
7.71E+03
7.14E+02
3.33E+03
1.28E+03
6.19E+08
6.05E+08
PSO
6.73E+03
5.96E+02
1.33E+03
4.33E+01
3.65E+07
1.60E+07
GA
5.82E+03
5.68E+02
6.16E+03
5.49E+03
4.54E+07
3.25E+07
PBIL
8.67E+03
3.60E+02
8.61E+03
2.20E+03
5.05E+09
1.06E+09
F13
F14
F15
Avg
Std
Avg
Std
Avg
Std
OLCGOA
1.84E+04
1.83E+04
5.75E+03
3.42E+03
1.27E+04
1.21E+04
GWO
3.53E+07
8.48E+07
7.46E+05
7.14E+05
1.96E+06
6.45E+06
WOA
1.87E+06
1.69E+06
2.16E+06
2.28E+06
8.25E+05
8.66E+05
MFO
4.03E+07
1.93E+08
3.43E+05
5.30E+05
6.25E+04
6.00E+04
SCA
9.19E+08
5.32E+08
4.43E+05
3.41E+05
4.62E+07
3.92E+07
DA
3.61E+07
3.61E+07
1.27E+06
1.30E+06
8.70E+05
1.52E+06
PSO
8.20E+06
2.85E+06
3.14E+04
2.74E+04
1.01E+06
3.46E+05
GA
3.01E+07
3.56E+07
3.70E+06
3.37E+06
3.14E+06
2.43E+06
PBIL
2.70E+09
9.18E+08
1.04E+06
5.52E+05
3.99E+08
1.93E+08
Algorithm | F16 Avg | F16 Std | F17 Avg | F17 Std | F18 Avg | F18 Std
OLCGOA | 2.62E+03 | 3.01E+02 | 2.21E+03 | 2.21E+02 | 1.63E+05 | 1.31E+05
GWO | 2.54E+03 | 2.93E+02 | 2.04E+03 | 1.83E+02 | 3.23E+06 | 6.03E+06
WOA | 4.22E+03 | 5.81E+02 | 2.70E+03 | 3.20E+02 | 8.97E+06 | 1.05E+07
MFO | 3.22E+03 | 3.85E+02 | 2.57E+03 | 3.14E+02 | 4.25E+06 | 8.46E+06
SCA | 3.93E+03 | 2.14E+02 | 2.71E+03 | 2.09E+02 | 9.82E+06 | 7.33E+06
DA | 4.05E+03 | 5.49E+02 | 2.78E+03 | 2.62E+02 | 1.27E+07 | 1.29E+07
PSO | 3.17E+03 | 2.47E+02 | 2.36E+03 | 2.40E+02 | 5.77E+05 | 4.66E+05
GA | 3.04E+03 | 3.10E+02 | 2.51E+03 | 2.35E+02 | 3.62E+06 | 2.45E+06
PBIL | 4.32E+03 | 2.54E+02 | 3.22E+03 | 2.33E+02 | 1.61E+07 | 7.25E+06
F19
F20
F21
Avg
Std
Avg
Std
Avg
Std
OLCGOA
1.01E+04
1.04E+04
2.48E+03
1.52E+02
2.48E+03
4.75E+01
GWO
1.01E+06
2.17E+06
2.48E+03
1.43E+02
2.40E+03
4.88E+01
WOA
1.85E+07
1.83E+07
2.85E+03
2.11E+02
2.62E+03
5.38E+01
MFO
7.93E+06
3.30E+07
2.72E+03
2.30E+02
2.50E+03
3.87E+01
SCA
6.93E+07
3.46E+07
2.86E+03
1.55E+02
2.59E+03
2.90E+01
DA
2.33E+07
1.84E+07
2.93E+03
2.11E+02
2.67E+03
6.86E+01
PSO
3.30E+06
1.72E+06
2.71E+03
2.17E+02
2.56E+03
4.11E+01
GA
5.69E+06
5.89E+06
2.56E+03
1.33E+02
2.54E+03
2.78E+01
PBIL
6.07E+08
2.53E+08
2.92E+03
1.40E+02
2.66E+03
1.83E+01
F22
F23
F24
Avg
Std
Avg
Std
Avg
Std
OLCGOA
5.01E+03
1.76E+03
2.87E+03
9.17E+01
3.07E+03
8.54E+01
GWO
4.53E+03
2.02E+03
2.79E+03
5.18E+01
2.95E+03
6.28E+01
WOA
7.97E+03
1.61E+03
3.13E+03
1.03E+02
3.21E+03
1.11E+02
MFO
6.45E+03
1.59E+03
2.83E+03
4.55E+01
2.98E+03
2.75E+01
SCA
8.53E+03
2.38E+03
3.06E+03
3.71E+01
3.23E+03
4.33E+01
DA
8.68E+03
2.02E+03
3.29E+03
1.61E+02
3.51E+03
1.67E+02
PSO
5.26E+03
2.97E+03
3.16E+03
1.05E+02
3.23E+03
8.63E+01
GA
7.61E+03
1.46E+03
2.98E+03
4.98E+01
3.15E+03
5.24E+01
PBIL
8.79E+03
1.73E+03
3.01E+03
2.28E+01
3.14E+03
2.04E+01
F25
F26
F27
Avg
Std
Avg
Std
Avg
Std
OLCGOA
2.89E+03
9.27E+00
4.57E+03
1.28E+03
3.26E+03
4.76E+01
GWO
3.00E+03
4.70E+01
4.71E+03
5.01E+02
3.26E+03
3.61E+01
WOA
3.12E+03
7.12E+01
8.17E+03
1.21E+03
3.44E+03
1.27E+02
MFO
3.28E+03
3.54E+02
6.05E+03
6.09E+02
3.26E+03
2.74E+01
SCA
3.49E+03
2.02E+02
7.54E+03
3.75E+02
3.52E+03
6.33E+01
DA
3.32E+03
2.46E+02
9.14E+03
1.17E+03
3.46E+03
9.71E+01
PSO
2.93E+03
2.39E+01
5.42E+03
2.17E+03
3.29E+03
1.41E+02
GA
3.14E+03
1.04E+02
6.28E+03
4.98E+02
3.35E+03
3.65E+01
PBIL
6.89E+03
8.34E+02
7.87E+03
2.69E+02
3.38E+03
4.11E+01
F28
F29
F30
Avg
Std
Avg
Std
Avg
Std
OLCGOA
3.21E+03
1.99E+01
3.83E+03
2.29E+02
3.52E+04
2.54E+04
GWO
3.43E+03
6.15E+01
3.88E+03
1.78E+02
1.05E+07
8.33E+06
WOA
3.59E+03
1.11E+02
5.34E+03
4.40E+02
5.43E+07
4.91E+07
MFO
4.45E+03
8.77E+02
4.25E+03
3.14E+02
6.91E+05
9.54E+05
SCA
4.25E+03
2.86E+02
4.98E+03
2.38E+02
1.62E+08
6.27E+07
DA
4.07E+03
5.90E+02
5.58E+03
7.05E+02
6.24E+07
5.85E+07
PSO
3.27E+03
2.35E+01
4.48E+03
2.88E+02
7.88E+06
3.40E+06
GA
3.48E+03
1.17E+02
3.92E+03
2.25E+02
3.00E+06
2.64E+06
PBIL
5.45E+03
4.99E+02
5.30E+03
2.94E+02
4.32E+08
1.45E+08
Overall Rank
Algorithm | Rank | ARV | +/-/=
OLCGOA | 1 | 1.533333333 | ~
GWO | 2 | 2.566666667 | 15/8/7
WOA | 6 | 6.2 | 30/0/0
MFO | 5 | 4.566666667 | 26/1/3
SCA | 7 | 6.766666667 | 30/0/0
DA | 8 | 7.333333333 | 30/0/0
PSO | 3 | 3.6 | 26/0/4
GA | 4 | 4.4 | 28/1/1
PBIL | 9 | 8.033333333 | 30/0/0
Table 8. Results of OLCGOA and 13 advanced algorithms on 30 CEC2017 functions
F1
F2
F3
Avg
Std
Avg
Std
Avg
Std
OLCGOA
3.91E+03
5.56E+03
8.31E+08
4.53E+09
3.00E+02
2.60E-04
IGOA
2.23E+06
1.40E+06
6.11E+10
2.33E+11
3.31E+02
2.19E+01
CGSCA
2.14E+10
3.72E+09
2.66E+37
6.65E+37
6.73E+04
8.52E+03
OBSCA
2.13E+10
3.71E+09
6.83E+38
2.56E+39
6.84E+04
7.35E+03
SCADE
2.60E+10
3.49E+09
8.70E+38
2.07E+39
7.25E+04
5.74E+03
CLPSO
1.51E+09
3.34E+08
3.40E+32
1.14E+33
1.28E+05
2.78E+04
BLPSO
3.28E+09
5.97E+08
3.79E+31
1.58E+32
9.63E+04
2.14E+04
IWOA
8.96E+08
6.46E+08
2.68E+30
7.97E+30
2.14E+05
6.85E+04
LWOA
2.76E+06
8.21E+05
7.88E+12
1.04E+13
8.59E+04
3.04E+04
BMWOA
2.73E+08
1.18E+08
4.72E+26
2.56E+27
6.83E+04
7.50E+03
CBA
2.08E+05
4.12E+05
3.65E+13
1.45E+14
4.20E+04
3.30E+04
CDLOBA
1.22E+04
5.18E+03
1.42E+37
4.40E+37
3.23E+04
1.52E+04
RCBA
2.85E+05
9.38E+04
9.74E+08
3.18E+09
4.07E+04
2.30E+04
HGWO
6.35E+09
2.00E+09
1.25E+35
6.71E+35
7.99E+04
5.56E+03
F4
F5
F6
Avg
Std
Avg
Std
Avg
Std
OLCGOA
4.94E+02
2.48E+01
6.57E+02
3.68E+01
6.24E+02
1.16E+01
IGOA
5.01E+02
2.06E+01
6.35E+02
3.87E+01
6.34E+02
1.86E+01
CGSCA
3.30E+03
1.05E+03
8.38E+02
2.79E+01
6.66E+02
6.03E+00
OBSCA
3.88E+03
9.65E+02
8.25E+02
2.33E+01
6.61E+02
6.04E+00
SCADE
5.35E+03
1.35E+03
8.60E+02
1.79E+01
6.69E+02
8.22E+00
CLPSO
9.50E+02
1.04E+02
7.36E+02
2.22E+01
6.29E+02
4.19E+00
BLPSO
9.07E+02
7.48E+01
7.53E+02
1.47E+01
6.30E+02
3.35E+00
IWOA
6.79E+02
6.49E+01
7.96E+02
5.57E+01
6.68E+02
1.28E+01
LWOA
5.16E+02
2.57E+01
7.90E+02
7.35E+01
6.70E+02
9.51E+00
BMWOA
6.15E+02
4.48E+01
8.00E+02
3.97E+01
6.65E+02
7.57E+00
CBA
5.00E+02
2.20E+01
8.13E+02
5.36E+01
6.73E+02
1.04E+01
CDLOBA
5.10E+02
3.22E+01
8.67E+02
7.60E+01
6.69E+02
8.79E+00
RCBA
4.98E+02
3.18E+01
7.98E+02
6.64E+01
6.75E+02
1.05E+01
HGWO
8.40E+02
1.21E+02
7.37E+02
2.19E+01
6.27E+02
5.25E+00
F7
F8
F9
Avg
Std
Avg
Std
Avg
Std
OLCGOA
8.33E+02
2.42E+01
9.40E+02
2.92E+01
4.48E+03
1.34E+03
IGOA
8.64E+02
3.55E+01
9.57E+02
3.07E+01
6.74E+03
2.96E+03
CGSCA
1.23E+03
4.54E+01
1.09E+03
2.24E+01
8.44E+03
1.57E+03
OBSCA
1.21E+03
4.76E+01
1.09E+03
2.12E+01
8.14E+03
1.41E+03
SCADE
1.24E+03
2.80E+01
1.10E+03
1.88E+01
9.83E+03
1.16E+03
CLPSO
1.03E+03
2.30E+01
1.04E+03
2.01E+01
6.83E+03
1.80E+03
BLPSO
1.09E+03
2.54E+01
1.05E+03
1.48E+01
3.16E+03
4.02E+02
IWOA
1.25E+03
8.83E+01
1.04E+03
3.95E+01
8.31E+03
2.65E+03
LWOA
1.12E+03
1.11E+02
1.00E+03
4.44E+01
8.31E+03
2.53E+03
BMWOA
1.22E+03
1.05E+02
1.01E+03
3.68E+01
7.46E+03
1.46E+03
CBA
1.90E+03
3.28E+02
1.04E+03
5.06E+01
8.49E+03
3.17E+03
CDLOBA
2.61E+03
3.03E+02
1.12E+03
5.43E+01
1.06E+04
2.61E+03
RCBA
1.92E+03
3.27E+02
1.05E+03
5.56E+01
8.45E+03
3.02E+03
HGWO
9.88E+02
3.00E+01
9.94E+02
1.82E+01
3.33E+03
8.50E+02
F10
F11
F12
Avg
Std
Avg
Std
Avg
Std
OLCGOA
4.34E+03
5.89E+02
1.26E+03
6.62E+01
2.18E+06
1.38E+06
IGOA
4.45E+03
7.15E+02
1.34E+03
6.63E+01
6.21E+06
4.03E+06
CGSCA
8.63E+03
3.02E+02
3.66E+03
1.00E+03
2.68E+09
6.33E+08
OBSCA
7.89E+03
4.00E+02
3.86E+03
8.92E+02
2.61E+09
5.81E+08
SCADE
8.55E+03
2.72E+02
4.33E+03
8.78E+02
2.70E+09
9.05E+08
CLPSO
7.40E+03
4.87E+02
2.76E+03
5.58E+02
2.19E+08
1.11E+08
BLPSO
8.71E+03
3.71E+02
2.16E+03
3.21E+02
2.59E+08
6.63E+07
IWOA
6.76E+03
8.38E+02
4.01E+03
1.75E+03
7.97E+07
8.89E+07
LWOA
5.64E+03
4.09E+02
1.30E+03
6.58E+01
1.08E+07
6.48E+06
BMWOA
7.38E+03
6.92E+02
1.62E+03
1.64E+02
6.59E+07
3.72E+07
CBA
5.91E+03
6.75E+02
1.36E+03
9.44E+01
2.13E+07
1.68E+07
CDLOBA
5.53E+03
6.69E+02
1.34E+03
7.77E+01
1.76E+06
1.22E+06
RCBA
6.13E+03
7.20E+02
1.34E+03
9.84E+01
7.26E+06
4.85E+06
HGWO
6.61E+03
4.55E+02
5.20E+03
1.14E+03
4.03E+08
1.50E+08
F13
F14
F15
Avg
Std
Avg
Std
Avg
Std
OLCGOA
1.51E+04
1.51E+04
5.28E+03
4.49E+03
1.06E+04
1.05E+04
IGOA
1.35E+05
9.23E+04
4.63E+04
3.44E+04
1.20E+05
1.20E+05
CGSCA
1.00E+09
3.78E+08
6.78E+05
3.98E+05
2.42E+07
2.69E+07
OBSCA
1.01E+09
3.23E+08
5.18E+05
2.70E+05
1.82E+07
1.68E+07
SCADE
1.12E+09
4.68E+08
7.16E+05
4.03E+05
2.31E+07
2.02E+07
CLPSO
9.00E+07
5.03E+07
3.10E+05
2.36E+05
7.55E+06
6.03E+06
BLPSO
5.09E+07
2.77E+07
2.86E+05
1.47E+05
6.05E+06
3.36E+06
IWOA
5.70E+05
3.87E+05
1.91E+06
2.01E+06
1.58E+06
4.59E+06
LWOA
1.95E+05
1.20E+05
7.37E+04
6.12E+04
7.80E+04
4.84E+04
BMWOA
4.03E+05
3.87E+05
9.99E+05
6.06E+05
9.23E+04
7.71E+04
CBA
2.13E+05
1.94E+05
7.39E+04
4.61E+04
7.02E+04
8.91E+04
CDLOBA
1.80E+05
1.01E+05
2.16E+04
2.14E+04
9.01E+04
6.62E+04
RCBA
1.77E+05
1.07E+05
3.02E+04
2.65E+04
7.00E+04
5.24E+04
HGWO
2.57E+08
1.61E+08
1.01E+06
7.99E+05
1.11E+07
1.55E+07
F16
F17
F18
Avg
Std
Avg
Std
Avg
Std
OLCGOA
2.53E+03
2.51E+02
2.27E+03
1.92E+02
1.73E+05
1.40E+05
IGOA
2.60E+03
3.01E+02
2.20E+03
2.25E+02
4.31E+05
3.12E+05
CGSCA
4.05E+03
3.36E+02
2.76E+03
1.99E+02
9.67E+06
4.95E+06
OBSCA
4.02E+03
2.39E+02
2.77E+03
1.42E+02
6.12E+06
3.74E+06
SCADE
4.21E+03
2.64E+02
2.67E+03
1.75E+02
1.03E+07
7.24E+06
CLPSO
3.26E+03
2.39E+02
2.34E+03
1.58E+02
2.04E+06
1.31E+06
BLPSO
3.64E+03
2.07E+02
2.41E+03
1.55E+02
4.98E+06
2.32E+06
IWOA
3.48E+03
6.03E+02
2.66E+03
2.65E+02
5.44E+06
6.44E+06
LWOA
3.09E+03
3.45E+02
2.56E+03
2.66E+02
1.69E+06
1.39E+06
BMWOA
3.43E+03
3.20E+02
2.45E+03
2.26E+02
3.47E+06
3.09E+06
CBA
3.91E+03
5.45E+02
2.93E+03
3.71E+02
8.39E+05
7.74E+05
CDLOBA
3.57E+03
4.36E+02
2.92E+03
3.50E+02
2.75E+05
2.87E+05
RCBA
3.63E+03
3.91E+02
2.80E+03
3.55E+02
3.29E+05
2.35E+05
HGWO
3.39E+03
3.26E+02
2.42E+03
2.15E+02
2.52E+06
2.54E+06
F19
F20
F21
Avg
Std
Avg
Std
Avg
Std
OLCGOA
9.06E+03
9.20E+03
2.56E+03
1.91E+02
2.46E+03
6.41E+01
IGOA
1.38E+05
9.85E+04
2.53E+03
2.20E+02
2.43E+03
3.31E+01
CGSCA
8.03E+07
5.08E+07
2.86E+03
1.75E+02
2.62E+03
3.09E+01
OBSCA
6.40E+07
4.23E+07
2.74E+03
1.17E+02
2.52E+03
8.74E+01
SCADE
6.48E+07
2.88E+07
2.89E+03
1.37E+02
2.60E+03
3.19E+01
CLPSO
6.12E+06
4.62E+06
2.58E+03
1.53E+02
2.52E+03
3.80E+01
BLPSO
9.49E+06
6.46E+06
2.69E+03
1.26E+02
2.54E+03
1.47E+01
IWOA
1.16E+06
2.33E+06
2.80E+03
1.86E+02
2.58E+03
4.92E+01
LWOA
5.12E+05
4.03E+05
2.88E+03
2.60E+02
2.56E+03
6.21E+01
BMWOA
7.49E+05
1.17E+06
2.66E+03
2.39E+02
2.55E+03
5.46E+01
CBA
2.52E+06
1.30E+06
2.94E+03
2.28E+02
2.64E+03
7.56E+01
CDLOBA
3.23E+05
8.30E+04
2.96E+03
2.18E+02
2.60E+03
6.13E+01
RCBA
3.57E+05
2.57E+05
2.99E+03
2.30E+02
2.61E+03
6.50E+01
HGWO
1.04E+07
1.33E+07
2.66E+03
1.54E+02
2.48E+03
2.53E+01
F22
F23
F24
Avg
Std
Avg
Std
Avg
Std
OLCGOA
5.50E+03
1.53E+03
2.88E+03
9.28E+01
3.08E+03
1.01E+02
IGOA
5.10E+03
1.78E+03
2.78E+03
3.60E+01
2.97E+03
4.18E+01
CGSCA
5.23E+03
1.63E+03
3.06E+03
3.44E+01
3.21E+03
4.36E+01
OBSCA
4.65E+03
5.09E+02
3.06E+03
3.84E+01
3.24E+03
3.48E+01
SCADE
5.80E+03
7.85E+02
3.07E+03
4.44E+01
3.22E+03
4.14E+01
CLPSO
4.71E+03
1.82E+03
2.92E+03
3.00E+01
3.11E+03
2.60E+01
BLPSO
2.79E+03
6.16E+01
2.93E+03
2.25E+01
3.09E+03
1.87E+01
IWOA
7.28E+03
1.99E+03
3.04E+03
9.73E+01
3.21E+03
1.05E+02
LWOA
6.40E+03
1.95E+03
3.05E+03
1.06E+02
3.22E+03
1.12E+02
BMWOA
4.80E+03
2.99E+03
2.96E+03
8.17E+01
3.09E+03
8.02E+01
CBA
7.30E+03
1.30E+03
3.34E+03
1.38E+02
3.46E+03
1.73E+02
CDLOBA
7.10E+03
1.22E+03
3.19E+03
1.34E+02
3.33E+03
1.04E+02
RCBA
7.24E+03
1.47E+03
3.42E+03
1.94E+02
3.50E+03
1.39E+02
HGWO
3.42E+03
7.27E+02
2.86E+03
3.37E+01
2.99E+03
3.30E+01
F25
F26
F27
Avg
Std
Avg
Std
Avg
Std
OLCGOA
2.89E+03
1.59E+01
4.39E+03
1.27E+03
3.27E+03
4.86E+01
IGOA
2.89E+03
1.07E+01
4.80E+03
7.41E+02
3.22E+03
1.54E+01
CGSCA
3.48E+03
1.41E+02
7.88E+03
3.69E+02
3.48E+03
5.72E+01
OBSCA
3.65E+03
2.48E+02
7.50E+03
5.70E+02
3.54E+03
7.12E+01
SCADE
3.72E+03
2.36E+02
7.99E+03
4.65E+02
3.54E+03
6.64E+01
CLPSO
3.10E+03
3.89E+01
6.13E+03
5.88E+02
3.35E+03
2.92E+01
BLPSO
3.12E+03
4.88E+01
6.25E+03
7.62E+02
3.39E+03
2.21E+01
IWOA
3.05E+03
4.52E+01
7.58E+03
1.07E+03
3.38E+03
7.90E+01
LWOA
2.90E+03
1.91E+01
7.02E+03
1.38E+03
3.31E+03
5.14E+01
BMWOA
3.03E+03
3.95E+01
6.67E+03
1.24E+03
3.31E+03
4.77E+01
CBA
2.91E+03
2.23E+01
9.66E+03
2.56E+03
3.52E+03
1.91E+02
CDLOBA
2.93E+03
2.89E+01
1.00E+04
2.11E+03
3.50E+03
1.92E+02
RCBA
2.90E+03
2.13E+01
9.13E+03
2.17E+03
3.45E+03
1.23E+02
HGWO
3.06E+03
3.23E+01
5.49E+03
5.65E+02
3.30E+03
3.20E+01
F28
F29
F30
Avg
Std
Avg
Std
Avg
Std
OLCGOA
3.21E+03
1.80E+01
3.94E+03
2.02E+02
3.82E+04
2.70E+04
IGOA
3.24E+03
3.41E+01
3.90E+03
1.85E+02
7.51E+05
3.82E+05
CGSCA
4.38E+03
3.39E+02
5.18E+03
2.88E+02
1.99E+08
7.03E+07
OBSCA
4.61E+03
3.66E+02
5.20E+03
2.40E+02
1.63E+08
5.06E+07
SCADE
4.89E+03
3.94E+02
5.43E+03
3.15E+02
1.71E+08
5.76E+07
CLPSO
3.78E+03
1.18E+02
4.47E+03
2.34E+02
9.49E+06
5.66E+06
BLPSO
3.55E+03
5.28E+01
4.60E+03
1.41E+02
1.17E+07
4.44E+06
IWOA
3.44E+03
7.89E+01
4.88E+03
4.62E+02
5.96E+06
5.13E+06
LWOA
3.24E+03
2.55E+01
4.42E+03
3.04E+02
2.26E+06
1.36E+06
BMWOA
3.41E+03
4.47E+01
4.73E+03
3.30E+02
5.58E+06
3.70E+06
CBA
3.24E+03
2.77E+01
5.37E+03
7.12E+02
4.57E+06
3.08E+06
CDLOBA
3.38E+03
6.55E+02
5.23E+03
5.49E+02
8.70E+05
6.92E+05
RCBA
3.24E+03
5.49E+01
5.25E+03
4.95E+02
2.52E+06
2.12E+06
HGWO
3.60E+03
1.31E+02
4.52E+03
1.82E+02
7.09E+07
4.02E+07
Overall Rank
Algorithm | Rank | ARV | +/-/=
OLCGOA | 1 | 1.7 | ~
IGOA | 2 | 2.733333333 | 16/6/8
CGSCA | 13 | 11 | 29/0/1
OBSCA | 12 | 10.23333333 | 29/1/0
SCADE | 14 | 12.3 | 29/0/1
CLPSO | 6 | 6.966666667 | 26/0/4
BLPSO | 8 | 7.6 | 27/2/1
IWOA | 11 | 8.8 | 30/0/0
LWOA | 3 | 5.966666667 | 29/0/1
BMWOA | 4 | 6.733333333 | 27/0/3
CBA | 10 | 8.533333333 | 29/0/1
CDLOBA | 9 | 8.033333333 | 28/0/2
RCBA | 7 | 7.566666667 | 29/0/1
HGWO | 5 | 6.833333333 | 24/3/3
Table 9. The wall-clock time cost of OLCGOA, OLGOA, and CLSGOA on 30 CEC2017 functions

F | OLCGOA | OLGOA | CLSGOA
C01 | 4.66E+01 | 4.68E+01 | 4.20E+01
C02 | 4.12E+01 | 4.05E+01 | 3.74E+01
C03 | 4.57E+01 | 4.59E+01 | 4.17E+01
C04 | 5.12E+01 | 5.11E+01 | 4.63E+01
C05 | 4.87E+01 | 4.92E+01 | 4.47E+01
C06 | 6.01E+01 | 5.91E+01 | 4.42E+01
C07 | 4.59E+01 | 4.98E+01 | 4.56E+01
C08 | 4.77E+01 | 4.51E+01 | 4.10E+01
C09 | 5.16E+01 | 4.77E+01 | 4.29E+01
C10 | 4.95E+01 | 5.08E+01 | 4.59E+01
C11 | 5.40E+01 | 4.94E+01 | 4.47E+01
C12 | 5.20E+01 | 5.37E+01 | 4.71E+01
C13 | 5.35E+01 | 5.14E+01 | 4.72E+01
C14 | 4.83E+01 | 5.30E+01 | 4.83E+01
C15 | 5.02E+01 | 4.82E+01 | 4.36E+01
C16 | 4.94E+01 | 5.02E+01 | 4.57E+01
C17 | 4.53E+01 | 4.91E+01 | 4.46E+01
C18 | 4.41E+01 | 4.48E+01 | 4.08E+01
C19 | 4.20E+01 | 4.38E+01 | 3.85E+01
C20 | 4.62E+01 | 4.24E+01 | 3.85E+01
C21 | 4.21E+01 | 4.55E+01 | 4.14E+01
C22 | 4.38E+01 | 4.16E+01 | 3.76E+01
C23 | 4.53E+01 | 4.38E+01 | 3.75E+01
C24 | 4.40E+01 | 4.43E+01 | 3.94E+01
C25 | 5.75E+01 | 4.43E+01 | 3.87E+01
C26 | 5.90E+01 | 5.62E+01 | 4.01E+01
C27 | 4.52E+01 | 5.77E+01 | 4.06E+01
C28 | 4.97E+01 | 4.58E+01 | 3.85E+01
C29 | 4.62E+01 | 4.94E+01 | 4.13E+01
C30 | 4.59E+01 | 4.53E+01 | 4.02E+01
Table 10. The wall-clock time cost of original algorithms on 30 CEC2017 functions

F | SCA | GWO | MFO | PSO | GA | PBIL | WOA | DA | GOA
C01 | 1.49E+01 | 1.63E+01 | 1.29E+01 | 1.17E+01 | 1.36E+02 | 3.19E+01 | 3.42E+01 | 4.23E+02 | 7.89E+01
C02 | 1.42E+01 | 1.50E+01 | 1.27E+01 | 1.06E+01 | 1.35E+02 | 2.95E+01 | 3.37E+01 | 4.27E+02 | 7.09E+01
C03 | 1.43E+01 | 1.45E+01 | 1.19E+01 | 1.04E+01 | 1.20E+02 | 2.75E+01 | 3.28E+01 | 4.05E+02 | 7.76E+01
C04 | 1.44E+01 | 1.45E+01 | 1.25E+01 | 1.04E+01 | 1.22E+02 | 2.75E+01 | 3.27E+01 | 3.80E+02 | 8.53E+01
C05 | 1.46E+01 | 1.53E+01 | 1.23E+01 | 1.10E+01 | 1.27E+02 | 2.73E+01 | 3.20E+01 | 4.03E+02 | 8.21E+01
C06 | 5.69E+01 | 5.60E+01 | 5.35E+01 | 5.15E+01 | 1.68E+02 | 6.32E+01 | 7.53E+01 | 4.38E+02 | 8.09E+01
C07 | 1.42E+01 | 1.55E+01 | 1.26E+01 | 1.12E+01 | 1.44E+02 | 3.28E+01 | 3.27E+01 | 4.40E+02 | 8.31E+01
C08 | 1.32E+01 | 1.49E+01 | 1.20E+01 | 1.03E+01 | 1.42E+02 | 3.12E+01 | 3.27E+01 | 4.32E+02 | 7.58E+01
C09 | 1.39E+01 | 1.52E+01 | 1.23E+01 | 1.10E+01 | 1.40E+02 | 3.09E+01 | 3.34E+01 | 4.29E+02 | 7.89E+01
C10 | 1.49E+01 | 1.65E+01 | 1.39E+01 | 1.24E+01 | 1.42E+02 | 3.28E+01 | 3.54E+01 | 4.34E+02 | 8.44E+01
C11 | 1.54E+01 | 1.67E+01 | 1.40E+01 | 1.24E+01 | 1.43E+02 | 3.35E+01 | 3.51E+01 | 4.34E+02 | 8.02E+01
C12 | 2.07E+01 | 2.25E+01 | 1.96E+01 | 1.78E+01 | 1.52E+02 | 3.94E+01 | 4.03E+01 | 4.51E+02 | 8.58E+01
C13 | 1.34E+01 | 1.47E+01 | 1.18E+01 | 1.03E+01 | 1.59E+02 | 3.49E+01 | 3.22E+01 | 4.83E+02 | 8.55E+01
C14 | 1.38E+01 | 1.49E+01 | 1.18E+01 | 1.03E+01 | 1.39E+02 | 3.12E+01 | 3.27E+01 | 4.32E+02 | 8.83E+01
C15 | 1.42E+01 | 1.48E+01 | 1.24E+01 | 1.10E+01 | 1.56E+02 | 3.63E+01 | 3.32E+01 | 4.98E+02 | 8.05E+01
C16 | 1.44E+01 | 1.56E+01 | 1.23E+01 | 1.08E+01 | 1.63E+02 | 3.46E+01 | 3.28E+01 | 4.90E+02 | 8.29E+01
C17 | 1.43E+01 | 1.62E+01 | 1.30E+01 | 1.16E+01 | 1.54E+02 | 3.59E+01 | 3.29E+01 | 4.88E+02 | 8.08E+01
C18 | 1.39E+01 | 1.52E+01 | 1.22E+01 | 1.03E+01 | 1.47E+02 | 3.29E+01 | 3.27E+01 | 4.70E+02 | 7.59E+01
C19 | 2.26E+01 | 2.37E+01 | 2.05E+01 | 1.92E+01 | 1.50E+02 | 3.94E+01 | 4.13E+01 | 4.54E+02 | 7.13E+01
C20 | 1.39E+01 | 1.50E+01 | 1.22E+01 | 1.08E+01 | 1.41E+02 | 3.29E+01 | 3.26E+01 | 4.62E+02 | 7.17E+01
C21 | 1.41E+01 | 1.56E+01 | 1.31E+01 | 1.13E+01 | 1.30E+02 | 2.83E+01 | 3.36E+01 | 4.36E+02 | 7.55E+01
C22 | 1.52E+01 | 1.66E+01 | 1.35E+01 | 1.23E+01 | 1.21E+02 | 2.65E+01 | 3.39E+01 | 3.95E+02 | 7.03E+01
C23 | 2.25E+01 | 2.36E+01 | 2.08E+01 | 1.94E+01 | 1.38E+02 | 3.56E+01 | 4.10E+01 | 4.16E+02 | 7.04E+01
C24 | 2.01E+01 | 2.08E+01 | 1.81E+01 | 1.67E+01 | 1.41E+02 | 3.28E+01 | 3.81E+01 | 4.24E+02 | 7.25E+01
C25 | 2.15E+01 | 2.23E+01 | 1.90E+01 | 1.81E+01 | 1.43E+02 | 3.60E+01 | 3.97E+01 | 4.38E+02 | 7.19E+01
C26 | 6.85E+01 | 6.74E+01 | 6.54E+01 | 6.29E+01 | 1.66E+02 | 6.89E+01 | 8.51E+01 | 4.42E+02 | 7.42E+01
C27 | 6.76E+01 | 6.59E+01 | 6.32E+01 | 6.25E+01 | 1.28E+02 | 5.07E+01 | 8.56E+01 | 3.29E+02 | 7.46E+01
C28 | 2.52E+01 | 2.65E+01 | 2.33E+01 | 2.17E+01 | 1.05E+02 | 2.82E+01 | 4.40E+01 | 3.16E+02 | 7.35E+01
C29 | 2.87E+01 | 3.01E+01 | 2.66E+01 | 2.58E+01 | 1.66E+02 | 4.81E+01 | 4.76E+01 | 4.77E+02 | 7.81E+01
C30 | 2.11E+01 | 2.24E+01 | 1.95E+01 | 1.77E+01 | 1.48E+02 | 3.94E+01 | 3.94E+01 | 4.48E+02 | 7.28E+01
Table 11. The wall-clock time cost of advanced algorithms on 30 CEC2017 functions

F | BLPSO | BMWOA | HGWO | IGOA | IWOA | SCADE | RCBA | CLPSO | CGSCA | CBA | CDLOBA | LWOA | OBSCA
C01 | 5.79E+01 | 2.46E+02 | 3.74E+01 | 3.25E+03 | 4.12E+01 | 4.33E+01 | 1.93E+01 | 1.48E+01 | 2.51E+01 | 1.71E+01 | 2.81E+01 | 3.35E+01 | 2.46E+01
C02 | 4.97E+01 | 2.11E+02 | 3.36E+01 | 3.02E+03 | 3.51E+01 | 4.19E+01 | 1.82E+01 | 1.55E+01 | 2.45E+01 | 1.67E+01 | 2.60E+01 | 3.15E+01 | 2.28E+01
C03 | 4.59E+01 | 1.97E+02 | 3.20E+01 | 2.85E+03 | 3.01E+01 | 4.21E+01 | 1.80E+01 | 1.41E+01 | 2.43E+01 | 1.64E+01 | 2.64E+01 | 3.31E+01 | 2.31E+01
C04 | 4.43E+01 | 2.01E+02 | 3.14E+01 | 2.88E+03 | 2.96E+01 | 4.13E+01 | 1.82E+01 | 1.52E+01 | 2.39E+01 | 1.71E+01 | 2.71E+01 | 3.35E+01 | 2.39E+01
C05 | 4.61E+01 | 2.16E+02 | 3.28E+01 | 2.83E+03 | 3.01E+01 | 4.38E+01 | 1.89E+01 | 1.58E+01 | 2.53E+01 | 1.76E+01 | 2.82E+01 | 3.37E+01 | 2.48E+01
C06 | 5.55E+01 | 6.92E+02 | 1.01E+02 | 2.90E+03 | 6.88E+01 | 1.67E+02 | 5.96E+01 | 4.09E+01 | 6.67E+01 | 5.88E+01 | 1.44E+02 | 7.51E+01 | 1.41E+02
C07 | 4.51E+01 | 2.19E+02 | 3.19E+01 | 2.79E+03 | 3.07E+01 | 4.36E+01 | 1.92E+01 | 1.61E+01 | 2.46E+01 | 1.75E+01 | 2.85E+01 | 3.37E+01 | 2.57E+01
C08 | 4.54E+01 | 2.05E+02 | 3.07E+01 | 2.76E+03 | 2.96E+01 | 4.17E+01 | 1.81E+01 | 1.59E+01 | 2.45E+01 | 1.65E+01 | 2.60E+01 | 3.35E+01 | 2.33E+01
C09 | 4.53E+01 | 2.14E+02 | 3.12E+01 | 2.78E+03 | 3.01E+01 | 4.28E+01 | 1.85E+01 | 1.55E+01 | 2.42E+01 | 1.68E+01 | 2.84E+01 | 3.33E+01 | 2.44E+01
C10 | 4.53E+01 | 2.28E+02 | 3.41E+01 | 2.88E+03 | 3.25E+01 | 4.64E+01 | 1.97E+01 | 1.56E+01 | 2.59E+01 | 1.77E+01 | 3.08E+01 | 3.50E+01 | 2.82E+01
C11 | 4.44E+01 | 2.21E+02 | 3.46E+01 | 2.79E+03 | 3.13E+01 | 4.79E+01 | 2.00E+01 | 1.61E+01 | 2.66E+01 | 1.83E+01 | 3.25E+01 | 3.47E+01 | 2.94E+01
C12 | 4.47E+01 | 2.92E+02 | 4.30E+01 | 2.76E+03 | 3.61E+01 | 6.40E+01 | 2.56E+01 | 1.89E+01 | 3.21E+01 | 2.34E+01 | 4.79E+01 | 4.08E+01 | 4.45E+01
C13 | 4.43E+01 | 1.93E+02 | 3.09E+01 | 2.78E+03 | 2.91E+01 | 4.17E+01 | 1.85E+01 | 1.53E+01 | 2.47E+01 | 1.66E+01 | 2.70E+01 | 3.27E+01 | 2.33E+01
C14 | 4.38E+01 | 1.90E+02 | 3.01E+01 | 3.01E+03 | 2.95E+01 | 4.14E+01 | 1.85E+01 | 1.51E+01 | 2.45E+01 | 1.67E+01 | 2.71E+01 | 3.31E+01 | 2.35E+01
C15 | 4.46E+01 | 1.97E+02 | 3.12E+01 | 2.76E+03 | 2.91E+01 | 4.34E+01 | 1.85E+01 | 1.58E+01 | 2.46E+01 | 1.69E+01 | 2.87E+01 | 3.31E+01 | 2.53E+01
C16 | 4.39E+01 | 1.94E+02 | 3.05E+01 | 2.72E+03 | 2.85E+01 | 4.32E+01 | 1.83E+01 | 1.50E+01 | 2.49E+01 | 1.71E+01 | 2.78E+01 | 3.40E+01 | 2.48E+01
C17 | 4.19E+01 | 1.90E+02 | 3.01E+01 | 2.55E+03 | 2.84E+01 | 4.48E+01 | 1.95E+01 | 1.50E+01 | 2.51E+01 | 1.80E+01 | 2.96E+01 | 3.39E+01 | 2.71E+01
C18 | 3.87E+01 | 1.81E+02 | 2.69E+01 | 2.42E+03 | 2.76E+01 | 4.25E+01 | 1.90E+01 | 1.51E+01 | 2.45E+01 | 1.67E+01 | 2.75E+01 | 3.26E+01 | 2.45E+01
C19 | 4.30E+01 | 2.83E+02 | 4.14E+01 | 2.47E+03 | 3.46E+01 | 7.00E+01 | 2.84E+01 | 2.26E+01 | 3.31E+01 | 2.59E+01 | 5.16E+01 | 4.16E+01 | 4.83E+01
C20 | 4.10E+01 | 2.01E+02 | 2.99E+01 | 2.47E+03 | 2.76E+01 | 4.34E+01 | 1.88E+01 | 1.51E+01 | 2.44E+01 | 1.69E+01 | 2.86E+01 | 3.35E+01 | 2.53E+01
C21 | 3.96E+01 | 2.05E+02 | 2.96E+01 | 2.46E+03 | 2.78E+01 | 4.45E+01 | 1.92E+01 | 1.51E+01 | 2.50E+01 | 1.81E+01 | 2.92E+01 | 3.42E+01 | 2.65E+01
C22 | 4.08E+01 | 2.11E+02 | 3.08E+01 | 2.48E+03 | 2.75E+01 | 4.74E+01 | 2.03E+01 | 1.57E+01 | 2.62E+01 | 1.83E+01 | 3.15E+01 | 3.42E+01 | 2.86E+01
C23 | 4.21E+01 | 2.95E+02 | 4.25E+01 | 2.46E+03 | 3.42E+01 | 6.81E+01 | 2.74E+01 | 2.21E+01 | 3.30E+01 | 2.56E+01 | 5.18E+01 | 4.22E+01 | 4.92E+01
C24 | 4.48E+01 | 2.70E+02 | 3.82E+01 | 2.50E+03 | 3.26E+01 | 5.99E+01 | 2.42E+01 | 2.10E+01 | 2.94E+01 | 2.31E+01 | 4.44E+01 | 3.91E+01 | 4.16E+01
C25 | 4.38E+01 | 2.84E+02 | 4.08E+01 | 2.55E+03 | 3.40E+01 | 6.51E+01 | 2.66E+01 | 2.04E+01 | 3.21E+01 | 2.50E+01 | 4.81E+01 | 4.05E+01 | 4.55E+01
C26 | 5.71E+01 | 7.47E+02 | 1.09E+02 | 2.59E+03 | 6.91E+01 | 2.04E+02 | 7.08E+01 | 4.97E+01 | 7.81E+01 | 7.00E+01 | 1.77E+02 | 8.56E+01 | 1.73E+02
C27 | 4.83E+01 | 7.58E+02 | 1.14E+02 | 2.66E+03 | 7.22E+01 | 2.08E+02 | 7.24E+01 | 4.65E+01 | 7.73E+01 | 7.15E+01 | 1.78E+02 | 8.60E+01 | 1.75E+02
C28 | 4.35E+01 | 3.04E+02 | 5.03E+01 | 2.75E+03 | 3.97E+01 | 7.91E+01 | 3.06E+01 | 2.16E+01 | 3.60E+01 | 2.93E+01 | 6.06E+01 | 4.49E+01 | 5.90E+01
C29 | 4.48E+01 | 3.54E+02 | 5.78E+01 | 2.82E+03 | 4.28E+01 | 9.01E+01 | 3.34E+01 | 2.24E+01 | 4.01E+01 | 3.18E+01 | 6.92E+01 | 4.77E+01 | 6.71E+01
C30 | 4.71E+01 | 2.81E+02 | 4.50E+01 | 2.86E+03 | 3.61E+01 | 6.46E+01 | 2.54E+01 | 1.77E+01 | 3.15E+01 | 2.41E+01 | 4.79E+01 | 4.02E+01 | 4.53E+01
Table 12. The comparative results of OLCGOA and other competitive methods on the welded beam design problem

Algorithm | h | l | t | b | Best Cost
OLCGOA | 0.205425 | 3.258593 | 9.036472 | 0.205736 | 1.695567
OBSCA | 0.219881158 | 2.96174376 | 9.358761484 | 0.235094144 | 1.953607314
GWO | 0.208813103 | 3.220459195 | 8.972129306 | 0.209066203 | 1.709155074
ALCPSO (Singh, et al., 2016) | 0.205534049 | 3.257938484 | 9.037406008 | 0.205738502 | 1.695814014
CBA | 0.1 | 10 | 0.1 | 1.252925311 | 2.256736118
CGSCA | 0.190259704 | 4.704667696 | 9.007366399 | 0.211061734 | 1.898909723
SCADE | 0.165692067 | 4.133554617 | 10 | 0.215431319 | 2.004799008
BA | 2 | 0.1 | 3.174303 | 2 | 1.818138
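The welded beam objective minimized in Table 12 is, in the standard formulation of this benchmark, the fabrication cost f(h, l, t, b) = 1.10471 h² l + 0.04811 t b (14.0 + l), subject to shear-stress, bending-stress, buckling and deflection constraints that are not reproduced here. A quick sketch of the objective, which recovers the reported OLCGOA best cost:

```python
def welded_beam_cost(h, l, t, b):
    # Standard fabrication-cost objective of the welded beam benchmark:
    # weld material cost plus bar material cost.
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

# OLCGOA solution from Table 12
cost = welded_beam_cost(0.205425, 3.258593, 9.036472, 0.205736)
print(round(cost, 6))
```

Note this evaluates only the objective; a full reproduction would also have to verify the four design constraints at the candidate point.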
Table 13. The comparative performance of pressure vessel optimization results

Algorithm | Ts | Th | R | L | Best Cost
OLCGOA | 2.346243331 | 0.62271985 | 65.27461405 | 10 | 5922.209425
WOA | 0.8125 | 0.4375 | 10.98606 | 176.638998 | 6059.741
PSO (He & Wang, 2007) | 0.8125 | 0.4375 | 42.091266 | 176.7465 | 6061.0777
GA (Coello Coello, 2000) | 0.9375 | 0.5 | 48.329 | 112.679 | 6410.3811
OBSCA | 2.529208169 | 0.661042861 | 67.38913913 | 10 | 6687.895034
CGSCA | 1.429169496 | 0.628183116 | 63.45414914 | 21.43298341 | 6013.566614
GWO | 1.99582568 | 0.621720251 | 64.70076212 | 13.53496193 | 5990.070358
SCADE | 2.944782831 | 0.610443636 | 63.70692927 | 21.49650081 | 7664.444406
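The pressure vessel objective behind Table 13 is, in its standard form, f(Ts, Th, R, L) = 0.6224 Ts R L + 1.7781 Th R² + 3.1661 Ts² L + 19.84 Ts² R (welding, forming and material cost terms). Plugging in the PSO solution of He & Wang (2007) recovers the cost reported in the table:

```python
def pressure_vessel_cost(ts, th, r, l):
    # Standard cost objective of the pressure vessel benchmark:
    # shell welding, head forming, and two material terms.
    return (0.6224 * ts * r * l + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l + 19.84 * ts**2 * r)

# PSO solution of He & Wang (2007) from Table 13
cost = pressure_vessel_cost(0.8125, 0.4375, 42.091266, 176.7465)
print(round(cost, 4))
```

As with the welded beam, the thickness and geometry constraints of the benchmark are omitted here for brevity.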
Table 14. The comparative performance of tension/compression spring optimization results

Algorithm | d | D | N | Best Cost
OLCGOA | 0.051586809 | 0.354262809 | 11.4365114 | 0.012667456
WOA | 0.051207 | 0.345215 | 12.0043032 | 0.0126763
PSO (He & Wang, 2007) | 0.015728 | 0.357644 | 11.244543 | 0.0126747
OBSCA | 0.051918648 | 0.362232567 | 10.97789788 | 0.012671808
CGSCA | 0.05 | 0.316785017 | 14.39547695 | 0.012984604
CBA | 0.050118395 | 0.712866923 | 15 | 0.013188284
SCADE | 0.05 | 0.31447915 | 15 | 0.013365364
GWO | 0.055172952 | 0.44276283 | 7.681401301 | 0.013048537
BA | 2 | 0.396417263 | 11.32756908 | 0.012745928
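For the tension/compression spring, the standard objective is the spring weight f(d, D, N) = (N + 2) D d², where d is the wire diameter, D the mean coil diameter, and N the number of active coils. The OLCGOA solution in Table 14 reproduces the reported best cost:

```python
def spring_weight(d, D, N):
    # Standard objective of the spring benchmark: minimize the wire
    # volume/weight, (N + 2) * D * d^2.
    return (N + 2) * D * d**2

# OLCGOA solution from Table 14
w = spring_weight(0.051586809, 0.354262809, 11.4365114)
print(round(w, 9))
```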
Table 15. Detail of 21 datasets

No. | Name | No. of Attributes | No. of Samples
D1 | segment | 20 | 2310
D2 | Breastcancer | 10 | 699
D3 | Australian | 15 | 690
D4 | SpectEW | 23 | 267
D5 | CongressEW | 17 | 435
D6 | Cleveland_heart | 14 | 303
D7 | vehicle | 19 | 846
D8 | wdbc | 31 | 569
D9 | Zoo | 17 | 101
D10 | Lymphography | 19 | 148
D11 | primary-tumor | 18 | 339
D12 | Exactly | 14 | 1000
D13 | Vote | 17 | 300
D14 | IonosphereEW | 35 | 351
D15 | heart | 14 | 270
D16 | Wielaw | 31 | 240
D17 | WineEW | 14 | 178
D18 | BreastEW | 31 | 569
D19 | M-of-n | 14 | 1000
D20 | German | 25 | 1000
D21 | CTG3 | 22 | 2126
Table 16. Average fitness value of OLCGOA and other feature selection algorithms on 21 datasets

Dataset | OLCGOA | BGWO | BPSO | BALO | BBA | BSSA
D1 | 0.026 | 0.025 | 0.026 | 0.027 | 0.042 | 0.033
D2 | 0.030 | 0.032 | 0.031 | 0.030 | 0.041 | 0.030
D3 | 0.097 | 0.103 | 0.098 | 0.099 | 0.149 | 0.111
D4 | 0.067 | 0.076 | 0.075 | 0.069 | 0.138 | 0.093
D5 | 0.017 | 0.020 | 0.020 | 0.018 | 0.048 | 0.026
D6 | 0.094 | 0.096 | 0.088 | 0.089 | 0.156 | 0.096
D7 | 0.192 | 0.197 | 0.204 | 0.198 | 0.267 | 0.217
D8 | 0.010 | 0.011 | 0.017 | 0.014 | 0.036 | 0.024
D9 | 0.011 | 0.010 | 0.011 | 0.011 | 0.019 | 0.014
D10 | 0.019 | 0.033 | 0.030 | 0.027 | 0.109 | 0.042
D11 | 0.534 | 0.539 | 0.533 | 0.530 | 0.621 | 0.548
D12 | 0.049 | 0.056 | 0.035 | 0.023 | 0.218 | 0.028
D13 | 0.019 | 0.023 | 0.021 | 0.021 | 0.049 | 0.030
D14 | 0.010 | 0.011 | 0.021 | 0.018 | 0.072 | 0.041
D15 | 0.090 | 0.087 | 0.081 | 0.079 | 0.153 | 0.089
D16 | 0.051 | 0.062 | 0.068 | 0.062 | 0.137 | 0.085
D17 | 0.012 | 0.011 | 0.013 | 0.012 | 0.028 | 0.014
D18 | 0.017 | 0.032 | 0.026 | 0.022 | 0.059 | 0.034
D19 | 0.024 | 0.024 | 0.023 | 0.023 | 0.138 | 0.027
D20 | 0.181 | 0.189 | 0.196 | 0.190 | 0.266 | 0.221
D21 | 0.067 | 0.060 | 0.062 | 0.061 | 0.087 | 0.072
ARV | 2 | 2.9048 | 3.0952 | 2.381 | 6 | 4.619
rank | 1 | 3 | 4 | 2 | 6 | 5
Table 17. Average error rate of OLCGOA and other feature selection algorithms on 21 datasets

Dataset | OLCGOA | BGWO | BPSO | BALO | BBA | BSSA
D1 | 0.003 | 0.013 | 0.013 | 0.014 | 0.066 | 0.016
D2 | 0.009 | 0.013 | 0.012 | 0.011 | 0.065 | 0.010
D3 | 0.081 | 0.092 | 0.080 | 0.082 | 0.223 | 0.093
D4 | 0.058 | 0.070 | 0.062 | 0.059 | 0.203 | 0.076
D5 | 0.006 | 0.011 | 0.008 | 0.008 | 0.088 | 0.0103
D6 | 0.017 | 0.082 | 0.070 | 0.069 | 0.254 | 0.076
D7 | 0.179 | 0.188 | 0.190 | 0.185 | 0.316 | 0.203
D8 | 0.003 | 0.004 | 0.005 | 0.004 | 0.053 | 0.006
D9 | 0.000 | 0.000 | 0.000 | 0.000 | 0.096 | 0.000
D10 | 0.007 | 0.023 | 0.017 | 0.014 | 0.255 | 0.022
D11 | 0.201 | 0.541 | 0.533 | 0.529 | 0.702 | 0.544
D12 | 0.001 | 0.035 | 0.012 | 0.000 | 0.343 | 0.001
D13 | 0.010 | 0.015 | 0.010 | 0.011 | 0.117 | 0.014
D14 | 0.003 | 0.005 | 0.007 | 0.006 | 0.115 | 0.022
D15 | 0.024 | 0.070 | 0.062 | 0.060 | 0.239 | 0.069
D16 | 0.042 | 0.056 | 0.055 | 0.052 | 0.216 | 0.065
D17 | 0.000 | 0.000 | 0.000 | 0.000 | 0.114 | 0.000
D18 | 0.006 | 0.009 | 0.010 | 0.008 | 0.083 | 0.012
D19 | 0.000 | 0.001 | 0.000 | 0.000 | 0.254 | 0.000
D20 | 0.167 | 0.180 | 0.181 | 0.176 | 0.327 | 0.205
D21 | 0.008 | 0.047 | 0.047 | 0.046 | 0.105 | 0.055
ARV | 1.0952 | 2 | 4.1905 | 3.3333 | 4.7143 | 5.5714
rank | 1 | 4 | 3 | 2 | 6 | 5
Table 18. Average feature number of OLCGOA and other feature selection algorithms on 21 datasets

Dataset | OLCGOA | BGWO | BPSO | BALO | BBA | BSSA
D1 | 3.95 | 4.88 | 5.57 | 5.56 | 7.68 | 7.05
D2 | 3.54 | 3.56 | 3.71 | 3.62 | 3.82 | 3.62
D3 | 4.09 | 4.57 | 6.19 | 5.92 | 6.00 | 6.23
D4 | 3.75 | 4.69 | 7.27 | 6.07 | 8.82 | 9.30
D5 | 2.66 | 3.23 | 4.02 | 3.62 | 6.36 | 5.49
D6 | 4.03 | 4.79 | 5.64 | 5.9 | 5.48 | 6.32
D7 | 5.79 | 6.87 | 8.38 | 8.32 | 7.76 | 9.02
D8 | 3.74 | 4.55 | 7.75 | 6.63 | 11.5 | 10.9
D9 | 3.17 | 3.28 | 3.43 | 3.41 | 5.82 | 4.46
D10 | 3.38 | 4.07 | 5.10 | 4.82 | 6.99 | 7.41
D11 | 6.74 | 8.66 | 9.33 | 9.27 | 7.54 | 10.5
D12 | 4.34 | 5.60 | 5.89 | 6.00 | 5.98 | 6.98
D13 | 2.46 | 2.95 | 3.62 | 3.41 | 6.17 | 5.24
D14 | 3.92 | 5.13 | 9.63 | 8.62 | 13.2 | 14.0
D15 | 4.63 | 5.47 | 5.63 | 5.73 | 5.61 | 6.29
D16 | 4.25 | 5.61 | 9.24 | 7.56 | 12.2 | 13.9
D17 | 3.07 | 2.95 | 3.27 | 3.03 | 5.35 | 3.62
D18 | 4.88 | 6.37 | 9.81 | 8.52 | 11.9 | 13.3
D19 | 4.39 | 6.00 | 6.03 | 6.00 | 6.17 | 7.11
D20 | 7.15 | 8.53 | 11.7 | 11.07 | 9.61 | 12.6
D21 | 5.23 | 6.54 | 7.48 | 7.39 | 8.21 | 8.59
ARV | 1.0952 | 2 | 4.1905 | 3.3333 | 4.7143 | 5.5714
rank | 1 | 2 | 4 | 3 | 5 | 6
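The fitness reported in Tables 16–18 couples the classification error with the size of the selected feature subset. A common weighting in the wrapper feature-selection literature (the exact weights used in this study are not reproduced in these tables, so α = 0.99 below is an assumption for illustration) is fitness = α·error + (1 − α)·|S|/|T|:

```python
def fs_fitness(error_rate, n_selected, n_total, alpha=0.99):
    # Weighted sum of classification error and relative subset size;
    # alpha = 0.99 is a common choice in the FS literature (assumed here).
    return alpha * error_rate + (1 - alpha) * n_selected / n_total

# Illustrative values in the spirit of D1 (segment, 20 attributes):
f = fs_fitness(error_rate=0.003, n_selected=4, n_total=20)
print(round(f, 5))
```

With such a heavy weight on accuracy, two subsets with the same error rate are separated only by how few features they keep, which is why Table 18 tracks the average feature number alongside error.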
Table 19. The CPU time of OLCGOA and other feature selection algorithms on 21 datasets

Dataset | OLCGOA | BGWO | BPSO | BALO | BBA | BSSA
D1 | 174.2912 | 7.0928 | 7.8358 | 8.7192 | 9.2352 | 9.701
D2 | 78.2872 | 3.4633 | 3.4526 | 3.5189 | 4.5103 | 3.8661
D3 | 83.7912 | 3.5484 | 3.4592 | 4.6562 | 3.9127 | 3.9192
D4 | 57.6432 | 2.8684 | 2.7497 | 2.9499 | 3.0173 | 3.0888
D5 | 62.7134 | 3.0137 | 3.0171 | 3.211 | 3.3419 | 3.3694
D6 | 57.9001 | 2.9317 | 2.8393 | 3.9017 | 3.1935 | 3.2019
D7 | 77.8085 | 3.9215 | 3.7871 | 4.2084 | 4.2864 | 4.2154
D8 | 74.2489 | 3.5154 | 3.035 | 4.3272 | 3.5222 | 3.4665
D9 | 55.2078 | 2.6585 | 2.569 | 2.7922 | 2.9153 | 2.9345
D10 | 54.6134 | 2.7223 | 2.623 | 2.6531 | 2.8813 | 2.9765
D11 | 65.5221 | 3.5309 | 3.3841 | 3.4136 | 4.0537 | 3.7947
D12 | 89.626 | 4.488 | 4.4899 | 4.576 | 5.2563 | 4.9874
D13 | 59.6475 | 2.8789 | 2.8897 | 3.0384 | 3.1675 | 3.1887
D14 | 57.3447 | 3.1016 | 2.647 | 2.6987 | 3.065 | 3.1731
D15 | 58.4137 | 2.9466 | 2.8433 | 2.8816 | 3.1134 | 3.2463
D16 | 60.7913 | 2.9879 | 2.471 | 3.6951 | 2.9399 | 2.9398
D17 | 55.7921 | 2.6914 | 2.6232 | 2.8298 | 2.9367 | 3.0156
D18 | 63.8741 | 3.5643 | 3.0418 | 3.1521 | 3.4206 | 3.5321
D19 | 93.8096 | 4.4464 | 4.5255 | 4.6719 | 4.8456 | 4.9643
D20 | 88.3717 | 4.5523 | 4.2092 | 4.4141 | 4.6525 | 4.5962
D21 | 174.6147 | 7.2827 | 8.2983 | 8.5392 | 9.009 | 9.2817
ARV | 6 | 2.2857 | 1.2857 | 3.0952 | 4 | 4.3333
rank | 6 | 2 | 1 | 3 | 4 | 5
Algorithm 1 A simplified description of GOA
Start
  Initialize the grasshopper swarm Xi (i = 1, 2, ..., n);
  Initialize β_max, β_min, and the maximum number of iterations N;
  Calculate the fitness value of each grasshopper;
  Select the best individual T in the swarm according to fitness;
  while (p ≤ N)
    Update the contraction factor according to Eq. (10);
    for each grasshopper
      Normalize the distances between grasshoppers into [1, 4];
      Update the position of the current grasshopper using Eq. (9);
      If the grasshopper leaves the boundary, bring it back into the search range;
    end for
    Replace T if a stronger individual has been found;
    p = p + 1
  end while
  return the best individual T
Stop
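The updates referenced as Eqs. (9) and (10) follow the standard GOA formulation: a contraction factor that decreases linearly over the iterations, and a position update summing pairwise social forces s(r) = f·e^(−r/l) − e^(−r) (the common defaults f = 0.5, l = 1.5) plus attraction toward the best-so-far target. A minimal sketch assuming those standard forms (not a line-by-line reproduction of the paper's equations):

```python
import numpy as np

def contraction(t, T, c_max=1.0, c_min=1e-5):
    # Eq. (10)-style linear decrease of the contraction factor.
    return c_max - t * (c_max - c_min) / T

def s(r, f=0.5, l=1.5):
    # Social force between two grasshoppers (standard GOA choice).
    return f * np.exp(-r / l) - np.exp(-r)

def goa_step(X, target, c, lb, ub):
    # One Eq. (9)-style update: every agent moves under the summed social
    # forces of the others, scaled by c, plus attraction to the target.
    n, d = X.shape
    X_new = np.empty_like(X)
    for i in range(n):
        total = np.zeros(d)
        for j in range(n):
            if i == j:
                continue
            dist = np.linalg.norm(X[j] - X[i])
            unit = (X[j] - X[i]) / (dist + 1e-12)
            total += c * (ub - lb) / 2.0 * s(dist) * unit
        X_new[i] = c * total + target
    return np.clip(X_new, lb, ub)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(5, 3))
c = contraction(t=10, T=100)
X1 = goa_step(X, target=np.zeros(3), c=c, lb=-1.0, ub=1.0)
print(X1.shape)
```

The [1, 4] distance normalization mentioned in Algorithm 1 is omitted here for brevity; the raw Euclidean distance is passed to s(r) directly.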
Algorithm 2. OL strategy
Input: guiding archive Arch
Output: predictive solution x_p
1: Select two individuals x_r1 and x_r2 from Arch at random
2: Construct an orthogonal array L_M(3^4)
3: Generate M trial combinations from x_r1 and x_r2, and evaluate their fitness
4: Perform factor analysis
5: Determine the predictive solution x_p based on the results of the factor analysis
6: return x_p
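The orthogonal array L9(3^4) prescribes nine trial combinations that cover all pairwise level combinations of four three-level factors. A sketch of step 3 of the OL strategy; the per-factor level set used here ({x_r1's value, x_r2's value, their midpoint}) and the grouping of dimensions into four factors are assumptions for illustration, not the paper's exact construction:

```python
import numpy as np

# Standard orthogonal array L9(3^4): 9 trials, 4 factors, levels 0/1/2.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def ol_trials(x1, x2, oa=L9):
    # Build trial solutions: for each factor (a group of dimensions),
    # pick one of three levels -- x1's value, x2's value, or the midpoint
    # (an assumed level set for illustration).
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    levels = np.stack([x1, x2, (x1 + x2) / 2.0])          # (3, dim)
    groups = np.array_split(np.arange(x1.size), oa.shape[1])
    trials = np.empty((oa.shape[0], x1.size))
    for m, row in enumerate(oa):
        for factor, idx in enumerate(groups):
            trials[m, idx] = levels[row[factor], idx]
    return trials

T = ol_trials([0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0])
print(T.shape)
```

Because the array is orthogonal, every level of every factor appears exactly three times across the nine trials, which is what makes the subsequent factor analysis well posed.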
Algorithm 3 A simplified description of OLCGOA
Start
  Initialize the number of levels Q and the number of factors F;
  Initialize the contraction-factor bounds β_max and β_min, the total number of iterations L, the grasshopper swarm X, the best search agent Fbest, and the dimensionality d of the search space;
  Calculate the fitness of each individual in the population;
  Record the position of the individual with the smallest fitness;
  Fbest = the individual with the best fitness;
  while (l ≤ L)
    Update β according to Eq. (10);
    for i < n
      Normalize the distances between grasshoppers into [1, 4];
      Update the position of the current search agent Xi using Eq. (9);
      Select two individuals SXi and SXk from Arch at random, and construct a four-factor, three-level orthogonal array L_M(3^4), where M = Q^J with J = ⌈log_Q((Q−1)F + 1)⌉ (so M = 9 for Q = 3 and F = 4);
      Generate M trial combinations from SXi and SXk, evaluate their fitness, and perform factor analysis;
      Apply the orthogonal learning strategy according to Eq. (15);
      Bring Xi back inside the boundaries if it leaves them;
    end for
    Evaluate the fitness of all agents;
    Update Fbest if a better solution has been found;
    Establish an initial chaotic sequence according to Eq. (11);
    for i < n
      Take Fbest as the starting point and execute the CLS mechanism based on Eq. (16);
      Obtain a candidate position CS, clamped to the search range;
      Replace Fbest if the optimization ability of CS is superior to that of Fbest;
    end for
    Update Fbest if a better solution has been found;
    l = l + 1
  end while
  return the best individual Fbest
Stop
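The chaotic local search (CLS) phase of Algorithm 3 perturbs the best-so-far solution along a chaotic sequence and keeps the candidate only if it improves. A minimal sketch, assuming the logistic map z_{k+1} = 4 z_k (1 − z_k) as the chaotic generator and a fixed 10% perturbation radius — the actual sequence and step size are defined by the paper's Eqs. (11) and (16), which this excerpt does not reproduce:

```python
import numpy as np

def chaotic_local_search(fbest, fitness, lb, ub, steps=20, z0=0.7):
    # Perturb the best-so-far solution along a logistic-map chaotic
    # sequence (assumed here as the chaotic generator) and keep a
    # candidate only when it improves the fitness.
    best, best_fit = fbest.copy(), fitness(fbest)
    z = np.full_like(fbest, z0)
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)                       # logistic map
        cs = np.clip(fbest + (2.0 * z - 1.0) * (ub - lb) * 0.1, lb, ub)
        f = fitness(cs)
        if f < best_fit:                              # greedy acceptance
            best, best_fit = cs, f
    return best, best_fit

sphere = lambda x: float(np.sum(x**2))
x0 = np.array([0.3, -0.2])
xb, fb = chaotic_local_search(x0, sphere, lb=-1.0, ub=1.0)
print(fb <= sphere(x0))
```

Because candidates are accepted greedily, the returned fitness can never be worse than that of the starting point, which matches the "replace Fbest only if CS is superior" step in Algorithm 3.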