Journal Pre-proof OHDA: An Opposition based High Dimensional optimization Algorithm Manizheh GhaemiDizaji, Chitra Dadkhah, Henry Leung
PII: DOI: Reference:
S1568-4946(20)30125-3 https://doi.org/10.1016/j.asoc.2020.106185 ASOC 106185
To appear in:
Applied Soft Computing Journal
Received date : 2 August 2019 Revised date : 20 January 2020 Accepted date : 19 February 2020 Please cite this article as: M. GhaemiDizaji, C. Dadkhah and H. Leung, OHDA: An Opposition based High Dimensional optimization Algorithm, Applied Soft Computing Journal (2020), doi: https://doi.org/10.1016/j.asoc.2020.106185. This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
© 2020 Published by Elsevier B.V.
Journal Pre-proof
*Highlights (for review)
An Optimization algorithm suitable for high dimensions is proposed. Opposition based learning is employed for fast convergence of the algorithm. The proposed algorithm is tested in 3 test suits namely CEC2005, CEC2014 and CEC2017 in 30, 100, 1000 and 2000 dimensions . The proposed algorithm is compared with 11 other algorithms. Results verify the better performance of the proposed algorithm.
Jou
rna
lP repro of
Highlights
Journal Pre-proof
lP repro of
*Manuscript Click here to view linked References
OHDA: An Opposition based High Dimensional optimization Algorithm Manizheh GhaemiDizajia , Chitra Dadkhahb , Henry Leungc a
Faculty of Computer Engineering, K. N. Toosi university of Technology, Tehran, Iran. Email:m
[email protected] [email protected] b Faculty of Computer Engineering, K. N. Toosi university of Technology, Tehran, Iran. Email:
[email protected] c Department of Electrical and Computer Engineering, University of Calgary, Alberta, Canada, Email:
[email protected]
Abstract
Jou
rna
One of the challenging problems to-date is to deal with high dimensional data. This problem is getting more severe as the data gathering tools are progressing. This paper proposes an opposition-based optimization algorithm suitable for high dimensions. Its novelty is the angular movement according to a few selected dimensions that makes it effective to search in high dimensions. Accordingly, the Opposition-based High Dimensional optimization Algorithm (OHDA) is proposed. Its performance is studied using functions from CEC2005 for high dimensional data including 1000D and 2000D. In addition, the performance of the proposed algorithm is tested with CEC2014, which is more complicated than CEC2005 and CEC2013. The performance of OHDA is also examined using CEC2017 constraint optimization test suit. The comparing algorithms are CA, ICA, AAA, ABC, KH, MVO, WOA, RW-GWO, B-BBO, LX-BBO and LSHADE44-IEpsilon. The results verify that the proposed algorithm outperforms some conventional optimization algorithms in terms of their accuracies. The efficiency of employing opposite points in optimization is also validated in this paper. Keywords: High Dimensional Optimization, Evolutionary Algorithms, Opposition-based Learning, Constraint Optimization.
Preprint submitted to ?
January 20, 2020
1. Introduction
lP repro of
Journal Pre-proof
Jou
rna
With the help of improved technological instruments, data gathering can be done with relatively low cost; but at the same time it may result in some problems, such as warehousing, management and analyzing [1, 2]. Data increases in both dimension and samples and this paper investigates optimization in high dimensions. High dimension is a challenge in optimization tasks when the dimension becomes a thousand or more. Recently many meta-heuristic optimization algorithms have been introduced while the majority are nature-inspired. Glow Worm Optimization (GWO) is one of the newly introduced swarm algorithms based on the ability of glowworms in attracting the preys with their luminescence emission capability [3]. Another algorithm based on plants is Artificial Root Foraging Optimization (ARFO) algorithm inspired by foraging behavior of roots in finding the high nutritious regions [4]. Water Cycle Algorithm (WCA) inspired from the flow of rivers and water to the sea, is a swarm based algorithm [5]. Small pieces of information are raindrops which combine to form streams and then other gatherings like rivers and sea. Cat Swarm Optimization (CSO) [6], Social Spider Algorithm (SSA) [7], Wolf Search Algorithm (WSA) [8] and Grey Wolf Optimizer (GWO) [9], and Dolphin Partner Optimization [10] are few samples of swarm algorithms. Other groups of optimization algorithms are evolutionary ones with different characteristics. Biogeography Based Optimization (BBO), as a discrete optimization algorithm is inspired from the migration of species among habitats [11]. Another evolutionary algorithm is Group Counseling Optimizer (GCO) inspired from the way human beings solve their problems by counseling with others [12]. Forest Optimization Algorithm (FOA) simulates seed dispersal of trees in the forests and it has both global and local seed dispersal methods in addition to following the survival of the fittest rule [13]. FOA uses the information of fitter trees directly and it has the mechanism to remove the less fit individuals in the search area. Other well known evolutionary algorithms include Genetic Algorithm (GA) [14], Differential Evolution (DE) [15], and Genetic Programming (GP) [16]. Some algorithms make use of both evolutionary and swarm operators. To name a few, there are Bacterial Foraging Algorithm (BFA) [17], Imperialist Competitive Algorithm (ICA) [18], Lion Optimization Algorithm (LOA) [19], Fish School Search (FSS) [20], Artificial Algae Algorithm (AAA) [21], and BrunsVigia Optimization Algorithm (BVOA) [22] based on the behavior of Brunsvigia flower simulating seeding and head tumbling behavior of 2
lP repro of
Journal Pre-proof
the flower to reproduce itself. Some other new algorithms are Moth-Flame Optimization Algorithm (MFO) [23], Dragon-Fly Algorithm (DA) inspired from dragonflies’ behavior [24], and Polar Bear Optimization (PBO) algorithm inspired from the hunting behavior of polar bears to survive in arctic [25]. In this paper, the opposite-based learning (OBL) is combined with high dimensional optimization. The overall structure of the paper is as follows. Section 2 presents some definitions. Section 3 is devoted to the detailed description of the proposed OHDA. The behavioral analysis of OHDA is provided in Section 4. Contribution of OHDA in high dimensional problems is discussed in Section 5. Simulation results for CEC2005 in 1000D and 2000D, CEC2014 in 30D and 100D and CEC2017 for constraint optimization in 100D are presented in Section 6. Section 7 concludes the paper. 2. Definitions
Jou
rna
The rationale behind opposite-based learning (OBL) scheme is that if an individual is far from near-optimal solution, its anti-solution or individual at the opposite that may have a better fitness value [26]. As a result, some of the methods that just start searching by initial random guess, try to make use of counter-guess or anti-solution and select the better one to continue the search process. In this regard, according to [26], in which it is stated that OBL improves the convergence speed of the algorithms, some optimization algorithms have used OBL to improve the convergence. One definition of opposite solution in optimization is X ′ = A + B − X, where the opposite of X in the search range of [A, B] is X ′ . The schematic representation of these concepts is depicted in Figure 1. Other definitions of opposite points are also introduced in Table 1.
Figure 1: Demonstration of different definition of opposite points. X qo , X qr and X cb are opposite points as Table 1 and search range is [A,B] with center c [27]. 3
lP repro of
Journal Pre-proof
Table 1: Definition of opposite points in range [A,B] for point X ; c is the search center and X qo , X qr and X cb are opposite points [27]. Method Opposition Quasi-opposition Quasi-reflection Center-based sampling
Definition Xo = A + B − X X qo = rand(c, X o ) X qr = rand(c, X) X cb = rand(X, X o )
A good optimization algorithm should be able to conduct fair exploration and it should cross the infeasible regions to better search the space. A constraint optimization problem is defined as (1), where the objective function ∏ is f(x) and x is the decision vector. Decision space is denoted by S = D i=1 [Li , Ui ] and Li , Ui , D are lower bound of xi , upper bound of xi , and dimension of the problem respectively. gi (x) ≤ 0 is the i-th inequality constraint totally q constraints, and hj (x) = 0 is the j-th equality constraint totally p constraints. minimize f (x), x = (x1 , ..., xD ) ∈ S subject to gi (x) ≤ 0, i = 1, ..., q hj (x) = 0, j = 1, ..., p
(1)
Jou
rna
There are different strategies for handling the constraints: penalty methods, repair approaches, separatist approaches, hybrid approaches, Lagrangian algorithm, and projection methods [28, 29, 30]. There are different methods in each of these categories and more details can be found in [30]. For example in the penalty approach, a constraint problem is transferred to an unconstrained one by adding a penalty value according to the violation to the objective value. In the separatist approach the objective function and the constraints are dealt separately. One of the methods for calculating constraint violation is to find the overall constraint violation value defined as (2) in [28]. If ϕ(x) is equal to 0, the individual x is feasible, otherwise it is considered as infeasible. ϕ(x) = Σqi=1 max(gi (x), 0) + Σpj=1 max(|hi (x)| − σ, 0)
(2)
In this paper the improved ϵ constraint handling (IEpsilon) method [28, 31] is used as with the following rules where the value of ϵ is changed adaptively according to the number of function evaluations. 4
lP repro of
Journal Pre-proof
• When the overall constraint violations of two individuals are both lower than or equal to ϵ, the one with the smaller objective is considered to be a better one. • When the overall constraint violations of two individuals are equal, the one with a smaller objective value is better than the other. • When at least one of the constraint violations of two individuals are larger than ϵ, the one with a lower constraint violation is better than the other. 3. The proposed Opposition-based High Dimensional optimization Algorithm (OHDA)
In this section the proposed Opposition-based High Dimensional optimization Algorithm is presented. After outlining the basic idea, more details are presented at the following subsections.
Jou
rna
3.1. The basic idea The proposed Opposition-based High Dimensional optimization Algorithm (OHDA) applies a limited number of functions on a population for a limited number of times in order to reach near-optimal solution(s). OHDA uses four functions as I : RD → RD for information sharing, A : RD → RD for angular movement, O : RD → RD for opposite point generation and N : RD → RD for population limiting. After applying these functions for a limited number of times, an individual is selected as the best member of the population according to the fitness value. This procedure is presented in Figure 2. In Figure 2, if the population at time t is Pt , then applying information sharing operator in population level will result in Pt+∆t as an intermediate population and consecutively applying the angular movement operator with the opposite point operator will reach to Pt+2∆t as the second intermediate population. At the end of each iteration, population limiting stage will result in Pt+1 as the next generation population with the same population size as Pt . The previous population will be replaced with the current population members. The process of applying the operators on the population members is given as (3):
5
lP repro of
Journal Pre-proof
Figure 2: Diagram of applying operators on population members. Pt is population at time t, Pt+∆t and Pt+2∆t are intermediate populations and Pt+1 is the next generation population.
xi (t + ∆t) = I(xi (t)) xi (t + 2∆t) = O(A(xi (t + ∆t))) xi (t + 1) = N (xi (t + 2∆t))
(3)
Jou
rna
The overall flowchart of the algorithm is depicted in Figure 3. Table 2 shows the pseudo code of the introduced OHDA. The whole process of OHDA on a population with three members is shown in Figure 4. The optimal point is denoted by letter g inside the circle and ignored members according to fitness values are shown with a red cross over them. Information sharing is shown symbolically with double sided arrows and opposite points as stars in Figure 4. From Figure 4, the population members move toward the best position in the search space as the result of applying the operators. Although in each step there are some newly generated members shown as light blue circles, at the end of information sharing and angular movement stages, the population size remains fixed by excluding the less promising members. The novelty of OHDA resides in the angular movement stage, which provides an effective search in high dimensional problems. The parameters of algorithm are first initialized and the initial population is formed with randomly generated values. Considering each member as a D dimensional vector like xi = [x1i , x2i , ..., xD i ], i ∈ [1, M ], population will be a matrix like (4) with M as the population size. Each member has a fitness 6
lP repro of
Journal Pre-proof
rna
Figure 3: Flowchart of the proposed OHDA.
Jou
value as f (xi ). After initialization, other functions including information sharing and opposite learning, angular movement and population limiting iteratively alters the population until the stopping criteria is met. x11 . . . xD 1 P opulation = ... . . . ... (4) x1M . . . xD M
7
lP repro of
Journal Pre-proof
Table 2: Pseudo Code of OHDA.
rna
Procedure OHDA(member selection, #Dim to move, Local radius, #T umbling, L) Initialize parameters While stop condition is not satisfied do: Share information between the individuals: for i=1: N um population/2 Randomly choose two individuals Exchange the values of some variables between them according to one of (5), (6), (7), (8) end for Replace the new members with the worst members of population if better Angular movement and opposite learning: for each population member Choose an initial angle Choose two random dimensions Move the individual according to this angle and (9) Generate the opposite point according to (10) Select the better one Locally search the vicinity of the best individual end for Population limiting: Adding some new members Sorting the population, preserving the better ones and ignore the rest. end While Return the best so far individual
Jou
3.2. Information Sharing Information sharing among individuals is essential in order to reach near optimal solutions. Since the building blocks of an optimal individual may be spread out to the less fit individuals, gathering them is a key to find an optimal or a near optimal solution [32]. Similar to ants communicating through pheromone, flowers share information via pollen with the help of pollinators and other natural trends such as wind and water. Through information sharing, members take advantage of the encoded information of other individuals. In this stage two parameters should be considered: 1. The number of the population take part in this stage (con8
lP repro of
Journal Pre-proof
Figure 4: The whole process of OHDA on a population with 3 individuals for one
rna
iteration. (a) The initial population. (b) Population during information sharing. Members with a cross on them will be ignored for the next step. (c) For each individual, a new member by angular movement as well as its opposite point are generated in this stage. (d) Population after non-promising members are ignored. (e) Population after population limiting which includes generating new members and ignoring non-promising ones.
Jou
trolled by member selection parameter). 2. The number of the variables of the selected members that share their information (controlled by Γ parameter). For assigning the values for these parameters different strategies can be considered. The simplest form for two randomly selected individuals is to share their information encoded in their variables in both sides. The number of variables for information exchange is specified with a predefined threshold value namely Γ. Figure 5 shows the one possible type of information sharing for OHDA and (5) provides the rule of sharing; r is a random value in range [0, 1], A and B are two randomly selected individuals from the population with D dimensions. This stage provides the chance for every member of the population to examine other members for improvement, as there is no priority in selecting the members. At the end of this stage, some new members are 9
lP repro of
Journal Pre-proof
Figure 5: One possible strategy for Information sharing of OHDA. Both random individuals give and receive information.
added to the population and the whole population is sorted according to the fitness value. The better individuals are preserved for the next stage and the rest are ignored to save the storage. There can be a controlling mechanism on the number of population members taking part in information sharing and this number is specified as a percentage namely member selection. The optimal value for this parameter is specified in Section 6 according to simulations. { Ai ↔ Bi if r > Γ, i ∈ {1, 2, ..., D} nochange OW
(5)
Jou
rna
There are different choices for sharing the information between the population members. In OHDA, individuals just share the value of their variables without any concern of information loss. This can be resolved by exchanging the combination of variable values so that there will be a chance of keeping the possible dependencies among the variables. Figure 6 illustrates these strategies schematically where A and B are two randomly selected individuals and the arrows show direction of information flow. According to different information sharing methods, different transfer rules can be applied as (6) and (7);r is a random value in range [0, 1] and Γ is a predefined threshold value. { Bi if r > Γ, f (B) < f (A), i ∈ {1, 2, ..., D} Ai = (6) Ai OW { Besti Ai = Ai
if r > Γ, i ∈ {1, 2, ..., D} OW 10
(7)
lP repro of
Journal Pre-proof
a
b
Figure 6: Different information sharing methods. a: Better one among two randomly selected ones give information (f (A) < f (B)). b: The best one gives information.
rna
In addition to information sharing using simple copy between individuals as (5), in this article other method considering the best population member is also used. The formula for such information sharing is as (8). A and B are two randomly selected members. F is a value drawn from Cauchy distribution with (µ, σ) as mean and standard deviation respectively. best denotes the best member of the population. indiv(r1) and indiv(r2) are two randomly selected individuals from the population. Using the information of the best member as well as other individuals results in better information sharing and wise use of the best so far member of the population and accordingly better performance in real problems. The simulation results of Section 6 verifies this improvement. if rand > 0.5 Ai ↔ Bi (8) Ai = Ai + F ∗ (best − Ai ) +F ∗ (indiv(r1) − indiv(r1)) OW
Jou
The computational complexity of information sharing stage namely O(Info. sharing), will be O(N ) ∗ Γ ∗ O(D). N is the number of population, D is the dimension of the problem and Γ is the probability of each variable taking part in information sharing. 3.3. Angular Movement in OHDA. In this phase of the algorithm, individuals take part in an angular movement. This is done for each member of the population by simply selecting two random dimensions for a limited number of times and by applying simple rules to move the individual. In order to simulate this process, some 11
lP repro of
Journal Pre-proof
Figure 7: Angular movement of an individual from point A to point C with step sizes l1 and l2 and angles α and β.
parameters should be initialized. Parameters related to angular movement are: number of dimensions take part in angular movement (#Dim to move), number of steps in individual movement (#T umbling), radius for local search (Local radius ) and length of movement (L). These parameters can be predefined or chosen adaptively during the optimization. Movement process for a 2-dimensional problem in two consecutive steps of lengths L1 and L2 with angles α and β is shown in Figure 7. Although opposition-based learning takes place in this stage, it is ignored schematically in Figure 7 for simplicity. Movement equations for one step from point A to B is shown in (9), where L is the movement step size, and x and y are coordinates of the points. A′x = Ax + L × cos α A′y = Ay + L × sin α
rna
(9)
Jou
In this stage, for each population member, number of #T umbling new members are created. Considering L as the total length of movement in #T umbling steps, there should be a percentage value according to which each stage of movement takes place. This percentage vector is fixed at [.5, .7, 1] for the experiments in this paper with the intention that local search will help the algorithm to reach the sub-optimal solution(s). In this paper we have assumed that each movement step introduces one new member each time. Angular movement includes another operation related to opposite learning so that after each angular movement step, opposite point of the newly generated member is also examined. This approach is found to speed up the convergence and to reduce the time complexity specially in high dimensions. To apply opposite point concept to OHDA, after angular movement and forming a new individual, its anti-solution in the opposite direction according 12
lP repro of
Journal Pre-proof
Figure 8: Demonstration of an angular movement with angle α and its antisolution (A¯′ ). to the defined angle is generated. Decision is made to select the best one based on the fitness value. This process is simulated by adding 180◦ degrees to the angle in this phase as shown in Figure 8, where α is the selected angle, A′ is the newly formed individual as the result of angular movement from A and A¯′ is its anti-solution at the opposite direction as (10). A¯′x = Ax + L × cos (α + 180◦ ) A¯′y = Ay + L × sin (α + 180◦ )
(10)
Jou
rna
This process will be applied to all the members and accordingly there must be a controlling mechanism to preserve the better members and to ignore the rest. After angular movement and forming the opposite points, selection is made between them and the initial individual. This process is shown previously in Figure 4 where the angles are considered counterclockwise. Figure 9 represents the angular movement on an initial population with four members. The direction for movement is considered right side of the coordinate system. Length of movement in Figure 9.b is less than the length in Figure 9.d. Comparing the population member positions in Figure 9.a and Figure 9.e, the whole process schematically verifies that angular movement helps the population go toward fitter spaces. It is noted that if the algorithm does not care about the movement length adjustment and new members are not added to the population as the result of other processes, the algorithm may get stuck in local optima in multi-modal optimization problems. The complexity of angular movement stage namely O(Ang.movement), is O(N ) ∗ O(D); N is the population size and D is the dimension.
13
a
b
d
rna
c
lP repro of
Journal Pre-proof
Jou
e
Figure 9: Process of angular movement for one time with two steps, L1 < L2. a: Population after information sharing. b: newly generated members in addition to the opposite points. Direction is the right side. c: population after preserving the better members. d: angular movement and opposite point generation on the previous population. e: Population after preserving the better members.
14
lP repro of
Journal Pre-proof
Table 3: Parameters of OHDA with their brief description. Parameter name member selection #Dim to move Local radius #Tumbling L
Explanation Percentage of population take part in info. sharing Number of dimensions take part in angular movement Radius for local search Number of steps in movement Length of total movement
Corresponding phase Information sharing angular movement angular movement angular movement
angular movement
3.4. Population limiting. In the optimization, there are new individuals that are added in some points to the population to increase the diversity and to escape from local optima. Accordingly, at this stage, some new randomly generated members are added to the population. To avoid state space explosion, the following approaches are considered. 1. At the end of each iteration, the population is simply sorted and the better ones are remained. The others are excluded from the population.
rna
2. At the end of each stage the population is sorted and population size is kept fixed.
Jou
For simplicity, parameters, their brief explanations and the stages where they are used in OHDA are summarized as Table 3. The computational complexity of this stage depends on the sorting algorithm used and the number of the population being sorted. At the worst case, if the whole population of size N is being sorted at each iteration, then the complexity of this stage namely O(P op.limiting) will be O(N logN ) with Quick-sort algorithm. 3.5. Computational Complexity of the proposed OHDA The computational complexity is reported as the time and memory cost of applying the algorithm for solving a problem. If the number of iterations of applying the algorithm is shown by I, the population size is N , calculating the fitness value of each member is as O(f it), the member selection percentage 15
lP repro of
Journal Pre-proof
is R, the probability of each variable is Γ, and the dimension of the problem is D, then the computational complexity of OHDA is calculated as (11). O[I × O(f it) × [R × O(Inf o.Sharing)] + O(Ang.movement) + O(P op.limiting)]
(11)
As the simulation results of Section 6.4 show, OHDA algorithm needs lower population size comparing the other algorithms, and it can save a lot of memory. Fixing the value of some parameters like member selection (R) and Γ at 100% and 0.5 respectively, will simplify equation (11) as (12). O(I × O(f it) × [O(N ∗ D) + O(N ∗ D) + O(N logN )])
(12)
4. Behavioral Analysis of OHDA.
In this section, the behavior of OHDA algorithm is studied with more details and the contribution of OHDA on high dimensional data will be discussed. Analysis on numerical and simulation results are presented in Section 6.
rna
1. Initialization: Like other optimization algorithms, OHDA starts searching with uniformly distributed random population. If prior information is available, different distribution functions can be chosen to form the initial population. Also the use of opposite points in the initial population generation is beneficial. The population is formed with randomly generated values in the search range and each region is behaved equally.
Jou
2. Information sharing: OHDA uses the simple copying of some variable values. If the individuals are chosen randomly to take part in this stage, exploration of search space will occur. It is worth to note that if the population had already lost its diversity, this stage will have the role of exploitation. As a result, the impact of this stage varies during the algorithm execution and it depends on the diversity of current population. In the second option for information sharing, there is a good chance of finding informative blocks through employing the best member and two other randomly selected population members. This strategy is used for simulations in constraint and difficult problems presented in Section 6.
16
lP repro of
Journal Pre-proof
3. Angular movement: This stage of the algorithm has the impact of exploration or global search with random angular movements in the search space. It can take some steps forward and make decision between many options and not just the point itself and its opposite. After angular movement, another stage is performed to locally search the vicinity of the-far-so-best individual with the help of Gaussian distribution. The use of opposite points in this stage is to increase the convergence speed of the algorithm and its impact is to change the direction of movement if the current search direction is not promising in comparison with the opposite direction.
4. Dominant operator: The dominant process is selection which takes place in all of the stages. It is different from other algorithms like GA. In other cases, PSO algorithm takes advantage of swarm intelligence with the contribution of self-best and global best members and so on. Another dominating operator of the proposed OHDA algorithm is angular movement which makes the movement in the search space.
rna
5. Performance in discrete feasible spaces: One simple ways to accept an individual is its feasibility. The ability of an optimization algorithm in these problems is well-proved if the population members could pass through the infeasible regions to search for possible feasible solutions surrounded by infeasible ones. According to the behavior of the movement operator of OHDA, it is expected that the algorithm is able to make constructive movements to further investigate the search space.
Jou
6. Analysis of opposite points: Without loss of generality we assume that the space is two dimensional and after moving one individual with a random angle in range [0, π/2], the opposite point will be examined. If we consider the initial point as A, by applying the angular movement as given in (9), we will get to point A′ . We define the opposite point by ¯ (10) shown as A´ and it is calculated by adding 180o degrees to the selected angle α while considering the initial point A as the central point. ¯ Assume that in the coordinate system A´ > A´ considering both x and y coordinates (assume that the function to be optimized is increasingly monotone and the goal is minimization), We try to prove that the op¯ timal point (Ao ) is near to the opposite point (A´ ) in comparison with 17
lP repro of
Journal Pre-proof
¯´ Figure 10: Illustration of the position of Ao according to A´ and A.
¯´ A´ as in Figure 10. If we assume that Ao is near to A, it should be less o than A (A > A ) because the midpoint is considered to be A. ¯ |A´ − AO | < |A´ − AO | ¯ = (A´ − AO )2 − (A´ − AO )2 < 0 ¯ ¯ = (A´ − AO + A´ − AO )(A´ − AO − A´ + AO ) < 0 ¯ ´ A¯´ − A) ´ <0 = (A´ − 2AO + A)(
(13)
rna
As the range of angles is [0, π/2], (10) can be rewritten as (14): A¯′x = Ax − L × cos (α) A¯′y = Ax − L × sin (α)
(14)
Applying (9) and (14) to (13) we have (15):
Jou
(2Ax − 2AOx )(−L cos(α)) < 0 α ∈ [0, π/2] =⇒ L cos(α) > 0 =⇒ (2Ax − 2AO x) > 0 =⇒ Ax > AO x =⇒ (2Ay − 2AOy )(−L sin(α)) < 0 α ∈ [0, π/2] =⇒ L sin(α) > 0 =⇒ (2Ay − 2AO y) > 0 =⇒ Ay > AO y
18
(15)
lP repro of
Journal Pre-proof
Figure 11: Convergence of F6 and F15 from CEC2005 in 50 dimensions.
As a result, in order to have the optimal point near to the opposite ´ we have A > Ao where A is the midpoint. point and by definition of A,
Jou
rna
7. Convergence of the algorithm: For an algorithm to perform well in solving problems, it should preserve balance between exploration and exploitation. If the algorithm lacks exploration, it will search in the local areas and getting stuck in local optima is inevitable which will result in premature convergence. If the algorithm lacks exploitation feature, it will behave like random search without preserving any heuristics, which is not suitable for limited time and memory. Investigating the performance of an algorithm in dealing with composite functions will better reveal the convergence behavior of the algorithm. In this paper to show the convergence of the algorithm, two test functions from CEC2005 are selected with different levels of complexity (F6: Multimodal, F15: Hybrid Composition) in 50 dimensions and their convergence diagrams are plotted as Fig. 11 which verifies the convergence of the algorithm.
5. Contribution of OHDA in high dimensional problems. Although dealing with high dimensions is highlighted enough in computer research community, most of the newly introduced optimization algorithms
19
lP repro of
Journal Pre-proof
Jou
rna
are tested for low dimensions like 30, 50, 100. This motivates us to investigate the high dimensional problems. One of the methodologies that helps algorithms reach better regions is local search of the algorithm in addition to global search [33]. Sometimes a good local search makes better progress in the searching process. This is also important in high dimensional problems where there may be lots of local optima and limited resources is one of the concerns in these problems. One of the characteristics of the proposed algorithm is its special form of local search. In the angular movement phase of the algorithm, which is done by angular movement and simulates a kind of virtual greedy forward move, the best member of the population does local search with the help of Gaussian distribution while the best individual is considered as the center of the distribution and Local radius parameter is defined as the standard deviation of distribution. At first, not all the members are examined locally and this saves the computational cost. On the other hand, the vicinity of the best member is further examined to reach possible better individuals. Another characteristics of the proposed algorithm that makes it effective in high dimensions is that, the number of selected dimensions for movement does not grow so much while increasing the problem dimension. This way, time complexity will not increase so much, which is the case with some other optimization algorithms. This property is different from other algorithms like PSO in which all the problem dimensions take part in the search process. Being proved that opposition learning increases the convergence speed of optimization algorithms, introducing new definition of opposite point suitable to the algorithm operators will increase the convergence speed of the proposed OHDA. This improvement is needed for high dimensions. According to the simulation results of the next section, the proposed OHDA algorithm outperforms the other algorithms for problems with 1000 and 2000 dimensions. 6. Simulations and results
In this section, the performance of OHDA is studied on CEC2005, CEC2014, and CEC2017 test functions and the results are compared with some other algorithms. In order to better represent the effect of the operators on the population members, Figure 12 shows the population with five members in one iteration. Apparently, the whole population moves toward the global best position while applying the operators. 20
lP repro of
Journal Pre-proof
100 80
-298
60 40 20
-299
y
f(x,y)
-298.5
-299.5
0
-20 -40
-300 100
-60
50
100
50
0
-80
0
-50
y
-50 -100
-100
-100 -100
x
-80
-60
-40
-20
0
20
40
60
80
100
x
(a) Search space 100
100
80
80
60
60
40
40 20
0
y
y
20
-20
0
-20
-40
-40
-60
-60
-80 -100 -100
(b) Initial population members.
-80
-80
-60
-40
-20
0
x
20
40
60
80
-100 -100
100
-80
-60
-40
-20
0
20
40
60
80
100
x
rna
(c) Population members after informa- (d) Population members after population tion sharing and angular movement. limiting. 100 80 60 40
y
20 0
Jou
-20 -40 -60 -80
-100 -100
-80
-60
-40
-20
0
20
40
60
80
100
x
(e) Population members after controlling the size.
Figure 12: One iteration of OHDA on a population with five members. 21
lP repro of
Journal Pre-proof
6.1. Benchmark test functions In order to evaluate the performance of the proposed OHDA algorithm, we conduct experiments on the benchmark functions in CEC2005 [34] which includes 25 functions of 3 types namely 5 unimodal, 20 multi-modal (including 11 hybrid composition functions, 7 basic and 2 Expanded functions). Another test suit used here is CEC2014 [35] which contains more complicated functions than CEC2005. CEC2014 contains 30 functions of 3 unimodal functions, 13 simple multi-modal, 6 hybrid functions and 8 composite test functions. This paper also includes the optimization results for CEC2017 [36] constrained problems to further study the proposed algorithm’s behavior. This test function suit contains 28 functions with different features like separability and rotation. To save the space, we have excluded the details for these test functions and interested researchers are addressed to the reference papers [34, 35, 36].
rna
6.2. Different choices for information sharing. In order to select the best strategy to share information between population individuals, Shifted Sphere test function and Shifted Rotated Expanded Scaffers Function are selected to do the experiments with 50 and 100 dimensions [34]. The test functions are unimodal and composite functions respectively with search range [−100, 100]. The values of parameters for these tests are given in Table 4 and results of experiments are shown in Table 5. Information sharing strategies are summarized as follows to better reference the information sharing options. The parameter value for Γ is 0.5 for the experiments.
1. Strategy 1: The better individual among two chosen members gives the information to other member.
Jou
2. Strategy 2: The best member of the population always gives the information to others. 3. Strategy 3: Both selected individuals share their information in both directions. 4. Strategy 4: The combination of the best member and two other randomly selected members is used as (8).
22
lP repro of
Journal Pre-proof
Table 4: Parameter values of OHDA for the experiment to choose the information sharing strategy. Parameter P op size #T umbling num neighbor #Dim to move
Vaule 20 3 3 5
Parameter member selection Local radius L
Vaule 100% 0.05 10
Table 5: Average ± standard deviation (std) of fitness value for f1 and f14 from CEC2005 [34] in 20 independent runs for 50 and 100 dimensions. F1 F14
50D 100D 50D 100D
Strategy 1 -447.52 ±(0.55) -423.10 ±(32.95) -276.93 ±(0.40) -252.27 ±(0.47)
Strategy 2 -446.44 ±(0.71) -402.83 ±(28.11) -276.85 ±(0.30) -252.08 ±(0.35)
Strategy 3 -447.88 ±(0.47) -428.37 ±(10.20) -277.00 ±(0.36) -252.31 ±(0.54)
Strategy 4 -446.27 ±(0.79) -343.34 ±(247.30) -277.11 ±(0.45) -252.15 ±(0.35)
Jou
rna
According to the results of Table 5, the third strategy in both selected test functions which are unimodal and composite functions have better performance for dimensions of 50 and 100. The reason may be that, the third strategy exchanges information between the individuals and avoids information loss but performs information transfer. Also all the individuals have a chance to take part in this sharing and small pieces of information from even less fit members may gather to form the building blocks of the best so far individual. In this paper the simulations use the third or forth strategy for information sharing. After selecting the members for information sharing, the number of the variables which share their information should be specified first. This number is specified with Γ parameter in the information sharing equations of section 3.2. As the result, if Γ is 0.5 for each pair of the selected members to share information, a mask vector with D dimension is generated uniformly with zeros and ones which reflect the equal chance for producing 0 or 1. Different values for Γ parameter will change the chance of producing ‘1’ in the mask vector. Then the randomly selected individuals will exchange information when the corresponding mask value is one; otherwise no information exchange happens for that variable. For example if the mask vector is m = [m1 , m2 , ..., mD ] for a D dimensional problem, the rule for information 23
lP repro of
Journal Pre-proof
sharing will be as (16), where individual1 and individual2 are two randomly selected members of the population and i ∈ [1, D]. The simulation results for more complex functions of CEC2014 and constraint problems of CEC2017 uses the forth strategy discussed at Section 3 with the use of Cauchy distribution. { individual1 (i) ↔ individual2 (i) m(i) = 1 (16) continue OW
Jou
rna
6.3. Optimal values for the parameters of OHDA. In order to find the value of parameters, we have chosen two different functions from CEC2005 test functions which are unimodal and multi-modal [34]. F1 is a unimodal function in the range of [-100, 100], F6 is multi-modal in the range of [-100,100]. The algorithm is run 25 times for each experiment and the reported values in Table 6 are the average results. Each experiment alters the value of one parameter each time and the rest are fixed. The experiments are done for a dimension of 100 with a population size of 20. It is worth to note that the order of choosing parameters to find the (near) optimal values influences the results. Table 6 shows the experimental results on the selected functions according to the different parameter values. From Table 6, increasing the member selection parameter improves the algorithm performance. As the result, member selection rate will be fixed at 100% and it won’t be a parameter to be tuned. Also, the value of Local radius, L and #tumbling affect the algorithm performance and they are related to the search space range. As a result, the value of L parameter will be chosen according to the search range. To better analyze the effect of L, Table 7 shows the experiment results for different values for L. For functions F1 and F6 with search range of [-100, 100], L is chosen from {1, 2, 5, 10, 20, 30} and for F15 with [-5,5] search range L is tested with {0.05, 0.1, 0.5, 1, 2, 3} values. As the table 7 suggests, choosing both large and small values for L degrades the algorithm performance and it is problem dependent. The optimal value for L in F1 and F6 is 10 and it is 2 for F15. For experimenting on the dim to move and #tumbling parameters, we have chosen F1, F6, F7, F13, and F15 with L fixed at 10 for F1, F6 and F7 with [-100,100] search range and 2 for F13 and F15 with [-3,1] and [-5,5] search ranges. Table 8 shows the average fitness values for 25 runs on F1, F6, F7, F13 and F15 from CEC2005 in 50D and 100D to find optimal value for parameters dim to move and #tumbling. As the experimental results show, 24
rna
25
F6 ([-100,100])
F1 ([-100,100])
-370.36555 -383.25352 -387.61063 -393.27851 -393.53622 1219317 773620 706467 731343
(0.5) (1) (5) (10) (20) (1) (5) (10) (20)
Local Local Local Local Local Local Local Local Local
radius radius radius radius radius radius radius radius radius
29886475.2 225538157 277261217
member selection(100%) member selection(50%) member selection(40%)
F6 ([-100,100])
#Dim to move=25 L=2 #tumbling=3 Local radius=2 #Dim to move=25 L=2 #tumbling=3 Local radius=2
Fixed parameters
#Dim to move=25 L=2 #tumbling=3 member selection=100%
#Dim to move=25 L=2 #tumbling=3 member selection=100%
lP repro of 847.69 1711.265 2949.72732
member selection(100%) member selection(50%) member selection(40%)
F1 ([-100,100])
Average fitness value
Parameter (value)
Function ( search range)
Table 6: Experimental results to find optimal parameter Values of OHDA.
Jou
Journal Pre-proof
lP repro of
Journal Pre-proof
Table 7: Average fitness values for 25 runs on F1, F6 and F15 from CEC2005 in 100D to find optimal Value for parameter L while fixing local radius at 0.5 for F1 and F6 and 0.005 for F15 and #tumbling=3. L 1 2 5 10 20 30
F1 2.28E+03 -3.76E+02 -3.33E+02 -3.76E+02 -3.29E+02 -3.08E+02
F6 1.24E+08 1.04E+06 1.20E+06 9.14E+05 1.75E+06 3.43E+06
L 0.05 0.1 0.5 1 2 3
F15 1.05E+03 1.06E+03 9.42E+02 8.67E+02 7.64E+02 8.55E+02
the optimal value for parameter dim to move is 5 for 50D and it is 10 for 100D. Overall, the algorithm behaves well when the #tumbling is 3.
Jou
rna
6.4. The effect of population size on the algorithm performance and comparison results In this subsection the effect of population size is studied. A benchmark function from CEC2005 is selected and the algorithms are run for 2 different population sizes (5 and 20). The results are reported in Table 9. The parameter values for the algorithms are given in Table 10. Table 9 shows the mean value for fitness function in 25 independent runs. From Table 9, as the population size increases, the performance of OHDA degrades while the stopping criteria is the fixed number of function evaluations. It is because OHDA uses many virtual moves and reaches to the max number of function evaluations. It is noted that the performance for OHDA doesn’t degrade a lot in comparison with other algorithms. That is, reducing the number of population size affects the performance of the other algorithms more than OHDA. As a result, the population size for OHDA will be fixed at 5 and for the other algorithms population size is 20. 6.5. Simulation results and comparisons on OHDA The simulation results for CEC2005, CEC2014 and CEC2017 are provided in this section. Each experiment is conducted on different systems and the comparisons are done with different methods.
26
lP repro of
Journal Pre-proof
Table 8: Average fitness values for 25 runs on F1, F6, F7, F13 and F15 from CEC2005 in 50D and 100D to find optimal Value for parameter dim to move and #tumbling while fixing L at 10 for F1,F6 and F7 and 2 for F13 and F15. D
50
100
50
#tum bling
2
2
3
3
F1
F6
F7
F13
F15
-4.45E+02 -4.45E+02 -4.46E+02 -4.17E+02 -3.89E+02 -2.32E+02 -4.45E+02 -4.46E+02 -4.45E+02 -4.18E+02 -3.97E+02 -3.07E+02
9.76E+03 7.01E+03 9.58E+03 7.66E+05 1.87E+06 9.72E+06 7.20E+03 6.24E+03 9.47E+03 8.66E+05 1.50E+06 8.32E+06
6.07E+03 6.11E+03 6.15E+03 1.23E+04 1.29E+04 1.22E+04 6.08E+03 6.14E+03 6.17E+03 1.20E+04 1.27E+04 1.29E+04
-1.01E+02 -8.93E+01 -7.88E+01 -2.71E+01 2.27E+01 3.93E+01 -1.03E+02 -9.11E+01 -7.56E+01 -3.12E+01 2.28E+01 4.38E+01
5.88E+02 6.21E+02 6.52E+02 7.55E+02 7.77E+02 8.38E+02 6.13E+02 6.47E+02 7.19E+02 7.36E+02 8.23E+02 8.55E+02
rna
100
dim to move 5 10 25 10 25 50 5 10 25 10 25 50
Table 9: The mean value of 25 runs on Function F5 from CEC2005. The parameter values except for population size is as Table 10
Jou
PP PP Alg. |P op| PPP P 5 20 Selected Popsize
OHDA
CA
ICA
ABC
AAA
3.9E+5 4.0E+5 5
4.9E+5 4.3E+5 20
4.4E+5 4.3E+5 20
5.5E+5 5.4E+05 20
4.5E+5 4.3E+5 20
27
lP repro of
Journal Pre-proof
6.5.1. Simulation results on CEC2005 In this subsection the simulation results for OHDA as well as four other selected algorithms for comparisons are presented as tables 11 and 12 in 1000 and 2000 dimensions respectively. The results in tables include the mean value of the fitness function, the minimum value over 25 runs, Standard deviation (Std) and also average time complexity (seconds) of the runs while the stopping criteria is fixed at 10000 function evaluations (FEs) for all simulations in both 1000 and 2000 dimensions. The four comparison algorithms are CA [37], ICA [18], AAA [21] and ABC [38, 39]. These experiments are done with MATLAB 2016a on a system with the following specifications: RAM 7.8, Intelr Core i7 CPU 860 @2.80 GHz, Ubuntu 16.04 LTS.
rna
• Parameter setup for OHDA and other algorithms for comparisons The proposed OHDA algorithm is compared with four other algorithms. Stopping criteria is the fixed number of function evaluations which is 10000 for all simulations. The population size is fixed at 20 for CA, ICA, AAA, ABC algorithms and 5 for OHDA. The parameter values for OHDA and other algorithms for comparisons are given in Table 10. For OHDA two parameters namely Local radius and L are selected according to the size of the search space. As the result, if the search range is as small as [-5, 5], the value of Local radius is 0.005 and for large search ranges of [-100, 100], it is set as 0.1. Although, L vector for angular movement is multiplied by different values according to the search range; L is multiplied by 3 for small ranges and 40 for large ranges. These values can be set adaptively and in later simulations it is selected wisely according to the Euclidean distance of the member from the best member but it needs more simulations. • Discussion on the results of CEC2005.
Jou
According to Tables 11 and 12, it is observed that: – OHDA outperforms in 14 functions over 24 functions comparing 4 other algorithms in 1000 dimensions. – Six out of the 14 functions which OHDA outperforms, are composite functions and 4 of them are multimodal functions. – OHDA outperform in 58.4% of functions, ICA outperforms in 25% of functions and AAA outperforms just in 16.6% of functions in 28
lP repro of
Journal Pre-proof
Table 10: Parameter specification for OHDA and other algorithms for comparisons. Algorithm AAA ABC CA ICA OHDA
Parameter values Shear Force=2, Energy Loss=0.3, Adaptation=0.5 Trial Limit=round(0.6*nVar*nPop), a=1 Acceptance Ratio=0.35, alpha=0.3, beta=0.5 nEmp=5, alpha=1, beta=1.5, pRevolution=0.05, mu=0.1, zeta=0.2 member selection=100%,#Dim to move=10 Local radius=0.005 or 0.1, #Tumbling=3, L=[0.5, 0.7, 1]*3 or 40
If the search range is small as [-5, 5], the value of Local radius is 0.005 and for big search ranges as [-100, 100] it is 0.1. L vector is multiplied by 3 for small ranges and 40 for big ranges.
1000D.
– OHDA has rank 2 in 4 functions and rank 3 in 2 functions in 1000D.
– Considering time complexity, OHDA outperforms in 13 functions. Seven of them outperform the conventional methods in terms of accuracy and time complexity .
rna
– OHDA outperforms in 12 functions over 24 runs comparing 4 other algorithms in 2000 dimensions.
– OHDA outperform in 50% of functions, ICA outperforms in 33.3% of functions and AAA outperforms in 16.7% of functions in 2000D. – OHDA ranks 2 in 7 simulations and ranks 3 in 3 simulations. – As the complexity of the test functions increases, OHDA performs better in comparison with others.
Jou
– From functions F1 to F5 (Unimodal functions), OHDA outperforms 2 algorithms in 2000D. – From functions F6 to F14 (Multi-modal functions), OHDA outperforms 4 algorithms in 2000D. – From functions F14 to F25 (Composite functions), OHDA outperforms 6 algorithms in 2000D. – Considering time complexity, OHDA outperforms in 9 functions. In 5 of them it outperforms both accuracy and time complexity. 29
Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time
OHDA 2147163 1944096 84389.65 5.2 29152358 21881882 4135045 39.89 4.08E+10 3.58E+10 3.71E+09 11.81 73700271 43622950 15572269 44.23 3.94E+05 3.58E+05 1.83E+04 15.32 1.2E+12 9.82E+11 1.08E+11 6.58 524804.9 417515.1 101237.1 15.28 -118.37 -118.392 0.01 7.03 15876 14903.53 531.10 2.95 35896.1 32574.26 1624.03 6.07 1869.30 1824.29 23.56 41.19 2.43E+09 2.14E+09 1.36E+08 229.54 14190.31 11380.11 1613.81 20.72
30
F13
F12
F11
F10
F9
F8
F7
F6
F5
F4
F3
F2
ICA 2603943 2396443 114420.8 1.04 26546841 20000994 3458337 44.82 4.87E+10 4.04E+10 4.42E+09 8.76 84188493 51548756 27224254 45.53 434094.6 404345.4 14831.77 7.70 1.98E+12 1.66E+12 1.86E+11 0.63 446940.8 420553.7 14759.06 3.40 -118.66 -118.71 0.02 7.44 18821.27 16838.27 776.657 1.78 33358.33 30944.05 1249.17 12.40 1941.20 1911.50 16.00 43.73 2.1E+09 1.95E+09 96438929 315.20 47402.94 36982.9 6543.408 14.72
ABC 6259486 5967285 127367.5 0.91 2.4E+08 1.07E+08 75315877 44.13 4.1E+11 3.62E+11 3.19E+10 9.04917 3.04E+08 1.47E+08 89746964 48.104 5.35E+05 5.00E+05 1.73E+04 3.12 7.71E+12 7E+12 3.4E+11 0.61 804608.8 758898.6 30818.92 4.00 -118.37 -118.38 0.01 7.49 25168.31 24542.29 325.0933 1.85 43368.76 41452.39 902.74 7.51 1992.76 1979.40 6.50 52.83 5.56E+09 5.21E+09 1.8E+08 276.3738 314624.2 286437.5 18415.15 14.55 F17
F16
F15
F14
F Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time
OHDA 195.05 193.13 0.80 39.14 2055.62 2015.85 19.85 344.61 1132.35 1019.56 77.16 211.30 1945.28 1846.19 49.41 312.4017 1398.37 1360.75 19.32 215.51 1398.46 1354.021 22.28 335.50 1395.684 1363.262 20.06 312.53 1854.46 1822.18 21.55 654.71 1952.15 1877.42 49.15 410.21
CA 194.50 193.24 0.71 158.60 1818.86 1747.39 48.60 456.24 1822.34 1733.23 54.50 488.06 2087.36 1992.16 51.03 347.2957 1605.02 1565.23 22.03 508.25 1591.63 1507.47 34.68 446.29 1584.1 1526.01 33.24 438.694 2099.80 2072.84 15.98 618.25 2148.68 2047.86 46.76 574.93
ICA 191.62 189.53 1.37 49.82 1237.93 1171.70 61.93 404.11 1702.75 1429.07 115.87 317.49 2101.21 2018.95 40.84 303.5401 1573.39 1514.99 34.40 325.29 1569.21 1514.78 35.52 393.20 1560.91 1490.90 32.88 428.05 2154.76 2058.96 36.28 613.05 2152.25 2047.93 59.12 607.81
AAA 192.95 190.17 1.36 51.29 1579.51 1471.177 57.04 375.82 1726.7 1646.25 40.89 409.65 1902.99 1819.15 35.33 223.605 1549.65 1508.26 20.89 418.07 1512.69 1489.57 13.08 401.60 1530.70 1508.15 12.82 347.40 2107.88 2063.61 24.09 485.48 2153.11 2090.97 29.75 523.05
ABC 197.59 196.58 0.40 49.67 2057.54 1732.221 74.51 366.51 2114.04 2059.21 28.08 429.32 2177.88 2085.36 50.48 314.2525 1804.45 1733.65 46.63 428.17 1807.60 1727.86 38.01 372.54 1788.39 1725.56 36.75 327.03 2360.23 2311.41 25.83 503.14 2372.43 2254.60 66.18 515.05
F25
F24
F23
F22
F21
F20
F19
F18
-
2028.084 1977.93 24.24 402.81 2.02E+03 1.96E+03 25.55 405.75
-
2055.62 2015.84 19.85 344.61 2054.52 2016.37 20.36 346.84
2092.01 2027.07 34.78 344.13 2092.50 2039.165 31.24 464.64
-
1974.21 1935.28 31.02 368.01 1940.58 1918.994 14.72 467.81
-
2164.26 2102.26 31.12 344.00 2166.36 2112.176 25.96 465.54
-
lP repro of
AAA 3527763 3242606 126368.8 2.34 3.8E+07 3.1E+07 3.9E+07 45.86 1.1E+11 9.18E+10 1.08E+10 11.22928 1.09E+08 68444150 21508094 44.51941 436080.7 403129.8 13484.21 10.21 2.84E+12 2.25E+12 4.05E+11 1.67 6.60E+05 6.44E+05 9.00E+03 7.70 -118.40 -118.45 0.02 6.71 16903.25 15879.93 584.04 2.53 31671.96 29383.75 2146.22 9.78 1909.9 1890.86 12.01 48.41 2.49E+09 2.3E+09 89962001 253.5392 36679.25 13655.17 16498.74 14.35
rna
CA 4563900 4205800 287990 591.73 71816224 48099374 19386352 141.32 2.19E+11 1.72E+11 2.14E+10 55.62 1.11E+08 50536965 31862299 136.48 433984.6 407439.1 17927.67 70.90 4.62E+12 3.63E+12 6.84E+11 47.53 754998.3 716518.3 24075.2 63.96 -118.42 -118.49 0.02 104.74 20941.91 19787.68 696.21 90.20 33896.84 31085.47 1483.16 100.17 1906.26 1867.23 20.95 142.69 4.49E+09 3.89E+09 3.06E+08 412.06 281753.8 246461.4 20068.2 111.93
Jou
F1
F
Table 11: The mean, minimum, std of fitness values for 25 runs and mean value for time complexity (seconds) on test functions from CEC2005 in 1000 dimensions. The best value (minimum) is highlighted in bold. Times are in seconds. “-” indicates unavailable results for this dimensions.
Journal Pre-proof
Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time
31
F13
F12
F11
F10
F9
F8
F7
F6
F5
F4
F3
F2
CA 1.0E+7 9326160 412286.3 186.3983 2.56E+08 1.63E+08 62264242 306.08 5.38E+11 4.57E+11 4.67E+10 218.15 4.22E+08 2.55E+08 1.08E+08 318.0825 684744.7 632133.2 27753.13 150.5781 1.21E+13 9.99E+12 1.37E+12 95.31 1.51E+6 1.47E+7 39172.74 138.34 -118.378 -118.414 0.018 127.0072 44364.98 41696.47 1052.993 123.3537 73021.58 68664.05 2013.983 240.262 3818.07 3769.09 26.01 357.01 2.0E+10 1.9E+10 7.1E+8 1258.22 634612.9 571782.1 35278.25 211.9942
ICA 4232761 3471705 307786 1.42 1.08E+08 806+E7 11242664 121.2524 1.84E+11 1.54E+11 1.21E+10 29.65 1.55E+08 1.04E+08 38318395 125.68 680307.4 653641.7 19487.37 36.74224 8.22E+12 6.72E+12 6.75E+11 0.81 1122654 1060766 28983.38 33.56 -118.560 -118.587 0.015196 39.733997 41693.65 40246.89 996.55 1.30 73864.26 69435.62 2718.617 37.61 3873.17 3826.87 26.511 125.131 1.2E+10 1.09E+10 4.84E+08 1048.712 243253.42 189956.07 33170.42 25.72
ABC 1.3E+7 12398193 207473 1.4468 1.06E+09 5.2E+08 3.05E+08 119.104 8.32E+11 7.20E+11 5.89E+10 27.66 1.26E+09 6.13E+08 3.89E+08 115.9701 799758.8 744359.4 26505.63 27.99 1.61E+13 1.46E+13 5.22E+11 0.76 1641031 1570189 39043.02 27.23 -118.346 -118.363 0.07 21.97 51304.12 49889.33 585.86 1.11 88399.41 86504.34 1159.43 31.22 3950.35 3934.242 7.28 100.25 2.4E+10 2.3E+10 5.1E+8 1081.77 682772.8 592500.3 33797.29 24.60 F17
F16
F15
F14
Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time Mean Min Std Time
OHDA 687.94 685.35 1.01 89.90 1444.614 1327.951 59.95 847.08 1505.59 1360.71 64.78 697.467 2060.961 1987.696 27.77514 817.91 1484.434 1450.154 15.9624 717.17 1484.51 1448.947 16.12 839.33 1491.98 1462.80 14.52 805.17 2046.89 2004.48 23.4031 1146.172 2169.8 2071.96 42.77 1175.408
CA 691.50 688.72 1.30 328.71 1892.54 1836.38 27.75 1077.12 1917.27 1822.27 39.09 1061.876 2022.74 1888.69 78.06 987.54 1645.244 1589.335 23.94 1209.795 1633.49 1582.84 32.53 921.73 1644.08 1590.60 26.45 1216.703 2177.682 2128.43 23.89 1431.06 2214.95 2140.09 41.38 1588.18
ICA 685.6247 681.7461 1.68 102.38 1397.47 1275.11 117.81 990.66 1903.81 1788.11 64.85 1128.01 2133.515 2095.68 22.50 1128.637 1630.70 1575.18 34.88 1051.77 1641.136 1580.19 32.24 1103.7 1643.73 1598.36 28.98 838.50 2240.56 2203.48 20.04 1259.62 2264.10 2187.79 54.27 1047.28
AAA 690.83 684.89 2.30 111.76 1767.16 1724.35 18.05 691.79 1847.72 1802.57 22.81 809.39 1940.85 1905.22 20.75 1147.01 1590.6 1555.97 22.32142 1138.087 1607.51 1568.60 20.93 1152.11 1588.374 1542.51 20.08773 1130.635 2107.88 2063.61 24.09 485.48 2263.33 2216.034 30.71 1281.316
ABC 696.83 695.97 0.36 109.02 2094.756 2055.484 14.75 1217.22 2130.09 2082.00 23.97 1172.27 2196.89 2144.91 29.10 1141.35 1810.45 1742.96 32.99 1090.46 1810.39 1742.87 33.01 1111.39 1810.39 1742.87 33.01 1079.13 2378.735 2323.981 22.69 1193.103 2401.293 2284.223 53.70 1388.632
F25
F24
F23
F22
F21
F20
F19
F18
-
2055.22 1999.98 25.44 878.37 2025.996 1987.718 15.09477 921.5047
-
2092.38 2065.96 21.43549 960.22 2105.566 2068.742 27.86 993.87
2116.07 2062.83 25.92 1277.54 2122.343 2097.444 16.43474 1328.317
-
1979.31 1955.82 21.71 1291.32 1962.66 1939.38 14.72 1295.43
-
2166.061 2096.39 31.20 245.2873 2183.91 2146.25 20.56 1330.28
-
lP repro of
AAA 8154800 7403853 314833.6 1.62 1.7E+08 1.25E+08 23804307 119.1029 4.24E+11 3.71E+11 2.96E+10 36.99 5.91E+08 2.67E+08 2.46E+08 128.1487 653525.59 627414.68 13691.72 40.01448 7.20E+12 5.66E+12 6.47E+11 1.83 1495668 1472868 12374.89 35.11 -118.366 -118.406 0.017628 38.7615 39060.21 37364.38 1374.35 1.53 67502.4 64159.06 3003.051 34.45 3846.024 3795.594 23.53 97.6175 1.44E+10 1.38E+10 3.99E+08 979.2758 102501.93 60079.76 28491.42 31.059
rna
OHDA 7106350 6740046 247415.7 21.87 1.23E+08 96817707 18189393 118.35 1.81E+11 1.57E+11 1.37E+10 55.42 2.76E+08 1.87E+08 63686199 123.3692 623265.9 600099.3 14553.52 50.98 5.38E+12 5.05E+12 2.46E+11 21.87 1323996 1099256 136383.5 68.93 -118.343 -118.354 0.005 44.86 36221.34 35408.21 431.43 12.06 80906.54 77184.94 1770.786 65.50 3749.45 3653.93 41.78427 74.35 1.24E+10 1.21E+10 2.91E+08 699.23 61964.76 51518.33 5559.73 60.58
Jou
F1
Function
Table 12: The mean, minimum, std of fitness values for 25 runs and mean value for time complexity (seconds) on test functions from CEC2005 in 2000 dimensions. The best value (minimum) is highlighted in bold. Times are in seconds. “-” indicates unavailable results for this dimensions.
Journal Pre-proof
lP repro of
Journal Pre-proof
– It is worth to note that OHDA outperforms the other algorithms while using less population; OHDA runs with 5 members instead of 20 members like other algorithms in comparisons. – One of the main advantages of OHDA is saving storage with a limited number of population.
rna
• Statistical performance Analysis of results for CEC2005. In this subsection the results of two-tailed Wilcoxin sum-rank test are provided to compare statistically the proposed OHDA with other algorithms in pairwise form. The Wilcoxon signed-rank test is a nonparametric statistical hypothesis test used to compare two related samples to assess whether their population mean ranks differ. The value α = 0.05 for two-tailed test is selected in this regard and the minimum values of functions for 25 runs on each algorithm in 1000 and 2000 dimension are considered for comparison. H0 or Null hypothesis is: “There is no significant difference in the mean value of the results in two selected algorithms or in other words, their mean values are mostly alike”. H1 or the alternative hypothesis is just opposite of H0. P value, which shows the probability of rejecting H0 is compared with the α and rejection of Null hypothesis happens if P value is less than α; otherwise we fail to reject the Null hypothesis, and we say that the result is statistically non-significant. P values for the statistical tests are presented as Table 13; where “+” means that pvalue is less than 0.00001 and it results in the rejection of Null hypothesis when α = 0.05. As it is obvious from Table 13, the simulations are reliable for the majority of runs.
Jou
6.5.2. Simulation results on CEC2014 In this subsection the simulation results for OHDA as well as six other selected algorithms for comparisons are presented in Tables 15 and 16 in 30 and 100 dimensions respectively. The results in 15 and 16include the mean value of the error over 25 runs while the stopping criteria is fixed as 1.2E+5 function evaluations. The six comparison algorithms are KH (2012) [40], MVO (2016) [41], WOA (2016) [42], RW-GWO (2019) [42], B-BBO hl2011) [43] and LX-BBO (2016) [44]. The parameter values for RW-GWO, LX-BBO and B-BBO are as [42] and the parameter values for KH, MVO and WOA are summarized as Table 14. The information sharing method for the simulation results in this subsection and for CEC2017 is based on the best member as 32
lP repro of
Journal Pre-proof
Table 13: P values for Wilcoxin sum rank test for the simulation on CEC2005. “+” means p value <0.00001. Dimension=1000 OHDA OHDA vs ICA vs AAA + + + + + + + + + + 0.14 + + + + + + + + 0.05 + + + + + + + + + + + + + + + 0.08 + + + 0.02 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
PP PAlgorithm OHDA PP PP F PP vs CA
F1 F2 F3 F4 F5 F6 F7 F8 F9 F10 F11 F12 F13 F14 F15 F16 F17 F18 F19 F20 F21 F22 F23 F24 F25
OHDA vs ABC + + + + + + + + + + + + + + + + + + + + + + + + +
Dimension=2000 OHDA OHDA OHDA vs CA vs ICA vs AAA + + + + 0.18 + + + + + + + + + + + + + + + + 0.13 + + + + + + + + + + + + + + + + + + + + + 0.01 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OHDA vs ABC + + + + + + 0.22 + + + + + + + + + + + + + + + + +
rna
well as two other randomly chosen members as discussed in Section 3 and the parameter values for Cauchy distribution are (0.5, 0.1). These experiments are done with MATLAB 2018 on a system with the following specifications: RAM 32, Intelr Core i7 CPU 860 @2.80 GHz, Windows 10. • Discussion on the results of CEC2014 According to Tables 15 and 16 it is observed that: – OHDA outperforms in 11 functions over 30 functions comparing RW-GWO algorithm in 30 dimensions.
Jou
– Three out of the 11 functions which OHDA outperforms are unimodal, 5 of them are simple multi-modal, 3 of them are hybrid and one of them is a composite function. – OHDA outperforms in 18 functions over 30 functions comparing B-BBO algorithms in 30 dimensions. – Three out of the 18 functions which OHDA outperforms are unimodal, 6 of them are simple multi-modal, 4 of them are hybrid and 5 of them are composite functions. 33
lP repro of
Journal Pre-proof
Table 14: Parameter specification for OHDA, KH, MVO, and WOA. Algorithm KH MVO WOA
OHDA
Parameter values Vf=0.02 Dmax=0.005, Nmax=0.01, Sr = 0 WEP Max=1, WEP Min=0.2 parameter “a” decreases linearly from 2 to 0 in Eq. (2.3) and a2 linearly decreases from -1 to -2 to calculate t in Eq. (3.12) of [42] member selection=100%,#Dim to move=10 Local radius=0.05 or 0.1, #Tumbling=3, L=[0.5, 0.7, 1]*ED
ED shows the Euclidean Distance between the current member and the best member of the population.
– OHDA outperforms in 19 functions over 30 functions comparing LX-BBO algorithms in 30 dimensions.
– Three out of the 19 functions which OHDA outperforms are unimodal, 6 of them are simple multi-modal, 5 of them are hybrid and 5 of them are composite functions. – OHDA outperforms in 30 functions over 30 functions comparing WOA algorithm in 30 dimensions.
rna
– OHDA outperforms in 30 functions over 30 functions comparing MVO algorithm in 30 dimensions. – OHDA outperforms in 30 functions over 30 functions comparing KH algorithm in 30 dimensions.
– OHDA outperforms in 15 functions over 30 functions comparing the other three algorithms namely WOA, MVO and KH in 100 dimensions.
Jou
– Three out of the 15 functions which OHDA outperforms are unimodal, 4 of them are simple multi-modal, 3 of them are hybrid and 5 of them are composite functions. – Overall OHDA outperforms well in composite functions. In most of the experiments OHDA outperforms in 5 composite functions over 8 composite functions in both 30 and 100 dimensions.
• Statistical tests on the results of CEC2014 P values for the statistical tests are presented as Tables 17 and 18. As it is obvious from Table 17 and 18, the simulations are reliable for the majority of runs. 34
lP repro of
Journal Pre-proof
Table 15: Comparison of average error in objective function value in 30 dimensions of CEC2014 test problems. Better results are highlighted in bold.
f1
f2
f3
f4
f5
f6
Mean error 4.40E+06 1.01E+07 6.50E+06 8.02E+06 2.56E+07 5.80E+07 2.01E+09 6.03E+01 5.34E+04 2.36E+04 2.23E+05 2.58E+06 4.32E+09 4.04E+10 9.31E+01 1.64E+04 6.04E+03 3.16E+02 3.31E+04 1.22E+04 1.45E+08 9.91E+01 9.99E+01 1.03E+02 3.41E+01 5.85E+02 7.33E+02 1.85E+04 2.00E+01 3.06E+00 3.74E+00 2.05E+01 5.20E+02 5.21E+02 5.21E+02 2.95E+01 1.69E+01 2.00E+01 9.84E+00 6.34E+02 6.27E+02 6.51E+02 5.55E-02 1.76E-01 7.82E-02 2.53E-01 7.01E+02 7.42E+02 2.15E+03 1.33E+02 5.53E+01 4.71E-01 4.38E+01 9.95E+02 9.35E+02 1.36E+03
Function
Jou
f7
Method OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH
f8
f9
f10
f11
f12
f13
Method OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH
Mean error 1.99E+02 7.66E+01 9.11E+01 6.33E+01 1.13E+03 1.14E+03 1.58E+03 3.13E+03 1.26E+04 6.69E+03 9.61E+02 4.94E+03 5.20E+03 1.06E+04 4.18E+03 1.23E+04 6.72E+03 2.68E+03 6.17E+03 7.71E+03 1.07E+04 4.82E-01 1.11E-02 1.11E-02 5.45E-01 1.20E+03 1.20E+03 1.21E+03 4.66E-01 6.55E-01 6.75E-01 2.80E-01 1.30E+03 1.30E+03 1.31E+03 3.05E-01 6.20E-01 3.93E-01 4.23E-01 1.40E+03 1.41E+03 1.91E+03 6.03E+01 1.55E+01 1.88E+01 8.81E+00 1.58E+03 1.59E+03 6.02E+07 1.26E+01 1.08E+01 1.07E+01 1.03E+01 1.61E+03 1.61E+03 1.61E+03
Function
rna
Function
f14
f15
f16
35
f17
f18
f19
f20
f21
f22
f23
Method OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH
Mean error 2.88E+05 1.46E+06 1.28E+06 5.71E+05 3.84E+06 2.80E+06 5.69E+08 2.94E+03 2.90E+03 8.22E+02 6.52E+03 1.78E+04 8.45E+05 1.40E+10 1.93E+01 5.19E+03 7.81E+03 1.14E+01 1.95E+03 1.93E+03 4.20E+03 1.13E+03 2.61E+04 1.63E+04 6.27E+02 2.38E+04 4.47E+03 9.05E+07 6.76E+04 1.11E+06 1.23E+06 2.58E+05 1.45E+06 1.07E+06 3.30E+08 5.53E+02 1.88E+03 1.68E+02 2.08E+02 2.93E+03 2.71E+03 3.20E+05 3.16E+02 4.11E+02 3.43E+02 3.15E+02 2.63E+03 2.63E+03 5.10E+03
Function
f24
f25
f26
f27
f28
f29
f30
Method OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH OHDA LX-BBO B-BBO RW-GWO WOA MVO KH
Mean error 2.54E+02 1.48E+04 3.41E+04 2.00E+02 2.61E+03 2.67E+03 2.99E+03 2.24E+02 5.29E+02 6.54E+02 2.04E+02 2.72E+03 2.71E+03 3.06E+03 1.00E+02 2.13E+00 3.64E+01 1.00E+02 2.71E+03 2.78E+03 3.10E+03 5.69E+02 1.96E+02 3.05E+02 4.09E+02 2.70E+03 3.57E+03 4.83E+03 5.03E+02 1.94E+03 2.12E+03 4.34E+02 5.01E+03 4.14E+03 1.25E+04 2.16E+02 1.98E+07 3.09E+07 2.14E+02 5.35E+06 1.29E+06 1.08E+09 1.00E+03 6.96E+06 1.38E+07 6.69E+02 8.70E+04 1.39E+04 2.35E+07
lP repro of
Journal Pre-proof
Table 16: Comparison of average error in objective function value in 100 dimensions of CEC2014 test problems. Better results are highlighted in bold.
f1
f2
f3
f4
f5
f6
f7
f8
Method OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH
Mean error 2.32E+08 3.21E+08 1.19E+09 2.73E+10 1.15E+08 9.69E+08 7.39E+10 6.75E+11 1.94E+05 2.68E+05 2.04E+05 1.42E+07 1.30E+03 1.27E+03 7.03E+03 3.44E+05 5.21E+02 5.21E+02 5.21E+02 5.22E+02 7.39E+02 7.52E+02 7.34E+02 7.83E+02 7.02E+02 7.08E+02 1.32E+03 6.97E+03 1.46E+03 1.49E+03 1.81E+03 2.91E+03
Function f9
f10
f11
f12
f13
f14
f15
f16
Method OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH
rna
Function
Mean error 1.83E+03 1.72E+03 2.22E+03 3.49E+03 1.51E+04 2.01E+04 2.75E+04 3.67E+04 1.80E+04 2.36E+04 3.22E+04 3.70E+04 1.20E+03 1.20E+03 1.20E+03 1.21E+03 1.30E+03 1.30E+03 1.30E+03 2.64E+08 1.43E+03 1.40E+03 1.59E+03 1.42E+03 8.15E+03 2.48E+03 3.20E+05 8.15E+03 1.65E+03 1.65E+03 1.65E+03 1.20E+03
Function f17
f18
f19
f20
f21
f22
f23
Method OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH
Mean error 4.07E+07 2.64E+07 1.27E+08 1.43E+10 9.98E+04 2.04E+06 2.59E+09 9.12E+10 2.11E+03 2.14E+03 2.35E+03 1.78E+03 1.44E+05 1.31E+05 1.42E+05 1.43E+03 1.62E+07 1.86E+07 4.89E+07 2.46E+09 5.78E+03 6.67E+03 7.25E+03 7.52E+06 2.79E+03 2.52E+03 3.29E+03 1.27E+04
Function f24
f25
f26
f27
f28
f29
f30
Method OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH OHDA WOA MVO KH
Table 17: P values for Wilcoxin sum rank test for the simulation of CEC2014 on 30D. Function
OHDA vs LX-BBO 0.005 0 0 0 0 0 0 0.16 0 0 0 0 0 0 0
Jou
f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14 f15
OHDA vs B-BBO 0.06 0.002 0 0 0 0 0.001 0 0 0 0 0 0 0 0
OHDA vs RW-GWO 0.005 0 0 0 0.003 0 0 0 0 0 0 0 0 0 0
Function f16 f17 f18 f19 f20 f21 f22 f23 f24 f25 f26 f27 f28 f29 f30
36
OHDA OHDA vs B-BBO vs LX-BBO 0 0 0 0 0.001 0.2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.0034 0
OHDA vs RW-GWO 0 0.002 0.01 0 0 0.003 0.0001 0 0 0 0 0 0 0 0.02
Mean error 2.96E+03 2.60E+03 3.11E+03 4.49E+03 2.85E+03 2.70E+03 2.95E+03 4.54E+03 2.80E+03 2.80E+03 2.94E+03 4.53E+03 5.91E+03 7.17E+03 6.47E+03 1.28E+04 6.01E+03 1.82E+04 1.03E+04 4.71E+04 3.31E+03 1.29E+08 7.73E+07 1.10E+10 7.28E+03 1.29E+08 3.03E+06 8.02E+08
lP repro of
Journal Pre-proof
Table 18: P values for Wilcoxin sum rank test for the simulation on CEC2014 on 100D, where the test of the equality of the methods fails if p value <0.05. Function f1 f2 f3 f4 f5 f6 f7 f8 f9 f10 f11 f12 f13 f14 f15
OHDA vs WOA 0.06 0 0.065 0.44 0.016 0.016 0 0.1 0 0 0 0 0.14 0 0
OHDA vs MVO 0 0 0.09 0 0 0.093 0 0 0 0 0 0 0 0 0
OHDA vs KH 0 0 0.02 0 0 0 0 0 0 0 0 0 0 0.005 0
Function f16 f17 f18 f19 f20 f21 f22 f23 f24 f25 f26 f27 f28 f29 f30
OHDA vs WOA 0.35 0.002 0.003 0.002 0.08 0.12 0 0 0 0 0 0 0 0 0
OHDA vs MVO 0.001 0 0 0 0.43 0 0 0 0 0 0 0 0 0 0
OHDA vs KH 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Jou
rna
6.5.3. Simulation results on CEC2017 In this subsection the simulation results for OHDA as well as LSHADE44IEpsilon algorithm [28] for comparison are presented as Table 20 in 100 dimensions. The results in the table include the mean value of the fitness function over 25 runs and overall constraint violation value (cv) and the success rate (SR) as the feasibility rate of the solutions obtained in 25 runs while the stopping criteria is the number of function evaluations (FEs) which is fixed at 104 × D for the experiments and D is the dimension of the problem. The comparing algorithm with OHDA is LSHADE44-IEpsilon [28] which is a DE based method with the same constraint handling method as in this paper and the parameter values as in Table 19 and Tc=0.8, cp=2, alpha=0.8, theta= 0.2*popsize and σ is set as 0.0001 for IEpsilon in both methods. These experiments are done with MATLAB 2018 on a system with the following specifications: RAM 32, Intelr Core i7 CPU 860 @2.80 GHz, Windows 10. • Discussion on the results of CEC2017 According to Table 20 it is observed that: – OHDA outperforms in 15 functions over 28 functions in 100 dimensions. – 5 functions over 15 functions in which OHDA outperforms, are non-separable functions and 6 of them are separable functions 37
lP repro of
Journal Pre-proof
Table 19: Parameter specification for OHDA and LSHADE44-IEpsilon. Algorithm OHDA
LSHADE44-IEpsilon
Parameter values #Dim to move=10 Local radius=0.005 #tumbling=3, L=[0.5, 0.7, 1]*ED H = 10, K = 4, n0 = 2, delta = 1/(5K) DE/current-to-pbest/1, p = 0:2
considering objective function. The constraints in these functions have different features like rotation and separability.
– In f11 and f23 where OHDA didn’t outperform LSHADE44-IEpsilon in fitness, it can reach to lower constraint violation. • Statistical tests on the results of CEC2017 P values for the Wilcoxin sum rank test are presented as Table 21. As it is obvious from Table 21, the simulations are reliable for the majority of runs according to the p value. 7. Conclusions
Jou
rna
This paper introduces an efficient optimization algorithms to work with high dimensional problems called the Opposition-based High Dimensional optimization Algorithm (OHDA). The performance of the proposed OHDA algorithm is studied using three test suits namely CEC2005 with 25 benchmark functions in 1000 and 2000 dimensions, CEC2014 with 30 test functions more complicated than CEC2005 and CEC2017 on 28 constraint optimization functions. Results are compared with other optimization algorithms. Simulation results show that with the fixed number of function evaluations as the stopping criteria, OHDA outperforms other algorithms like CA, ICA and AAA in 1000 and 2000 dimensions with less population member than others in CEC2005. It also outperforms other algorithms in 13 and 12 functions over 25 ones in 1000 and 2000 dimensions respectively with just 1/4 of the other algorithms’ population size. Applying the opposite point concept to the algorithm can speed up the convergence speed of the algorithm. Applying OHDA on CEC2014 test suit results in improvements in some of the complicated test functions. As most of the real optimization problems in real applications are constraint problems with complicated feasible search space, the performance of OHDA algorithm is compared with other algorithms on 38
Jou
39
f14
f13
f12
f11
f10
f9
f8
f7
f6
f5
f4
f3
f2
f1
Function
Mean 202.82378 326.947629 141.1315 285.2665 1.73E+07 7.89E+04 1.85E+03 8.06E+01 5.54E+06 1.64E+02 1.56E+04 3.26E+03 -1.10E+03 -4.94E+02 1.61E+00 3.63E+00 8.06E+00 1.95E+01 1.02E-01 9.12E-01 -7.97E+03 -8.91E+03 1.25E+01 1.13E+01 4.28E+07 2.54E+08 1.13E+00 8.13E-01
SR 100% 100% 100% 100% 96% 100% 100% 100% 64% 100% 0% 0% 92% 100% 0% 0% 92% 24% 0% 0% 0% 0% 100% 100% 0% 0% 100% 100% f17
f16
f15
Function
Method OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon
Mean 2.01E+01 5.39E+01 7.34E+02 1.68E+02 1.10E+00 1.16E+00 1.17E+02 8.34E+02 4.08E+02 4.26E-06 1.06E+01 1.16E+01 9.25E+00 1.56E+01 2.74E+08 1.53E+09 1.13E+00 9.98E-01 2.65E+01 2.66E+01 7.39E+02 6.42E+02 1.10E+00 1.10E+00 5.76E+03 3.74E+03 6.43E+02 6.35E+02
cv 0 0 0 0 1.01E+02 3.06E+02 4.20E+01 2.12E+02 1.47E+05 1.46E+05 0 0 0 0 9.93E+03 2.33E+04 0.00E+00 3.52E-02 0 0 0 0 1.01E+02 1.01E+02 3.19E+03 1.27E+02 1.47E+05 1.47E+05
SR 100% 100% 100% 100% 0% 0% 0% 0% 0% 0% 100% 100% 100% 100% 0% 0% 100% 96% 100% 100% 100% 100% 0% 0% 0% 0% 0% 0% f28
f27
f26
f25
f24
f23
f22
f21
f20
f19
f18
lP repro of
cv 0 0 0 0 1.0E-6 0 0 0 6.23E+02 0.00E+00 1.67E-02 6.06E-03 3.86E-03 0.00E+00 1.54E+01 2.64E+01 2.53E-02 1.37E+00 4.17E-01 1.88E+01 9.54E+02 4.39E+03 0 0 4.06E+03 1.05E+04 0 0
rna
Method OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon OHDA LSHADE44-IEpsilon
Table 20: Mean value for fitness function over 25 runs for CEC2017 on the proposed OHDA and LSHADE44IEpsilon [28]. cv shows the overall constraint violation value and SR is the success rate.
Journal Pre-proof
lP repro of
Journal Pre-proof
Table 21: P values for Wilcoxin sum rank test for the simulation of CEC2017, where the test of the equality of the methods fails if p value <0.05 Function f1 f2 f3 f4 f5 f6 f7 f8 f9 f10
pvalue 0.1 0.002 0.0008 0 0 0 0.001 0.008 0 0
Function f11 f12 f13 f14 f15 f16 f17 f18 f19 f20
pvalue 0 0.28 0 0 0 0 0.22 0.001 0 0.001
Function f21 f22 f23 f24 f25 f26 f27 f28
pvalue 0.06 0 0 0.43 0 0.45 0.09 0.22
CEC2017 designed for testing the algorithms on constraint problems. The simulation results show some improvements by reaching to better positions in the search space. References
[1] D. L. Donoho., High-Dimensional Data Analysis: The Curses and Blessings of Dimensionality, American Math. Society Lecture-Math Challenges of the 21st Century (2000) 1–33.
rna
[2] J. Fan, R. Li, Statistical Challenges with High Dimensionality, Feature Selection in Knowledge Discovery (2006) 1–27. [3] N. K. Krishnanand, D. Ghose, Detection of Multiple Source Locations using a Glowworm Metaphor with Applications to Collective Robotics, In Proceedings 2005 IEEE Swarm Intelligence Symposium. SIS 2005 (2005) 84–91.
Jou
[4] L. Ma, Y. Zhu, Y. Liu, L. Tian, H. Chen, A novel bionic algorithm inspired by plant root foraging behaviors, Applied Soft Computing 37 (2015) 95–113. [5] H. Eskandar, A. Sadollah, A. Bahreininejad, M. Hamdi, Water cycle algorithm: A novel metaheuristic optimization method for solving constrained engineering optimization problems, Computers and Structures 110 (2012) 151–166. [6] S. Chu, T. Pei-Wei, J.-S. Pan, Cat Swarm Optimization, In Pacific Rim international conference on artificial intelligence, Berlin, Heidelberg (2006) 854–858. 40
lP repro of
Journal Pre-proof
[7] J. James, V. Li, A social spider algorithm for global optimization, Applied Soft Computing 30 (2015) 614–627. [8] R. Tang, S. Fong, S. Deb, Wolf Search Algorithm with Ephemeral Memory, In Seventh International Conference on Digital Information Management (ICDIM 2012) (2012) 165–172.
[9] S. A. Mirjalili, S. M. Mirjalili, A. Lewis, Grey Wolf Optimizer, Advances in Engineering software 69 (2014) 46–61. [10] Y. Shiqin, J. Jiang, G. Yan, A Dolphin Partner Optimization, In 2009 WRI Global Congress on Intelligent Systems (1) (2009) 124–128.
[11] D. Simon., Biogeography-based optimization, IEEE Transactions on Evolutionary Computation 12 (6) (2008) 702–713.
[12] M. Eita, M. Fahmy, Group counseling optimization, Applied Soft Computing 22 (2014) 585–604.
[13] M. Ghaemi, M.-R. Feizi-Derakhshi, Forest optimization algorithm, Expert Systems with Applications 41 (15) (2014) 6676–6687.
rna
[14] J. H. Holland, Genetic Algorithms, Scientific American, a division of Nature America, Inc. 267 (1) (1988) 66–73.
[15] R. Storn, Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces, Journal of global optimization, 11(4) (1997) 341–359.
Jou
[16] R. Poli, P. Nordin, W. Langdon, T. editors. Fogarty, Genetic Programming: Second European Workshop, EuroGP’99, G¨oteborg, Sweden, Proceedings, Lecture notes in Computer Science (1598) (1999) 26–27. [17] K. M. Passino., Biomimicry of bacterial foraging for distributed optimization and control (3) (2002) 52–67. [18] E. Atashpaz-Gargari, C. Lucas., Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition, IEEE Congress on Evolutionary Computation, CEC 2007 (2007) 4661–4667.
41
lP repro of
Journal Pre-proof
[19] M. Yazdani, F. Jolai., Lion Optimization Algorithm (LOA): A natureinspired metaheuristic algorithm, Journal of Computational Design and Engineering 3 (1) (2016) 24–36.
[20] Bastos Filho, Carmelo JA, Fernando B. de Lima Neto, Anthony JCC Lins, Antonio IS Nascimento, M. P, A Novel Search Algorithm based on Fish School Behavior, In 2008 IEEE International Conference on Systems, Man and Cybernetics (2008) 2646–2651. [21] S. Uymaz, G. Tezel, E. Yel, Artificial algae algorithm (AAA) for nonlinear global optimization, Applied Soft Computing Journal 31 (2015) 153–171.
[22] M. Ghaemidizaji, C. Dadkhah, H. Leung, A New Optimization Algorithm Based on the Behavior of BrunsVigia Flower, In 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (2018) 263–267. [23] S. A. Mirjalili, Moth-flame optimization algorithm: A novel natureinspired heuristic paradigm, Knowledge-Based Systems 89 (2015) 228– 249.
rna
[24] S. A. Mirjalili, Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems, Neural Computing and Applications 27(4) (2016) 1053–1073. [25] D. Polap, M. Wozniak., Polar Bear Optimization Algorithm: MetaHeuristic with Fast Population Movement and Dynamic Birth and Death Mechanism, Symmetry 9, (203) (2017) 1–20.
Jou
[26] H. R. Tizhoosh., Opposition-Based Learning: A New Scheme for Machine Intelligence, International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06) 1 (2005) 695–701. [27] M. Ergezer, D. Simon, Probabilistic properties of fitness-based quasireflection in evolutionary algorithms, Computers & Operations Research 63 (2015) 114–124.
42
lP repro of
Journal Pre-proof
[28] Z. Fan, Y. Fang, W. Li, Y. Yuan, Z. Wang, X. Bian, LSHADE44 with an Improved ϵ Constraint-Handling Method for Solving Constrained SingleObjective Optimization Problems, In 2018 IEEE Congress on Evolutionary Computation, CEC 2018 (2018) 1–8. [29] W. F. Leong, G. G. Yen, Constraint Handling in Particle Swarm Optimization, International Journal of Swarm Intelligence Research 1 (1) (2010) 42–63. [30] A. R. Jordehi., A review on constraint handling strategies in particle swarm optimisation, Neural Computing and Applications 26 (6) (2015) 1265–1275.
[31] R. Pol´akov´a, L-SHADE with competing strategies applied to constrained optimization, In 2017 IEEE congress on evolutionary computation (CEC) (2017) 1683–1689.
[32] K. Deb., Genetic algorithm in search and optimization: The technique and applications, In Proceedings of international workshop on soft computing and intelligent systems. Machine Intelligence Unit, Indian Statistical Institute Calcutta, India. (1998) 58–87.
rna
[33] R. Regis, C. Shoemaker, Combining radial basis function surrogates and dynamic coordinate search in high-dimensional expensive black-box optimization, Engineering Optimization, 45(5) 0273 (2013) 529–555. [34] P. Suganthan, N. Hansen, J. Liang, K. Deb, Y. Chen, A. Auger, S. Tiwari, Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization, KanGAL report, 2005005 (2005) 1–50.
Jou
[35] J. Liang, B. Qu, P. Suganthan, Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou China and Technical Report, Nanyang Technological University, Singapore, 635. [36] G. Wu, R. Mallipeddi, P. Suganthan, Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained RealParameter Optimization, National University of Defense Technology, 43
lP repro of
Journal Pre-proof
Changsha, Hunan, PR China and Kyungpook National University, Daegu, South Korea and Nanyang Technological University, Singapore, Technical Report (2017). [37] R. G. Reynolds., An introduction to cultural algorithms, The Western journal of medicine 172 (1994) 335–336. [38] D. Karabaga, An idea based on honey bee swarm for numerical optimization, Technical Report-TR06 48 (9) (2005) 1–16.
[39] X. Yu, W. Chen, X. Zhang, An Artificial Bee Colony Algorithm for Solving Constrained Optimization Problems, In Proceedings of 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference, IMCEC 2018 (2018) 2663–2666.
[40] A. Gandomi, A. H. Alavi, Krill herd: A new bio-inspired optimization algorithm, Communications in Nonlinear Science and Numerical Simulation 17 (12) (2012) 4831–4845. [41] S. A. Mirjalili, S. M. Mirjalili, A. Hatamlou, Multi-verse optimizer: a nature-inspired algorithm for global optimization, Neural Computing and Applications 27 2 (2016) 495–513.
rna
[42] S. Gupta, K. Deep, A novel Random Walk Grey Wolf Optimizer, Swarm and Evolutionary Computation 44 (2019) 101–112. [43] H. Ma, D. Simon, Blended biogeography-based optimization for constrained optimization, Engineering Applications of Artificial Intelligence (24) (2011) 517–525.
Jou
[44] V. Garg, K. Deep, Performance of Laplacian Biogeography-Based Optimization Algorithm on CEC 2014 continuous optimization benchmarks and camera calibration problem, Swarm and Evolutionary Computation 27 (2016) 132–144.
44
Journal Pre-proof
*Declaration of Interest Statement
lP repro of
Declaration of interests ☐ The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ☐The authors declare the following financial interests/personal relationships which may be considered as potential competing interests:
Jou
rna
none
Journal Pre-proof
lP repro of
*Author Contributions Section
This research is part of my student’s (Manizheh Ghaemidizaji) PhD project under the joint supervision of me (Chitra Dadkhah) and Professor Henry Leung while she was a visiting student in Prof. Leung’s Lab for 16 months in University of Calgary, Calgary, Alberta, Canada.
Author statement
Manizheh Ghaemidizaji: Conceptualization, Methodology, Software, Validation, Visualization, Formal analysis, Writing - Original Draft, Writing - Review & Editing,
Chitra Dadkhah: Conceptualization, Methodology, Validation, Formal analysis, Resources, Writing - Review & Editing, Supervision,
Jou
rna
Henry Leung: Methodology, Validation, Formal analysis, Resources, Writing - Review & Editing,