Author’s Accepted Manuscript
COOA: Competitive Optimization Algorithm
Yousef Sharafi, Mojtaba Ahmadieh Khanesar, Mohammad Teshnehlab
www.elsevier.com/locate/swevo
PII: S2210-6502(16)30006-2
DOI: http://dx.doi.org/10.1016/j.swevo.2016.04.002
Reference: SWEVO211
To appear in: Swarm and Evolutionary Computation
Received date: 18 October 2015
Revised date: 20 February 2016
Accepted date: 12 April 2016
Cite this article as: Yousef Sharafi, Mojtaba Ahmadieh Khanesar and Mohammad Teshnehlab, COOA: Competitive Optimization Algorithm, Swarm and Evolutionary Computation, http://dx.doi.org/10.1016/j.swevo.2016.04.002
COOA: Competitive Optimization Algorithm
Yousef Sharafi Department of Computer, Science and Research Branch, Islamic Azad University, Tehran, Iran Email:
[email protected]
Mojtaba Ahmadieh Khanesar Department of Electrical and Control Engineering, Faculty of Electrical and Computer Engineering, Semnan University, Semnan, Iran Email:
[email protected]
Mohammad Teshnehlab Industrial Control Center of Excellence, Electrical Engineering Department, K. N. Toosi University, Tehran, Iran Email:
[email protected]
Abstract This paper presents a novel optimization algorithm based on the competitive behavior of various creatures, such as birds, cats, bees and ants, striving to survive in nature. In the proposed method, a competition is designed among all the aforementioned creatures according to their performances. Every optimization algorithm can be appropriate for some objective functions and inappropriate for others. Due to the interaction between the different optimization algorithms proposed in this paper, the algorithms acting based on the behavior of these creatures compete with each other for the best result. The rules of competition between the optimization methods are based on the imperialist competitive algorithm, which decides which of the algorithms survive and which become extinct. In order to compare the proposed method with well-known heuristic global optimization methods, simulations are carried out on benchmark test functions of different and high dimensions. The obtained results show that the proposed competition-based optimization algorithm is an efficient method for finding the solution of optimization problems.
Keywords: Swarm intelligence, cat swarm optimization, particle swarm optimization, artificial bee colony optimization, ant colony optimization, imperialist competitive algorithm, function optimization.
1. Introduction There are two main intelligent approaches to solving optimization problems: evolutionary optimization algorithms [51-52] and swarm intelligence optimization methods [1, 4, 11, 13, 25, 26, 32]. These optimization algorithms have been successfully applied to constrained optimization problems [54] and multimodal optimization problems [53]. Recently, swarm intelligence-based optimization algorithms have received much attention and have been successfully applied to many different engineering problems. Examples of swarm intelligence-based approaches are ant colony optimization (ACO) [7,9,28,29], cat swarm optimization (CSO) [5,6,46], particle swarm optimization (PSO) [1,4,11,13,25,26,32,36,37,42,44-47,49-50], cuckoo optimization [27,33,34,40], lion pride optimization [30], artificial bee colony (ABC) [3,14-18,24,39,43], firefly optimization [35], the imperialist competitive algorithm (ICA) [2], fish swarm optimization [19], the grey wolf optimizer [21] and so on. Swarm intelligence has advantages such as scalability, fault tolerance, adaptation, speed, modularity, autonomy, and parallelism. Swarm intelligence is a research field which aims to model the collective intelligence that exists in different species in nature, e.g. insects, fish, cats, ants, etc. Real-world optimization problems are very difficult and have high degrees of uncertainty. Swarm intelligence-based optimization algorithms have proven to outperform conventional optimization algorithms, especially as the complexity of the problem increases. In addition, swarm intelligence-based optimization algorithms are easy to implement and can be combined with other algorithms. Despite their higher optimization performance, they suffer from some limitations and are not appropriate for all optimization problems.
Since none of the swarm intelligence-based optimization algorithms is appropriate for all optimization problems, more research is required to test and improve these algorithms on different optimization problems. Research continues to enhance the existing algorithms to suit particular applications. Enhancement is done either (a) by modifying the existing algorithms or (b) by hybridization of the existing algorithms. By hybridization of existing algorithms we mean enhancing their performance by combining the features of different optimization algorithms. Particle swarm optimization is a population-based stochastic algorithm, inspired by migratory behavior in nature, which starts with an initial population of randomly generated particles. In PSO, each particle in the population (swarm) flies towards its previous best position and the global best position. Although this motion mechanism can result in fast convergence, it suffers from the premature convergence problem, which means that it can easily be trapped in local minima when solving multimodal problems [31]. Although the convergence speed of PSO is high due to the fast information exchange between the solution vectors, its diversity decreases very quickly over successive iterations, resulting in a suboptimal solution [20]. Ant colony optimization solves optimization problems by moving along the tracks of ants with the assistance of pheromone. This algorithm is another swarm-based optimization algorithm, inspired by the pheromone trail-laying behavior of real ant colonies [8, 9]. ACO converges to the optimum solution by accumulating and renewing pheromone, and it has the abilities of distribution, parallelism and convergence. However, this method has the major disadvantage that the computational time and the number of runs of the design code are considerably high. Cat swarm optimization is another swarm intelligence algorithm for finding the best global solution. Because of the complexity of the algorithm, CSO takes a long time to converge and cannot achieve an accurate solution. This algorithm is composed of two modes of behavior that exist in real cats. Cats mostly rest, look around and seek the next position to move to; this mode is called the seeking mode.
Tracing mode is the other mode of operation of a cat, in which the cat is tracing some targets [5, 6]. The artificial bee colony algorithm is one of the most recently proposed swarm intelligence algorithms for global optimization, inspired by the foraging behavior of bee colonies. According to a recent study [15], ABC is better than or similar to other population-based algorithms, with the advantage of employing fewer control parameters. However, it still has some deficiencies in dealing with functions having a narrow curving valley and with functions exhibiting extremely complex multimodal behavior. The standard ABC algorithm, as a relatively new swarm optimization method, is often trapped in local optima [23], and the speed and precision of its convergence decrease as the dimension of the problem increases [17]. This is mainly because in the ABC algorithm, bees exchange information in one dimension with a random neighbor in each food-source searching process. Meta-heuristic algorithms have parameters to be tuned, which are commonly chosen by trial and error as well as by the skill of the user. Consequently, an efficient algorithm such as ABC, with fewer parameters to adjust, is always more favorable. Furthermore, ABC benefits from memory, local search and a solution improvement mechanism, which is why it is capable of excellent performance on optimization problems [3,14-18,24,39,48]. The imperialist competitive algorithm is another successful population-based optimization algorithm. In ICA there are only two types of countries, imperialists and colonies, and imperialists absorb their colonies. Similar to other evolutionary algorithms that start with initial populations, ICA begins with initial empires. Any individual of an empire is called a country. The two types of countries, colonies and imperialist states, collectively form empires. Imperialistic competition among these empires forms the basis of the ICA. During this competition, weak empires collapse and powerful ones take possession of their colonies. Imperialistic competition converges to a state in which there exists only one empire, whose colonies are in the same position with the same cost as the imperialist. The imperialist represents the best solution found for the problem [10]. In this paper, we propose a novel hybrid algorithm which combines the advantages of PSO, ACO, CSO, ABC and ICA to solve optimization problems.
In order to overcome the limitations of each individual algorithm, ACO is used in combination with PSO, CSO, ABC and ICA. Since different evolutionary and swarm intelligence algorithms are recommended for different optimization problems, the optimization strength of each algorithm on some problems is higher than that of the others. The proposed algorithm benefits from the existing knowledge and logic of evolutionary and swarm intelligence algorithms; solving various optimization problems with a single method is the main goal. Therefore, in this research, we apply a heuristic optimization approach, well known as biologic optimization, which is adopted from the behavior of creatures. The main idea of the proposed method is inspired by the competition between different species in nature. The survival of each species highly depends on the environmental conditions. For example, it is possible that dinosaurs become extinct in an environment while ants survive under the same conditions. In this case, the ant colony benefits from more resources and its colonies become larger. In addition, some species attack others, which results in weakening one species and strengthening another. Inspired by these facts, in the proposed algorithm, different nature-inspired algorithms have the possibility to compete with each other, weaken each other and even fully dominate one another. In the present paper, the imperialist competitive algorithm is used so that all creatures can fight for survival. The proposed method is compared with several other optimization methods using several fitness functions. It is observed that this method outperforms all the other optimization algorithms in almost all performed simulations. This paper is organized as follows: in Section 2 the optimization algorithms PSO, CSO, ABC, ACO and ICA are discussed. In Section 3 the new algorithm proposed in this paper, called the competitive optimization algorithm (COOA), is introduced. Eventually, the simulation results on a number of benchmark test functions are presented in the last section.
2. A brief overview of the discussed evolutionary algorithms In this section, we outline the optimization algorithms PSO, CSO, ABC, ACO and ICA, the most well-known swarm-based optimization methods.
2.1 Particle swarm optimization algorithm Particle swarm optimization, inspired by the movement behavior of bird and fish flocks, was originally put forward by Kennedy and Eberhart in 1995 [11]. Particle swarm optimization is an optimization method applied to solve problems whose solutions are a point or surface in an n-dimensional space. In such a space, some assumptions are made and an initial velocity is assigned to each particle. Subsequently, based on their velocity values and personal and social experiences, the particles move in the problem solution space, and results are calculated at the end of each iteration according to a cost (fitness) function. After some iterations, particles accelerate towards the ones with better cost function values. Based on Eq. (1), the velocity of each particle is updated in each iteration of the PSO algorithm. Equation (2) calculates the new position of each particle based on its former velocity and position.
The update equations are

v_i(t+1) = w \cdot v_i(t) + r_1 c_1 (x_{best,i}(t) - x_i(t)) + r_2 c_2 (x_{gbest}(t) - x_i(t))    (1)

x_i(t+1) = x_i(t) + v_i(t+1)    (2)

where x_i(t) represents the position of the i-th particle; v_i(t) is the velocity of the i-th particle; x_{best,i}(t) is the best personal position of the i-th particle; x_{gbest}(t) denotes the best collective position of all particles; w is the inertia weight; t shows the current iteration number of the algorithm; r_1 and r_2 are two random values in the range [0, 1]; and c_1 and c_2 are two constant values defined by the user. This algorithm is coherent and has proven its efficiency in different applications.
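As an illustration, the update in Eq. (1)-(2) can be sketched in a few lines of NumPy; the function name and the default coefficients (w = 0.7, c1 = c2 = 2.0) are our illustrative choices, not values prescribed by the paper:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One PSO iteration per Eq. (1)-(2).

    x, v, pbest: (n_particles, dim) arrays; gbest: (dim,) array.
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)  # r1, r2 ~ U[0, 1], drawn element-wise
    r2 = rng.random(x.shape)
    v_new = w * v + r1 * c1 * (pbest - x) + r2 * c2 * (gbest - x)  # Eq. (1)
    x_new = x + v_new                                              # Eq. (2)
    return x_new, v_new
```

Broadcasting applies the update to the whole swarm at once; gbest is broadcast across all particles.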
2.2 Cat swarm optimization algorithm The first paper based on the behavior of cats was published in 2006 by Chu and Tsai to find the solution of continuous optimization problems [5, 6]. They investigated the behavior of cats in two modes, namely the seeking and tracing modes; cats are always in one of these modes. Cats have a keen sense of curiosity toward moving objects, which usually do not provoke their reactions. They devote most of their time to resting, yet they simultaneously show high intelligence. Cats are wise and intelligent and benefit from a wide field of view, albeit they are considered slothful. Two main types of behavior, called the seeking and tracing modes, are used to model the CSO algorithm. The positions of the particles and a model of the behavior of cats are used to solve optimization problems. In CSO, in order to solve an optimization problem, the number of cats must first be determined. Each cat has a position with m dimensions, a velocity with m dimensions, a cost function value corresponding to its position, and a flag to indicate whether the cat is in the seeking or the tracing mode. The most appropriate solution is the position of the cat with the best cost function (fitness) value, which is updated at the end of each iteration if required [5, 6].
2.2.1 The seeking mode process This mode, one of the two main behaviors of cats, corresponds to a resting cat that stays alert, looking around its environment carefully and endeavoring to find an appropriate position for its next movement (see Fig. 1).
Fig. 1. Four important factors in seeking mode.
Seeking mode includes four essential factors, defined as follows:
SMP (Seeking Memory Pool): the memory pool of seeking mode, defined for each individual cat, which holds the candidate points the cat considers. The cat selects a point from the memory pool according to some rules.
SRD (Seeking Range of the selected Dimension): the seeking range of the selected dimension, which declares the mutative ratio for the chosen dimensions. In seeking mode, if a dimension is selected for mutation, the difference between the new and old values should not be out of this range.
CDC (Counts of Dimension to Change): how many of the dimensions will vary.
SPC (Self-Position Considering): a Boolean value which decides whether the point at which the cat is already standing can be one of the candidate points for the cat to move to. SPC is usually set to true, since not moving to a new position might occasionally provide the cat with a better opportunity to achieve a better result.
2.2.2 The tracing mode process The tracing mode simulates the behavior of a cat tracing some targets. The position of each cat changes
according to its own velocity in every dimension. The tracing mode updates the position of each cat using Eq. (3) and Eq. (4):

v_{k,d}(t+1) = w \cdot v_{k,d}(t) + r_1 c_1 (x_{gbest,d}(t) - x_{k,d}(t))    (3)

x_{k,d}(t+1) = x_{k,d}(t) + v_{k,d}(t+1)    (4)

where x_{gbest,d}(t) is the best position among all cats in the d-th dimension; w is the inertia weight; t shows the algorithm iteration; v_{k,d}(t) denotes the velocity of the k-th cat in the d-th dimension; x_{k,d}(t) is the position of the k-th cat in the d-th dimension; r_1 randomizes the velocity equation of the cat in the range [0, 1]; and c_1 is the acceleration coefficient that increases the velocity of each cat. The Mixture Rate (MR) is a parameter that determines the fraction of cats in the seeking and tracing modes, and is usually set to 0.05.
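The tracing-mode update of Eq. (3)-(4) can be sketched similarly to the PSO step; note that only the global-best term appears, with no personal-best term (the function name and default coefficients are our illustrative choices):

```python
import numpy as np

def cso_tracing_step(x, v, gbest, w=0.7, c1=2.05, rng=None):
    """Tracing-mode update per Eq. (3)-(4) for all cats and dimensions.

    x, v: (n_cats, dim) arrays; gbest: (dim,) best position among all cats.
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)                 # r1 ~ U[0, 1]
    v_new = w * v + r1 * c1 * (gbest - x)    # Eq. (3): pull toward gbest only
    x_new = x + v_new                        # Eq. (4)
    return x_new, v_new
```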
2.3 Artificial bee colony algorithm The artificial bee colony algorithm, which uses the intelligent foraging behavior of honey bees, was proposed by Karaboga [3, 14-18, 24, 39]. In the ABC algorithm, artificial bees fly in a search space to find the positions of food sources (solutions) with high nectar amounts and, finally, to get the highest nectar. The colony of artificial bees is classified into three groups, namely employed bees, onlookers and scouts. The number of employed bees is equal to the number of randomly determined food sources. The colony is separated into equal numbers of employed bees and onlookers. In each cycle of the search, employed bees investigate their food sources and the corresponding nectar amounts are calculated. The nectar and position information of the food sources is then shared with the onlookers. Onlookers select food sources depending on the nectar amounts of the food sources. A food source which cannot be improved within a predetermined number of tries is abandoned, and the employed bee associated with that food source becomes a scout that discovers a new food source. Having found a new food source, the scout becomes an employed bee again. After the new positions of the food sources are determined, a new iteration begins. These iterative processes are repeated until the termination conditions are satisfied [14, 39].
The ABC algorithm has only two input (control) parameters, popsize and limit: popsize is the population size of the colony, and limit is the threshold on the number of cycles over which a solution may remain unchanged. Initially, the ABC algorithm generates NS randomly distributed solutions using Eq. (5):

X_{i,j} = X_j^{min} + rand(0,1) \cdot (X_j^{max} - X_j^{min})    (5)

where NS is the number of food sources (possible solutions) and equals half of popsize, D is the dimension of the solution space, and X_j^{min} and X_j^{max} are the lower and upper bounds for dimension j, respectively. After initialization, each solution of the population is evaluated and the best solution is memorized. Each employed bee produces a new food source in the neighborhood of the present one using Eq. (6):
V_{i,j} = X_{i,j} + \phi_{i,j} (X_{i,j} - X_{k,j})    (6)

where k and j are randomly chosen indices, k has to be different from i, and \phi_{i,j} is a random number in the range [-1, 1]. The new solution V_i is obtained by modifying the previous solution X_i in one randomly selected position using a neighboring solution X_k. Once the new solution V_i is obtained, it is evaluated and compared to the fitness of X_i. If the fitness of V_i is better than that of X_i, then V_i replaces X_i and becomes a new food source; otherwise, X_i is retained. After all the employed bees complete their searching processes, they share the nectar amounts and positions of the food sources with the onlookers. For each onlooker bee, a food source is chosen according to a selection probability. The probability of the i-th food source is calculated as follows:

p_i = \frac{fitness_i}{\sum_{i=1}^{NS} fitness_i}    (7)

where fitness_i is the fitness (objective function) value of food source i. When onlooker bees select food sources, they start an exploitation process like the employed bees, using Eq. (6). If any food source position cannot be improved within a predetermined number of cycles, the food source is assumed to be abandoned. After that, the corresponding employed bee becomes a scout and the algorithm generates a new solution by Eq. (5). Finally, the best solution is memorized, and the algorithm repeats the search processes of the employed bees, onlookers and scouts until the termination conditions are met [14, 39].
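A minimal sketch of the ABC operators in Eq. (5)-(7); the function names are ours, and, following the usual ABC convention, we take phi in [-1, 1]:

```python
import numpy as np

def init_sources(ns, x_min, x_max, rng=None):
    """Random initialization per Eq. (5): NS food sources within the bounds."""
    rng = np.random.default_rng() if rng is None else rng
    return x_min + rng.random((ns, len(x_min))) * (x_max - x_min)

def abc_neighbor(X, i, rng=None):
    """Neighborhood search per Eq. (6): perturb one random dimension of
    source i using a randomly chosen partner k != i."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = X.shape
    k = rng.choice([s for s in range(n) if s != i])  # partner index, k != i
    j = rng.integers(dim)                            # single dimension j
    phi = rng.uniform(-1.0, 1.0)                     # phi_{i,j} in [-1, 1]
    v = X[i].copy()
    v[j] = X[i, j] + phi * (X[i, j] - X[k, j])
    return v

def selection_probs(fitnesses):
    """Onlooker selection probabilities per Eq. (7)."""
    f = np.asarray(fitnesses, dtype=float)
    return f / f.sum()
```

Only one coordinate of the candidate differs from the original source, which is exactly the one-dimensional information exchange criticized earlier in the introduction.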
2.4 Ant colony optimization algorithm Ant colony optimization is an appropriate algorithm for finding solutions to optimization problems. The method is inspired by the behavior of ants seeking a path between their colony and a source of food. It was initially proposed in 1992, in Dorigo’s PhD thesis, as an algorithm to solve the traveling salesman problem. Ants discover food sources through collective cooperation. Having exited their colony, they wander around it randomly. On finding a source of food, ants lay down a special chemical material called pheromone and return to their colony. Other ants follow the path created by the pheromone. If they eventually find food at the end of the path, they return to their colony, laying down more pheromone. After some iterations, as more pheromone is laid down on the path, most of the ants select it as the strongest path. It should be noted that the pheromone trail fades over time; consequently, the attraction strength of long paths is reduced. On the other hand, due to the higher density of pheromone on shorter paths, they are more likely to attract ants. The algorithm was initially proposed to solve discrete problems such as the travelling salesman problem. An ant colony optimization algorithm for real-valued optimization problems was first proposed by Socha and Dorigo in 2008 [28]. While in the discrete mode the range of selections is a particular number, in the continuous mode this range is indefinite [7, 28]. A Gaussian distribution is used to generate new solutions, according to Eq. (8):
f(x; \mu, \sigma^2) = \frac{1}{\sigma \sqrt{2\pi}} e^{-(x-\mu)^2/(2\sigma^2)}    (8)

In Eq. (8), \sigma equals the standard deviation and \mu equals the mean. Note that if the problem space is n-variable, the i-th solution is

S_i = (S_i^1, S_i^2, S_i^3, ..., S_i^n)

where S_i^j equals the j-th dimension of the i-th solution. The solution archive (population space) includes k solutions, as follows:

S_1 = (S_1^1, S_1^2, S_1^3, ..., S_1^n)
S_2 = (S_2^1, S_2^2, S_2^3, ..., S_2^n)
S_3 = (S_3^1, S_3^2, S_3^3, ..., S_3^n)
...
S_k = (S_k^1, S_k^2, S_k^3, ..., S_k^n)

The goal is to use the knowledge of these solutions to obtain a distribution function able to generate new solutions, compare them with the old solution archive and select a k-tuple population; consequently, n distinct probability distribution functions are required to generate new solutions. Socha and Dorigo have defined a Gaussian distribution with mean \mu and standard deviation \sigma for each dimension of a solution. Therefore, in order to benefit from a good distribution for one dimension of the new solution, all distributions related to the corresponding dimension of the available solutions in the solution archive are combined into a general distribution function. The ant colony algorithm is based on a solution archive which plays the role of the ant population space. Every row represents a member of the population, and f(S_i) is its cost function value. The archive of solutions is sorted from the best cost function value to the worst one; p_l is the probability of selecting the l-th solution among the others. The general distribution function performs the search in each dimension. For the corresponding dimensions in the solution archive, the general distribution function is defined by Eq. (9):

G^i(x) = \sum_{l=1}^{k} \omega_l g_l^i(x)    (9)

The weight \omega_l is calculated according to Eq. (10),

\omega_l = \frac{1}{q k \sqrt{2\pi}} e^{-(l-1)^2/(2 q^2 k^2)}    (10)

where q is a constant parameter and k is the solution archive size. It is a good idea to normalize these values by Eq. (11):

p_l = \frac{\omega_l}{\sum_{r=1}^{k} \omega_r}    (11)

It is much better to use the p_l values instead of \omega_l. The standard deviation of the i-th dimension of the l-th solution of the solution archive is calculated by Eq. (12),

\sigma_l^i = \xi \sum_{e=1}^{k} \frac{|x_e^i - x_l^i|}{k-1}    (12)

where \xi is equal for all dimensions of the solution archive [8, 28].
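The sampling procedure of Eq. (9)-(12) can be sketched as follows; the default parameters q = 0.1 and xi = 0.85 follow common ACO_R practice and are not taken from the paper:

```python
import numpy as np

def acor_sample(archive, q=0.1, xi=0.85, rng=None):
    """Sample one new solution from a sorted archive per Eq. (9)-(12).

    archive: (k, n) array, rows sorted best-first by cost function value.
    """
    rng = np.random.default_rng() if rng is None else rng
    k, n = archive.shape
    l = np.arange(1, k + 1)
    # Eq. (10): rank-based Gaussian weights favouring the best solutions
    w = np.exp(-(l - 1) ** 2 / (2 * q**2 * k**2)) / (q * k * np.sqrt(2 * np.pi))
    p = w / w.sum()                       # Eq. (11): normalized weights p_l
    new = np.empty(n)
    for i in range(n):                    # one Gaussian kernel per dimension
        l_sel = rng.choice(k, p=p)        # pick a guiding solution with prob. p_l
        mu = archive[l_sel, i]            # mean = chosen solution's i-th entry
        # Eq. (12): sigma = xi * mean distance to the other archive members
        sigma = xi * np.abs(archive[:, i] - mu).sum() / (k - 1)
        new[i] = rng.normal(mu, sigma)
    return new
```

Sampling a kernel index per dimension is one common reading of the mixture in Eq. (9); implementations that pick a single guiding solution for all dimensions also exist.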
2.5 Imperialist competitive algorithm The imperialist competitive algorithm was proposed by Atashpaz-Gargari and Lucas based on the phenomenon of imperialism. The algorithm works as follows: assume some countries as the solutions of a problem, where every country is defined as an N_var-dimensional vector:

country = [x_1, x_2, x_3, ..., x_{N_var}]

where N_var is the number of search space dimensions. Each country is evaluated with an objective function as follows:

cost = f(country) = f(x_1, x_2, x_3, ..., x_{N_var})    (13)

The total number of countries is N_country. In this algorithm, countries are put into two groups, namely imperialists and colonies; the N_imp best members of the initial population are considered imperialists, and the N_col remaining ones are the colonies. The cost function determines the power of each country, and the stronger an imperialist is, the more colonies it will possess. An imperialist and the countries governed by it are called an empire [2]. The number of colonies of an imperialist depends directly on its cost function; as a result, each imperialist's normalized cost is first calculated as follows:

M_n = c_n - \max_i \{c_i\}    (14)

where c_n is the objective function value of the n-th imperialist and M_n is its normalized cost. To normalize, the power P_n of each imperialist is calculated as

P_n = \left| \frac{M_n}{\sum_{i=1}^{N_imp} M_i} \right|    (15)

The number of colonies of each imperialist is calculated based on Eq. (16), where NC_n is the number of the n-th imperialist's colonies:

NC_n = Round(P_n \cdot N_col)    (16)

There is usually a competition within each empire: if a colony's position is better than that of its imperialist, their positions are exchanged. The total power of an empire is calculated as follows:

TC_n = Cost(imperialist_n) + \xi \cdot mean\{Cost(colonies of imperialist_n)\}    (17)

where 0 < \xi < 1. Afterwards, all empires compete with each other. Based on the TC_n values, the weakest empire and its weakest member are selected. Based on the TC_n values and a roulette wheel, the weakest member is transferred to one of the other empires (see Fig. 2).
Fig. 2. Imperialistic competition [2].
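A sketch of the empire bookkeeping in Eq. (14)-(17) for a minimization problem; the function names and the value of xi are our illustrative choices:

```python
import numpy as np

def assign_colonies(imp_costs, n_col):
    """Initial colony shares per Eq. (14)-(16)."""
    c = np.asarray(imp_costs, dtype=float)
    M = c - c.max()                          # Eq. (14): normalized cost (<= 0)
    P = np.abs(M / M.sum())                  # Eq. (15): normalized power
    return np.round(P * n_col).astype(int)   # Eq. (16): colonies per imperialist

def empire_power(imp_cost, colony_costs, xi=0.1):
    """Total cost of an empire per Eq. (17), with 0 < xi < 1."""
    return imp_cost + xi * np.mean(colony_costs)
```

The best (lowest-cost) imperialist receives the most colonies; in the subsequent competition, the empire with the largest total cost is the weakest and loses its weakest colony through the roulette wheel.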
3. The proposed competitive optimization algorithm Various methods inspired by the lives of living creatures such as ants, birds, cats, cuckoos, lions, and so forth have been presented, which are known as swarm intelligence. The selection of an appropriate algorithm to solve an optimization problem is a time-consuming process. In this paper, a novel method is proposed which can be applied to solve various optimization problems. The proposed method, which strives to apply the capabilities of different optimization algorithms, is based on the competition among different living creatures that assists them in surviving in nature. In the real world, animals have to attack other species to survive, and due to these attacks, some animals become extinct or are in danger of extinction. Applying the knowledge and logic of several optimization algorithms inspired by the social life of living creatures in parallel, together with the interactions among them, may result in weakening or strengthening the various species. In this paper, a novel optimization algorithm inspired by the interactive competition of living animals in nature to survive is proposed, called the competitive optimization algorithm (COOA). As representatives of the various living creatures in nature, the social lives of ants, birds, bees and cats are used. For the interactive competition among the living creatures, the ICA algorithm is used. At the end of each iteration of the COOA algorithm, based on the interactive competition of the optimization algorithms of the various species, the weakest species is recognized and, based on a roulette wheel, its weakest member is given to the other species to help them be strengthened. This process is repeated at the end of each iteration of the proposed algorithm. Figure 3 shows the initial population of the various species. As shown in this figure, there are four species at the beginning, with the same number of initial members.
Fig. 3. The initial population of various algorithms.
In Fig. 3, the initial populations of the four species are shown: the ants search their environment according to Eq. (8) to Eq. (12); the bees search their environment according to Eq. (5) to Eq. (7); the cats search their environment according to Eq. (3) and Eq. (4); and the birds search their environment according to Eq. (1) and Eq. (2). All species start working according to their own knowledge and logic. In each iteration, a member of the weakest species does not survive; on the other hand, the other species gain more strength and increase their populations. After some iterations, only one species remains, which means that, based on the features of the optimization problem, the optimization algorithm that yields the best results becomes the dominant optimization algorithm. In order to use ICA to integrate PSO, ACO, CSO and ABC, each species is considered to be an empire and its best particle is considered to be its imperialist. The other members of each species are considered to be colonies. The competition between the empires is based on ICA, but the evolution of the members of each empire is based on their own algorithms. For example, the change of the position of each particle in ACO is calculated from Eq. (8) to Eq. (12), and so on.
The weakest colony of each empire may immigrate from its empire to another empire; when it immigrates to a new empire, it adopts the search algorithm of the new empire. A thorny issue with the majority of evolutionary algorithms inspired by living creatures is that, when an optimization algorithm reaches a level near the optimum, it may no longer perform well, which leads to stagnation. In the proposed algorithm, after some iterations, all species except one might become extinct and only one type of living creature survives, as clearly shown in Fig. 4.
Fig. 4. The remaining species of living creatures after one step of the optimization process.
As shown in Fig. 4, the species of bees is the surviving one and the others have vanished; as a result, as soon as only one of the species remains, the COOA algorithm continues its work solely with one of the four optimization algorithms. Although the population of the remaining species is four times larger than its initial population, its optimization algorithm might not maintain a strong performance and may henceforth lead to stagnancy with no further improvement. A solution to this problem is proposed as follows. As soon as a single species remains as a direct result of the interactive competition of the optimization algorithms, the curve of the best cost function value of the final species stagnates, as in Fig. 5. The population of the final species includes the whole population of all four initial species. This final population is then divided among all four initial species equally but randomly. Afterwards, all processes of the algorithm are repeated for the four species with their new positions. It is worth mentioning that the same processes are repeated if a single species remains again. This action prevents fast convergence to one algorithm and may result in better solutions. It means that after some iterations, the problem space becomes smaller and new, stronger strategies are required to find better solutions. Another strategy is also applied: if the number of currently active species is more than one but no noticeable improvement is observed within T_stagnancy = 10 iterations of the algorithm, i.e. stagnancy happens as in Fig. 5, the deactivated species are reactivated and the population is again divided among all species equally and randomly.
Fig. 5. The remaining species of living creatures after an optimization process.
COOA Algorithm processes are shown in Algorithm 1.

Algorithm 1 COOA
1- Generate the initial population equally and randomly among all four groups and evaluate the initial population.
2- Each group starts working based on its members' strategy and social behavior.
3- Evaluate the members of all four groups.
4- Calculate the power of each active group, which equals the cost function value of the best member of the group plus a coefficient times the mean of all members' cost function values.
5- Determine the weakest group and its weakest member based on the power of the groups.
6- If the number of members of the weakest group equals zero without considering that member, deactivate this group.
7- Select one of the active groups, except the weakest one chosen above, by roulette wheel.
8- Devote the determined weakest member to the group chosen in the previous step.
9- Each group survives based on its members' strategy and social behavior.
10- Calculate the cost function values of the active group members.
11- If only one active group remains, or stagnancy has occurred, divide the current population of the final group equally and randomly among all four initial groups so that all deactivated groups are reactivated.
12- Save each active group's best result.
13- If the stopping condition is not met, go to step 4.
14- Present the results.
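The steps above can be condensed into the following sketch of the competition loop; the group update interface, the power coefficient and the roulette-wheel weighting are illustrative choices (assuming positive cost values and minimization), not the exact original implementation.

```python
import random

def run_cooa(optimizers, init_population, evaluate, max_iter=100, mean_coef=0.1):
    """Skeleton of the COOA competition loop. `optimizers` maps a group name to a
    function that performs one update step on that group's member list."""
    # Step 1: split the initial population equally and randomly among the groups.
    pool = list(init_population)
    random.shuffle(pool)
    size = len(pool) // len(optimizers)
    groups = {name: pool[i * size:(i + 1) * size] for i, name in enumerate(optimizers)}
    best = min(pool, key=evaluate)
    for _ in range(max_iter):
        # Steps 2-3 / 9-10: each active group updates its members and is evaluated.
        for name, step in optimizers.items():
            if groups[name]:
                groups[name] = step(groups[name])
        active = [n for n in groups if groups[n]]
        # Step 4: group power = best member cost + coefficient * mean member cost.
        power = {n: min(map(evaluate, groups[n]))
                    + mean_coef * sum(map(evaluate, groups[n])) / len(groups[n])
                 for n in active}
        # Steps 5-6: the weakest group (highest cost-based power) loses its weakest member.
        weakest = max(power, key=power.get)
        member = max(groups[weakest], key=evaluate)
        groups[weakest].remove(member)
        # Steps 7-8: roulette-wheel selection among the other active groups receives it.
        others = [n for n in active if n != weakest]
        if others:
            winner = random.choices(others, weights=[1.0 / (1e-12 + power[n]) for n in others])[0]
            groups[winner].append(member)
        else:
            groups[weakest].append(member)  # only one group left; keep the member
        # Step 12: keep the best solution seen so far.
        best = min([best] + [min(g, key=evaluate) for g in groups.values() if g], key=evaluate)
    return best, groups
```

The stagnancy check and the redistribution of step 11 are omitted here for brevity.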
The flow chart of the proposed algorithm is shown in Fig. 6.
[Flowchart summary: Start; generate the initial population equally and randomly among all groups; each group acts based on its social behavior; evaluate all four groups' members; calculate the power of each active group; select a member from the weakest group; transfer the chosen member to one of the other active groups selected by roulette wheel; check the active groups and deactivate a group if required; each active group acts based on its social behavior and its members are evaluated; if only one active group remains or stagnancy has occurred, reactivate all inactive groups and divide the current population equally and randomly among all groups; save the best result; repeat until the stopping conditions are met; present the results.]
Fig.6. Flowchart of the proposed algorithm.
4. Experimental Results
The evaluation of the proposed algorithm is conducted in two parts. In the first part, COOA is compared with the algorithms it builds upon (PSO, ACO, CSO, ABC and ICA); in the second part, COOA is compared with HPSOWM [41] and EPUS-PSO [44].
4.1 Comparison of COOA with its constituent algorithms
The proposed algorithm is evaluated on 15 benchmark test functions [38-40], which are presented in Table 6. All calculations are performed in Matlab R2010a on a computer with an Intel Core i5 CPU and 4 GB of RAM. The results of the proposed method are compared with six other methods, namely ACO [28], PSO [25], CSO [5], ABC [14], ICA [2] and an all-best-groups algorithm (ALL BEST), which resembles the proposed algorithm except that there is no connection or interaction among the groups. At the end of each iteration, the best cost function value among all four groups is selected; in other words, ACO, PSO, CSO, ABC and ICA run individually, and ALL BEST reports the best value obtained by these algorithms. The proposed algorithm is denoted COOA in the tables and figures of this section. Table 3 shows the initial population of all algorithms. As shown in this table, the initial population of every algorithm equals 4N, where N is the population base: for COOA and ALL BEST each species receives one fourth of the total population, N, whereas PSO, ACO, CSO, ABC and ICA use a population of 4N when run separately. Table 4 lists the population base, the number of dimensions and the maximum number of iterations used in the experiments.
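The ALL BEST baseline simply runs the constituent algorithms independently and reports, at each iteration, the best cost any of them has found so far; a minimal sketch (the `steppers` interface is illustrative):

```python
def all_best(steppers, max_iter):
    """`steppers` maps an algorithm name to a function that runs one iteration
    of that algorithm and returns its best cost for that iteration; there is no
    interaction or member exchange between the algorithms."""
    curve, best = [], float("inf")
    for _ in range(max_iter):
        best = min(best, min(step() for step in steppers.values()))
        curve.append(best)
    return curve
```

This makes ALL BEST a fair ablation of COOA: the same four algorithms and the same total population, but without the competitive member exchange.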
Table 3 The initial population size for the algorithms discussed.

Algorithm                          Population base
ALL BEST: PSO / ACO / CSO / ABC    N / N / N / N
COOA:     PSO / ACO / CSO / ABC    N / N / N / N
PSO                                4N
ACO                                4N
CSO                                4N
ABC                                4N
ICA                                4N
Table 4 Population base, number of dimensions and maximum number of iterations used in the experiments.

Dimension   Population base (n)   Max iteration
15          7                     500
30          15                    1000
60          30                    2000
Default parameter settings for PSO, CSO, ACO, ABC and ICA are presented in Table 5.
Table 5 Default parameter settings.

Algorithm   Parameter                                           Value
PSO         c1, c2                                              2.05, 2.05
            W                                                   0.9 to 0.4
CSO         SMP                                                 5
            SPC                                                 TRUE
            CDC                                                 0.4
            SRD                                                 0.25
            c                                                   2.05
            W                                                   0.9 to 0.4
            MR                                                  0.05
            -                                                   0.5
ACO         Q                                                   1
ABC         no parameter                                        -
ICA         number of empires                                   7
            assimilation coefficient                            2.0
            revolution probability                              0.1
            probability of revolution on a specific variable    0.15
            colonies mean cost coefficient                      0.1
            initialization selection pressure                   1.0
            inter-empire competition selection pressure         1.0
The performance of the proposed algorithm in comparison with the other methods is presented in Tables 7-8. Furthermore, the convergence curve of the best result per iteration and the curve of the number of objective function evaluations are presented for the proposed algorithm and the other methods in Figs. 7-24. The reported results are the mean of twenty independent runs. The last column (M) in Table 7 represents the mean number of uses of the stagnancy-escape strategies discussed with Figs. 4 and 5. The simulation results show a considerable improvement of the proposed algorithm over the other studied methods in the accuracy and precision of the final result, in the prevention of premature convergence, and in convergence speed. The t-test is a statistical method to evaluate the significance of the difference between two algorithms: the t-value is positive if the first algorithm is better than the second, and negative if it is poorer. The t-value is defined as follows:
t = (x̄1 − x̄2) / sqrt(s1²/n1 + s2²/n2)        (18)

where x̄1 and x̄2 are the mean values of the first and second methods, respectively; s1 and s2 are the standard deviations of the first and second methods, respectively; and n1 and n2 are the sample sizes, with n1 + n2 − 2 degrees of freedom. When the t-value is higher than 1.645, there is a significant difference between the two
algorithms with a 95% confidence level. The t-values between COOA and the other optimization methods are shown in Table 7. Most t-values in this table are higher than 1.645; therefore, the performance of COOA is significantly better than that of the other optimization methods with a 95% confidence level [41]. Another important conclusion from Table 7 is that, apart from the proposed algorithm, no single algorithm outperforms the others on all test functions. For example, for the Sphere function with dimension 15, CSO is the second-best optimization algorithm, whereas for the Weierstrass function with dimension 15, ABC is the second best. Since the proposed algorithm can select the dominating optimization algorithm, it can benefit from the most appropriate algorithm for each problem. This is the main concept of the proposed algorithm.
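Eq. (18) can be evaluated directly from the two result samples; the helper below assumes equal sample sizes (twenty runs per algorithm in the experiments above) and is an illustrative implementation, not the authors' code.

```python
import math

def t_value(sample1, sample2):
    """Two-sample t-value of Eq. (18) for equal-sized samples;
    the degrees of freedom are 2n - 2."""
    n = len(sample1)
    assert len(sample2) == n and n > 1
    m1, m2 = sum(sample1) / n, sum(sample2) / n
    v1 = sum((x - m1) ** 2 for x in sample1) / (n - 1)  # unbiased variance
    v2 = sum((x - m2) ** 2 for x in sample2) / (n - 1)
    return (m1 - m2) / math.sqrt(v1 / n + v2 / n)
```

A value above 1.645 (or below −1.645) indicates a significant difference at the 95% confidence level.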
Table 6 Benchmark test functions. (The table gives, for each of the fifteen benchmark functions, its name, formula, search range and optimal value; the functions include Sphere, Alpine, Rastrigin, Rosenbrock, Ackley and Weierstrass, which are referred to in the text and in Figs. 7-24.)
Table 7 Results for benchmark test functions 1-6 at 15, 30 and 60 dimensions: mean and standard-deviation fitness values. Each function is tested under three settings: (n = 7, D = 15, 500 iterations), (n = 15, D = 30, 1000 iterations) and (n = 30, D = 60, 2000 iterations). The seven data columns below correspond, in order, to COOA, ALL BEST, PSO [25], ACO [28], CSO [5], ABC [14] and ICA [2]; for each function and setting, a column lists the Mean, Std, t-value and Rank in turn.
2.40936E-29 3.52584E-29 N/A 1 3.31863E-36 7.42065E-36 N/A 1 4.54043E-64 6.23842E-64 N/A 1 0.397983623 0.441147733 N/A 1 0.198991811 0.431317801 N/A 1 9.750597752 4.633562129 N/A 1 1.04805E-14 1.18897E-14 N/A 1 6.21725E-15 0 N/A 1 6.21725E-15 0 N/A 1 16.660347 3.709869645 N/A 3 1.607136876 1.192902137 N/A 1 3.58246874 2.930127571 N/A 1 0 0 N/A 1 0 0 N/A 1 0.554546794 0.011751945 N/A 2 9.09901E-10 2.0346E-09 N/A 1 9.45153E-35 1.89031E-34 N/A 1 5.74047E-58 8.1354E-58 N/A 1
1.42595E-09 2.16238E-09 4.66 3 1.69776E-13 1.7957E-13 6.69 4 4.71093E-18 3.97734E-18 8.38 4 23.28381107 4.610559868 34.94 4 47.03129622 3.078008169 106.55 3 109.3293099 10.75106798 60.15 3 7.58417E-06 6.59339E-06 8.13 3 6.92751E-08 3.97143E-08 12.33 3 6.02689E-10 2.80125E-10 15.21 3 10.00075563 4.654266717 7.91 2 26.39572233 1.082358178 108.82 4 53.08333702 0.711648383 116.08 2 4.136336506 0.629232847 46.48 5 10.5388757 0.679275885 109.71 5 3.214541068 0.068327201 271.29 3 1.21479E-05 7.11625E-06 12.07 3 1.94761E-08 1.40838E-08 9.78 4 1.25401E-12 7.26681E-13 12.2 4
5702.511933 960.4041863 41.99 7 21173.67115 1801.059445 83.13 7 61882.16219 2304.850463 189.85 6 99.92141281 13.28391745 52.95 7 256.0624964 7.719268558 234.01 6 612.4468665 16.57277355 247.65 6 16.10489263 0.755990562 150.64 7 17.98789412 0.365623888 347.88 7 19.14822558 0.088968792 1521.86 5 3632361.354 1081492.817 23.75 7 29239652.54 4235952.846 48.81 7 112970280.9 21990849.02 36.33 6 17.42207435 0.255550888 482.07 6 39.82552717 0.862219819 326.61 6 63.30132565 0.486357976 912 7 779757341.8 141101491.9 39.08 7 3516621937 240118283.2 103.56 7 9090788695 207514384.8 309.77 6
0.003948512 0.00185703 15.03 6 766.683559 237.2042829 22.85 6 132400.8534 10872.31521 86.11 7 89.01382592 8.573067459 72.99 6 288.4081482 12.06643892 168.79 7 914.6409158 15.37802852 398.39 7 0.043262046 0.011345708 26.96 4 9.401145299 1.192651989 55.74 5 20.75896897 0.215867715 679.99 6 1040.413011 208.4045201 34.73 6 11709799.12 12441889.73 6.65 6 563890532.1 70806161.03 56.31 7 17.89635438 1.469193225 86.13 7 42.98116835 0.693868605 438.01 7 54.00251403 2.249596091 168 6 590.674742 462.6489471 9.03 6 76808146.81 18874351.61 28.78 6 21044374868 1707333771 87.16 7
2.82355E-12 5.05348E-12 3.95 2 2.41735E-15 2.21174E-15 7.73 3 3.13713E-19 2.42335E-19 9.15 3 15.69891195 5.989080632 18.02 3 53.74057606 15.10413388 25.06 4 89.38693898 14.87054582 36.15 2 4.36685E-07 4.385E-07 7.04 2 9.44339E-09 4.74258E-09 14.08 2 9.60545E-11 4.78923E-11 14.18 2 44.15118872 4.842518502 31.87 5 23.07202046 1.163815359 91.07 3 53.2441353 2.291808044 94.40 3 2.752672222 1.182339836 16.46 4 9.305656533 1.644054941 40.02 4 36.66920427 0.521117399 489.92 5 4.14444E-07 4.54384E-07 6.44 2 2.8558E-10 2.6999E-10 7.48 3 6.60717E-14 2.49905E-14 18.69 3
4.33868E-07 2.6085E-07 11.76 5 4.32669E-12 4.93274E-13 62.02 5 5765.550161 1848.320069 22.06 5 46.76295049 5.357986743 60.98 5 122.976442 13.54376209 64.07 5 317.3904415 26.29567391 81.47 5 3.944877814 8.820627806 3.16 6 12.34843639 6.127981287 14.25 6 20.86044856 0.031345357 4705.82 7 4.993325142 1.106934334 -21.31 1 14.39916046 9.314122556 9.63 2 675252.2863 294139.2807 16.23 5 0.153449684 0.033222315 32.66 2 2.73721E-05 1.36558E-05 14.17 2 4.60592E-06 2.34483E-07 -333.66 1 0.027284044 0.0118267 16.31 4 7.17231E-07 1.93686E-07 26.18 5 1012288393 196769845.9 36.38 5
3.19684E-07 7.14829E-07 3.16 4 1.02251E-18 1.50689E-18 4.8 2 3.9694E-23 8.8643E-23 3.17 2 14.13118607 7.78004771 12.46 2 24.27756259 7.195506945 23.62 2 128.5482344 26.15809135 31.62 4 0.279172139 0.595653897 3.31 5 0.050100215 0.052073417 6.8 4 1.756586398 0.376105193 33.03 4 30.66201039 29.1707912 3.37 4 163.1329315 25.51706278 44.71 5 74.39116311 9.765813923 49.11 4 0.219613867 0.268294064 5.79 3 0.63133431 0.333092519 13.4 3 21.13544475 1.927955017 75.48 4 24.17228235 52.84237071 3.23 5 1.31047E-28 2.58784E-28 3.58 2 2.88247E-15 4.99259E-15 4.08 2
Overall ranking (average rank): COOA 1 (1.16), CSO 2 (3.06), ICA 3 (3.39), ALL BEST 4 (3.44), ABC 5 (4.22), PSO 6 (6.01), ACO 7 (6.22).
M (mean number of stagnancy-escape activations), row by row: 8.4, 5.8, 11.4, 8.6, 9.0, 12.0, 8.6, 8.0, 11.8, 9.2, 11.0, 12.8, 6.6, 8.4, 7.0, 7.0, 8.0, 11.3; overall average 9.15.
Continued Table 7 (functions 7-12). Each function is tested under three settings: (n = 7, D = 15, 500 iterations), (n = 15, D = 30, 1000 iterations) and (n = 30, D = 60, 2000 iterations). The seven data columns below correspond, in order, to COOA, ALL BEST, PSO [25], ACO [28], CSO [5], ABC [14] and ICA [2]; each column lists the Mean, Std, t-value and Rank in turn.
2.51171E-16 4.2261E-16 N/A 1 4.47669E-25 4.74394E-25 N/A 1 2.22208E-39 1.78543E-39 N/A 1 4.56242E-06 6.37456E-06 N/A 1 5.9719E-18 6.84765E-18 N/A 1 2.73008E-23 4.06795E-23 N/A 1 0 0 N/A 1 0 0 N/A 1 0 0 N/A 1 1.4788E-14 2.50713E-14 N/A 1 4.69347E-46 6.77943E-46 N/A 1 7.56667E-64 8.94814E-64 N/A 1 5.61945E-08 8.25556E-08 N/A 1 2.64863E-11 9.99543E-12 N/A 1 7.4374E-11 1.80265E-11 N/A 1 8.82172E-28 1.50445E-27 N/A 1 5.62394E-44 9.74093E-44 N/A 1 2.46895E-66 3.75822E-66 N/A 1
0.009605604 0.015693833 4.33 3 92.0755911 82.9539497 7.85 5 26.44436833 40.98016039 4.56 3 0.649573574 0.885161195 5.19 5 0.399170251 0.633176712 4.46 5 0.304594833 0.246209497 8.75 3 4.0374E-193 0 4 2.8895E-283 0 4 0 0 1 7.71718E-10 8.96902E-10 6.08 3 1.02541E-12 1.07565E-12 6.74 3 2.34388E-17 5.2575E-19 315.24 3 77.35619062 17.49504962 31.27 5 58.4354261 39.99240907 10.33 3 61.01433317 42.31814997 10.2 3 4.09805E-13 4.99812E-13 5.8 3 1.84945E-16 6.87321E-17 19.03 3 8.27694E-20 7.10652E-20 8.24 4
5.24837E+23 7.70532E+23 4.82 7 1.83233E+53 1.68166E+53 7.7 6 1.275E+115 1.2131E+115 7.43 6 6629088626 3147914881 14.89 7 41208013243 17312468135 16.83 7 1.76489E+11 31657046767 39.42 6 8.9743E+162 65535 7 1.5085E+177 65535 1.63E+173 7 2.0339E+190 65535 2.1945E+186 6 12269.54015 1404.934418 61.75 7 58082.36136 3879.206592 105.87 7 220495.9144 25118.22934 62.07 6 582.7464706 67.50558826 61.04 7 1989.682166 58.54272138 240.32 6 5296.722006 129.8033718 288.54 6 16.22744517 3.074348045 37.32 7 153.0486761 34.35605805 31.5 6 2716.86189 2269.208719 8.47 6
1.9332E+17 3.34706E+17 4.08 6 3.53092E+56 5.57916E+56 4.48 7 1.6571E+129 1.5975E+129 7.33 7 128.1496654 15.78786312 57.4 6 13412080676 7763479194 12.22 6 8.74901E+11 84383955072 73.31 7 5.55148E-21 9.61543E-21 4.08 6 3.5829E+131 6.1963E+131 4.09 6 2.4477E+200 65535 2.641E+196 7 0.009360426 0.002157021 30.69 6 1842.582926 485.5309826 26.83 6 480145.228 47994.5446 70.74 7 172.2029908 21.08368574 57.75 6 2335.822888 414.3307226 39.86 7 7746.026845 278.7609346 196.49 7 0.00010978 3.26019E-05 23.81 6 311.0302782 196.4253444 11.2 7 3.6719E+11 3.41024E+11 7.61 7
23.91460561 27.36674075 6.18 5 18.5820364 20.66932505 6.36 4 161.9726764 280.5449031 4.08 4 0.001144313 0.000982591 8.2 3 0.001467055 0.001518762 6.83 3 1.301162376 1.097507807 8.38 4 2.3925E-259 0 2 0 0 1 0 0 1 1.65585E-12 1.39151E-12 8.34 2 4.53499E-15 5.42116E-15 5.92 2 7.95427E-19 1.18953E-18 4.73 2 60.84511802 43.61288248 9.86 4 100.218908 83.74864149 8.46 5 349.0012719 154.9351254 15.93 5 1.04881E-15 1.26317E-15 5.87 2 1.37872E-17 1.45131E-17 6.72 2 5.9047E-22 1.99818E-22 20.9 3
0.008281788 0.000910203 64.34 2 4.63162E-05 2.54518E-06 128.68 2 7618.259589 980.4964168 54.94 5 0.000895024 0.00046919 13.42 2 1.09569E-07 3.56585E-08 21.73 2 1337286795 716276938.1 13.2 5 5.0807E-139 8.8001E-139 4.08 5 2.8035E-290 0 3 5.3519E+148 9.1442E+148 4.14 5 1.90822E-06 8.35826E-07 16.14 5 2.97819E-11 1.32954E-11 15.84 5 1533.552394 256.4868692 42.28 5 1.625904397 0.945927175 12.15 2 0.021507383 0.007790482 19.52 2 2.00691E-06 8.95424E-07 15.85 2 6.74851E-10 2.39877E-10 19.89 5 1.24117E-14 1.4995E-15 58.53 5 10.47186553 2.778601587 26.65 5
0.407428206 0.452001649 6.37 4 0.075858286 0.112643854 4.76 3 0.135531425 0.124095837 7.72 2 0.180478124 0.098670452 12.93 4 0.035220754 0.059344982 4.2 4 0.000452046 0.00078295 4.08 2 1.3665E-222 0 3 1.3846E-220 0 5 0 0 1 4.41225E-08 7.58165E-08 4.12 4 8.97216E-12 1.55402E-11 4.08 4 2.09073E-05 3.62125E-05 4.08 4 23.18332147 39.52751578 4.15 3 58.55560187 53.13874001 7.79 4 289.3929602 35.68897394 57.34 4 2.03544E-10 3.52431E-10 4.08 4 2.58584E-15 4.4788E-15 4.08 4 1.03767E-28 1.71251E-28 4.28 2
Overall ranking (average rank): COOA 1 (1.00), CSO 2 (3.00), ALL BEST 3 (3.50), ABC 4 (3.72), ICA 5 (4.74), ACO 6 (6.50), PSO 7 (6.50).
M (mean number of stagnancy-escape activations), row by row: 5.6, 5.6, 11.0, 7.3, 8.0, 11.6, 4.6, 6.0, 8.0, 4.3, 7.3, 11.0, 9.0, 9.0, 9.6, 5.6, 7.6, 11.0; overall average 7.89.
Continued Table 7 (functions 13-15). Settings as above. The seven data columns below correspond, in order, to COOA, ALL BEST, PSO [25], ACO [28], CSO [5], ABC [14] and ICA [2]; each column lists the Mean, Std, t-value and Rank in turn.
1.837022648 0.012845948 N/A 1 3.914861597 0.123301874 N/A 1 8.720101924 0.208134441 N/A 1 9.15313E-07 1.54369E-06 N/A 1 9.76996E-15 1.01754E-14 N/A 1 2.81256E-15 2.5252E-15 1 0.666666667 4.94307E-13 N/A 1 0.333333333 0.528634599 N/A 1 6.00031522 1.000047282 N/A 1
4.752135446 0.006475143 1432.89 5 10.52092414 0.041331526 359.2 5 20.9838982 0.013950864 415.71 4 3.92643236 5.060832742 5.49 4 3.121303375 1.224414729 18.03 4 8.664051368 5.239086777 11.69 3 20.83842086 3.6170126 39.43 4 47.0009803 3.955966218 82.68 4 110.255554 5.976133584 121.66 4
5.388021102 0.05864797 418.22 6 12.1188097 0.045716426 441.13 6 25.6831944 0.310749796 320.7 6 149729911.1 35164494.57 30.11 7 1884978644 663033061.4 20.1 7 8067321780 552555304.3 103.24 6 77.41918174 3.516751803 154.32 7 207.750882 11.21866498 130.59 6 507.2794758 5.117951798 679.72 6
5.989342668 0.073786133 392.03 7 13.23063607 0.118400732 385.34 7 27.72144898 0.047716918 629.22 7 44.93013144 11.80318439 26.92 6 1280231451 362016975.8 25.01 6 53339262393 9874069720 38.2 7 72.39576596 7.782961923 65.17 6 285.0211256 22.36065665 90 7 897.5006436 15.02262456 418.7 7
4.453825442 0.586951717 31.52 4 10.51018048 0.406733436 109.73 4 21.71732306 0.147756146 360.06 5 8.862975962 4.939987482 12.69 5 16.47602497 8.640293802 13.48 5 27.1433343 7.119706745 26.96 4 10.66666667 6.658328118 10.62 2 36.33333333 19.65536398 12.95 3 100.3333333 42.61846235 15.65 3
1.862983933 0.555603749 0.33 2 5.998860386 0.028010687 116.54 3 15.99416704 0.493593297 96.02 3 0.021741452 0.002305405 66.68 2 0.000148581 3.59622E-05 29.21 2 10144417.39 9923079.73 7.23 5 30.33333549 1.527523722 137.33 5 101.3333333 5.131601439 138.44 5 292.144445 13.45362405 149.98 5
2.280119872 0.208772048 14.98 3 5.804514884 0.432515209 29.71 2 13.07727356 0.810876079 36.8 2 0.481691992 0.336739834 10.11 3 2.045859707 0.382096712 37.86 3 6.565133124 4.613725072 10.06 2 13.66666667 16.77299417 5.48 3 13.00000002 13.22875655 6.77 2 44.66780494 26.57708233 10.28 2
Overall ranking (average rank): COOA 1 (1.00), ICA 2 (2.44), ABC 3 (3.56), CSO 4 (3.89), ALL BEST 5 (4.11), PSO 6 (6.33), ACO 7 (6.67).
M (mean number of stagnancy-escape activations), row by row: 8.3, 9.0, 13.6, 5.0, 8.3, 9.6, 8.3, 10.3, 11.6; overall average 9.33.
Table 8 Ranking of the algorithms based on the mean fitness values of the solutions obtained over 20 runs.

No.  n    Dimension  Max iter  COOA  ALL BEST  PSO[25]  ACO[28]  CSO[5]  ABC[14]  ICA[2]
1    7    15         500       1     3         7        6        2       5        4
1    15   30         1000      1     4         7        6        3       5        2
1    30   60         2000      1     4         6        7        3       5        2
2    7    15         500       1     4         7        6        3       5        2
2    15   30         1000      1     3         6        7        4       5        2
2    30   60         2000      1     3         6        7        2       5        4
3    7    15         500       1     3         7        4        2       6        5
3    15   30         1000      1     3         7        5        2       6        4
3    30   60         2000      1     3         5        6        2       7        4
4    7    15         500       3     2         7        6        5       1        4
4    15   30         1000      1     4         7        6        3       2        5
4    30   60         2000      1     2         6        7        3       5        4
5    7    15         500       1     5         6        7        4       2        3
5    15   30         1000      1     5         6        7        4       2        3
5    30   60         2000      2     3         7        6        5       1        4
6    7    15         500       1     3         7        6        2       4        5
6    15   30         1000      1     4         7        6        3       5        2
6    30   60         2000      1     4         6        7        3       5        2
7    7    15         500       1     3         7        6        5       2        4
7    15   30         1000      1     5         6        7        4       2        3
7    30   60         2000      1     3         6        7        4       5        2
8    7    15         500       1     5         7        6        3       2        4
8    15   30         1000      1     5         7        6        3       2        4
8    30   60         2000      1     3         6        7        4       5        2
9    7    15         500       1     4         7        6        2       5        3
9    15   30         1000      1     4         7        6        1       3        5
9    30   60         2000      1     1         6        7        1       5        1
10   7    15         500       1     3         7        6        2       5        4
10   15   30         1000      1     3         7        6        2       5        4
10   30   60         2000      1     3         6        7        2       5        4
11   7    15         500       1     5         7        6        4       2        3
11   15   30         1000      1     3         6        7        5       2        4
11   30   60         2000      1     3         6        7        5       2        4
12   7    15         500       1     3         7        6        2       5        4
12   15   30         1000      1     3         6        7        2       5        4
12   30   60         2000      1     4         6        7        3       5        2
13   7    15         500       1     5         6        7        4       2        3
13   15   30         1000      1     5         6        7        4       3        2
13   30   60         2000      1     4         6        7        5       3        2
14   7    15         500       1     4         7        6        5       2        3
14   15   30         1000      1     4         7        6        5       2        3
14   30   60         2000      1     3         6        7        4       5        2
15   7    15         500       1     4         7        6        2       5        3
15   15   30         1000      1     4         6        7        3       5        2
15   30   60         2000      1     4         6        7        3       5        2

Overall ranking (average rank): COOA 1 (1.07), CSO 2 (3.20), ICA 3 (3.20), ALL BEST 4 (3.60), ABC 5 (3.89), ACO 6 (6.43), PSO 7 (6.47).
Fig. 7. Evolution of the average number of function evaluations (NFE) for the Ackley function with 15 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 8. Evolution of the average fitness for the Ackley function with 15 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 9. Evolution of the average number of function evaluations (NFE) for the Alpine function with 15 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 10. Evolution of the average fitness for the Alpine function with 15 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 11. Evolution of the average number of function evaluations (NFE) for the Rastrigin function with 15 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 12. Evolution of the average fitness for the Rastrigin function with 15 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 13. Evolution of the average number of function evaluations (NFE) for the Ackley function with 30 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 14. Evolution of the average fitness for the Ackley function with 30 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 15. Evolution of the average number of function evaluations (NFE) for the Alpine function with 30 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 16. Evolution of the average fitness for the Alpine function with 30 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 17. Evolution of the average number of function evaluations (NFE) for the Rastrigin function with 30 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 18. Evolution of the average fitness for the Rastrigin function with 30 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 19. Evolution of the average number of function evaluations (NFE) for the Ackley function with 60 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 20. Evolution of the average fitness for the Ackley function with 60 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 21. Evolution of the average number of function evaluations (NFE) for the Alpine function with 60 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 22. Evolution of the average fitness for the Alpine function with 60 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 23. Evolution of the average number of function evaluations (NFE) for the Rastrigin function with 60 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
Fig. 24. Evolution of the average fitness for the Rastrigin function with 60 dimensions for PSO [25], ACO [28], CSO [5], ABC [14], ICA [2], ALL BEST and COOA.
4.2 Comparison of COOA with HPSOWM and EPUS-PSO
In the second part of the experiments, we compare the results of the proposed algorithm with HPSOWM [41] and EPUS-PSO [44]. Hybrid particle swarm optimization with wavelet mutation (HPSOWM) is a hybrid PSO that incorporates a wavelet-theory-based mutation operation; the wavelet theory enhances PSO in exploring the solution space more effectively for a better solution [41]. The efficient population utilization strategy for particle swarm optimizer (EPUS-PSO) adopts a population manager to significantly improve the efficiency of PSO, using a variable number of particles in the swarm to enhance the searching ability and drive particles more efficiently; moreover, sharing principles are constructed to stop particles from falling into local minima and to make the global optimal solution easier for the particles to find [44]. The results listed in Tables 10-11 and Figs. 25-42 are obtained from the evaluation of all algorithms on 15 benchmark test functions [48], which are presented in Table 9.
Table 9 Benchmark test functions [48].

Unimodal functions:
F1  Shifted Sphere Function
F2  Shifted Schwefel's Problem 1.2
F3  Shifted Rotated High Conditioned Elliptic Function
F4  Shifted Schwefel's Problem 1.2 with Noise in Fitness
F5  Schwefel's Problem 2.6 with Global Optimum on Bounds
Basic multimodal functions:
F6  Shifted Rosenbrock's Function
F7  Shifted Rotated Griewank's Function without Bounds
F8  Shifted Rotated Ackley's Function with Global Optimum on Bounds
F9  Shifted Rastrigin's Function
F10 Shifted Rotated Rastrigin's Function
F11 Shifted Rotated Weierstrass Function
F12 Schwefel's Problem 2.13
Expanded functions:
F13 Expanded Extended Griewank's plus Rosenbrock's Function (F8F2)
F14 Expanded Rotated Extended Scaffer's F6
Hybrid composition function:
F15 Hybrid Composition Function 1
Table 10 Results for benchmark test functions F1-F8 at 40 and 80 dimensions: mean and standard-deviation fitness values. Each function is tested under two settings: (n = 20, D = 40, 750 iterations) and (n = 40, D = 80, 1500 iterations). The three data columns below correspond to COOA, HPSOWM and EPUS-PSO; each column lists the Mean, Std, t-value and Rank in turn.
COOA 4.50000E+02 0.00000E+00 N/A 1 4.50000E+02 0.00000E+00 N/A 1 4.50011E+02 4.02339E-05 N/A 1 4.50813E+02 4.83319E-01 N/A 1 1.60508E+06 5.22162E+03 N/A 1 1.70990E+06 4.53995E+02 N/A 1 6.17868E+03 6.45859E+01 N/A 1 5.33795E+04 1.07866E+04 N/A 1 8.41040E+03 7.30447E-01 N/A 2 1.61336E+04
HPSOWM 5.17021E+02 3.62027E+01 13.09 3 7.14436E+02 5.31936E+01 35.15 3 1.06410E+04 5.73429E+03 12.57 3 4.14512E+04 5.58325E+03 51.93 3 3.46126E+07 1.20870E+07 19.31 2 6.71822E+07 7.47184E+06 61.96 2 2.58395E+04 5.80829E+03 23.93 3 7.64934E+04 1.42448E+04 9.15 3 1.06899E+04 3.66475E+03 4.40 3 1.36390E+04
EPUS-PSO 4.74679E+02 1.27446E+00 136.93 2 5.25280E+02 4.48420E+00 118.71 2 6.35080E+03 2.35945E+02 176.84 2 3.54122E+04 6.46317E+03 38.25 2 3.83864E+07 3.14743E+06 82.63 3 1.20133E+08 3.47271E+07 24.11 3 1.27701E+04 3.98559E+03 11.69 2 6.90002E+04 3.00137E+04 3.46 2 6.95080E+03 2.56223E+03 -4.03 1 1.74266E+04
2.36457E-01 N/A 3 3.92903E+02 3.38360E-01 N/A 1 4.50854E+02 1.06329E-01 N/A 1 -1.79901E+02 6.18369E-02 N/A 1 -1.79930E+02 3.42710E-05
2.90859E+03 -6.06 2 2.81901E+04 1.14162E+04 17.22 3 3.62228E+05 6.62955E+04 38.59 3 3.67644E+03 1.21407E+00 22431.20 2 7.59219E+03 1.43378E+01
1.14818E+03 7.96 1 1.81800E+04 3.74989E+03 33.54 2 3.24866E+05 6.62929E+04 34.60 2 3.84792E+03 3.09540E+01 920.10 3 8.12736E+03 1.42266E+01
N/A 1 2.80591E+02 2.29054E-03 N/A 1 2.80684E+02 4.40145E-04 N/A 1
3833.04 2 2.81143E+02 2.82218E-02 137.98 2 2.81298E+02 8.02678E-03 539.74 3
4128.98 3 2.81154E+02 1.55182E-02 253.91 3 2.81296E+02 1.31890E-02 327.77 2
Overall ranking (average rank): COOA 1 (1.19), EPUS-PSO 2 (2.19), HPSOWM 3 (2.63).
M (mean number of stagnancy-escape activations), row by row: 2.50, 5.00, 4.70, 6.30, 4.00, 7.50, 8.00, 6.50, 6.00, 7.00, 9.20, 6.00, 6.00, 6.50, 6.00, 3.00; overall average 5.89.
Continued Table 10 (functions F9-F15). Each function is tested under two settings: (n = 20, D = 40, 750 iterations) and (n = 40, D = 80, 1500 iterations). The three data columns below correspond to COOA, HPSOWM and EPUS-PSO; each column lists the Mean, Std, t-value and Rank in turn.
COOA -3.23533E+02 8.81340E-04 N/A 1 -2.91694E+02 4.82011E-02 N/A 1 -1.09784E+02 6.71960E+00 N/A 1 2.96487E+02 2.47715E+01 N/A 2 1.25343E+02 2.13180E+00 N/A 1 1.68046E+02
HPSOWM -1.39518E+02 1.41786E+01 91.77 2 3.66671E+01 6.35199E+01 36.55 2 -8.24321E+01 7.03106E+01 2.74 2 1.97796E+02 7.37991E+01 -8.96 1 1.33286E+02 1.65640E+01 3.36 3 1.87051E+02
EPUS-PSO -2.41915E+01 1.88907E+01 112.05 3 2.27536E+02 3.90571E+01 94.00 3 3.39658E+01 2.01087E+01 47.94 3 4.28980E+02 3.70509E+01 21.02 3 1.25785E+02 5.63570E+00 0.52 2 1.70063E+02
5.58000E-01 N/A 1 2.16251E+04 2.06859E+01 N/A 1 4.09717E+04 1.03021E+04 N/A 1 -1.27068E+02 1.06826E-02 N/A 1 -1.17409E+02 8.26738E-02
2.19021E+01 6.13 3 2.01902E+06 1.17274E+05 120.43 2 1.04476E+07 3.10280E+06 23.72 2 -1.10664E+02 5.90612E+00 19.64 2 -8.63056E+01 1.45531E-01
3.27310E+00 4.29 2 3.27709E+06 1.31150E+05 175.52 3 1.79431E+07 4.74104E+05 266.94 3 -1.05212E+02 4.13598E+00 37.37 3 -6.72656E+01 9.38587E-01
N/A 1 -2.82925E+02 1.42281E-04 N/A 1 -2.62881E+02 2.01951E-02 N/A 1 3.25375E+02 5.44700E-01 N/A 1 2.90414E+02 1.47000E-02 N/A 1
1314.01 2 -2.82130E+02 4.31046E-01 13.03 3 -2.62796E+02 3.32904E-01 1.81 3 9.2348E+02 2.0464E+02 20.67 3 1.0023E+03 1.0418E+01 20.67 3
376.31 3 -2.82275E+02 2.72075E-01 16.89 2 -2.62873E+02 2.74608E-01 0.19 2 5.2301E+02 2.6107E+01 53.52 2 5.2133E+02 2.5148E+01 53.52 2
Overall ranking (average rank): COOA 1 (1.06), HPSOWM 3 (2.40), EPUS-PSO 2 (2.54).
M (mean number of stagnancy-escape activations), row by row: 7.23, 4.50, 6.30, 5.33, 5.64, 5.30, 4.00, 7.00, 5.50, 6.20, 7.20, 5.30, 7.00, 4.25; overall average 5.78.
Table 11 Ranking of the algorithms based on the mean fitness values of the solutions obtained over 20 runs.

Function  n    Dimension  Max iter  COOA  HPSOWM  EPUS-PSO
F1        20   40         750       1     3       2
F1        40   80         1500      1     3       2
F2        20   40         750       1     3       2
F2        40   80         1500      1     3       2
F3        20   40         750       1     2       3
F3        40   80         1500      1     2       3
F4        20   40         750       1     3       2
F4        40   80         1500      1     3       2
F5        20   40         750       2     3       1
F5        40   80         1500      3     2       1
F6        20   40         750       1     3       2
F6        40   80         1500      1     3       2
F7        20   40         750       1     2       3
F7        40   80         1500      1     2       3
F8        20   40         750       1     2       3
F8        40   80         1500      1     3       2
F9        20   40         750       1     2       3
F9        40   80         1500      1     2       3
F10       20   40         750       1     2       3
F10       40   80         1500      2     1       3
F11       20   40         750       1     3       2
F11       40   80         1500      1     3       2
F12       20   40         750       1     2       3
F12       40   80         1500      1     2       3
F13       20   40         750       1     2       3
F13       40   80         1500      1     2       3
F14       20   40         750       1     3       2
F14       40   80         1500      1     3       2
F15       20   40         750       1     3       2
F15       40   80         1500      1     3       2

Overall ranking (average rank): COOA 1 (1.13), EPUS-PSO 2 (2.37), HPSOWM 3 (2.50).
Fig. 25. Evolution of the average number of function evaluations (NFE) for F1 with 40 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 26. Evolution of the average fitness for F1 with 40 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 27. Evolution of the average number of function evaluations (NFE) for F1 with 80 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 28. Evolution of the average fitness for F1 with 80 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 29. Evolution of the average number of function evaluations (NFE) for F3 with 40 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 30. Evolution of the average fitness for F3 with 40 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 31. Evolution of the average number of function evaluations (NFE) for F3 with 80 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 32. Evolution of the average fitness for F3 with 80 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 33. Evolution of the average number of function evaluations (NFE) for F6 with 40 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 34. Evolution of the average fitness for F6 with 40 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 35. Evolution of the average number of function evaluations (NFE) for F6 with 80 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 36. Evolution of the average fitness for F6 with 80 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 37. Evolution of the average number of function evaluations (NFE) for F7 with 40 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 38. Evolution of the average fitness for F7 with 40 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 39. Evolution of the average number of function evaluations (NFE) for F7 with 80 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 40. Evolution of the average fitness for F7 with 80 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 41. Evolution of the average number of function evaluations (NFE) for F15 with 40 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
Fig. 42. Evolution of the average fitness for F15 with 40 dimensions for HPSOWM [41], EPUS-PSO [44] and COOA.
4.3. More Analysis on the Immigration Process between the Species
In this subsection, further analysis of the obtained results is given. This analysis shows how the size of each species changes for the Sphere, Rastrigin, Rosenbrock, Ackley and Weierstrass benchmark functions. The dimension of the functions is chosen to be 60, the initial population of each species is 30, and the algorithms are iterated 1000 times. The obtained results are shown in Figs. 43-47. As can be seen from these figures, the behavior of the algorithms differs from one cost function to another. For example, for the Sphere function, which has no local minima, the dominating algorithm is CSO. In the optimization of Rastrigin, however, the population of PSO becomes the largest by the end of the optimization process. Furthermore, in the optimization of Ackley and Weierstrass, the dominating algorithms are CSO and ACO, respectively. Since the different optimization algorithms work in parallel, they compete over finding the optimal value of the function, and the best algorithm finally dominates the others.
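The immigration dynamics described above can be illustrated with a minimal sketch. The step below is a hypothetical simplification of the ICA-based competition, assuming the weakest species (the one whose best cost is currently worst) surrenders one member per competition to the strongest species; the species names match the paper, but the single-member transfer rule is an illustrative assumption, not the exact ICA update:

```python
import random

def competition_step(sizes, best_costs, min_size=2):
    """Move one member from the weakest species to the strongest one.

    sizes      : dict mapping species name -> current population size
    best_costs : dict mapping species name -> best cost found (lower is better)
    min_size   : a species never shrinks below this floor (assumed safeguard)
    """
    strongest = min(best_costs, key=best_costs.get)  # lowest best cost wins
    weakest = max(best_costs, key=best_costs.get)    # highest best cost loses
    if strongest != weakest and sizes[weakest] > min_size:
        sizes[weakest] -= 1
        sizes[strongest] += 1
    return sizes

# Four species of 30 members each, as in the experiments of this subsection.
sizes = {"ABC": 30, "CSO": 30, "PSO": 30, "ACO": 30}
random.seed(0)
for _ in range(40):
    # Stand-in for each species' best cost at this iteration.
    best_costs = {s: random.random() for s in sizes}
    competition_step(sizes, best_costs)
print(sizes)  # total membership is conserved: the sizes still sum to 120
```

With real per-species best costs instead of random stand-ins, repeated application of such a step produces exactly the kind of population curves shown in Figs. 43-47, where one species gradually absorbs the members of the others.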
Fig. 43. Evolution of the population size of each of the four algorithms (ABC, CSO, PSO and ACO) in COOA for the optimization of the Sphere function.
Fig. 44. Evolution of the population size of each of the four algorithms (ABC, CSO, PSO and ACO) in COOA for the optimization of the Rastrigin function.
Fig. 45. Evolution of the population size of each of the four algorithms (ABC, CSO, PSO and ACO) in COOA for the optimization of the Rosenbrock function.
Fig. 46. Evolution of the population size of each of the four algorithms (ABC, CSO, PSO and ACO) in COOA for the optimization of the Ackley function.
Fig. 47. Evolution of the population size of each of the four algorithms (ABC, CSO, PSO and ACO) in COOA for the optimization of the Weierstrass function.
5. Conclusion
In this paper, a new optimization method based on the competitive behavior of living creatures in nature is presented. The key idea of the paper is that no individual optimization algorithm can find the best solution of every optimization problem. Hence, it is possible to make nature-inspired optimization algorithms compete with each other so that the algorithm best suited to the optimization problem at hand survives. The four algorithms of ACO, PSO, CSO and ABC are used as the competitors. These algorithms, each based on the motion behavior of the corresponding creatures, work in parallel and survive through the ICA-based competition. The ICA decides which algorithm's population must increase and which must decrease. The proposed optimization algorithm is applied to a number of benchmark test functions. The obtained results illustrate a considerable improvement in the accuracy and precision of the final results, the prevention of premature convergence, and an increase in the convergence speed of the algorithm. Moreover, the simulation results show that in the studied optimization problems, members immigrate between the different optimization algorithms. These results support the fact that none of the optimization algorithms is suitable for all optimization problems. Since the proposed method makes it possible for the members of each algorithm to immigrate to another algorithm, it eliminates the trial-and-error process otherwise needed to find the most suitable optimization algorithm.
References
1. A. Alfi, M.M. Fateh, Intelligent identification and control using improved fuzzy particle swarm optimization, Expert Systems with Applications 38(10) (2011) 12312-12317.
2. E. Atashpaz-Gargari, C. Lucas, Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition, IEEE Congress on Evolutionary Computation (2007).
3. B. Basturk, D. Karaboga, An artificial bee colony (ABC) algorithm for numeric function optimization, IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA (2006).
4. D. Bratton, J. Kennedy, Defining a standard for particle swarm optimization, Proceedings of the IEEE Swarm Intelligence Symposium (2007) 120-127.
5. Y. Sharafi, M.A. Khanesar, M. Teshnehlab, Discrete binary cat swarm optimization algorithm, 3rd International Conference on Computer, Control & Communication (IC4) (2013).
6. S.C. Chu, P.W. Tsai, Computational intelligence based on the behaviour of cats, International Journal of Innovative Computing, Information and Control (2007) 163-173.
7. M. Dorigo, Optimization, Learning and Natural Algorithms, PhD thesis, Dipartimento di Elettronica, Politecnico di Milano, Italy (1992).
8. M. Dorigo, C. Blum, Ant colony optimization theory: a survey, Theoretical Computer Science 344 (2005) 243-278.
9. M. Dorigo, L.M. Gambardella, Ant colony system: a cooperative learning approach to the traveling salesman problem, IEEE Transactions on Evolutionary Computation 1 (1997) 53-66.
10. H. Duan, C. Xu, S. Liu, S. Shao, Template matching using chaotic imperialist competitive algorithm, Meta-heuristic Intelligence Based Image Processing 31(13) (2010) 1868-1875.
11. R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan (1995) 39-43.
12. E.F. Campana, C. Fasano, A. Pinto, Dynamic analysis for the selection of parameters and initial population in particle swarm optimization, Journal of Global Optimization 48(3) (2010) 347-397.
13. S.N. Deepa, G. Sugumaran, Model order formulation of a multivariable discrete system using a modified particle swarm optimization approach, Swarm and Evolutionary Computation 1(4) (2011) 204-212.
14. D. Karaboga, An idea based on honey bee swarm for numerical optimization, Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department (2005).
15. D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Applied Mathematics and Computation 214 (2009) 108-132.
16. D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Applied Soft Computing 8 (2008) 687-697.
17. D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, Journal of Global Optimization 39(3) (2007) 459-471.
18. R. Kumar, Directed bee colony optimization algorithm, Swarm and Evolutionary Computation 17 (2014) 60-73.
19. L.X. Li, Z.J. Shao, J.X. Qian, An optimizing method based on autonomous animats: fish swarm algorithm, Systems Engineering Theory and Practice 11 (2002) 32-38.
20. H. Liu, A. Abraham, W. Zhang, A fuzzy adaptive turbulent particle swarm optimization, International Journal of Innovative Computing and Applications 1(1) (2007) 39-47.
21. S.A. Mirjalili, S.M. Mirjalili, A. Lewis, Grey wolf optimizer, Advances in Engineering Software 69 (2014) 46-61.
22. M. Molga, C. Smutnicki, Test functions for optimization needs (2005).
23. G. Peng, C. Wenming, L. Jian, Global artificial bee colony search algorithm for numerical function optimization, Seventh International Conference on Natural Computation 3 (2011) 1280-1283.
24. A. Alizadegan, B. Asady, M. Ahmadpour, Two modified versions of artificial bee colony algorithm, Applied Mathematics and Computation 225 (2013) 601-609.
25. R. Poli, J. Kennedy, T. Blackwell, Particle swarm optimization: an overview, Swarm Intelligence 1(1) (2007) 33-57.
26. Q. Bai, Analysis of particle swarm optimization algorithm, Computer and Information Science, Inner Mongolia University for Nationalities, Tongliao, China (2010).
27. R. Rajabioun, Cuckoo optimization algorithm, Applied Soft Computing 11 (2011) 5508-5518.
28. K. Socha, M. Dorigo, Ant colony optimization for continuous domains, European Journal of Operational Research 185(3) (2008) 1155-1173.
29. M. Dorigo, C. Blum, Ant colony optimization theory: a survey, Theoretical Computer Science 344(2) (2005) 243-278.
30. B. Wang, X.P. Jin, B. Cheng, Lion pride optimizer: an optimization algorithm inspired by lion pride behavior, Science China Information Sciences 55 (2012) 2369-2389.
31. H. Wang, H. Sun, C. Li, S. Rahnamayan, J. Pan, Diversity enhanced particle swarm optimization with neighborhood search, Information Sciences 223 (2013) 119-135.
32. S. Sun, J. Li, A two-swarm cooperative particle swarm optimization, Swarm and Evolutionary Computation 15 (2014) 1-18.
33. X.S. Yang, S. Deb, Engineering optimisation by cuckoo search, International Journal of Mathematical Modelling and Numerical Optimisation 1(4) (2010) 330-343.
34. X.S. Yang, S. Deb, Cuckoo search via Lévy flights, Proceedings of the World Congress on Nature & Biologically Inspired Computing (NaBIC) (2009) 210-214.
35. X. Liu, M. Fu, Cuckoo search algorithm based on frog leaping local search and chaos theory, Applied Mathematics and Computation 266 (2015) 1083-1092.
36. X.S. Yang, Engineering Optimization: An Introduction with Metaheuristic Applications, John Wiley and Sons (2010).
37. H. Izakian, W. Pedrycz, A new PSO-optimized geometry of spatial and spatio-temporal scan statistics for disease outbreak detection, Swarm and Evolutionary Computation 4 (2012) 1-11.
38. X.S. Yang, Test function benchmarks for global optimization, in: Nature-Inspired Optimization Algorithms (2014) 227-245.
39. W. Gao, L. Huang, S. Liu, F.T.S. Chan, C. Dai, X. Shan, Artificial bee colony algorithm with multiple search strategies, Applied Mathematics and Computation 271 (2015) 269-287.
40. H. Zheng, Y. Zhou, A novel cuckoo search optimization algorithm based on Gauss distribution, Journal of Computational Information Systems 8(10) (2012) 4193-4200.
41. S.H. Ling, H.H.C. Iu, K.Y. Chan, H.K. Lam, C.W. Yeung, F.H. Leung, Hybrid particle swarm optimization with wavelet mutation and its industrial applications, IEEE Transactions on Systems, Man, and Cybernetics 38(3) (2008) 743-763.
42. R. Malviya, D.K. Pratihar, Tuning of neural networks using particle swarm optimization to model MIG welding process, Swarm and Evolutionary Computation 1(4) (2011) 223-235.
43. W.F. Gao, S.Y. Liu, L.L. Huang, A novel artificial bee colony algorithm based on modified search equation and orthogonal learning, IEEE Transactions on Cybernetics 43(3) (2013) 1011-1024.
44. S. Hsieh, T. Sun, C. Liu, S. Tsai, Efficient population utilization strategy for particle swarm optimizer, IEEE Transactions on Systems, Man, and Cybernetics 44(9) (2014) 1567-1578.
45. Z. Ren, A. Zhang, C. Wen, Z. Feng, A scatter learning particle swarm optimization algorithm for multimodal problems, IEEE Transactions on Cybernetics 44(7) (2014) 1127-1140.
46. Z. Zhan, J. Zhang, Y. Li, H.S. Chung, Adaptive particle swarm optimization, IEEE Transactions on Systems, Man, and Cybernetics 39(6) (2009) 1362-1381.
47. A. Agapie, M. Agapie, G. Rudolph, G. Zbaganu, Convergence of evolutionary algorithms on the n-dimensional continuous space, IEEE Transactions on Cybernetics 43(5) (2013) 1462-1472.
48. P.N. Suganthan, N. Hansen, J.J. Liang, K. Deb, Y.P. Chen, A. Auger, S. Tiwari, Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization, Nanyang Technological University, Singapore (2005).
49. I.G. Tsoulos, A. Stavrakoudis, Enhancing PSO methods for global optimization, Applied Mathematics and Computation 216(10) (2010) 2988-3001.
50. B. Gao, X. Ren, M. Xu, An improved particle swarm algorithm and its application, Procedia Engineering 15 (2011) 2444-2448.
51. Y. Wang, H.X. Li, T. Huang, L. Li, Differential evolution based on covariance matrix learning and bimodal distribution parameter setting, Applied Soft Computing 18 (2014) 232-247.
52. Y. Wang, Z. Cai, Q. Zhang, Enhancing the search ability of differential evolution through orthogonal crossover, Information Sciences 185 (2012) 153-177.
53. Y. Wang, H.X. Li, G.G. Yen, W. Song, MOMMOP: multiobjective optimization for locating multiple optimal solutions of multimodal optimization problems, IEEE Transactions on Cybernetics 45 (2014) 830-843.
54. Y. Wang, B.C. Wang, H.X. Li, G.G. Yen, Incorporating objective function information into the feasibility rule for constrained evolutionary optimization, IEEE Transactions on Cybernetics (2015) 1-15.