INS 10296, No. of Pages 14, Model 3G, 18 September 2013
Information Sciences xxx (2013) xxx–xxx
Contents lists available at ScienceDirect
Information Sciences
journal homepage: www.elsevier.com/locate/ins
Integrating the artificial bee colony and bees algorithm to face constrained optimization problems
Hsing-Chih Tsai ⇑
Department of Construction Engineering, National Taiwan University of Science and Technology, Taiwan
Ecological and Hazard Mitigation Engineering Research Center, National Taiwan University of Science and Technology, Taiwan
Article info

Article history: Received 18 September 2012; Received in revised form 28 August 2013; Accepted 5 September 2013; Available online xxxx

Keywords: Constrained optimization problem; Swarm intelligence; Artificial bee colony; Bees algorithm

Abstract

Swarm intelligence (SI) has generated growing interest in recent decades as a class of algorithms that replicate biological and other natural systems. Several SI algorithms have been developed that replicate the behavior of honeybees. This study integrates two of these, the artificial bee colony (ABC) algorithm and the bees algorithm (BA), into a hybrid ABC–BA algorithm. In ABC–BA, an agent can perform as an ABC agent in the ABC sub-swarm and/or a BA agent in the BA sub-swarm; the ABC and BA formulations therefore coexist within ABC–BA. Moreover, the population sizes of the ABC and BA sub-swarms vary stochastically based on the current best fitness values obtained by the sub-swarms. This paper conducts experiments on six constrained optimization problems (COPs) with equality or inequality constraints. In addressing equality constraints, this paper proposes using these constraints to determine function variables rather than directly converting them into inequality constraints, an approach that perfectly satisfies the equality constraints. Experimental results demonstrate that the performance of ABC–BA approximates or exceeds that of the better of ABC and BA. Therefore, ABC–BA is recommended as an alternative to ABC and BA for handling COPs.

© 2013 Elsevier Inc. All rights reserved.
1. Introduction
Most search and optimization problems used in science and engineering involve inequality and/or equality constraints that the obtained optimal solutions must satisfy. Because constrained optimization problems (COPs) are frequently encountered in real world applications, significant attention and efforts have been invested in their efficient and effective resolution. A COP is usually written as a nonlinear programming problem of the following type. For minimization problems:
Minimize   f(x⃗)
subject to g_i(x⃗) ≤ 0,   i = 1, …, Ng
           h_j(x⃗) = 0,   j = 1, …, Nh
           x_k^L ≤ x_k ≤ x_k^U,   k = 1, …, D        (1)
where f is a D-dimensional function; x represents the function variables; g denotes the less-than-or-equal-to inequality constraints; Ng is the number of g constraints; h denotes the equality constraints; and Nh is the number of h constraints. All x variables have lower and upper bounds [xL, xU].

⇑ Address: #43, Sec. 4, Keelung Rd., Taipei 106, Taiwan. Tel.: +886 2 27301277/76663; fax: +886 2 27301074. E-mail address: [email protected]
0020-0255/© 2013 Elsevier Inc. All rights reserved. http://dx.doi.org/10.1016/j.ins.2013.09.015
Please cite this article in press as: H.-C. Tsai, Integrating the artificial bee colony and bees algorithm to face constrained optimization problems, Inform. Sci. (2013), http://dx.doi.org/10.1016/j.ins.2013.09.015
Evolutionary algorithms, especially genetic algorithms (GA), have been widely used in recent decades to solve COPs [24]. Powell and Skolnick mapped feasible and infeasible solutions into two different intervals within a GA environment [35]. Deb proposed a GA-based constraint-handling method that frees users from setting penalty parameters [4]. Douglas et al. analyzed COPs using a niched Pareto GA and penalty methods [7]. Coello compiled a comprehensive survey of popular constraint-handling techniques used with evolutionary algorithms [3]. Barkat Ullah et al. developed a hybrid evolutionary algorithm based on GA and devised a local search technique that emphasizes equality-constrained problems [1].

Swarm intelligence (SI) has generated growing interest as a class of algorithms that replicate biological and other natural systems. Optimization algorithms have been proposed and employed in various applications, and applications related to SI have demonstrated great promise. SI is the property of a system in which coherent functional global patterns emerge from the collective behaviors of agents interacting locally with their environment. SI algorithms include ant colony optimization, particle swarm optimization (PSO), artificial fish swarm, artificial bee colony (ABC) and the gravitational search algorithm, among others [6,8–11,21,22,26,27,29,36,37,40,42–45]. Sun et al. applied an improved vector PSO to solve COPs using a constraint-preserving method [39]. Kou et al. proposed a co-evolutionary PSO with infeasible degree to address COPs [19]. Masuda and Kurihara used a multi-objective PSO to minimize both objective functions and the total amount of constraint violation [23]. ABC performance on COPs has also been investigated [17,25].
However, as no single algorithm is able to achieve optimal results for all applications [46], researchers are currently exerting significant efforts to further improve existing SI algorithms and to develop new algorithms inspired by natural phenomena. Bee and bee-colony algorithms are fairly new members of SI. It is widely presumed that only some aspects of the nature and behavior of honeybees have been exploited so far and that new characteristics can be added to create new classes of algorithms [47]. In recent years, the unique nature of honeybee colonies has inspired numerous researchers to develop various new bee-inspired algorithms such as the virtual bee, the bees algorithm, BeeAdHoc, the marriage in honeybees, the BeeHive, the bee system, bee colony optimization, and the artificial bee colony [18]. The bee colony optimization algorithm has since been further developed and applied to transportation problems [41]. Yang initially proposed the virtual bee algorithm and demonstrated how it could solve two-dimensional numerical problems [47]. The bees algorithm (BA), originally proposed by Pham et al., has been used to solve unconstrained function optimization problems and to train multi-layered perceptron networks to recognize different patterns in control charts [29–34]. Basturk and Karaboga proposed the artificial bee colony (ABC) algorithm and used it to solve unconstrained and constrained function optimization problems [2,12–16]. Among all bee-inspired algorithms, ABC is considered the most effective and currently accounts for over half of bee SI applications [18]. However, BA directly performs bee SI based on perceptions of the natural behavior of bees, and its basic concepts are superior to those of other algorithms. Experience has demonstrated that the neighborhood search radius (ngh) is a key BA parameter; this study uses a stochastic self-adaptive ngh to free its setting and improve BA performance as well. While sharing common concepts, both ABC and BA provide unique advantages.
This paper thus proposes to integrate ABC and BA (ABC–BA) and applies the resulting ABC–BA algorithm to COPs. Moreover, this paper uses an infeasible-degree selection method similar to that of [19] to handle the inequality constraints, and tests and compares the efficacy of two methods that use, respectively, tolerance values and functional relationships to convert equality constraints into inequality constraints. The remainder of this paper is organized as follows: Section 2 presents the ABC and BA algorithms; Section 3 introduces the design of the ABC–BA algorithm; Section 4 conducts experiments on 6 well-known benchmark COP functions; and Section 5 provides a discussion and conclusions.
2. Artificial bee colony and bees algorithm
2.1. Artificial bee colony
Basturk and Karaboga [2] proposed the artificial bee colony (ABC) algorithm and used ABC in a wide range of applications [18]. The collective intelligence of bee swarms consists of three essential components: food sources, employed bees, and unemployed bees. Unemployed bees are further segregated into onlookers and scouts [12–16]. The three main phases of ABC are the employed phase, onlooker phase, and scout phase. Each employed bee searches for a new candidate location cxs from its old location xs using the following formula:
cx_sd = x_sd + u_st (x_sd − x_td),   t ∈ {1, 2, …, NE}, t ≠ s,   d ∈ {1, 2, …, D}        (2)
where s denotes the bee index of an employed bee; t is a random bee index that must differ from s; NE is the number of employed bees; d is a random dimension index in D; D is the problem dimension; and u_st is a random number in the range [−1, 1]. If the food source of cx_s is better than that of x_s, cx_s replaces x_s; otherwise, x_s is retained. There are NO onlookers, and each onlooker chooses a food source x_s depending on the probability p_s associated with its fitness fit_s.
p_s = fit_s / ∑_s fit_s        (3)

fit_s = { 1 / (f_v(x_s) + 1),   f_v(x_s) ≥ 0
        { 1 + |f_v(x_s)|,       f_v(x_s) < 0        (4)
where fv is the objective function value and Eq. (4) is designed for minimization. Each onlooker chooses an xs with probability ps and then searches a candidate location for that xs using Eq. (2). If a location is not improved within limit iterations, the location is assumed to be abandoned, and a scout is recruited to replace the abandoned location directly with a new xnew in the search space [xmin, xmax].
x_new,d = x_min,d + rand(0, 1) (x_max,d − x_min,d)        (5)
The population size N can be defined as the number of food sources, which equals both NE and NO, while the number of scout bees NC equals 1.
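The three ABC phases above can be condensed into a short sketch. This is a minimal illustration of Eqs. (2)–(5) for an unconstrained minimization task, not the paper's implementation; the function name `abc_iteration`, the in-place list representation, and the rule of shielding the current best bee from the scout reset (standard ABC instead memorizes the best solution separately) are our assumptions.

```python
import random

def abc_iteration(x, fv, f, lb, ub, trials, limit):
    """One ABC iteration on minimization problem f. x holds the NE
    food-source locations, fv their cached objective values, and
    trials the per-source no-improvement counters."""
    NE, D = len(x), len(x[0])

    def fitness(v):                       # Eq. (4): larger is better
        return 1.0 / (v + 1.0) if v >= 0 else 1.0 + abs(v)

    def try_move(s):                      # Eq. (2): perturb one random dimension
        t = random.choice([j for j in range(NE) if j != s])
        d = random.randrange(D)
        cx = x[s][:]
        cx[d] = x[s][d] + random.uniform(-1.0, 1.0) * (x[s][d] - x[t][d])
        cv = f(cx)
        if fitness(cv) > fitness(fv[s]):  # greedy replacement
            x[s], fv[s], trials[s] = cx, cv, 0
        else:
            trials[s] += 1

    for s in range(NE):                   # employed phase
        try_move(s)
    for _ in range(NE):                   # onlooker phase: roulette by Eq. (3)
        fits = [fitness(v) for v in fv]
        r, acc, s = random.uniform(0.0, sum(fits)), 0.0, NE - 1
        for j in range(NE):
            acc += fits[j]
            if acc >= r:
                s = j
                break
        try_move(s)
    best = min(range(NE), key=lambda s: fv[s])
    worst = max(range(NE), key=lambda s: trials[s])
    if worst != best and trials[worst] > limit:   # scout phase, Eq. (5)
        x[worst] = [random.uniform(lb, ub) for _ in range(D)]
        fv[worst], trials[worst] = f(x[worst]), 0
```

Because moves are accepted greedily and the best source is never reset, the best objective value in the population is non-increasing across iterations.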
2.2. Bees algorithm
The bees algorithm (BA) proposed by Pham et al. [29] mimics honeybee food-foraging behavior and is used in a variety of applications [5,20,28,32–34,38]. This work combines elite bees and best bees into the selected bees, with a population size of NS. The number of recruited bees (Rec) for each selected bee varies linearly from nep to nsp according to the rank of the selected bee. The population size (N) therefore equals the sum of NS and the number of remaining bees (NR). Moreover, NS uses two-thirds of N, leaving one-third as NR. The selected bees execute exploitation along with their recruited bees, while the remaining bees abandon their current sites and move randomly to new sites for exploration. Each selected bee uses Eq. (6) to update its location when a new location found in Rec trials provides better fitness.
cx = x + ngh (2 r_1 − 1)        (6)

where x is the D-dimensional location vector of a selected bee and cx is the candidate location found by a recruited bee. If cx is better than x, x is replaced; otherwise, the next recruited bee is executed, or x is retained when no further recruited bee remains. D is the problem dimension; ngh is usually a fixed neighborhood-search radius; and r_1 is a D-dimensional random vector within [0, 1]. Various ngh settings can be found in the range [0.008, 20], with either a fixed value or a value that decreases linearly across iterations [20,30,31,38]. However, identifying a common value for ngh is difficult because this value relates to the search space and the current convergence of BA. This study incorporates a stochastic self-adaptive ngh (ssngh) for each recruited bee that improves original BA performance and frees the setting of ngh. The first two selected bees determine the ssngh using the following formula:
ssngh = ( ∑_{i=1}^{D} |x_s1,i − x_s2,i| / D ) · r_2        (7)
where x_s1 and x_s2 are the D-dimensional locations of the first two selected bees, so that ssngh uses the average dimensional distance between x_s1 and x_s2 to conduct a self-adaptive neighborhood search. r_2 is a random number within [0, 1], drawn anew for each recruited bee to encourage a stochastic search. The setting of ngh is hereafter freed by the ssngh.
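The ssngh mechanism of Eqs. (6) and (7) can be sketched as follows. This is a minimal illustration assuming a minimization objective; the helper names `ssngh` and `recruit_search` and the sphere objective in the usage check are hypothetical, not from the paper.

```python
import random

def ssngh(xs1, xs2):
    """Eq. (7): stochastic self-adaptive neighbourhood radius, i.e. the
    average per-dimension distance between the two best selected bees,
    scaled by a fresh random factor for each recruited bee."""
    D = len(xs1)
    avg = sum(abs(a - b) for a, b in zip(xs1, xs2)) / D
    return avg * random.uniform(0.0, 1.0)

def recruit_search(x, fx, f, xs1, xs2, rec):
    """Send `rec` recruited bees around selected bee x (Eq. (6)),
    keeping any improvement found (minimization)."""
    for _ in range(rec):
        radius = ssngh(xs1, xs2)          # new radius per recruited bee
        cx = [xi + radius * (2.0 * random.random() - 1.0) for xi in x]
        cf = f(cx)
        if cf < fx:                       # greedy: keep only improvements
            x, fx = cx, cf
    return x, fx
```

As the two best selected bees draw closer during convergence, the average distance in Eq. (7) shrinks, so the neighborhood radius adapts automatically without a user-set ngh.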
2.3. Comparison of ABC and BA
One of the major differences between ABC and BA is that, while BA updates locations in all D dimensions, ABC modifies locations in the dth dimension only. Major differences in the three phases of each are:
(1) Although both BA selected bees and ABC employed bees have their own locations, ABC employed bees update their own positions while BA selected bees do not.
(2) Each BA selected bee updates its location depending on Rec recruited bees, whereas each ABC employed bee modifies its location by itself or by onlooker bees 0 to NO times per iteration. Moreover, BA recruited bees search the neighborhood areas of their associated selected bees within a D-dimensional radius, while ABC onlooker bees perform the search for one of the employed bees, with the search radius calculated from another employed bee in the dth dimension.
(3) BA remaining bees have their own locations and perform random searches independently; ABC scout bees do not have locations of their own, but renew the locations of ABC employed bees.
As described above, BA primarily executes neighborhood searches to obtain better solutions, a common element in optimization algorithms. ABC mainly performs one-dimensional searches according to the positions of employed bees, which differs from other swarm intelligence algorithms. Despite their limitations, both ABC and BA have been verified across a wide range of problems, and they appear to offer complementary advantages in addressing certain types of problems. This study contributes to the advancement of this bee-inspired algorithm category by combining the unique advantages of ABC and BA to perform various optimization tasks. Achieving better solutions than either could alone is the basic justification for combining ABC and BA. The primary design goals of the proposed ABC–BA are to automatically identify the winner of ABC and BA during the evolutionary process and to execute the winning sub-swarm's formulation more frequently.
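The one-dimensional versus full-dimensional contrast can be made concrete with a small sketch; the helper names below are hypothetical and only illustrate the two move styles described above.

```python
import random

def abc_style_move(x, partner):
    """ABC-style move (Eq. (2)): only one randomly chosen dimension
    changes, with a step scaled by the distance to another bee."""
    d = random.randrange(len(x))
    cx = list(x)
    cx[d] += random.uniform(-1.0, 1.0) * (x[d] - partner[d])
    return cx

def ba_style_move(x, ngh):
    """BA-style move (Eq. (6)): every dimension is perturbed inside a
    neighbourhood radius ngh."""
    return [xi + ngh * (2.0 * random.random() - 1.0) for xi in x]
```

An ABC-style move therefore leaves D − 1 coordinates untouched, while a BA-style move relocates the whole D-dimensional point within a box of half-width ngh.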
3. Integration of ABC and BA (ABC–BA)
Taking advantage of the overlap between the foundational concepts of ABC and BA, this study developed an ABC–BA algorithm that integrates ABC and BA. The integrated ABC–BA overcomes the limitations of ABC and BA in handling particular problems. The ABC–BA population (N) comprises selected bees (NS2 = 2N/3) and remaining bees (NR1 = N/3). The ABC–BA selected bees perform the functions of ABC employed bees and/or BA selected bees. The ABC–BA remaining bees are directly assigned as BA remaining bees, and one ABC scout bee is used. Therefore, two sub-swarms, of ABC and BA, coexist in ABC–BA. When the ABC sub-swarm provides higher fitness than the BA sub-swarm, more ABC–BA selected bees perform the functions of ABC employed bees. Conversely, if the BA sub-swarm gives higher fitness, more BA selected bees are created in the ABC–BA population. An index R is defined to determine whether an ABC–BA selected bee (ABCBA_sb,i) performs as an ABC employed bee or a BA selected bee. Once the members of the ABC employed bees (ABC_eb) and BA selected bees (BA_sb) are identified, the ABC and BA sub-swarms are executed, because both sub-swarms must identify their respective members prior to execution. The ABC sub-swarm follows the three phases of employed, onlooker, and scout, while the BA sub-swarm recruits bees to search neighborhood regions. Moreover, the NR1 ABC–BA remaining bees directly perform as remaining bees in the BA sub-swarm. For minimization problems:
R = { 1 − R_0,   Best_ABC < Best_BA
    { R_0,       Best_ABC > Best_BA
    { R,         Best_ABC = Best_BA        (8)

ABCBA_sb,i ∈ { BA_sb,    rand_i < R
             { ABC_eb,   rand_i ≥ R        (9)
where Best_ABC and Best_BA are the best solutions found so far by the ABC and BA sub-swarms, respectively. R_0 is a fixed constant expected within [0.5, 1]. rand_i is a random number within [0, 1] that determines whether the ith ABC–BA selected bee performs as an ABC employed bee or a BA selected bee. When Best_ABC is smaller than Best_BA, R is 1 − R_0 by Eq. (8); according to Eq. (9), each ABC–BA selected bee then has a lower probability of becoming a BA selected bee and a higher probability of becoming an ABC employed bee. However, stochastic selection may assign no ABC–BA selected bees to ABC_eb or BA_sb. To guarantee the execution of both sub-swarms in each iteration, at least two ABC–BA selected bees are needed for each of ABC_eb and BA_sb, because the bee indexes s and t in Eq. (2) must be non-identical; similarly, BA execution requires at least two selected bees due to s1 and s2 in Eq. (7). Therefore, this study assigns the first two ABC–BA selected bees (ranked by fitness values) to be both ABC employed bees and BA selected bees. Consequently, when Best_ABC achieves minimization improvements, the improvements made by the ABC sub-swarm are passed to the BA sub-swarm in the next iteration through the first-ranking ABC–BA selected bee, and vice versa. During the optimization process, Best_ABC and Best_BA may frequently be equal, in which case R retains its previous value in Eq. (8). Thus, the number of ABC–BA selected bees (NS2) equals the sum of the numbers of ABC employed bees (NE1) and BA selected bees (NS1) minus two, and both NE1 and NS1 vary from 2 to NS2. Following the definitions of ABC, this study used NO1 onlooker bees (NO1 = NE1) to update bee locations and employed one scout bee during the scout phase. In the BA sub-swarm, there are NS1 ABC–BA selected bees available to perform as BA selected bees; each recruits Rec bees to search its neighborhood region.
Additionally, there are NR1 ABC–BA remaining bees that always act as BA remaining bees in the BA sub-swarm. Therefore, the numbers of agents in the ABC and BA sub-swarms depend on BestABC and BestBA stochastically. When BestABC outperforms BestBA, most ABC–BA selected bees function as ABC employed bees. When BestBA outperforms BestABC, BA selected bees are more frequent. In sum, there are NE1 ABC employed bees, NO1 ABC onlooker bees, and one ABC scout bee in the ABC sub-swarm to improve BestABC; and NS1 BA selected bees, recruited bees, and NR1 remaining bees in the BA sub-swarm for BestBA. Fig. 1 shows the proposed ABC–BA algorithm. As previously described, ABC requires at least two employed bees and BA holds at least two selected bees and one remaining bee (one-third of the population). Therefore, restrictions of N for ABC, BA and ABC–BA are 2, 3 and 3, respectively. If BA and ABC–BA are allowed to iterate without the remaining bees, restrictions of N can be ignored, because the SI algorithms always perform with a minimum of two agents.
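The R update and the sub-swarm assignment can be sketched as follows, assuming minimization (a smaller best value wins). The function names `update_R` and `assign_subswarms` are ours, not the paper's.

```python
import random

def update_R(R, R0, best_abc, best_ba):
    """Eq. (8): bias the assignment probability R toward the winning
    sub-swarm; on a tie, keep the previous R."""
    if best_abc < best_ba:
        return 1.0 - R0      # low R: most selected bees become ABC employed bees
    if best_abc > best_ba:
        return R0            # high R: most selected bees become BA selected bees
    return R

def assign_subswarms(n_sel, R):
    """Eq. (9) plus the guarantee rule: the two best-ranked ABC-BA
    selected bees (indices 0 and 1) join BOTH sub-swarms, so each
    sub-swarm always has the two agents it needs; the remaining bees
    are assigned stochastically by comparing rand_i with R."""
    abc_eb, ba_sb = [0, 1], [0, 1]
    for i in range(2, n_sel):
        (ba_sb if random.random() < R else abc_eb).append(i)
    return abc_eb, ba_sb
```

With R0 = 0.9 and Best_ABC < Best_BA, R becomes 0.1, so on average 29 of the 32 selected bees (30 × 0.9 + 2) end up in the ABC sub-swarm, matching the counts discussed in Section 4.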
4. Numerical examples and analysis
4.1. Constraint handling method
For optimization algorithms, the fitness (fit) of a solution directly adopts the objective function value if the solution is feasible. Kou et al. used the amount of constraint violation and the solutions at hand to propose the following fit [19]:
fit(x⃗) = { f(x⃗),                         g_i(x⃗) ≤ 0, h_j(x⃗) = 0
         { f_max + ∑_{i=1}^{Ng} g_i(x⃗),   otherwise        (10)
Fig. 1. Pseudo-code of ABC–BA algorithm.
where fmax is the objective function value of the worst feasible solution in the population, so that the fit of an infeasible solution is no longer set to an extreme value. This study instead suggests the following fit:
fit(x⃗) = { f(x⃗),                          g_i(x⃗) ≤ 0, h_j(x⃗) = 0
         { f_max^1 + ∑_{i=1}^{GG} g_i(x⃗),   otherwise        (11)
where f_max^1, obtained during population initialization, is the objective value of the worst feasible solution found. The initialization process repeats if the population fails to provide a feasible solution. Once f_max^1 is obtained, it is not updated during subsequent iterations because it serves only as a penalty number. GG is the subset of the Ng constraints counting the unsatisfied g constraints. When a g constraint is violated, a positive value is incorporated into fit to increase its value under minimization. Equality constraints h are converted into inequality constraints with a tolerance value d. The constrained optimization problem can then be formulated as:
Minimize   f(x⃗)
subject to g_i(x⃗) ≤ 0,   i = 1, …, Ng
           g_l(x⃗) = |h_j(x⃗)| − d ≤ 0,   j = 1, …, Nh,   l = Ng + 1, …, Ng + Nh
           x_k^L ≤ x_k ≤ x_k^U,   k = 1, …, Nx        (12)
The converted problem of Eq. (12) thus has Ng + Nh inequality constraints, which are subsequently handled using Eq. (11) to obtain fit values. For maximization problems, a negative operator is applied directly to the function f and the problem automatically becomes a minimization problem.
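The fitness of Eq. (11) can be sketched as below for a minimization task, assuming equality constraints have already been converted via Eq. (12) so that only inequality constraints remain. The toy constraint is hypothetical, and `f1_max` stands for f_max^1.

```python
def fit(x, f, g_list, f1_max):
    """Eq. (11) fitness for minimization: feasible solutions use the
    raw objective; infeasible ones are penalized by f1_max (the worst
    feasible objective seen at initialization) plus the sum of only
    the violated inequality constraints (the GG subset)."""
    violated = [g(x) for g in g_list if g(x) > 0.0]
    if not violated:
        return f(x)
    return f1_max + sum(violated)

# hypothetical toy COP: minimize x0 + x1 subject to g(x) = 1 - x0 <= 0
f = lambda x: x[0] + x[1]
g = lambda x: 1.0 - x[0]
assert fit([2.0, 0.0], f, [g], f1_max=10.0) == 2.0    # feasible: raw objective
assert fit([0.0, 0.0], f, [g], f1_max=10.0) == 11.0   # infeasible: 10 + violation 1
```

Because f1_max is the worst feasible objective value, every infeasible solution scores worse than every feasible one, and among infeasible solutions those with smaller total violation score better.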
4.2. Settings for the studied algorithms
For all experiments, 10,000 iterations (Itemax) were executed and no additional stopping criterion was set. Fifty runs were performed independently for each algorithm. Table 1 lists the algorithm parameter settings. The ABC–BA selected bees satisfy the relationship NS2 = NE1 + NS1 − 2. The total population size (N) was 50 for all algorithms. The number of recruited bees decreased linearly from 10 to 5 according to selected-bee rank. limit was set to the product of the number of variables (NV) and the population size (N), as suggested in [16], and R_0 was set at 0.9.
4.3. Six benchmark functions
No single algorithm is able to achieve optimal results in all applications [46]. Therefore, various tests are necessary to accumulate computational experience. This study investigated optimization performance on the same six constrained equations (g01–g06) as investigated in [19] (also see Appendix A). In Table 2, Opt, D, Ng, Li, NI, Nh, LE, NE, Active, and f(x*) represent a maximization or minimization problem, the number of decision variables, inequality constraints, linear inequality constraints, nonlinear inequality constraints, equality constraints, linear equality constraints, nonlinear equality constraints, active constraints at the optimum, and the optimum, respectively. Among these, only g03 has an equality constraint; the others all have inequality constraints. For COPs with inequality constraints, obtained solutions are exactly feasible. Feasible solutions for g03 may actually be illegal when a tolerance is allowed, as described in Eq. (12). Therefore, this study suggests two approaches, namely g03-1 and g03-2, to convert equality constraints into inequality constraints.

Table 1
Parameter settings for BA, ABC and ABC–BA.

Algorithm   Parameter         Value
BA          N                 50
            NS                32
            NR                18
            Rec               {10, 5}
            Limit             N × NV
ABC         N                 50
            NE                N
            NO                NE
            NC                1
            Limit             N × NV
ABC–BA      N                 50
            NS2 (NE1, NS1)    32
            R0                0.9
            NR1               18
            Rec               {10, 5}
            NO1               NE1
            NC1               1
            Limit             N × NV

Table 2
Information of the used COPs [19].

COP   Opt   D    Function type   Ng   Li   NI   Nh   LE   NE   Active   f(x*)
g01   Max   20   Nonlinear       2    1    1    0    0    0    1        0.803619
g02   Min   13   Quadratic       9    9    0    0    0    0    6        −15
g03   Max   10   Nonlinear       0    0    0    1    0    1    0        1
g04   Min   2    Nonlinear       2    0    2    0    0    0    2        −6961.81388
g05   Max   2    Nonlinear       2    0    2    0    0    0    0        0.095825
g06   Min   8    Linear          6    3    3    0    0    0    3        7049.3307
(1) g03-1
Initially, the equality constraint of g03 is handled by the tolerance shown in Eq. (12). Therefore, g03 can be formulated as:
Max  f(x⃗) = (√D)^D ∏_{i=1}^{D} x_i
s.t. g_1(x⃗) = |h_1(x⃗)| − d = |∑_{i=1}^{D} x_i² − 1| − 0.001 ≤ 0        (13)
where D = NV = 10; and the tolerance value d is set at 0.001, which accepts some illegal solutions as feasible. When a high value of d is used, the precision achieved for the equality constraint is surrendered, while a low d value makes it difficult for optimization algorithms to obtain feasible results. Moreover, d can be decreased linearly over iterations to satisfy the equality constraint with high precision. However, a general setting for d is hard to achieve across different equality constraints, and in any event d may make obtained solutions illegal.

(2) g03-2

To guarantee legal solutions, this study suggests using the equality constraint to determine one of the g03 function variables so that it need not be handled by the optimization algorithm. This is a win–win scenario that both guarantees legal solutions and reduces the number of variables.
Max  f(x⃗) = (√10)^10 ∏_{i=1}^{10} x_i
s.t. x_10 = √(1 − ∑_{i=1}^{9} x_i²)
     g_1(x⃗) = ∑_{i=1}^{9} x_i² − 1 ≤ 0        (14)
where D = 10; NV = 9; and the search space is [0, 1]. Nine variables, x_1–x_9, are optimized, while x_10 is calculated directly from the equality constraint. The original search bounds of x_10 are thereby converted into the inequality constraint g_1, which restricts the search space of the determined x_10; this in effect converts the equality constraint into an inequality constraint. After reducing NV by 1, all obtained solutions are strictly legal.
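The g03-2 reformulation of Eq. (14) can be sketched directly. `g03_2_objective` is a hypothetical helper name, and returning None for points violating g_1 is our simplification rather than the paper's selection scheme; the point is that every evaluated solution satisfies the equality constraint exactly.

```python
import math

def g03_2_objective(x9):
    """g03-2 (Eq. (14)): the equality constraint determines x10 from the
    nine free variables, so the constraint holds exactly by construction.
    Returns the (maximization) objective, or None when g1 is violated."""
    s = sum(v * v for v in x9)
    if s - 1.0 > 0.0:                 # g1(x) = sum_{i=1..9} x_i^2 - 1 <= 0
        return None
    x10 = math.sqrt(1.0 - s)          # x10 fixed by the equality constraint
    x = list(x9) + [x10]
    D = len(x)                        # D = 10, while NV = 9
    prod = 1.0
    for v in x:
        prod *= v
    return (math.sqrt(D) ** D) * prod

# at the known optimum all x_i = 1/sqrt(10), giving f = 1
opt = [1.0 / math.sqrt(10.0)] * 9
assert abs(g03_2_objective(opt) - 1.0) < 1e-9
```

The optimizer thus searches only the nine free variables, and no tolerance d is needed at all.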
4.4. Experimental results
Results include the ‘‘Min/Max’’, ‘‘Mean’’, and ‘‘Std’’ of 50 runs. Table 3 lists average running time and rankings for ABC, BA, and ABC–BA. All obtained solutions are feasible in terms of the constraints. The ‘‘Min/Max’’ is defined as the best result of 50 runs; ‘‘Mean’’ is the average value of 50 run results; and ‘‘Std’’ denotes the standard deviation of the 50 run results. Of these, the final results of g03-1 are actually illegal owing to the allowed d. The equality constraint for g03-1 is approximately satisfied in BA and ABC–BA runs, while ABC runs never reach the maximum function value.
4.4.1. Final optima

With regard to ''Mean'' results, ABC outperforms BA on g01 and g02; BA outperforms ABC on g03-1, g04 and g06; and ABC and BA perform equally on g03-2 and g05. ABC–BA outperforms ABC and BA on g01, g02, g03-1, g03-2, and g05, and its results are very close to those of the ABC or BA winner for g04 and g06. With regard to ''Ranking'' for ''Min/Max'', ABC–BA clearly leads both
Table 3
Statistical results obtained by ABC, BA and ABC–BA. (Minus signs for g02 and g04, lost in typesetting, are restored for consistency with the rankings.)

                         Statistical solutions and running time      Ranking
COP                      ABC          BA           ABC–BA            ABC   BA    ABC–BA
g01    Max               0.700781     0.628518     0.753000          2     3     1
       Mean              0.656050     0.467657     0.693690          2     3     1
       Std               0.015455     0.052889     0.029230          1     3     2
       Avg. time (s)     33.8         103.8        52.9              1     3     2
g02    Min               −15.000      −11.460      −15.000           1     2     1
       Mean              −15.000      −9.393       −15.000           1     2     1
       Std               0.000        0.734        0.000             1     2     1
       Avg. time (s)     43.1         94.7         47.8              1     3     2
g03-1  Max               0.568        1.005        1.005             2     1     1
       Mean              0.313        1.005        1.005             2     1     1
       Std               0.118        0.000        0.000             2     1     1
       Avg. time (s)     25.6         72.0         69.2              1     3     2
g03-2  Max               1.000        1.000        1.000             1     1     1
       Mean              1.000        1.000        1.000             1     1     1
       Std               0.000        0.000        0.000             1     1     1
       Avg. time (s)     28.6         80.1         38.0              1     3     2
g04    Min               −6961.804    −6961.814    −6961.814         2     1     1
       Mean              −6948.676    −6961.814    −6961.790         3     1     2
       Std               56.448       0.000        0.062             3     1     2
       Avg. time (s)     24.2         63.3         36.6              1     3     2
g05    Max               0.095825     0.095825     0.095825          1     1     1
       Mean              0.095825     0.095825     0.095825          1     1     1
       Std               0.000000     0.000000     0.000000          1     1     1
       Avg. time (s)     25.1         67.2         33.2              1     3     2
g06    Min               7182.3218    7081.1561    7074.1764         3     2     1
       Mean              7492.4805    7138.8787    7154.3462         3     1     2
       Std               180.0504     51.7419      48.8327           3     2     1
       Avg. time (s)     36.0         85.1         75.8              1     3     2
ALL    Min/Max           –            –            –                 1.71  1.57  1.00
       Mean              –            –            –                 1.86  1.43  1.29
       Std               –            –            –                 1.71  1.57  1.29
       Avg. time (s)     31.33        81.02        52.60             1.00  3.00  2.00
4.4.2. Computational time BA requires the most computational time, and ABC requires the least, with ABC–BA in between. However, assigning fewer recruited bees (Rec) to BA or ABC–BA can further reduce computational time at a possible loss in statistical accuracy. On average, the computational times of BA and ABC–BA are 259% and 168% of ABC. Although ABC appears to be the most efficient in terms of computational time, both ABC and BA would have to be run in order to achieve ABC–BA’s level of accuracy. Thus, it can be stated that ABC–BA requires 46.8% of the sum of computational time of ABC and BA. This paper used 5–10 recruited bees in BA and ABC–BA. Computational time decreases when Rec is reduced. It remains true that the cost of ABC–BA lies between that of ABC and BA alone, regardless of which is more efficient. Although the limitations of ABC or BA in addressing particular problems cannot be avoided, integrating ABC–BA achieves performance results that approximate or exceed the ABC or BA winners. 4.4.3. Convergence speed Figs. 2 and 3 show average function values obtained during iterations. All algorithms converged quite fast on g03-2 and g05, hence their results were plotted within the first 500 iterations. Both ABC and BA showed their advantages on particular functions, where ABC outperformed BA on g01, g02 and g05; while BA was superior across the remaining functions. However, ABC–BA always performed close to the winner of ABC and BA or outperformed both ABC and BA in convergence speed. 4.4.4. Variations on ABC–BA sub-population sizes According to the average function value results shown in Figs. 2 and 3, ABC outperformed BA on g01, g02 and g05 and BA outperformed ABC on g03-1, g03-2, g04 and g06. However, it cannot be concluded that the ABC and BA sub-swarm performances in ABC–BA must respectively conform to their single ABC and BA counterparts. The first two ABC–BA selected bees were assigned to both ABC and BA sub-swarms. 
Therefore, the current best solution obtained by a sub-swarm would be shared automatically with the other sub-swarm in the next iteration to allow the losing sub-swarm to make improvements based on the winning sub-swarm’s achievements and its own innate abilities. The composition of NS2 is key to accessing ABC–BA sub-swarm performance. Based on the relationship of NS2 = NE1 + NS1 2, 30 of the 32 ABC–BA selected bees execute Eq. (9) using the setting R0 = 0.9 to determine their sub-swarms, while the remaining 2 use formulations of both ABC and BA. Values of NE1 and NS1 average 17 when BestABC and BestBA are equal in performance. When one is better than the other, the winning sub-swarm holds 29 (30 0.9 + 2) ABC–BA selected bees on average, while the losing sub-swarm holds 5 (30 0.1 + 2) ABC–BA selected bees. Taking g04 as an example, BA absolutely outperforms ABC (Table 3). However, ABC–BA performs with a large BA sub-swarm initially and quickly turns into a large ABC sub-swarm with an NE1 higher than
ABC and BA across the board. With regard to overall "Ranking" results, ABC–BA ranks first and BA second. The proposed ABC–BA is thus an effective substitute for ABC and BA, as ABC–BA always does better than, or nearly as well as, the best results of either ABC or BA alone.

Fig. 2. Result convergence of g01–g03 by ABC, BA and ABC–BA. (Panels plot f(x) versus iteration for g01, g02, g03-1 and g03-2; legend: ABC, BA, ABC-BA.)
Please cite this article in press as: H.-C. Tsai, Integrating the artificial bee colony and bees algorithm to face constrained optimization problems, Inform. Sci. (2013), http://dx.doi.org/10.1016/j.ins.2013.09.015
Fig. 3. Result convergence of g04–g06 and NE1 variations of ABC–BA on all constrained equations. (Panels plot f(x) versus iteration for g04, g05 and g06, and NE1 versus iteration for g01, g02, g03-1, g03-2, g04, g05 and g06; legend: ABC, BA, ABC-BA.)
17 (Fig. 3). Similarly, BA outperforms ABC on g03-2, yet ABC–BA settles into a large ABC sub-swarm. Moreover, the convergence of ABC–BA is significantly better than that of ABC on g01: the losing BA sub-swarm assists the winning ABC sub-swarm to achieve the ABC–BA results for g01 in Fig. 2. Evidence can be found in the NE1 values for g01 (Fig. 3), which are larger than 17 but far below 29, meaning that the BA sub-swarm sometimes registers as the winning sub-swarm and takes more ABC–BA selected bees, while the ABC sub-swarm frequently takes the lead to resolve into a large ABC sub-swarm. Additionally, ABC, BA and ABC–BA may be compared in terms of agent composition. An ABC run has 50 employed bees (NE), 50 onlooker bees, and 1 scout bee; a BA run has 32 selected bees (NS), 18 remaining bees, and a certain number of recruited bees; an ABC–BA run has average NE1 or NS1 numbers of 29 at most. Therefore, ABC–BA uses fewer agents than either ABC or BA. The outstanding performance of ABC–BA comes from its design: (1) the first two ranking ABC–BA selected bees live in both the ABC and BA sub-swarms, so that the current best solution obtained by the winning sub-swarm is shared automatically with the losing sub-swarm; (2) each ABC–BA selected bee is assigned to act as an ABC employed bee and/or a BA selected bee in each iteration; and (3) both the ABC and BA sub-swarms have the chance to become the winning sub-swarm and earn more bee agents.

4.4.5. Comparisons on the setting of R0

The preceding experiments set R0 to 0.9 beforehand. To justify this selection, this paper examined R0 from 1 to 0 at 0.1 intervals. Table 4 lists results obtained using an R0 interval of 0.2. Major differences in mean function values occur on g01 and g06. However, these differences do not make the ABC–BA results inferior to those of the ABC or BA loser. Therefore, result convergence is not particularly sensitive to the value of R0.
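The stochastic sub-swarm assignment governed by R0 can be sketched as follows. This is a minimal illustration rather than the paper's exact Eq. (9); the function name and the uniform random draw are assumptions:

```python
import random

def assign_subswarms(n_selected=32, r0=0.9, abc_wins=True):
    """Split ABC-BA selected bees into ABC and BA sub-swarms.

    The first two ranking bees join BOTH sub-swarms; each of the
    remaining bees joins the currently winning sub-swarm with
    probability r0 and the losing sub-swarm otherwise.
    """
    abc, ba = [0, 1], [0, 1]          # first two bees live in both sub-swarms
    for bee in range(2, n_selected):
        join_winner = random.random() < r0
        joins_abc = join_winner if abc_wins else not join_winner
        (abc if joins_abc else ba).append(bee)
    return abc, ba

abc, ba = assign_subswarms()
# NE1 = len(abc), NS1 = len(ba); NE1 + NS1 = n_selected + 2 always holds,
# and the winner holds about 30 x 0.9 + 2 = 29 bees on average
assert len(abc) + len(ba) == 32 + 2
```

With r0 = 0.9 the winning sub-swarm keeps most agents, while the two shared bees guarantee that the losing sub-swarm never dies out, which matches design points (1) and (3) above.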
Although Section 3 asserts that the value of R0 must be greater than 0.5 to let the winning sub-swarm prosper, results obtained using an R0 less than 0.5 are still acceptable. While an R0 of 0 generates results that are obviously worse than the others on g01, g02, g03-1, and g04, these results remain acceptable and are significantly better than those obtained by the ABC or BA loser. Result performance was evaluated further using mean absolute percentage error (MAPE) values, as shown in Fig. 4.
MAPE = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{TV_i - CV_i}{TV_i}\right|    (15)
where TVi is the target Min/Max value (Table 2), CVi is the calculated value (using the mean function values in Table 4), and n is the number of evaluations (7 here). The MAPE result is best when R0 = 1 (Fig. 4). When R0 varies within 0.3–0.9, MAPE values all fall below 2.5%. When R0 is decreased to 0, the MAPE value climbs to 3.22%. Nevertheless, all MAPE results are quite outstanding compared to the ABC and BA MAPE values (13.36% and 11.56%, respectively), as calculated from the results in Table 3. Although the MAPE results suggest 1 as the best value for R0, this paper still suggests 0.9 for R0 based on its acceptable performance and the greater number of ABC–BA selected bees available to encourage the losing sub-swarm to become the winning sub-swarm.

Table 4. Statistical results for different R0 values.

g01 (Max / Mean / Std / Avg. time (s) / Avg. NE1):
  R0 = 1.0: 0.780086 / 0.707701 / 0.029139 / 49.0 / 27.6
  R0 = 0.8: 0.755542 / 0.695842 / 0.029102 / 57.2 / 23.2
  R0 = 0.6: 0.779288 / 0.694074 / 0.030298 / 67.5 / 19.4
  R0 = 0.4: 0.751987 / 0.691286 / 0.028705 / 78.1 / 16.1
  R0 = 0.2: 0.739894 / 0.674221 / 0.024979 / 86.5 / 9.3
  R0 = 0.0: 0.746528 / 0.641206 / 0.060428 / 99.5 / 4.4

g02 (Min / Mean / Std / Avg. time (s) / Avg. NE1):
  R0 = 1.0: -15.000 / -15.000 / 0.000 / 45.9 / 31.8
  R0 = 0.8: -15.000 / -15.000 / 0.000 / 55.0 / 25.9
  R0 = 0.6: -15.000 / -15.000 / 0.000 / 66.6 / 20.0
  R0 = 0.4: -15.000 / -15.000 / 0.000 / 75.0 / 14.1
  R0 = 0.2: -15.000 / -15.000 / 0.000 / 87.3 / 8.1
  R0 = 0.0: -15.000 / -14.992 / 0.045 / 98.9 / 2.2

g03-1 (Max / Mean / Std / Avg. time (s) / Avg. NE1):
  R0 = 1.0: 1.005 / 1.005 / 0.000 / 64.3 / 3.8
  R0 = 0.8: 1.005 / 1.005 / 0.000 / 63.6 / 8.5
  R0 = 0.6: 1.005 / 1.005 / 0.000 / 56.1 / 14.1
  R0 = 0.4: 1.005 / 1.005 / 0.000 / 47.3 / 20.0
  R0 = 0.2: 1.005 / 1.005 / 0.000 / 38.7 / 25.8
  R0 = 0.0: 1.005 / 1.004 / 0.000 / 30.9 / 31.7

g03-2 (Max / Mean / Std / Avg. time (s) / Avg. NE1):
  R0 = 1.0: 1.000 / 1.000 / 0.000 / 34.0 / 31.8
  R0 = 0.8: 1.000 / 1.000 / 0.000 / 42.8 / 25.9
  R0 = 0.6: 1.000 / 1.000 / 0.000 / 52.5 / 20.0
  R0 = 0.4: 1.000 / 1.000 / 0.000 / 62.1 / 14.1
  R0 = 0.2: 1.000 / 1.000 / 0.000 / 71.3 / 8.1
  R0 = 0.0: 1.000 / 1.000 / 0.000 / 83.7 / 2.3

g04 (Min / Mean / Std / Avg. time (s) / Avg. NE1):
  R0 = 1.0: -6961.814 / -6961.737 / 0.216 / 31.3 / 30.7
  R0 = 0.8: -6961.814 / -6961.794 / 0.032 / 43.0 / 16.7
  R0 = 0.6: -6961.814 / -6961.787 / 0.037 / 48.7 / 14.0
  R0 = 0.4: -6961.813 / -6961.737 / 0.156 / 43.4 / 20.0
  R0 = 0.2: -6961.786 / -6961.498 / 0.581 / 35.6 / 26.0
  R0 = 0.0: -6961.799 / -6961.033 / 2.162 / 29.2 / 32.0

g05 (Max / Mean / Std / Avg. time (s) / Avg. NE1):
  R0 = 1.0: 0.095825 / 0.095825 / 0.000000 / 29.8 / 32.0
  R0 = 0.8: 0.095825 / 0.095825 / 0.000000 / 36.9 / 26.0
  R0 = 0.6: 0.095825 / 0.095825 / 0.000000 / 44.9 / 19.9
  R0 = 0.4: 0.095825 / 0.095825 / 0.000000 / 52.0 / 14.0
  R0 = 0.2: 0.095825 / 0.095825 / 0.000000 / 59.9 / 8.1
  R0 = 0.0: 0.095825 / 0.095825 / 0.000000 / 70.9 / 2.0

g06 (Min / Mean / Std / Avg. time (s) / Avg. NE1):
  R0 = 1.0: 7080.3963 / 7172.4100 / 91.0219 / 71.7 / 2.6
  R0 = 0.8: 7098.3834 / 7149.3923 / 37.1814 / 74.1 / 8.2
  R0 = 0.6: 7088.3416 / 7173.7453 / 87.9235 / 66.6 / 14.4
  R0 = 0.4: 7103.9841 / 7159.9631 / 53.9021 / 58.3 / 19.2
  R0 = 0.2: 7094.6294 / 7162.5669 / 45.3521 / 49.4 / 25.6
  R0 = 0.0: 7086.1585 / 7180.7318 / 54.8860 / 41.1 / 29.3

Fig. 4. MAPE results of 7 COPs for different R0 values.

Additionally, average time consumption is not fixed under the various R0 values (Table 4). On g01, execution time increases with decreasing R0, and NE1 decreases as the losing sub-swarm gets more ABC–BA selected bees. Similar behavior can be observed on all other equations with the exception of g04. For these COPs, the ABC and BA sub-swarm winner can be easily identified; therefore, R0 = 1 and R0 = 0 benefit the winning and losing sub-swarms, respectively. On g04, the R0 = 0.6 setting is the most time-consuming, with a small NE1. Extreme R0 values of 1 and 0 finally settle on a large ABC sub-swarm and consume less time, because the proposed ABC–BA determines the winning and losing sub-swarms during the iterative process. Which ABC–BA sub-swarm wins for a particular COP, e.g., g04, is not made explicit before execution. Under R0 = 0.6, more ABC–BA selected bees are assigned on average to the BA sub-swarm during the iterative process. The ABC sub-swarm is the winning and the losing sub-swarm when R0 = 1 and R0 = 0, respectively,
INS 10296
No. of Pages 14, Model 3G
18 September 2013 11
H.-C. Tsai / Information Sciences xxx (2013) xxx–xxx 356 357
358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376
with more ABC–BA selected bees (NE1) assigned to it in either case. In sum, the ABC–BA runs in Table 4 obtained outstanding results through different rounds of ABC and BA sub-swarm competition.
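The MAPE measure of Eq. (15) used in this comparison can be computed as in the following sketch; the target and calculated values shown are illustrative only, not the paper's data:

```python
def mape(targets, calculated):
    """Mean absolute percentage error of calculated vs. target values, Eq. (15)."""
    return sum(abs((t - c) / t) for t, c in zip(targets, calculated)) / len(targets)

# illustrative target (TV) and calculated (CV) values only
targets = [0.803619, -15.0, 1.0]
calc = [0.795, -15.0, 1.004]
print(f"{mape(targets, calc):.2%}")
```

Dividing by the target value makes the measure scale-free, so functions with optima of very different magnitudes (e.g., 0.095825 versus 7054.112) contribute comparably to the average.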
4.4.6. Final optima obtained

The obtained optima achieved the global optima described in Section 4.3 on g02, g04, and g05. For g03, BA and ABC–BA achieved the global optima with values equal to 1.005 on g03-1, while its constraint h1 was active at the 10^-3 tolerance. After converting g03 into g03-2, ABC, BA and ABC–BA all achieved the optima described in Table 2. For g01 and g06, global optima were either not yet obtained or unobtainable using the present algorithms. As both g01 and g06 are rather complicated problems, this paper increased N and/or Itemax for ABC–BA to confirm its outstanding performance. Four kinds of experiments were conducted: (1) N = 500 and Itemax = 10,000; (2) N = 50 and Itemax = 100,000; (3) N = 165 and Itemax = 33,000; and (4) N = 500 and Itemax = 100,000. ABC–BA settings related to N were enlarged proportionally, with the exceptions of Rec within [10, 5] and NC at 1. The computational time for experiments 1, 2, and 3 can be roughly estimated as 10x that under the setting N = 50 and Itemax = 10,000, and experiment 4 required 100x that computational time (Tables 3 and 5). Statistical results show that all four experiments improved optimization for g01 and g06. Among the first three, experiments 2 and 3 performed best on g01 and g06, respectively. However, discrepancies between the experimental results seem minor, and both enlarging N and increasing Itemax are acceptable approaches. In experiment 4, ABC–BA achieved the best solutions of all experiments on both g01 and g06. In sum, the final optima proposed by the study are those in bold in Tables 3 and 5. Table 6 lists the solutions obtained in this study, with values in bold indicating constraints that are nearly active and bordered values indicating constraints that are illegally active. The g03-2 formulation is proposed to replace g03-1, as the results obtained for g03-1 are theoretically illegal.
Though the results for g01 and g06 are not better than those described in Table 2, they are absolutely legal, whereas their references are not. These results show that the proposed ABC–BA delivers competitive performance with respect to other algorithms and approximates or exceeds the winner of either ABC or BA.
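The equality-constraint handling that distinguishes g03-2 can be illustrated concretely. Assuming g03 is the common benchmark of maximizing f(x) = (sqrt(n))^n * prod(x_i) subject to h1(x) = sum(x_i^2) - 1 = 0 (which matches the optimum f = 1 at x_i = 0.316228 reported in Table 6), the g03-2 manner uses h1 to determine the last variable exactly instead of tolerating |h1| <= epsilon. A minimal sketch, with function names of our own choosing:

```python
import math

def g03_objective(x):
    """f(x) = (sqrt(n))^n * prod(x_i) for the assumed n-dimensional g03 benchmark."""
    n = len(x)
    prod = 1.0
    for xi in x:
        prod *= xi
    return math.sqrt(n) ** n * prod

def complete_with_equality(x_free):
    """g03-2 manner: determine the last variable from h1(x) = sum(x_i^2) - 1 = 0,
    so the equality constraint is satisfied exactly (no tolerance needed)."""
    s = sum(xi * xi for xi in x_free)
    if s > 1.0:
        return None  # no feasible completion exists for these free variables
    return list(x_free) + [math.sqrt(1.0 - s)]

# at the known optimum the nine free variables equal 1/sqrt(10)
x = complete_with_equality([1 / math.sqrt(10)] * 9)
assert abs(sum(xi * xi for xi in x) - 1.0) < 1e-9   # h1 satisfied to machine precision
print(round(g03_objective(x), 6))                    # prints 1.0 at the optimum
```

Because the tenth variable is computed from the other nine, the swarm only searches a 9-dimensional space and every candidate it evaluates is feasible with respect to h1, which is exactly the efficiency argument made above.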
Table 5. Statistical results of experiments on g01 and g06 (ABC–BA statistical solutions and running time).

g01 (Max / Mean / Std / Avg. time (s)):
  Experiment 1: 0.792307 / 0.738410 / 0.023691 / 549.2
  Experiment 2: 0.793016 / 0.743788 / 0.020424 / 561.6
  Experiment 3: 0.777644 / 0.729574 / 0.024808 / 563.6
  Experiment 4: 0.803476 / 0.759529 / 0.020746 / 5127.3

g06 (Min / Mean / Std / Avg. time (s)):
  Experiment 1: 7059.425 / 7085.965 / 15.806 / 759.1
  Experiment 2: 7063.601 / 7105.387 / 17.340 / 704.4
  Experiment 3: 7059.035 / 7097.013 / 18.167 / 840.8
  Experiment 4: 7054.112 / 7074.429 / 13.621 / 7500.8
Table 6. Obtained global optima in this study. (In the original, bold values mark constraints near active and bordered values mark constraints illegally active.)

g01: f(x*) = 0.803476
  x* = (3.167951, 3.123296, 3.091838, 3.052015, 3.032485, 2.994904, 2.946604, 2.946026, 0.480478, 0.497503, 0.471884, 0.477583, 0.467803, 0.471832, 0.456404, 0.454932, 0.457872, 0.450660, 0.455041, 0.439526)
  g1(x*) = -3.51E-06 (near active); g2(x*) = -120.0634

g02: f(x*) = -15.000
  x* = (1.000000 for x1–x9 and x13; 3.000000 for x10–x12)
  g1(x*)–g6(x*) = 0.000000 (active); g7(x*)–g9(x*) = -5.000000

g03-1: f(x*) = 1.004998
  x* = (0.316336, 0.316106, 0.316809, 0.316002, 0.316252, 0.316884, 0.316446, 0.315988, 0.316169, 0.316865)
  h1(x*) = 0.001000 (at the tolerance bound)

g03-2: f(x*) = 1.000000
  x* = (0.316228, ..., 0.316228) (ten identical components)
  h1(x*) = 0.000000

g04: f(x*) = -6961.814
  x* = (14.09500, 0.842961)
  g1(x*) = -5.05E-09; g2(x*) = -1.60E-10 (both near active)

g05: f(x*) = 0.095825
  x* = (1.227971, 4.245373)
  g1(x*) = -1.73746; g2(x*) = -0.167763

g06: f(x*) = 7054.112
  x* = (593.4886, 1352.519, 5108.104, 182.8909, 295.6959, 216.4511, 287.1467, 395.6866)
  g1(x*) = -0.001645; g2(x*) = -1.21E-04; g3(x*) = -9.35E-05; g4(x*) = -36.82935; g5(x*) = -1.630901; g6(x*) = -2.468396
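A reported optimum from Table 6 can be checked independently. Assuming g05 here is the well-known two-variable benchmark with f(x) = sin^3(2*pi*x1) * sin(2*pi*x2) / (x1^3 * (x1 + x2)) and inequality constraints g1 = x1^2 - x2 + 1 <= 0 and g2 = 1 - x1 + (x2 - 4)^2 <= 0 (an assumption consistent with the values in Table 6, not stated in this section), the Table 6 solution can be verified as follows:

```python
import math

# Assumed form of the two-variable benchmark labeled g05 in this paper
def f(x1, x2):
    return (math.sin(2 * math.pi * x1) ** 3 * math.sin(2 * math.pi * x2)
            / (x1 ** 3 * (x1 + x2)))

def g1(x1, x2):
    return x1 ** 2 - x2 + 1          # feasible when <= 0

def g2(x1, x2):
    return 1 - x1 + (x2 - 4) ** 2    # feasible when <= 0

x1, x2 = 1.227971, 4.245373          # solution reported in Table 6
assert g1(x1, x2) <= 0 and g2(x1, x2) <= 0   # both constraints satisfied
print(round(f(x1, x2), 4))                    # close to the reported 0.095825
```

Under this assumed formulation, g1 and g2 evaluate to about -1.737 and -0.168 at the reported point, which agrees with the constraint values listed in Table 6.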
Fig. A.1. Plots of 2-dimensional COPs without considering constraints (panels over (x1, x2) for g01–g06).
4.4.7. Comparisons on g03-1 and g03-2

With regard to g03, the g03-1 formulation in this study is very similar to those in [19], while g03-2 is novel. g03-1 converts the equality constraint of g03 into an inequality constraint, whereas g03-2 retains the equality constraint, thus providing legal solutions. g03-1 sets a tolerance to handle the equality constraint with an inequality condition, meaning that the tolerance may persist in its solutions. Although reducing the tolerance may ease the failure to satisfy the equality, it greatly increases optimization difficulty; thus, g03-1 must retain a non-negligible tolerance, and an infinitesimal tolerance means that ABC, BA and ABC–BA will necessarily fail to obtain any feasible solution. In our experience, decreasing the g03-1 tolerance to 10^-6 causes the approaches to spend significantly more time obtaining an fmax near 1. Therefore, g03-2 is highly acceptable when used to address COPs with equality constraints.
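For contrast, the g03-1 manner replaces h1(x) = 0 with the inequality |h1(x)| <= tol. A minimal sketch, assuming the same g03 equality constraint sum(x_i^2) = 1 as above (function names are ours):

```python
def h1(x):
    """Equality constraint of the assumed g03 benchmark: sum(x_i^2) - 1 = 0."""
    return sum(xi * xi for xi in x) - 1.0

def feasible_g03_1(x, tol=1e-3):
    """g03-1 manner: treat the equality as the inequality |h1(x)| <= tol,
    so any accepted solution may violate the true equality by up to tol."""
    return abs(h1(x)) <= tol

# the g03-1 solution reported in Table 6
x = [0.316336, 0.316106, 0.316809, 0.316002, 0.316252,
     0.316884, 0.316446, 0.315988, 0.316169, 0.316865]
print(round(abs(h1(x)), 4))          # sits essentially at the 0.001 bound
print(feasible_g03_1(x, tol=1e-6))   # False: a tighter tolerance rejects it
```

This makes the trade-off above concrete: the accepted g03-1 solution exploits the full tolerance (hence its slightly inflated fmax of 1.005), and shrinking the tolerance toward zero leaves such solutions infeasible.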
5. Conclusion
The proposed ABC–BA algorithm is executed with both ABC and BA sub-swarms. The population sizes of the sub-swarms vary dynamically according to the most recent ABCbest and BAbest. When one sub-swarm performs better than the other, the former gains most of the ABC–BA selected bees to improve itself in the next iteration, and its formulation is executed more frequently. The first two ranking ABC–BA selected bees perform as both ABC employed bees and BA selected bees to ensure sub-swarm survival and to communicate the current best solution to the losing sub-swarm in the next iteration. Experiments were conducted on COPs with equality or inequality constraints. In addressing equality constraints, this paper studied the efficacy of two manners of handling them: one erects a tolerance to convert the equality constraint into an inequality constraint; the other determines function variables using the functional relationship. This paper found the latter to be more effective, as it guarantees exact satisfaction of the equality constraint, and therefore proposes it be used to avoid the tolerance adopted by the former. Additionally, using the equality constraint to determine a function variable reduces the number of undetermined variables and improves the efficiency of obtaining feasible solutions, whereas feasible solutions are hard to find under the tiny tolerance allowed by the former. Experimental results on six COPs prove that the performance of ABC–BA approximates or exceeds that of the ABC or BA winner. As both ABC and BA offer unique advantages in handling COPs, researchers would have to execute both to achieve a performance comparable to ABC–BA. Running ABC–BA requires roughly half (46.8%) of the combined computational time of ABC and BA while giving outstanding results. Therefore, ABC–BA may be used in place of either ABC or BA.
Notably, according to the NE1 results, the proposed ABC–BA does not generally behave like the single ABC or BA winner. Rather, ABC–BA allows either sub-swarm to take the lead while most of its agents execute that sub-swarm's formulations for further optimization. Therefore, this paper successfully integrated ABC and BA into an ABC–BA able to handle COPs effectively. This paper does not simply or directly run the ABC and BA algorithms in separate ABC–BA sub-swarms. In the ABC–BA design, agents are categorized into selected bees and remaining bees. ABC–BA selected bees act as ABC employed bees and/or BA selected bees to initialize the ABC and BA sub-swarms. ABC onlooker bees are deployed first, followed by one ABC scout bee; BA recruited bees are then generated, with the BA remaining bees adopting all ABC–BA remaining bees. While the winning sub-swarm is allowed to prosper, the approach lets the losing sub-swarm survive and have a new chance to win. Therefore, clearly understanding both the ABC and BA algorithms is essential to this ABC–BA proposal. Integrating other SI algorithms into ABC–BA, or into a new algorithm, offers the prospect of other solution scenarios that may achieve similar or greater success in future works.
Appendix A
Plots of the six COPs are shown in Fig. A.1. Although the first two design variables illustrate only part of each function's attributes, the constraints are the key concern of the COPs.
References
[1] A.S.S.M. Barkat Ullah, R. Sarker, C. Lokan, Handling equality constraints in evolutionary optimization, European Journal of Operational Research 221 (3) (2012) 480–490.
[2] B. Basturk, D. Karaboga, An artificial bee colony (ABC) algorithm for numerical function optimization, in: Proceedings of the IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, 2006.
[3] C.A.C. Coello, Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art, Computer Methods in Applied Mechanics and Engineering 191 (11–12) (2002) 1245–1287.
[4] K. Deb, An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering 186 (2000) 311–338.
[5] T. Dereli, G.S. Das, A hybrid 'bee(s) algorithm' for solving container loading problems, Applied Soft Computing Journal 11 (2) (2011) 2854–2862.
[6] M. Dorigo, V. Maniezzo, A. Colorni, The ant system: optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics – Part B 26 (1) (1996) 29–41.
[7] A.G. Douglas, V.L. Ricardo, S. Adriano, Handling constraints as objectives in a multi-objective genetic based algorithm, Journal of Microwaves and Optoelectronics 2 (6) (2002) 50–58.
[8] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1995, pp. 39–43.
[9] L. Gibelli, B.D. Shizgal, A.W. Yau, Ion energization by wave-particle interactions: comparison of spectral and particle simulation solutions of the Vlasov equation, Computers and Mathematics with Applications 59 (8) (2010) 2566–2581.
[10] M. Jaberipour, E. Khorram, B. Karimi, Particle swarm algorithm for solving systems of nonlinear equations, Computers and Mathematics with Applications 62 (2) (2011) 566–576.
[11] F. Kang, J. Li, Z. Ma, Rosenbrock artificial bee colony algorithm for accurate global optimization of numerical functions, Information Sciences 181 (16) (2011) 3508–3531.
[12] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Tech. Rep. TR-06, Erciyes University, Engineering Faculty, Computer Engineering Department, 2005.
[13] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, Journal of Global Optimization 39 (3) (2007) 459–471.
[14] D. Karaboga, B. Basturk, Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems, in: Melin et al. (Eds.), Proceedings of IFSA 2007, LNAI 4529, 2007, pp. 789–798.
[15] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Applied Soft Computing 8 (1) (2008) 687–697.
[16] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Applied Mathematics and Computation 214 (1) (2009) 108–132.
[17] D. Karaboga, B. Akay, A modified artificial bee colony (ABC) algorithm for constrained optimization problems, Applied Soft Computing Journal 11 (3) (2011) 3021–3031.
[18] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: artificial bee colony (ABC) algorithm and applications, Artificial Intelligence Review (2012) 1–37.
[19] X. Kou, S. Liu, J. Zhang, W. Zheng, Co-evolutionary particle swarm optimization to solve constrained optimization problems, Computers and Mathematics with Applications 57 (11–12) (2009) 1776–1784.
[20] Z. Khanmirzaei, M. Teshnehlab, A. Sharifi, Modified honey bee optimization for recurrent neuro-fuzzy system model, in: The 2nd International Conference on Computer and Automation Engineering, vol. 5, 2010, pp. 780–785.
[21] X. Li, Z. Shao, J. Qian, An optimizing method based on autonomous animates: fish-swarm algorithm, Systems Engineering – Theory & Practice 22 (2002) 32–38.
[22] V.J. Manoj, E. Elias, Artificial bee colony algorithm for the design of multiplier-less nonuniform filter bank transmultiplexer, Information Sciences 192 (2012) 193–203.
[23] K. Masuda, K. Kurihara, A constrained global optimization method based on multi-objective particle swarm optimization, Electronics and Communications in Japan 95 (1) (2012) 43–54.
[24] E. Mezura-Montes, C.A.C. Coello, A simple multimembered evolution strategy to solve constrained optimization problems, IEEE Transactions on Evolutionary Computation 9 (1) (2005) 1–17.
[25] E. Mezura-Montes, O. Cetina-Domínguez, Empirical analysis of a modified artificial bee colony for constrained numerical optimization, Applied Mathematics and Computation 218 (22) (2012) 10943–10973.
[26] Y. Mo, H. Liu, Q. Wang, Conjugate direction particle swarm optimization solving systems of nonlinear equations, Computers and Mathematics with Applications 57 (11–12) (2009) 1877–1882.
[27] I. Montalvo, J. Izquierdo, R. Pérez, M.M. Tung, Particle swarm optimization applied to the design of water supply systems, Computers and Mathematics with Applications 56 (3) (2008) 769–776.
[28] A. Moussa, N. El-Sheimy, Localization of wireless sensor network using bees optimization algorithm, in: Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, 2010, pp. 478–481.
[29] D.T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, M. Zaidi, The Bees Algorithm – A Novel Tool for Complex Optimisation Problems, Manufacturing Engineering Centre, Cardiff University, Cardiff CF24 3AA, UK, 2005.
[30] D.T. Pham, A. Ghanbarzadeh, E. Koç, S. Otri, S. Rahim, M. Zaidi, The bees algorithm – a novel tool for complex optimisation problems, in: Proceedings of IPROMS 2006, 2006, pp. 454–461.
[31] D.T. Pham, A.J. Soroka, A. Ghanbarzadeh, E. Koç, S. Otri, M. Packianather, Optimising neural networks for identification of wood defects using the bees algorithm, in: Proceedings of the 2006 IEEE International Conference on Industrial Informatics, Singapore, 2006.
[32] D.T. Pham, A.J. Soroka, E. Koç, A. Ghanbarzadeh, S. Otri, Some applications of the bees algorithm in engineering design and manufacture, in: Proceedings of the International Conference on Manufacturing Automation (ICMA 2007), Singapore, 2007.
[33] D.T. Pham, A. Ghanbarzadeh, Multi-objective optimisation using the bees algorithm, in: Proceedings of IPROMS 2007 Conference, 2007.
[34] D.T. Pham, A. Haj Darwish, E.E. Eldukhri, S. Otri, Using the bees algorithm to tune a fuzzy logic controller for a robot gymnast, in: Proceedings of IPROMS 2007 Conference, 2007.
[35] D. Powell, M.M. Skolnick, Using genetic algorithms in engineering design optimization with nonlinear constraints, in: The 5th International Conference on Genetic Algorithms, San Francisco, CA, USA, 1993, pp. 424–431.
[36] B.Y. Qu, J.J. Liang, P.N. Suganthan, Niching particle swarm optimization with local search for multi-modal optimization, Information Sciences 197 (2012) 131–143.
[37] E. Rashedi, H. Nezamabadi-pour, S. Saryazdi, GSA: a gravitational search algorithm, Information Sciences 179 (13) (2009) 2232–2248.
[38] G.A. Ruz, E. Goles, Learning gene regulatory networks using the bees algorithm, Neural Computing and Applications (2011) 1–8.
[39] C.-L. Sun, J.-C. Zeng, J.-S. Pan, An improved vector particle swarm optimization for constrained optimization problems, Information Sciences 181 (6) (2011) 1153–1163.
[40] S. Sundar, A. Singh, A swarm intelligence approach to the quadratic minimum spanning tree problem, Information Sciences 180 (17) (2010) 3182–3191.
[41] D. Teodorovic, M.D. Orco, Bee colony optimization – a cooperative learning approach to complex transportation problems, Advanced OR and AI Methods in Transportation (2005) 51–60.
[42] H.C. Tsai, Predicting strengths of concrete-type specimens using hybrid multilayer perceptrons with center-unified particle swarm optimization, Expert Systems With Applications 37 (2009) 1104–1112.
[43] H.C. Tsai, Y.H. Lin, Modification of the fish swarm algorithm with particle swarm optimization formulation and communication behavior, Applied Soft Computing 11 (2011) 5367–5374.
[44] H.C. Tsai, Y.Y. Tyan, Y.W. Wu, Y.H. Lin, Isolated particle swarm optimization with particle migration and global best adoption, Engineering Optimization 44 (12) (2012) 1405–1424.
[45] X.-Z. Wang, Y.-L. He, L.-C. Dong, H.-Y. Zhao, Particle swarm optimization for determining fuzzy measures from data, Information Sciences 181 (19) (2011) 4230–4252.
[46] D.H. Wolpert, W.G. Macready, No free lunch theorems for optimization, IEEE Transactions on Evolutionary Computation 1 (1997) 67–82.
[47] X.S. Yang, Engineering optimizations via nature-inspired virtual bee algorithms, Lecture Notes in Computer Science 3562 (2) (2005) 317–323.