Bacterial foraging algorithm with varying population


BioSystems 100 (2010) 185–197


M.S. Li a, T.Y. Ji a, W.J. Tang a, Q.H. Wu a,∗, J.R. Saunders b

a Department of Electrical Engineering and Electronics, The University of Liverpool, Brownlow Hill, Liverpool, L69 3GJ, UK
b School of Biological Sciences, The University of Liverpool, Liverpool, L69 3GJ, UK

∗ Corresponding author. Tel.: +44 151 7944535; fax: +44 151 7944540. E-mail addresses: [email protected], [email protected] (Q.H. Wu).

Article history: Received 4 March 2010; Accepted 7 March 2010
Keywords: Bacterial foraging algorithm; Bacterial colony behaviors; Optimization

Abstract

Most evolutionary algorithms (EAs) are based on a fixed population size. Due to this feature, such algorithms do not fully exploit their search potential and can be time consuming. This paper presents a novel nature-inspired heuristic optimization algorithm, the bacterial foraging algorithm with varying population (BFAVP), based on a more bacterially-realistic model of bacterial foraging patterns, which incorporates a varying population framework and the underlying mechanisms of bacterial chemotaxis, metabolism, proliferation, elimination and quorum sensing. In order to evaluate its merits, BFAVP has been tested on several benchmark functions, and the results show that it performs better than other popularly used EAs in terms of both accuracy and convergence.

Crown Copyright © 2010 Published by Elsevier Ireland Ltd. All rights reserved.

1. Introduction

Various conventional optimization methods have been developed to solve science and engineering problems over the past few decades, such as Linear Programming (LP), Non-Linear Programming (NLP) and Convex Programming. LP and NLP algorithms can only be applied to problems that are formulated in an augmented form. However, most natural and engineering problems are black-box problems, in which fitness evaluation is based on physical measurements or is hard to transform into an augmented form. Moreover, traditional methods do not guarantee finding the globally optimal solution, because they are susceptible to local entrapment. Based on studies of biological evolution and animal behaviors, biologically inspired algorithms (Bounds, 1987), usually referred to as EAs, such as the Genetic Algorithm (GA) (Holland, 1992), Evolutionary Programming (EP) (Fogel, 1966) and the Particle Swarm Optimizer (PSO) (Kennedy and Eberhart, 1995), have been developed and used to solve complex optimization problems (Back, 1996). Recently, a Group Search Optimizer (GSO) inspired by animal searching behavior and group living theory has been developed at the University of Liverpool (He et al., in press). GSO performs better than GA, PSO and FEP in solving multimodal functions (He et al., in press), but it gives only a modest search performance on unimodal functions.

In sociology and biology, a population is defined as a collection of inter-breeding organisms of a particular species. Population size is an essential phenomenon in evolution and has been incorporated in the development of EAs. Most EAs are based on a fixed population framework, which introduces unnecessary computation in the optimization process. The major drawback is the redundant computation load caused by a lack of knowledge about (1) the relationship between population size and dimensionality, and (2) the relationship between population size and the complexity of the optimization problem. Some methods have been investigated that aim to give suggestions on choosing a proper population size for EAs; however, most of them focus on a specific EA (Harik et al., 1999) and cannot guarantee their predicted performance. Therefore, in most applications the population size is blindly determined and the methods used are still based on a fixed population size.

To overcome this problem, a few EAs designed with a varying population size have been proposed over the past few years. The Genetic Algorithm with Varying Population Size (GAVaPS) (Arabas et al., 1994) introduced the concept of a varying population and reported an enhanced convergence speed. The Adaptive Population sized Genetic Algorithm (APGA) (Back et al., 2000) and the Population Resizing on Fitness Improvement Genetic Algorithm (PRoFIGA) (Eiben et al., 2004) have also been investigated. As far as computational efficiency is concerned, however, these algorithms do not provide better performance than GAs with a fixed population, and they are merely developed with some algorithmic improvements (Harik and Lobo, 1999; Lobo and Goldberg, 2004). Furthermore, these algorithms cannot be generalized to a wide range of optimization problems, as their population size variation relies on knowledge of the specific optimization problem. Last but not least, there is no evidence showing that these algorithms provide better optimization results than GAs in generic cases. Meanwhile, the population size of some algorithms, such as the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), is determined by the dimension of the objective function (Hansen et al., 2003).


Since these algorithms have not been developed as generic methods, and no results better than those of conventional EAs have been reported for them despite their varying population frameworks, the experimental studies presented in this paper exclude them.

The study of bacterial behaviors for the development of novel optimization algorithms has received a great deal of attention over the past few years. The Bacterial Foraging Algorithm (BFA) proposed by Kevin Passino is one of the emerging optimization methods (Passino, 2002). BFA stems from the study of E. coli chemotaxis behavior and is claimed to have a satisfactory performance in optimization problems. It has been applied to adaptive control systems (Passino, 2002), PID controller design (Kim and Cho, 2005) and dynamic environment optimization (Tang et al., 2006a). However, Passino's BFA is also based on a fixed population size and demands a large amount of computation.

In this paper, a novel optimization algorithm, BFAVP, is proposed. Instead of simply describing the chemotaxis behavior used in BFA, BFAVP involves further details of bacterial behaviors and incorporates the mechanisms of metabolism, proliferation, elimination and quorum sensing. This study builds on our previous study of a mathematical framework of bacterial foraging patterns in a varying environment (Tang et al., 2006b). In BFAVP, the following four features of bacterial behaviors are incorporated:

(1) Chemotaxis, which offers the basic search principle of BFAVP. Chemotaxis comprises two basic foraging patterns, tumble and run; their combination forms a biased random walk, which performs the "local search" in problem solving. In the tumble process, the head angle of each bacterium is described as a compound angle, inspired by bacterial swimming behavior in three-dimensional space.

(2) Metabolism, which controls the proliferation of a bacterium. In order to measure metabolism, the property of bacterial energy is incorporated, which is related to the nutrient level in the environment. Superior bacteria usually have more energy, which leads to a larger filial generation. Due to this feature, bacteria are able to aggregate around optima at an earlier stage of the optimization.

(3) Proliferation and elimination, which result in a varying population. Proliferation occurs if the bacterial energy is high enough. In order to provide a criterion for elimination, the property of bacterial age is incorporated in BFAVP; bacterial age describes the life cycle of a bacterium, and a bacterium with a higher age is more prone to be eliminated. By this mechanism, with the same expectation value, the computational complexity of the optimization process can be reduced and unnecessary computation avoided, by eliminating the bacteria that do not possess sufficient energy during the evolutionary process.

(4) Quorum sensing, which enables BFAVP to escape from local optima. This is a two-fold operator that can either attract a bacterium to the optima or repel it away from a location where bacteria are concentrated.

In this paper, BFAVP has been evaluated on thirteen benchmark functions, comprising high-dimensional unimodal functions, high-dimensional multimodal functions and low-dimensional multimodal functions. The simulation results are reported to show the merits of the proposed algorithm in comparison with GA, PSO, Fast Evolutionary Programming (FEP) (Yao et al., 1999) and GSO. To further assess the improvements of BFAVP, it is also compared with a recent variant of BFA, the Adaptive Bacterial Foraging Optimization Algorithm (ABFOA), proposed in 2009.

2. Algorithm Description

In order to describe the features of chemotaxis, metabolism, proliferation and elimination, and quorum sensing, four mathematical models are constructed correspondingly, which are elucidated in detail in this section. During an optimization process, the four models are performed in order in each iteration; in other words, an iteration comprises four procedures.

2.1. Chemotaxis

E. coli sense simple chemicals in the environment and are able to decide whether chemical gradients are directionally favorable or unfavorable (Berg, 2000). If a favorable gradient is sensed, they swim towards that direction. Bacteria swim by rotating thin, helical filaments known as flagella, driven by a reversible motor embedded in the bacterial cell wall. E. coli have 8-10 flagella placed randomly on the cell body (Bar Tana et al., 1977). In chemotaxis, the motor runs either clockwise or counterclockwise, depending on the direction in which protons flow through the cytoderm. When the motors turn clockwise, the flagellar filaments work independently, which leads to erratic displacement; this behavior is called "tumble". When the motors turn counterclockwise, the filaments rotate in the same direction, pushing the bacterium steadily forward; this behavior is called "run". The alternation of tumble and run presents itself as a biased random walk.

The chemotaxis behavior is modeled by a tumble-run process that consists of one tumble step and several run steps. The tumble-run process follows a gradient searching principle, which means that the position of the bacterium is updated in the run steps using the direction information provided by the tumble step. Determining the rotation angle taken by a tumble action in an n-dimensional search space can be described as follows. Suppose the pth bacterium, in the tumble-run process of the kth iteration, has a current position X_p^k \in \mathbb{R}^n, a rotation angle \varphi_p^k = (\varphi_{p1}^k, \varphi_{p2}^k, \ldots, \varphi_{p(n-1)}^k) \in \mathbb{R}^{n-1} and a tumble direction D_p^k(\varphi_p^k) = (d_{p1}^k, d_{p2}^k, \ldots, d_{pn}^k) \in \mathbb{R}^n, which can be calculated from \varphi_p^k via a polar-to-Cartesian coordinate transform:

d_{p1}^k = \prod_{i=1}^{n-1} \cos\varphi_{pi}^k,
d_{pj}^k = \sin\varphi_{p(j-1)}^k \prod_{i=j}^{n-1} \cos\varphi_{pi}^k, \quad j = 2, 3, \ldots, n-1,        (1)
d_{pn}^k = \sin\varphi_{p(n-1)}^k.

In the polar-to-Cartesian coordinate transform, an arbitrary vector in the n-dimensional space can be represented by n - 1 angles and a normalized distance to the origin. The maximal rotation angle \varphi_{\max} is related to the number of dimensions of the objective function and can be formulated as:

\varphi_{\max} = \frac{\pi}{\lfloor \sqrt{n+1} \rfloor},        (2)

where n is the number of dimensions and \lfloor \cdot \rfloor denotes the operation which rounds the element to the nearest integer towards minus infinity. In the tumble-run process of the kth iteration, the pth bacterium generates a random rotation angle, which falls in the range [0, \varphi_{\max}], where \varphi_{\max} is the maximal tumble angle. A tumble action takes place at an angle expressed as:

\hat{\varphi}_p^k = \varphi_p^k + \mathbf{r}_2 \, \varphi_{\max}/2,        (3)

and immediately following the tumble action, a step of run can be expressed as:

\hat{X}_p^k(1) = X_p^k + r_1 l_{\max} D_p^k(\hat{\varphi}_p^k),        (4)

where X_p^k indicates the position of the pth bacterium at the beginning of the kth iteration, r_1 \in \mathbb{R} is a normally distributed random number generated from N(0, 1), \mathbf{r}_2 \in \mathbb{R}^{n-1} is a random sequence in the range [0, 1], l_{\max} is the maximal step length of a run, and \hat{X}_p^k(1) is the position of the pth bacterium after the first run step. Once the angle is determined by the tumble step, the bacterium runs for a maximum of n_c steps. If at the n_f th (n_f < n_c) run step the bacterium cannot find a position with a better fitness value than the current one, the run process stops early. The position of the pth bacterium is updated at the hth (h > 1) run step in the following way:

\hat{X}_p^k(h) = \hat{X}_p^k(h-1) + r_1 l_{\max} D_p^k(\hat{\varphi}_p^k).        (5)
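As an illustration of Eqs. (1)-(5), the following Python sketch computes the tumble direction and performs a bounded run. The function names, the minimisation convention and the assumption of n >= 2 dimensions are our own simplifications, not part of the original algorithm description.

```python
# Illustrative sketch of the tumble-run update of Eqs. (1)-(5); a simplification,
# not the authors' reference implementation.
import numpy as np

def tumble_direction(phi):
    """Polar-to-Cartesian transform of Eq. (1): n-1 angles -> a direction in R^n."""
    n = phi.size + 1
    d = np.empty(n)
    d[0] = np.prod(np.cos(phi))                                  # d_p1 = prod_i cos(phi_i)
    for j in range(1, n - 1):                                    # middle components of Eq. (1)
        d[j] = np.sin(phi[j - 1]) * np.prod(np.cos(phi[j:]))
    d[n - 1] = np.sin(phi[-1])                                   # d_pn = sin(phi_{n-1})
    return d

def tumble_run(x, phi, f, n_c=4, l_max=1.0):
    """One tumble followed by at most n_c run steps (Eqs. (2)-(5)), for minimisation."""
    n = x.size
    phi_max = np.pi / np.floor(np.sqrt(n + 1))                   # Eq. (2)
    phi_hat = phi + np.random.rand(n - 1) * phi_max / 2.0        # Eq. (3), tumble angle
    d = tumble_direction(phi_hat)
    best_x, best_f = x, f(x)
    for _ in range(n_c):                                         # run while fitness improves
        cand = best_x + np.random.randn() * l_max * d            # Eqs. (4)/(5), r1 ~ N(0,1)
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
        else:
            break                                                # stop the run early
    return best_x, phi_hat

# Example use on a 3-D sphere function:
# x_new, phi_new = tumble_run(np.zeros(3) + 2.0, np.zeros(2), lambda v: np.sum(v**2))
```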

For convenience of description, the position of the pth bacterium immediately after the tumble-run process of the kth iteration is denoted by \hat{X}_p^k(n_f), n_f \le n_c. The rotation angle is updated after each iteration: the tumble angle of the pth bacterium at the beginning of the (k + 1)th iteration is expressed as \varphi_p^{k+1}, which takes the same value as \hat{\varphi}_p^k.

2.2. Metabolism

Heterotrophic bacteria are not able to produce energy by photosynthesis; as a result, they need to forage to accumulate energy. A heterotrophic bacterium (such as E. coli) acquires energy by absorbing nutrients from the environment and uses it for chemotaxis, growth and sensing. The objective of a bacterium is to survive in a nourishing environment and to reproduce. Provided with sufficient nutrition and energy, the bacterium first replicates its genetic material and then divides. However, in a hostile environment, some bacteria switch from germination to the sporulation cycle: without sufficient energy, the bacterium stops metabolizing and its genetic material is packed into a small spore.

In BFAVP, bacterial energy is described as e_p^k, a measure of the energy of the pth bacterium at the kth iteration. The fitness value of this bacterium at the kth iteration, F(\hat{X}_p^k(n_f)), is the source of its energy e_p^k. In each iteration, bacteria absorb energy subsequent to the tumble-run process. The energy transform of the pth bacterium in the kth iteration is defined as:

\tilde{e}_p^k = e_p^k + \alpha \cdot (F(\hat{X}_p^k(n_f)) - F_{\min}),        (6)

where \alpha is a coefficient for the energy transform and F_{\min} indicates the minimal fitness value obtained so far. For the pth bacterium, if its energy \tilde{e}_p^k is not high enough to trigger proliferation, it is kept for the next tumble-run process in the (k + 1)th iteration, i.e. e_p^{k+1} = \tilde{e}_p^k. On the other hand, if \tilde{e}_p^k reaches the threshold, the proliferation process starts, which is explained in the following section.

2.3. Proliferation and Elimination

Bacteria absorb energy from the nutrient environment and perform a proliferation process if enough energy is gained. Bacteria grow to a fixed size and then reproduce through binary fission, which results in division; two identical daughter bacteria are produced. Bacteria may also be eliminated if they reach the limit of their lifespan. Bacterial growth follows three phases. The lag phase is a period of slow growth during which bacteria acclimatize to the nutrient environment. Once the metabolic machinery is running, bacteria begin to multiply exponentially; the log phase is the period during which nutrients are consumed at maximum speed. In the stationary phase, as more and more bacteria compete for dwindling nutrients, growth stops and the number of bacteria stabilizes (Prats et al., 2006).

In BFAVP, the proliferation process is controlled by the level of bacterial energy \tilde{e}_p^k. As soon as \tilde{e}_p^k reaches the upper limit of the energy a bacterium can possess, the pth bacterium turns into the proliferation state, which produces a new individual. For the new bacterium, the bacterial energy is:

\hat{e}_{m+1}^k = \frac{\tilde{e}_p^k}{2},        (7)

where m is the population size in the current iteration; the previous bacterium also keeps half of the energy:

\hat{e}_p^k = \frac{\tilde{e}_p^k}{2}.        (8)

The new bacterium is not involved in the optimization process until the next iteration, so its energy may as well be denoted by e_{m+1}^{k+1}. For the pth bacterium, its energy \hat{e}_p^k does not change until the next metabolic charge takes place.

Whether the elimination behavior occurs depends greatly on the bacterial age. For the pth bacterium at the kth iteration, its age is recorded by an age counter, A_p^k. In the next iteration, the bacterial age becomes:

A_p^{k+1} = A_p^k + 1.        (9)
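A minimal sketch of the energy and age bookkeeping of Eqs. (6)-(9) is given below; the dictionary-based bacterium record, the coefficient value alpha = 0.1 and the energy limit e_max are illustrative assumptions, not values prescribed by the paper.

```python
# Bookkeeping sketch for Eqs. (6)-(9), assuming each bacterium is a dict with a
# numpy position "x", its current "fitness", an "energy" level and an "age" counter.
def metabolise_and_proliferate(population, f_min, alpha=0.1, e_max=100.0):
    """Update energy (Eq. 6), split bacteria whose energy hits the limit (Eqs. 7-8),
    and advance every age counter (Eq. 9)."""
    offspring = []
    for b in population:
        b["age"] += 1                                         # Eq. (9)
        b["energy"] += alpha * (b["fitness"] - f_min)         # Eq. (6)
        if b["energy"] >= e_max:                              # proliferation threshold reached
            b["energy"] /= 2.0                                # Eq. (8): parent keeps half
            offspring.append({"x": b["x"].copy(),             # Eq. (7): child takes the other half
                              "fitness": b["fitness"],
                              "energy": b["energy"],
                              "age": 0})
            b["age"] = 0                                      # both daughters restart their age
    population.extend(offspring)                              # new bacteria join at the next iteration
    return population
```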

If a bacterium divides, the ages of the two daughter bacteria are set to 0. When the age of a bacterium is close to the upper limit of its lifespan, it may be removed from the search space. However, assigning a fixed lifespan to each bacterium would result in a large number of bacteria being eliminated at the same time, which is not in line with the real-world bacterial phenomenon. Therefore, in order to account for the randomness of the elimination behavior, a probability is introduced to describe the uncertainty of bacterial ages, which follows the Gaussian distribution (Liu, 1999). The probability density of a bacterial lifespan expectancy is set as:

P_p = \frac{1}{\sqrt{2\pi}\,\hat{\sigma}} \exp\left( -\frac{(A_p^{k+1} - \hat{\mu})^2}{2\hat{\sigma}^2} \right),        (10)

where \hat{\mu} is the mean of the lifespan of the bacteria and \hat{\sigma} indicates the standard deviation of all the lifespan expectancies. Eq. (10) states that the lifespan follows a Gaussian distribution, which is in accordance with the biological behavior in the real world. After the dead bacteria are eliminated, their positions are found to be distributed around local optima, i.e. BFAVP can detect all local optima in a single run; this particular advantage is explained in Section 4.4.

2.4. Quorum Sensing

A bacterium uses a batch of receptors to sense the signals coming from external substances. The bacterium also has an inducer, a molecule inside the bacterium, to activate gene expression (Passino, 2002). When the inducer binds the receptor, it activates the transcription of certain genes, including those for inducer synthesis. This process, called "quorum sensing", was described by Miller to explain cell-to-cell communication (Miller and Bassler, 2001).

In BFAVP, most "nutrients" are located around optima, which correspond to better fitness values. Based on this assumption, the density of the inducer is increased if the fitness value is better. On the one hand, quorum sensing promotes symbiotic behavior within a single bacterial species, which attracts the bacteria to the best position found in the current iteration. Within a population, a certain percentage of bacteria move by attraction, and the positions of these bacteria are updated as follows:

X_p^{k+1} = \hat{X}_p^k(n_f) + \delta \cdot (X_{\mathrm{best}} - \hat{X}_p^k(n_f)),        (11)

where \delta is a coefficient describing the strength of the bacterial attraction and X_{\mathrm{best}} indicates the position of the current best global solution, updated after each function evaluation.
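The attraction move of Eq. (11) amounts to a convex step towards the best-known position, as the toy snippet below shows (the repelling counterpart of Eq. (12) is described next); the value delta = 0.5 is an arbitrary illustrative strength, not the paper's setting.

```python
# Toy illustration of the attraction move of Eq. (11).
import numpy as np

def attract(x_p, x_best, delta=0.5):
    """Pull a bacterium part of the way towards the current global best position."""
    return x_p + delta * (x_best - x_p)

print(attract(np.array([1.0, 4.0]), np.array([3.0, 0.0])))   # -> [2. 2.]
```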


On the other hand, quorum sensing can also occur between disparate species, which may cause competition or even be inimical to each species. In BFAVP, a small number of bacteria are therefore randomly selected to be repelled. To measure the degree of repelling, a repelling rate \rho is defined: in each iteration, \rho \times 100% of the bacteria are processed by repelling and, accordingly, the attraction rate is (1 - \rho) \times 100%. The repelling process is based on a random searching principle. If the pth bacterium shifts into the repelling process, a random angle in the range [0, \pi] is generated, and the bacterium is "moved" to a random position along this angle in the search space, which can be described as:

X_p^{k+1} = \hat{X}_p^k(n_f) + \mathbf{r}_3 \, l_{\mathrm{range}} \, D_p^k(\hat{\varphi}_p^k + \mathbf{r}_2 \cdot \pi/2),        (12)

where \mathbf{r}_3 \in \mathbb{R}^n is a normally distributed random sequence drawn from N(0, 1) and l_{\mathrm{range}} is the range of the search space. Repelling is the major method by which BFAVP increases the population diversity, and the process is advantageous when applied to multimodal objective functions. The pseudo code of BFAVP is listed in Table 1.

3. Simulation Studies

3.1. Parameter Setting

In order to evaluate the performance of BFAVP, thirteen benchmark functions, divided into three sets, are adopted in the following simulation studies; BFAVP is used for the minimization of the benchmark functions. Functions f1-f5, listed in Appendix A, are high-dimensional unimodal benchmark functions, used to investigate the convergence rate of each algorithm. Functions f6-f10, listed in Appendix B, are high-dimensional multimodal benchmark functions, which have many local optima; the total number of local optima of each function increases as its dimension increases. Functions f11-f13, shown in Appendix C, are low-dimensional multimodal benchmark functions (Yao et al., 1999). It should be mentioned that the optima of f6, f10, f11, f12 and f13 are located away from the center of the solution space. The boundaries of the variables, the number of dimensions and the minimum value of each benchmark function are also given in the Appendix.

In the simulation studies, the performance of BFAVP is compared with that of the following six EAs: GA (Holland, 1992), PSO (Kennedy and Eberhart, 1995), FEP (Yao et al., 1999; Eberhart and Shi, 2000), GSO (He et al., 2006), the Bacterial Foraging Algorithm (BFA) (Passino, 2002) and the Adaptive Bacterial Foraging Optimization Algorithm (ABFOA) (Dasgupta et al., 2009). GA and PSO are popular algorithms that have been widely adopted, and their advantage in computation time has been demonstrated in many studies. FEP, on the other hand, is faster and reaches smaller fitness values on certain benchmark functions, as demonstrated in He et al. (in press); that paper has also shown the superiority of GSO over GA, PSO and FEP under a majority of conditions. Dasgupta et al. (2009) give detailed comparisons between BFA and ABFOA on several benchmark functions. Concerning its application to real-world problems, in our previous work BFAVP has been applied to solve the power system Optimal Power Flow (OPF) problem, where its performance was compared with that of GA and PSO and it obtained superior results (Li et al., 2007). Therefore this paper focuses on a comprehensive description of BFAVP and a discussion of its performance in comparison with the other EAs on the selected benchmark functions.

As BFAVP is a varying population-based algorithm, it would be fair to compare it with EAs that have been developed using a varying population framework. Such EAs include GAVaPS (Arabas et al., 1994), APGA (Back et al., 2000) and PRoFIGA (Eiben et al., 2004).

However, these algorithms were not developed as generic methods: GAVaPS was designed to solve benchmark functions with fewer than 3 dimensions, APGA was evaluated on a few benchmark functions with fewer than 10 dimensions, and PRoFIGA was designed to meet the demands of Spears' multimodal problems. Moreover, no result of GAVaPS, APGA or PRoFIGA has been reported to be better than that of GA on most of the benchmark functions. Hence, they are not involved in the comparative studies. The CMA-ES (Hansen et al., 2003) is typically applied to unconstrained or bound-constrained optimization problems and aims to reduce the time complexity of the Evolution Strategy (ES) on unimodal functions; as a result, it is not compared with the proposed algorithm either.

In all experiments, the initial population size of all the algorithms is set to 50. For BFAVP, the mean of the bacterial lifespan, \hat{\mu}, is set to 50 iterations and the standard deviation, \hat{\sigma}, is set to 10 iterations. These two parameters are set up to allow the population size to vary in a more bacterially-realistic way, as discussed in Section 2.3; the performance of BFAVP is not sensitive to them, which is discussed in Section 3.7. The maximal number of run steps taken in the same direction is 4, which is further discussed in Section 3.7. The repelling rate \rho is set to 0.2 according to the experiments undertaken, i.e. 20 percent of the bacteria are repelled during the process of quorum sensing, which is also discussed in Section 3.7. The setting of GA is based on the Genetic Algorithm Optimisation Toolbox (GAOT) (Houck et al., 1995), using normalized geometric ranking as the selection function. The implementation of PSO is described in Kennedy and Eberhart (1995); for the parameters of PSO, the inertia weight ω is set to 0.73 and the acceleration factors c1 and c2 are both set to 2.05, as recommended in Clerc and Kennedy (2002). For FEP, the tournament size is set to 10 for selection, as recommended in Yao et al. (1999). In order to make the comparison fair, the tuning of the algorithms does not depend on any assumptions about the objective functions; once the parameters are chosen, they are fixed for all of the experiments. The reason for comparing with GSO is that GSO has been shown to be better than other EAs on these benchmark functions; the tuning of the GSO parameters follows the recommendations of He et al. (in press).

Some studies have proposed the use of the t-test to investigate the stochastic properties of an EA. The t-test only assesses whether the means of two groups are statistically different from each other. In a similar manner, as suggested in Yao et al. (1999), the mean and standard deviation of the best solution of an algorithm can be used for comparison between different algorithms. In order to investigate the stochastic performance and stability of BFAVP, we present the mean and standard deviation of the best solution of each algorithm obtained from 50 runs on each benchmark function. In He et al. (in press), a larger number of runs was used to investigate GSO's stochastic performance; as we take GSO as a reference, we do not spend unnecessary time running all algorithms a large number of times for comparison purposes. The performance of BFAVP obtained by selecting a higher number of runs to get an average result is further discussed in Section 3.6.
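For reference, the experiment settings of this subsection can be collected in a small configuration structure; the sketch below is merely a convenient summary of the values stated above, and the key names are our own convention.

```python
# Settings gathered from Section 3.1; the dictionary layout is an illustrative convention.
BFAVP_SETTINGS = {
    "initial_population": 50,   # common to all compared algorithms
    "lifespan_mean": 50,        # mu_hat, in iterations
    "lifespan_std": 10,         # sigma_hat, in iterations
    "max_run_steps": 4,         # n_c, run steps taken in the same direction
    "repelling_rate": 0.2,      # rho, share of bacteria repelled per iteration
}

PSO_SETTINGS = {"inertia": 0.73, "c1": 2.05, "c2": 2.05}   # Clerc and Kennedy (2002)
FEP_SETTINGS = {"tournament_size": 10}                      # Yao et al. (1999)
```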
3.2. Maximal Number of Function Evaluations

In order to investigate the computational load of each algorithm involved in the comparative studies, the maximal number of function evaluations of each algorithm on each benchmark function is recorded and shown in Table 2. The maximal number is obtained once an algorithm is fully converged, as suggested in Yao et al. (1999). In the table, the evaluation numbers of GA, PSO, FEP and GSO are taken from He et al. (in press). From the table, it can be seen that the maximal number of function evaluations of BFAVP for each benchmark function is less than or equal to those reported for the other algorithms. Concerning the computational load and efficiency of BFAVP, Sections 4.1 and 4.2 give further elucidation.


Table 1. Pseudo code of BFAVP.

Set k := 0;
Randomly initialize bacterial positions;
WHILE (termination conditions are not met)
    FOR (each bacterium p)
        Tumble: Generate a random rotation angle by (3). Set h := 1;
        Run:
        FOR (each run step h)
            Move the bacterium in the new direction and calculate its position X̂_p^k(h); if h = 1, use (4); if h > 1, use (5).
            If the fitness value at the current position is less than the value at the previous position, the bacterium keeps moving along the angle until it reaches the maximum number of steps; otherwise, the bacterium stops at the current position.
            Increase h by 1;
        END FOR
    END FOR
    FOR (each bacterium p)
        Proliferation: Calculate ẽ_p^k by (6);
        IF (ẽ_p^k reaches the upper limit of the energy a bacterium can possess)
            Bacterium p divides into 2 daughter bacteria; each new bacterium takes half of the energy of the original one, according to (7) and (8), and both of the new bacterial ages are set to 0;
        END IF
        Elimination: Calculate the bacterial age A_p^k by (9); generate a random lifespan by (10);
        IF (A_p^k exceeds the lifespan)
            Bacterium p is eliminated. Record its position;
        END IF
    END FOR
    Quorum sensing: (1 − ρ) × 100% of the bacteria are attracted to the global optimum by (11); ρ × 100% of the bacteria are repelled by (12);
    k := k + 1;
END WHILE
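For readers who prefer running code to pseudo code, the following is a compact, self-contained Python sketch of the loop in Table 1. It keeps the order tumble-run, proliferation, elimination, quorum sensing, but simplifies the tumble to a random unit direction and uses an illustrative 2-D Rastrigin objective, an arbitrary energy threshold and a fixed attraction strength; none of these choices is prescribed by the paper.

```python
# Simplified BFAVP-style loop; a sketch under assumed parameter values.
import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

RNG = np.random.default_rng(0)
DIM, N_C, L_MAX, RHO, ALPHA, E_MAX = 2, 4, 0.5, 0.2, 0.1, 10.0
MU_LIFE, SIGMA_LIFE, BOUND = 50, 10, 5.12

pop = [{"x": RNG.uniform(-BOUND, BOUND, DIM), "e": 0.0, "age": 0} for _ in range(20)]
f_best, x_best = np.inf, None

for k in range(200):                                     # main iterations
    for b in pop:                                        # tumble-run (Section 2.1)
        d = RNG.normal(size=DIM); d /= np.linalg.norm(d) # simplified tumble direction
        for _ in range(N_C):                             # run while fitness improves
            cand = b["x"] + abs(RNG.normal()) * L_MAX * d
            if rastrigin(cand) < rastrigin(b["x"]):
                b["x"] = cand
            else:
                break
        fit = rastrigin(b["x"])
        if fit < f_best:
            f_best, x_best = fit, b["x"].copy()
        b["e"] += ALPHA * (fit - f_best)                 # Eq. (6), with F_min = best so far
        b["age"] += 1                                    # Eq. (9)

    children = []
    for b in pop:                                        # proliferation (Eqs. 7-8)
        if b["e"] >= E_MAX:
            b["e"] /= 2.0; b["age"] = 0
            children.append({"x": b["x"].copy(), "e": b["e"], "age": 0})
    pop += children

    lifespans = RNG.normal(MU_LIFE, SIGMA_LIFE, len(pop))          # Eq. (10), sampled lifespans
    pop = [b for b, ls in zip(pop, lifespans) if b["age"] < ls] or pop[:1]

    for b in pop:                                        # quorum sensing (Eqs. 11-12)
        if RNG.random() < RHO:                           # repelled to a random nearby point
            b["x"] = np.clip(b["x"] + RNG.normal(size=DIM) * BOUND, -BOUND, BOUND)
        else:                                            # attracted towards the best position
            b["x"] = b["x"] + 0.5 * (x_best - b["x"])

print("best fitness:", f_best, "population size:", len(pop))
```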

3.3. Simulation Results and Discussion on Unimodal Functions

According to the No Free Lunch theorem, under certain assumptions there is no single search algorithm that is best on average for all problems (Wolpert and Macready, 1997). The bold values in Tables 3-6 of the original tables are the best values for each function among the compared algorithms. As can be seen from Table 3, the results yielded by PSO are much better than those yielded by BFAVP on functions f1 and f2. This is because the inertia weight of PSO decreases during the optimization process, which ensures that the search range of the entire population is reduced iteration by iteration. For this type of unimodal function, which has only one optimum, this search strategy helps PSO find a better solution.

Table 2. The maximal number of function evaluations of BFAVP, GA, PSO, FEP and GSO for f1-f13. The numbers for GA, PSO, FEP and GSO are adopted from He et al. (in press).

Function   BFAVP         GA            PSO           FEP           GSO
f1         1.5 × 10^5    1.5 × 10^5    1.5 × 10^5    1.5 × 10^5    1.5 × 10^5
f2         1.5 × 10^5    1.5 × 10^5    1.5 × 10^5    2.0 × 10^5    1.5 × 10^5
f3         1.5 × 10^5    1.5 × 10^5    1.5 × 10^5    5.0 × 10^5    1.5 × 10^5
f4         1.5 × 10^5    1.5 × 10^5    1.5 × 10^5    2.0 × 10^6    1.5 × 10^5
f5         1.5 × 10^5    1.5 × 10^5    1.5 × 10^5    1.5 × 10^5    1.5 × 10^5
f6         1.5 × 10^5    1.5 × 10^5    1.5 × 10^5    9.0 × 10^5    1.5 × 10^5
f7         1.5 × 10^5    2.5 × 10^5    2.5 × 10^5    5.0 × 10^5    2.5 × 10^5
f8         1.5 × 10^5    1.5 × 10^5    1.5 × 10^5    1.5 × 10^5    1.5 × 10^5
f9         1.5 × 10^5    1.5 × 10^5    1.5 × 10^5    2.0 × 10^5    1.5 × 10^5
f10        1.5 × 10^5    1.5 × 10^5    1.5 × 10^5    1.5 × 10^5    1.5 × 10^5
f11        7.5 × 10^3    7.5 × 10^3    7.5 × 10^3    1.0 × 10^4    7.5 × 10^3
f12        1.5 × 10^5    2.5 × 10^5    2.5 × 10^5    4.0 × 10^5    2.5 × 10^5
f13        1.25 × 10^3   1.25 × 10^3   1.25 × 10^3   1.0 × 10^4    1.25 × 10^3

However, although this strategy may guarantee convergence when the benchmark function is unimodal, it traps the algorithm in local optima on multimodal functions, as can be seen in the next section. BFAVP outperforms GA on all the unimodal benchmark functions; however, the random searching characteristic of BFAVP may affect the accuracy of the optimization result on unimodal functions. Compared with FEP and GSO, BFAVP has nearly the same performance on functions f1-f3, and it obtains a better performance on functions f4 and f5, with significantly smaller means and standard deviations of the best results. In these cases, the attraction among bacteria offered by quorum sensing in BFAVP is effective, which leads to a fast convergence rate on this type of function with obvious gradient variation. From the simulation results, it can be concluded that algorithms with a decreasing step length, such as PSO, are more suitable for solving unimodal problems with slow gradient variation, whereas algorithms working with random operators, such as BFAVP, FEP and GSO, are more suitable for solving problems with obvious gradient variations.

3.4. Simulation Results and Discussion on Multimodal Functions

The results listed in Tables 4 and 5 are obtained from the evaluation of all algorithms on the multimodal benchmark functions. They reveal that multimodal problems can be solved effectively by BFAVP, as it has both random search and gradient search capabilities and compromises well between them. In the multimodal functions, the number of local optima increases exponentially with the number of dimensions, so it is difficult for most EAs to solve multimodal functions: these EAs lack a capability to increase the population diversity, as their population size is fixed, and as a result they are very likely to be trapped in local optima. Consequently, the EAs with a decreasing step length are incapable of finding an accurate solution in the search space of multimodal functions.


Table 3. Average best results of mean (μ) and standard deviations (σ) of BFAVP, GA, PSO, FEP and GSO on f1-f5. The GA, PSO, FEP and GSO results are adopted from He et al. (in press).

        BFAVP             GA          PSO               FEP           GSO
f1  μ   9.9908 × 10^-9    3.1711      3.6927 × 10^-37   5.7 × 10^-4   1.9481 × 10^-8
    σ   8.7307 × 10^-9    1.6621      2.4598 × 10^-36   1.3 × 10^-4   1.9841 × 10^-8
f2  μ   5.4701 × 10^-5    0.5771      2.9168 × 10^-24   8.1 × 10^-3   3.7039 × 10^-5
    σ   3.6527 × 10^-5    0.1306      1.1362 × 10^-23   7.7 × 10^-4   8.6185 × 10^-5
f3  μ   0.1504            7.9610      0.4123            0.3           0.1078
    σ   0.1221            1.5063      0.2500            0.5           3.9981 × 10^-2
f4  μ   7.7345 × 10^-4    338.5616    37.3582           5.06          49.8359
    σ   2.10485 × 10^-3   361.497     32.1436           5.87          30.1771
f5  μ   0                 3.6970      0.1460            0             1.6000 × 10^-2
    σ   0                 1.9517      0.4182            0             0.1333

Table 4. Average best results of mean (μ) and standard deviations (σ) of BFAVP, GA, PSO, FEP and GSO on f6-f10. The GA, PSO, FEP and GSO results are adopted from He et al. (in press).

         BFAVP             GA                PSO               FEP             GSO
f6   μ   −12569.4882       −12566.0977       −9659.6993        −12554.5        −12569.4882
     σ   1.3082 × 10^-2    2.1088            463.7825          52.6            2.2140 × 10^-2
f7   μ   0.3319            0.6509            20.7863           4.6 × 10^-2     1.0179
     σ   0.3368            0.2805            5.9400            1.2 × 10^-2     0.9509
f8   μ   1.6325 × 10^-5    0.8678            1.3404 × 10^-3    1.8 × 10^-2     2.6548 × 10^-5
     σ   2.9589 × 10^-5    0.2805            4.2388 × 10^-2    2.1 × 10^-2     3.0820 × 10^-5
f9   μ   2.4649 × 10^-2    1.0038            1.0633 × 10^-2    2.6 × 10^-2     3.1283 × 10^-2
     σ   4.2953 × 10^-2    6.7545 × 10^-2    1.0895 × 10^-2    2.2 × 10^-2     2.8757 × 10^-2
f10  μ   1.9596 × 10^-9    4.3572 × 10^-2    3.9503 × 10^-2    0.2 × 10^-6     2.7648 × 10^-11
     σ   2.0533 × 10^-9    5.0579 × 10^-2    9.1424 × 10^-2    6.1395 × 10^-5  9.1674 × 10^-11

Table 5. Average best results of mean (μ) and standard deviations (σ) of BFAVP, GA, PSO, FEP and GSO on f11-f13. The GA, PSO, FEP and GSO results are adopted from He et al. (in press).

         BFAVP             GA                PSO               FEP           GSO
f11  μ   0.9980            0.9989            1.0239            1.22          0.9980
     σ   2.5080 × 10^-16   4.4333 × 10^-3    0.1450            0.56          0
f12  μ   1.8594 × 10^-4    7.0878 × 10^-3    3.8074 × 10^-4    5.0 × 10^-4   4.1687 × 10^-4
     σ   4.2768 × 10^-4    7.8549 × 10^-3    2.5094 × 10^-4    3.2 × 10^-4   3.1238 × 10^-4
f13  μ   −1.0316           −1.0298           −1.0160           −1.03         −1.0316
     σ   5.3284 × 10^-16   3.1314 × 10^-3    1.2786 × 10^-2    4.9 × 10^-4   0

It can be seen from Table 4 that BFAVP outperforms all the other algorithms on benchmark functions f7, f8 and f9. BFAVP has nearly the same performance as GSO on f6, which has 9.3132 × 10^20 local optima. The results show that all the algorithms with a random walk mechanism, such as mutation, are suitable for this type of function. In particular, BFAVP models the quorum sensing behavior, which allows bacteria to move randomly from local optima to a better position; accordingly, quorum sensing increases the probability of BFAVP finding the global optimum. BFAVP also outperforms GA, PSO and FEP on functions f11-f13, as shown in Table 5. The small standard deviations of GSO and BFAVP show that they always converge to the global optima of these functions; in this case, GSO is slightly more stable than BFAVP on these low-dimensional multimodal problems, as GSO's standard deviation is smaller than that of BFAVP.

3.5. The Comparison among BFA, Adaptive Bacterial Foraging Optimization Algorithm (ABFOA) and BFAVP

In order to evaluate the improvement of the proposed algorithm, this experiment compares BFAVP with the conventional BFA and with ABFOA, a variation of BFA proposed in 2009 (Dasgupta et al., 2009).

Dasgupta et al. (2009) present detailed results of the Adaptive Bacterial Foraging Optimisation Algorithm (ABFOA) and the classical BFA using a test suite of six well-known benchmark functions: the Sphere function f1, Rosenbrock function f4, Rastrigin function f7, Griewank function f9, Ackley function f8 and Shekel's foxholes function f11. In this section, BFAVP is applied to these benchmark functions with the number of function evaluations recommended in that paper, which is 5 × 10^6 evaluations for each benchmark function. The results are listed in Table 6. From the results it can be found that BFAVP has a much better performance than the classical BFA on all benchmark functions. ABFOA has a chemotaxis model improved from that of BFA, which leads to an improvement on all benchmark functions as well. However, compared with ABFOA, BFAVP incorporates more detailed bacterial behaviors in its framework to meet the demands of different optimization environments; as a result, BFAVP outperforms ABFOA on five of the benchmark functions. In summary, the varying population framework in BFAVP overcomes the limits of the chemotaxis models in BFA and ABFOA.

Table 6. Average best results of mean (μ) and standard deviations (σ) of BFA, ABFOA and BFAVP. The BFA and ABFOA results are adopted from Dasgupta et al. (2009).

       No. of FEs        BFA         ABFOA       BFAVP
f1     5 × 10^5     μ    0.084       0.045       4.6807 × 10^-4
                    σ    0.0025      0.0061      1.0281 × 10^-4
f4     5 × 10^5     μ    58.216      40.212      15.4971
                    σ    14.3254     6.5465      12.1279
f7     5 × 10^5     μ    17.0388     3.3316      9.7266
                    σ    4.8421      0.5454      5.8146
f8     5 × 10^5     μ    2.3243      0.8038      0.3966
                    σ    1.8833      0.5512      0.5103
f9     5 × 10^5     μ    0.3729      0.2414      3.2041 × 10^-3
                    σ    0.0346      0.5107      2.9450 × 10^-3
f11    1 × 10^5     μ    1.0564      0.9998      0.9980
                    σ    0.0122      0.0017      7.1036 × 10^-10

3.6. The Robustness and Computation Time of BFAVP

To further evaluate the robustness of BFAVP, we select different numbers of runs and choose one function from each of the three categories of benchmark functions, i.e. f1, f6 and f11. Table 7 lists the standard deviation of the best fitness values of BFAVP obtained using different numbers of runs. The standard deviations differ only slightly from one another, which reveals that BFAVP is able to obtain a stable solution after 1.5 × 10^5 evaluations for f1 and f6 and 1.5 × 10^4 evaluations for f11; these results are close to those shown in Tables 3-5, respectively. From these results, it is understood that 50 runs are sufficient to evaluate the convergence and robustness of BFAVP in comparison with the other EAs; in the experimental studies of Sections 3.4 and 3.5, the algorithms have therefore been run 50 times for each function.

Meanwhile, in order to further confirm that this number of runs is reasonable, a statistical analysis is carried out by applying a two-tailed t-test among different sets of experimental results. In this experiment, the t-test is performed between the results of 50 runs and those of 30, 100 and 300 runs, respectively. The level of significance is set to 0.05 and the number of degrees of freedom is 1998. The statistical results of the t-test are listed in Table 8. From the results it can be found that once the number of runs reaches 50, the robustness of BFAVP is stabilized.

Table 7. Average standard deviation of BFAVP achieved by different numbers of runs.

        30 runs            50 runs            100 runs           300 runs
f1      9.4172 × 10^-9     8.7307 × 10^-9     8.0524 × 10^-9     9.1141 × 10^-9
f6      1.3122 × 10^-2     1.3082 × 10^-2     1.2962 × 10^-2     1.2900 × 10^-2
f11     2.6072 × 10^-16    2.5080 × 10^-16    2.7530 × 10^-16    2.6944 × 10^-16

Table 8. The statistical results of the t-test between the results of 50 runs and those of 30, 100 and 300 runs, respectively.

        30 runs    100 runs    300 runs
f1      -0.35      -0.04       0.02
f6      -1.12      -0.05       0.04
f11     -0.27      0.01        0.03

From the results presented in Tables 3-5, it can also be seen that the convergence performances of FEP, GSO and BFAVP obtained from 50 runs are similar, while there are significant differences between GA, PSO and BFAVP. Therefore, we further compare BFAVP with GA and PSO concerning the computation time taken on the high-dimensional benchmark functions. All the experiments were undertaken on a PC with a Core 2 Duo 2.66 GHz processor, and computation time was measured in seconds. Tables 9 and 10 list the computation time consumed by GA, PSO and BFAVP on f1-f5 and f6-f10, respectively, measured when the algorithm reaches the pre-set threshold listed in each table. From the results it can be seen that, except for the functions with quadratic surfaces, such as f1 and f2, BFAVP performs faster than GA and PSO. The maximal computation time saving made by BFAVP on these benchmark functions reaches 87.42% in comparison with GA or PSO, and BFAVP makes a significant saving in almost all the cases investigated.

To focus further on the comparison of computation time between GA, PSO and BFAVP, we set up thresholds at 70%, 90%, 99% and 99.99% of the optimal fitness value of function f6 for an algorithm to reach. Table 11 shows the number of function evaluations taken by each algorithm to reach each threshold on f6. Compared with GA and PSO, BFAVP is always the first algorithm to reach the thresholds. It has also been noted that in some cases PSO fails to converge to the global optimum of f6.

We would also like to give an insight into the convergence capability of BFAVP, to explore the merit gained from a varying population size that adapts to the computational complexity demanded and thereby reduces redundant computation. Fig. 1 illustrates the standard deviation of the fitness values of BFAVP obtained during the optimization of function f6, averaged over 50 runs. The figure shows that during the search the major search mechanism changes from random searching to gradient searching, which can be seen at around 1.2 × 10^5 function evaluations; after that, the optimization process becomes stable. The standard deviation not only decreases in the early stage of the optimization procedure, but also becomes nearly invariable at the later stage on this multimodal objective function. A sudden change of the standard deviation of the fitness values during the optimization process implies that the varying population size is capable of exploring global optima through a varying population diversity, and the rapid change of the standard deviation occurring at the early search stage, or immediately after new optima are explored, also implies that BFAVP offers a local search capability. The compromise between the global and local search capabilities is one of the major merits of BFAVP.

Table 9. Average computation time (s) consumed to obtain a valid solution with different thresholds on the unimodal benchmark functions. "Fail" means that the algorithm never reaches the threshold.

                    f1          f2          f3         f4          f5
Threshold           0.01        0.01        10.00      100.00      10.00
GA                  Fail        Fail        25.36      Fail        18.31
PSO                 0.92        3.78        5.50       36.92       5.27
BFAVP               5.35        5.74        3.19       35.24       4.93
Time reduction      -481.52%    -51.85%     87.42%     4.55%       73.07%

Table 10. Average computation time (s) consumed to obtain a valid solution with different thresholds on the multimodal benchmark functions. "Fail" means that the algorithm never reaches the threshold.

                    f6          f7          f8         f9          f10
Threshold           -10000      1.00        1.00       1.00        1.0
GA                  4.84        21.07       19.67      Fail        14.54
PSO                 Fail        Fail        6.95       6.37        15.72
BFAVP               3.96        18.35       7.32       4.98        11.5
Time reduction      18.18%      12.91%      62.79%     21.82%      26.84%

Table 11. Number of function evaluations required by each algorithm to reach a demanded solution on f6. "Fail" means that the algorithm never reaches the threshold.

         Threshold 1, 70%    Threshold 2, 90%    Threshold 3, 99%    Threshold 4, 99.99%
GA       2.1 × 10^3          5.7 × 10^3          8.9 × 10^4          1.3 × 10^5
PSO      1.3 × 10^3          Fail                Fail                Fail
BFAVP    4.1 × 10^2          3.9 × 10^3          1.2 × 10^4          1.0 × 10^5

Fig. 1. The standard deviation of f6 obtained by BFAVP in 50 runs.
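For completeness, the kind of two-tailed t-test comparison reported in Table 8 can be reproduced along the following lines, assuming SciPy is available; the arrays below are random placeholders standing in for the best fitness value of each run, not the paper's data.

```python
# Two-tailed t-test between two sets of per-run best fitness values (cf. Table 8).
import numpy as np
from scipy import stats

best_50_runs = np.random.default_rng(1).normal(1e-8, 9e-9, 50)    # placeholder data
best_300_runs = np.random.default_rng(2).normal(1e-8, 9e-9, 300)  # placeholder data

res = stats.ttest_ind(best_50_runs, best_300_runs)
print(f"t = {res.statistic:.2f}, p = {res.pvalue:.3f}")  # p > 0.05 -> means not significantly different
```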

3.7. Investigation of the Parameter Settings

It is necessary to understand which parameters of BFAVP are sensitive to its performance. For this purpose, we undertook an experiment in which three benchmark functions, f1, f6 and f11, one from each category, are selected to evaluate the effect of each parameter.

In BFAVP, the ratio between tumble and run is an important quantity which influences the search process. This quantity is controlled by nc, the number of run steps in a chemotaxis process. Table 12 lists the average best results and standard deviations of functions f1, f6 and f11 obtained with different nc. From these results, it can be seen that the variation of nc has only a minor influence on the unimodal benchmark functions, because for unimodal functions the tumble-run process is the major searching mechanism dominating the whole search. For the multimodal functions, however, quorum sensing plays the major role in searching. If the number of run steps is increased while the total number of function evaluations is fixed, the number of quorum sensing operations decreases; in that case quorum sensing can no longer dominate the search process, but whenever it does apply it causes a significant disturbance to the search, which leads to a large deviation of fitness values and even to an unstable performance of BFAVP.

Table 12. Average best results of mean (μ) and standard deviations (σ) of functions f1, f6 and f11 with different numbers of run steps in a chemotaxis process, nc.

nc        f1                  f6                  f11
1    μ    9.0318 × 10^-9      −10778.4387         1.0253
     σ    8.2769 × 10^-9      1978.0344           0.3816
2    μ    8.1712 × 10^-9      −12170.3171         1.0035
     σ    8.7060 × 10^-9      281.9502            3.7655 × 10^-3
3    μ    8.3922 × 10^-9      −12532.8235         0.9980
     σ    9.6555 × 10^-9      16.6948             7.1869 × 10^-7
4    μ    9.9908 × 10^-9      −12569.4833         0.9980
     σ    8.7307 × 10^-9      1.3082 × 10^-2      2.5080 × 10^-16
5    μ    1.1340 × 10^-10     −12569.3084         0.9980
     σ    2.6787 × 10^-10     0.1752              5.0517 × 10^-16
6    μ    9.7577 × 10^-9      −12568.0587         0.9980
     σ    9.7431 × 10^-9      2.0971              4.4898 × 10^-15

Based on the description in Section 2, the percentage of bacteria involved in the attraction and repelling of quorum sensing is discussed next. Different values of ρ ranging from 0 to 1 are tested on each function. Table 13 lists the average best results of mean and standard deviations of functions f1, f6 and f11 with different ρ. The table shows that with ρ set to 0.2, BFAVP provides the best results on the multimodal functions. For the unimodal functions, the performance is slightly improved by decreasing ρ; however, a smaller ρ also decreases the robustness of the algorithm. As indicated above, in most of the experiments ρ has been set to 0.2.

Table 13. Average best results of mean (μ) and standard deviations (σ) of functions f1, f6 and f11 with different repelling rates, ρ.

ρ          f1                  f6                  f11
0     μ    1.0211 × 10^-10     −8904.9134          0.9980
      σ    7.4112 × 10^-9      1008.9575           3.1419 × 10^-13
0.2   μ    9.9908 × 10^-9      −12569.4833         0.9980
      σ    8.7307 × 10^-9      1.3082 × 10^-2      2.5080 × 10^-16
0.4   μ    9.9519 × 10^-9      −12539.6324         0.9980
      σ    3.1270 × 10^-9      92.1576             6.4218 × 10^-14
0.6   μ    8.0917 × 10^-9      −12432.0975         0.9980
      σ    1.7307 × 10^-9      253.9706            9.7922 × 10^-12
0.8   μ    7.7725 × 10^-9      −11052.2785         0.9980
      σ    8.7307 × 10^-8      531.4854            5.6557 × 10^-11
1     μ    3.9058 × 10^-9      −9247.5469          0.9980
      σ    3.7307 × 10^-8      691.8003            7.8491 × 10^-10

In BFAVP, two parameters, the mean and the standard deviation of the bacterial lifespan, are related to the elimination behavior. Tables 14 and 15 list the average best results of mean (μ) and standard deviations (σ) of functions f1, f6 and f11 with different lifespan means (μ̂) and standard deviations (σ̂), respectively. In Table 14, the standard deviation of the bacterial lifespan is set to 10; in Table 15, the mean bacterial lifespan is set to 50. From these two tables, it is understood that the performance of BFAVP is not sensitive to these two parameters.

Table 14. Average best results (μ) and standard deviations (σ) of functions f1, f6 and f11 with different mean bacterial lifespans, μ̂.

μ̂         f1                  f6                  f11
30    μ    9.9514 × 10^-9      −12569.4833         0.9980
      σ    8.5284 × 10^-9      1.8147 × 10^-2      3.0975 × 10^-16
50    μ    9.9908 × 10^-9      −12569.4833         0.9980
      σ    8.7307 × 10^-9      1.3082 × 10^-2      2.5080 × 10^-16
70    μ    9.8759 × 10^-9      −12569.4833         0.9980
      σ    8.9174 × 10^-9      1.1270 × 10^-2      2.2785 × 10^-16

Table 15. Average best results (μ) and standard deviations (σ) of functions f1, f6 and f11 with different standard deviations of the lifespan, σ̂.

σ̂         f1                  f6                  f11
5     μ    9.9102 × 10^-9      −12569.4833         0.9980
      σ    8.5913 × 10^-9      1.9134 × 10^-2      2.5469 × 10^-16
10    μ    9.9908 × 10^-9      −12569.4833         0.9980
      σ    8.7307 × 10^-9      1.3082 × 10^-2      2.5080 × 10^-16
15    μ    9.9780 × 10^-9      −12569.4833         0.9980
      σ    9.1447 × 10^-9      1.6324 × 10^-2      2.9575 × 10^-16

3.8. The Tuning of Algorithms

In this paper, BFAVP is not developed to be suitable for every benchmark function, but is designed to outperform GSO in most cases. According to the No Free Lunch (NFL) theorem (Wolpert and Macready, 1997), there is no universal parameter setting which can be applied to all cases, so it is necessary to tune algorithms for specific applications. For example, in multimodal functions a higher population diversity leads to a higher probability of detecting local optima, but this diversity also slows down the convergence on unimodal functions. The maximal number of run steps taken in the same direction is also a significant tunable parameter: when the algorithm encounters a smooth surface, a larger number of run steps accelerates the convergence.

In order to evaluate each algorithm under tuning, an experiment is carried out in which the parameters of BFAVP and GSO are tuned on benchmark functions f2 and f10, respectively. In order to adjust the population diversity, the repelling rate in BFAVP is set to 0.1 and 0.25, respectively; with the same objective, the percentage of rangers in GSO is set to 10% and 25% as well. Function f2 has a smooth surface around the global optimum, so the number of run steps in BFAVP is set to 6, which accelerates the convergence, and the maximum turning angle is reduced to π/16 for this function. Table 16 lists the average best results (μ) and standard deviations (σ) of functions f2 and f10 obtained by GSO and BFAVP. From the results, it can be found that the performance of BFAVP is greatly improved after the parameter tuning.

4. Bacterial Colonial Behaviors in Optimization

In biology, colonial behavior refers to several individual organisms of the same species living closely together; a bacterial colony is a cluster of organisms usually reproduced from a single bacterium. Based on the simulation studies of colony behaviors in BFAVP, the advantages of BFAVP can be analyzed in three aspects, i.e. variation of population size, efficiency of energy absorption, and bacterial corpse distribution. In order to demonstrate the bacterial colony behaviors visually, BFAVP is applied to f6 in a 2-dimensional case as follows:

f_6(x) = -\sum_{i=1}^{2} x_i \sin\left(\sqrt{|x_i|}\right),        (13)

where f6 is a multimodal function as illustrated in Fig. 2, which has a nutrient distribution akin to that of a practical environment.

Fig. 2. The environment of f6.

4.1. Variation of Population Size

As described in the previous section, the population size varies in BFAVP: bacteria absorb energy from the nutrient environment and proliferate if enough energy is gained, and a bacterium is eliminated once it reaches the limit of its lifespan. When the bacteria are moving inside the search space, a measure of optimization efficiency, η, can be determined as follows:

\eta = \frac{\bar{N}^k}{N^k},        (14)

where N^k denotes the population size in the kth iteration and \bar{N}^k is the number of bacteria that have a better fitness value in the kth iteration than in the (k-1)th iteration. \bar{N}^k can be expressed as:

\bar{N}^k = \sum_{p=1}^{m} Q_p^k, \quad Q_p^k = \begin{cases} 1, & F(X_p^k) < F(X_p^{k-1}), \\ 0, & \text{otherwise}, \end{cases}        (15)

where F(X_p^k) indicates the fitness value of the pth bacterium at the kth iteration and m indicates the population size of the iteration. The optimization efficiency is the proportion of bacteria whose fitness value improved over the entire population, and it is an efficient way to measure the searching performance of the individuals.

The objective function described by (13) is a 2-dimensional multimodal function, so the computation is not as complex as that undertaken in the previous simulation studies. Thus, the initial population size is reduced from 50 to 20, and the limit of the population size is set to 150. Fig. 3 shows the variation of the population size within 2500 iterations; from this figure, the lag phase, log phase and stationary phase can be explicitly observed. Fig. 4 illustrates the optimization efficiency, η, which follows (14). The population size of BFAVP is self-adaptive according to the complexity of the objective function.

Fig. 3. The varying population size in BFAVP.
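A small sketch of the 2-D objective of Eq. (13) and the optimization efficiency of Eqs. (14)-(15) is given below; the helper names, the ±500 search bounds and the random positions are illustrative assumptions of this sketch.

```python
# Illustrative helpers for Eq. (13) and the efficiency measure of Eqs. (14)-(15).
import numpy as np

def f6_2d(x):
    """f6(x) = -sum_i x_i * sin(sqrt(|x_i|)) in two dimensions."""
    return -np.sum(x * np.sin(np.sqrt(np.abs(x))))

def optimisation_efficiency(pos_now, pos_prev):
    """eta = (number of bacteria that improved this iteration) / (population size)."""
    improved = sum(f6_2d(a) < f6_2d(b) for a, b in zip(pos_now, pos_prev))
    return improved / len(pos_now)

rng = np.random.default_rng(0)
prev = [rng.uniform(-500, 500, 2) for _ in range(20)]   # assumed search bounds
curr = [p + rng.normal(0, 10, 2) for p in prev]
print(optimisation_efficiency(curr, prev))
```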


Table 16. Average best results (μ) and standard deviations (σ) of functions f2 and f10 obtained by GSO and BFAVP with universal and with tuned parameters.

                      Universal parameters     Tuned parameters
f2    GSO     μ       3.3709 × 10^-5           9.5102 × 10^-6
              σ       8.6185 × 10^-5           1.8430 × 10^-5
      BFAVP   μ       5.4701 × 10^-5           7.9941 × 10^-6
              σ       3.6527 × 10^-5           9.0148 × 10^-6
f10   GSO     μ       2.7648 × 10^-11          1.8851 × 10^-11
              σ       9.1674 × 10^-11          8.0144 × 10^-11
      BFAVP   μ       1.9596 × 10^-9           1.3957 × 10^-11
              σ       2.0533 × 10^-9           6.5740 × 10^-11

4.2. Efficiency of Energy Absorption

As described in Passino (2002), bacteria search for energy in a way that maximizes their efficiency. Suppose E is the quantity of energy absorbed by an individual during a time interval T. The aim of the bacterial foraging is to maximize:

\psi = \frac{E}{T},        (16)

where \psi is the foraging efficiency. Thus, an algorithm inspired by bacterial chemotaxis behavior should be highly efficient in searching. This feature can be seen from Fig. 5, which illustrates the locus of a single bacterium on the function f6. Although the rotation angle is randomly generated, the direction of the run step is always the gradient-decreasing direction. Even if the bacterium is trapped in a local optimum, the repelling process of quorum sensing can still provide opportunities for the bacterium to escape.

To measure the foraging efficiency \psi, GA, PSO and BFAVP are applied to function f6. The energy of the pth bacterium at the kth iteration is proportional to its fitness value, F(X_p^k). The sum of the energy over the whole tumble-run process, E_p, can be expressed as:

E_p = \sum_{k=1}^{n} F(X_p^k),        (17)

where n is the bacterial lifespan. Therefore, Eq. (16) can be rewritten as:

\psi = \frac{\sum_{p=1}^{m} E_p}{m \cdot n} = \frac{\sum_{p=1}^{m}\sum_{k=1}^{n} F(X_p^k)}{m \cdot n},        (18)

where, for BFAVP, m is the mean value of the population size over the whole optimization process. The average foraging efficiencies of GA, PSO and BFAVP, obtained in 50 runs, are 657.8561, 662.6191 and 794.3517, respectively. It can be seen that the foraging efficiency of BFAVP is much higher than that of GA and PSO, i.e. BFAVP is more suitable for multimodal problems.

Fig. 4. Optimization efficiency.

Fig. 5. A bacterial locus on contour line.
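The foraging efficiency of Eq. (18) is straightforward to compute from a recorded fitness history, as sketched below; the nested-list bookkeeping format (one list of per-iteration fitness values per bacterium) is an assumption of this illustration.

```python
# Rough computation of the foraging efficiency of Eq. (18) from a fitness history.
def foraging_efficiency(fitness_history):
    """psi = sum_p sum_k F(X_p^k) / (m * n): total absorbed 'energy' per bacterium-iteration."""
    m = len(fitness_history)                 # (mean) population size
    n = len(fitness_history[0])              # bacterial lifespan in iterations
    total = sum(sum(per_bacterium) for per_bacterium in fitness_history)
    return total / (m * n)

# e.g. three bacteria tracked over four iterations
print(foraging_efficiency([[1.0, 2.0, 3.0, 4.0],
                           [2.0, 2.5, 3.0, 3.5],
                           [0.5, 1.5, 2.5, 3.5]]))
```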

4.3. Bacterial Migration

Bacterial migration denotes the movement of bacteria from one location to another, sometimes over a long distance or in large groups. The mechanism of chemotaxis senses the distribution of nutrient in the environment and determines in which direction the concentration increases most rapidly. However, as a bacterial body is small, it is hard for bacteria to detect the nutrient gradient effectively. Under the framework of a varying population, the bacteria that perform poorly and are trapped in local optima are more likely to be eliminated, as less nutrition is located in these areas, while the bacteria at the global optimum have more nutrition, which leads to rapid division. Therefore, since BFAVP simulates the phenomena of bacterial proliferation and elimination, its convergence speed is accelerated and premature convergence can be prevented.

In order to analyze the behavior of bacterial migration, we undertook an experiment to monitor the position of each bacterium during the optimization process. In this case, so that the bacteria can be easily and clearly observed, the initial population size of BFAVP is set to the small value of 5. Fig. 6 illustrates four stages of the optimization process. In the initial iteration, five bacteria are randomly placed in the nutrient environment, far away from the global optimum located at (420.9687, 420.9687).


Fig. 6. The bacterial migration during the foraging process.

By the 7th iteration, the population size has increased by one. The bacteria are still far away from the global optimum and some of them are even trapped in local optima. However, by the 15th iteration, most of the bacteria have converged to a local optimum near the global optimum. The nutrient distributed at this location is richer than anywhere else apart from the global optimum, so the population size begins to increase. Finally, by the 30th iteration, the population size has grown to 20; most of the bacteria have converged to the global optimum and begin rapid proliferation. From Fig. 6, it can be seen that BFAVP has the ability to adjust the population size according to the distribution of nutrient, and this ability is reflected in bacterial migration. When most of the bacteria are trapped in local optima where the density of nutrient is not high, the population size rises slowly. When the bacteria converge to a small region around the global optimum, the population size increases greatly due to their proliferation, and each bacterium then starts a finer search for the global optimum.
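The proliferation and elimination behavior described above can be illustrated, very roughly, by the following sketch. It is a hedged illustration of the idea only, not BFAVP's actual update rules; the thresholds, the energy-halving choice and the perturbation scale are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def adjust_population(positions, energies,
                      death_threshold=0.0, split_threshold=1.0, step=0.01):
    """Rough sketch of a varying-population step: bacteria whose energy is
    depleted are removed, while well-fed bacteria divide into two nearby
    individuals. All numeric choices here are illustrative only."""
    kept_pos, kept_eng = [], []
    for x, e in zip(positions, energies):
        if e <= death_threshold:      # elimination: the bacterium "dies"
            continue
        if e >= split_threshold:      # proliferation: parent and child share energy
            child = x + step * rng.standard_normal(x.shape)
            kept_pos.extend([x, child])
            kept_eng.extend([e / 2.0, e / 2.0])
        else:
            kept_pos.append(x)
            kept_eng.append(e)
    return kept_pos, kept_eng

# Example: three bacteria in a 2-D search space.
pos = [np.array([1.0, 2.0]), np.array([0.5, -0.5]), np.array([3.0, 3.0])]
eng = [1.5, -0.2, 0.4]
pos, eng = adjust_population(pos, eng)
print(len(pos), eng)  # one bacterium is eliminated, one divides -> 3 remain
```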

4.4. Bacterial Corpses and Local Optima

Bacteria search for food according to the information on nutrient distribution throughout the whole foraging process, and this information is updated iteration by iteration. In an environment containing multiple local optima, a bacterium tends to stay in one of them if an alternative route is not discovered. Since the elimination mechanism is introduced in BFAVP, the bacteria which did not gain enough energy are eliminated during the optimisation process. With the quorum sensing mechanism, which attracts or repels bacteria, the “corpses” of these bacteria are more likely to be found around the local optima or the global optimum. Fig. 7 illustrates the bacterial “corpses” and the local optima along the contour lines of function f6; several local optima are revealed by the positions of the “corpses”.

Fig. 7. Bacterial corpses and local optima on contour line.
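A by-product of elimination is that the final positions of dead bacteria can be logged and grouped as rough markers of local optima, which is essentially what Fig. 7 visualises. The sketch below is our own illustration of such bookkeeping, not part of the published algorithm; the class name CorpseLog and the grouping radius are assumptions.

```python
import numpy as np

class CorpseLog:
    """Record the last positions of eliminated bacteria and group nearby
    'corpses' so that clusters can be read as candidate local optima."""

    def __init__(self, radius=1.0):
        self.radius = radius
        self.corpses = []          # positions of eliminated bacteria

    def record(self, position):
        self.corpses.append(np.asarray(position, dtype=float))

    def candidate_optima(self):
        """Greedy grouping: each corpse joins the first cluster centre within
        `radius`; otherwise it starts a new cluster."""
        centres = []
        for c in self.corpses:
            for i, centre in enumerate(centres):
                if np.linalg.norm(c - centre) <= self.radius:
                    centres[i] = (centre + c) / 2.0   # crude running average
                    break
            else:
                centres.append(c)
        return centres

log = CorpseLog(radius=5.0)
for p in [(420.0, 421.0), (419.5, 420.5), (-300.0, -300.2)]:
    log.record(p)
print(log.candidate_optima())   # two clusters: near the global optimum and a local one
```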


5. Conclusion

This paper has presented a novel optimization algorithm, BFAVP, which is inspired by five underlying mechanisms of bacterial foraging behavior: chemotaxis, metabolism, proliferation, elimination and quorum sensing. It is more biologically realistic than BFA. In order to build a framework of varying population, indices of bacterial energy and bacterial age have been introduced in BFAVP to gauge the searching capability and life cycle of an individual, thus simulating bacterial metabolism, proliferation and elimination behavior. The algorithm also combines chemotaxis behavior with the quorum sensing phenomenon. The former provides the gradient-searching capability, which ensures that the bacterium always moves to a better position than in the previous step; the latter plays a role in controlling the diversity of the bacterial population. Both the attraction and the repelling aspects of quorum sensing have been adopted in BFAVP: when quorum sensing happens within a single bacterial species, it accelerates the convergence speed, and when it happens among disparate species, it prevents bacteria from being trapped in local optima. This feature of quorum sensing has greatly enhanced the global search capability of BFAVP.

In the simulation studies, BFAVP was compared with five EAs, including two novel algorithms proposed in 2009. The simulation studies have shown that, thanks to this framework, BFAVP adapts its population size to different types of benchmark functions. BFAVP also overcomes the lack of population diversity from which most EAs suffer, and it offers better computational efficiency than the other EAs. With the flexible operation of quorum sensing, BFAVP is more suitable for high-dimensional multimodal functions than the other EAs. Moreover, BFAVP performs stably on unimodal functions, and obtains better results than GA, PSO, FEP and GSO on most of the multimodal benchmark functions, especially the high-dimensional ones, with much less computation time required.

We have comprehensively analyzed the performance of BFAVP through simulation studies on thirteen benchmark functions, in comparison with GA, PSO, FEP and GSO, and through discussions of its behavior from a bacterial point of view. As these benchmark functions cover almost all classes of complex optimisation problems, we believe BFAVP has great potential for application to real-world problems, although such results are not presented in this paper.

Appendix A. High Dimensional Unimodal Benchmark Functions

A.1. Sphere Model

f_1(x) = \sum_{i=1}^{30} x_i^2, \quad -100 \le x_i \le 100, \quad \min(f_1) = f_1(0, \ldots, 0) = 0

A.2. Schwefel's Problem 2.22

f_2(x) = \sum_{i=1}^{30} |x_i| + \prod_{i=1}^{30} |x_i|, \quad -10 \le x_i \le 10, \quad \min(f_2) = f_2(0, \ldots, 0) = 0

A.3. Schwefel's Problem 2.21

f_3(x) = \max_i \{ |x_i|, \, 1 \le i \le 30 \}, \quad -100 \le x_i \le 100, \quad \min(f_3) = f_3(0, \ldots, 0) = 0

A.4. Generalised Rosenbrock's Function

f_4(x) = \sum_{i=1}^{29} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right], \quad -30 \le x_i \le 30, \quad \min(f_4) = f_4(1, \ldots, 1) = 0

A.5. Step Function

f_5(x) = \sum_{i=1}^{30} \left( \lfloor x_i + 0.5 \rfloor \right)^2, \quad -100 \le x_i \le 100, \quad \min(f_5) = f_5(0, \ldots, 0) = 0

Appendix B. High Dimensional Multimodal Benchmark Functions

B.1. Generalised Schwefel's Problem 2.26

f_6(x) = \sum_{i=1}^{30} \left( -x_i \sin\sqrt{|x_i|} \right), \quad -500 \le x_i \le 500, \quad \min(f_6) = f_6(420.9687, \ldots, 420.9687) = -12569.5

B.2. Generalised Rastrigin's Function

f_7(x) = \sum_{i=1}^{30} \left[ x_i^2 - 10 \cos(2 \pi x_i) + 10 \right], \quad -5.12 \le x_i \le 5.12, \quad \min(f_7) = f_7(0, \ldots, 0) = 0

B.3. Ackley's Function

f_8(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{30} \sum_{i=1}^{30} x_i^2} \right) - \exp\left( \frac{1}{30} \sum_{i=1}^{30} \cos(2 \pi x_i) \right) + 20 + e, \quad -32 \le x_i \le 32, \quad \min(f_8) = f_8(0, \ldots, 0) = 0

B.4. Generalised Griewank Function

f_9(x) = \frac{1}{4000} \sum_{i=1}^{30} x_i^2 - \prod_{i=1}^{30} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1, \quad -600 \le x_i \le 600, \quad \min(f_9) = f_9(0, \ldots, 0) = 0

B.5. Generalised Penalised Functions

f_{10}(x) = \frac{\pi}{30} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{29} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_{29} - 1)^2 \right\} + \sum_{i=1}^{30} u(x_i, 5, 100, 4),
\quad -32 \le x_i \le 32, \quad \min(f_{10}) = f_{10}(1, \ldots, 1) = 0,

where y_i = 1 + \frac{1}{4}(x_i + 1) and

u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a, \\ 0, & -a \le x_i \le a, \\ k (-x_i - a)^m, & x_i < -a. \end{cases}
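For readers who wish to reproduce the experiments, the benchmark functions in Appendices A and B translate directly into code. The snippet below is an illustrative, NumPy-based sketch of four of them with our own function names; it is not taken from the paper.

```python
import numpy as np

def sphere(x):            # f1, Appendix A.1
    return np.sum(x ** 2)

def rastrigin(x):         # f7, Appendix B.2
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def ackley(x):            # f8, Appendix B.3
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def schwefel_2_26(x):     # f6, Appendix B.1
    return np.sum(-x * np.sin(np.sqrt(np.abs(x))))

# Evaluate at the optima reported in the appendices (30 dimensions).
print(sphere(np.zeros(30)))                      # 0.0
print(rastrigin(np.zeros(30)))                   # 0.0
print(ackley(np.zeros(30)))                      # approximately 0
print(schwefel_2_26(np.full(30, 420.9687)))      # approximately -12569.5
```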

Table C.1
Kowalik's function f12: the constants a_i and b_i^{-1}.

i      a_i       b_i^{-1}
1      0.1957    0.25
2      0.1947    0.5
3      0.1735    1
4      0.1600    2
5      0.0844    4
6      0.0627    6
7      0.0456    8
8      0.0342    10
9      0.0323    12
10     0.0235    14
11     0.0246    16

Appendix C. Low Dimensional Multimodal Benchmark Functions

C.1. Shekel's Foxhole Function

f_{11}(x) = \left[ \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right]^{-1}, \quad -65.536 \le x_i \le 65.536, \quad \min(f_{11}) \approx f_{11}(-32, -32) \approx 0.998,

where

(a_{ij}) = \begin{pmatrix} -32 & -16 & 0 & 16 & 32 & -32 & \cdots & 0 & 16 & 32 \\ -32 & -32 & -32 & -32 & -32 & -16 & \cdots & 32 & 32 & 32 \end{pmatrix}.

C.2. Kowalik's Function

f_{12}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2, \quad -6 \le x_i \le 6, \quad \min(f_{12}) \approx f_{12}(0.1928, 0.1928, 0.1231, 0.1358) \approx 0.0003075.

The constants a_i and b_i are given in Table C.1.

C.3. Six-Hump Camel-Back Function

f_{13}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4, \quad -5 \le x_i \le 5, \quad \min(f_{13}) = f_{13}(0.08983, -0.7126) = -1.0316285.
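Kowalik's function can be coded in the same way from its definition and the constants in Table C.1. The snippet below is again an illustrative sketch with our own naming, not the authors' code.

```python
import numpy as np

# Constants a_i and b_i^{-1} from Table C.1; b_i is the reciprocal of the
# tabulated value.
A = np.array([0.1957, 0.1947, 0.1735, 0.1600, 0.0844, 0.0627,
              0.0456, 0.0342, 0.0323, 0.0235, 0.0246])
B = 1.0 / np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0,
                    8.0, 10.0, 12.0, 14.0, 16.0])

def kowalik(x):
    """f12 as defined in Appendix C.2."""
    x1, x2, x3, x4 = x
    model = x1 * (B ** 2 + B * x2) / (B ** 2 + B * x3 + x4)
    return float(np.sum((A - model) ** 2))

# Evaluating at the minimiser reported in the appendix should give a value of
# roughly 3 x 10^-4, consistent with min(f12) ~ 0.0003075.
print(kowalik(np.array([0.1928, 0.1928, 0.1231, 0.1358])))
```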

References

Arabas, J., Michalewicz, Z., Mulawka, J., 1994. GAVaPS — a genetic algorithm with varying population size. In: Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, vol. 1, Orlando, FL, USA, June 27–29, pp. 73–78.
Back, T., 1996. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms. Oxford University Press, Oxford, UK.
Back, T., Eiben, A.E., van der Vaart, N.A.L., 2000. An empirical study on GAs without parameters. In: Proceedings of the 6th International Conference on Parallel Problem Solving from Nature, London, UK, September, pp. 315–324.
Bar Tana, J., Howlett, B.J., Koshland, D.E., 1977. Flagellar formation in Escherichia coli electron transport mutants. Journal of Bacteriology 130 (May (2)), 787–792.
Berg, H.C., 2000. Motile behavior of bacteria. Physics Today 53 (January (1)), 24–29.
Bounds, D.G., 1987. New optimization methods from physics and biology. Nature 329 (September), 215–219.
Clerc, M., Kennedy, J., 2002. The particle swarm: explosion, stability, and convergence in a multi-dimensional complex space. IEEE Transactions on Evolutionary Computation 6 (February (1)), 58–73.
Dasgupta, S., Das, S., Abraham, A., Biswas, A., 2009. Adaptive computational chemotaxis in bacterial foraging optimization: an analysis. IEEE Transactions on Evolutionary Computation 13 (4), 919–941.
Eberhart, R.C., Shi, Y., 2000. Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the 2000 Congress on Evolutionary Computation, vol. 1, La Jolla, California, USA, July 16–19, pp. 84–88.
Eiben, A.E., Marchiori, E., Valko, V.A., 2004. Evolutionary algorithms with on-the-fly population size adjustment. In: Proceedings of the 6th International Conference on Parallel Problem Solving from Nature, Birmingham, UK, September, pp. 315–324.
Fogel, L., 1966. Artificial Intelligence through Simulated Evolution. John Wiley and Sons Inc., New York, USA.
Hansen, N., Muller, S.D., Koumoutsakos, P., 2003. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation 11 (1), 1–18.
Harik, G., Lobo, F.G., 1999. A parameter-less genetic algorithm. Proceedings of the Genetic and Evolutionary Computation Conference 1, 258–265.
Harik, G., Cantu-Paz, E., Goldberg, D., Miller, B., 1999. The gambler's ruin problem, genetic algorithms, and the sizing of populations. Evolutionary Computation 7 (3), 231–253.
He, S., Wu, Q.H., Saunders, J.R., 2006. A group search optimiser for neural network training. International Conference on Computational Science and its Applications 3948 (May), 1324–1330.
He, S., Wu, Q.H., Saunders, J.R., in press. Group search optimiser – an optimisation algorithm inspired by animal searching behavior. IEEE Transactions on Evolutionary Computation.
Holland, J.H., 1992. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence. MIT Press, Cambridge, MA, USA.
Houck, C., Joines, J., Kay, M., 1995. A genetic algorithm for function optimisation: a MATLAB implementation. Technical Report NCSU-IE-TR-95-09.
Kennedy, J., Eberhart, R.C., 1995. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, vol. 4, Perth, Australia, pp. 1942–1948.
Kim, D.H., Cho, J.H., 2005. Biomimicry of bacterial foraging for distributed optimization and control. Journal of Advanced Computational Intelligence and Intelligent Informatics 9 (July (6)), 669–676.
Li, M.S., Tang, W.J., Tang, W.H., Wu, Q.H., Saunders, J.R., 2007. Bacterial foraging algorithm with varying population for optimal power flow. Lecture Notes in Computer Science 4448, 32–41.
Liu, S., 1999. Tracking bacterial growth in liquid media and a new bacterial life model. Science in China Series C: Life Sciences 42 (December (6)), 644–654.
Lobo, F.G., Goldberg, D.E., 2004. The parameter-less genetic algorithm in practice. Information Sciences 167, 217–232.
Miller, M.B., Bassler, B.L., 2001. Quorum sensing in bacteria. Annual Review of Microbiology 55, 165–199.
Passino, K.M., 2002. Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Systems Magazine 22 (June (3)), 52–67.
Prats, C., Lopez, D., Giro, A., Ferrer, J., Valls, J., 2006. Individual-based modelling of bacterial cultures to study the microscopic causes of the lag phase. Journal of Theoretical Biology 241 (August (4)), 939–953.
Tang, W.J., Wu, Q.H., Saunders, J.R., 2006a. Bacterial foraging algorithm for dynamic environments. In: IEEE Congress on Computational Intelligence 2006 (CEC 2006), Vancouver, Canada, July 16–21, pp. 1324–1330.
Tang, W.J., Wu, Q.H., Saunders, J.R., 2006b. A novel model for bacterial foraging in varying environments. In: Computational Science and its Applications – ICCSA 2006, Lecture Notes in Computer Science, vol. 3980. Glasgow, UK, May 2006. Springer, Berlin/Heidelberg, pp. 556–565.
Wolpert, D.H., Macready, W.G., 1997. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1 (1), 67–82.
Yao, X., Liu, Y., Lin, G., 1999. Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation 3 (July (2)), 82–102.
Individual-based modelling of bacterial cultures to study the microscopic causes of the lag phase. Journal of Theoretical Biology 241 (August (4)), 939–953. Tang, W.J., Wu, Q.H., Saunders, J.R., 2006a. Bacterial foraging algorithm for dynamic environments. In: IEEE Congress on Computational Intelligence 2006 (CEC 2006), Vancouver, Canada, July 16–21, pp. 1324–1330. Tang, W.J., Wu, Q.H., Saunders, J.R., 2006. A novel model for bacterial foraging in varying environments. In: Computational Science and its Applications - ICCSA 2006, vol. 3980/2006 of Lecture Notes in Computer Science. Glasgow, UK, May 2006. Springer Berlin/Heidelberg, pp. 556–565. Wolpert, D.H., Macready, W.G., 1997. No free lunch theorems for search. IEEE Transactions on Evolutionary Computation 1 (1), 67–82. Yao, X., Liu, Y., Lin, G., 1999. Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation (July), 82–102.