A differential evolution approach for solving constrained min–max optimization problems

Expert Systems with Applications 39 (2012) 13440–13450

Gilberto A.S. Segundo (a), Renato A. Krohling (b), Rodrigo C. Cosme (a)

(a) Graduate Program in Computer Science (PPGI), Federal University of Espírito Santo, Av. Fernando Ferrari, 514, Goiabeiras, CEP 29075-910, Vitória, ES, Brazil
(b) Department of Production Engineering & Graduate Program in Computer Science (PPGI), Federal University of Espírito Santo, Av. Fernando Ferrari, 514, Goiabeiras, CEP 29075-910, Vitória, ES, Brazil

Keywords: Continuous optimization; Constrained min–max optimization; Co-evolutionary algorithms; Differential evolution

Abstract

Many problems occurring in engineering can be formulated as min–max optimization problems, for instance in game theory, robust optimal control, and many others. Min–max problems are considered difficult to solve, especially constrained min–max optimization problems. Approaches using co-evolutionary algorithms have successfully been used to solve min–max optimization problems without constraints. We propose a novel differential evolution approach consisting of three populations with a scheme of copying individuals for solving constrained min–max problems. Promising results have been obtained, showing the suitability of the approach.

1. Introduction

Min–max optimization problems appear in many areas of engineering, for instance in game theory (Rapoport & Boebel, 1992), robust design (Cramer, Sudhof, & Zivi, 2009), control (Viñas, 2006), and others. Min–max optimization problems are considered difficult to solve. Hillis (1990), in his pioneering work, proposed a method for building sorting networks inspired by the co-evolution of populations. Two independent genetic algorithms (GAs) were used, one for building sorting networks (hosts) and the other for building test cases (parasites). Both GAs evolved simultaneously and were coupled through the fitness function. Other interesting studies on competitive co-evolutionary algorithms (CEAs) have been carried out by Paredis (1994) and Rosin and Belew (1995, 1996).

A constrained optimization problem can be transformed into an unconstrained optimization problem by introducing Lagrange multipliers and solving the resulting min–max problem using a CEA. CEAs usually operate on two or more populations of individuals. In most CEAs, the fitness of an individual depends not only on the individual itself but also on the individuals of the other population. In this case, the fitness of an individual is evaluated by means of a competition with the members of the other population. Inspired by the work of Hillis (1990), the co-evolutionary approach has been extended to solve constrained optimization problems (Barbosa, 1996, 1999; Cramer et al., 2009; Huang, Wang, & He, 2007; Krohling & Coelho, 2006a; Laskari, Parsopoulos, & Vrahatis, 2002; Shi & Krohling, 2002; Tahk & Sun, 2000).

Barbosa (1996, 1999) presented a method to solve unconstrained min–max problems using two independent GA populations coupled by a common fitness function. Tahk and Sun (2000) also used a co-evolutionary augmented Lagrangian method to solve unconstrained min–max problems by means of two populations of evolution strategies with an annealing scheme. The first population was made up of the vector of variables and the second of the Lagrange multiplier vector. In the same line, Shi and Krohling (2002) first developed a co-evolutionary particle swarm optimization (PSO), and Krohling and Coelho (2006a) then proposed a Gaussian probability distribution to generate the accelerating coefficients of PSO; two populations of PSO using the Gaussian distribution were used to solve min–max optimization problems with promising results. Laskari, Parsopoulos, and Vrahatis (2002) have also presented a method using particle swarm optimization for solving min–max problems, but without a co-evolutionary approach. Huang et al. (2007) used a co-evolutionary differential evolution algorithm to solve the min–max problem associated with a constrained non-linear optimization problem, which consists of minimizing the cost of an analog circuit design given a set of restrictions specified by the user. The authors reported that the design specifications were closely met. Cramer et al. (2009) employed co-evolution to optimize the design of a ship vessel given a set of restrictions; the initial problem was transformed into a min–max problem and solved by a co-evolutionary GA. In this paper, differently, the problem is considered in its original form as a min–max optimization problem with constraints, which is a much more challenging problem.


Some conventional methods to solve this kind of problem have been proposed. Sainz, Armengol, and Vehí (2008) introduced a new approach, based on modal interval analysis, to deal with continuous min–max problems. As the authors argue, in some simple cases the results are obtained by means of simple interval arithmetic computations; in many cases, however, a branch-and-bound algorithm is required, and they observed that complexity increases notably with the number of variables. Lu, Cao, Yuan, and Zhou (2008) proposed a method based on the Karush–Kuhn–Tucker (KKT) conditions, in which all solutions of a constrained min–max optimization problem can be obtained. The authors claim the superiority of their method over the existing methods in the literature. Nevertheless, most of the existing methods make assumptions regarding the continuity and convexity of the objective function, as well as the availability of gradient and Hessian information, which may limit their applicability to real-world problems. Kuo and Huang (2009) proposed a method based on PSO to solve bi-level linear programming problems; min–max problems are a particular case of bi-level problems, as explained in the next section. Their results illustrate that the proposed PSO algorithm outperforms other methods. Nevertheless, the approach was tested for linear problems only, while most real-world problems are non-linear.

In our work we present a derivative-free method using evolutionary algorithms that is capable of solving non-linear problems. Unlike the related works, we evolve two populations sequentially instead of co-evolving them. In addition, we use a third population as a memory in order to speed up convergence and improve the results.

The rest of the paper is organized as follows: in Section 2, we describe the constrained min–max optimization problem and its relationship with bi-level problems. The standard DE is explained in Section 3, and Section 4 describes an adaptive penalty. In Section 5, a standard co-evolutionary differential evolution (CDE) is discussed. Section 6 describes a novel approach using an evolutionary algorithm to solve constrained min–max problems. In Section 7, we present results and discussions. Conclusions and directions for future research are given in Section 8.

2. Problem description

A constrained min–max optimization problem is described as:

\[
\min_x \max_y \, f(x, y) \quad \text{subject to} \quad g_i(x, y) \le 0 \qquad (1)
\]

where $f(x, y)$ is the objective function, $x = [x_1, x_2, \ldots, x_n]^T$ and $y = [y_1, y_2, \ldots, y_n]^T$ are the vectors of variables, and $g_i(x, y)$ are the inequality constraints. The set $S \subseteq \mathbb{R}^n$ designates the search space, which is defined by the lower and upper bounds of the variables: $\underline{x}_j \le x_j \le \overline{x}_j$ and $\underline{y}_j \le y_j \le \overline{y}_j$ with $j = 1, \ldots, n$. The goal is to find the saddle point $(x^*, y^*)$. Points in the search space that satisfy the inequality constraints are feasible candidate solutions.

Min–max problems can be explained using game theory in the context of a zero-sum game. In this game, each player chooses one of the many strategies available. Once both players have made their decisions, they are announced, and a table of payments (previously known by both players) is used to determine the payment of one player to the other. Table 1 represents the game. In this notation, the table gives the payment of player X to player Y; if the value is negative, the payment is from player Y to player X. Player X can choose among strategies A, B and C; player Y can choose among D, E, F and G. The value in the matrix represents the amount to be paid to player Y.

Table 1
Payoff matrix (payments from player X to player Y).

X \ Y    D    E    F    G    Maximum
A        8    2    9    5    9
B        6    8    7    1    8
C        2    3    4    5    5

For example, if player X chooses strategy A and player Y chooses strategy G, player Y gains 5, while player X loses the same 5. Player X wants to minimize his losses. He realizes that if he chooses strategy A, he may lose up to 9, depending on the strategy chosen by player Y. The maximum losses of X are given in the last column of the table. Thus, to minimize his maximum loss, player X must choose strategy C, which leads to a maximum loss of 5 when player Y chooses strategy G. This selection is called min–max, since it minimizes the maximum loss.

The min–max case discussed in this game can be understood as a discrete min–max, since there is a finite number of choices. A continuous min–max problem follows the same principles, but the number of strategies that can be chosen is infinite and represented by $f(x, y)$. The min–max point is called the saddle point.

Another way to understand min–max problems is through bi-level programming problems (BLPPs); min–max problems are a particular case of BLPPs. Bracken and McGill (1973) introduced mathematical programming problems that contain optimization problems in their constraints, known as bi-level problems. These problems can be described as:

\[
\begin{aligned}
\min_{x \in X} \; & F(x, y) \\
\text{subject to} \; & G(x, y) \le 0 \\
& \quad \min_{y \in Y} \; f(x, y) \\
& \quad \text{subject to} \; g(x, y) \le 0
\end{aligned} \qquad (2)
\]

In the first level, $F(x, y)$ is called the leader's objective function and $G(x, y)$ are its constraints. In the second level, $f(x, y)$ is called the follower's objective function and $g(x, y)$ are its constraints. The leader moves first by choosing a vector $x \in X$ in an attempt to optimize its objective function $F(x, y)$. The follower observes the leader's choice and reacts by selecting a vector $y \in Y$ that optimizes its objective function $f(x, y)$. When $F(x, y) = -f(x, y)$, we have a min–max problem, since minimizing $f(x, y)$ yields the same solution as maximizing $-f(x, y)$.

In the next section, the improved differential evolution algorithm adopted in this work as the basis of the evolutionary approach to solve min–max problems is described.

3. Differential evolution

The optimization procedure differential evolution (DE) was introduced by Storn and Price (1997). Similar to other evolutionary algorithms (EAs), DE is based on the idea of evolving populations of candidate solutions, which undergo the operations of mutation, crossover and selection. The parameter vectors are denoted by $x_i = [x_{i1}, x_{i2}, \ldots, x_{in}]^T$ with components $x_{ij}$. The lower and upper bounds of $x_i$ are $\underline{x}_i$ and $\overline{x}_i$, respectively. The index $i = 1, \ldots, Np$ represents the individual's index in the population and $j = 1, \ldots, n$ is the position in the $n$-dimensional individual. Hereafter we use $t$ to indicate the time (generation). During the initialization of the population, $Np$ vectors are generated randomly in the $n$-dimensional search space. After initialization, the fitness of each vector is calculated.


The three operators, mutation, crossover and selection, are described in the following (Storn & Price, 1997).

3.1. Mutation

By means of the mutation operator a new vector is created. For each solution vector $x_i$ in generation $t$, denoted by $x_i^t$, a mutant vector $v_i^t$ is calculated as:

\[
v_i^t = x_{r_1}^t + F \left( x_{r_2}^t - x_{r_3}^t \right) \qquad (3)
\]

where $r_1$, $r_2$ and $r_3$ are mutually different random integer indexes selected from $1, 2, \ldots, Np$, all different from the running index $i$, so that $Np \ge 4$ is required. The real constant $F$ is called the mutation factor, which controls the amplification of the difference between two individuals. The mutation factor is usually taken from the range [0.5; 1], as pointed out by Storn and Price (1997).

3.2. Crossover

Following the mutation operation, crossover is applied to the population. For each mutant vector $v_i^t$, a trial vector $u_i^t$ is generated according to

\[
u_{i,j}^t = \begin{cases} v_{i,j}^t, & \text{if } \operatorname{rand}_j \le C_r \text{ or } j = k \\ x_{i,j}^t, & \text{otherwise} \end{cases} \qquad (4)
\]

where $\operatorname{rand}_j$ is a uniform random number generated in the interval [0; 1] for each variable $j$. The index $k \in \{1, \ldots, n\}$ is a random parameter index, chosen once for each $i$ to make sure that at least one parameter is always selected from the mutated vector $v_i^t$. $C_r$ is the crossover rate, which controls the fraction of mutant values that are used. The crossover rate is usually taken from the range [0.8; 1], as pointed out by Storn and Price (1997).
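To make the two operators concrete, the following minimal Python sketch (the function name and the NumPy-based population layout are our own choices, not from the paper) builds one DE/rand/1/bin trial vector according to Eqs. (3) and (4):

```python
import numpy as np

def de_trial_vector(pop, i, F=0.8, Cr=0.9, rng=None):
    """Build one DE/rand/1/bin trial vector, Eqs. (3)-(4)."""
    rng = rng or np.random.default_rng()
    Np, n = pop.shape
    # r1, r2, r3: mutually different and different from the running index i
    r1, r2, r3 = rng.choice([r for r in range(Np) if r != i],
                            size=3, replace=False)
    v = pop[r1] + F * (pop[r2] - pop[r3])   # mutation, Eq. (3)
    k = rng.integers(n)                     # position always taken from v
    mask = rng.random(n) <= Cr
    mask[k] = True
    return np.where(mask, v, pop[i])        # binomial crossover, Eq. (4)
```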

3.3. Selection

The selection operation selects the better individual from the parent vector $x_i^t$ and the trial vector $u_i^t$ according to their cost calculated by means of the objective function $f(\cdot)$. In the case of minimization problems, the selected vector is given by

\[
x_i^{t+1} = \begin{cases} u_i^t, & \text{if } f(u_i^t) \le f(x_i^t) \\ x_i^t, & \text{otherwise} \end{cases} \qquad (5)
\]

Generally, the performance of a DE algorithm depends on the population size $Np$, the mutation factor $F$, and the crossover rate $C_r$. In the last few years, different variants of DE have been developed and classified using the notation DE/a/b/c, where a indicates the method for selecting the parent individual that forms the base of the mutated vector, b indicates the number of difference vectors used to perturb the base individual, and c indicates the crossover mechanism used to create the child population. Das et al. (2009) developed a variant of the DE/target-to-best/1/bin scheme which utilizes the concept of a neighborhood for each population member, similar to the one used in PSO algorithms. The proposed scheme, called DEGL, balances the exploration and exploitation abilities of DE. The mutation vector $v$ is composed of a local component using neighborhood members and a global component using the entire population, called $L$ and $G$, respectively. The local donor vector is created by employing the best (fittest) vector in the neighborhood of that member and any two other vectors chosen from the same neighborhood:

\[
L_i = x_i + \alpha \left( x_{best_i} - x_i \right) + \beta \left( x_p - x_q \right) \qquad (6)
\]

where the subscript $best_i$ indicates the best vector in the neighborhood of $x_i$, and $p, q \in [i - k, i + k]$ with $p \ne q \ne i$; $k$ is the radius of the neighborhood. Similarly, the global donor vector is created as:

\[
G_i = x_i + \alpha \left( x_{best} - x_i \right) + \beta \left( x_{r_1} - x_{r_2} \right) \qquad (7)
\]

where the subscript $best$ indicates the best vector in the entire population at the current generation and $r_1, r_2 \in [1, Np]$ with $r_1 \ne r_2 \ne i$. The parameters $\alpha$ and $\beta$ are the scaling factors; in this work, $\alpha = \beta = F$. Then, the mutation vector $v$ of each member is defined as:

\[
v_i = w\, G_i + (1 - w)\, L_i \qquad (8)
\]

The parameter $w$ controls the balance between exploration and exploitation. Das et al. (2009) suggest several ways to define $w$; in this work, the self-adaptive weight factor scheme (SAW) is used, which provided the best results in the tests made by those authors.

Inspired by the promising results provided by the exponential probability distribution used to generate the coefficients of PSO (Krohling & Coelho, 2006b), we use the exponential probability distribution to generate the mutation factor in DE. The exponential probability distribution with density function $f(z)$ is given by:

\[
f(z) = \frac{1}{2b} \exp\!\left( -\frac{|z - a|}{b} \right), \quad -\infty < z < \infty, \quad a, b \ge 0 \qquad (9)
\]

It is evident that one can control the variance by changing the parameters $a$ and $b$. For this study we set $a = 0$ and the scale parameter $b = 0.5$. Experimental investigations have shown that generating the mutation factor using this probability distribution influences the performance of DE positively. The generation of random numbers using the exponential distribution is listed in Algorithm 1.

Algorithm 1. Generation of random numbers using the exponential distribution

1: u1 ← rand             ▷ uniform random number in [0; 1]
2: u2 ← rand             ▷ uniform random number in [0; 1]
3: if u1 > 0.5 then
4:   x ← a + b·ln(u2)    ▷ ln stands for the natural logarithm
5: else
6:   x ← a − b·ln(u2)
7: end if
8: return abs(x)         ▷ abs stands for absolute value
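A direct Python transcription of Algorithm 1 might look as follows (the function name is ours); with a = 0, the returned value b·|ln u2| is an exponentially distributed sample, which is how the mutation factor F is generated here:

```python
import math
import random

def exponential_mutation_factor(a=0.0, b=0.5):
    """Algorithm 1: sample from the density of Eq. (9) and return |x|."""
    u1 = random.random()           # uniform random number in [0; 1]
    u2 = random.random()           # uniform random number in [0; 1]
    if u1 > 0.5:
        x = a + b * math.log(u2)   # ln of a uniform sample is negative
    else:
        x = a - b * math.log(u2)
    return abs(x)                  # the mutation factor F must be positive
```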

Generating random numbers for the mutation factor F in DE using the exponential distribution may provide a good compromise between a high probability of small amplitudes around the current point (fine tuning) and a small probability of larger amplitudes (Krohling & Coelho, 2006b).

Much effort in the last few years has been devoted to defining benchmark problems and evaluation criteria, as provided by the CEC 2006 special session on constrained real-parameter optimization. In this context, methods to handle constrained optimization with differential evolution have been presented (Brest, Zumer, & Maucec, 2006; Huang, Qin, & Suganthan, 2006), and recently for solving the CEC 2010 benchmark problems (Brest, Boskovic, & Zumer, 2010; Mallipeddi & Suganthan, 2010a, 2010b; Takahama & Sakai, 2010a, 2010b; Tasgetiren, Mallipeddi, & Sarman, 2010). In the following, we present the constraint handling method used in this work.


4. Constraint handling

To evaluate candidate solutions and decide which of them is the best, it is necessary to penalize the infeasible individuals. The constraint handling method adopted here is based on the adaptive penalty developed by Tessema and Yen (2006, 2009). It is explained using a minimization problem $f(x)$, but it can equally be used in a min–max problem $\min_x \max_y f(x, y)$. In this method, the fitness $f(x)$ of each individual is normalized as:

\[
\tilde{f}(x) = \frac{f(x) - f_{\min}}{f_{\max} - f_{\min}} \qquad (10)
\]

where $f_{\min} = \min_i f(x_i)$ and $f_{\max} = \max_i f(x_i)$, $i = 1, \ldots, N$. The above normalization is valid for minimization problems. For maximization problems, the normalized fitness is calculated as:

\[
\tilde{f}(x) = \frac{f_{\max} - f(x)}{f_{\max} - f_{\min}} \qquad (11)
\]

The constraint violation v(x) of each individual is calculated as the sum of normalized violations of each constraint averaged over the number of constraints m:

\[
v(x) = \frac{1}{m} \sum_{j=1}^{m} \frac{c_j(x)}{c_{\max,j}} \qquad (12)
\]

where

\[
c_j(x) = \max(0,\, g_j(x)), \quad j = 1, \ldots, k \qquad (13)
\]
\[
c_j(x) = \max(0,\, |h_j(x)| - \delta), \quad j = k + 1, \ldots, m \qquad (14)
\]

with $g_j$ the inequality constraints, $h_j$ the equality constraints, $\delta$ the tolerance used to relax the equality constraints, and $c_{\max,j} = \max_x c_j(x)$. Next, the ratio of feasible individuals $r_f$ in the current population is calculated as:

\[
r_f = \frac{\text{number of feasible individuals in the current population}}{\text{population size}} \qquad (15)
\]

The final fitness $F(x)$ of each individual is calculated according to Eq. (16). The best individual is the one that presents the smallest value of $F(x)$.

\[
F(x) = \begin{cases}
v(x), & \text{if } r_f = 0 \\
\tilde{f}(x), & \text{if } r_f > 0 \text{ and } v(x) = 0 \\
\sqrt{\tilde{f}(x)^2 + v(x)^2} + \left[ (1 - r_f)\, v(x) + r_f\, \tilde{f}(x) \right], & \text{if } r_f > 0 \text{ and } v(x) > 0
\end{cases} \qquad (16)
\]

However, we found a situation in which this method does not present the expected behavior. Consider two individuals $x_1$ and $x_2$, with $f(x_1) = 10$ and $f(x_2) = 12$, in a problem with only two constraints, where individual $x_1$ greatly violates one constraint and satisfies the other, while individual $x_2$ satisfies both constraints. Hence $v(x_1) = \frac{1}{2}(1 + 0) = 0.5$, $v(x_2) = \frac{1}{2}(0 + 0) = 0$, and $r_f = 0.5$ if we consider only these two individuals. The final fitness of individual $x_1$ is calculated through the third case of Eq. (16), giving $F(x_1) = 0.5 + 0.5 \times 0.5 + 0 = 0.75$. The final fitness of individual $x_2$ is calculated through the second case of Eq. (16), giving $F(x_2) = 1$. Since $F(x_1) < F(x_2)$, individual $x_1$ is considered the better one, even though it greatly violates one of the constraints.

To solve this problem, we propose a modification of the adaptive penalty method, based on the following assumptions:

- A feasible individual is always better than an infeasible one.
- When both individuals are feasible, the one with the smaller normalized fitness is better.
- When both individuals are infeasible, the one that violates the smaller number of constraints is considered better. If both violate the same number of constraints, the one with the smaller sum of constraint violations is considered better.

The normalized fitness $\tilde{f}(x)$ of each individual $x$ in a population with $N$ individuals is calculated as shown in Eqs. (10) and (11). Consequently, an individual with a small normalized fitness is better than an individual with a high normalized fitness for both minimization and maximization problems. The cumulative sum of constraint violations $v_{new}(x)$ in a problem with $m$ constraints is calculated as:

\[
v_{new}(x) = \sum_{j=1}^{m} c_j(x) \qquad (17)
\]

where $c_j(x) = \max(0, g_j(x))$, for $j = 1, \ldots, m$. We use the index "new" to avoid confusing this sum of constraint violations with the one proposed by Tessema and Yen (2006, 2009). The assumptions above are then translated into the final fitness $F_{new}(x)$ of each individual, which is calculated according to Eq. (18). The best individual is the one that presents the smallest value of $F_{new}(x)$.

\[
F_{new}(x) = \begin{cases}
\tilde{f}(x), & \text{if } x \text{ is feasible} \\
nv_x + \dfrac{v_{new}(x)}{v(x)_{\max}}, & \text{otherwise}
\end{cases} \qquad (18)
\]

where $nv_x$ is the number of constraints violated by individual $x$ and $v(x)_{\max}$ is the maximum $v_{new}(x_i)$, for $i = 1, \ldots, N$.
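A minimal Python sketch of the modified penalty of Eqs. (17) and (18) might look as follows; the function name is ours, the constraint values g_j(x) are assumed to be precomputed, and v_new_max must be the largest cumulative violation v_new over the current population:

```python
def penalized_fitness(f_norm, g_values, v_new_max):
    """Modified adaptive penalty, Eqs. (17)-(18).

    f_norm: normalized fitness of the individual, Eq. (10) or (11)
    g_values: evaluated inequality constraints g_j(x)
    v_new_max: maximum cumulative violation v_new over the population
    """
    c = [max(0.0, g) for g in g_values]    # per-constraint violations
    v_new = sum(c)                         # Eq. (17)
    nv = sum(1 for cj in c if cj > 0.0)    # number of violated constraints
    if nv == 0:                            # feasible: ranked by fitness alone
        return f_norm
    return nv + v_new / v_new_max          # infeasible: Eq. (18)
```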

5. Standard co-evolutionary algorithms

Two DE populations are involved in the co-evolutionary differential evolution used to solve constrained min–max problems according to Eq. (1). The first DE focuses on evolving the variable vector x while the vector y is kept "frozen"; only the vector x is represented in population PopA. The second DE focuses on evolving the vector y while population PopA is kept "frozen"; only the vector y is represented in population PopB. The two DEs interact with each other through a common fitness evaluation. For the first DE, the problem is a minimization problem, and the fitness value of each individual x is evaluated according to:

\[
f_1(x) = \max_{y \in PopB} f(x, y) \qquad (19)
\]

Each individual $x_i$ of PopA is evaluated against all individuals $y_j$ of PopB; the fitness of individual $x_i$ is the largest value of $f(x_i, y)$ over PopB. This process is repeated for all $N$ individuals of PopA. The second problem is a maximization one, and the fitness value of each individual $y$ is evaluated according to:

\[
f_2(y) = \min_{x \in PopA} f(x, y) \qquad (20)
\]

Each individual $y_j$ of PopB is evaluated against all individuals $x_i$ of PopA; the fitness of individual $y_j$ is the smallest value of $f(x, y_j)$ over PopA. This process is repeated for all $M$ individuals of PopB. The co-evolutionary differential evolution is described in Algorithm 2. The best individual in population PopA is the solution for the vector x, and the best individual in population PopB is the solution for the vector y.
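The common fitness evaluation of Eqs. (19) and (20) can be sketched in Python as follows (the function name and the list-based population representation are our own choices):

```python
def cde_fitness(pop_a, pop_b, f):
    """Common fitness evaluation of standard CDE, Eqs. (19)-(20)."""
    f1 = [max(f(x, y) for y in pop_b) for x in pop_a]  # Eq. (19): minimized
    f2 = [min(f(x, y) for x in pop_a) for y in pop_b]  # Eq. (20): maximized
    return f1, f2
```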


In order to illustrate the approach, let us introduce the following min–max problem P1 (Sainz et al., 2008), described as:

\[
\min_x \max_y f(x, y) = (\cos y + \cos(2y + x))^2
\]
\[
\text{subject to} \quad g_1(x, y) = y - x(x + 6.28) \le 0, \quad g_2(x, y) = y - x(x - 6.28) \le 0 \qquad (21)
\]

with $x, y \in [-3.14, 3.14]$. The plots of the objective function for the unconstrained and the constrained min–max problem P1 are shown in Figs. 1 and 2, respectively. The saddle points of the constrained min–max problem are marked with a green "X" in Fig. 2 and correspond to the coordinates $(x_1 = -0.43708, y_1 = -2.5538)$ and $(x_2 = -0.43708, y_2 = -3.14)$. Both points possess function values equal to $8.5865 \times 10^{-3}$. Fig. 3 shows the plot of the feasible solutions of the constrained min–max problem P1 in more detail. When the x coordinate is smaller than $-0.43708$, the function $f(x, y)$ grows imperceptibly up to the limit of the feasible values of x. We can notice in Fig. 3 that only some coordinates y make the pair (x, y) feasible.

This is what happens when the standard CDE is used to solve this problem, i.e., to find the global min–max points. The populations PopA and PopB, composed of individuals for the variables x and y, respectively, are randomly initialized. Notice that the two global min–max points are located at the coordinate $x = -0.43708$. Let population PopA be composed of some individuals in the range $-0.5 < x < -0.43708$ and one individual at the coordinate $x = 0$. The chances of having a y coordinate between $-3.14$ and $-2.55$, which are feasible coordinates for $-0.5 < x < -0.43708$, are small, while for the coordinate $x = 0$, half of the y coordinates in the specified variation range of the problem are feasible ($-3.14 < y < 0$). Suppose that there is no individual y with a coordinate in $[-3.14, -2.55]$, but that there are individuals y in $[-2, 0]$. Since the fitness of an individual x is calculated by Eq. (19), all individuals x in the range $[-0.5, -0.43708]$ will be classified as infeasible and, consequently, they will have a worse penalized fitness than the individual $x = 0$, which will be classified as feasible. So, the infeasible ones tend to move to other coordinates considered feasible. Notice that the individuals x that were initially in the range $[-0.5, -0.43708]$ should have a better fitness than the individual $x = 0$, since the maximum value over the feasible pairs

(x, y) exists and is smaller than the maximum value over the feasible pairs (0, y). But, because of the reduced number of individuals in population PopB, which holds the y coordinates, this situation is not perceived by the algorithm. It could also happen that a y coordinate lies in this range initially; but, as the iterations pass, the individuals with these coordinates migrate to other regions, making it impossible to assess the feasibility of the function in this region.

The standard co-evolutionary differential evolution algorithm was run 10 times using the proposed adaptive penalty method with max_gen_A = 5, max_gen_B = 5 and max_cycles = 200. The results found are marked with a red "+" in Fig. 2. The coordinates lie within the intervals x ∈ [0, 0.02] and y ∈ [0, 0.04], with f(x, y) ∈ [3.84, 4], far away from the optimal solution. As stated previously, the main problem with this approach is the lack of information about the behavior of the function for a given x coordinate: having only a limited variety of individuals in population PopB, it is not possible to know the maximum feasible value that the function will assume for a given x coordinate. In the next section, we propose a sequential evolutionary algorithm with three populations.

Fig. 1. Plot of the unconstrained min–max problem P1.


Fig. 2. Plot of the constrained min–max problem P1 with the solutions found by the standard CDE approach (red "+") and the optimal points (green "X").

Fig. 3. Plot of the constrained min–max problem P1, showing the saddle points in more detail.


6. A novel evolutionary approach for constrained min–max

In the standard co-evolutionary algorithm, each individual of a population may have a limited number of options in the other population against which to find the maximum or the minimum. This may lead to wrong maximum or minimum values, such that a given coordinate can be labeled infeasible due to the lack of coordinate options in the other population. To work around this problem, some authors proposed a sequential evolutionary algorithm using two populations. The first population is responsible for evolving the x coordinates, while the second population is responsible for evolving the y coordinates. Using the bi-level notation, we can say that for each new x coordinate proposed by the leader over an evolutionary process, the follower reacts by choosing y coordinates through an evolutionary process too. Only after the response of the follower is the leader able to know how good the proposed solution is. Unlike CDE, in this approach the y coordinates are evaluated against only one coordinate x, and not against all the coordinates x. Below, this process is explained in detail. It is assumed that the functions are in the form min_x max_y f(x, y), if not stated otherwise.

In this proposal, three populations are used: (i) population PopA, which contains x coordinates; (ii) population PopB, which contains y coordinates; and (iii) population PopC, which also contains y coordinates but is used to save the solutions found for a given x from PopA and does not evolve, as will be explained in what follows. Fig. 4 shows the basic flowchart of the proposed algorithm.

Firstly, population PopA is initialized randomly, and the initial fitness of each individual x_i is calculated as the maximum value that the function f(x_i, y) admits. In order to know this maximum value, a maximization process of f(x_i, y) is carried out, leaving x_i frozen and evolving y. In this work, we used differential evolution to evolve the populations, but any evolutionary algorithm can be used, such as a genetic algorithm or particle swarm optimization. The fitness function is always evaluated as f(x_i, y). Having found the maximum of the function for that particular x_i, the function value at this point is returned as the fitness of the coordinate x_i. Also, the y coordinate that maximizes f(x_i, y) is stored in a third population, named PopC. This population PopC has the same number of individuals as population PopA. For each new value of x, population PopB must be restarted to guarantee population diversity and avoid stagnation at an optimal point relative to the last x coordinate evaluated.

After all individuals in PopA are initialized and their initial fitness calculated, population PopA is evolved. Each new candidate of population PopA has its fitness calculated in a way similar to the initialization of population PopA; that is, a maximization process in y must be carried out for each new candidate. If this new candidate is better than the previous one, i.e., it has a smaller maximum value of f(x, y) in the min–max form assumed here, it replaces the previous x coordinate in population PopA, and the corresponding y of this candidate replaces the y of the corresponding previous x in population PopC. When population PopA finishes its evolution process, it returns the individual with the smallest value of f(x, y) and the corresponding y coordinate stored in population PopC.

To speed up convergence in the maximization process for the new candidates in population PopA, methods from dynamic optimization using evolutionary algorithms are employed (Rohlfschagen, Lehre, & Yao, 2009; Ronnewinkel, Wilke, & Martinetz, 2000). In dynamic optimization, the fitness function, the constraints, and the number of variables of the candidate solution can change over time. Whenever the environment changes, an abrupt decrease in fitness occurs, since all the candidates were evolving in a different environment with different characteristics. Generally, the population is reinitialized to increase diversity and avoid stagnation relative to the previous environment. But, to avoid a complete restart, the knowledge obtained during previous evolutionary processes in similar environments is used.
Ramsey and Grefenstette (1993) propose a reinitialization of the population in the following manner:

- 50% of the best solutions of similar cases;
- 25% of individuals of the previous population;
- 12.5% initialized according to a standard strategy, e.g. randomly;
- 12.5% initialized with exploration strategies.


Fig. 4. Basic flowchart of the proposed algorithm.

By using these strategies, the adaptation speed to the new environment increases considerably, according to the results shown in the next section. In the context of min–max problems, it can be said that the environment changes each time the x coordinate changes: for each new x, the function assumes a different behavior in y. Thus, similar cases are those in which the new x coordinates are close to x coordinates already known. This proximity is measured using the Euclidean distance from the new x candidate to the individuals of population PopA. Those with the closest distances to the new candidate are considered similar, and their respective y coordinates in population PopC are copied to population PopB. Notice that if a new candidate x_new is very close to an existing individual x_i, the coordinate y_new that maximizes f(x_new, y) should be close to the individual y_i that maximizes f(x_i, y). So, it is expected that population PopB reaches the optimal value faster compared with the time spent to reach the optimum when all individuals are started randomly. The copy of individuals from PopC to PopB is illustrated in Fig. 5.

Copies of 1/10, 1/7, 1/6, 1/5, 1/4, 1/3 and 1/2 of the individuals of population PopC to population PopB were tested. The individuals

Fig. 5. Copy of individuals from PopC to PopB.

in population PopC chosen to be copied to population PopB were those whose corresponding x in population PopA was closest to the new candidate x_new. It was noticed that when too many individuals were copied, the population tended to get stuck in local optima, while when few individuals were copied, the population tended to take more time to reach the optima. By means of an in-depth simulation study, all the copy fractions described previously were tested,


and the best configuration was copying 1/5 of population PopC to population PopB.

In Algorithm 3, the pseudo-code of the proposed evolutionary algorithm for PopA is described, while Algorithm 4 describes the proposed evolutionary algorithm for PopB. There are few differences between the two algorithms. In the algorithm for PopB, the fitness of each individual is calculated by f(x, y), while in the algorithm for PopA the fitness of each individual is copied from the best individual of PopB. Furthermore, in PopB the individuals are randomly initialized and some of them are replaced by individuals from PopC, provided by PopA; in the algorithm for PopA, all the individuals are randomly initialized. A sketch of the copy scheme is given below.
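As an illustration of the PopC-to-PopB copy scheme (our own function and variable names; the populations are assumed to be NumPy arrays), the y vectors stored in PopC whose corresponding x in PopA are closest, in Euclidean distance, to the new candidate replace part of a randomly restarted PopB:

```python
import numpy as np

def seed_popb_from_popc(pop_a, pop_c, x_new, n_best, n_popb, low, high, rng):
    """Restart PopB randomly, then copy the y's of the n_best similar cases."""
    dists = np.linalg.norm(pop_a - x_new, axis=1)  # distance of each x to x_new
    nearest = np.argsort(dists)[:n_best]           # indices of the closest x's
    pop_b = rng.uniform(low, high, size=(n_popb, pop_c.shape[1]))
    pop_b[:n_best] = pop_c[nearest]                # copy similar-case y vectors
    return pop_b
```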

Algorithm 3. Proposed evolutionary algorithm for PopA

1: Initialize PopA randomly
2: for i = 1 to nPopA do
3:   Optimize PopB passing as parameters (PopA(i), NULL, 0)
4:   Put in PopC(i) the best individual of PopB
5:   Assign the fitness of PopA(i) as the fitness of the best individual of PopB
6: end for
7: Calculate the penalized fitness (final fitness) of all individuals according to Eq. (18)
8: Find the best individual of PopA
9: Set lastBestChange = 0
10: Set it = 0
11: while it < iterations_max and it − lastBestChange < maxStagnation do
12:   for i = 1 to nPopA do
13:     Create a new individual (xnew) using DE operators (crossover and mutation)
14:     for j = 1 to nPopA do
15:       Calculate the distance between PopA(j) and the candidate xnew
16:     end for
17:     Sort the distances calculated in the previous step in ascending order
18:     Put in vector bestPopC the nBest individuals of PopC that correspond to the individuals in PopA closest to the new candidate xnew, using the sorted distances calculated in the previous step
19:     Optimize PopB passing as parameters (xnew, bestPopC, nBest)
20:     Put in the PopC entry of xnew the best individual of PopB
21:     Assign the fitness of xnew as the fitness of the best individual of PopB
22:     Calculate the penalized fitness (final fitness) of PopA(i) and xnew according to Eq. (18)
23:     if xnew is better than PopA(i) then
24:       Copy the PopC entry of xnew to PopC(i)
25:       Copy xnew to PopA(i)
26:       Assign the fitness of PopA(i) as the fitness of xnew
27:       Find the best individual of PopA
28:       if the best individual has changed then
29:         Set lastBestChange = it
30:       end if
31:     end if
32:   end for
33:   Set it = it + 1
34: end while
35: Return the best individual in PopA and the corresponding y coordinates in PopC

Algorithm 4. Proposed evolutionary algorithm for PopB

1: Input: xnew, bestPopC, nBest
2: Initialize PopB randomly
3: if nBest > 0 then
4:   for i = 1 to nBest do
5:     PopB(i) = bestPopC(i)
6:   end for
7: end if
8: for i = 1 to nPopB do
9:   Calculate f(xnew, PopB(i))
10: end for
11: Calculate the penalized fitness (final fitness) of all individuals according to Eq. (18)
12: Find the best individual of PopB
13: Set lastBestChange = 0
14: Set it = 0
15: while it < iterations_max and it − lastBestChange < maxStagnation do
16:   for i = 1 to nPopB do
17:     Create a new individual (ynew) using DE operators (crossover and mutation)
18:     Calculate the fitness of ynew as f(xnew, ynew)
19:     Calculate the penalized fitness (final fitness) of PopB(i) and ynew according to Eq. (18)
20:     if ynew is better than PopB(i) then
21:       Assign PopB(i) as ynew
22:       Assign the fitness of PopB(i) as the fitness of ynew
23:       Find the best individual of PopB
24:       if the best individual has changed then
25:         Set lastBestChange = it
26:       end if
27:     end if
28:   end for
29:   Set it = it + 1
30: end while
31: Return the best individual in PopB and its fitness

7. Simulation results

7.1. Benchmark problems

Four benchmark problems of constrained min–max optimization have been used to investigate the performance of the proposed algorithm. The problem P1 (Sainz et al., 2008) is described as a min–max problem:

\[
\min_x \max_y f(x, y) = (\cos y + \cos(2y + x))^2
\]
\[
\text{subject to} \quad g_1(x, y) = y - x(x + 6.28) \le 0, \quad g_2(x, y) = y - x(x - 6.28) \le 0 \qquad (22)
\]

with $x, y \in [-3.14, 3.14]$. The points $(x_1^*, y_1^*) = (-0.437082, -2.553833)$ and $(x_2^*, y_2^*) = (-0.437082, -3.14)$ are the solutions of problem P1 (Sainz et al., 2008). At both points the function assumes the approximate value 0.0085865.

The problem P2 (Sainz et al., 2008) is described as a min–max problem:

\[
\min_x \max_y f(x, y) = x^2 + y^2 + 2xy - 20x - 20y + 100
\]
\[
\text{subject to} \quad g_1(x, y) = -(x - 5)^2 - (y - 3)^2 + 4 \le 0, \quad g_2(x, y) = (x - 5)^2 + (y - 3)^2 - 16 \le 0 \qquad (23)
\]


with $x \in [0, 6]$ and $y \in [2, 8]$. The points $(x_1^*, y_1^*) = (4.143, 4.807)$ and $(x_2^*, y_2^*) = (4.143, 6.907)$ are the solutions of problem P2 (Sainz et al., 2008). At both points the function assumes the approximate value 1.10255.

The problem P3 (Lu et al., 2008) is described as a min–max problem:

\[
\min_x \max_y f(x, y) = \sin^2 x - x \cos y + 2 \sin x - \cos^2 y + y - 1
\]
\[
\text{subject to} \quad g_1(x, y) = x^2 + y^2 \ge 25 \qquad (24)
\]

with $x, y \in [-5, 5]$. The point $(x^*, y^*) = (-0.88734, 5)$ is the solution of problem P3 (Lu et al., 2008). The function assumes the approximate value 3.22169.

The problem P4 (Liu, 1998) is described as a max–min problem:

\[
\max_x \min_y f(x, y) = \frac{(x_1 + y_1)(x_2 + y_2)}{1 + x_1 y_1 + x_2 y_2}
\]
\[
\text{subject to} \quad g_1(x, y) = x_1^2 + x_2^2 \le 100, \quad g_2(x, y) = y_1 \le x_1, \quad g_3(x, y) = y_2 \le x_2 \qquad (25)
\]

with $x, y \in [0, 10]$. The point $x^* = (7.0854, 7.0291)$, $y^* = (7.0854, 0.0000)$ is the solution of problem P4 (Liu, 1998). The function assumes the approximate value 1.9454 at that point. Liu (1998) reported the value $f(x^*, y^*) = 1.9760$ instead of 1.9454; we computed the function at that point with several tools and found $f(x^*, y^*) = 1.9454$, which is the value adopted as the optimum for this function.
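The discrepancy with Liu (1998) is straightforward to check numerically; evaluating the objective of Eq. (25) at the reported solution (the function name is ours) gives approximately 1.9454:

```python
def f_p4(x, y):
    """Objective of problem P4, Eq. (25)."""
    return (x[0] + y[0]) * (x[1] + y[1]) / (1.0 + x[0] * y[0] + x[1] * y[1])

print(f_p4((7.0854, 7.0291), (7.0854, 0.0)))  # ~1.9454, not 1.9760
```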

7.2. Experimental settings

For the benchmark problems studied, the population sizes were set to PopA = PopB = PopC = 30. The individuals were randomly initialized within the boundaries for each run according to a uniform probability distribution. Each experiment was run 50 times. Each run terminates when there is no fitness improvement for 20 consecutive iterations or when the algorithm reaches the maximum number of 200 iterations. The crossover rate used was Cr = 0.9.

7.3. Results and discussions

Figs. 6–8 show the optimal saddle points and the solutions found for problems P1, P2 and P3, respectively. The points marked with a red "+" were calculated with the DEGL/SAW algorithm with copy of individuals from PopC to PopB, and the optimal points are marked with a green "X". Problem P4 is not shown because it has more than two dimensions, which makes its visualization impossible.

Tables 2–5 show the results obtained for the benchmark problems investigated in this study. The column "% successful runs" indicates the percentage of executions in which the results lie within 2% of the known optima. DEGL/SAW presents better results than DE/rand/1/bin for all settings investigated. It can be noticed that, independently of the mutation scheme used, i.e., DE/rand/1/bin or DEGL/SAW, the best results were obtained by copying individuals from PopC to PopB, which shows the suitability of the proposed approach using three populations with copy of individuals from PopC to PopB. Regarding the number of iterations, DEGL/SAW with copy of individuals between populations speeds up convergence on all four benchmarks, whereas for DE/rand/1/bin the copy of individuals between populations speeds up convergence on only one benchmark.

Fig. 6. Saddle points found for problem P1.
Fig. 7. Saddle points found for problem P2.
Fig. 8. Saddle points found for problem P3.

Table 2
Results using DE/rand/1/bin without copy of individuals from PopC to PopB.

Problem  Best      Mean      Std. dev.  Worst     % successful runs  Fitness evaluations (mean)  xbest               ybest
P1       0.007739  0.007211  0.00030    0.006347  0                  2191048                     -0.426693           -3.14
P2       1.07966   1.01390   0.05073    0.83911   0                  1682136                     4.13395             6.90512
P3       -6.72037  -6.74062  0.01456    -6.77831  0                  745336                      -1.024499           -4.893915
P4       1.96654   1.97197   0.00249    1.97605   100                2635277                     (6.87793, 6.84894)  (0, 6.84894)

Table 3
Results using DE/rand/1/bin with copy of individuals from PopC to PopB.

Problem  Best      Mean      Std. dev.  Worst     % successful runs  Fitness evaluations (mean)  xbest               ybest
P1       0.008153  0.006482  0.00065    0.005464  0                  2882040                     -0.431414           -3.14
P2       1.10246   1.06149   0.03805    0.97254   50                 5769343                     4.1429              6.90709
P3       3.22169   1.60176   3.53       -6.5756   82                 726396                      -0.887343           5
P4       1.95530   1.96836   0.00273    1.97765   100                3921621                     (6.22741, 6.28532)  (6.22741, 0)

Table 4
Results using DEGL/SAW without copy of individuals from PopC to PopB.

Problem  Best      Mean      Std. dev.  Worst     % successful runs  Fitness evaluations (mean)  xbest               ybest
P1       0.008582  0.008318  0.00018    0.007833  36                 1828430                     -0.437016           -3.14
P2       1.10016   1.0766    0.01106    1.02945   32                 2759875                     4.14294             6.90594
P3       3.22169   3.22169   0          3.22169   100                1579641                     -0.887344           5
P4       1.95842   1.96820   0.00341    1.97277   100                4010104                     (6.97788, 6.97303)  (6.97763, 0)

Table 5
Results using DEGL/SAW with copy of individuals from PopC to PopB.

Problem  Best      Mean      Std. dev.  Worst      % successful runs  Fitness evaluations (mean)  xbest               ybest
P1       0.008586  0.008386  0.00015    0.0080180  40                 1700507                     -0.437077           -2.55384
P2       1.10236   1.09869   0.00353    1.07865    100                1593933                     4.14297             6.90697
P3       3.22169   3.22169   0          3.22169    100                1573609                     -0.887343           5
P4       1.9679    1.96373   0.00370    1.96518    100                3248724                     (7.02569, 6.99757)  (0, 6.99737)

8. Conclusions

In this paper, a sequential differential evolution approach has been proposed to solve constrained min–max optimization problems, using an adaptive penalty approach to handle the constraints. The novel algorithm consists of three populations, PopA, PopB and PopC, with a scheme of copying individuals to accelerate convergence. In the first case, no individuals are copied from PopC to PopB, whereas in the second case 1/5 of the individuals are copied from PopC to PopB. The performance of the algorithm has been tested on four benchmark functions. The simulation results showed that the algorithm with copy of individuals from PopC to PopB using the neighborhood-based mutation (DEGL/SAW) presented the best results: the percentage of successful runs for the problems P2, P3 and P4 was 100%. The inferior performance on problem P1 is due to rounding of the small value of the objective function. As future work, the proposed algorithm might be used with other constraint-handling methods, and its parallel implementation may also be investigated.

References

Barbosa, H. J. C. (1996). A genetic algorithm for min–max problems. In Proceedings of the 1st international conference on evolutionary computation and its applications (pp. 99–109). Moscow, Russia.
Barbosa, H. J. C. (1999). A coevolutionary genetic algorithm for constrained optimization. In Proceedings of the congress on evolutionary computation (CEC99) (pp. 1605–1611). Washington, DC, USA.
Bracken, J., & McGill, T. (1973). Mathematical programs with optimization problems in the constraints. Operations Research, 21, 37–44.
Brest, J., Boskovic, B., & Zumer, V. (2010). An improved self-adaptive differential evolution algorithm in single objective constrained real-parameter optimization. In Proceedings of the IEEE congress on evolutionary computation (CEC 2010) (pp. 1073–1080).
Brest, J., Zumer, V., & Maucec, M. S. (2006). Self-adaptive differential evolution algorithm in constrained real-parameter optimization. In Proceedings of the IEEE congress on evolutionary computation (CEC 2006) (pp. 215–222).
Cramer, A., Sudhof, S., & Zivi, E. (2009). Evolutionary algorithms for minimax problems in robust design. IEEE Transactions on Evolutionary Computation, 13, 444–453.
Das, S., Abraham, A., Chakraborty, U. K., & Konar, A. (2009). Differential evolution using a neighborhood-based mutation operator. IEEE Transactions on Evolutionary Computation, 13, 526–553.
Hillis, W. D. (1990). Coevolving parasites improve simulated evolution as an optimization procedure. Physica D, 42, 228–234.
Huang, F. Z., Wang, L., & He, Q. (2007). An effective co-evolutionary differential evolution for constrained optimization. Applied Mathematics and Computation, 186, 340–356.
Huang, V. L., Qin, A. K., & Suganthan, P. N. (2006). Self-adaptive differential evolution algorithm for constrained real-parameter optimization. Technical report, Nanyang Technological University.
Krohling, R. A., & Coelho, L. S. (2006a). Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 36, 1407–1416.
Krohling, R. A., & Coelho, L. S. (2006b). PSO-E: Particle swarm with exponential distribution (pp. 5577–5582).
Kuo, R. J., & Huang, C. C. (2009). Application of particle swarm optimization algorithm for solving bi-level linear programming problem. Computers & Mathematics with Applications, 58, 678–685.
Laskari, E. C., Parsopoulos, K. E., & Vrahatis, M. N. (2002). Particle swarm optimization for minimax problems. In Proceedings of the IEEE 2002 congress on evolutionary computation (pp. 1582–1587).
Liu, B. (1998). Stackelberg–Nash equilibrium for multilevel programming with multiple followers using genetic algorithms. Computers & Mathematics with Applications, 36, 79–89.
Lu, B., Cao, Y., Yuan, M., & Zhou, J. (2008). Reference variable methods of solving min–max optimization problems. Journal of Global Optimization, 42, 1–21.
Mallipeddi, R., & Suganthan, P. N. (2010a). Differential evolution with ensemble of constraint handling techniques for solving CEC 2010 benchmark problems. In Proceedings of the IEEE congress on evolutionary computation (CEC 2010) (pp. 1907–1914).
Mallipeddi, R., & Suganthan, P. N. (2010b). Problem definition and evaluation criteria for the CEC 2010 competition and special session on single objective constrained real-parameter optimization. Technical report, Nanyang Technological University, Singapore.
Paredis, J. (1994). Steps towards coevolutionary classification neural networks. In Artificial life IV (pp. 359–365). MIT Press.
Ramsey, C. L., & Grefenstette, J. J. (1993). Case-based initialization of genetic algorithms. In Proceedings of the 5th international conference on genetic algorithms (pp. 84–91). San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Rapoport, A., & Boebel, R. B. (1992). Mixed strategies in strictly competitive games: A further test of the minimax hypothesis. Games and Economic Behavior, 4, 261–283.
Rohlfschagen, P., Lehre, P. K., & Yao, X. (2009). Dynamic evolutionary optimization: An analysis of frequency and magnitude of change. In Proceedings of the 11th annual conference on genetic and evolutionary computation (pp. 1713–1720).
Ronnewinkel, C., Wilke, C. O., & Martinetz, T. (2000). Genetic algorithms in time-dependent environments. In Theoretical aspects of evolutionary computing (pp. 263–288). Springer.
Rosin, C. D., & Belew, R. K. (1995). Methods for competitive co-evolution: Finding opponents worth beating. In Proceedings of the sixth international conference on genetic algorithms (pp. 373–380). Morgan Kaufmann.
Rosin, C. D., & Belew, R. K. (1996). New methods for competitive coevolution. Evolutionary Computation, 5, 1–29.
Sainz, M. Á., Armengol, P. H. J., & Vehí, J. (2008). Continuous minimax optimization using modal intervals. Journal of Mathematical Analysis and Applications, 339, 18–30.
Shi, Y., & Krohling, R. A. (2002). Co-evolutionary particle swarm optimization to solving min–max problems. In Proceedings of the congress on evolutionary computation (CEC 2002) (pp. 1682–1687). Hawaii, USA.
Storn, R., & Price, K. (1997). Differential evolution – A simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11, 341–359.
Tahk, M. J., & Sun, B. C. (2000). Coevolutionary augmented Lagrangian methods for constrained optimization. IEEE Transactions on Evolutionary Computation, 4, 114–124.
Takahama, T., & Sakai, S. (2010a). Constrained optimization by the epsilon constrained differential evolution with an archive and gradient-based mutation. In Proceedings of the IEEE congress on evolutionary computation (CEC 2010) (pp. 1680–1688).
Takahama, T., & Sakai, S. (2010b). Efficient constrained optimization by the epsilon constrained adaptive differential evolution. In Proceedings of the IEEE congress on evolutionary computation (CEC 2010) (pp. 2052–2059).
Tasgetiren, M. F., Mallipeddi, P. N. S. Q. P. R., & Sarman, S. (2010). An ensemble of differential evolution algorithms for constrained function optimization. In Proceedings of the IEEE congress on evolutionary computation (CEC 2010) (pp. 967–974).
Tessema, B., & Yen, G. (2006). A self adaptive penalty function based algorithm for constrained optimization. In Proceedings of the IEEE congress on evolutionary computation (CEC 2006) (pp. 246–253).
Tessema, B., & Yen, G. (2009). An adaptive penalty formulation for constrained evolutionary optimization. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 39, 565–578.
Viñas, P. H. (2006). Quantified real constraint solving using modal intervals with applications to control. Ph.D. thesis, Universitat de Girona, Girona, Spain.