Operations Research Letters 32 (2004) 143 – 151
Solving linear fractional bilevel programs

Herminia I. Calvete a,∗, Carmen Galé b

a Dpto. de Métodos Estadísticos, F. de Ciencias, Edificio B, Universidad de Zaragoza, Pedro Cerbuna 12, Zaragoza 50009, Spain
b Dpto. de Métodos Estadísticos, CPS, Edificio Torres Quevedo, Universidad de Zaragoza, María de Luna 3, Zaragoza 50018, Spain

Received 10 March 2003; received in revised form 10 July 2003; accepted 22 July 2003

This research work has been supported by Spanish CONSID-DGA Contract P53/98.
∗ Corresponding author. Fax: +34-976-761115. E-mail addresses: [email protected] (H.I. Calvete), [email protected] (C. Galé).
Abstract

In this paper, we prove that an optimal solution to the linear fractional bilevel programming problem occurs at a boundary feasible extreme point. Hence, the Kth-best algorithm can be proposed to solve the problem. This property also applies to quasiconcave bilevel problems provided that the first level objective function is explicitly quasimonotonic.
© 2003 Elsevier B.V. All rights reserved.

Keywords: Bilevel; Fractional; Quasiconcave; Quasiconvex; Kth-best
1. Introduction

Bilevel programming involves two optimization problems where the constraint region of the first level problem is implicitly determined by another optimization problem. It has been applied to decentralized planning problems involving a decision process with a hierarchical structure. In terms of modeling, bilevel problems are programs in which a subset of the variables is constrained to be an optimal solution of another problem parameterized by the remaining variables. The second level decision maker optimizes his objective function given the parameters chosen by the first level decision maker. The latter, in turn, with complete information on the possible reactions of the second level decision maker, selects the parameters
so as to optimize his own objective function. Bilevel problems can be formulated as

min {f1(x1, x2): (x1, x2) ∈ S}, where x2 ∈ arg min {f2(x1, y): y ∈ S(x1)},   (1)

where x1 ∈ R^n1 and x2 ∈ R^n2 are the variables controlled by the first level and the second level decision maker, respectively; f1, f2: R^n → R, n = n1 + n2; S ⊂ R^n defines the common constraint region; and S(x1) = {x2 ∈ R^n2: (x1, x2) ∈ S}. Let S1 be the projection of S onto R^n1. For each x1 ∈ S1, the second level decision maker solves problem (2):

min f2(x1, x2)
s.t. x2 ∈ S(x1).   (2)

The feasible region of the first level decision maker, called the inducible region IR, is implicitly defined by the second level optimization problem:

IR = {(x1, x2*): x1 ∈ S1, x2* ∈ M(x1)},

where M(x1) denotes the set of optimal solutions to (2). We assume that S is not empty and that, for all
decisions taken by the first level decision maker, the second level decision maker has some room to respond, i.e. M(x1) ≠ ∅.

The bilevel programming problem (1) is a nonconvex optimization problem that has received increasing attention in the literature (see [2,8,13] and the references therein). One of its main features is that, unlike general mathematical programs, the bilevel problem may not possess a solution even when f1 and f2 are continuous and S is compact. In particular, difficulties may arise when M(x1) is not single-valued for all permissible x1 [2–4,8,13]. Different approaches have been proposed in the literature to make sure that the bilevel problem is well posed. The most common one is to assume that, for each value of the first level variables x1, there is a unique solution to the second level problem, i.e., the set M(x1) is a singleton for all x1 ∈ S1. Other approaches focus on the way of selecting x2* ∈ M(x1), in order to evaluate f1(x1, x2), when M(x1) is not a singleton. Among the rules that have been proposed [8], it is worth mentioning the optimistic or weak approach and the pessimistic or strong approach. The first one assumes that the first level decision maker is able to influence the second level decision maker so that the latter always selects the variables x2 which provide the best value of f1. Thus, the first level decision maker has to solve the problem min{φo(x1): x1 ∈ S1}, where φo(x1) = min{f1(x1, x2): x2 ∈ M(x1)}. In the pessimistic approach, the first level decision maker behaves as though the second level decision maker always selected the optimal decision which gives the worst value of f1. This leads to the problem min{φp(x1): x1 ∈ S1}, where φp(x1) = max{f1(x1, x2): x2 ∈ M(x1)}. Finally, other approaches consider a local reduction of the problem [9,14].

In this paper, the linear fractional bilevel programming (LFBP) problem is considered, in which both objective functions are linear fractional and S is a polyhedron, which is assumed to be nonempty and bounded. Using the common notation in bilevel programming, the LFBP problem can be written as follows:
min f1(x1, x2) = (α1 + c11 x1 + c12 x2)/(β1 + d11 x1 + d12 x2),
where x2 solves
min f2(x1, x2) = (α2 + c21 x1 + c22 x2)/(β2 + d21 x1 + d22 x2)   (3)
s.t. (x1, x2) ∈ S,
where, for i, j ∈ {1, 2}, cij and dij are vectors of conformable dimensions, and αi, βi are scalars. We assume that βi + di1 x1 + di2 x2 > 0, i = 1, 2, for all (x1, x2) ∈ S. If this is not so, it suffices to consider the linear fractional objective function written as −(αi + ci1 x1 + ci2 x2)/(−(βi + di1 x1 + di2 x2)). Moreover, it is also assumed that M(x1) is a singleton for all x1 ∈ S1.

Fractional programming with a single level of decision has received remarkable attention in the literature [1]. It is worth mentioning that objective functions which are ratios frequently appear, for instance, when an efficiency measure of a system is to be optimized or when approaching a stochastic programming problem.

In this paper, we give a geometrical characterization of the optimal solution to the LFBP problem in terms of what is called a boundary feasible extreme point. The result extends the characterization proved by Liu and Hart [11] for the linear bilevel programming problem. This property is the key to concluding that the Kth-best algorithm can be used to solve the LFBP problem.

The paper is organized as follows. Section 2 provides the main theoretical result on optimality. Furthermore, an example is given to illustrate the problems that can be caused by second level problems having multiple optima. In Section 3 the Kth-best algorithm is proposed to solve the problem and a formal proof of its correctness is given. An example is included to illustrate its application. Finally, Section 4 concludes the paper with final remarks on more general bilevel problems for which the characterization of the optimal solution is still valid and the Kth-best algorithm can be applied to solve them.
2. Theoretical properties

Before proving the main result on the optimal solution of problem (3), we list some preliminary definitions and results.

Definition 1 (Danao [7]). Let f be a real-valued function defined on a convex subset D of R^n.
(1) f is quasiconcave on D iff d1, d2 ∈ D, λ ∈ [0, 1], and f(d1) ≤ f(d2) imply f(d1) ≤ f[(1 − λ) d1 + λ d2]. The function f is quasiconvex iff −f is quasiconcave.
(2) f is semistrictly quasiconcave on D iff d1, d2 ∈ D, d1 ≠ d2, λ ∈ (0, 1), and f(d1) < f(d2) imply f(d1) < f[(1 − λ) d1 + λ d2]. The function f is semistrictly quasiconvex iff −f is semistrictly quasiconcave.
(3) f is explicitly quasiconcave on D iff it is quasiconcave and semistrictly quasiconcave on D. The function f is explicitly quasiconvex iff −f is explicitly quasiconcave.
(4) f is explicitly quasimonotonic on D iff it is explicitly quasiconcave and explicitly quasiconvex on D.
Note that the linear fractional functions

fi(x1, x2) = (αi + ci1 x1 + ci2 x2)/(βi + di1 x1 + di2 x2),   i = 1, 2,

are explicitly quasimonotonic on S if βi + di1 x1 + di2 x2 ≠ 0 on S ([12, Theorem 3.53]); a small numerical check of this property is sketched after Definition 2 below. On the other hand, since f1 and f2 are quasiconcave and S is a nonempty and compact polyhedron, the LFBP problem is a particular case of the quasiconcave bilevel problem [5]. Hence:

(1) The feasible region of the LFBP problem consists of the union of connected faces of the polyhedron S. As a consequence, in general IR is a nonconvex set.
(2) There exists an extreme point of IR, thus an extreme point of the polyhedron S, which is an optimal solution of the LFBP problem.

Definition 2 (Liu and Hart [11]). A point (x1, x2) ∈ IR is a boundary feasible extreme point if there exists an edge E of S such that (x1, x2) is an extreme point of E, and the other extreme point of E is not an element of IR.
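To make Definition 1 and this quasimonotonicity property concrete, the following minimal Python sketch (an illustration of ours, assuming NumPy is available and using an arbitrary linear fractional function whose denominator is positive on the sampled box) checks the quasiconcavity and quasiconvexity inequalities of Definition 1 at randomly sampled points:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative data (not from the paper): numerator c@x + alpha, denominator d@x + beta,
# with the denominator positive on the sampled box [0, 10]^2.
c, alpha = np.array([1.0, 3.0]), 3.0
d, beta = np.array([1.0, 1.0]), 5.0

def f(x):
    return (c @ x + alpha) / (d @ x + beta)

for _ in range(10_000):
    p1, p2 = rng.uniform(0.0, 10.0, size=(2, 2))   # two points d1, d2 of the convex set D
    lam = rng.uniform()                            # lambda in [0, 1]
    mid = f((1 - lam) * p1 + lam * p2)
    lo, hi = sorted((f(p1), f(p2)))
    # Definition 1: quasiconcavity gives mid >= lo, quasiconvexity gives mid <= hi.
    assert lo - 1e-9 <= mid <= hi + 1e-9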
We are now in a position to characterize the optimal solution to the LFBP problem. To begin, let us consider the relaxed problem

min f1(x1, x2) = (α1 + c11 x1 + c12 x2)/(β1 + d11 x1 + d12 x2)   (4)
s.t. (x1, x2) ∈ S.
Note that f1 is a quasiconcave function and S is a nonempty and compact polyhedron, so that there is an extreme point of S which solves (4). If this point belongs to IR, then it is an optimal solution of the LFBP problem. In general, solving the relaxed problem will not yield an optimal solution of the bilevel problem, since the decision makers usually have conflicting objectives. In this case, to characterize more precisely the geometry of the optimal solution to the LFBP problem, we prove in the next theorem that it occurs at a boundary feasible extreme point.

Theorem 3. If there exists an extreme point of S not in IR which is an optimal solution of the relaxed problem (4), then there exists a boundary feasible extreme point that solves the LFBP problem.

Proof. As previously mentioned, there exists an extreme point of S which is an optimal solution of the LFBP problem. Let this point be (x̃1, x̃2) ∈ IR. If it is a boundary feasible extreme point the proof is complete. If this is not so, every extreme point adjacent to (x̃1, x̃2) is in IR and

f1(x̃1, x̃2) ≤ f1(x1, x2)   (5)

for every extreme point (x1, x2) adjacent to (x̃1, x̃2). Firstly, we prove that there must be an extreme point (x̂1, x̂2) adjacent to (x̃1, x̃2) such that

f1(x̂1, x̂2) = f1(x̃1, x̃2).   (6)

For this purpose let us consider the relaxed problem (4). Taking into account (5), (x̃1, x̃2) is a local extreme-minimum point of f1 in S. Since f1 is quasiconcave and explicitly quasiconvex on S, we can conclude that (x̃1, x̃2) is a global minimum of the relaxed problem (4) ([12, Theorem 5.13]), i.e.

f1(x̃1, x̃2) ≤ f1(x1, x2)   ∀(x1, x2) ∈ S.   (7)

By hypothesis, there exists an extreme point (ỹ1, ỹ2) ∈ S not in IR which is an optimal solution of problem (4). Thus f1(x̃1, x̃2) = f1(ỹ1, ỹ2). Notice that (ỹ1, ỹ2) cannot be adjacent to (x̃1, x̃2), since (x̃1, x̃2) is not a boundary feasible extreme point. Since f1 is continuous, quasiconvex and explicitly quasiconcave on S, the optimum set of problem (4) is the convex hull of some extreme points of S ([12, Theorem 5.21]), thus itself a polyhedron. Then, there
exists an edge path in the optimum set of problem (4) from (x̃1, x̃2) to (ỹ1, ỹ2). Hence, there must be an extreme point (x̂1, x̂2) adjacent to (x̃1, x̃2) belonging to the optimum set of problem (4), thus verifying (6). If (x̂1, x̂2) is a boundary feasible extreme point the proof is complete. If this is not so, we consider the extreme point (x̂1, x̂2) instead of (x̃1, x̃2) and repeat the same developments. Thus, we get an extreme point (x̄1, x̄2) adjacent to (x̂1, x̂2) verifying (6). Fig. 1 illustrates the process. If this new point is a boundary feasible extreme point the proof is complete. Otherwise, by repeating the process, since the number of extreme points of S is finite, a boundary feasible extreme point which solves the LFBP problem will eventually be reached in a finite number of steps.

[Fig. 1. Illustration of Theorem 3.]

Remark 4. The question remains as to what happens if M(x1) is not a singleton for all permissible x1. The following example is used to illustrate the problems caused by the existence of multiple optima when solving the second level problem for a given x1 ∈ S1. It shows that the inducible region is no longer formed by the union of faces of the polyhedron S. Moreover, the first level decision maker may not be able to reach his optimal decision without 'forcing' the decision of the second level decision maker. Let us consider the LFBP problem (8), in which x1 is the variable controlled by the first level decision maker and x2 is the variable controlled by the second level one:

min f1(x1, x2) = (x1 + 3x2 + 3)/(x1 + x2 + 5),
where x2 solves
min f2(x1, x2) = (−x1 + 2x2 + 7)/(x1 + x2 + 2)   (8)
s.t. (x1, x2) ∈ S,
where S = {(x1, x2) ∈ R^2: x1 + 2x2 ≤ 20, x1 + x2 ≤ 12, 2x1 + x2 ≤ 20, 3x1 − 4x2 ≤ 19, x1 − 4x2 ≤ 5, x1, x2 ≥ 0}. The common constraint region and the feasible region IR of the example are shown in Fig. 2.

[Fig. 2. Feasible region of example (8).]

Notice that for x1 = 1 the second level problem has multiple optima, M(1) = [0, 19/2]. This fact means that the inducible region does not consist of the union of faces of the polyhedron S. Moreover, the optimization problem of the first level decision maker is not well defined: to completely evaluate f1(1, x2) it is necessary to give a rule for selecting x2 ∈ M(1). The mapping of f1 is plotted in Fig. 3. Notice that the best value of the first objective function is f1 = 2/3, obtained when x1 = 1 and x2 = 0. However, the first level decision maker cannot force this value because the second level decision maker is indifferent among all x2 in the interval [0, 19/2]. If the optimistic approach is taken, the optimal solution to example (8) is therefore x1 = 1 and x2 = 0. Notice that this point is not an extreme point of the polyhedron S. However, if the pessimistic approach is used, then an optimal solution to the example does not exist.
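A quick numerical check (a sketch of ours in Python, assuming NumPy) makes the difficulty visible: for x1 = 1 the second level objective of (8) is constant on S(1) = [0, 19/2], so every feasible x2 is optimal, while f1 still varies with the choice of x2.

import numpy as np

# First and second level objectives of example (8).
def f1(x1, x2):
    return (x1 + 3 * x2 + 3) / (x1 + x2 + 5)

def f2(x1, x2):
    return (-x1 + 2 * x2 + 7) / (x1 + x2 + 2)

x2_grid = np.linspace(0.0, 9.5, 20)           # S(1) = M(1) = [0, 19/2] for x1 = 1
print(np.allclose(f2(1.0, x2_grid), 2.0))     # True: f2(1, .) is identically 2
print(f1(1.0, 0.0), f1(1.0, 9.5))             # f1 ranges from 2/3 to about 2.1 over M(1)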
[Fig. 3. Mappings of f1 and f̃1.]
On the other hand, if the first level objective function were

f̃1 = (−2x1 − x2 + 22)/(x1 + x2 + 1),

then the first level decision maker could reach his minimum f̃1 = 1/6, obtained when x1 = 9, since the second level problem given x1 = 9 has a unique optimal solution x2 = 2. Notice that in this case the optimal solution is a boundary feasible extreme point. The mapping of f̃1 is also plotted in Fig. 3.

Remark 5. It is well known that the Charnes and Cooper (C&C) transformation [6] allows us to reformulate a linear fractional programming (LFP) problem as a linear programming (LP) one. Hence, we wonder about the applicability of the C&C transformation to reformulate in a similar way the LFBP problem as a linear bilevel programming problem. With this motivation in mind, consider that S = {(x1, x2): A1 x1 + A2 x2 ≤ b, x1 ≥ 0, x2 ≥ 0}, where b is a vector and A1, A2 are matrices of conformable dimensions. For fixed x1 ∈ S1, let z = 1/(β2 + d21 x1 + d22 x2) and y2 = z x2. Then the second level decision maker has to solve the following LP problem:

min (α2 + c21 x1) z + c22 y2
s.t. A2 y2 − (b − A1 x1) z ≤ 0,   (9)
d22 y2 + (β2 + d21 x1) z = 1,
y2 ≥ 0, z ≥ 0.

By embedding this problem in the LFBP problem (3), we get:

min (α1 z + c11 x1 z + c12 y2)/(β1 z + d11 x1 z + d12 y2),
where y2, z solve
min (α2 + c21 x1) z + c22 y2
s.t. A2 y2 − (b − A1 x1) z ≤ 0,
d22 y2 + (β2 + d21 x1) z = 1,
x1 ≥ 0, y2 ≥ 0, z ≥ 0.

Notice that the first level objective function contains the nonlinear term x1 z. In this case it definitely makes no sense to consider y1 = x1 z as a single variable, because x1 is a variable controlled by the first level decision maker while z is controlled by the second level one. Since the reformulated problem is apparently more complicated to solve than the original one, it does not seem very tempting to use the C&C transformation directly in the process of solving the LFBP problem. In the next section we will see that it can be used to solve the LFP problems arising in successive iterations of the Kth-best algorithm.
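The single-level transformation is easy to automate. The following Python sketch (an illustration of ours, assuming NumPy and SciPy are available; the helper name solve_lfp is not from the paper) converts an LFP with a positive denominator on a nonempty, bounded feasible region into the equivalent LP and recovers x = y/z, which is all that is needed at each iteration of the Kth-best algorithm of Section 3.

import numpy as np
from scipy.optimize import linprog

def solve_lfp(c, alpha, d, beta, A_eq=None, b_eq=None, A_ub=None, b_ub=None):
    """Minimize (c@x + alpha)/(d@x + beta) subject to A_eq x = b_eq, A_ub x <= b_ub, x >= 0,
    assuming the denominator is positive on the nonempty, bounded feasible region.
    Charnes-Cooper: with y = z*x and z = 1/(d@x + beta) the LFP becomes an LP in (y, z)."""
    n = len(c)
    obj = np.append(c, alpha)                    # minimize c@y + alpha*z
    Aeq = [np.append(d, beta)]                   # d@y + beta*z = 1
    beq = [1.0]
    if A_eq is not None:
        Aeq += [np.append(row, -rhs) for row, rhs in zip(A_eq, b_eq)]   # A_eq y - b_eq z = 0
        beq += [0.0] * len(b_eq)
    Aub, bub = None, None
    if A_ub is not None:
        Aub = [np.append(row, -rhs) for row, rhs in zip(A_ub, b_ub)]    # A_ub y - b_ub z <= 0
        bub = [0.0] * len(b_ub)
    res = linprog(obj, A_ub=Aub, b_ub=bub, A_eq=Aeq, b_eq=beq, method="highs")
    # linprog's default bounds already impose y >= 0 and z >= 0.
    if not res.success:
        raise ValueError(res.message)
    y, z = res.x[:n], res.x[n]
    return y / z, res.fun                        # optimal x and the optimal value of the ratio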
3. The Kth-best algorithm

Bearing in mind that there is an extreme point of S which solves the LFBP problem, an examination
of all extreme points of the polyhedron S constitutes an algorithm that will find the solution of the LFBP problem in a finite number of steps. This is unsatisfactory, however, since the number of extreme points of S is, in general, very large. Nevertheless, in light of Theorem 3, we can propose the Kth-best algorithm, a more efficient enumeration scheme, for solving the LFBP problem. This algorithm was first proposed by Bialas and Karwan [4] for solving the linear bilevel programming problem. According to this algorithm, which is described in Fig. 4, an optimal solution to the relaxed problem (4), (x1[1], x2[1]), is first considered. If this is a point of IR, then it is an optimal solution of the LFBP problem. If this is not so, the set W[1] of its adjacent extreme points is considered. Then, the extreme point in W = W[1] which provides the best value of f1 is selected and tested for membership of IR. If it belongs to IR, the algorithm finishes. If not, the point is removed from W and its adjacent extreme points with a worse value of f1 are added to W. The algorithm continues by selecting the best extreme point in W with respect to f1 and repeating the process.

[Fig. 4. Kth-best algorithm.]
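A minimal sketch of this enumeration scheme in Python (an illustration of ours; relaxed_vertex, adjacent_vertices, f1 and in_IR are hypothetical callables standing for routines that solve (4) to an extreme point, enumerate the extreme points of S adjacent to a given one, evaluate the first level objective and test membership of IR, respectively; extreme points are assumed to be hashable, e.g. tuples):

def kth_best(relaxed_vertex, adjacent_vertices, f1, in_IR):
    """Kth-best enumeration sketch: examine extreme points of S in
    nondecreasing order of f1 until one belongs to the inducible region."""
    v = relaxed_vertex()             # (x1[1], x2[1]): extreme point of S minimizing f1
    T = set()                        # extreme points already examined and discarded
    W = set()                        # candidate adjacent extreme points not yet examined
    while True:
        if in_IR(v):
            return v                 # best boundary feasible extreme point: optimal for the LFBP problem
        T.add(v)
        W.update(w for w in adjacent_vertices(v) if w not in T)
        if not W:
            return None              # cannot happen when IR is nonempty
        v = min(W, key=f1)           # next best candidate with respect to f1
        W.remove(v)

Each call to in_IR amounts to solving the second level problem for the x1-part of the candidate, which, by Remark 5, reduces to a linear program via the C&C transformation.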
Next we give a formal proof of the correctness of the algorithm. For this purpose, let (x1[1], x2[1]), (x1[2], x2[2]), ..., (x1[m], x2[m]) denote the m ordered extreme point solutions to the relaxed problem (4), i.e.

f1(x1[i], x2[i]) ≤ f1(x1[i+1], x2[i+1]),   i = 1, ..., m − 1.

We will prove that the (i+1)st best extreme point of S, (x1[i+1], x2[i+1]), is adjacent to (x1[1], x2[1]), or (x1[2], x2[2]), ..., or (x1[i], x2[i]). Hence, the algorithm successively computes the ordered sequence of extreme points, and it is clear that (x1[k], x2[k]) is a global optimum of the LFBP problem for k = min{i ∈ {1, ..., m}: (x1[i], x2[i]) ∈ IR}.

Theorem 6. Let (x̃1, x̃2) be an extreme point of S. There exists an edge path in S from (x̃1, x̃2) to (x1[1], x2[1]) such that the value of f1(x1, x2) is nonincreasing along it.

Proof. Assume for the time being that every extreme point (x1, x2) adjacent to (x̃1, x̃2) verifies

f1(x1, x2) ≥ f1(x̃1, x̃2).

Hence (x̃1, x̃2) is a local extreme-minimum point of f1 in S. Since f1 is quasiconcave and explicitly quasiconvex on S, then (x̃1, x̃2) is a global minimum of
the relaxed problem (4), i.e. f1(x̃1, x̃2) = f1(x1[1], x2[1]).

Therefore (x1[1], x2[1]) and (x̃1, x̃2) are extreme points of the optimum set of (4). Since f1 is continuous, quasiconvex and explicitly quasiconcave on S, this set is the convex hull of some extreme points of S. Then there exists an edge path in this polyhedron from (x̃1, x̃2) to (x1[1], x2[1]). Since all the points of the edge path belong to S and have the same value of f1, this is the edge path we are looking for.

Suppose now that there exists at least one extreme point (x̂1, x̂2) adjacent to (x̃1, x̃2) such that

f1(x̂1, x̂2) < f1(x̃1, x̃2).

Let us now consider (x̂1, x̂2) instead of (x̃1, x̃2) and repeat the former developments. Hence, either there exists an edge path P from (x̂1, x̂2) to (x1[1], x2[1]) for which all points have the same value of f1, and (x̃1, x̃2) − (x̂1, x̂2) − P is the required edge path, or there exists an extreme point (x̄1, x̄2) adjacent to (x̂1, x̂2) such that
f1(x̄1, x̄2) < f1(x̂1, x̂2).

Next we consider (x̄1, x̄2) and repeat the process. Since the number of extreme points of S is finite, eventually an edge path will be obtained along which the value of f1 is nonincreasing.

Theorem 7. The (k+1)st best extreme point of S, (x1[k+1], x2[k+1]), is adjacent to (x1[1], x2[1]), or (x1[2], x2[2]), ..., or (x1[k], x2[k]), k < m.

Proof. Let W[i] denote the set of adjacent extreme points of (x1[i], x2[i]). Let T = {(x1[1], x2[1]), (x1[2], x2[2]), ..., (x1[k], x2[k])} and W = (W[1] ∪ W[2] ∪ ··· ∪ W[k]) \ T. Let (y1, y2) ∈ W be such that

f1(y1, y2) = min{f1(w1, w2): (w1, w2) ∈ W}.

Let (x̂1, x̂2) be any extreme point of S such that (x̂1, x̂2) ∉ ⋃_{i=1,...,k} W[i]. Taking into account that any edge path in S from (x̂1, x̂2) to (x1[1], x2[1]) must contain at least a point of W as an intermediate point, and considering the edge path provided by Theorem 6, there exists (w̃1, w̃2) ∈ W such that

f1(x̂1, x̂2) ≥ f1(w̃1, w̃2) ≥ f1(y1, y2).

Since (y1, y2) minimizes the value of f1 over the set of extreme points of S excluding T, then (y1, y2) = (x1[k+1], x2[k+1]).

Theorem 8. The Kth-best algorithm solves the LFBP problem.

Proof. As a consequence of Theorem 7, the kth-best extreme point of the relaxed problem (4) is adjacent to either the 1st, 2nd, ..., or (k−1)th extreme point. Then, upon termination, the algorithm provides the best boundary feasible extreme point, i.e. the optimal solution to the LFBP problem.

As was previously pointed out, it is worth noting that, thanks to the C&C transformation, only linear problems need to be solved when applying the Kth-best algorithm to the LFBP problem.

Example. In order to illustrate the procedure we consider the following linear fractional bilevel problem:

min f1 = (1 + y1 − y2 + 2y4)/(8 − y1 − 2y3 + y4 + 5y5),
where (y3, ..., y8) solves
min f2 = (1 + y1 + y2 + 2y3 − y4 + y5)/(6 + 2y1 + y3 + y4 − 3y5)
s.t. −y3 + y4 + y5 + y6 = 1,
2y1 − y3 + 2y4 − 0.5y5 + y7 = 1,
2y2 + 2y3 − y4 − 0.5y5 + y8 = 1,
yi ≥ 0, i = 1, ..., 8.

The optimal solution of the relaxed problem (4) is (x1[1], x2[1]) = (0, 0.75, 0, 0, 1, 0, 1.5, 0). By fixing x1 = x1[1] = (y1, y2) = (0, 0.75), we get the following linear fractional problem corresponding to the second level:

min (1.75 + 2y3 − y4 + y5)/(6 + y3 + y4 − 3y5)
s.t. −y3 + y4 + y5 + y6 = 1,
−y3 + 2y4 − 0.5y5 + y7 = 1,
2y3 − y4 − 0.5y5 + y8 = −0.5,
yi ≥ 0, i = 3, ..., 8.
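As an illustration, this second level problem can be passed to the hypothetical solve_lfp helper sketched after Remark 5 (again an assumption of ours, not the authors' code); up to alternative optima, it should recover the solution reported next.

import numpy as np
# Uses the solve_lfp helper sketched after Remark 5.

# Second level problem for x1 = (y1, y2) = (0, 0.75); variables are (y3, ..., y8).
c, alpha = np.array([2.0, -1.0, 1.0, 0.0, 0.0, 0.0]), 1.75
d, beta = np.array([1.0, 1.0, -3.0, 0.0, 0.0, 0.0]), 6.0
A_eq = np.array([[-1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
                 [-1.0, 2.0, -0.5, 0.0, 1.0, 0.0],
                 [2.0, -1.0, -0.5, 0.0, 0.0, 1.0]])
b_eq = np.array([1.0, 1.0, -0.5])
x2_opt, f2_val = solve_lfp(c, alpha, d, beta, A_eq=A_eq, b_eq=b_eq)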
Table 1
Results of the Kth-best algorithm for the example

Iteration i = 1: (x1[1], x2[1]) = (0, 0.75, 0, 0, 1, 0, 1.5, 0) ∉ IR, f1 = 0.0192
W[1]: (0, 0, 1, 0, 2, 0, 3, 0), f1 = 0.0588; (0, 0, 0, 0, 1, 0, 1.5, 1.5), f1 = 0.0769; (0.75, 0.75, 0, 0, 1, 0, 0, 0), f1 = 0.0816; (0, 0.9, 0, 0.6, 0.4, 0, 0, 0), f1 = 0.1226; (0, 0.5, 0, 0, 0, 1, 1, 0), f1 = 0.2

Iteration i = 2: (x1[2], x2[2]) = (0, 0, 1, 0, 2, 0, 3, 0) ∉ IR, f1 = 0.0588
W[2]: (0, 0.75, 0, 0, 1, 0, 1.5, 0), f1 = 0.0192; (0, 0, 0, 0, 1, 0, 1.5, 1.5), f1 = 0.0769; (0, 0, 0.5, 0, 0, 1.5, 1.5, 0), f1 = 0.125; (1.5, 0, 1, 0, 2, 0, 0, 0), f1 = 0.1613; (0, 0, 1.5, 1.5, 1, 0, 0, 0), f1 = 0.32

Iteration i = 3: (x1[3], x2[3]) = (0, 0, 0, 0, 1, 0, 1.5, 1.5) ∉ IR, f1 = 0.0769
W[3]: (0, 0.75, 0, 0, 1, 0, 1.5, 0), f1 = 0.0192; (0, 0, 1, 0, 2, 0, 3, 0), f1 = 0.0588; (0, 0, 0, 0, 0, 1, 1, 1), f1 = 0.125; (0.75, 0, 0, 0, 1, 0, 0, 1.5), f1 = 0.1429; (0, 0, 0, 0.6, 0.4, 0, 0, 1.8), f1 = 0.2075

Iteration i = 4: (x1[4], x2[4]) = (0.75, 0.75, 0, 0, 1, 0, 0, 0) ∈ IR, f1 = 0.0816
Its optimal solution is x2* = (y3, ..., y8) = (0, 0.5, 0, 0.5, 0, 0). Hence (x1[1], x2[1]) ∉ IR. Notice that (x1[1], x2*) = (0, 0.75, 0, 0.5, 0, 0.5, 0, 0) ∈ IR, so that it provides an upper bound on the optimal value of f1 for the example. The adjacent extreme points of (x1[1], x2[1]) are given in Table 1. This table also shows the successive best extreme points computed and their adjacent extreme points. The optimal solution is reached at the fourth best extreme point.

4. The quasiconcave bilevel problem

It is worth pointing out that the proof of Theorem 3 is mainly based on the fact that the first level objective function is explicitly quasimonotonic. Hence, we can conclude that Theorem 3 remains valid for more general problems. Indeed, let us consider the quasiconcave bilevel programming problem, in which f1 and f2 are continuous functions; f1 is quasiconcave on S; f2 is quasiconcave on S(x1) for all x1 ∈ S1; S is a polyhedron, which is assumed to be nonempty and bounded; and M(x1) is single-valued for all x1 ∈ S1. This model includes,
as important particular cases, a wide class of bilevel problems whose objective functions are linear, fractional (ratios of concave nonnegative functions and convex strictly positive functions [1]) or multiplicative (the product of a set of concave functions, each strictly positive [10]). As noted previously, for this problem Calvete and Galé [5] proved that IR is formed by the union of connected faces of S. Hence, there exists an extreme point of the polyhedron S that solves it. Under the additional assumption that the first level objective function is explicitly quasimonotonic, the proof of Theorem 3 can be replicated step by step to show that there exists a boundary feasible extreme point that solves the quasiconcave problem. Notice that we do not require any additional assumption on the second level objective function, so this result is still valid for bilevel problems in which the first level objective function is linear or linear fractional and the second level objective function is linear, fractional or multiplicative. The same can be said with regard to the Kth-best algorithm. Under the mentioned assumptions, an optimal solution to the quasiconcave bilevel problem can be obtained by checking the best of the extreme points
adjacent to all previously analyzed extreme points, excluding these.

References

[1] M. Avriel, W.E. Diewert, S. Schaible, I. Zang, Generalized Concavity, Plenum Press, New York, London, 1988.
[2] J.F. Bard, Practical Bilevel Optimization. Algorithms and Applications, Kluwer Academic Publishers, Dordrecht, Boston, London, 1998.
[3] J.F. Bard, J.E. Falk, An explicit solution to the multilevel programming problem, Comput. Oper. Res. 9 (1) (1982) 77–100.
[4] W.F. Bialas, M.H. Karwan, Two-level linear programming, Manag. Sci. 30 (1984) 1004–1024.
[5] H.I. Calvete, C. Galé, On the quasiconcave bilevel programming problem, J. Optim. Theory Appl. 98 (3) (1998) 613–622.
[6] A. Charnes, W.W. Cooper, Programming with linear fractionals, Nav. Res. Logistics Q. 9 (1962) 181–186.
[7] R.A. Danao, Some properties of explicitly quasiconcave functions, J. Optim. Theory Appl. 74 (3) (1992) 457–468.
[8] S. Dempe, Foundations of Bilevel Programming, Kluwer Academic Publishers, Dordrecht, Boston, London, 2002.
[9] J.E. Falk, J. Liu, On bilevel programming, Part I: general nonlinear cases, Math. Programming 70 (1) (1995) 47–72.
[10] H. Konno, T. Kuno, Multiplicative programming problems, in: R. Horst, P.M. Pardalos (Eds.), Handbook of Global Optimization, Kluwer Academic Publishers, Dordrecht, 1995.
[11] Y.H. Liu, S.M. Hart, Characterizing an optimal solution to the linear bilevel programming problem, Eur. J. Oper. Res. 73 (1) (1994) 164–166.
[12] B. Martos, Nonlinear Programming. Theory and Methods, North-Holland Publishing Company, Amsterdam, 1975.
[13] K. Shimizu, Y. Ishizuka, J.F. Bard, Nondifferentiable and Two-level Mathematical Programming, Kluwer Academic Publishers, Boston, London, Dordrecht, 1997.
[14] O. Stein, G. Still, On generalized semi-infinite optimization and bilevel optimization, Eur. J. Oper. Res. 142 (3) (2002) 444–462.