A Linear bilevel programming algorithm based on bicriteria programming


Comput. Opns Res. Vol. 14, No. 2, pp. 173-179, 1987. Printed in Great Britain. Pergamon Journals Ltd.

GÜLSEREN ÜNLÜ*

STFA Holding Co., Altunizade, Istanbul, Turkey

(Received November 1984; revised June 1985)

Scope and Purpose: Although the bilevel programming problem has a significantly different structure from the bicriteria programming problem, a certain relationship exists between the two problems. At present, solution techniques for bicriteria programming generally require less computational effort than those for bilevel programming. Here a computationally efficient algorithm for bilevel programming, based on bicriteria programming, is presented.

Abstract: In this paper the relationship between bilevel and bicriteria programming is utilized to develop an algorithm for linear bilevel programming through the adaptation of a bicriteria programming algorithm proposed previously. Some computational results are also given.

INTRODUCTION

The linear bilevel programming problem (BLPP) is stated as follows:

    max_x  ax + by,  where y solves
    max_y  cx + dy                                             (1)
    subject to  Ax + By ≤ u,
                x, y ≥ 0.

For a fixed x, the inner problem reduces to maximizing dy over Q(x) = {y | By ≤ u - Ax, y ≥ 0}. Let P = {(x, y) | Ax + By ≤ u, x, y ≥ 0} denote the constraint region, and let Y(x) denote the set of optimal solutions of the inner problem for the given x.

Definitions of feasibility and optimality for the linear BLPP are then as follows:

Definition 1

A point (x, y) is called feasible if (x, y) ∈ P and y ∈ Y(x).

*Gülseren Ünlü (Kiziltan) is currently an investment analyst at STFA Holding Co., Turkey. She received her B.S. in electrical engineering and her M.S. and Ph.D. in industrial engineering from Boğaziçi University, Turkey. Previous papers have appeared in European Journal of Operational Research, R and D Management, Management Science and Energy.


Definition 2

A feasible point (x°, y°) is called optimal if

(i) ax° + by° is unique for all y° ∈ Y(x°),
(ii) ax° + by° ≥ ax + by for all feasible (x, y).

In this paper it is assumed that P and Q(x) are bounded and nonempty, and that condition (i) of Definition 2 holds. These assumptions are needed to ensure that the problem is well posed and has a solution [1, 2].

The linear BLPP has been studied by various researchers. A special case, where the inner objective function is the negative of the outer one, was studied by Falk [3], who gave an algorithm based on a combination of branch-and-bound and linear programming techniques. Candler and Norton [4] proposed an algorithm for the linear BLPP, but since the algorithm was not equipped to deal with the inherent nonconvexity of the problem, it failed to find the global optimum. Bard [1] showed that the constraint region of the outer problem is a piecewise linear region composed of the edges and hypersurfaces of P. Another basic result is that the global optimum occurs at one of the vertices of P [1, 5, 6]. This suggested the use of extreme-point search techniques. Candler and Townsley [6] developed an implicit enumeration scheme which uses global necessary-condition information generated in the search for locally optimal solutions. Bialas and Karwan [5] gave the "Kth-best" algorithm, which requires finding the K*th-best extreme-point solution to max{ax + by | (x, y) ∈ P}, where K* is the minimum index such that the associated solution is feasible for the linear BLPP. A different approach involves replacing the inner problem by its Kuhn-Tucker conditions. Bard and Falk [7] solved the resulting problem by using a separable nonconvex programming algorithm based on branch-and-bound techniques. An alternative procedure, based on parametric complementary pivoting, was given by Bialas et al. [8]. Recently, another approach was proposed by Bard [1, 2]: the inner objective is replaced by an infinite set of constraints, the Kuhn-Tucker conditions for the resulting problem are analyzed, and a similarity with the Kuhn-Tucker conditions associated with a parametric linear program is observed. Consequently, a grid search algorithm (GSA) based on solving the parametric linear program is given [2]. Furthermore, the parametric program points out a relationship between bilevel and bicriteria programming. Computational results show that the GSA generally outperforms the other approaches.

In this paper the relationship between bilevel and bicriteria programming is studied further, and an algorithm for the linear BLPP is developed through the adaptation of a linear bicriteria programming algorithm proposed previously [9].

RELATIONSHIP OF BICRITERIA AND BILEVEL PROGRAMMING

The linear bicriteria programming problem is written as

    max  (C1z, C2z)
    subject to  Az ≤ u,                                        (2)
                z ≥ 0,

where the n-vectors C1 and C2 represent the two objective functions, A is an m × n matrix, and z and u are n- and m-vectors, respectively. A feasible point z° is called efficient if there does not exist any other feasible point z such that C1z ≥ C1z° and C2z ≥ C2z°, with at least one strict inequality. A well-known result for identifying efficient solutions is [10, 11]:


Theorem 1

A feasible point z° is efficient if and only if there exists λ ∈ (0, 1) such that z° is optimal for

    max  λC1z + (1 - λ)C2z
    subject to  Az ≤ u,                                        (3)
                z ≥ 0.

The relationship between bicriteria and bilevel programming was investigated by Bard [1, 2], who gives the following results:

Theorem 2

There exists λ° ∈ (0, 1] such that the corresponding solution (x°, y°) of the parametric linear program (4) is the optimal solution to (1):

    max  λ(ax + by) + (1 - λ)dy
    subject to  Ax + By ≤ u,                                   (4)
                x, y ≥ 0.

Noting the similarity to (3), we have:

Corollary 1

The solution to the linear BLPP is efficient with respect to the related bicriteria problem

    max  (ax + by, dy)
    subject to  Ax + By ≤ u,                                   (5)
                x, y ≥ 0.

However, Corollary 1 overlooks one special case in which the solution to the linear BLPP is not an efficient solution of (5). For λ = 1, an optimal solution to (4) need not be an efficient solution of (5), as indicated by Theorem 1. If it happens that the outer objective function has multiple optimal solutions, each of these will solve (4) with λ = 1. As seen in the example below, one of these optimal solutions, although dominated (i.e. not efficient) with respect to (5), can be the optimal solution to the BLPP.

Example:

    max_x  3x + y1 + y2,  where y1 and y2 solve
    max_{y1, y2}  -5y1 - y2
    subject to  x + y1 + y2 ≤ 3,
                2x + 2y1 + y2 ≤ 4,
                2x - 2y1 ≤ 1/2,
                x, y1, y2 ≥ 0.

The outer objective function has two optimal solutions, (x, y)^1 = (9/8, 7/8, 0) and (x, y)^2 = (5/8, 3/8, 2), both with an outer objective value of 17/4 and with corresponding inner objective values of -35/8 and -31/8, respectively. With respect to the related bicriteria problem, (x, y)^1 is therefore dominated by (x, y)^2. However, (x, y)^1 is feasible for the linear BLPP, whereas (x, y)^2 is infeasible. Thus (x, y)^1 is the solution to the linear BLPP, although it is not an efficient solution of (5). Consequently, to cover such cases, Corollary 1 is modified as Corollary 2 below.
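As an illustration only (the following check is not part of the original paper; it uses scipy.optimize.linprog, and the data are those of the example above), one can verify which of the two alternative optima of the outer objective is BLPP-feasible by solving the follower's problem over Q(x) for each candidate x and comparing the result with the candidate y:

```python
# Illustrative check of the example (not from the paper): for each candidate
# leader decision x, solve the follower's LP  max -5*y1 - y2  over Q(x) and
# compare the optimizer with the candidate y.
import numpy as np
from scipy.optimize import linprog

def inner_optimum(x):
    # linprog minimizes, so minimize 5*y1 + y2 (the negated inner objective).
    c = [5.0, 1.0]
    A_ub = [[1.0, 1.0],    #  y1 +  y2 <= 3 - x
            [2.0, 1.0],    # 2y1 +  y2 <= 4 - 2x
            [-2.0, 0.0]]   # -2y1      <= 1/2 - 2x
    b_ub = [3.0 - x, 4.0 - 2.0 * x, 0.5 - 2.0 * x]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    return res.x

for x, y in [(9 / 8, (7 / 8, 0.0)), (5 / 8, (3 / 8, 2.0))]:
    status = "feasible" if np.allclose(inner_optimum(x), y) else "infeasible"
    print(f"x = {x}: candidate y = {y} is {status} for the BLPP")
```

The first candidate, (9/8, 7/8, 0), passes the check and the second, (5/8, 3/8, 2), fails it, in agreement with the discussion above.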

Corollary 2

The solution to the linear BLPP is either efficient with respect to the bicriteria programming problem (5) or is among the multiple optimal solutions of (4) with λ = 1. Next, we briefly sketch the linear bicriteria programming algorithm [9], which was used in developing an algorithm for the linear BLPP.

THE LINEAR BICRITERIA PROGRAMMING ALGORITHM

The bicriteria algorithm is based on identifying "efficient" bases for (2), defined as follows:

Definition

A basis is called "efficient" if there exists λ ∈ (0, 1) such that λC̄1 + (1 - λ)C̄2 ≥ 0, where C̄1 and C̄2 are the reduced cost vectors associated with the basis.

As implied by Theorem 1, an extreme solution (degenerate or nondegenerate) is efficient if and only if there exists an associated "efficient" basis. We shall call two efficient bases "adjacent" if they are obtained from one another by a single pivot and all linear combinations of the two solutions represented by those bases are efficient. The main result utilized by the algorithm [9] is as follows:

Theorem 3

Given an efficient basis, the basis obtained by introducing the nonbasic variable zj into the basis is an adjacent efficient basis with a nonincreasing value of the first objective function if and only if either

(i) C̄1j/C̄2j = max{C̄1k/C̄2k : k ∈ R}, where R = {j ∈ N | C̄1j > 0, C̄2j < 0}, or
(ii) j ∈ T = {j ∈ N | C̄1j = C̄2j = 0},

where N is the index set of nonbasic variables. As the set of efficient bases is connected, starting with an efficient basis maximizing the first objective and using Theorem 3 at each step, all efficient bases can be generated.
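For illustration, a minimal sketch of the entering-variable test of Theorem 3 is given below. It is not taken from [9]; cbar1 and cbar2 are assumed to hold the reduced-cost values of the two objectives for the nonbasic variables, and the tolerance handling is an added assumption.

```python
# Sketch of the Theorem 3 test: which nonbasic variables, when introduced into
# an efficient basis, give an adjacent efficient basis with a nonincreasing
# value of the first objective.
def adjacent_efficient_entries(cbar1, cbar2, nonbasic, tol=1e-9):
    # Condition (i): R = {j in N | cbar1_j > 0, cbar2_j < 0}; keep the indices
    # attaining max over k in R of cbar1_k / cbar2_k.
    R = [j for j in nonbasic if cbar1[j] > tol and cbar2[j] < -tol]
    entries = []
    if R:
        best = max(cbar1[j] / cbar2[j] for j in R)
        entries += [j for j in R if abs(cbar1[j] / cbar2[j] - best) <= tol]
    # Condition (ii): T = {j in N | cbar1_j = cbar2_j = 0} (alternative optima).
    entries += [j for j in nonbasic
                if abs(cbar1[j]) <= tol and abs(cbar2[j]) <= tol]
    return entries

# Small usage example with three nonbasic variables.
cbar1 = {5: 2.0, 6: 0.0, 7: 4.0}
cbar2 = {5: -1.0, 6: 0.0, 7: -8.0}
print(adjacent_efficient_entries(cbar1, cbar2, nonbasic=[5, 6, 7]))  # -> [7, 6]
```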

THE LINEAR BILEVEL PROGRAMMING ALGORITHM

An algorithm for the linear BLPP can now be developed, based on Corollary 2 and utilizing the bicriteria algorithm with proper modifications. Let

H1 = {(x, y) | (x, y) is efficient with respect to (5)},
H2 = {(x, y) | (x, y) solves (4) with λ = 1}.

Then the optimal solution (x°, y°) to the BLPP is that element of H = H1 ∪ H2 which is both feasible to the BLPP and has the highest value of the first objective function. Thus, to find the global optimum of the BLPP, we need only identify this element of H. Consequently, the algorithm starts with the maximization of the first objective function, checks all multiple optimal solutions (if any) for feasibility to the BLPP and, if any of these is feasible, declares it the optimal solution to the BLPP and stops. Otherwise, it continues by generating efficient solutions of (5), checking each of these for feasibility to the BLPP. If the feasibility check is satisfied, the current solution is stored as a candidate for the optimal solution to the BLPP. A bounding scheme is also utilized to restrict the search to those efficient solutions having a value of the first objective function greater than that of the best known solution to the BLPP so far. In the algorithm, to keep track of efficient bases and to ensure that an efficient basis is generated


only once, two sets are utilized. These are

V1 = the set of efficient bases identified, but not yet generated,
V2 = the set of efficient bases already generated,

where bases are stored by the index set of their nonbasic variables, N. To establish the correspondence between (2) and (5), let z = (x, y), C1 = (a, b) and C2 = (0, d). Also let z° denote the best known solution to the BLPP at any step, and l denote the value of the outer objective at this solution. The algorithm proceeds with the following steps:

Step 1. Initialization: set V1 = ∅ and l = -M, where M is a sufficiently large number. Maximize the first objective. Form the set H2 of optimal solutions.

Step 2. Initial feasibility check:
(a) Choose z = (x, y) ∈ H2. Form the inner problem with y as the initial basic feasible solution. Check whether this is also optimal. If yes, set z° = z and go to step 6.
(b) Set H2 = H2 - {z}. If H2 = ∅, go to step 3. Otherwise go to step 2(a).

Step 3. Initialization of the search for efficient bases: find the initial efficient basis. Set V2 = {N}.

Step 4. Enumeration of adjacent efficient bases:
(a) Identify the nonbasic variable(s) zj, j ∈ R, such that C̄1j/C̄2j = max{C̄1k/C̄2k : k ∈ R}, and zj, j ∈ T, if any. Form the set Y of adjacent efficient bases corresponding to those nonbasic variables. If Y = ∅, go to step 6.
(b) Set Y1 = Y - V2, i.e. exclude from Y those efficient bases which have already been generated. If Y1 = ∅, go to step 6.
(c) Select N ∈ Y1 and pivot to the associated basis. Set V1 = V1 ∪ Y1 - {N} and V2 = V2 ∪ {N}.
(d) Denote the solution by z = (x, y). If C1z < l, go to step 6.

Step 5. Feasibility check: form the inner problem with y as the initial basic feasible solution. Check whether this is also optimal. If not, go to step 4(a). If yes, set l = C1z and z° = z.

Step 6. Termination: if V1 = ∅, stop; the optimal solution is z°. Otherwise select N ∈ V1 and pivot to the associated basis. Set V1 = V1 - {N} and V2 = V2 ∪ {N}. Go to step 4(d).
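As an illustration only (not the paper's FORTRAN implementation), the following sketch mirrors the control flow of steps 3-6: a depth-first walk over adjacent efficient bases, with l bounding the outer objective. The simplex and efficient-basis machinery is abstracted behind the callbacks adjacent, outer_value and is_blpp_feasible, which are hypothetical and are stubbed here with a small synthetic example.

```python
def bilevel_search(initial_basis, adjacent, outer_value, is_blpp_feasible):
    """Toy driver for steps 3-6: V1 holds bases identified but not yet
    generated, V2 those already generated; best and l play the roles of
    z0 and l in the paper."""
    if is_blpp_feasible(initial_basis):        # simplified stand-in for step 2
        return initial_basis, outer_value(initial_basis)
    best, l = None, float("-inf")
    V1, V2 = set(), {initial_basis}            # steps 1 and 3
    current, expand = initial_basis, True
    while True:
        if expand:
            # Step 4(a)-(c): adjacent efficient bases not generated before.
            Y1 = [b for b in adjacent(current) if b not in V2]
            if not Y1:
                expand = False                 # nothing new on this path: step 6
                continue
            current = Y1[0]                    # stay on the current path
            V2.add(current)
            V1.update(Y1[1:])                  # remember the rest for later
        else:
            # Step 6: backtrack to a stored basis, or stop.
            V1 -= V2
            if not V1:
                return best, l
            current = V1.pop()
            V2.add(current)
            expand = True
        # Step 4(d): bound test, then step 5: feasibility check.
        if outer_value(current) < l:
            expand = False                     # below the bound: abandon this path
        elif is_blpp_feasible(current):
            best, l = current, outer_value(current)
            expand = False                     # feasible: terminate this path
        # Otherwise keep expanding from the current basis (back to step 4(a)).

# Purely synthetic 5-basis adjacency structure, for illustration only.
graph = {"B0": ["B1", "B2"], "B1": ["B3"], "B2": ["B4"], "B3": [], "B4": []}
outer = {"B0": 10.0, "B1": 9.0, "B2": 8.0, "B3": 7.0, "B4": 6.0}
feasible = {"B3", "B4"}                        # pretend only these pass step 5
print(bilevel_search("B0", lambda b: graph[b],
                     lambda b: outer[b], lambda b: b in feasible))  # ('B3', 7.0)
```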

In the enumeration scheme, priority is given to bases which are adjacent to the current basis at hand. Consequently, the only place where more than one pivot is required is step 6, if V1 ≠ ∅. If there are no multiple efficient solutions, i.e. no two efficient solutions give the same vector of objective values, then at each return to step 4 only one adjacent efficient basis will be identified and V1 = ∅ always. In that case, the first feasible solution obtained at step 5 will be the optimal solution to the BLPP. If there are multiple efficient solutions, the algorithm will follow a series of paths consisting of a sequence of adjacent bases. Observe that if a feasible solution to the BLPP has just been found, then the search along that path is terminated, since from then on only efficient solutions with nonincreasing values of the first objective could be encountered. Similarly, the search along the current path is terminated at step 4(d) if C1z < l, that is, if the first objective value is less than that of the best known solution to the BLPP thus far.

A second enumeration scheme could be, at each return to step 4, to rank the efficient bases in Y1 ∪ V1 with respect to the outer objective value and generate the one with the highest value. Then the first feasible solution obtained at step 5 will be the optimal solution to the BLPP, and no bounding is required. In this case, the resulting algorithm is also a Kth-best algorithm, such as the algorithm of Bialas and Karwan [5]; here the efficient extreme point with the K*th-best value of the outer objective (from among all efficient extreme points) is identified. If there are no multiple efficient solutions, both enumeration schemes obviously lead to the same sequence of efficient bases. In the case of multiple efficient solutions, the second scheme could require, at each step, pivoting among bases which are not adjacent. However, for problems where K* is small, the second scheme could find the optimal solution more quickly.

Similar to the GSA, the present algorithm would not be suitable for problems where the set of all


feasible solutions for the related bicriteria problem is efficient, such as the counterexample given by Bard [2]. In that case the algorithm turns into an extreme-point enumeration scheme, since no adjacent extreme point can be eliminated at any step as being dominated. The global optimum will still be found, but the advantage of smaller computation time is lost.

Unlike the GSA, since the parameterization over λ is carried out implicitly by the proposed algorithm in generating the efficient solutions, there is no possibility of jumping over λ°, i.e. missing an efficient solution which is also the optimal solution to the BLPP, as could be the case with the GSA. Furthermore, degeneracy does not cause any problems, as the algorithm works with "efficient" bases. Also, in the GSA, if multiple optimal solutions exist at any iteration, signifying the presence of multiple efficient solutions as Bard [2] also points out, the procedure and storage requirements could become unwieldy. The proposed algorithm handles this situation conveniently by utilizing the sets V1 and V2 and generating each efficient solution only once.

COMPUTATIONAL RESULTS

Computational results obtained for four types of randomly generated problems with different numbers of constraints and variables are summarized in Table 1. The results give the average over 20 problems per problem type. The algorithm was coded in FORTRAN and computations were done on a VAX-11/730. Also included in Table 1 are the average CPU times on a VAX-11/780 with the GSA, as reported by Bard [2] for similar problems. Bard [2] has shown that the GSA generally outperforms other previously proposed algorithms. The results in Table 1 indicate that the algorithm presented here performs even better than the GSA, and that further reductions in CPU time are achieved.

In order to study the effect of problem structure on the computation time, as regards the relative dimensions of the vectors x and y, further sample problems were solved. The results in Table 2 again give the average over 20 problems per problem type. For a given total number of variables and constraints, the CPU time is inversely related to n1. This is probably due to the increased computational effort required in the feasibility checks, since the size of the inner problem grows as n1 decreases.

Table 1. Computational results and comparison with GSA

Problem type   n1   n2   m    Average CPU time (s)   Range of CPU times (s)   Average CPU time with GSA (s)
1               3    6   10   0.20                   0.10-0.24                0.6
2              10   10   16   0.82                   0.41-1.99                1.9
3              10   20   20   2.34                   0.77-5.14                -
4              20   30   25   4.34                   1.11-11.61               6.3

Table 2. Effect of problem structure on computation time

n1 + n2   n1   m    Average CPU time (s)   Range of CPU times (s)
50        40   25   3.29                   1.01-6.98
50        30   25   3.96                   1.13-12.71
50        25   25   4.08                   1.04-10.69
50        20   25   4.34                   1.11-11.61
50        10   25   6.02                   2.39-18.27
30        25   20   1.40                   0.77-3.39
30        20   20   1.50                   0.58-4.00
30        15   20   2.10                   0.73-8.27
30        10   20   2.34                   0.77-5.74
30         5   20   2.68                   0.63-7.80


REFERENCES

1. J. F. Bard, Optimality conditions for the bilevel programming problem. Nav. Res. Logist. Q. 31, 13-26 (1984).
2. J. F. Bard, An efficient point algorithm for a linear two-stage optimization problem. Opns Res. 31, 670-684 (1983).
3. J. E. Falk, A linear max-min problem. Math. Prog. 5, 169-188 (1973).
4. W. Candler and R. Norton, Multilevel programming. Unpublished research memorandum, World Bank, Washington, DC (1976).
5. W. F. Bialas and M. H. Karwan, On two-level optimization. IEEE Trans. Automat. Control AC-27, 211-214 (1982).
6. W. Candler and R. Townsley, A linear two-level programming problem. Comput. Opns Res. 9, 59-76 (1982).
7. J. F. Bard and J. E. Falk, An explicit solution to the multi-level programming problem. Comput. Opns Res. 9, 77-100 (1982).
8. W. F. Bialas, M. H. Karwan and J. Shaw, A parametric complementary pivot approach to multilevel programming. Research Report No. 80-2, Department of Industrial Engineering, SUNY, Buffalo (1980).
9. G. Kiziltan and E. Yucaoglu, An algorithm for bicriterion linear programming. Eur. J. Opl Res. 10, 406-411 (1982).
10. H. Isermann, The enumeration of the set of all efficient solutions for a linear multiple objective program. Opl Res. Q. 28, 711-725 (1977).
11. M. Zeleny, Multiple Criteria Decision Making. McGraw-Hill, New York (1982).