Minimax regret solution to linear programming problems with an interval objective function


European Journal of Operational Research 86 (1995) 526-536

Theory and Methodology

Masahiro Inuiguchi and Masatoshi Sakawa

Department of Industrial and Systems Engineering, Faculty of Engineering, Hiroshima University, 4-1 Kagamiyama 1-chome, Higashi-Hiroshima, Hiroshima 724, Japan

Received February 1993; revised January 1994

Abstract

This paper deals with a linear programming problem with interval objective function coefficients. A new treatment of an interval objective function is presented by introducing the minimax regret criterion used in decision theory. The properties of the minimax regret solution and its relations with possibly and necessarily optimal solutions are investigated. Next, the minimax regret criterion is applied to the final determination of the solution when a reference solution set is given. A solution method based on a relaxation procedure is proposed; the solution is obtained by repeated use of the simplex method. The minimax regret solution is obtained by the proposed solution method when the reference solution set is the set of possibly optimal solutions. In order to illustrate the proposed solution method, a numerical example is given.

Keywords: Interval programming; Linear programming; Optimization

1. Introduction

In order to deal with the ambiguity of coefficients in mathematical programming, stochastic, fuzzy and interval programming have been proposed (see, for example, Stancu-Minasian [16], Słowiński and Teghem [15], Bitran [1], Rommelfanger et al. [12], Ishibuchi and Tanaka [11], Inuiguchi and Kume [6-10], Inuiguchi et al. [5], Inuiguchi [4] and Sakawa [13]). In mathematical programming with interval objective function coefficients, an interval objective function coefficient represents a range in which the true coefficient value lies. Since the objective value is only specified by an interval, an objective such as maximizing an interval objective function should be interpreted and reformulated as one or several usual objectives. Maximizing the lower bound of the interval, maximizing the upper bound of the interval, maximizing the center of the interval and minimizing the width of the interval have been adopted as reformulated objectives (see Rommelfanger et al. [12], Ishibuchi and Tanaka [11] and Inuiguchi and Kume [6]). As a different approach to treating an interval objective function, the optimality concept of usual mathematical programming problems has been extended to the interval coefficient case (see Bitran [1], Inuiguchi and Kume [7-10] and Inuiguchi [4]). Two kinds of optimality, i.e., possible and necessary optimality, are defined in these approaches.


In the former approach, the solution can be obtained easily and is optimal or efficient for the reformulated problem, but its rationality (optimality) for the original interval programming problem is debatable. Thus, the rationality of the solution should be checked. In the latter approach, each kind of optimality has a drawback. Namely, there are usually many possibly optimal solutions, so a final decision method is required, while in many cases a necessarily optimal solution does not exist. In this paper, a new treatment of an interval objective function is proposed by introducing the minimax regret criterion used in decision theory. This provides an intermediate approach. The properties of the minimax regret solution and its relations to possibly and necessarily optimal solutions are investigated. Moreover, the minimax regret criterion is used for the determination of the final solution when a reference solution set is given. A solution method based on a relaxation procedure is proposed; the final solution is obtained by repeated use of the simplex method. In particular, when the reference solution set is the set of possibly optimal solutions, the minimax regret solution is obtained by the proposed solution method. In order to illustrate the proposed solution method, a numerical example is given.

2. Interval linear programming problem and minimax regret solution

2.1. Possibly optimal solution and necessarily optimal solution

In this paper, the following linear programming problem with interval objective function coefficients is treated:

max_{x∈X} γx,   (1)

where the feasible set X is defined by

X = {x | Ax ≤ b},   (2)

A is an m × n matrix, and x and b are n- and m-column vectors, respectively. γ is a possibilistic variable restricted by the following set of n-row vectors:

Γ = {c = (c1, c2, ..., cn) | li ≤ ci ≤ ui, i = 1, 2, ..., n},   (3)

Γ is the set of objective function coefficient vectors c = (c1, c2, ..., cn) whose i-th component ci lies in the interval [li, ui]; it represents the possible range of γ. For the sake of simplicity, we define l = (l1, l2, ..., ln) and u = (u1, u2, ..., un). We assume the cardinality of the objective value. We define the set S(c) by

S(c) = {y ∈ X | cy = max_{x∈X} cx}.   (4)

The set S(c) is the set of optimal solutions of the linear programming problem with objective function coefficient vector c subject to the constraint x ∈ X. The following two kinds of optimal solution sets have been defined for the interval linear programming problem (1) (see Inuiguchi [4] and Inuiguchi and Kume [10]):

NS = ∩_{c∈Γ} S(c),   (5)

ΠS = ∪_{c∈Γ} S(c).   (6)


An element of NS is a solution optimal for all c ∈ Γ. Since Γ shows the possible range of the objective function coefficient vector γ (in other words, the possibility of γ), an element of NS is called a 'necessarily optimal solution'. On the other hand, an element of ΠS is a solution optimal for at least one coefficient vector c and is called a 'possibly optimal solution'. The necessary and sufficient conditions for necessary and possible optimality are given by Inuiguchi [4]. A necessarily optimal solution is the most reasonable solution, but it does not exist in many cases. Usually, there exists a large number of possibly optimal solutions, and hence a final selection of the solution is required. Thus, each kind of optimal solution has a drawback. A new solution concept based on the minimax regret criterion is therefore introduced in the next subsection.

2.2. Minimax regret solution

Assume that we come to know the true objective function coefficient vector c after the solution of the problem (1) has been determined as x. Under this assumption, from the cardinality of the objective value, the regret of this determination can be expressed by

r(x, c) = max_{y∈X} (cy − cx).   (7)

The regret r(x, c) shows the difference between the optimal value under the objective function coefficient vector c and the objective value cx attained by x. When the true objective function coefficient vector is unknown, the worst (maximum) regret of determining the solution as x can be defined by

R(x) = max_{c∈Γ} r(x, c).   (8)

Here the problem (1) is formulated as the problem of minimizing the maximum regret R(x), i.e.,

min_{x∈X} R(x).   (9)

From (7) and (8), the problem (9) is rewritten as

min_{x∈X} max_{c∈Γ} max_{y∈X} (cy − cx).   (10)

The problem minimizing the maximum regret, i.e., (10), is a min-max problem subject to separate constraints [14]. Obviously, R(x) ≥ 0 always holds, and x is a necessarily optimal solution if R(x) = 0. Thus, we have the following theorem.

Theorem 1. Let x* be an optimal solution to the problem (10). If R(x*) = 0, there is a necessarily optimal solution to the problem (1) and x* is one of the necessarily optimal solutions. Conversely, if there is a necessarily optimal solution to the problem (1), then R(x*) = 0 and x* is one of the necessarily optimal solutions.
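The regret r(x, c) in (7) is itself the optimal value of an ordinary linear program, so it can be evaluated with any LP solver. The following sketch is my own illustration (not from the paper), assuming `scipy.optimize.linprog` is available; since `linprog` minimizes, the objective is negated. The data are those of Example 1 below.

```python
import numpy as np
from scipy.optimize import linprog

def regret(x, c, A_ub, b_ub, bounds):
    """r(x, c) = max_{y in X} (c y - c x), computed by one LP.

    linprog minimizes, so we minimize -c y and negate the optimal value."""
    res = linprog(-np.asarray(c), A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    assert res.success
    return -res.fun - float(np.dot(c, x))

# Feasible set of Example 1: 45x1 + 50x2 <= 530, 50x1 + 45x2 <= 515, 0 <= xi <= 8.
A = [[45.0, 50.0], [50.0, 45.0]]
b = [530.0, 515.0]
bnd = [(0.0, 8.0), (0.0, 8.0)]

# Regret of choosing x = (4, 7) if the true coefficients turn out to be c = (1, 3):
print(regret([4.0, 7.0], [1.0, 3.0], A, b, bnd))  # about 1.8889 = 17/9
```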

Given x ∈ X and c ∈ Γ, an optimal solution y of the problem

max_{y∈X} (cy − cx)   (11)


is an element of the set S(c). Since ΠS is the union of S(c) over all c ∈ Γ, as defined by (6), the problem (10) can be represented as

min_{x∈X} max_{c∈Γ} max_{y∈ΠS} (cy − cx).   (12)

This indicates that for y it is sufficient to consider all possibly optimal solutions instead of all feasible solutions. Moreover, from the fundamental theorems of linear programming [3], there is an optimal basic feasible solution s_c in S(c) if S(c) is nonempty. Letting

ΠB = ∪_{c∈Γ} {s_c},   (13)

the problem (12) can be expressed as

min_{x∈X} max_{c∈Γ} max_{y∈ΠB} (cy − cx).   (14)

Namely, for y it is sufficient to consider all possibly optimal basic feasible solutions. The following theorem shows that it is sufficient to consider a finite set Δ with at most 2^n elements instead of the infinite set Γ.

Theorem 2. The problem (10) is equivalent to the following problem:

min_{x∈X} max_{c∈Δ} max_{y∈X} (cy − cx),   (15)

where Δ is defined by

Δ = {c = (c1, c2, ..., cn) | ci = li or ci = ui, i = 1, 2, ..., n}.   (16)

Proof. It is sufficient to show that an optimal solution c* of

max_{c∈Γ} (cy − cx)   (17)

is an element of Δ when x ∈ X and y ∈ X are given. Since the objective function of the problem (17) can be rewritten as c(y − x), c* = (c1*, c2*, ..., cn*) is obtained as

ci* = li if yi − xi < 0;  ci* = ui if yi − xi ≥ 0;  i = 1, 2, ..., n,   (18)

where x = (x1, x2, ..., xn)^t and y = (y1, y2, ..., yn)^t. Hence, c* ∈ Δ.  □
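The selection rule (18) is easy to implement directly. Below is a minimal sketch (function names are my own, and the toy data are assumptions) that builds the regret-maximizing coefficient vector for given x and y, together with a brute-force maximization over the finite set Δ of (16) as a cross-check; the enumeration is exponential in n and is meant only for verification on tiny instances.

```python
import itertools
import numpy as np

def worst_c(l, u, x, y):
    """Rule (18): c_i = u_i where y_i - x_i >= 0, else c_i = l_i."""
    l, u, x, y = map(np.asarray, (l, u, x, y))
    return np.where(y - x >= 0.0, u, l)

def max_over_delta(l, u, x, y):
    """max_{c in Delta} c(y - x) by enumerating all 2^n vertices of Gamma."""
    d = np.subtract(y, x)
    return max(float(np.dot(c, d)) for c in itertools.product(*zip(l, u)))

# Toy intervals [1,1] and [1,3]; y - x = (-1, 2), so (18) picks c* = (l1, u2) = (1, 3).
l, u = [1.0, 1.0], [1.0, 3.0]
x, y = [2.0, 1.0], [1.0, 3.0]
c_star = worst_c(l, u, x, y)
print(c_star, float(np.dot(c_star, np.subtract(y, x))))
```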

2.3. Characteristics of the minimax regret solution

The minimax regret solution has the following two characteristics: (i) The minimax regret solution is not always a basic solution (see Example 1); thus, it is not necessarily an extreme point of the feasible set X. (ii) If X ≠ ∅ and max_{y∈X} cy < +∞ for every c ∈ Γ, the minimax regret solution always exists. As stated in Subsection 2.1, a necessarily optimal solution does not always exist even when X ≠ ∅ and max_{y∈X} cy < +∞ for every c ∈ Γ, but the minimax regret solution always exists by (ii). Thus,

Fig. 1. An example of the minimax regret solution.

the drawback of a necessarily optimal solution is remedied by the minimax regret solution. Based on the minimax regret criterion, an arbitrary minimax regret solution can be regarded as the final solution of the problem (1) even if several minimax regret solutions exist. In this sense, the drawback of a possibly optimal solution is also resolved. Given a reference solution set, the minimax regret criterion can be used as a final decision method. This subject is discussed in the next section.

Example 1. Consider the following interval linear programming problem:

maximize [1, 3]x1 + [1, 3]x2,   (19a)

subject to 45x1 + 50x2 ≤ 530,   (19b)

50x1 + 45x2 ≤ 515,   (19c)

0 ≤ x1 ≤ 8, 0 ≤ x2 ≤ 8.   (19d)

The minimax regret solution of this problem is (x1, x2)^t = (5.34211, 5.50877)^t. The maximum regret is r = 5.02047. As illustrated in Fig. 1, this solution is not an extreme point of the feasible set. When non-negativity constraints for the decision variables are contained in the constraints, an interval linear programming problem is formulated as follows (see Rommelfanger et al. [12] and Ishibuchi and Tanaka [11]):

v-maximize (lx, ux),   (20a)

subject to Ax ≤ b,   (20b)

where 'v-maximize' stands for 'vector maximize'. Efficient solutions to the multiobjective linear programming problem (20) are regarded as reasonable solutions, and if there is a completely optimal solution that


maximizes both objective functions simultaneously, it has been considered the best solution. Applying this formulation to the problem (19), there is a completely optimal solution (x1, x2)^t = (4, 7)^t. However, this solution is not necessarily optimal. The region of Γ that makes the solution (4, 7)^t optimal is G1 in Fig. 1. G1 is smaller than G2 and G3, which make the solutions at points Q and T optimal, respectively. Thus, the solution (4, 7)^t is not a good solution in the sense that the optimality of the solution is expected. The solution (4, 7)^t is reasonable only in a satisficing scheme. On the other hand, the proposed minimax regret formulation provides a solution that minimizes the worst difference in objective value between the determined solution and any other. In view of the fact that all possibly optimal solutions are considered, the minimax regret formulation is closer to a formulation in an optimizing scheme than the multiobjective formulation (20).
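Example 1 can be checked numerically. The sketch below is my own illustration (not the authors' code, and `scipy.optimize.linprog` is assumed available): by Theorem 2, R(x) = max_{c∈Δ} [max_{y∈X} cy − cx] can be evaluated by solving one LP per vertex of Γ, and the reported minimax regret solution can be compared with the completely optimal point (4, 7) of the formulation (20).

```python
import itertools
import numpy as np
from scipy.optimize import linprog

A = [[45.0, 50.0], [50.0, 45.0]]   # (19b), (19c)
b = [530.0, 515.0]
bnd = [(0.0, 8.0), (0.0, 8.0)]     # (19d)
l, u = [1.0, 1.0], [3.0, 3.0]      # objective [1,3]x1 + [1,3]x2

def max_regret(x):
    """R(x) via Theorem 2: enumerate the 4 vertices of Gamma, one LP each."""
    R = 0.0
    for c in itertools.product(*zip(l, u)):
        res = linprog(-np.asarray(c), A_ub=A, b_ub=b, bounds=bnd)
        R = max(R, -res.fun - float(np.dot(c, x)))
    return R

print(max_regret([5.34211, 5.50877]))  # about 5.0205, matching the reported r
print(max_regret([4.0, 7.0]))          # larger: (4, 7) is worse under minimax regret
```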

3. A final decision method based on the minimax regret criterion

In this section, the minimax regret criterion is introduced into the final determination of the solution when a reference solution set G is given. The following problem is discussed:

min_{x∈X} max_{c∈Γ} max_{y∈G} (cy − cx).   (21)

The final solution does not necessarily belong to the reference solution set G. Thus, an element of G is not a candidate for the final solution but only a reference. If the final solution should belong to G, in other words, if G should be a set of candidates, then the constraint x ∈ X in (21) is changed to x ∈ G. In this case, the solution method proposed in the following cannot be applied unless G is a convex polyhedron. In what follows, we assume that G is defined by a finite set of solutions E and satisfies

E ⊆ G ⊆ F,   (22)

where E has p elements, i.e.,

E = {y^1, y^2, ..., y^p}.   (23)

The set F is the convex hull of E, i.e.,

F = {Σ_{j=1}^{p} μj y^j | Σ_{j=1}^{p} μj = 1, μj ≥ 0, j = 1, 2, ..., p}.   (24)

Obviously, F is a convex polyhedron. For example, the possibly optimal solution set ΠS and the set of efficient solutions to the problem (20) fulfill (22). In particular, from (12) or (14), the problem (21) is equivalent to the problem (10) when G = ΠS or G = ΠB. Here, a method of solution to the problem (21) is proposed based on a relaxation procedure. Since a set of possibly optimal basic solutions, ΠB, is easily obtained by multiparametric linear programming techniques [2,17-19], the proposed solution method is useful for solving the problem (10). More precisely, by Theorem 2, for solving the problem (10) it is sufficient to calculate the set of optimal basic feasible solutions s_c of the linear programming problems with objective function coefficient vectors c ∈ Δ, i.e., ΠC = ∪_{c∈Δ} {s_c}.

Theorem 3. If (22) holds, an optimal solution to the following problem is also an optimal solution to the problem (21):

min_{x∈X} max_{c∈Γ} max_{y∈E} (cy − cx).   (25)


Proof. From (22), it is sufficient to show that an optimal solution of the problem (25) optimizes the following problem:

min_{x∈X} max_{c∈Γ} max_{y∈F} (cy − cx).   (26)

To do this, we demonstrate that, for any x ∈ X and c ∈ Γ, an optimal solution to

max_{y∈E} (cy − cx)   (27)

is also an optimal solution to

max_{y∈F} (cy − cx).   (28)

The set E is composed of all extreme points of the convex polyhedron F. From the fundamental theorems of linear programming [3], there is an optimal basic feasible solution (an optimal extreme point) in F if the problem (28) has an optimal solution. Hence, an optimal solution to the problem (27) is also an optimal solution to the problem (28).  □

By Theorem 3, it is sufficient to solve the problem (25). Using an artificial variable r, the problem (25) can be rewritten as

minimize r,   (29a)

subject to Ax ≤ b,   (29b)

max_{c∈Γ} (cy^j − cx) ≤ r, j = 1, 2, ..., p,   (29c)

or equivalently,

minimize r,   (29a')

subject to Ax ≤ b,   (29b')

cy^j − cx ≤ r, for all c ∈ Γ, j = 1, 2, ..., p.   (29c')

Given x ∈ X and y^j ∈ E, an optimal solution c* = (c1*, c2*, ..., cn*) of the sub-problem of (29),

φ(x, y^j) = max_{c∈Γ} (cy^j − cx),   (30)

is easily obtained as

ci* = li if yi^j − xi < 0;  ci* = ui if yi^j − xi ≥ 0;  i = 1, 2, ..., n,   (31)

where y^j = (y1^j, y2^j, ..., yn^j)^t and x = (x1, x2, ..., xn)^t. Therefore, the value of φ(x, y^j) can also be obtained easily. From (31), for any x ∈ X and any y^j ∈ E, an optimal solution to the sub-problem belongs to the finite set Δ defined by (16). Thus, the problem (29) is equivalent to the following problem:

minimize r,   (32a)

subject to Ax ≤ b,   (32b)

cy^j − cx ≤ r, for all c ∈ Δ, j = 1, 2, ..., p.   (32c)
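For small n, the problem (32) can also be solved directly as a single LP in the variables (x, r), stacking one constraint per pair (c, y^j) with c ∈ Δ. The sketch below is my own illustration assuming `scipy.optimize.linprog`; it is exponential in n (|Δ| = 2^n), which is precisely why the relaxation procedure below is preferred. The data E1 at the end is my own enumeration of the possibly optimal extreme points of Example 1.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def solve_direct(l, u, A, b, E, x_bounds):
    """Solve (32): min r s.t. Ax <= b and c y^j - c x <= r for all c in Delta, y^j in E.

    Builds all |Delta| * |E| regret constraints explicitly; tiny instances only."""
    n = len(l)
    A = np.asarray(A, float)
    rows, rhs = [], []
    for A_i, b_i in zip(A, b):                   # Ax <= b (r has coefficient 0)
        rows.append(np.append(A_i, 0.0)); rhs.append(b_i)
    for c in itertools.product(*zip(l, u)):      # c y^j - c x <= r  ->  [-c, -1](x, r) <= -c y^j
        for y in E:
            rows.append(np.append(-np.asarray(c), -1.0))
            rhs.append(-float(np.dot(c, y)))
    cost = np.append(np.zeros(n), 1.0)           # minimize r
    res = linprog(cost, A_ub=np.vstack(rows), b_ub=rhs,
                  bounds=list(x_bounds) + [(0.0, None)])
    return res.x[:n], res.x[n]                   # (minimax regret x, minimax regret r)

# Example 1: Delta has 4 vertices; E1 = the three possibly optimal extreme points.
A1 = [[45.0, 50.0], [50.0, 45.0]]
b1 = [530.0, 515.0]
E1 = [[4.0, 7.0], [26/9, 8.0], [8.0, 23/9]]
x_opt, r_opt = solve_direct([1.0, 1.0], [3.0, 3.0], A1, b1, E1, [(0.0, 8.0), (0.0, 8.0)])
print(x_opt, r_opt)   # r_opt about 5.0205, as reported in Example 1
```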

The following solution algorithm is derived based on a relaxation procedure [14]:

Algorithm
Step 1. Set c^1 = u. Let z^1 be an optimal solution to the problem max_{y∈X} uy.
Step 2. Set r^0 = 0, k = 2 and x^0 = z^1.
Step 3. Let z^k be an optimal solution to the problem max_{y∈E} φ(x^0, y).
Step 4. Let c^k be an optimal solution to the problem max_{c∈Γ} (cz^k − cx^0).
Step 5. If φ(x^0, z^k) ≤ r^0, then terminate the algorithm. In this case, the optimal solution is obtained as x^0.
Step 6. Solve the following linear programming problem:

minimize r,   (33a)

subject to Ax ≤ b,   (33b)

c^j z^j − c^j x ≤ r, j = 1, 2, ..., k,   (33c)

and let (x*, r*) be the optimal solution. Set x^0 = x*, r^0 = r* and k = k + 1. Return to Step 3.

Here the problem (33) is a relaxed problem of the problem (32). The problems in Steps 1 and 6 can be solved by the simplex method. A solution to the problem in Step 4 is easily obtained as shown in (31). By the finiteness of the set E, the problem in Step 3 is also solved easily. Since the sets Δ and E have finitely many elements, the set of all combinations of z^k and c^k produced in Steps 3 and 4 is finite. Hence, this algorithm terminates in finitely many iterations.
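The algorithm above can be sketched in code. The following is a minimal illustration of the relaxation procedure (my own code; `scipy.optimize.linprog` stands in for the simplex method, and the function name, tolerance and the Example 1 data at the end are assumptions). Steps 3 and 4 use the closed-form sub-problem solution (31); Step 6 solves the relaxed LP (33) over the cuts accumulated so far.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_regret_relaxation(l, u, A, b, E, x_bounds, tol=1e-9, max_iter=100):
    """Relaxation procedure for problem (25)/(32); returns (x^0, r^0)."""
    l, u = np.asarray(l, float), np.asarray(u, float)
    A, b = np.asarray(A, float), np.asarray(b, float)
    E = [np.asarray(y, float) for y in E]
    n = len(l)

    def phi(x, y):
        """phi(x, y) = max_{c in Gamma} (c y - c x), via rule (31)."""
        c = np.where(y - x >= 0.0, u, l)
        return float(np.dot(c, y - x)), c

    # Steps 1-2: c^1 = u, z^1 = argmax_{y in X} u y, x^0 = z^1, r^0 = 0.
    res = linprog(-u, A_ub=A, b_ub=b, bounds=x_bounds)
    x0, r0 = res.x, 0.0
    cuts = [(u, x0.copy())]                      # pairs (c^j, z^j) for (33c)
    for _ in range(max_iter):
        # Steps 3-4: worst reference point z^k and its worst-case c^k.
        vals = [phi(x0, y) for y in E]
        k = int(np.argmax([v for v, _ in vals]))
        viol, ck = vals[k]
        if viol <= r0 + tol:                     # Step 5: x^0 satisfies all of (32)
            return x0, r0
        cuts.append((ck, E[k]))
        # Step 6: relaxed LP (33) in variables (x, r).
        rows = [np.append(A_i, 0.0) for A_i in A]
        rhs = list(b)
        for c, z in cuts:                        # c z - c x <= r  ->  [-c, -1](x, r) <= -c z
            rows.append(np.append(-c, -1.0))
            rhs.append(-float(np.dot(c, z)))
        cost = np.append(np.zeros(n), 1.0)
        res = linprog(cost, A_ub=np.vstack(rows), b_ub=rhs,
                      bounds=list(x_bounds) + [(0.0, None)])
        x0, r0 = res.x[:n], float(res.x[n])
    raise RuntimeError("relaxation did not converge")

# Check on Example 1 (E1 = my enumeration of the possibly optimal extreme points):
x1_opt, r1 = minimax_regret_relaxation(
    [1.0, 1.0], [3.0, 3.0],
    [[45.0, 50.0], [50.0, 45.0]], [530.0, 515.0],
    [[4.0, 7.0], [26/9, 8.0], [8.0, 23/9]],
    [(0.0, 8.0), (0.0, 8.0)])
print(x1_opt, r1)   # r1 about 5.0205, as reported in Example 1
```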

4. A numerical example

In order to illustrate the solution algorithm proposed in the previous section, the following numerical example is given.

Table 1
Nineteen solutions belonging to E

       x1       x2       x3       x4       x5       x6       x7       x8
y1     0        2.4615   2.0000   0        0        0        0        10.1538
y2     0        3.9268   0        0        0        8.5854   0        2.2927
y3     1.6941   0        0.5176   0        0        0        20.1882  0
y4     0        0        0.1575   5.6693   0        0        17.4803  0
y5     0        0        4.8368   8.6309   0        0        10.3133  7.5224
y6     0        3.5289   4.5392   0        0        0        16.3081  3.7385
y7     0        2.3347   2.8892   0        0        3.8702   20.3412  0
y8     3.3568   0        0.2810   0        0        0        0        9.4418
y9     1.4592   0        0        4.5817   0        4.1633   7.4490   0
y10    0        0.0998   4.0025   10.8011  0        0        0        11.2274
y11    0        0.1534   0        7.9603   0        6.9313   0        6.1620
y12    0.0250   6.3641   11.2127  0        0        0        0        3.7566
y13    0        6.4024   11.2191  0        0.0584   0        0        3.7849
y14    0        0        6.5366   11.6342  0        0        0        9.9268
y15    0        0        2.2128   4.2534   0        5.8007   19.3986  0
y16    1.4075   0        1.8491   0        0        3.9296   21.0296  0
y17    0        2.8016   1.7899   0        0        0        19.3774  0
y18    0        0        1.7875   0        0        4.9500   21.1000  0
y19    0        6.3905   11.2235  0        0        0.0922   0        3.6616


Example 2. Consider the following interval linear programming problem:

maximize [0, 1]x1 + x2 + [−1, 1]x3 + [−1, 1]x4 + [−3, −1]x5 + [0, 1]x6 + [0, 1]x7 + x8,   (34a)

subject to x1 + 3x2 − 4x3 + x4 − x5 + x6 + 2x7 + 4x8 ≤ 40,   (34b)

5x1 + 2x2 + 4x3 − x4 − 3x5 + 7x6 + 2x7 + 7x8 ≤ 84,   (34c)

4x2 − x3 − x4 − 3x5 + x8 ≤ 18,   (34d)

−3x1 − 4x2 + 8x3 + 2x4 + 3x5 − 4x6 + 5x7 − x8 ≤ 100,   (34e)

12x1 + 8x2 − x3 + 4x4 + x6 + x7 ≤ 40,   (34f)

x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 ≥ 12,   (34g)

8x1 − 12x2 − 3x3 + 4x4 − x5 ≤ 30,   (34h)

−5x1 − 6x2 + 12x3 + x4 − x7 + x8 ≤ 100,   (34i)

xj ≥ 0, j = 1, 2, ..., 8.   (34j)

Step 1. z^1 = (0, 0, 2.2128, 4.2534, 0, 5.8007, 19.3986, 0)^t. c^1 = u = (1, 1, 1, 1, −1, 1, 1, 1).
Step 2. r^0 = 0, k = 2 and x^0 = z^1.
Step 3. z^2 = (0, 6.4024, 11.2191, 0, 0.0584, 0, 0, 6.7849)^t.
Step 4. c^2 = (1, 1, 1, −1, −1, 0, 0, 1).
Step 5. φ(x^0, z^2) = 26.3886 > r^0 = 0. Continue.
Step 6. Solve a linear programming problem. x^0 = (0, 5.3919, 11.3439, 0.4458, 0, 0, 6.4260, 2.2049)^t, r^0 = 5.8531 and k = 3. Go to Step 3.
Step 3. z^3 = (0, 0.1534, 0, 7.9603, 0, 6.9313, 0, 6.1620)^t.
Step 4. c^3 = (1, 1, −1, 1, −1, 1, 0, 1).
Step 5. φ(x^0, z^3) = 24.5083 > r^0 = 5.8531. Continue.
Step 6. Solve a linear programming problem. x^0 = (0, 3.8243, 5.2293, 3.6588, 0, 0, 0, 8.4419)^t, r^0 = 10.5113 and k = 4. Go to Step 3.
Step 3. z^4 = (1.4075, 0, 1.8491, 0, 0, 3.9296, 21.0296, 0)^t.
Step 4. c^4 = (1, 1, −1, −1, −1, 1, 1, 1).
Step 5. φ(x^0, z^4) = 21.1394 > r^0 = 10.5113. Continue.
Step 6. Solve a linear programming problem. x^0 = (0.4002, 3.6247, 2.7553, 0.8187, 0, 0.7323, 4.9479, 7.0750)^t, r^0 = 11.3114 and k = 5. Go to Step 3.
Step 3. z^5 = (0, 0, 6.5366, 11.6342, 0, 0, 0, 9.9268)^t.
Step 4. c^5 = (0, 1, 1, 1, −1, 0, 0, 1).
Step 5. φ(x^0, z^5) = 13.8239 > r^0 = 11.3114. Continue.
Step 6. Solve a linear programming problem. x^0 = (0, 3.9548, 3.5372, 1.4008, 0, 0.1837, 6.1122, 7.1189)^t, r^0 = 12.0860 and k = 6. Go to Step 3.
Step 3. z^6 = (0, 0.1534, 0, 7.9603, 0, 6.9313, 0, 6.1620)^t.
Step 4. c^6 = (1, 1, −1, 1, −1, 1, 0, 1).
Step 5. φ(x^0, z^6) = 12.0860 ≤ r^0 = 12.0860. Terminate. The solution is obtained as x^0 = (0, 3.9548, 3.5372, 1.4008, 0, 0.1837, 6.1122, 7.1189)^t.

Fig. 2. The iteration process of the proposed solution algorithm.

Let us obtain a minimax regret solution to this problem. Namely, let the reference solution set G be a possibly optimal solution set ΠS, ΠB or ΠC. Calculating ΠC, we obtain the nineteen elements listed in Table 1. The set E is composed of these nineteen solutions. Using the algorithm proposed in the previous section, the minimax regret solution is obtained as

(x1, x2, x3, x4, x5, x6, x7, x8)^t = (0, 3.9548, 3.5372, 1.4008, 0, 0.1837, 6.1122, 7.1189)^t.   (35)

The minimax regret is obtained as r = 12.086. Fig. 2 shows the iteration process of the solution algorithm until the minimax regret solution is obtained.

5. Conclusion

A new solution concept for a linear programming problem with an interval objective function is proposed based on the minimax regret criterion. Its properties and its interrelations with possibly and necessarily optimal solutions are investigated. A final decision method based on the minimax regret criterion is proposed for the case where a reference solution set is given. A solution algorithm is given using a relaxation procedure; the final determination problem is solved by repeated use of the simplex method. In particular, a minimax regret solution is obtained by this solution algorithm when the reference solution set is the possibly optimal solution set. An example is given to demonstrate the proposed solution algorithm.

References

[1] Bitran, G.R., "Linear multiple objective problems with interval coefficients", Management Science 26 (1981) 694-706.
[2] Gal, T., and Nedoma, J., "Multiparametric linear programming", Management Science 18 (1972) 406-422.
[3] Goldfarb, D., and Todd, M.J., "Linear programming", in: G.L. Nemhauser et al. (eds.), Handbooks in Operations Research and Management Science, Vol. 1: Optimization, North-Holland, Amsterdam, 1989, 73-170.
[4] Inuiguchi, M., "Fuzzy mathematical programming" (in Japanese), in: K. Asai and H. Tanaka (eds.), Fuzzy Operations Research, Nikkan Kougyou Sinbunsha, Tokyo, 1993, 41-90.
[5] Inuiguchi, M., Ichihashi, H., and Tanaka, H., "Fuzzy programming: A survey of recent developments", in: R. Słowiński and J. Teghem (eds.), Stochastic versus Fuzzy Approaches to Multiobjective Mathematical Programming under Uncertainty, Kluwer Academic Publishers, Dordrecht, 1990, 45-68.
[6] Inuiguchi, M., and Kume, Y., "Goal programming problems with interval coefficients and target intervals", European Journal of Operational Research 52 (1991) 345-360.
[7] Inuiguchi, M., and Kume, Y., "Dominance relations as bases for constructing solution concepts in linear programming with multiple interval objective functions", Bulletin of University of Osaka Prefecture, Series A: Engineering and Natural Sciences 40 (1991) 275-292.
[8] Inuiguchi, M., and Kume, Y., "Efficient solutions versus nondominated solutions in linear programming with multiple interval objective functions", Bulletin of University of Osaka Prefecture, Series A: Engineering and Natural Sciences 40 (1991) 293-298.
[9] Inuiguchi, M., and Kume, Y., "Properties of nondominated solutions to linear programming problems with multiple interval objective functions", Bulletin of University of Osaka Prefecture, Series A: Engineering and Natural Sciences 41 (1992) 23-37.
[10] Inuiguchi, M., and Kume, Y., "Extensions of efficiency to possibilistic multiobjective linear programming problems", Proceedings of the Tenth International Conference on Multiple Criteria Decision Making, Vol. III (1992) 331-340.
[11] Ishibuchi, H., and Tanaka, H., "Multiobjective programming in optimization of the interval objective function", European Journal of Operational Research 48 (1990) 219-225.


[12] Rommelfanger, H., Hanuscheck, R., and Wolf, J., "Linear programming with fuzzy objectives", Fuzzy Sets and Systems 29 (1989) 31-48.
[13] Sakawa, M., Fuzzy Sets and Interactive Multiobjective Optimization, Plenum Press, New York, 1993.
[14] Shimizu, K., and Aiyoshi, E., "Necessary conditions for min-max problems and algorithms by a relaxation procedure", IEEE Transactions on Automatic Control AC-25 (1980) 62-66.
[15] Słowiński, R., and Teghem, J. (eds.), Stochastic versus Fuzzy Approaches to Multiobjective Mathematical Programming under Uncertainty, Kluwer Academic Publishers, Dordrecht, 1990.
[16] Stancu-Minasian, I.M., Stochastic Programming with Multiple Objective Functions, D. Reidel Publishing Company, 1984.
[17] Steuer, R.E., "Algorithms for linear programming problems with interval objective function coefficients", Mathematics of Operations Research 6 (1981) 333-348.
[18] Van de Panne, C., "A node method for multiparametric linear programming", Management Science 21 (1975) 1014-1020.
[19] Yu, P.L., and Zeleny, M., "Linear multiparametric programming by multicriteria simplex method", Management Science 23 (1976) 159-170.