European Journal of Operational Research 31 (1987) 110-117
North-Holland

Linear programming with a possibilistic objective function

M.K. LUHANDJULA
Université de Tizi-Ouzou, Institut d'Informatique, Tizi-Ouzou, Algeria
Abstract: This paper proposes some ways of dealing with a linear program whose objective-function coefficients are subject to possibilistic imprecision, i.e. are characterized by fuzzy restrictions. Emphasis is placed upon a passive approach that yields a satisfying solution via an appropriate semi-infinite program, and an active one that allows one to reach a solution with a high possibility level of optimality. Extensions to the case of possibilistic constraints and to multiple-objective programming problems with possibilistic coefficients are also sketched. We end with some concluding remarks and indicate directions for further development.

Keywords: Optimization, possibilistic variables, possibly optimal solutions
1. Introduction
Received January 1986; revised August 1986

In many practical situations that may be cast into a linear programming model, the decision maker is only able to express a subjective feeling about the relevant parameters. This has triggered many attempts to incorporate imprecision into the scope of a linear program [5,6,7,8,10,13]. A closer look reveals that the problem of incorporating possibilistic components in the objective function of a linear program has received only scarce attention [5,9]. The purpose of this paper is to explore this neglected topic further. First, a procedure consisting of replacing the possibilistic variables involved by their more possible values, i.e. the values meeting the fuzzy restrictions involved to the highest extent, is discussed. Well-known results on tangent cones [1,4] are exploited to cope with the resulting deterministic problems. A basic objection that can be raised against this approach is that only local features (more possible values) of the possibility distributions involved are taken into account. As an alternative,
an approach which converts the original problem into a semi-infinite program is presented. The interest of such an approach is twofold: it takes many features of the possibility distributions involved into consideration, and it yields a solution that maximizes the possibility of the objective function being greater than an optimal threshold. A cutting-plane method for solving the resulting semi-infinite program is discussed. The above-mentioned approaches necessitate the elicitation of particular values on the support of the possibility distributions considered (more possible values, α-levels); therefore the name 'passive approach' is used. A method that proceeds directly (an 'active approach') and delivers a solution with a high possibility level of optimality (β-possibly optimal) is also outlined. Extensions to the possibilistic constraints case and to multiple objective programming are also suggested.

The paper is organized as follows. Preliminary material, including the formulation of the problem, is given in Section 2. Passive and active approaches are developed in Sections 3 and 4 respectively. Section 5 is devoted to extensions. We end in Section 6 with concluding remarks and indicate directions for further investigation.
0377-2217/87/$3.50 © 1987, Elsevier Science Publishers B.V. (North-Holland)
2. Preliminaries

2.1. Possibilistic variables

Consider a fuzzy set F in a universe of discourse U characterized by its membership function μ_F. Suppose that F acts as a fuzzy restriction associated with a variable X taking its values on U, so that the assignment of a value u to X has the form

X = u : μ_F(u),

where μ_F(u) is the degree to which the constraint represented by F is satisfied when u is assigned to X. X is called a possibilistic variable on U; its possibility distribution Π_X is defined by

Π_X(u) = μ_F(u)  ∀u ∈ U.

As an example, consider the universe of discourse of positive integers ℕ and let F be the set of small positive integers:

F = {(1, 1), (2, 0.9), (3, 0.8), (4, 0.7), ...}.

Then the proposition "X is a small integer" associates with X the possibility distribution Π_X = μ_F, in which a term such as (4, 0.7) signifies that the possibility that X is a small integer, given that the value 4 is assigned to X, is 0.7.

2.2. Some concepts associated with a possibilistic variable

Consider a possibilistic variable X on U characterized by its distribution Π_X.
- X is said to be normalized if there is a u ∈ U such that Π_X(u) = 1.
- X is said to be convex if ∀u₁, u₂ ∈ U and v ∈ [0, 1],
  Π_X(vu₁ + (1 − v)u₂) ≥ min(Π_X(u₁), Π_X(u₂)).
- The set of more possible values for X, denoted by V_p(X), is the set of values of U of maximal possibility, i.e.
  V_p(X) = {u ∈ U | Π_X(u) ≥ Π_X(u′) ∀u′ ∈ U}.

2.3. Statement of the problem

The problem addressed in this paper is that of finding a solution that is optimal in some sense for the following program:

max ĉx,
Ax ≤ b,  x ≥ 0,   (P1)

where ĉ is an n-vector, b is an m-vector, A is an m × n matrix, and the components ĉ_j of the vector ĉ are assumed to be characterized by convex normalized possibility distributions Π_{ĉ_j}. Such a problem arises quite naturally in practice, especially when the values of c_j (j = 1, ..., n) are obtained from experts.

3. Passive approaches for P1

3.1. An optimistic approach

A possible attitude for dealing with P1 is to find a solution which is optimal for the most favourable circumstances. To this end one may replace the possibilistic variables involved by their more possible values, or fairly good estimates of them. This leads to the following problem:

max Σ_{j=1}^{n} c_j x_j,
c_j ∈ V_p(ĉ_j),  j = 1, ..., n,   (P2)
Ax ≤ b,  x ≥ 0,

or, more compactly,

max cx,  Ax ≤ b,  x ≥ 0,  c ∈ 𝒞,   (P2′)

where 𝒞 = {(c₁, ..., c_n) ∈ ℝⁿ | c_j ∈ V_p(ĉ_j)}.
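To fix ideas, the notions of Sections 2.1-2.2, together with the quantities V_p^min and V_p^max used below, can be sketched in a few lines of Python. This is a toy model under stated assumptions: the class and method names are illustrative, and a triangular possibility distribution is assumed, so that V_p(X) reduces to a single peak and V_p^min coincides with V_p^max.

```python
# Toy model of a convex, normalized possibilistic variable (Section 2).
# Assumption: a triangular possibility distribution on [left, right] with
# maximal possibility at 'peak'; all names here are illustrative.
class TriangularPossibilisticVariable:
    def __init__(self, left, peak, right):
        assert left <= peak <= right
        self.left, self.peak, self.right = left, peak, right

    def possibility(self, u):
        """Pi_X(u) = mu_F(u): degree to which assigning u to X satisfies F."""
        if u < self.left or u > self.right:
            return 0.0
        if u <= self.peak:
            return 1.0 if self.peak == self.left else (u - self.left) / (self.peak - self.left)
        return 1.0 if self.peak == self.right else (self.right - u) / (self.right - self.peak)

    def more_possible_values(self):
        """V_p(X): values of maximal possibility (here a single peak)."""
        return [self.peak]

    def beta_cut(self, beta):
        """{u | Pi_X(u) >= beta}, the beta-cut used in Sections 3.2 and 4."""
        lo = self.left + beta * (self.peak - self.left)
        hi = self.right - beta * (self.right - self.peak)
        return (lo, hi)

c = TriangularPossibilisticVariable(0.0, 1.0, 2.0)
print(c.possibility(0.5))        # 0.5
print(c.more_possible_values())  # [1.0]
print(c.beta_cut(0.5))           # (0.5, 1.5)
```

For a triangular variable the distribution is normalized (the peak has possibility 1) and convex (possibility is unimodal), so it satisfies the assumptions placed on the ĉ_j in P1.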
Let us now turn to features of importance for computational purposes. We will use the following notation:

V_p^min(ĉ_j) = min {t | t ∈ V_p(ĉ_j)},
V_p^max(ĉ_j) = max {t | t ∈ V_p(ĉ_j)},
F = {x ∈ ℝⁿ | Ax ≤ b, x ≥ 0},
where F is assumed to be a convex and compact subset of ℝⁿ.

Theorem 1. x⁰ realises the maximum of cx on F (c a vector in ℝⁿ) if and only if c ∈ T⁺(F, x⁰), where T⁺(F, x⁰) is the polar of the cone of tangents of F at x⁰, i.e.

T⁺(F, x⁰) = {c ∈ ℝⁿ | cy ≤ 0 ∀y ∈ T(F, x⁰)}

and

T(F, x⁰) = {x ∈ ℝⁿ | x = lim u_n(x_n − x⁰), u_n > 0, x_n ∈ F ∀n and x_n → x⁰}.

The proof of this result, as well as further details on cones of tangents, may be found elsewhere [4]. On account of Theorem 1, solving P2′ is equivalent to finding a basic solution (an extreme point) x′ ∈ F such that V_p(ĉ) ⊆ T⁺(F, x′). The task of testing whether this inclusion holds for a given extreme point x⁰ is a fastidious one, because it necessitates infinitely many verifications. The following proposition turns out to be helpful in connection with this problem.

Proposition 2. V_p(ĉ) ⊆ T⁺(F, x⁰) if and only if M ⊆ T⁺(F, x⁰), where

M = {(m₁, ..., m_n) ∈ ℝⁿ | m_j = V_p^min(ĉ_j) or m_j = V_p^max(ĉ_j) ∀j}.

Proof. (1) Necessity. Trivial, because M ⊆ V_p(ĉ).
(2) Sufficiency. The idea of the proof is most easily conveyed in geometric language. Let

K = Π_{j=1}^{n} [V_p^min(ĉ_j), V_p^max(ĉ_j)],

where Π denotes the Cartesian product. It is clear that V_p(ĉ) ⊆ K, and the fact that M is the set of extreme points of K is straightforward; as a matter of fact, elements of M are the intersections of the hyperplanes forming K. Suppose now that M ⊆ T⁺(F, x⁰); then all extreme points of K are in T⁺(F, x⁰) and, by the convexity of T⁺(F, x⁰), K ⊆ T⁺(F, x⁰). The fact that V_p(ĉ) ⊆ K completes the proof.

Proposition 3. Assume V_p(ĉ) is finite, say V_p(ĉ) = {c¹, ..., c^k}. Then V_p(ĉ) ⊆ T⁺(F, x⁰) if and only if there exist vectors u^j such that

c^j − u^j A ≤ 0,  j = 1, ..., k,
(c^j − u^j A) x⁰ = 0,  j = 1, ..., k,
(b − A x⁰) u^j = 0,  j = 1, ..., k,   (3)
u^j ≥ 0.

Proof. As x⁰ ∈ F, we have b − Ax⁰ ≥ 0; this last inequality together with the system (3) is equivalent to the statement that the Kuhn-Tucker conditions for optimality are satisfied by (x⁰, u^j) for the program

max c^j x,  x ∈ F,

and by Theorem 1, c^j ∈ T⁺(F, x⁰) ∀j, i.e. V_p(ĉ) ⊆ T⁺(F, x⁰).

Proposition 4. If

max_{c ∈ M} max_{y ∈ S} cy ≤ 0,

then V_p(ĉ) ⊆ T⁺(F, x⁰). (Here S = {y ∈ ℝⁿ | A⁰y ≤ 0} and A⁰ is the matrix formed by the rows A_i of A verifying the relation A_i x⁰ = b_i; if A_i x⁰ < b_i ∀i, then A⁰ = 0(1×n).)

Proof. Let c^l ∈ M; then c^l y ≤ max_{c ∈ M} max_{y ∈ S} cy ≤ 0, ∀y ∈ S. As S coincides with T(F, x⁰) (see [1]), we have c^l y ≤ 0, ∀y ∈ T(F, x⁰); but, by definition, T⁺(F, x⁰) = {c | cy ≤ 0 ∀y ∈ T(F, x⁰)}. We then have c^l ∈ T⁺(F, x⁰), i.e. M ⊆ T⁺(F, x⁰), and by Proposition 2, V_p(ĉ) ⊆ T⁺(F, x⁰) follows, as desired.
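Propositions 2 and 4 together yield a finite test: it suffices to check the corner vectors M of the box spanned by V_p^min and V_p^max against the cone S. A minimal Python sketch, under the assumption that S is handed to us as a finite set of generating directions (the function names and example data are illustrative, not from the paper):

```python
from itertools import product

def corner_set(vp_min, vp_max):
    """M: all vectors whose j-th component is V_p^min(c_j) or V_p^max(c_j)."""
    return list(product(*zip(vp_min, vp_max)))

def inclusion_test(vp_min, vp_max, S_generators):
    """Sufficient condition of Proposition 4: max over corners c in M and
    directions y in S of c.y is <= 0. Checking the generators of S suffices,
    since c.y <= 0 is preserved under the conic combinations forming S."""
    return all(sum(cj * yj for cj, yj in zip(c, y)) <= 0
               for c in corner_set(vp_min, vp_max)
               for y in S_generators)

# Corners of the box [1, 2] x [0.5, 1] against a cone generated by
# directions with non-positive components: every product c.y is <= 0.
print(inclusion_test([1.0, 0.5], [2.0, 1.0], [(-1.0, 0.0), (0.0, -1.0)]))  # True
print(inclusion_test([1.0, 0.5], [2.0, 1.0], [(1.0, -1.0)]))               # False
```

The 2ⁿ corners of M grow quickly with n, but for the moderate n typical of expert-elicited coefficients the enumeration stays cheap.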
The procedure shown in Figure 1 may easily be implemented on a computer, and provides a solution for P2′, hence a solution of the original program P1 that is optimal for the most favourable realisation of ĉ. Here x¹, ..., x^p denote the extreme points of F; these points may be obtained via an enumeration scheme or by parametrizing an arbitrary linear function.

3.2. A semi-infinite approach for P1
Figure 1. Procedure for finding a solution of program P2′ (flowchart omitted: the extreme points x¹, x², ... of F are tested in turn; the procedure stops with x^k as a solution of P2′ when the inclusion test succeeds, or reports that P2′ has no solution once the extreme points are exhausted).

Let 0 < β₁ < β₂ < ... < β_s ≤ 1, and consider partitions of Supp ĉ₁, ..., Supp ĉ_n (Supp denotes support), denoted T₁¹, ..., T₁ˢ; ...; T_n¹, ..., T_nˢ respectively, defined as follows:

T_j¹ = (ĉ_j)_{β_s},
T_j^k = (ĉ_j)_{β_{s−k+1}} − (ĉ_j)_{β_{s−k+2}},  k = 2, ..., s,

where (ĉ_j)_β denotes the β-cut of ĉ_j.

Let now T^k = T₁^k × ... × T_n^k, and let δ be a positive real number chosen to penalize the vectors (t₁, ..., t_n) ∈ ℝⁿ with a lower degree of compatibility with the n-ary possibility distribution of (ĉ₁, ..., ĉ_n) (i.e. those (t₁, ..., t_n) for which min(Π_{ĉ₁}(t₁), ..., Π_{ĉ_n}(t_n)) is low). Consider the mathematical program:

max α,
tx ≥ α,  ∀t ∈ T¹,
α − δ ≤ tx ≤ α,  ∀t ∈ T²,
α − 2δ ≤ tx ≤ α − δ,  ∀t ∈ T³,
...
α − (s−1)δ ≤ tx ≤ α − (s−2)δ,  ∀t ∈ Tˢ,
x ∈ F.   (P4)

P4 may be written as

max α,
α − tx ≤ 0,  t ∈ T¹,
α − δ − tx ≤ 0,  tx − α ≤ 0,  t ∈ T²,
...
α − (s−1)δ − tx ≤ 0,  tx − α + (s−2)δ ≤ 0,  t ∈ Tˢ,
x ∈ F.   (P4′)

P4′ is a semi-infinite program; as a matter of fact, it has an infinite number of constraints. It is worth mentioning that a constraint of the form tx ≤ r (r being a small threshold) may be added to P4′ to penalize t ∉ ∪_k T^k. Let us now discuss relationships between P4′ and P1.

Proposition 5. Let (α⁰, x⁰) be a solution of P4′; then

Poss(ĉx⁰ ≥ α⁰) = max_{x ∈ F} Poss(ĉx ≥ α⁰)

(Poss denotes possibility).

Proof. By virtue of the rules for combining possibility distributions [2,12],

Poss(ĉx⁰ ≥ α⁰) = sup_{t : tx⁰ ≥ α⁰} Π_ĉ(t),   (4)

where Π_ĉ is the n-ary possibility distribution, i.e.

Π_ĉ(t) = min(Π_{ĉ₁}(t₁), ..., Π_{ĉ_n}(t_n)),  t = (t₁, ..., t_n).

Consider t⁰ ∈ V_p(ĉ); then Π_ĉ(t) ≤ Π_ĉ(t⁰) ∀t and

sup_t Π_ĉ(t) = Π_ĉ(t⁰).   (5)

Furthermore, t⁰ ∈ T¹ and t⁰x⁰ ≥ α⁰, because (α⁰, x⁰) is feasible for P4′. Hence Π_ĉ(t) ≤ Π_ĉ(t⁰) for every t such that tx⁰ ≥ α⁰, and

Poss(ĉx⁰ ≥ α⁰) = sup_{t : tx⁰ ≥ α⁰} Π_ĉ(t) = Π_ĉ(t⁰) = sup_t Π_ĉ(t).
Now let x ∈ F; then

Poss(ĉx ≥ α⁰) = sup_{t : tx ≥ α⁰} Π_ĉ(t) ≤ sup_t Π_ĉ(t) = Poss(ĉx⁰ ≥ α⁰),

as desired.

If (α⁰, x⁰) is a solution of P4′, Proposition 5 affords us a justification for referring to x⁰ as a satisfying solution for P1. Furthermore, x⁰ satisfies the realistic requirement of achieving better values of cx for vectors c with a high degree of compatibility with ĉ, worse values for vectors c having a low degree of compatibility, and average values in intermediary situations. Naturally the question arises how to find a solution of P4′; we now move to this problem.

A cutting-plane method for P4′

The following notation will facilitate the subsequent discussion:

g₁(α, x, t) = α − tx,
g₂(α, x, t) = max(α − δ − tx, tx − α),
...
g_s(α, x, t) = max(α − (s−1)δ − tx, tx − α + (s−2)δ),

D = {(α, x) ∈ ℝⁿ⁺¹ | g_k(α, x, t) ≤ 0 ∀t ∈ T^k, k = 1, ..., s}.

For each k, let (T^{kp})_{p ≥ 1} be finite subsets of T^k such that p ≤ q implies T^{kp} ⊆ T^{kq}, and let

D^p = {(α, x) ∈ ℝⁿ⁺¹ | g_k(α, x, t) ≤ 0 ∀t ∈ T^{kp}, k = 1, ..., s}.

Then the algorithm shown in Figure 2 offers a solution for the program P4′. The justification of its stopping rule is given below.

Proposition 6. If L^p ≤ 0, where L^p = max_k max_{t ∈ T^k} g_k(α^p, x^p, t), and (α^p, x^p) is a solution of the program

max φ(α, x),  (α, x) ∈ D^p

(with φ(α, x) = α), then (α^p, x^p) is a solution of P4′.

Proof. By construction we have T^{k1} ⊆ ... ⊆ T^{kp} ⊆ T^{k(p+1)} ⊆ ... ⊆ T^k, and hence D ⊆ ... ⊆ D^{p+1} ⊆ D^p ⊆ ... ⊆ D¹. We then have

φ(α*, x*) ≤ ... ≤ φ(α^{p+1}, x^{p+1}) ≤ φ(α^p, x^p) ≤ ... ≤ φ(α¹, x¹),   (6)

where (α*, x*) is a solution of P4′. L^p ≤ 0 implies that g_k(α^p, x^p, t) ≤ 0 ∀t ∈ T^k, k = 1, ..., s, i.e. (α^p, x^p) ∈ D. The optimality of (α*, x*) for P4′ and (6) yield

φ(α^p, x^p) ≥ φ(α*, x*) ≥ φ(α^p, x^p),

i.e. φ(α*, x*) = φ(α^p, x^p), and the optimality of (α^p, x^p) is established.

Furthermore, the following convergence statement can be established.

Proposition 6. If D ≠ ∅ and D¹ is compact, then the sequence (α^p, x^p) contains a subsequence that converges to an optimal solution of P4′.

Figure 2. Algorithm for finding a solution of program P4′ (flowchart omitted: choose finite T^{kp} ⊆ T^k; find (α^p, x^p) solving max φ(α, x) over (α, x) ∈ D^p; compute L^p = max_k max_{t ∈ T^k} g_k(α^p, x^p, t); if L^p ≤ 0, stop with (α^p, x^p) a solution of P4′; otherwise set p = p + 1 and repeat).

Numerical example. Consider the program

max ĉ₁x₁,  0 ≤ x₁ ≤ 1,   (P5)

where ĉ₁ is a possibilistic variable characterized by the possibility distribution shown in Figure 3. Consider the following partition of the support of ĉ₁:

T₁¹ = (ĉ₁)₁ = {1}  and  T₁² = (ĉ₁)₀.₅ − T₁¹ = [0, 1[.

P4′ can then be written as

max α,
α − tx ≤ 0,  t ∈ T₁¹,
α − δ − tx ≤ 0,  tx − α ≤ 0,  t ∈ T₁²,
0 ≤ x ≤ 1.   (P6)

Let now T₁^{11} = {1}, T₁^{21} = {0, 0.5} and δ = 0.7. The resulting discretized program yields the solution (0.7, 0.7). Furthermore, we have

g₁(α, x₁, t) = α − tx₁,
g₂(α, x₁, t) = max(α − δ − tx₁, tx₁ − α).

It is easy to check that L¹ ≤ 0, i.e. the optimality criterion is satisfied; hence (0.7, 0.7) is the solution of the semi-infinite program P6, and x₁ = 0.7 is a satisfying solution for the original possibilistic program P5.

Another approach in a similar vein has been put forward by Rommelfanger [9], who substitutes for the components of ĉ a number r of α-level intervals and arrives at a multiple objective program. The approaches described in Sections 3.1 and 3.2, as well as Rommelfanger's proposal, need the elicitation of particular elements on the support of the possibilistic variables involved; hence the name 'passive'. One can equally well find a solution of P1 directly; this is the topic of the following section.
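The discretized program of the numerical example is small enough to verify by brute force. The sketch below is illustrative, not from the paper: the grid step and helper names are assumptions, as is the analytic form taken for the Figure 3 distribution, Π_{ĉ₁}(t) = 0.5 + 0.5t on [0, 1], which is one shape consistent with the cuts (ĉ₁)₁ = {1} and (ĉ₁)₀.₅ = [0, 1] used in the example.

```python
# Brute-force check of the discretised program from the numerical example:
#   max alpha  s.t.  alpha - t*x <= 0          for t in T11 = {1}
#                    alpha - delta - t*x <= 0  for t in T21 = {0, 0.5}
#                    t*x - alpha <= 0          for t in T21
#                    0 <= x <= 1,   with delta = 0.7.
EPS = 1e-9

def feasible(alpha, x, delta=0.7, T11=(1.0,), T21=(0.0, 0.5)):
    return (all(alpha - t * x <= EPS for t in T11)
            and all(alpha - delta - t * x <= EPS for t in T21)
            and all(t * x - alpha <= EPS for t in T21))

def solve(step=0.01):
    grid = [i * step for i in range(int(round(1 / step)) + 1)]
    best = (-1.0, None)
    for x in grid:
        for alpha in grid:
            if feasible(alpha, x) and alpha > best[0]:
                best = (alpha, x)
    return best

alpha0, x0 = solve()
print(round(alpha0, 2), round(x0, 2))   # 0.7 0.7

# Possibility level of Proposition 5, with the assumed distribution of c1:
def pi_c1(t):
    return 0.5 + 0.5 * t if 0.0 <= t <= 1.0 else 0.0

poss = max(pi_c1(t / 1000) for t in range(1001) if (t / 1000) * x0 >= alpha0 - EPS)
print(poss)   # 1.0 -- the sup is attained at t = 1, the more possible value
```

The grid search recovers (α⁰, x⁰) = (0.7, 0.7), and Poss(ĉ₁x⁰ ≥ α⁰) = 1 since t = 1, the more possible value of ĉ₁, satisfies tx⁰ ≥ α⁰, exactly as in the proof of Proposition 5.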
Figure 3. Possibility distribution of the variable ĉ₁ (graph omitted; the axis marks possibility levels 0.5 and 1).

4. An active approach for P1
Definition. x⁰ ∈ F is β-possibly optimal for P1 if there is no x ∈ F such that

Poss(ĉx > ĉx⁰) ≥ β,

where

Poss(ĉx > ĉx⁰) = sup_{t : tx > tx⁰} Π_ĉ(t)

(extension principle; see [2,12]). This definition generalizes quite naturally that of optimality; as a matter of fact, an optimal solution of a mathematical program is nothing but a 1-possibly optimal solution. The concept of necessity may also be used to define a β-necessarily optimal solution. In a possibilistic programming context, a β-possibly optimal solution (with β ≈ 1) is of great interest because it achieves a great possibility degree of optimality. We now move to the problem of characterizing such solutions and, most importantly, to providing ways of finding them.

Proposition 7. x⁰ is β-possibly optimal for P1 if and only if x⁰ is optimal for the program

max cx,  x ∈ F,  ∀c ∈ S_β,   (P7)

where S_β = {c ∈ ℝⁿ | c_j ∈ (ĉ_j)_β}.

Proof. (Necessity). Assume x⁰ is β-possibly optimal for P1 and non-optimal for P7. Then there is an x¹ ∈ F and a q ∈ S_β such that qx¹ > qx⁰. As q ∈ S_β, we have Π_ĉ(q) ≥ β and consequently

sup_{t : tx¹ > tx⁰} Π_ĉ(t) ≥ β,

i.e. Poss(ĉx¹ > ĉx⁰) ≥ β, contradicting the β-possible optimality of x⁰.

(Sufficiency). Assume that x⁰ is optimal for P7 and not β-possibly optimal for P1. There is then an x² ∈ F such that Poss(ĉx² > ĉx⁰) ≥ β, i.e.

sup_{t : tx² > tx⁰} Π_ĉ(t) ≥ β.

Therefore there is a p ∈ ℝⁿ such that px² > px⁰ and Π_ĉ(p) ≥ β, i.e. there is a p ∈ S_β such that x⁰ is not optimal for the program

max px,  x ∈ F,

contradicting the optimality of x⁰ for P7.

By virtue of Theorem 1 (Section 3), finding a solution of P7 is equivalent to finding an x⁰ ∈ F such that S_β ⊆ T⁺(F, x⁰). Furthermore, the reasoning used to establish Proposition 2 may be used to prove the following assertion.

Proposition 8. S_β ⊆ T⁺(F, x⁰) if and only if S_β′ ⊆ T⁺(F, x⁰), where S_β′ denotes the finite set of vectors whose j-th component is an endpoint of the interval (ĉ_j)_β, in analogy with the set M of Proposition 2.

5. Extensions

5.1. Possibilistic constraint case

Assume now that the elements of A and b are also possibilistic, i.e. we have the program

max ĉx,  Âx ≤ b̂,  x ≥ 0.   (P1′)

Definition. x ∈ ℝⁿ, x ≥ 0, is α-possibly feasible for P1′ (α = (α₁, α₂, ..., α_m), α_i ∈ ]0, 1]) if

Poss(Â_i x ≤ b̂_i) ≥ α_i,  i = 1, ..., m.

It is clear that an α-possibly feasible action (α_i ≈ 1) is interesting for a decision-maker, because such an action maintains the possibility of achievement of each constraint to a great extent. This leads us to consider the following program as a substitute for P1′:

max ĉx,  x ∈ X^α,   (P1″)

where X^α is the set of α-possibly feasible actions for P1′. P1′ is then reduced to a program with a possibilistic objective function and crisp constraints, which may be solved by the methods discussed in Sections 3 and 4. A discussion of how to find X^α, or a subset of X^α, may be found elsewhere [5].

5.2. Multiple objective case

The problem to be considered now is that of finding a solution of the program

max_{x ∈ F} (ĉ¹x, ..., ĉ^k x).   (P1‴)
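Returning briefly to Section 4: Proposition 7, combined with the finite reduction of Proposition 8, suggests a simple computational test for β-possible optimality when the β-cuts (ĉ_j)_β are intervals. It suffices that x⁰ maximize cx over F for every corner objective c of the box S_β, since a box is the convex hull of its corners and the set of objectives for which x⁰ is optimal is the convex cone T⁺(F, x⁰). A small Python sketch with illustrative names and data, assuming F is a polytope given by its extreme points:

```python
from itertools import product

def beta_possibly_optimal(x0, vertices, cut_lows, cut_highs, tol=1e-9):
    """Test of Proposition 7 via the corner reduction of Proposition 8:
    x0 is beta-possibly optimal iff no vertex of F beats it under any
    corner objective c of the box S_beta = prod_j (c_j)_beta."""
    for c in product(*zip(cut_lows, cut_highs)):     # corners of S_beta
        val0 = sum(cj * xj for cj, xj in zip(c, x0))
        if any(sum(cj * vj for cj, vj in zip(c, v)) > val0 + tol
               for v in vertices):
            return False
    return True

# F = the unit square; beta-cuts of the two coefficients: [1, 2] and [0.5, 1].
verts = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(beta_possibly_optimal((1, 1), verts, [1.0, 0.5], [2.0, 1.0]))  # True
print(beta_possibly_optimal((1, 0), verts, [1.0, 0.5], [2.0, 1.0]))  # False
```

Comparing vertices is enough because a linear objective attains its maximum over a polytope at an extreme point, the same observation exploited by the procedure of Figure 1.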
The concept of efficiency can be relaxed in the following way.

Definition. x⁰ ∈ F is β-possibly efficient for P1‴ if there is no x ∈ F such that

Poss(ĉ¹x ≥ ĉ¹x⁰, ..., ĉ^i x > ĉ^i x⁰, ĉ^{i+1}x ≥ ĉ^{i+1}x⁰, ..., ĉ^k x ≥ ĉ^k x⁰) ≥ β

for some index i.

The following result gives a characterization of β-possibly efficient actions for P1‴.

Theorem 9. x⁰ is β-possibly efficient for P1‴ if and only if x⁰ is efficient for the program

max ((ĉ¹)_β x, ..., (ĉ^k)_β x),  x ∈ F,   (P1⁗)

where (ĉ^i)_β = (ĉ₁^i)_β × ... × (ĉ_n^i)_β and (ĉ_j^i)_β is the β-cut of the possibilistic variable ĉ_j^i.

The proof of this result, as well as some ways of solving the problem P1⁗, may be found elsewhere [6].

6. Concluding remarks

In this paper we have presented some ways of handling a linear program with a possibilistic objective function. Our proposals offer solutions which are satisfying in the sense of being possibly feasible and/or possibly optimal (efficient) to some extent. Some fruitful directions for further investigation include a deeper exploration of the links between fuzzy and semi-infinite programming, the building of user-friendly software for the procedures described here, and the application of these suggestions to concrete problems. The incorporation of both randomness and fuzziness in an optimization context is, in our opinion, an important issue. In this respect, we are just at the beginning. Let us hope that work in these directions will proceed on a larger scale in the near future, thus contributing to better modelling in decision-making under a turbulent environment.
References

[1] Charnetski, J.R., "Linear programming with partial information", European Journal of Operational Research 5 (1980) 254-261.
[2] Dubois, D., and Prade, H., Fuzzy Sets and Systems: Theory and Applications, Academic Press, New York, 1980.
[3] Glashoff, K., and Gustafson, S.-Å., Linear Optimization and Approximation, Springer, Berlin, 1983.
[4] Guignard, M., "Generalized Kuhn-Tucker conditions for mathematical programming problems in a Banach space", SIAM Journal on Control 7 (1969).
[5] Luhandjula, M.K., "On possibilistic linear programming", Fuzzy Sets and Systems 18 (1) (1986) 15-30.
[6] Luhandjula, M.K., "Multiple objective programming problems with possibilistic coefficients", Fuzzy Sets and Systems 21 (2) (1987) 135-145.
[7] Luhandjula, M.K., "Fuzzy optimization: An appraisal", submitted.
[8] Orlovski, S.A., "Mathematical programming problems with fuzzy parameters", Working paper, IIASA, 1984.
[9] Rommelfanger, H., Hanuschek, R., and Wolf, J., "Linear programming with fuzzy objective functions", presented at the IFSA Congress, Palma, Spain, 1985.
[10] Tanaka, H., Ichihashi, H., and Asai, K., "Fuzzy decision in linear programming problems with trapezoid fuzzy parameters", in: J. Kacprzyk and R.R. Yager (eds.), Decision Support Systems Using Fuzzy Sets and Possibility Theory.
[11] Yager, R.R., "A foundation for a theory of possibility", Journal of Cybernetics 10 (1980) 177-204.
[12] Zadeh, L.A., "Fuzzy sets as a basis for a theory of possibility", Fuzzy Sets and Systems 1 (1978).
[13] Zimmermann, H.-J., "Description and optimization of fuzzy systems", International Journal of General Systems 2 (1976) 209-215.
[14] Zimmermann, H.-J., Fuzzy Set Theory and its Applications, Kluwer-Nijhoff, Leiden, 1985.