Copyright © IFAC Large Scale Systems, Warsaw, Poland

NONFEASIBLE HIERARCHICAL MULTICRITERIA METHODS

K. Tarvainen

Systems Theory Laboratory, Helsinki University of Technology, Espoo, Finland

Abstract. General nonfeasible (price-coordination, interaction-balance) hierarchical optimization algorithms for large-scale systems with multiple objectives are considered. The systems studied consist of connected subsystems with multiple objectives (subgoals, indicators); the overall objectives are functions of the subsystem objectives. It is shown that, unlike in the single-objective case, there is no general transformation (modification) of the objective vectors of the subsystems (cf. the additional price term in the single-objective case). However, a series of transformed subproblems can be defined such that the limit solution can be taken as the subsystem solution. That is, in the general case where the way the decision maker expresses his preference is free, an additional iteration is needed in each subproblem. A multicriteria duality theory is reviewed. Based on this theory, a nonfeasible algorithm is rederived in which the subproblems are solved by multicriteria methods using explicit trade-offs (such as the SWT method and Geoffrion's method). The derivation using the duality theory conveniently gives us a coordination algorithm, sufficient convexity properties, and a new suboptimal stopping rule.

Keywords. Hierarchical systems; large-scale systems; decision theory; convex programming; multicriteria optimization.

INTRODUCTION

In recent years, research on large-scale systems has paid attention to the important fact that most large systems are characterized by several conflicting attributes (for a review of the literature, see Tarvainen [1981]).

This paper focuses on nonfeasible (price-coordination, interaction-balance) hierarchical multiobjective methods. These methods have been studied by Tatjewski [1977] and Findeisen et al. [1980] using the utility-function approach. Sakawa and Seo [1980] propose a straightforward approach, where single-objective problems given by the constraint method are decomposed using standard price coordination. Tarvainen and Haimes [1980, 1981] and Haimes and Tarvainen [1981] consider a general problem formulation and derive hierarchical schemes where the subproblems are multicriteria problems. These subproblems are defined in terms of trade-offs; that is, they are supposed to be solved by multiobjective techniques using explicit trade-offs (such as the SWT method and Geoffrion's method).

This paper deals first with the question of whether a general multicriteria subproblem formulation is possible. It is indicated that, in general, it is not possible; but with an additional iteration, a general formulation can be given. Furthermore, by using a generalized Lagrangian, some earlier schemes are rederived with an easier convergence analysis and with a new suboptimal stopping rule.

PROBLEM FORMULATION

This section formulates a static problem, which is studied in the following sections. Consider a system consisting of N interconnected subsystems. In each subsystem i = 1, ..., N, let

y_i = the output vector of subsystem i,
x_i = the input vector of subsystem i from other subsystems,
f^i = (f^i_1, ..., f^i_{n_i})^T = the vector of objectives of subsystem i,
n_i = the number of objectives of subsystem i,
m_i = the decision vector of subsystem i.

Here n_i >= 1. In the following discussion it is assumed that n_i >= 2. The case with a single objective in a subsystem is treated by making obvious modifications and by dropping vacuous expressions.


The problem formulation is as follows. Let the subsystems' objectives be functions of the subsystem variables:

f^i_j = f^i_j(x_i, m_i, y_i),   i = 1, ..., N;  j = 1, ..., n_i.

In general, the whole system's objective vector does not directly consist of all the f^i_j's, and one could call the f^i_j's subgoals or indicators. For example, when some subsystems have cost as an objective, one of the overall system's objectives may be the total cost, which is the sum of these subsystems' cost objectives. So, denoting

f = (f^1, ..., f^N)^T = the vector of all subsystem objectives,

we take

F = (F_1(f), ..., F_n(f))^T = the objective vector of the overall system,

with n = the number of overall objectives. The problem is

min [F_1(f^1, ..., f^N), ..., F_n(f^1, ..., f^N)]^T   (1)

subject to

y_i = h_i(x_i, m_i),   i = 1, ..., N,   (2)

g_i(x_i, m_i, y_i) <= 0,   i = 1, ..., N,   (3)

x_i = sum_{j=1}^N C_ij y_j,   i = 1, ..., N,   (4)

where

f^i = [f^i_1(x_i, m_i, y_i), ..., f^i_{n_i}(x_i, m_i, y_i)]^T,   i = 1, ..., N.

Figure 1 depicts the structure of the system objectives.

[Figure 1: Subsystems 1, ..., N with decisions m_i, inputs x_i, and outputs y_i (solid lines); the subsystem objective vectors f^1, ..., f^N, with components f^i_1, ..., f^i_{n_i}, collected into f = (f^1, ..., f^N) (broken lines).]

Fig. 1. Structure of the system (solid line) and objectives (broken line).


Here, Eqs. (2) are the system equations for the subsystems. Eqs. (3) determine the constraints on the subsystems; the g_i's are vector-valued functions. For brevity, equality constraints (e.g., r = 0) are transferred to inequality constraints (r <= 0, -r <= 0). Eqs. (4) represent the couplings between the subsystems. Usually, the C_ij matrices consist of zero and unity elements, where a unity element indicates a connection. In the following, however, the C_ij matrices may be any constant matrices.
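To make the coupling constraint (4) concrete, the following sketch computes each subsystem's input x_i = sum_j C_ij y_j and the interconnection-balance residual that the price coordination of the later sections drives to zero. The three-subsystem ring, the 0/1 coupling matrix, and all numerical values are invented for illustration; they are not from the paper.

```python
# Illustrative coupling: three subsystems, scalar outputs y_i.
# C[i][j] = 1 means subsystem i takes subsystem j's output as input.
C = [
    [0, 1, 0],   # subsystem 1 is fed by subsystem 2
    [0, 0, 1],   # subsystem 2 is fed by subsystem 3
    [1, 0, 0],   # subsystem 3 is fed by subsystem 1
]

def coupled_inputs(C, y):
    """x_i = sum_j C_ij * y_j  (Eq. (4))."""
    return [sum(C[i][j] * y[j] for j in range(len(y))) for i in range(len(C))]

# In a nonfeasible method, each subproblem also proposes its own input
# estimate x; coordination drives the balance residual x - C*y to zero.
y = [2.0, 5.0, 7.0]          # outputs proposed by the subproblems
x = [5.0, 7.0, 2.0]          # input estimates proposed by the subproblems
residual = [x[i] - xi for i, xi in enumerate(coupled_inputs(C, y))]
print(residual)              # balanced here: [0.0, 0.0, 0.0]
```

When the residual is nonzero, the proposals are inconsistent and the coordinator must adjust the prices, as in the algorithms below.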

Separable overall constraints can also be present (cf. Tarvainen [1981]), but we shall leave them out for notational simplicity.

QUESTION OF A GENERAL SUBSYSTEM FORMULATION

In the nonfeasible single-objective method, an additional (price) term is added to the subsystems' objectives. We may ask whether a similar transformation can be used in the multiobjective case. In general, it cannot, as the following example shows. Consider the system depicted in Fig. 2.

[Figure 2: Subsystem 1, with decision m_1 and input x_1, has objectives f^1_1, f^1_2, which are the overall objectives F_1, F_2; Subsystem 2, with decision m_2, has objectives f^2_1, f^2_2, which are the overall objectives F_3, F_4; the output y_2 of Subsystem 2 is the input x_1 of Subsystem 1.]

Fig. 2. An example of a problem.

Here, the overall objectives directly consist of the subsystems' objectives. Assume that the DM has the following separable (not explicitly known) loss function:

L(F) = L_1(F_1, F_2) + L_2(F_3, F_4) = L_1(f^1) + L_2(f^2).

We would like to have transformed subproblems with a coordination parameter, denoted here by p, such that, after coordination, the subproblems' solutions yield the overall preferred solution. General forms of transformations are considered; e.g., the first transformed subproblem is assumed to have the following form:

min_{x_1, m_1} [g_1(f^1(x_1, m_1), p, x_1), g_2(f^1(x_1, m_1), p, x_1)]^T,

where g_1: R^3 -> R and g_2: R^3 -> R are the transformation functions. (In the single-objective case, a corresponding transformation is g(f, p, x_1) = f - p^T x_1.) The transformations g_1 and g_2 should naturally be general for every underlying loss function and for all forms of objective functions (that satisfy any necessary convexity requirements). Hence, in terms of loss functions, we pose the following question. Do transformations g_1, g_2, g_3, g_4: R^3 -> R exist such that, for every f^1, f^2, L_1 and L_2 (satisfying any convexity requirements), there exists a p* such that a solution (x_1*, m_1*, y_2*, m_2*) of the problem

min_{x_1, m_1, y_2, m_2} (L_1(f^1(x_1, m_1)) + L_2(f^2(y_2, m_2)))   (5)

subject to the coupling x_1 = y_2

coincides with the solutions of the following problems:


min_{x_1, m_1} [g_1(f^1(x_1, m_1), p*, x_1), g_2(f^1(x_1, m_1), p*, x_1)]^T   (6)

and

min_{y_2, m_2} [g_3(f^2(y_2, m_2), p*, y_2), g_4(f^2(y_2, m_2), p*, y_2)]^T.   (7)

To get a contradiction, assume that the above statement is true. We first make the following assumption about p*: if L_1 is changed in a way that does not change the solution of the original problem (5), then p* does not change. This holds true in the single-objective case. From the point of view of the first subproblem, p* carries information about the impact of x_1 on the second subproblem at the optimal values. If the optimal values and the second subproblem do not change, it is natural to postulate that p* remains the same.

Consider the following change in L_1 that does not change the optimal values x_1*, m_1*, y_2* and m_2*. Let a new L_1, denoted by L~_1, be

L~_1(f^1(x_1, m_1)) = L_1(f^1(x_1, m_1)) + a^T |f^1(x_1, m_1) - f^{1*}|,

with f^{1*} = f^1(x_1*, m_1*) and a = a constant (nonnegative) vector. The corresponding subproblem (6), with g = (g_1, g_2)^T, is

min_{x_1, m_1} L_1(g(f^1(x_1, m_1), p*, x_1)) + a^T |g(f^1(x_1, m_1), p*, x_1) - f^{1*}|.   (6')

The solution of (6') should be (x_1*, m_1*) for all a. This is clearly possible only if

g(f^{1*}, p*, x_1*) = f^{1*},   (8)

since otherwise the second term in (6') would assume arbitrarily large values at (x_1*, m_1*), which then could not be an optimum value. According to Eq. (8), the transformation g should be such that, at an optimum, the optimum value of f^1 is mapped to itself. Setting up such a transformation would require a knowledge of the optimum we are seeking; hence, in general, it is clearly not possible to have the g transformations we require. In other words, we can see the impossibility of the existence of the g's as follows. Since the optimal values f^{1*} and x_1* can be anything and the optimal value of p* is unknown, the only candidate for g satisfying Eq. (8) would be the trivial transformation g(f^{1*}, p*, x_1*) = f^{1*} for all p*, x_1*, f^{1*}. This transformation does not, however, work. It is, however, possible to estimate the optimal values of x and f and to improve the estimates iteratively. We will return to this possibility. First, a duality theory is reviewed.

A DUALITY THEORY FOR MULTIOBJECTIVE OPTIMIZATION

Among a large variety of duality theories for multiobjective decision-making, Tarvainen [1982] presents a theory that is well suited to hierarchical algorithms. This duality theory will be reviewed here for the following general multiobjective problem:

min_x [f_1(x), ..., f_n(x)]^T   (9a)

subject to

g(x) = 0.   (9b)

The following assumptions are made: (a) the f_i are convex and differentiable with continuous first and second derivatives, and g(x) is linear; (b) there exists (but is not necessarily known) a loss function (negative utility function) U: R^n -> R with the following properties: (b1) U is strictly convex, (b2) U is order-preserving (if f^1 > f^2, then U(f^1) > U(f^2)), and (b3) U is differentiable with continuous first and second derivatives. For this problem, with these assumptions, we have the following theorem.

Theorem. Under the above assumptions (a) and (b), if x° solves the multicriteria problem given by Eq. (9), then there is a vector u° which solves the following dual problem:

max_u min_x [f_1(x) + u^T g(x), f_2(x), ..., f_n(x)]^T,   (10)

and x° solves the inner problem

min_x [f_1(x) + u°^T g(x), f_2(x), ..., f_n(x)]^T.

Furthermore, we can show that the dual function, in terms of the loss function,

d(u) = min_x U(f_1(x) + u^T g(x), f_2(x), ..., f_n(x)),

is a concave function with the derivative

grad d(u) = (dU/dF_1) g(x(u)),   (11)

where x(u) denotes the corresponding inner minimizer. Hence, we can use the same gradient algorithm as in the single-objective case to solve the dual problem (and the primal problem at the same time); that is,

u^{k+1} = u^k + c^k (dU/dF_1) g(x^k).   (12)

Here, the superscript k denotes the iteration, and the parameter c^k is an adjustable step size. (Note that dU/dF_1 in Eq. (12) is positive by Assumption (b2).)

A NONFEASIBLE ALGORITHM USING TRADE-OFFS

Let us apply the duality approach of the previous section to Problem (1)-(4). Assume a solution x° exists. The dual problem for given v = (v_1, ..., v_N), in terms of the loss function U, is as follows:

min U(F_1 + sum_{i=1}^N v_i^T (sum_{j=1}^N C_ij y_j - x_i), F_2, ..., F_n).   (13)

The necessary conditions (which are also, by the convexity assumptions, sufficient) are, in terms of the subsystem variables,

(dU/dF_1) (d/dz_i)(F_1 + sum_{i=1}^N v_i^T (sum_{j=1}^N C_ij y_j - x_i)) + sum_{j=2}^n (dU/dF_j)(dF_j/dz_i) = 0,   i = 1, ..., N,   (14)

where

z_i = (x_i, m_i, y_i)^T,   i = 1, ..., N,

and the derivatives of U are evaluated at (F_1 + sum_{i=1}^N v_i^T (sum_{j=1}^N C_ij y_j - x_i), F_2, ..., F_n). Leaving out terms that are not functions of z_i, and dividing by dU/dF_1 (> 0), we get Eq. (15):

(d/dz_i)(F_1 + sum_{l=1}^N v_l^T C_li y_i - v_i^T x_i) + sum_{j=2}^n (dU/dF_j)/(dU/dF_1) (dF_j/dz_i) = 0,   i = 1, ..., N.   (15)

Introducing the indifferent trade-offs (marginal rates of substitution)

lambda*_1j(F) = (dU/dF_j) / (dU/dF_1),   j = 2, ..., n,

Eq. (15) becomes

(d/dz_i)(F_1 + sum_{l=1}^N v_l^T C_li y_i - v_i^T x_i) + sum_{j=2}^n lambda*_1j(F) (dF_j/dz_i) = 0,   i = 1, ..., N.   (16)

We note that Eq. (16) is a necessary and sufficient condition for the following multiobjective problems:

min_{z_i} [F_1 + sum_{l=1}^N v_l^T C_li y_i - v_i^T x_i, F_2, ..., F_n]^T,   i = 1, ..., N,   (17)

with the indifferent trade-offs lambda*_1j(F) (j = 2, ..., n) evaluated at (F_1, F_2, ..., F_n), not at (F_1 + sum_{l=1}^N v_l^T C_li y_i - v_i^T x_i, F_2, ..., F_n). This shift in trade-off evaluations is easily accomplished in multicriteria methods using trade-offs (see Tarvainen [1981], Tarvainen and Haimes [1981]).

In some cases, the functions F_j and the trade-offs lambda*_1j(F) are such that the problems of Eq. (17) are completely independent. If this is not the case, a relaxation approach can be used. That is, we first guess some values for the z_i's. When we solve a new value for a z_i in the first iteration, we use the guessed values for the other z_j's whenever needed. In the same way, in the second iteration, we use the results of the first iteration, when needed, in a subproblem solution; and so on.

As a coordination strategy, we have (cf. Eq. (12))

v_i^{k+1} = v_i^k + c^k (sum_{j=1}^N C_ij y_j^k - x_i^k),   i = 1, ..., N.   (18)

Note that the convergence of this algorithm is guaranteed when the subsystem optimizations are totally independent. If this is not the case, that is, if a relaxation approach has been used (there is no complete optimization of the dual function), convergence problems may occur. These problems can, in many cases, be avoided by underrelaxation (smoothing).

The subproblems can also be formulated in terms of the subsystem objectives by using the following application of the chain rule in Eq. (16): dF_j/dz_i = (dF_j/df^i)(df^i/dz_i). This yields the algorithms presented by Haimes and Tarvainen [1981].

NONFEASIBLE SYSTEMS USING GENERAL SUBPROBLEM FORMULATIONS

The algorithm of the previous section uses trade-offs in the subproblem formulations. In this section, more general subsystem formulations are considered.

Univariate Method

Consider the dual problem for given v_i's in Eq. (13). We can apply the univariate method to it. That is, we first fix some guessed values for z_2, ..., z_N and then seek the minimum with respect to z_1. In other words, the decision maker first solves the following problem:

min_{z_1} [F_1 + sum_{i=1}^N v_i^T (sum_{j=1}^N C_ij y_j - x_i), F_2, ..., F_n]^T,   (19)

with (z_2, ..., z_N) = a given guessed vector. Then we fix z_1 at its minimizing value and optimize with respect to z_2, keeping z_3, ..., z_N at their guessed values, and so on. This subiteration terminates when no changes in the z_i's occur. The v_i's are then changed via Eq. (18).

Subiteration in Each Subproblem

Consider an alternative approach where subiterations are carried out inside each subproblem, without any transfer of information between the subproblems. Consider Eq. (14) for a given i. Let y~_i, x~_i be solutions to this equation. As constants, they can be added to Eq. (14) in the following manner:

(dU/dF_1) (d/dz_i)(F_1 + sum_{l=1}^N v_l^T C_li (y_i - y~_i) - v_i^T (x_i - x~_i)) + sum_{j=2}^n (dU/dF_j)(dF_j/dz_i) = 0.   (20)

That is, the optimum is an optimum for the following problem:

min_{z_i} [F_1 + sum_{l=1}^N v_l^T C_li (y_i - y~_i) - v_i^T (x_i - x~_i), F_2, ..., F_n]^T.   (21)

(Note that the modification of the first term disappears at the optimum.) A relaxation (successive approximation) algorithm can be used to solve Problem (21). That is, we first guess some values for y~_i and x~_i. Then we solve Problem (21). The y_i and x_i solutions are used as new estimates of y~_i and x~_i; and so on, until the estimates converge.

SUBOPTIMAL COORDINATION

For the single-objective case, there is a very useful criterion for terminating the iteration before a complete balance has been reached (see Lasdon [1968]). The duality theory reviewed above includes a similar result for the multiobjective case (see Tarvainen [1982]). According to this, the following set of inequalities holds:

[F_1 + sum_{i=1}^N v_i^T (sum_{j=1}^N C_ij y_j - x_i), F_2, ..., F_n]^T <= [F_1°, F_2°, ..., F_n°]^T <= [F~_1, F~_2, ..., F~_n]^T.   (22)

Here, the left-hand side is evaluated at the optimum dual solution corresponding to (v_1, ..., v_N), (F_1°, ..., F_n°) is the preferred solution, and the right-hand side (F~_1, ..., F~_n) is evaluated at a feasible solution. Thus a lower-bound vector and an upper-bound vector can be shown to the decision maker. If the difference between these alternatives seems relatively small to him, the iteration can be terminated.
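The stopping rule based on the bounds (22) can be sketched on the same kind of scalarized toy system used earlier (all cost functions are invented for illustration): at any price v, the dual value gives a lower bound, any feasible (balanced) point gives an upper bound, and iteration stops when the gap between them is small.

```python
# Suboptimal stopping via the duality gap, in the spirit of Eq. (22).
# Toy system: sub 1 cost (x1 - 2)^2 (its local decision already optimal),
# sub 2 cost m2^2, coupling x1 = y2 = m2.  All functions illustrative.
def dual_value(v):
    """Lower bound: sum of subproblem optima including the price terms."""
    x1 = 2.0 + v / 2.0                 # argmin (x1 - 2)^2 - v*x1
    m2 = -v / 2.0                      # argmin m2^2 + v*m2
    return ((x1 - 2.0) ** 2 - v * x1) + (m2 ** 2 + v * m2)

def feasible_value(v):
    """Upper bound: force balance x1 = y2 and evaluate the true cost."""
    m2 = -v / 2.0
    x1 = m2                            # repaired (feasible) point
    return (x1 - 2.0) ** 2 + m2 ** 2

v, c, tol = 0.0, 0.5, 1e-4
gap = feasible_value(v) - dual_value(v)
while gap > tol:                       # stopping rule: small duality gap
    x1 = 2.0 + v / 2.0
    y2 = -v / 2.0
    v += c * (y2 - x1)                 # price update, Eq. (18)
    gap = feasible_value(v) - dual_value(v)

# For this toy the gap works out to (v + 2)^2, so it shrinks
# geometrically and the loop terminates after a few updates.
print(round(v, 4), round(gap, 6))
```

The decision maker sees the two bound values rather than the exact optimum; when they nearly agree, continuing the coordination cannot improve the solution by more than the remaining gap.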

CONCLUSIONS

A rather comprehensive treatment of nonfeasible hierarchical multiobjective methods has been given. It has been indicated that, in the general case, there is no general subsystem modification (such as the price modification in the single-objective case). However, it is shown that, with an additional iteration, a completely free way to express preference in the subsystem solutions is possible. Without such an iteration, the SWT method can be used successfully for the subproblem formulation developed. The convergence of the algorithms is based on a newly developed duality theory for the vector case. This duality theory also gives a new suboptimal stopping criterion.

ACKNOWLEDGEMENT

The author wishes to thank Prof. Y.Y. Haimes for his valuable suggestions.

REFERENCES

Findeisen, W., et al. (1980). Control and Coordination in Hierarchical Systems. Wiley, Chichester.
Haimes, Y.Y., and K. Tarvainen (1981). Hierarchical-multiobjective framework for large scale systems. In P. Nijkamp and J. Spronk (Eds.), Multiple Criteria Analysis. Gower, Hampshire.
Lasdon, L.S. (1968). Duality and decomposition in mathematical programming. IEEE Trans. Syst. Sci. Cybern., 4, 86-100.
Lasdon, L.S. (1970). Optimization Theory for Large Systems. Macmillan, London.
Sakawa, M., and F. Seo (1980). Interactive multiobjective decisionmaking for large-scale systems and its application to environmental systems. IEEE Trans. Syst., Man & Cybern., 10, 796-806.
Tarvainen, K. (1981). Hierarchical Multiobjective Optimization. Ph.D. Dissertation, Systems Engineering Department, Case Western Reserve University, Cleveland, Ohio.


Tarvainen, K. (1982). A preference-oriented duality theory for multiobjective optimization. Report B-70, Systems Theory Laboratory, Helsinki University of Technology.
Tarvainen, K., and Y.Y. Haimes (1980). Coordination of hierarchical-multiobjective systems: theory and methodology. Report No. SEDWRP-2-80, Case Western Reserve University, Cleveland, Ohio. To be published in IEEE Trans. Syst., Man & Cybern.
Tarvainen, K., and Y.Y. Haimes (1981). Hierarchical-multiobjective framework for energy storage systems. In T. Morris (Ed.), Multi-Criteria Problem Solving. Springer.
Tatjewski, P. (1977). Dual methods of multiobjective optimization. Bulletin de l'Académie Polonaise des Sciences, Série des sciences techniques, Vol. XXV, No. 3.
