Weighting factor extensions for finite multiple objective vector minimization problems


European Journal of Operational Research 36 (1988) 256-265 North-Holland

Theory and Methodology

D.J. WHITE

Dept. of Systems Engineering, University of Virginia, Thornton Hall, Charlottesville, VA 22901, USA

Abstract: This paper develops two procedures for finding the efficient set of finite-action multiple-objective problems. The first gives an exact characterization of the efficient set in terms of the weighted q-th powers of the objective functions, for a fixed q, and gives a lower bound for q. The second gives an exact characterization of the efficient set in terms of the t-th-order minima of the weighted objective functions. Some consideration is given to problems whose feasible action sets correspond to the vertices of certain polytopes, and connectedness conjectures are given for which no counterexamples have yet been found.

1. Introduction

The problem of finding optimal solutions for scalar problems with a discrete set of feasible actions can often be a difficult one. The problem of finding vector minima is much more difficult, and calls for the exploration of as many possible approaches as might make some, perhaps minor, contribution to this very difficult area. This paper explores two such approaches, which extend the standard weighting factor approach, and uses a simple example to demonstrate them, while recognizing that the essential task of converting them to effective computational procedures remains. The purpose of this paper is to present the ideas, together with some elementary results. The two procedures which will be discussed are:

(a) The complete scalarization equivalence of vector optima in terms of weighting the powers of the objective functions;

Received February 1987; revised October 1987

(b) The use of t-th-order minima for the standard weighting factor approach.

The full strength of such procedures will lie in their application to problems with special structures. A notable case is the class of all finite problems where there is a one-to-one correspondence between the members of the finite set of feasible actions and the vertices of an appropriate polytope. For example, Edmonds [1] establishes this for special Boolean linear programming equivalent forms for spanning forests (Theorem 4) and for spanning trees (Theorem 3); Dantzig [2] establishes this for a different Boolean linear program for spanning trees (Theorems 3, 4, Chapter 17) and for the standard assignment problem (Theorem 1, Chapter 15); Martins [3] establishes this for a particular Boolean linear program for routing problems and their associated trees (Propositions 1, 2); White [4] obtains similar results (Lemma 1); Mine and Osaki [5] obtain corresponding results for discounted Markov decision processes (Theorem 2.7). A second case is that of general integer programming problems and general Boolean programming problems, where, in particular, the susceptibility of the latter to branch and bound procedures may make the ideas of this paper attractive, with appropriate developments.

In writing this paper, it is recognized that there are already many papers dealing with finding vector minima for finite problems. Except where these are directly relevant to the ideas in this paper, they will not be cited. It is not the purpose of this paper to carry out a comparative evaluation of approaches but, more modestly, to try to add to the armory of approaches. Finally, the paper will end with some conjectures about the connectedness of the vector minimum set which, if true, will add yet a little more power to methods in general, and for which an extensive search for counterexamples has failed.

0377-2217/88/$3.50 © 1988, Elsevier Science Publishers B.V. (North-Holland)

2. Formal statement of the problem

Let X ⊆ R^n be a finite set of feasible actions, with x as its generic member, and let f_k : X → R be the k-th objective function value, 1 ≤ k ≤ K. If Y ⊆ X, then x ∈ Y is said to dominate y ∈ Y with respect to (Y, f) if f(y) − f(x) ∈ R^K_+ \ {0}. The vector minimum set (efficient set) of Y ⊆ X, with respect to f, is defined by

  E(Y, f) = {all undominated members of Y with respect to (Y, f)}.

Our analysis will take place in R^K. For Z ⊆ X define f(Z) = {u ∈ R^K : u = f(x) for some x ∈ Z}. Define the function e : R^K → R^K as follows:

  [e(u)]_k = u_k,  1 ≤ k ≤ K.

e is the identity function and plays the same role in R^K as f does in R^n. Finally, define the realizable objective function space as follows: F = f(X). Then dominance in R^K, and the vector minimum set E(R, e) of R ⊆ R^K with respect to (R, e), are defined in a manner analogous to dominance with respect to (Y, f). Clearly

  f(E(Y, f)) = E(f(Y), e)  for all Y ⊆ X.

The problem is to find E(Y, f), or a suitable subset of E(Y, f). We will explore two possible approaches. In what follows we will assume that f(x) ≥ 0, ∀x ∈ X, without loss of generality, since adding constants to f_k does not change the vector minimum set. In order to facilitate reading we will henceforth represent, for any R ⊆ R^K, E(R, e) by E(R).

3. Complete weighting factor scalarization of E(X, f) in terms of q-th powers

Define, for q ∈ R_+, the q-th power function φ_q : R^K → R^K as follows:

  [φ_q(u)]_k = (u_k)^q,  1 ≤ k ≤ K.

Before proving our first theorem, we note that the result is close to, but not identical with, the L_p-norm scalarization results of others (e.g., see that of Dinkelbach and Isermann [6], Lemmas 1, 2), since the result of this paper is a complete characterization in terms of a single parameter value of q. For R ⊆ R^K let R* be its convex hull, and write E*(R) for [E(R)]*, the convex hull of the efficient set E(R). We first prove a lemma, for R finite.

Lemma 1. For finite R ⊆ R^K_+, there exists a q̄ ∈ R, q̄ > 0, such that

  φ_q(E(R)) = E(E*(φ_q(R))) ∩ φ_q(R),  ∀q ≥ q̄.

Proof. Since R ⊆ R^K_+, it is clear that

  φ_q(E(R)) = E(φ_q(R)),  ∀q > 0.

We establish the two inclusion relationships needed to prove the lemma.

(i) E(E*(φ_q(R))) ∩ φ_q(R) ⊆ φ_q(E(R)), ∀q > 0.

Let u ∈ E(E*(φ_q(R))) ∩ φ_q(R). If u ∉ E(φ_q(R)), then u is dominated by some member of E(φ_q(R)) ⊆ E*(φ_q(R)), contradicting u ∈ E(E*(φ_q(R))). Hence u ∈ E(φ_q(R)) = φ_q(E(R)), and the requisite result holds.

(ii) E(φ_q(R)) ⊆ E(E*(φ_q(R))) ∩ φ_q(R), for some q̄ > 0 and all q ≥ q̄.

Let u ∈ E(φ_q(R)) \ E(E*(φ_q(R))), for some q > 0. Enumerate E(φ_q(R)) so that its members are {u^j}, 1 ≤ j ≤ J. Then there exists an α ∈ R^J, with

  Σ_{j=1}^J α_j = 1,  α_j ≥ 0,  1 ≤ j ≤ J,

such that

  Σ_{j=1}^J α_j (v_k^j)^q ≤ (v_k)^q,  1 ≤ k ≤ K,

with at least one strict inequality, and where

  u_k^j = (v_k^j)^q,  1 ≤ k ≤ K,    u_k = (v_k)^q,  1 ≤ k ≤ K,

with {v^j} ⊆ R^K_+, v ∈ R^K_+. We assume, without loss of generality, that u ≠ u^j for any 1 ≤ j ≤ J with α_j > 0. For some j (= j(u)) we will have α_j ≥ 1/K. Hence, for j = j(u), we will have

  (v_k^j)^q / (v_k)^q ≤ K,  1 ≤ k ≤ K.

Since u ∈ E(φ_q(R)) and u ≠ u^j, 1 ≤ j ≤ J, when α_j > 0, for some k (= k(u)) and for j (= j(u)) we must have

  v_k^j / v_k > 1.

Then, if

  q > log(K) / log(v_k^j / v_k),

we get a contradiction to the above. Since R is finite, we may choose q large enough to get a contradiction for any such u. In fact, let

  q̄ = max [log(K) / log(v_k^j / v_k^i)],

the maximum being taken over 1 ≤ j ≤ J, 1 ≤ i ≤ J, i ≠ j, 1 ≤ k ≤ K, with v_k^j / v_k^i > 1. The requisite result now follows.  □

In other words, the above lemma states that no vector minimum of φ_q(R) with respect to e can be dominated by any convex combination of any other members of φ_q(R), providing q is large enough. This leads to the scalarization result we require in the following theorem. We first of all define our q-th power scalar optimizers. Define, for Y ⊆ X, λ ∈ Λ:

  M_q(Y, f, λ) = {y ∈ Y : λφ_q(f(y)) ≤ λφ_q(f(x)), ∀x ∈ Y},

  M_q(Y, f, Λ) = ∪_{λ∈Λ} M_q(Y, f, λ),

where

  Λ = {λ ∈ R^K : λ_k > 0, 1 ≤ k ≤ K, Σ_{k=1}^K λ_k = 1},

and where, for λ ∈ Λ and u ∈ φ_q(R^K), λu is the scalar product of λ and u. Similarly define, for R ⊆ R^K, λ ∈ Λ:

  M(R, λ) = {u ∈ R : λu ≤ λv, ∀v ∈ R},

  M(R, Λ) = ∪_{λ∈Λ} M(R, λ).

Clearly, for all q > 0,

  M_q(X, f, Λ) ⊆ E(X, f).

The following theorem shows that the left-hand side is identical with the right-hand side.

Theorem 1.

  E(X, f) = M_q(X, f, Λ),  ∀q ≥ q̄.

Proof. We note, first of all, that

  φ_q(f(E(X, f))) = φ_q(E(f(X))) = E(φ_q(f(X))),

and hence the requisite result will follow if we can show that, for R ⊆ R^K, R finite,

  E(E*(φ_q(R))) ∩ φ_q(R) = M(φ_q(R), Λ),  ∀q ≥ q̄.

This follows from Lemma 1 and from Evans and Steuer [7], since E*(φ_q(R)) is a polytope whose vertex set is E(φ_q(R)).  □
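In computational terms, and for small finite problems only, the objects just defined can be realized by brute force. The following is a minimal sketch; the function names are ours, not the paper's:

```python
def efficient_set(points):
    """E(R): members of `points` not dominated by any other member.
    Here v dominates u if v <= u componentwise with v != u."""
    pts = list(points)
    return [u for u in pts
            if not any(v != u and all(vk <= uk for vk, uk in zip(v, u))
                       for v in pts)]

def mq_minimizers(points, lam, q, tol=1e-9):
    """M_q(., f, lam): minimizers of sum_k lam_k * (u_k ** q) over `points`
    (assumes non-negative coordinates, as the paper does)."""
    pts = list(points)

    def score(u):
        return sum(l * (uk ** q) for l, uk in zip(lam, u))

    best = min(score(u) for u in pts)
    return [u for u in pts if score(u) <= best + tol]
```

By Theorem 1, for q at least q̄ the union of `mq_minimizers` over weight vectors recovers exactly the efficient set; for any q > 0 it yields only efficient points.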

As an example, consider the following engineering design problem. A system consists of a serially connected set of n components. Component i may be chosen from a given set of designs. The particular design j for component i might simply be a replication of j similar components to provide redundancy against failure (e.g., see Bray and Proschan [8]). If we select design j for component i, there is a cost c_{ij}, and a probability p_{ij} that it will not fail when called upon to function. If we let x_{ij} = 1 if design j is chosen for component i, and x_{ij} = 0 otherwise, the total cost is f_1(x), and the negative of the logarithm of the probability of the system as a whole functioning when required is f_2(x), where

  f_1(x) = Σ_{i=1}^n Σ_{j=1}^{m_i} c_{ij} x_{ij},

  f_2(x) = Σ_{i=1}^n Σ_{j=1}^{m_i} log(1/p_{ij}) x_{ij},

and we have the conditions

  Σ_{j=1}^{m_i} x_{ij} = 1,  1 ≤ i ≤ n,

  x_{ij} ∈ {0, 1},  1 ≤ i ≤ n, 1 ≤ j ≤ m_i.

This example is a very special case of general communicating systems, where cost and reliability are important (e.g., see Prim [9], where more general spanning tree systems are mentioned). We will subtract a constant from {c_{ij}} and {log(1/p_{ij})} to make our smallest values zero, but this has no special significance. The physical problem may be to find E(X, f), where X is defined by the constraints on {x_{ij}}. It is to be noted that this problem may also be formulated as an optimal routing problem, and may be completely solved using dynamic programming vector optimal routing (e.g., see White [10], Chapter 10, p. 165), or by converting to an exactly equivalent scalar optimization constrained objective function problem, which may then be solved by dynamic programming or other methods (e.g., see White [10], Chapter 5, Theorem 4, where an exact scalar equivalence is used, or see Joksch [11], where, by using a single constraint, a partially equivalent scalar dynamic program is derived). However, the purpose of this paper is to explore alternative approaches.

Let us return to our example, and take the case n = 2, m_i = 5, i = 1, 2, with the data presented in Table 1.

Table 1
Adjusted values of parameters

  i = 1:  j              1  2  3  4  5
          c_{1j}         0  1  2  3  4
          log(1/p_{1j})  7  5  4  3  1

  i = 2:  j              1  2  3  4  5
          c_{2j}         0  1  2  4  6
          log(1/p_{2j})  9  6  5  1  0

The possible values of f are given in Table 2, where J_i is the chosen value of j for a given i. The asterisked solutions in Table 2 constitute the vector minimum set.

Table 2
f values

  J_1\J_2      1         2         3         4         5
  1         (0, 16)*  (1, 13)*  (2, 12)   (4, 8)*   (6, 7)
  2         (1, 14)   (2, 11)*  (3, 10)*  (5, 6)*   (7, 5)
  3         (2, 13)   (3, 10)*  (4, 9)    (6, 5)*   (8, 4)
  4         (3, 12)   (4, 9)    (5, 8)    (7, 4)*   (9, 3)
  5         (4, 10)   (5, 7)    (6, 6)    (8, 2)*   (10, 1)*

The powers φ_q(f), q = 2, 3, are given in Tables 3 and 4. It is to be noted that, for q = 2, the vector minimum (9, 100) is dominated by the combination (5/12)(16, 64) + (7/12)(4, 121), whereas, for q = 3, no vector minimum is dominated by a convex combination of other solutions. So q̄ = 3, and E(X, f) = M_3(X, f, Λ).

Table 3
φ_2(f) values

  J_1\J_2      1           2           3          4          5
  1         (0, 256)*   (1, 169)*   (4, 144)   (16, 64)*  (36, 49)
  2         (1, 196)    (4, 121)*   (9, 100)*  (25, 36)*  (49, 25)
  3         (4, 169)    (9, 100)*   (16, 81)   (36, 25)*  (64, 16)
  4         (9, 144)    (16, 81)    (25, 64)   (49, 16)*  (81, 9)
  5         (16, 100)   (25, 49)    (36, 36)   (64, 4)*   (100, 1)*

Let φ_{qλ} : X → R be defined by

  φ_{qλ}(x) = λ(f(x))^q (= λφ_q(f(x))),

where [(f(x))^q]_k = (f_k(x))^q, 1 ≤ k ≤ K. Table 5 gives the values of φ_{3λ}(x) (= λ(f(x))^3), where λ = (λ_1, 1 − λ_1). The ranges of λ_1, for λ ∈ Λ, shown in Table 6, give the vector minima obtained when φ_{3λ}(x) is minimized.

The example given is merely a simple numerical illustration around which to develop the main points of the paper. To be of any effective use, computational procedures need to be developed in due course. It is not the purpose of the present paper to do this.
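The worked example can be reproduced directly. The sketch below rebuilds Table 2 from the Table 1 data, recovers the asterisked vector minimum set, and checks the q = 2 domination instance cited above (a brute-force sketch; the variable names are ours):

```python
# Adjusted Table 1 data: costs c[i][j] and values log(1/p)[i][j], j zero-based.
c = [[0, 1, 2, 3, 4], [0, 1, 2, 4, 6]]
w = [[7, 5, 4, 3, 1], [9, 6, 5, 1, 0]]

# Table 2: objective vectors f = (f1, f2) for every design pair (J1, J2).
F = [(c[0][j1] + c[1][j2], w[0][j1] + w[1][j2])
     for j1 in range(5) for j2 in range(5)]

def efficient(points):
    """Distinct undominated objective vectors."""
    return {u for u in points
            if not any(v != u and v[0] <= u[0] and v[1] <= u[1]
                       for v in points)}

E = efficient(F)

# For q = 2, phi_2(3, 10) = (9, 100) is dominated by the convex combination
# (5/12)*(16, 64) + (7/12)*(4, 121): equal first coordinate, smaller second.
comb = tuple(5 / 12 * a + 7 / 12 * b for a, b in zip((16, 64), (4, 121)))
```

Running this confirms the ten asterisked f values of Table 2 and the q = 2 failure that forces q̄ = 3.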

Table 4
φ_3(f) values

  J_1\J_2      1            2            3            4            5
  1         (0, 4096)*   (1, 2197)*   (8, 1728)    (64, 512)*   (216, 343)
  2         (1, 2744)    (8, 1331)*   (27, 1000)*  (125, 216)*  (343, 125)
  3         (8, 2197)    (27, 1000)*  (64, 729)    (216, 125)*  (512, 64)
  4         (27, 1728)   (64, 729)    (125, 512)   (343, 64)*   (729, 27)
  5         (64, 1000)   (125, 343)   (216, 216)   (512, 8)*    (1000, 1)*

Table 5
φ_{3λ}(x) values, λ = (λ_1, 1 − λ_1)

  J_1\J_2      1                  2                  3                 4                5
  1         (4096 − 4096λ_1)*  (2197 − 2196λ_1)*  (1728 − 1720λ_1)  (512 − 448λ_1)*  (343 − 127λ_1)
  2         (2744 − 2743λ_1)   (1331 − 1323λ_1)*  (1000 − 973λ_1)*  (216 − 91λ_1)*   (125 + 218λ_1)
  3         (2197 − 2189λ_1)   (1000 − 973λ_1)*   (729 − 665λ_1)    (125 + 91λ_1)*   (64 + 448λ_1)
  4         (1728 − 1701λ_1)   (729 − 665λ_1)     (512 − 387λ_1)    (64 + 279λ_1)*   (27 + 702λ_1)
  5         (1000 − 936λ_1)    (343 − 218λ_1)     (216)             (8 + 504λ_1)*    (1 + 999λ_1)*

Table 6

  Vector minimum (f values)   Range of λ_1
  (0, 16)                     [1899/1900, 1]
  (1, 13)                     [866/873, 1899/1900]
  (2, 11)                     [331/350, 866/873]
  (3, 10)                     [488/525, 331/350]
  (4, 8)                      [296/357, 488/525]
  (5, 6)                      [1/2, 296/357]
  (6, 5)                      [61/188, 1/2]
  (7, 4)                      [56/225, 61/188]
  (8, 2)                      [7/495, 56/225]
  (10, 1)                     [0, 7/495]

A few indicators may, however, be of assistance for such developments:

(a) If X ⊆ R^n and {f_k} are all convex functions on X, then φ_{qλ} is convex on R^n_+, ∀q > 0, when extended to R^n_+, and is convex on R^n when q is an even integer. The vector function φ_q has components convex on R_+, and convex on R when q is an even integer. This property might be utilized for the scalar optimization problem over X. For example, for u, v ∈ R^K_+,

  [φ_q(v)]_k ≥ [φ_q(u)]_k + (v − u)_k [∇φ_q(u)]_k,  1 ≤ k ≤ K.

Hence, for x, y ∈ X,

  φ_{qλ}(y) ≥ φ_{qλ}(x) + μ(x)(f(y) − f(x)),

where μ_k(x) = qλ_k(f_k(x))^{q−1}, 1 ≤ k ≤ K. Thus a sufficient condition for x to minimize φ_{qλ} over X is that x minimize [μ(x)f] over X. For many structured problems, this is easy to check.

(b) Suppose X ⊆ R^n. The obvious difficulty with minimizing a convex function over the vertices of a polytope X* is that optima over X* are not, in general, vertices of X*. For concave problems some are. Now suppose the {x_i} components are all 0 or 1, and that X corresponds to the set of vertices of X*. Then, if {f_k(x)} are polynomial in x, φ_{qλ}(x) will take the form

  φ_{qλ}(x) = Σ_{s∈S} a_s(λ) Π_{i=1}^n x_i^{s_i},

where s = (s_1, s_2, ..., s_n) ∈ S = {0, 1}^n. Now for x ∈ S, we have

  Π_{i=1}^n x_i^{s_i} = min_{1≤i≤n} [x_i^{s_i}].

This follows since the left-hand side and the right-hand side both equal 0 if x_i = 0 for some i with s_i = 1, and both equal 1 otherwise. For s ∈ S, the right-hand term is concave on R^n. Thus we might replace the convex function φ_{qλ} by the concave function θ_{qλ} defined as follows:

  θ_{qλ}(x) = Σ_{s∈S} a_s(λ) min_{1≤i≤n} [x_i^{s_i}],

which is piecewise linear on R^n. θ_{qλ} and φ_{qλ} are identical on S. Since θ_{qλ} is concave on R^n, those minimizers over X* which are in X will be identical with the minimizers over X. An appropriate concave minimization algorithm for polytopes would be needed. The piecewise linearity may be of some assistance.
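The critical λ_1 ranges of Table 6 can be recovered exactly by intersecting the pairwise linear inequalities in λ_1. The sketch below (exact rational arithmetic; names are ours) returns the interval on which a given point minimizes λ_1(f_1)^q + (1 − λ_1)(f_2)^q, or None if there is no such interval, so it also exhibits the failure at q = 2:

```python
from fractions import Fraction

def weight_interval(u, rivals, q):
    """Interval of lambda_1 in [0, 1] on which u = (u1, u2) minimizes
    lam1*u1^q + (1 - lam1)*u2^q among u and the rival points; None if empty."""
    lo, hi = Fraction(0), Fraction(1)
    for v in rivals:
        # Require lam1*(u1^q - v1^q) + (1 - lam1)*(u2^q - v2^q) <= 0,
        # i.e. a*lam1 + b <= 0, with:
        a = (u[0] ** q - v[0] ** q) - (u[1] ** q - v[1] ** q)
        b = u[1] ** q - v[1] ** q
        if a > 0:
            hi = min(hi, Fraction(-b, a))
        elif a < 0:
            lo = max(lo, Fraction(-b, a))
        elif b > 0:          # u is never as good as v
            return None
    return (lo, hi) if lo <= hi else None
```

For q = 3 every vector minimum of the example receives a non-empty interval (those of Table 6), whereas for q = 2 the point (3, 10) receives none, in line with the discussion of Tables 3 and 4.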

Note that if we wish to retain the function φ_{qλ}, the problem is reducible to the following problem:

  min_{x,z} [Σ_{s∈S} a_s(λ) z_s]

subject to

  z_s ≥ 1 + Σ_{i=1}^n s_i(x_i − 1),  s ∈ S,

  x ∈ X.

(c) Branch and bound procedures might be considered. The convexity condition in (a) might be useful to get bounds on min_{x∈X}[φ_{qλ}(x)]: e.g., if we have evaluated φ_{qλ}(x) for x ∈ X^0 ⊆ X, then min_{x∈X^0}[φ_{qλ}(x)] is an upper bound, and

  min_{x∈X^0} min_{y∈X} [φ_{qλ}(x) + μ(x)(f(y) − f(x))]

is a lower bound. Alternatively, since min_{x∈X}[φ_{qλ}(x)] is concave on Λ, knowledge of the solutions for specific values of {(q, λ)} will enable bounds to be obtained for other values of {(q, λ)}.

(d) If the minimization problem is a difficult one, then it will be necessary to terminate the search for min_{x∈X}[φ_{qλ}(x)] at an approximate optimal solution. Such a solution may not be a vector minimum, and it is important to determine how close it is to some vector minimum. Let us suppose x is an ε-optimal solution for φ_{qλ} over X, i.e.

  min_{x'∈X} [φ_{qλ}(x')] ≤ φ_{qλ}(x) ≤ min_{x'∈X} [φ_{qλ}(x')] + ε,

and that x is dominated by a vector minimum y. Then it is easy to show that

  (f_k(y))^q ≤ (f_k(x))^q ≤ (f_k(y))^q + ε/λ_k,  1 ≤ k ≤ K.

4. t-th-Minimizers of the standard weighting factor method

In this section we will deal with solutions which give the first-minimal, second-minimal, and, in general, the t-th-minimal values of [λf(x)] over X for λ ∈ Λ. Since X is finite, [λf] (= φ_{1λ}) will take only a finite set of possible values on X. Let there be T(λ) such distinct values, and let {ψ_{tλ}} be the distinct values, 1 ≤ t ≤ T(λ), with an ordering such that

  ψ_{tλ} < ψ_{t+1,λ}  for 1 ≤ t ≤ T(λ) − 1.

Define

  T = max_{λ∈Λ} [T(λ)].

Let λ ∈ Λ and let us define M_{1t}(X, f, λ) ⊆ X as follows:

  M_{1t}(X, f, λ) = {x ∈ X : φ_{1λ}(x) = ψ_{tλ}},  1 ≤ t ≤ T(λ),

where we define M_{1t}(X, f, λ) = ∅ if t > T(λ). Define

  M_{1t}(X, f, Λ) = ∪_{λ∈Λ} M_{1t}(X, f, λ),  1 ≤ t ≤ T.

It is trivially clear that the following is true:

  E(X, f) ⊆ ∪_{t=1}^{T(λ)} M_{1t}(X, f, λ) = X,  ∀λ ∈ Λ.

One would hope, however, that we might be able to find the whole of E(X, f), or a suitable subset, by considering only small values of t. From White [10], Chapter 5, Theorem 1, we automatically obtain

  M_{11}(X, f, Λ) (= M_1(X, f, Λ)) ⊆ E(X, f).

Let us now consider second, third, and higher order minimizers. We have the following theorem.

Theorem 2. If x ∈ M_{1,t+1}(X, f, λ) \ E(X, f), for some λ ∈ Λ, then there exist an s ≤ t and a y ∈ M_{1s}(X, f, λ) such that f(y) ≤ f(x), f(y) ≠ f(x).

Proof. Since x ∉ E(X, f), there exists a y ∈ X with f(y) ≤ f(x), f(y) ≠ f(x). Then φ_{1λ}(y) < φ_{1λ}(x), and the requisite result holds.  □

Although very simple, this result is useful in extending the usual set M_1(X, f, Λ) when the structure of the problem makes it computationally feasible to find second and higher order minimizers (see, for example, Dreyfus [12] for t-th-order minimizers for routing problems, and Gabow [13] and Katoh, Ibaraki and Mine [14] for t-th-order minimizers for spanning tree problems). At each stage we need only compare the (t+1)-th minimizers with the s-th minimizers, s ≤ t, in order to determine whether or not the points generated are vector minima.
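For a finite X the t-th-order minimizers M_{1t} can be sketched directly from the definition (a brute-force sketch; the function name is ours):

```python
def tth_minimizers(points, lam, t):
    """M_1t(., f, lam): points attaining the t-th smallest *distinct* value
    of the weighted sum lam·f; t = 1 gives the ordinary weighted minimizers."""
    by_value = {}
    for p in points:
        by_value.setdefault(sum(l * pk for l, pk in zip(lam, p)), set()).add(p)
    levels = sorted(by_value)
    return by_value[levels[t - 1]] if t <= len(levels) else set()
```

For the example, with λ = (0.5, 0.5), the first minimizer is (8, 2) and the second minimizers are (5, 6), (6, 5), (7, 4) and (10, 1), all of which happen to be vector minima, in the spirit of Theorem 2.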

For the example given we have the following results:

  M_{11}(X, f, Λ) = {x ∈ X : f(x) ∈ {(0, 16), (1, 13), (2, 11), (5, 6), (8, 2), (10, 1)}} (⊆ E(X, f)),

  E(X, f) ∩ M_{12}(X, f, Λ) \ M_{11}(X, f, Λ) = {x ∈ X : f(x) ∈ {(4, 8), (6, 5), (7, 4)}},

  E(X, f) ∩ M_{13}(X, f, Λ) \ (M_{12}(X, f, Λ) ∪ M_{11}(X, f, Λ)) = {x ∈ X : f(x) = (3, 10)}.

Thus we will pick up almost all of E(X, f) for t = 1, 2, and the final part of E(X, f) for t = 3.

For some problems we will have M_{12}(X, f, λ) ⊆ E(X, f) for all λ ∈ Λ which are necessary to produce M_{11}(X, f, Λ). Thus, suppose X is equivalent to the vertex set of a polytope Z, and suppose that, whenever x, y ∈ X are adjacent in Z, x cannot dominate y with respect to f (and vice versa); then we have the following theorem.

Theorem 3. If the above adjacency condition holds, f is linear on R^n, and M_{11}(X, f, λ) is a singleton {x(λ)}, then M_{12}(X, f, λ) ⊆ E(X, f).

Proof. Let x ∈ M_{12}(X, f, λ) \ E(X, f). Then there is a y ∈ X with f(y) ≤ f(x), f(y) ≠ f(x). Hence λf(y) < λf(x), and y ∈ M_{11}(X, f, λ). Thus y = x(λ). However, since f is linear and Z is a polytope, we must have x and y adjacent in Z, contradicting the dominance of x by y.  □

Examples of situations in which X is identical with the vertices of a polytope arise in optimal routing 0-1 linear programming formulations (e.g., see Martins [3]) and in minimal spanning tree formulations (e.g., see Edmonds [1]). Because of the structure of such problems, each vertex which minimizes φ_{1λ}(x) over X may be made a unique minimizer by choosing λ appropriately, and M_{11}(X, f, Λ) may be generated by choosing an appropriate subset of Λ. Gabow [13] considers second-minimizers for spanning trees, and shows that they may be obtained by replacing single links of first minimizers, and this is exactly equivalent to the linear programming result.

The adjacency condition specified applies to the type of example we have used as an illustration. Clearly, for each component i, we may delete any design j which is dominated by another design j'. Also, adjacency in the 0-1 linear programming form is equivalent to solutions differing in only one component i. Thus, the results of Theorem 3 apply here. It is easily seen that unique solutions will be obtained for all λ ∈ Λ', where

  Λ' = Λ \ {(1/3, 2/3), (4/7, 3/7), (5/8, 3/8), (2/3, 1/3), (3/4, 1/4)}.
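Theorem 3 can be illustrated numerically on the cost-reliability example: at a weight vector for which the weighted minimizer is unique, the second-order minimizers are all efficient. The sketch below uses an arbitrarily chosen λ = (3/10, 7/10) (our choice, not the paper's); exact fractions avoid rounding ties:

```python
from fractions import Fraction

# Table 1 data and the 25 objective vectors of Table 2.
c = [[0, 1, 2, 3, 4], [0, 1, 2, 4, 6]]
w = [[7, 5, 4, 3, 1], [9, 6, 5, 1, 0]]
F = {(c[0][j1] + c[1][j2], w[0][j1] + w[1][j2])
     for j1 in range(5) for j2 in range(5)}

lam = (Fraction(3, 10), Fraction(7, 10))
value = {u: lam[0] * u[0] + lam[1] * u[1] for u in F}
levels = sorted(set(value.values()))
first = [u for u in value if value[u] == levels[0]]    # M_11 at this lambda
second = [u for u in value if value[u] == levels[1]]   # M_12 at this lambda

efficient = {u for u in F
             if not any(v != u and v[0] <= u[0] and v[1] <= u[1] for v in F)}
```

Here the first minimizer is the single point (10, 1), and the only second minimizer, (8, 2), is indeed a vector minimum, as Theorem 3 requires.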

5. Connectedness conjectures

Let us look back at the numerical illustration we have used, viz. the cost-reliability problem. If we define two distinct solutions to be adjacent if they have the same value of J_1, or the same value of J_2, we see that the vector minimum set forms a connected graph, where the edges correspond to pairs of adjacent solutions. It is conjectured that, for certain classes of problems, this connectivity holds. A considerable amount of work, searching for counterexamples for the spanning tree problem (of which class our illustration is a special case), has failed to produce one. If the conjecture is true, then this adds a little more to our armory of techniques for finding the vector minimum set. We could begin with a subset of vector minima which is connected (e.g., M_1(X, f, Λ) might be a good starting point) and then extend this subgraph until no further vector minima can be added by looking at solutions adjacent to those obtained to date. This would, of course, require a procedure for checking whether or not a particular solution is a vector minimum.

Let us now formally develop the conjectures. Let X be the set of vertices of a polytope X*. Two vertices x, y of X* are said to be connected if there is a sequence of vertices {x^1(= x), x^2, x^3, ..., x^S(= y)} of X* such that x^s and x^{s+1} are adjacent vertices of X*, 1 ≤ s ≤ S − 1. A subset Y of vertices of X* is said to be connected if each pair x, y ∈ Y is connected by members of Y.
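For the cost-reliability illustration, the adjacency just described (equal J_1 or equal J_2) and the connectedness of the vector minimum set can be checked mechanically (a sketch; the solution encoding, with zero-based design indices, is ours):

```python
from collections import deque

# Table 1 data; solutions are design pairs (j1, j2).
c = [[0, 1, 2, 3, 4], [0, 1, 2, 4, 6]]
w = [[7, 5, 4, 3, 1], [9, 6, 5, 1, 0]]
sols = [(j1, j2) for j1 in range(5) for j2 in range(5)]
f = {s: (c[0][s[0]] + c[1][s[1]], w[0][s[0]] + w[1][s[1]]) for s in sols}

# Vector minimum set: solutions whose objective vector is undominated.
eff = [s for s in sols
       if not any(f[t] != f[s] and f[t][0] <= f[s][0] and f[t][1] <= f[s][1]
                  for t in sols)]

def is_connected(nodes, adjacent):
    """Breadth-first search over the given adjacency relation."""
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        u = queue.popleft()
        for v in nodes:
            if v not in seen and adjacent(u, v):
                seen.add(v)
                queue.append(v)
    return len(seen) == len(nodes)
```

Note that the value (3, 10) is attained by two distinct solutions, so the vector minimum set here has eleven solutions covering ten objective vectors, and the graph they induce is indeed connected.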

Consider only problems in which the objective function vectors are (f_k(x)), 1 ≤ k ≤ K, x ∈ X, with f(x) ≥ 0, x ∈ X.

Conjecture 1. E(X, f) is connected.

If this conjecture is not generally true, then consider Conjecture 2, again under the non-negativity conditions on f.

Conjecture 2. Let the set of vertices of X* correspond to the 0-1 solutions of given linear constraints for either spanning tree problems or for optimal routing problems. Then E(X, f) is connected.

It is to be noted that Aggarwal, Aneja and Nair [15] state that Conjecture 2 is true for a spanning tree 0-1 linear programming formulation (Theorem 2). However, this is based upon a misinterpretation of a theorem of Zeleny [16] (Theorem 2.4.2), where the vector minimal connectedness is E(X*, f)-connectedness and not E(X, f)-connectedness.

It is to be noted that we have imposed the condition f(x) ≥ 0, x ∈ X. This condition is clearly not necessary for spanning trees, since we merely add a constant to each link f_k value, 1 ≤ k ≤ K, but, on the same basis, there is no loss of generality in assuming it. For routing problems, the non-negativity condition is required (see, for example, Martins [3], Figure 2, where an example is given in which the connectivity property fails when negative objective function values are allowed). Martins [3] (Theorem 2) gives Conjecture 2 (for routing problems) as a theorem. Unfortunately, the proof is incorrect (see Hartley [17]).

It may be of interest to note that if the routing conjecture is true, then so is the spanning tree conjecture. The proof of this is, briefly, as follows. If G(N, A) is the original graph, with node-set N and arc-set A, from which spanning trees have to be selected, construct a tree-graph G^+(N^+, A^+) (see Harary [18]) with node-set N^+ and arc-set A^+, where i^+ ∈ N^+ is a subset of nodes of N, and a^+ ∈ A^+ corresponds to an arc of A (note that we can have a^+ ≠ b^+ in A^+, but with the corresponding arcs a and b in A equal). The initial node s^+ of N^+ is the empty subset ∅ of N, and the terminal node t^+ of N^+ is the set N. A route from s^+ to t^+ in G^+(N^+, A^+) corresponds to a spanning tree in G(N, A). G^+(N^+, A^+) is made directed by only allowing arcs in A^+ which are equivalent to moving from i^+ in N^+ to j^+ in N^+ by adding an arc (i, j) of A. Adjacent spanning trees in G^+(N^+, A^+) are such that routes from s^+ to t^+ in G^+(N^+, A^+) can differ in at most one link, and these correspond to adjacent spanning trees in G(N, A). This follows inductively, beginning with s^+. Thus, if T_1^+, T_2^+ are adjacent spanning trees of G^+(N^+, A^+), they either contain the same initial arc from s^+ (in which case use induction from the next node in these trees) or they have different initial arcs (in which case the residual trees of T_1^+, T_2^+, after removing the first two arcs in G^+(N^+, A^+), are the same). The possibility of multiple arcs joining nodes i^+, j^+ in N^+ makes no difference to the proof.

Finally, to show that the spanning tree conjecture is true in some non-trivial cases, let us look at our cost-reliability problem in its general form, with K objective functions, and with f_{kij} being the k-th objective function value for design j of the i-th component. So we have

  f_k(x) = Σ_{i=1}^n Σ_{j=1}^{m_i} f_{kij} x_{ij},  1 ≤ k ≤ K.

Now consider M_1(X, f, Λ). Then M_1(X, f, Λ) ⊆ E(X, f) and M_1(X, f, Λ) is connected (see White [10], Theorem 9, Chapter 4). Now

  λf(x) = Σ_{i=1}^n Σ_{j=1}^{m_i} (λf_{·ij}) x_{ij},

where f_{·ij} is the objective function vector for given i, j. Clearly min_{x∈X}[λf(x)] is obtained when we also have, simultaneously,

  min_{1≤j≤m_i} [λf_{·ij}],  1 ≤ i ≤ n.

Now suppose that, for each i, no vector f_{·ij'} is dominated by a convex combination of {f_{·ij}}. Then, for any given i and each j', there is a λ (= λ_{ij'}) such that j' minimizes [λf_{·ij}] over 1 ≤ j ≤ m_i. Thus, for each value of i, each value of j appears in the solution set M_1(X, f, Λ). Any set Y ⊇ M_1(X, f, Λ) is then connected, and thus E(X, f) is also connected.

The connectedness of M_1(X, f, Λ), when X is the set of vertices of a polytope X*, may offer a clue to the proof of the general connectedness

result, when combined with the general scalar linear programming connectedness result of Kirby, Love and Swarup [19] (Lemmas 1, 2, 3), where the result is given that each member of M_{1,t+1}(X, f, λ) is adjacent to some member of ∪_{s=1}^t M_{1s}(X, f, λ), for 1 ≤ t ≤ T(λ) − 1, and for all λ ∈ Λ. This suggests that the connectedness property might be established using {M_{1t}(X, f, λ)}. Indeed, with the example given, with λ = (1 − ε, ε), ε small and positive, the vector minima can be obtained by an adjacency procedure of the same kind as that of Kirby, Love and Swarup. Theorem 1, using a complete q-th-power characterization, may also help, in a similar manner to the procedure of Zeleny [16] for continuous linear programming vector minimization problems.

It is to be noted that a crucial step in such a proof is that, if X is the set of vertices of X*, then M_1(X, f, λ) is connected for all λ ∈ Λ. This does not follow for the φ_{qλ} representation in general. Thus, consider our illustrative example with n = 2, K = 2, m_i = 2, i = 1, 2, f_{·11} = (0, 0.5), f_{·12} = (0.5, 0), f_{·21} = (0, 0.5), f_{·22} = (0.5, 0) (f_{kij} is the k-th objective function value for the j-th design of the i-th component). Then, with x_{ij} = 1 if the j-th design of the i-th component is used, and x_{ij} = 0 otherwise, i = 1, 2, j = 1, 2, we obtain, with q = 2,

  φ_{2λ}(x) = λ_2 x_{11}x_{21} + 0.25(x_{11}x_{22} + x_{12}x_{21}) + λ_1 x_{12}x_{22}

and

  M_2(X, f, (0.5, 0.5)) = {x : (x_{11} = x_{22} = 1) or (x_{12} = x_{21} = 1)},

and this set is not connected. However, λ = (0.5, 0.5) is not a critical point at which solution sets change, and the connectedness result might go through using the critical points only. For the above example, connectedness can be established in this manner. It is also to be noted that, in the tabulation of critical λ_1 ranges for q = 3 in Section 3, at the critical λ_1 values the new solution need not be adjacent to the previous solution (e.g., (4, 8) is not adjacent to (3, 10)), although for q = 1 it is always true that at least one new solution at a critical λ_1 value is adjacent to one of the immediate previous solutions.

6. Conclusions

This paper is an exploratory paper whose purpose is to add two new, elementary, results to the arsenal of techniques available for finding vector minima for difficult, but structured, finite problems. Theorems 1, 2 and 3 give the main theoretical results, of some value in themselves, although there remains the task of using these results to develop computationally feasible procedures. It is possible that the search power which these results give may at least be of some aid in approximating the vector minimum set. In particular, the elementary suggestion (d) of Section 3 may be of some use, or may be extended to a more powerful result. The paper ends with some connectedness conjectures which, if true, may be used to develop search procedures based upon adjacency conditions. A search for counterexamples has failed, but the conjecture, even for the simple cost-reliability illustration used, seems to be a somewhat difficult one to prove.

Acknowledgement

I would like to thank the unknown referees for the helpful comments made on the first draft of this paper. I would also like to thank my colleague, Bill Scherer, for the extensive computational work which failed to find a counterexample to the connectedness conjecture for the reliability problem mentioned in Section 5.

References

[1] Edmonds, J., "Optimum branchings", Journal of Research of the National Bureau of Standards - B, Mathematics and Mathematical Physics 71B/4 (1967) 233-240.
[2] Dantzig, G.B., Linear Programming and Extensions, Princeton University Press, Princeton, NJ, 1963.
[3] Martins, E.Q.V., "On a multicriteria shortest path problem", European Journal of Operational Research 16 (1984) 236-245.
[4] White, D.J., "The set of efficient solutions for multiple objective shortest path problems", Computers & Operations Research 9/2 (1982) 101-107.
[5] Mine, H., and Osaki, S., Markovian Decision Processes, Elsevier, New York, 1970.
[6] Dinkelbach, W., and Isermann, H., "On decision making under multiple criteria and under incomplete information", in: J.L. Cochrane and M. Zeleny (eds.), Multiple Criteria Decision Making, University of South Carolina Press, Columbia, SC, 1973.
[7] Evans, J.P., and Steuer, R.E., "A revised simplex method for linear multiple objective programming", Mathematical Programming 5 (1973) 54-72.
[8] Bray, T.A., and Proschan, F., "Optimum redundancy under multiple constraints", Operations Research 13/5 (1965) 800-814.
[9] Prim, R.C., "Shortest connection networks and some generalizations", Bell System Technical Journal 36 (1957) 1389-1401.
[10] White, D.J., Optimality and Efficiency, Wiley, Chichester, 1982.
[11] Joksch, H.C., "The shortest path problem with constraints", Journal of Mathematical Analysis and Applications 14/2 (1966) 191-197.
[12] Dreyfus, S., "An appraisal of some shortest path algorithms", Operations Research 17/3 (1969) 395-412.
[13] Gabow, H.N., "Two algorithms for generating weighted spanning trees in order", SIAM Journal on Computing 6/1 (1977) 139-150.
[14] Katoh, N., Ibaraki, T., and Mine, H., "An algorithm for finding K minimum spanning trees", SIAM Journal on Computing 10/2 (1981) 247-255.
[15] Aggarwal, V., Aneja, Y.P., and Nair, K.P.K., "Minimal spanning tree subject to a side constraint", Computers & Operations Research 9/4 (1982) 287-296.
[16] Zeleny, M., Linear Multiobjective Programming, Springer, Berlin, 1974.
[17] Hartley, R., "Counter example to a result of Martins", Department of Decision Theory, University of Manchester, Notes in Decision Theory, No. 170, April 1986.
[18] Harary, F., "Recent results on a tree", in: A. Dold and B. Eckmann (eds.), Graphs and Combinatorics, Lecture Notes in Mathematics 406, Springer, Berlin, 1974, 1-9.
[19] Kirby, M.J.L., Love, H.R., and Swarup, K., "Extreme point mathematical programming", Management Science 18 (1972) 540-549.