Tchebycheff norms in multi-objective linear programming


Mathl. Comput. Modelling Vol. 17, No. 1, pp. 113-124, 1993. Printed in Great Britain. All rights reserved.
0895-7177/93 $5.00 + 0.00. Copyright © 1993 Pergamon Press Ltd.

TCHEBYCHEFF NORMS IN MULTI-OBJECTIVE LINEAR PROGRAMMING

DAVID L. OLSON
Department of Business Analysis and Research, Texas A&M University, College Station, TX 77843, U.S.A.

(Received December 1991)

Abstract: Interactive multiple objective linear programming techniques relying upon weighted combined objective functions have been criticized because they are limited to generating model extreme points. Tchebycheff norms provide the ability to reach other points on the nondominated surface, by minimizing the maximum deviation from targets. Rather than being an L1 metric as used in conventional linear programming, or an L2 metric as used in least-squares regression, Tchebycheff norms minimize an L∞ metric. Ideal points can be estimated from payoff tables, and Tchebycheff norms used to minimize the distance from the current estimate of the ideal point. This paper compares two Tchebycheff methods for multiple objective linear programming.

1. INTRODUCTION

Multiple objective programming, from a mechanical perspective, can be viewed as consisting of two means of generating solutions for the decision maker's consideration. Weights on different objective functions can be used, requiring manipulation of objective function coefficients only. The Geoffrion-Dyer-Feinberg Method [1], the method of Zionts-Wallenius [2], and Steuer's Method [3] operate by this mechanism. Other approaches, such as goal programming and the Step-Method [4,5], operate by creating new corner points through changing constraint right-hand sides. (Of course, this binary classification is too simple to comprehensively categorize all methods, because there are variations of methods crossing categories, such as weighted sums goal programming.) Those methods operating through changing constraint right-hand sides place a more ad hoc requirement upon decision makers, but can more accurately reflect nonlinear utility by being capable of generating solutions throughout the nondominated surface.

Tchebycheff norms minimize the maximum deviation from targets. Rather than being an L1 metric as used in conventional linear programming, or an L2 metric as used in least-squares regression, Tchebycheff norms minimize an L∞ metric. Ideal points can be estimated from payoff tables, and Tchebycheff norms used to minimize the distance from the current estimate of the ideal point (simultaneous best attainment possible on each objective). A major benefit of the Tchebycheff norm is that techniques are released from the linear programming model extreme points. The definition of the Tchebycheff norm provides new extreme points on the nondominated surface. This paper reviews two techniques which utilize Tchebycheff norms: the Satisficing Tradeoff Method [6-8], based upon the Step Method, and the method of Steuer and Choo [9,10]. Focus of analysis is upon their relative ease of use and effectiveness in reflecting nonlinear utility.
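The three metrics can be contrasted with a small illustration (the vectors here are hypothetical, not from the paper):

```python
# Distance from a target point under the three metrics discussed above.
def l1(d): return sum(abs(x) for x in d)          # conventional LP metric
def l2(d): return sum(x * x for x in d) ** 0.5    # least-squares metric
def linf(d): return max(abs(x) for x in d)        # Tchebycheff (L-infinity)

ideal = [1.0, 1.0, 1.0]        # hypothetical ideal point
attained = [0.8, 0.6, 0.9]     # hypothetical attainment
dev = [i - a for i, a in zip(ideal, attained)]
print(l1(dev), l2(dev), linf(dev))
```

The L∞ value keeps only the worst single deviation (0.4 here), which is exactly the quantity the Tchebycheff methods below minimize.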
A few prior studies using human subjects have examined the use of either the Satisficing Tradeoff Method [11] or the method of Steuer and Choo [12]. Kok found that the Satisficing Tradeoff Method (STM) compared favorably with weighted multiple objective linear programming (MOLP) techniques using the L1 norm, because of the limitation of weighted methods to generating only extreme point solutions. However, the STM was found to involve higher computer costs due to the need for more model optimizations. Buchanan and Daellanbach applied the method of Steuer and Choo. The subjects of that study found the method of Steuer and Choo


favorable overall, relative to the method of Zionts and Wallenius and the surrogate worth tradeoff method [13]. Specific responses measured by Buchanan and Daellanbach, where the method of Steuer and Choo scored higher than the two L1 techniques, were user confidence, ease of using the method, and ability to understand technique logic. On the other hand, the method of Steuer and Choo was found to take almost four times as much CPU time.

Both of the techniques will be described below. This will be followed by an example application demonstrating both techniques. Comments will be made at the end of each technique section. The conclusions will discuss comparative features, as well as general comments about the Tchebycheff norm.

2. SATISFICING TRADEOFF PROCEDURE

The Satisficing Tradeoff Method involves five steps.

STEP 1. Obtain the ideal point Z_k* for k = 1 to the number of objectives, from the payoff table.

STEP 2. Set aspiration levels for each objective k. The decision maker sets target levels as in STEM. Steuer [12] has referred to this as an ad hoc process, because decision makers set these targets in an apparently arbitrary manner.

STEP 3. Relative weights w_k, reflecting both the scale of each objective as well as the distance of the target aspiration level from the ideal solution, are calculated: w_k = 1/(Z_k* - Z_k). These weights w_k are used in a min-max LP to generate a nondominated solution. The LP is:

Min α
so that   w_k (Z_k* - Z_k) ≤ α,   for k = 1 to K,
          Ax = b.

This LP minimizes the maximum distance from the ideal point, weighted by w_k.

STEP 4. The solution generated in Step 3 is presented to the decision maker, who classifies all objectives into one of three categories:
(i) criteria which the decision maker wants to improve,
(ii) criteria the decision maker would be willing to relax, and
(iii) criteria acceptable at their current attainment level.

If there are no criteria in set (i), stop. Otherwise, the decision maker is asked for new aspiration levels for set (i) and allowable limits on set (ii).

STEP 5. A feasibility check is conducted for the current set of aspiration levels. The test is LAMBDA = Σ_k λ_k w_k [aspiration_k - current_k], where λ_k is the optimal Lagrange multiplier for the current solution (the duals on the K α-constraints), and w_k is the weight on objective k in light of the new aspiration level. If LAMBDA ≤ ε, where ε is some small nonnegative value, the new aspiration levels should be feasible (objectives may be nonlinear, so certainty is not available). If new aspiration levels are predicted to be feasible, return to Step 3. If the new aspiration levels are predicted to be infeasible, more relaxed aspiration levels can be set, or the process can return to Step 3.

Sausage Example

STM will be demonstrated on a three objective, six variable multiple objective linear programming model (appended).

STEP 1. DEVELOP PAYOFF TABLE. This is identical to Step 2 in STEM.

             COST      FIBER     SALT
Min COST     0.7359*   0.0876    0.0265
Max FIBER    1.1280    0.2466*   0.0132
Min SALT     1.2894    0.2351    0.0120*


STEP 2. SET ASPIRATION LEVELS.

Here, assume the original targets set by the decision maker are: COST ≤ 1.00, FIBER ≥ 0.15, SALT ≤ 0.015.
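Given the payoff table and these targets, the Step 3 weights below are simply reciprocal distances; a quick sketch:

```python
# STM weight calculation: reciprocal of the distance from each objective's
# payoff-table best value to its aspiration level.
best   = {"COST": 0.7359, "FIBER": 0.2466, "SALT": 0.0120}
target = {"COST": 1.00,   "FIBER": 0.15,   "SALT": 0.015}

def stm_weight(obj):
    return 1.0 / abs(best[obj] - target[obj])

weights = {k: round(stm_weight(k), 4) for k in best}
print(weights)   # {'COST': 3.7864, 'FIBER': 10.352, 'SALT': 333.3333}
```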

STEP 3. WEIGHT AND IDENTIFY NONDOMINATED SOLUTION.

Weights for each objective in light of aspiration levels are:

w_COST  = 1/(1.0 - 0.7359)   = 3.7864,
w_FIBER = 1/(0.2466 - 0.15)  = 10.3520,
w_SALT  = 1/(0.015 - 0.012)  = 333.3333.

The LP model is:

Min α
-α + 3.7864 COST ≤ 2.7864        (Min objectives k:  -α + w_k Z_k ≤ w_k Z_k*)
α + 10.3520 FIBER ≥ 2.5528       (Max objectives k:   α + w_k Z_k ≥ w_k Z_k*)
-α + 333.3333 SALT ≤ 4

Pork + Hmbr + Goat + Jala + Okra + Ice = 1
0.4 Pork + 0.4 Hmbr + 0.4 Goat - 0.6 Jala - 0.6 Okra - 0.6 Ice ≥ 0
0.5 Pork + 0.5 Hmbr - 0.5 Goat ≥ 0
-0.2 Pork - 0.2 Hmbr - 0.1 Goat + 0.25 Jala + 0.29 Okra + 0.3 Ice ≤ 0
0.04 Pork + 0.04 Hmbr + 0.04 Goat - 0.15 Jala + 0.02 Okra + 0.05 Ice ≤ 0
0.1 Pork + 0.1 Hmbr + 0.1 Goat + 0.1 Jala + 0.1 Okra - 0.9 Ice ≥ 0
1.5 Pork + 2.0 Hmbr + 0.60 Goat + 0.25 Jala + 0.20 Okra + 0.01 Ice = COST
0.05 Pork + 0.10 Hmbr + 0.20 Goat + 0.03 Jala + 0.80 Okra = FIBER
0.05 Pork + 0.02 Hmbr + 0.03 Goat + 0.01 Ice = SALT

The solution to this model is:

Variable              Z_k
Pork      = 0.0000    COST  = 0.9819
Hamburger = 0.3713    FIBER = 0.1566
Goat      = 0.2456    SALT  = 0.0148
Jalapeno  = 0.2066
Okra      = 0.0764
Ice       = 0
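The Step 3 min-max LP can be sketched end-to-end. This is a sketch, not the author's code: it assumes SciPy's `linprog` is available, orders the mix variables Pork, Hmbr, Goat, Jala, Okra, Ice, and appends α as the last variable:

```python
import numpy as np
from scipy.optimize import linprog  # assumed available; any LP code would do

# Objective rows of the sausage model (Pork, Hmbr, Goat, Jala, Okra, Ice).
COST  = np.array([1.5, 2.0, 0.60, 0.25, 0.20, 0.01])
FIBER = np.array([0.05, 0.10, 0.20, 0.03, 0.80, 0.0])
SALT  = np.array([0.05, 0.02, 0.03, 0.0, 0.0, 0.01])

w     = {"COST": 3.7864, "FIBER": 10.3520, "SALT": 333.3333}
ideal = {"COST": 0.7359, "FIBER": 0.2466, "SALT": 0.0120}

# Rows of A_ub x <= b_ub with x = (mix, alpha).
A_ub = [
    np.append(w["COST"] * COST, -1.0),     # w(COST - ideal) <= alpha  (minimized)
    np.append(-w["FIBER"] * FIBER, -1.0),  # w(ideal - FIBER) <= alpha (maximized)
    np.append(w["SALT"] * SALT, -1.0),     # w(SALT - ideal) <= alpha  (minimized)
    np.append([-0.4, -0.4, -0.4, 0.6, 0.6, 0.6], 0.0),   # structural rows,
    np.append([-0.5, -0.5, 0.5, 0.0, 0.0, 0.0], 0.0),    # >= rows negated
    np.append([-0.2, -0.2, -0.1, 0.25, 0.29, 0.3], 0.0),
    np.append([0.04, 0.04, 0.04, -0.15, 0.02, 0.05], 0.0),
    np.append([-0.1, -0.1, -0.1, -0.1, -0.1, 0.9], 0.0),
]
b_ub = [w["COST"] * ideal["COST"], -w["FIBER"] * ideal["FIBER"],
        w["SALT"] * ideal["SALT"], 0.0, 0.0, 0.0, 0.0, 0.0]

res = linprog(np.append(np.zeros(6), 1.0),            # minimize alpha
              A_ub=A_ub, b_ub=b_ub,
              A_eq=[np.append(np.ones(6), 0.0)], b_eq=[1.0],
              bounds=[(0, None)] * 6 + [(None, None)])
mix, alpha = res.x[:6], res.x[6]
print(res.success, round(float(COST @ mix), 4), round(float(FIBER @ mix), 4),
      round(float(SALT @ mix), 4))
```

At the optimum, α equals the largest weighted deviation from the ideal, which is what releases the solution from the weighted-sum extreme points.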

STEP 4. TRADEOFF.

The decision maker is asked to evaluate this solution. Unlike STEM, the min-max LP in the satisficing tradeoff method may operate even though none of the targets are relaxed. In this case, all three aspiration levels were attained. We assume the decision maker would like to improve COST, while holding FIBER and SALT at current aspiration levels. Assume a new aspiration level for cost of COST 5 0.90. This yields a new WCOST = 1/[.9 - .7359] = 6.0938. STEP 5. FEASIBILITY CHECK.

LAMBDA = 0.5173 × 6.0938 [-0.9 - (-0.9819)] + 0.0571 × 10.3520 [0.15 - 0.1566] + 0.4255 × 333.3333 [-0.015 - (-0.0148)] = 0.2259.
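The feasibility test above can be reproduced directly (objectives are written in maximization form, so COST and SALT figures are negated):

```python
# Reproducing the Step 5 feasibility test, LAMBDA = sum_k lam_k * w_k * (aspiration_k - current_k).
duals      = [0.5173, 0.0571, 0.4255]       # optimal duals on the alpha constraints
weights    = [6.0938, 10.3520, 333.3333]    # w_k after the new COST aspiration
aspiration = [-0.90, 0.15, -0.015]          # max-form targets
current    = [-0.9819, 0.1566, -0.0148]     # max-form attainments

LAMBDA = sum(d * w * (a - c)
             for d, w, a, c in zip(duals, weights, aspiration, current))
print(round(LAMBDA, 4))   # 0.2259, matching the text
```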

116

D.L.

OLSON

LAMBDA > ε. It may be infeasible to improve COST to 0.9 while relaxing FIBER and SALT by minor amounts. The Tchebycheff norm will provide a solution feasible to the constraint set, but will not attain the new aspiration levels. Here, we will proceed to Step 3 with the given aspiration levels.

STEP 3. WEIGHT AND SOLVE. Weights for COST are recalculated as 1/(0.9 - 0.7359) = 6.0938. The new α constraint for COST is: -α + 6.0938 COST ≤ 4.4844, (6.0938 × 0.7359 = 4.4844). The rest of the model is the same. The new solution is:

Variable              Z_k
Pork      = 0.0000    COST  = 0.9260
Hamburger = 0.3280    FIBER = 0.1347
Goat      = 0.2972    SALT  = 0.0155
Jalapeno  = 0.3343
Okra      = 0.0405
Ice       = 0

STEP 4. TRADEOFF. The attainments on all three objectives are slightly worse than aspiration levels. Assume the decision maker would like to fine-tune the solution, relaxing the aspiration level for COST to 0.91, while maintaining aspiration levels of 0.15 for FIBER and 0.015 for SALT.

STEP 5. FEASIBILITY CHECK.

LAMBDA = 0.3998 × 6.0938 [-0.91 - (-0.9260)] + 0.0710 × 10.352 [0.15 - 0.1347] + 0.5292 × 333.3333 [-0.015 - (-0.0148)] = 0.0149,

which is still positive, but much closer to a feasible set of aspiration levels. Proceed to Step 3.

STEP 3. WEIGHT AND SOLVE. Weights for COST are recalculated as 1/(0.91 - 0.7359) = 5.7438. The new α constraint for COST is: -α + 5.7438 COST ≤ 4.2269, (5.7438 × 0.7359 = 4.2269). The rest of the model is the same.

The new solution is:

Variable              Z_k
Pork      = 0.0000    COST  = 0.9328
Hamburger = 0.3333    FIBER = 0.1373
Goat      = 0.2909    SALT  = 0.0154
Jalapeno  = 0.3309
Okra      = 0.0449
Ice       = 0

STEP 4. TRADEOFF. Note that this decision did not attain any of the aspiration levels, although it is relatively close to all three. Assume the decision maker is satisfied with this tradeoff, and stop. If the decision maker had desired to continue the exploration, new aspiration levels could have been applied.

Satisficing Tradeoff Comments

Nakayama et al. [8] comment that this method provides a flexible means to aid search. The method is fairly insensitive to many of the parameters used. Identification of the ideal solution is not crucial, as long as the solutions used are attractive enough to be preferable to most nondominated solutions. The weight calculation used adjusts for different objective scales automatically, so scaling is unnecessary (based on [14]). A nondominated solution is always guaranteed, and the weighted Tchebycheff norm provides feasible solutions as well. Step 5, the feasibility check, can be used to guide the decision maker's selection of aspiration levels. While the method will operate without adjusting for the LAMBDA calculation, it can avoid unreasonable sets of aspiration levels. In this example, the Satisficing Tradeoff Method did not reach any of the three aspiration levels, although it was close in all three.

3. THE METHOD OF STEUER AND CHOO

A weighted Tchebycheff procedure was used to give Steuer's algorithm the ability to generate nondominated solutions that are not limited to model extreme points ([9,15], Chapter 14). The Tchebycheff metric measures the maximum distance between the ideal point and the feasible region, thus yielding the nondominated solution with the minimum relative deviation from the ideal on all K objectives. Simply minimizing this distance could yield multiple optimal solutions, and the possibility of a dominated solution. To avoid this, the Tchebycheff norm is augmented. For a weighted Tchebycheff metric:

Min max_k {λ_k (Z_k* - Z_k)} + ρ Σ_k (Z_k* - Z_k),

where Z_k* is the optimal value for objective k (Steuer suggests some value Z_k** slightly beyond Z_k*) and ρ is some small but computationally significant positive value. A lexicographic weighted version of this metric is to:

Min P1 α; P2 Σ_k (Z_k* - Z_k),
so that α ≥ λ_k (Z_k* - Z_k),   k = 1 to K,

and the solution satisfies the original constraint set. The interactive procedure is:

STEP 1. Develop the payoff table. Specify the maximum iterations H, sample size P, and reduction factor r. P ≥ K (the number of objectives) and H = K are recommended, along with 1/(2K) ≤ w ≤ 3/(2K); r controls the rate at which the weight space is contracted.
STEP 2. Normalize (rescale) the objectives.
STEP 3. Iteration h = 0. Bound λ_k: lb_k = lower bound, ub_k = upper bound on λ_k. Let lb_k = 0, ub_k = 1.
STEP 4. Let h = h + 1.
STEP 5. Following Steuer and Harris [16], randomly generate as many sets of λ_k such that lb_k ≤ λ_k ≤ ub_k, Σ_k λ_k = 1, as desired (50K recommended).
STEP 6. Filter these sets of λ_k to obtain 2P maximally dispersed sets, as in Steuer and Harris [16].
STEP 7. For each of these sets of λ_k, solve the augmented or lexicographic weighted Tchebycheff program.
STEP 8. Filter the resulting criterion vectors (objective attainments) to obtain P of the most dispersed nondominated criterion vectors to present to the decision maker.
STEP 9. The decision maker selects the preferred solution, Z(h).
STEP 10. If the decision maker wishes to stop, go to Step 15; else go to Step 11.
STEP 11. For the set of λ_k associated with the selected solution,

λ_k(h) = [1/(Z_k* - Z_k(h))] / Σ_k [1/(Z_k* - Z_k(h))],   if Z_k* ≠ Z_k(h) for all k;
λ_k(h) = 1/K,   if Z_k* = Z_k(h) for all k;
λ_k(h) = 0 for the remaining objectives if, for some (but not all) k, Z_k* = Z_k(h).

This step is necessary because Z(h) may not have been generated by the vertex of the intersecting contour in Step 7. If Z(h) was generated by the vertex of the intersecting contour, λ(h) would be more appropriate.
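The Step 11 weight inference can be sketched as a small function (an illustrative reconstruction; the degenerate cases with some Z_k at the ideal are omitted):

```python
# Weights inversely proportional to each objective's distance from the ideal,
# rescaled to sum to 1.
def infer_weights(ideal, attained):
    inv = [1.0 / (zi - zk) for zi, zk in zip(ideal, attained)]  # assumes zi != zk
    total = sum(inv)
    return [v / total for v in inv]

# Iteration-1 numbers from the sausage example below (maximization form):
lam = infer_weights([-0.7359, 0.2466, -0.012], [-0.9045, 0.1982, -0.0164])
print([round(x, 4) for x in lam])   # [0.0234, 0.0814, 0.8952]
```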


STEP 12. Define new bounds on λ_k to contract the cone, using the reduction factor r, where 1/(2K) ≤ w ≤ 3/(2K):

[lb_k, ub_k] = [0, r^h],                                  if λ_k(h) - 0.5 r^h ≤ 0;
[lb_k, ub_k] = [1 - r^h, 1],                              if λ_k(h) + 0.5 r^h ≥ 1;
[lb_k, ub_k] = [λ_k(h) - 0.5 r^h, λ_k(h) + 0.5 r^h],      otherwise.

STEP 13. If h < H, go to Step 4; else go to Step 14.
STEP 14. If the decision maker wishes to continue, go to Step 4; else go to Step 15.
STEP 15. Present the decision yielding the selected Z_k(h).

The resulting weight set used in this analysis combines the concept of the gradient of relative utility with scale adjustment. The method works whether attainment values are standardized or not, although if scales are not adjusted, weights serve the double duty of reflecting the relative importance of objectives as well as adjusting for differences in scale.

Sausage Example

STEP 1. Let P = 6, w = 0.4, r = 0.6, and the maximum iterations H = 3.

STEP 2. From the payoff table,

          Z_k*      Worst     Range     Transformation
COST     -0.7359   -1.2894    0.5535    (1.2894 + Z_COST)/0.5535
FIBER     0.2466    0.0876    0.1590    (Z_FIBER - 0.0876)/0.1590
SALT     -0.0120   -0.0265    0.0145    (0.0265 + Z_SALT)/0.0145

(COST and SALT are stated in maximization form, hence the negative values.)

STEP 3. λ_COST = [0, 1]; λ_FIBER = [0, 1]; λ_SALT = [0, 1].
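The Step 2 rescaling can be sketched directly from the payoff table:

```python
# Rescale each (maximization-form) objective onto [0, 1] using its best and
# worst payoff-table values.
payoff_best  = {"COST": -0.7359, "FIBER": 0.2466, "SALT": -0.0120}
payoff_worst = {"COST": -1.2894, "FIBER": 0.0876, "SALT": -0.0265}

def normalize(obj, z):
    rng = payoff_best[obj] - payoff_worst[obj]   # 0.5535, 0.1590, 0.0145
    return (z - payoff_worst[obj]) / rng

# Each objective now runs from 0 (worst payoff value) to 1 (ideal).
print(normalize("COST", -1.2894), normalize("COST", -0.7359))
```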

INITIAL ITERATION.

STEP 4. h = 1.

STEP 5. Generate 150 λ_k. (Here, we will use the formula presented earlier.)

STEP 6. Filter these 150 λ_k to 2P maximally dispersed λ_k. (We will use the formula weights for the first iteration.) Here, we will use:

weight set   λ_COST   λ_FIBER   λ_SALT
(1)          0.998    0.001     0.001
(2)          0.001    0.998     0.001
(3)          0.001    0.001     0.998
(4)          0.333    0.333     0.333
(5)          0.111    0.444     0.444
(6)          0.444    0.111     0.444
(7)          0.444    0.444     0.111

STEP 7. Solve the augmented or lexicographic weighted Tchebycheff models. These include constraints reflecting the range of attainments (estimated from the payoff table) as well as λ_k for this weight set. In this case, the objective function is to Min α, with the three constraints:

-0.5535 α + λ_COST COST ≤ 0.7359 λ_COST,
0.1590 α + λ_FIBER FIBER ≥ 0.2466 λ_FIBER,
-0.0145 α + λ_SALT SALT ≤ 0.0120 λ_SALT.
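A quick way to see what these α-constraints do: for a fixed solution, the smallest feasible α is its weighted, range-normalized Tchebycheff distance from the ideal. A small sketch (maximization form; the two candidate solutions are taken from the payoff table):

```python
ideal  = {"COST": -0.7359, "FIBER": 0.2466, "SALT": -0.0120}
ranges = {"COST": 0.5535, "FIBER": 0.1590, "SALT": 0.0145}

def tcheb(lam, z):
    # smallest alpha for which all three constraints hold at solution z
    return max(lam[k] * (ideal[k] - z[k]) / ranges[k] for k in ideal)

# Weight set (1) above is almost all COST weight, so it should prefer the
# cheap payoff-table solution over the high-fiber one.
cheap   = {"COST": -0.7359, "FIBER": 0.0876, "SALT": -0.0265}
fibrous = {"COST": -1.1280, "FIBER": 0.2466, "SALT": -0.0132}
lam1 = {"COST": 0.998, "FIBER": 0.001, "SALT": 0.001}
print(tcheb(lam1, cheap) < tcheb(lam1, fibrous))
```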

The results are:

weight set   Z_COST
(1)          0.7366
(2)          1.1280
(3)          1.2609
(4)          0.9045
(5)          1.0538
(6)          0.8966
(7)          0.8256

STEP 8. Filter the criterion values down to the P most different for decision maker consideration. In this case, we will select {1}, {2}, {3}, {4}, {5}, and {7}.

STEP 9. Assume the decision maker selected {4}.

STEP 10. Continue.

STEP 11. Develop new sets of weights. For Z(1), all attainments are less than the ideal. Therefore,

λ_k(1) = [1/(Z_k* - Z_k(1))] / Σ_k [1/(Z_k* - Z_k(1))],
Σ_k [1/(Z_k* - Z_k(1))] = [1/(-0.7359 - (-0.9045))] + [1/(0.2466 - 0.1982)] + [1/(-0.012 - (-0.0164))],
λ_1 = 0.0234,   λ_2 = 0.0814,   λ_3 = 0.8952.

This set of weights would be inferred from the relative distance of the current solution to the ideal point. Here, we will use the more intuitively appealing weights selected at the last iteration, λ_1 = 0.333, λ_2 = 0.333, λ_3 = 0.333, assuming that Z(1) was generated by the vertex of the intersecting contour.

STEP 12. Determine the next set of λ weights for the next iteration. Bounds for each set of weights are established. The parameter r^h = 0.6^1 = 0.6. For each k, λ_k - 0.5 r^h = 0.033 ≥ 0 and λ_k + 0.5 r^h = 0.633 ≤ 1. Thus: [lb_k, ub_k] = [0.033, 0.633] for λ_1, [0.033, 0.633] for λ_2, and [0.033, 0.633] for λ_3.

STEP 13. Since h = 1 < 3, go to Step 14.

STEP 14. Assume the decision maker wishes to proceed. Go to Step 4.

ITERATION 2.

STEP 4. h = 2. The concept is to randomly generate 50K sets of weights satisfying these bounds as well as the constraint that the sum of the λ_k = 1. By systematically varying weights uniformly over the bounded region, the feasible weight sets in Table 1 were obtained.

STEP 6. Filter these sets of weights down to 2P (12). Here, we will demonstrate with the eleven feasible sets of weights from above.

STEP 7. Solve the weighted Tchebycheff programs. (Note that the minimum weight to guarantee nondominated solutions is 0.001.) Constraints:

-0.5535 α + λ_COST COST ≤ 0.7359 λ_COST,
0.1590 α + λ_FIBER FIBER ≥ 0.2466 λ_FIBER,
-0.0145 α + λ_SALT SALT ≤ 0.0120 λ_SALT.

STEP 8. Filter the criterion vectors to the 6 most different. This could be done by a clustering algorithm. Here, we manually select {4}, {8}, {10}, {13}, {15}, and {17}.

STEP 9. Assume the decision maker selects {10}.

STEP 10. Continue.
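The weight generation and filtering in Steps 5 and 6 can be sketched as random simplex sampling followed by a greedy dispersion filter (a simple stand-in for the Steuer-Harris procedure, not their exact algorithm):

```python
import random

def sample_weights(lb, ub, n, seed=1):
    random.seed(seed)
    out = []
    while len(out) < n:
        lam = [random.uniform(l, u) for l, u in zip(lb, ub)]
        s = sum(lam)
        lam = [x / s for x in lam]          # rescale so the weights sum to 1
        if all(l <= x <= u for x, l, u in zip(lam, lb, ub)):
            out.append(lam)                 # keep only sets still inside the bounds
    return out

def disperse(sets, keep):
    chosen = [sets[0]]
    while len(chosen) < keep:
        # add the candidate farthest (Tchebycheff distance) from those already chosen
        far = max(sets, key=lambda s: min(max(abs(a - b) for a, b in zip(s, c))
                                          for c in chosen))
        chosen.append(far)
    return chosen

cands = sample_weights([0.033] * 3, [0.633] * 3, 150)   # iteration-2 bounds
picked = disperse(cands, 12)                            # down to 2P = 12
```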


Table 1. (X: rejected due to violation of bounds)

weight set   λ1      λ2      λ3
(4)          0.333   0.333   0.333   (repeat of (4) from iteration 1)
             0.033   0.033   0.934   X
             0.033   0.233   0.734   X
             0.233   0.033   0.734   X
(8)          0.033   0.433   0.534
(9)          0.033   0.633   0.334
(10)         0.233   0.233   0.534
(11)         0.233   0.433   0.334
(12)         0.233   0.633   0.134
(13)         0.433   0.033   0.534
(14)         0.433   0.233   0.334
(15)         0.433   0.433   0.134
(16)         0.633   0.033   0.334
(17)         0.633   0.233   0.134

weight set   λ1      λ2      λ3      Z_COST   Z_FIBER   Z_SALT
(4)          0.333   0.333   0.333   0.9045   0.1982    0.0164
(8)          0.033   0.433   0.534   1.1871   0.2424    0.0127
(9)          0.033   0.633   0.334   1.1422   0.2456    0.0131
(10)         0.233   0.233   0.534   0.9842   0.1753    0.0148
(11)         0.233   0.433   0.334   0.9364   0.2156    0.0157
(12)         0.233   0.633   0.134   0.8830   0.2311    0.0180
(13)         0.433   0.033   0.534   0.9063   0.1050    0.0156
(14)         0.433   0.233   0.334   0.8882   0.1654    0.0172
(15)         0.433   0.433   0.134   0.8377   0.2174    0.0266
(16)         0.633   0.033   0.334   0.8622   0.1006    0.0183
(17)         0.633   0.233   0.134   0.8149   0.1851    0.0218

STEP 11. The λ(2) vector is identified for {10}. All Z_k ≠ Z_k*. We use the weights associated with the selected solution: λ1 = 0.233, λ2 = 0.233, λ3 = 0.534.

STEP 12. Recalculate limits on λ_k. The parameter r^h = 0.6^2 = 0.36, and r^h/2 = 0.18. For each λ_k, λ_k - r^h/2 ≥ 0 and λ_k + r^h/2 ≤ 1, so the bounds are [0.053, 0.413] for λ1 and λ2, and [0.354, 0.714] for λ3.

STEP 13. h = 2 < H, so continue.

STEP 14. Go to Step 4.

ITERATION 3.

STEP 4. h = 3.

STEP 5. Generate the weighting vectors.

STEP 6. Filter these sets of weights down to 2P (12). (Here, that's what we have.)
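The Step 12 interval contraction can be written as a small function; with the iteration-2 selection it reproduces the [0.053, 0.413] bounds quoted above:

```python
# Contract the weight cone around the selected weights lam at iteration h.
def contract(lam, r, h):
    w = r ** h                               # current cone width
    bounds = []
    for x in lam:
        if x - 0.5 * w <= 0:
            bounds.append((0.0, w))
        elif x + 0.5 * w >= 1:
            bounds.append((1.0 - w, 1.0))
        else:
            bounds.append((x - 0.5 * w, x + 0.5 * w))
    return bounds

# Iteration 2: [0.053, 0.413] for the first two weights.
print(contract([0.233, 0.233, 0.534], 0.6, 2))
# Iteration 3 with the inferred weights: [0, 0.216] twice and [0.784, 1].
print(contract([0.0107, 0.0374, 0.9519], 0.6, 3))
```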


(X: rejected due to violation of bounds)

weight set   λ1      λ2      λ3
(10)         0.233   0.233   0.534   (repeat of (10) from iteration 2)
             0.053   0.053   0.894   X
             0.053   0.143   0.804   X
             0.143   0.053   0.804   X
(18)         0.053   0.233   0.714
(19)         0.053   0.323   0.624
(20)         0.053   0.413   0.534
(21)         0.143   0.143   0.714
(22)         0.143   0.233   0.624
(23)         0.143   0.323   0.534
(24)         0.233   0.053   0.714
(25)         0.233   0.143   0.624
(26)         0.233   0.233   0.534
(27)         0.323   0.053   0.624
(28)         0.323   0.143   0.534

STEP 7. Solve the weighted Tchebycheff programs. Constraints:

-0.5535 α + λ_COST COST ≤ 0.7359 λ_COST,
0.1590 α + λ_FIBER FIBER ≥ 0.2466 λ_FIBER,
-0.0145 α + λ_SALT SALT ≤ 0.0120 λ_SALT.

weight set   λ1      λ2      λ3      Z_COST   Z_FIBER   Z_SALT
(10)         0.233   0.233   0.534   0.9842   0.1753    0.0148
(18)         0.053   0.233   0.714   1.1706   0.2435    0.0128
(19)         0.053   0.323   0.624   1.1576   0.2445    0.0129
(20)         0.053   0.413   0.534   1.1413   0.2456    0.0131
(21)         0.143   0.143   0.714   1.0600   0.1535    0.0137
(22)         0.143   0.233   0.624   1.0554   0.1903    0.0139
(23)         0.143   0.323   0.534   1.0433   0.2075    0.0142
(24)         0.233   0.053   0.714   1.0011   0.0942    0.0143
(25)         0.233   0.143   0.624   0.9929   0.1263    0.0145
(26)         0.233   0.233   0.534   0.9842   0.1753    0.0148
(27)         0.323   0.053   0.624   0.9534   0.0996    0.0150
(28)         0.323   0.143   0.534   0.9387   0.1150    0.0152

STEP 8. Filter the criterion vectors to the 6 most different. Here, the solutions seem to be ordered in a chain, from {18} to {28}, trading off COST and SALT, with FIBER peaking in the middle. We can select {10}, {17}, {23}, {25}, {26}, and {28}.

STEP 9. Assume the decision maker selects {26}.

STEP 10. Continue. (The decision maker could terminate here. We will see what happens to λ(3).)

STEP 11. The λ(3) vector is identified for {26}. All Z_k ≠ Z_k*.

          Z_k*       Z_k(3)     Z_k* - Z_k(3)    1/[Z_k* - Z_k(3)]    λ_k(3)
COST     -0.7359    -0.9842     0.2483             4.0274             0.0107
FIBER     0.2466     0.1753     0.0713            14.0252             0.0374
SALT     -0.0120    -0.0148     0.0028           357.1429             0.9519


STEP 12. Recalculate limits on λ_k. The parameter r^h = 0.6^3 = 0.216, and r^h/2 = 0.108. Using the λ's from Step 11, λ_COST and λ_FIBER < r^h/2, so the bounds are [0, 0.216]. λ_SALT + r^h/2 > 1, so the bound is [0.784, 1]. Note that these are quite different from the bounds obtained from the weights inferred from selection, which yield: λ_COST = [0.125, 0.341], λ_FIBER = [0.125, 0.341], and λ_SALT = [0.426, 0.642].

STEP 13. h = H, so stop.

STEP 14. Go to Step 15.

STEP 15. The solution generated by the weight set {26} is:

PORK    HMBR     GOAT     JALA     OKRA     ICE
0       0.3732   0.2459   0.2804   0.1005   0

COST     FIBER    SALT
0.9842   0.1753   0.0148

The resulting weight set used in this analysis combines the concept of the gradient of relative utility with scale adjustment. The method works whether attainment values are standardized or not, although, as in this example, if scales are not adjusted, weights serve the double duty of reflecting the relative importance of objectives as well as adjusting for differences in scale.

Steuer and Choo Comments

Steuer's method provides a good technique to explore the extreme points of a linear multiobjective model. The logic is clear and easy to explain to decision makers. It also provides a very workable means to initially explore models without the need for special code, as any linear programming code can be used for solution, and only the objective function coefficients need to be modified to generate new solutions. If a variable is created to represent each objective function, that is even easier. However, the original method cannot generate solutions outside of the original model extreme points. The contention has been that for big models, there are enough extreme points, and these are fairly close together, but there is no guarantee of that. The Steuer-Choo algorithm provides a means to overcome that limitation, at the cost of considerable computation. There are many parameters to consider, and the method is probably the most computer-intensive MOLP method. The demand on the decision maker, however, is the same nominal amount as with Steuer's basic method. The resulting set of weights could be viewed as an approximate relative measure of importance of each objective. However, two complications temper that view. First, there is not a one-to-one correspondence between extreme points and weight sets; many sets of weights would yield the same extreme point. Second, if no effort is made to equalize the scales of each objective, the different measures for each objective will change the weights.
The instability of these weights is demonstrated by the calculations in Step 11 of the demonstrated model. The weights are effective in generating solutions, but should not be relied upon as accurate estimates of relative value.

4. CONCLUSIONS

According to some views, one of the major problems with the use of MOLP to aid decision makers with nonlinear utility functions is the limitation of methods relying upon weighted objective functions, which can generate only simplex corner points. Goal programming and STEM create new corner points through constraints, but have been criticized for the ad hoc nature of constraint limits. The benefit of Tchebycheff methods is that they are capable of reaching any point on the nondominated surface. The relative disadvantage of using these methods is the obvious additional computational effort, especially for the method of Steuer and Choo. Note that these methods are very sensitive to the coefficients used, with changes in even the fifth significant digit resulting in slightly different solutions. However, the computer can be coded to do this extra computation. The demands upon decision makers are generally the same as with conventional MOLP methods.


The solutions obtained with the two methods in this paper's example were different, although reasonably close to each other. Decisions in both cases were made by the author, seeking some consistency, but not following a rigid mathematical procedure. The ideas developed by Nakayama et al., and by Steuer and Choo, demonstrate how Tchebycheff norms can be applied. This approach has the advantage of being able to reflect nonlinear utility. The cost is that both methods involve complicated procedures, and the Steuer and Choo method can involve significantly increased computational effort. The ideas of using the L∞ metric can be extended to other multiple objective programming techniques, providing additional means of generating solutions for the decision maker's consideration.

REFERENCES

1. A.M. Geoffrion, J.S. Dyer and A. Feinberg, An interactive approach for multicriterion optimization with an application to the operation of an academic department, Management Science 19 (4), Pt. 1, 357-368 (1972).
2. S. Zionts and J. Wallenius, An interactive programming method for solving the multiple criteria problem, Management Science 22 (6), 652-663 (1976).
3. R.E. Steuer, Multiple objective linear programming with interval criterion weights, Management Science 23 (3), 305-316 (1976).
4. R. Benayoun, O. Larichev, J. de Montgolfier and J. Tergny, Linear programming with multiple objective functions: The method of constraints, Automation and Remote Control 32 (8), 1257-1264 (1971).
5. R. Benayoun, J. de Montgolfier, J. Tergny and O. Larichev, Linear programming with multiple objective functions: Step method (STEM), Mathematical Programming 1 (3), 366-375 (1971).
6. H. Nakayama, T. Tanino and Y. Sawaragi, An interactive optimization method in multicriteria decision making, IEEE Transactions on Systems, Man, and Cybernetics SMC-10, 163-169 (1980).
7. H. Nakayama and Y. Sawaragi, Satisficing trade-off method for multiobjective programming, In Interactive Decision Analysis, (Edited by M. Grauer and A.P. Wierzbicki), Springer-Verlag, Amsterdam, 113-123, (1984).
8. H. Nakayama and K. Furukawa, Satisficing trade-off method with an application to multiobjective structural design, Large Scale Systems 9, 47-57 (1985).
9. R.E. Steuer and E.-U. Choo, An interactive weighted Tchebycheff procedure for multiple objective programming, Mathematical Programming 26, 326-344 (1983).
10. N.P. Greis, E.F. Wood and R.E. Steuer, Multicriteria analysis of water allocation in a river basin: The Tchebycheff approach, Water Resources Research 19, 865-875 (1983).
11. M. Kok, The interface with decision makers and some experimental results in interactive multiple objective programming methods, European Journal of Operational Research 26, 96-107 (1986).
12. J.T. Buchanan and H.G. Daellanbach, A comparative evaluation of interactive solution methods for multiple objective decision models, European Journal of Operational Research 29, 353-359 (1987).
13. Y.Y. Haimes and W. Hall, Multiobjectives in water resources systems analysis: The surrogate worth trade-off method, Water Resources Research 10, 615 (1974).
14. A.P. Wierzbicki, A mathematical basis for satisficing decision making, In Organizations: Multiple Agents with Multiple Criteria, (Edited by J. Morse), Springer-Verlag, (1981).
15. R.E. Steuer, Multiple Criteria Optimization: Theory, Computation, and Application, John Wiley & Sons, New York, (1986).
16. R.E. Steuer and F.W. Harris, Intra-set point generation and filtering in decision and criterion space, Computers and Operations Research 7 (1-2), 41-53 (1980).


APPENDIX
SAUSAGE MODEL

MODEL
Minimize COST
Maximize FIBER
Minimize SALT
Subject to:
Pork + Hmbr + Goat + Jala + Okra + Ice = 1
0.4 Pork + 0.4 Hmbr + 0.4 Goat - 0.6 Jala - 0.6 Okra - 0.6 Ice ≥ 0
0.5 Pork + 0.5 Hmbr - 0.5 Goat ≥ 0
-0.2 Pork - 0.2 Hmbr - 0.1 Goat + 0.25 Jala + 0.29 Okra + 0.3 Ice ≤ 0
0.04 Pork + 0.04 Hmbr + 0.04 Goat - 0.15 Jala + 0.02 Okra + 0.05 Ice ≤ 0
0.1 Pork + 0.1 Hmbr + 0.1 Goat + 0.1 Jala + 0.1 Okra - 0.9 Ice ≥ 0
1.5 Pork + 2.0 Hmbr + 0.60 Goat + 0.25 Jala + 0.20 Okra + 0.01 Ice = COST
0.05 Pork + 0.10 Hmbr + 0.20 Goat + 0.03 Jala + 0.80 Okra = FIBER
0.05 Pork + 0.02 Hmbr + 0.03 Goat + 0.01 Ice = SALT
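The model's objective rows can be checked against the nondominated solutions table below; for example, the min-COST and min-SALT rows (values as printed, which carry some rounding):

```python
# Recompute COST, FIBER, and SALT for a mix (Pork, Hmbr, Goat, Jala, Okra, Ice).
COST  = [1.5, 2.0, 0.60, 0.25, 0.20, 0.01]
FIBER = [0.05, 0.10, 0.20, 0.03, 0.80, 0.0]
SALT  = [0.05, 0.02, 0.03, 0.0, 0.0, 0.01]

def attain(mix):
    dot = lambda c: sum(a * b for a, b in zip(c, mix))
    return round(dot(COST), 4), round(dot(FIBER), 4), round(dot(SALT), 4)

sol1  = [0.3187, 0.0, 0.3187, 0.2625, 0.0, 0.1]   # min-COST solution
sol13 = [0.0, 0.6, 0.0, 0.1882, 0.2118, 0.0]      # min-SALT solution
print(attain(sol1))    # close to (0.7359, 0.0876, 0.0265)
print(attain(sol13))   # close to (1.2894, 0.2351, 0.0120)
```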

NONDOMINATED SOLUTIONS

Solution   Pork     Hmbr     Goat     Jala     Okra     Ice    COST     FIBER    SALT
1          0.3187   0        0.3187   0.2625   0        0.1    0.7359   0.0876   0.0265
2          0.3211   0        0.3211   0.2108   0.0470   0.1    0.7374   0.1242   0.0267
3          0.3125   0        0.3125   0.3750   0        0      0.7500   0.0894   0.0250
4          0.3208   0        0.3208   0.1931   0.1653   0      0.7550   0.2182   0.0257
5          0.4847   0        0.1153   0.1882   0.2118   0      0.8856   0.2224   0.0277
6          0        0.3187   0.3187   0.2625   0        0.1    0.8952   0.1035   0.0169
7          0        0.3211   0.3211   0.2108   0.0470   0.1    0.8980   0.1403   0.0171
8          0        0.3125   0.3125   0.3750   0        0      0.9063   0.1050   0.0156
9          0        0.3208   0.3208   0.1931   0.1653   0      0.9154   0.2343   0.0160
10         0        0.4      0.2      0.4      0        0      1.0200   0.0920   0.0140
11         0        0.4876   0.1124   0.2059   0.0941   0.1    1.1140   0.1527   0.0141
12         0        0.4847   0.1153   0.1882   0.2118   0      1.1280   0.2466   0.0132
13         0        0.6      0        0.1882   0.2118   0      1.2894   0.2351   0.0120