Computers Ops Res. Vol. 20, No. 4, pp. 435-446, 1993. Printed in Great Britain. All rights reserved.
0305-0548/93 $6.00 + 0.00. Copyright © 1993 Pergamon Press Ltd
INTERACTIVE PARTITIONING CRITERIA SET METHOD FOR MULTIPLE OBJECTIVE LINEAR PROGRAMMING
MYUNGKOO KANG,¹† KAILASH KAPUR²‡§ and P. SIMIN PULAT²¶
¹Republic of Korea Army, Chungnam 320-910, Korea and ²School of Industrial Engineering, The University of Oklahoma, Norman, OK 73019, U.S.A.
(Received October 1991; in revised form June 1992)
Scope and Purpose-Decision making in real life involves the evaluation of alternative courses of action with respect to multiple criteria. Usually the criteria in a decision making problem conflict with each other. Because of this conflict among the criteria, there may not be a solution that satisfies all of the decision maker's (DM) expectations. A decision in this case must be made in such a way as to maximize the DM's overall satisfaction. Utility value is related to the DM's level of satisfaction with a solution. An interactive method for multicriteria decision making problems provides the DM with a strategy for selecting the solution with the maximum possible utility value. The requirement of answering questions during the solution procedure of an interactive method places a cognitive burden on the DM, who must evaluate each given trade-off. The greater the number of criteria involved in a trade-off, the heavier the cognitive burden on the DM. Partitioning of the criteria set is recommended to facilitate easier interaction with the DM. In this paper an appropriate solution strategy of partitioning the criterion set is presented for multicriteria linear programming problems.

Abstract-This paper develops an interactive method for multiple objective linear programming problems. The decision maker's capability to evaluate the trade-off vector is affected by the number of criteria involved in the trade-off vector. The problem considered here has too many criteria for the decision maker to properly evaluate the trade-off vector when all the criteria are considered simultaneously. Hence partitioning of the criteria set is recommended to facilitate easier interaction with the decision maker. A linear additive utility function is assumed.
1. INTRODUCTION
An interactive approach for multiple criteria decision making (MCDM) is becoming increasingly popular and is considered to be a promising approach [1, 2]. An interactive method searches the decision maker's preference structure through interaction with the decision maker. At each iteration, the decision maker (DM) provides preference information, either implicitly or explicitly. Since the DM is involved throughout the solution process, the interactive approach has a much better chance of being implemented. However, most interactive methods have the following weaknesses: (1) it is assumed that the DM can properly evaluate the trade-off vector regardless of the number of criteria involved in the trade-off vector; (2) the convergence of the methods is completely dependent on the DM's rationale and consistency.

† Myungkoo Kang is an Operations Research analyst in the Republic of Korea Army. He received a B.S. degree in Mechanical Engineering from Korea Military Academy and M.S. and Ph.D. degrees in Industrial Engineering from the University of Oklahoma, Norman, Okla. His research interests include mathematical programming and computer applications of optimization techniques.
‡ Author for correspondence.
§ Kailash Kapur is the director of the School of Industrial Engineering, The University of Oklahoma, Norman, Okla. He received a B.S. degree in Mechanical Engineering with distinction from Delhi University, and an M.S. degree in Operations Research and a Ph.D. degree in Industrial Engineering from the University of California, Berkeley. He has co-authored the book Reliability in Engineering Design, Wiley, New York. He received the Allan Chop Technical Advancement award from the Reliability Division and the Craig Award from the Automotive Division of the ASQC. He was elected a Fellow of the American Society for Quality Control and of the Institute of Industrial Engineers.
¶ P. Simin Pulat is an Associate Professor in Industrial Engineering at the University of Oklahoma. She received a B.S. degree in Industrial Engineering from Middle East Technical University in Turkey, and M.S. and Ph.D. degrees in Operations Research from North Carolina State University. Her research interests include network optimization, mathematical programming, multiple criteria optimization, and computer applications of optimization techniques.
The cognitive burden on the DM and the DM's inconsistency in evaluating the trade-off vector are the two critical factors to be considered in developing an interactive method. One way to reduce the burden on the DM is to ask him/her easier questions, such as asking for pairwise comparisons [3] instead of requiring the decision maker to specify the trade-off value in a criterion for a unit increase in the reference criterion. Another way to facilitate interaction is to visually present an alternative to the DM [4]. But even with the above suggested interaction styles, it is very difficult for the DM to respond to a preference question involving a trade-off vector for a large number of criteria. An effort to reduce the burden of evaluation on the DM has been made by Churchman and Ackoff [5] in their approach to estimate the true weight vector for a discrete multiple criteria decision making problem. However, we are not aware of such an approach for interactive multiple objective mathematical programming problems. Recognizing that the inconsistency of the DM is closely related to the cognitive burden on the DM, the proposed algorithm partitions a p-dimensional trade-off vector (where p denotes the number of criteria involved in the problem) into some convenient number of subsets to facilitate easier interaction with the DM. Consequently, the alleviated cognitive burden on the DM may lead to a reduction in inconsistent responses. Hence, the trade-off vector posed to the DM for a preference question is of a lower dimension than the trade-off vectors posed by the existing procedures. Similar to the method of Zionts and Wallenius [6], an efficient solution is presented to the DM at each iteration and the feasible weight space, Λ, is reduced according to the DM's responses to the preference questions by adding new constraints to the existing Λ. The reduction in the weight space continues until the current solution is identified as the best compromise solution (see Section 2.7).
1.1. The problem and notation
Consider the maximization of a set of p conflicting linear objectives over a linear constraint set. Assume that the DM's utility function is unknown but is a linear additive function. The problem can be mathematically defined as

    max  f(x) = Cx
    subject to  Ax ≤ b,  x ≥ 0.                                            (1)
Denoting matrices and sets by bold capital letters and vectors by bold lower-case letters, f(x) ∈ R^p represents the p-dimensional vector [f_1(x), f_2(x), ..., f_p(x)], where f_k(x) = c^k x with c^k ∈ R^n being the vector of cost coefficients for the kth objective, 1 ≤ k ≤ p. C_{p×n} = (c^1, ..., c^p)' then represents the cost coefficient matrix and X represents the set of feasible solutions such that X = {x ∈ R^n | Ax ≤ b, x ≥ 0}, where A_{m×n} = [a_1, ..., a_n] is the constraint coefficient matrix. Also, u: R^p → R represents the utility function of the DM and Λ ⊆ R^p represents the set of feasible weight vectors. Given the linear utility function, (1) can be restated as

    max {u(f(x)) | x ∈ X} = max {Σ_i λ_i^0 f_i(x) | Ax ≤ b, x ≥ 0}          (2)

where λ^0 is the true weight vector for the DM's utility function. Further, we define a composite objective, f_{p+1}(x), as λ^k f(x), where λ^k is the weight vector estimate at the kth iteration. The reduced cost w_j^i for variable j associated with the ith criterion for a basic feasible solution (BFS) is determined by w_j^i = c_B^i B^{-1} a_j - c_j^i, where c_B^i ∈ R^m is the row vector of cost coefficients corresponding to the basic variables in the ith objective and c_j^i is the cost coefficient for variable j with respect to the ith criterion. Since the objectives conflict with each other at a given feasible solution, an improvement in one objective function value can only be achieved by sacrificing at least one other objective function value.
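To make the reduced-cost notation concrete, the short sketch below computes the p x n matrix of criterion-wise reduced costs w_j^i = c_B^i B^{-1} a_j - c_j^i for a given basis. It only illustrates the formula above; the function name and the tiny example data are ours rather than the paper's, and a production code would maintain the basis inverse within the simplex method instead of inverting B explicitly.

import numpy as np

def multi_objective_reduced_costs(A, C, basis):
    """Reduced costs w[i, j] = c_B^i B^{-1} a_j - c_j^i for every criterion i
    and every variable j, given the column indices `basis` of a BFS."""
    A = np.asarray(A, float)
    C = np.asarray(C, float)                  # C is p x n: one cost row per objective
    B = A[:, basis]                           # basis matrix
    simplex_multipliers = C[:, basis] @ np.linalg.inv(B)   # p x m
    return simplex_multipliers @ A - C        # p x n matrix of reduced costs

# Tiny illustration: 2 objectives, 2 constraints, 4 variables (2 of them slacks)
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
C = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 5.0, 0.0, 0.0]])
print(multi_objective_reduced_costs(A, C, basis=[0, 1]))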
1.2. Scaling the objective functions
Steuer [7] suggests three possible approaches for rescaling the objective functions: (a) normalizing, (b) the use of 10 raised to an appropriate power, and (c) the application of range equalization factors. Zionts and Wallenius [3] use the normalization approach, where the non-zero coefficients of each objective function are normalized in the standard manner, i.e. the coefficients are divided by the square root of the sum of the squares of the coefficients. Approach (c) equalizes the ranges of the criterion values over the efficient set by multiplying each objective by its representative range equalization factor [8]. Approach (b) brings all objective function coefficients into the same order of magnitude by simply moving the decimal points to the appropriate place. Approaches (a) and (c) are likely to change the coefficients to unrecognizable numbers. Approach (b), however, ensures that each coefficient remains recognizable because only the decimal point is adjusted. Considering that the DM's consistent response to the preference question over a trade-off vector is the key to success in interactive methods, and that the DM's consistency is closely related to the difficulty of evaluating the trade-off vectors, approach (b) is recommended for our algorithm.
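As an illustration of approach (b), the sketch below shifts the decimal point of each objective row so that all rows share a similar order of magnitude. The paper does not prescribe how the power of 10 is chosen; using the order of magnitude of each row's largest coefficient, as done here, is one plausible choice.

import numpy as np

def rescale_by_powers_of_ten(C):
    """Scale each objective row of C by a power of 10 so that all rows share
    a similar order of magnitude; only the decimal point moves, so the DM can
    still recognize the original coefficients."""
    C = np.asarray(C, dtype=float)
    scaled = np.empty_like(C)
    for i, row in enumerate(C):
        magnitude = np.max(np.abs(row[row != 0])) if np.any(row) else 1.0
        power = np.floor(np.log10(magnitude))        # order of magnitude of the row
        scaled[i] = row / (10.0 ** power)            # shift the decimal point only
    return scaled

# Example: objectives whose coefficients differ by several orders of magnitude
C = np.array([[1200.0, -300.0, 4500.0],
              [0.02,    0.07,  -0.01],
              [3.0,     8.0,    5.0]])
print(rescale_by_powers_of_ten(C))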
2. DEVELOPMENT OF THE ALGORITHM
In general, the algorithm generates efficient solutions to a linear programming problem, each involving maximization of a weighted sum of the objectives. Normally, however, in a multiple criteria decision problem a certain criterion, say the ith criterion, is more important than the others; or perhaps the DM is more familiar with a certain criterion and can easily compare the change in its value with the change in utility. Thus, it is quite natural for the DM to base his/her trade-off decision on the change of that criterion's value. For our purposes, a primary criterion on which the DM bases his/her trade-off decisions is called the reference criterion. The interactive partitioning criteria set (IPCS) method asks the DM to specify the reference criterion at the beginning of the procedure. If the DM does not provide the reference criterion, then the IPCS method selects the reference criterion arbitrarily. The maximum dimension of the trade-off vector on which the DM can express the preference with consistency is called the proper vector size (PVS). One unique feature of the IPCS method is the use of the PVS. Noting that a trade-off vector of high dimension causes the DM unnecessary confusion, the IPCS method asks the DM to specify the PVS at the beginning of the procedure. If the DM cannot provide the PVS, then a threshold value of five is used for the PVS. The threshold value of five is based upon the result of Miller's study [9] on human capacity to process information. During the interaction phase of the algorithm, the procedure asks the DM a preference question over a trade-off vector whose dimension does not exceed the PVS.

2.1. Partitioning the criterion set

Since we assume that p is a large number, the criterion set is partitioned into some appropriate number of subsets for the sake of easy interaction. Based on the reference criterion and the PVS specified by the DM at the beginning of the procedure, the following partitioning scheme is employed: (a) the (p - 1) criteria other than the reference criterion are evenly distributed among the appropriate number of subsets such that no subset has more than (PVS - 1) elements; (b) the reference criterion is then added to each subset. A relative weight relation is established within each partitioned set through interaction; the overall weight relation then follows naturally because the reference criterion is the common element among the partitioned subsets. After the partitioning we have ⌈(p - 1)/(PVS - 1)⌉ subsets, where ⌈·⌉ is the round-up function; that is, ⌈(p - 1)/(PVS - 1)⌉ is the smallest integer greater than or equal to (p - 1)/(PVS - 1).
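A minimal sketch of this partitioning scheme follows. The round-robin assignment used here to spread the non-reference criteria evenly is an assumption; any even distribution into subsets of at most (PVS - 1) non-reference criteria would satisfy the scheme described above.

import math

def partition_criteria(p, pvs, ref=0):
    """Partition criteria {0, ..., p-1} into ceil((p-1)/(PVS-1)) subsets.
    The non-reference criteria are spread evenly, no subset holds more than
    PVS-1 of them, and the reference criterion is then added to every subset."""
    others = [i for i in range(p) if i != ref]
    n_subsets = math.ceil((p - 1) / (pvs - 1))
    subsets = [[] for _ in range(n_subsets)]
    for k, crit in enumerate(others):            # round-robin gives an even spread
        subsets[k % n_subsets].append(crit)
    for s in subsets:
        s.append(ref)                            # reference criterion is common to all
    return subsets

# p = 11 criteria with PVS = 3 gives ceil(10/2) = 5 subsets of size 3
print(partition_criteria(11, 3, ref=0))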
2.2. Utility inefficiency

A nonbasic variable x_j corresponding to a given BFS is said to be utility inefficient if and only if the adjacent BFS with x_j in the basis has a utility less than or equal to the utility of the current solution [10]. Utility inefficiency with respect to a given solution can be detected as follows. Let f(x^1) be the current criterion vector for an extreme point x^1 and f(x^2) be the criterion vector with respect to an adjacent extreme point x^2. Assume that x_j is the nonbasic variable entering the basis when moving from x^1 to x^2, and let w_j be the reduced cost column vector associated with the nonbasic variable x_j. Then f(x^2) = f(x^1) - θ w_j, where θ is the maximum allowable change for the nonbasic variable x_j with respect to the basis corresponding to x^1. Consider the following sub-problem.

Sub-problem(Eff):

    min  Σ_{i=1}^{p} w_j^i λ_i    subject to  λ ∈ Λ^q                          (3)

where Λ^q is the current set of feasible weight vectors. If the optimal objective function value for sub-problem(Eff) is non-negative, then the nonbasic variable x_j is utility inefficient for the BFS x^1 [10]. The net amount of change in utility achieved by accepting trade-off T is denoted by Δu(T). The DM prefers trade-off T only if Δu(T) > 0. Some, but not necessarily all, utility inefficient nonbasic variables can be identified by solving sub-problem(Eff). All other nonbasic variables can be classified by asking the DM to express the preference of the adjacent solutions over the current solution. If the DM does not prefer an adjacent solution, then the corresponding nonbasic variable is utility inefficient. If all nonbasic variables are utility inefficient for the BFS x*, then due to the convexity of the constraint set there is no feasible direction from x* which improves the utility achieved at x*. This case is called Termination Criterion I and x* is optimal in terms of the DM's utility function. While Zionts and Wallenius [6] terminate their procedure when Termination Criterion I is met, the partitioning of the criterion set renders Termination Criterion I inappropriate for the IPCS method. Thus, the IPCS method utilizes another termination criterion called Termination Criterion II (see Section 2.7).
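The sketch below states sub-problem(Eff) as a small linear program, assuming the current feasible weight set is described by the weight simplex plus accumulated weight constraints of the form G λ ≤ 0. The function name and tolerance are illustrative; this is not the paper's FORTRAN implementation.

import numpy as np
from scipy.optimize import linprog

def is_utility_inefficient(w_j, G=None, tol=1e-9):
    """Sub-problem(Eff): minimize sum_i w_j^i * lambda_i over the current
    feasible weight set (sum of weights = 1, lambda >= 0, and any accumulated
    weight constraints G @ lambda <= 0).  A non-negative optimum means the
    nonbasic variable x_j is utility inefficient."""
    p = len(w_j)
    A_ub = G if G is not None else None
    b_ub = np.zeros(len(G)) if G is not None else None
    res = linprog(c=w_j,
                  A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, p)), b_eq=[1.0],
                  bounds=[(0, None)] * p, method="highs")
    return res.status == 0 and res.fun >= -tol

# Example: a reduced-cost column for one nonbasic variable under p = 3 criteria
print(is_utility_inefficient(np.array([0.5, -0.2, 0.3])))
# -> False: the minimum is -0.2 < 0, so this variable is utility efficient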
2.3. Redundancy and inconsistency
A new weight constraint is added to the existing feasible weight constraint set, based on the DM's response to a preference question with a trade-off. Before adding a new constraint to an existing weight constraint set, the redundancy of the new constraint needs to be checked. Consider the current weight constraint set. Suppose there exists a set of r linear inequalities in p variables: Σ_{i=1}^{p} w_{ji} λ_i ≤ b_j, λ_i ≥ 0, for j = 1, ..., r. If the optimal objective function value for the following sub-problem is non-positive, then constraint k is redundant [11].

Sub-problem(redundancy):

    max  Σ_{i=1}^{p} w_{ki} λ_i - b_k
    subject to  Σ_{i=1}^{p} w_{ji} λ_i ≤ b_j,  for j = 1, ..., r, j ≠ k,  λ ≥ 0.       (4)

During the procedure, the DM's responses to the preference questions are converted into weight constraints. Consider two weight constraints, g_1(λ) ≤ 0 and g_2(λ) ≤ 0, corresponding to the DM's responses to two preference questions. Let Λ = {λ ∈ R^p | g_1(λ) ≤ 0, g_2(λ) ≤ 0, λ ≥ 0}. If Λ = ∅, then the DM is inconsistent [10]. If the DM's preference response to a trade-off question is "indifferent", then the response is discarded to allow for the DM's vagueness in judging the trade-off vector. But even with this allowance for DM vagueness, inconsistency in the DM's responses may still exist. Employing Malakooti and Ravindran's [10] approach, an inconsistent response can be removed. The newly added weight constraint for trade-off vector T may take either of the following forms:

    -λT ≤ 0
or λT ≤ 0. Without loss of generality, we may assume λT ≤ 0. If the response λT ≤ 0 is erroneous due to the DM's inconsistency, then there must exist a positive real number r such that λT - r ≤ 0, where r is called the inconsistency compensation variable. The existence of inconsistency can be detected by solving the following LP problem. If the optimal objective function value for the sub-problem is positive, then there exists at least one inconsistent response [10].

Sub-problem(consistency):

    min  Σ_{j ∈ J ∪ K} r_j
    subject to
        λ_1 + ··· + λ_p = 1
        -λT_j - r_j ≤ 0,   j ∈ J
        λT_k - r_k ≤ 0,    k ∈ K
        λ ≥ 0,  r ≥ 0                                                          (5)

where J is the set of indices for the new weight constraints of the -λT_j ≤ 0 type and, similarly, K corresponds to the λT_k ≤ 0 type. The inconsistency is caused either by an error in estimating or comparing the trade-off vectors or by a change in the DM's preferences. When an inconsistent response exists, the trade-off vectors that correspond to the positive inconsistency compensation variables are identified and presented to the DM for new responses. If the DM reiterates some of his/her previous responses, it is assumed that the DM's response to that particular trade-off vector is correct and valid. If a weight constraint is correct, we can eliminate the inconsistency compensation variable that corresponds to the constraint; this elimination in turn forces the associated weight constraint corresponding to the reiterated response to hold in sub-problem(consistency). If the DM's new responses differ from the previous responses, then new constraints are constructed based on the new responses and substituted for those based on the previous inconsistent responses. The above procedure is repeated until either the optimal objective function value for sub-problem(consistency) becomes zero or sub-problem(consistency) becomes infeasible. If a consistent weight vector exists, then it can be found by solving sub-problem(consistency) a finite number of times. If sub-problem(consistency) is infeasible and there is no inconsistency compensation variable r left in the sub-problem, then it is assumed that the DM is not able to correct the detected inconsistency and we terminate the procedure.
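A sketch of the consistency test is given below. It follows the reconstruction of sub-problem(consistency) above: one compensation variable per weight constraint, the normalization Σ λ_i = 1, and minimization of the total compensation, so a strictly positive optimum flags inconsistency. The response encoding and function name are illustrative.

import numpy as np
from scipy.optimize import linprog

def inconsistency_check(prefer_T, not_prefer_T):
    """Sketch of sub-problem(consistency): add a compensation variable r to
    every weight constraint and minimize their sum.  A strictly positive
    optimum signals at least one inconsistent response; the trade-offs whose
    r-values are positive are the candidates shown back to the DM."""
    T_pref = np.atleast_2d(prefer_T)           # rows give  -T lambda <= 0
    T_npref = np.atleast_2d(not_prefer_T)      # rows give   T lambda <= 0
    p = T_pref.shape[1]
    m = T_pref.shape[0] + T_npref.shape[0]
    # Variables: [lambda_1..lambda_p, r_1..r_m]
    A_ub = np.vstack([np.hstack([-T_pref, np.zeros((T_pref.shape[0], m))]),
                      np.hstack([T_npref, np.zeros((T_npref.shape[0], m))])])
    A_ub[np.arange(m), p + np.arange(m)] = -1.0        # subtract r from each row
    c = np.concatenate([np.zeros(p), np.ones(m)])      # minimize sum of r
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m),
                  A_eq=np.hstack([np.ones((1, p)), np.zeros((1, m))]), b_eq=[1.0],
                  bounds=[(0, None)] * (p + m), method="highs")
    return res.fun, res.x[p:]                          # total and individual r values

# Two responses on p = 2 criteria that cannot hold simultaneously
total_r, r = inconsistency_check(prefer_T=[[1.0, -1.0]], not_prefer_T=[[2.0, -1.0]])
print(total_r > 1e-9, r)                               # -> True plus the r-values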
2.4. Estimation of the true weight vector
The IPCS method requires an estimate of λ^0 at each iteration. If the optimal objective function value for sub-problem(consistency) is zero, then all of the inconsistency compensation variables are zero and there exists a feasible weight vector λ^q ∈ Λ^q. In this paper, the middle-most point [3] in the feasible weight space is chosen as the weight vector estimate. The middle-most point is the optimal point of the following sub-problem, sub-problem(lambda). Assuming that there are r constraints in the current weight constraint set Λ, we have two possible types of constraints in Λ: either -λT_j ≤ 0 or λT_k ≤ 0, where j and k are indices of the trade-off vectors posed to the DM as preference questions. Consider the following sub-problem(lambda).

Sub-problem(lambda):

    max  min{s_1, ..., s_{p+r}}
    subject to
        λ_1 + ··· + λ_p = 1
        -λT_j + s_j = 0,   j ∈ J
        λT_k + s_k = 0,    k ∈ K
        λ_i - s_i = 0,     i = 1, ..., p                                       (6)

where J is the set of indices for the -λT_j ≤ 0 type and, similarly, K is for the λT_k ≤ 0 type. The optimal solution for sub-problem(lambda) is the point whose minimum slack with respect to each inequality is maximized. The utilization of the middle-most point as the weight vector estimate is
justified because it is a feasible weight vector and is farthest from each of the hyper-planes defining the feasible weight space.
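The middle-most point can be computed with an ordinary LP solver by introducing a single auxiliary variable z for the minimum slack, which is an equivalent reformulation of sub-problem(lambda) above. In the sketch below the accumulated weight constraints are assumed to be given as rows of G with G λ ≤ 0; the names are illustrative.

import numpy as np
from scipy.optimize import linprog

def middle_most_point(G):
    """Middle-most point of {lambda >= 0, sum lambda = 1, G @ lambda <= 0}:
    maximize the smallest slack z over all inequalities (sub-problem(lambda)
    rewritten with a single auxiliary variable z = minimum slack)."""
    G = np.atleast_2d(np.asarray(G, float))
    r, p = G.shape
    # Variables: [lambda_1..lambda_p, z].  Each inequality's slack must be >= z:
    #   -(G lambda) >= z  and  lambda_i >= z.
    A_ub = np.vstack([np.hstack([G, np.ones((r, 1))]),            # G lambda + z <= 0
                      np.hstack([-np.eye(p), np.ones((p, 1))])])  # -lambda_i + z <= 0
    c = np.zeros(p + 1); c[-1] = -1.0                             # maximize z
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(r + p),
                  A_eq=np.hstack([np.ones((1, p)), [[0.0]]]), b_eq=[1.0],
                  bounds=[(0, None)] * p + [(None, None)], method="highs")
    return res.x[:p]

# One weight constraint lambda_1 - lambda_2 <= 0 on a 3-criteria simplex
print(middle_most_point([[1.0, -1.0, 0.0]]))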
2.5. Trade-off vectors

In some cases, the need for lengthy interaction with the DM may negatively influence the adoption of the method. One way to assure fast convergence to the optimal solution is to cut off a large portion of the feasible weight space at each iteration. The IPCS method employs two types of trade-off vector: (1) the partitioned reduced cost vector of a utility efficient nonbasic variable, referred to as a Type I trade-off vector; (2) the trade-off vector based on the weight estimate, referred to as a Type II trade-off vector.

Unlike the method of Zionts and Wallenius [3, 6], our method utilizes the partitioned reduced cost vectors as the Type I trade-off vector. The DM is asked the preference questions with the trade-off vectors for the partitioned criterion sets. Suppose the criterion subset S_k has d ≤ p criteria. Let T = (t_1, ..., t_d) be a trade-off vector which is presented to the DM for the kth partitioned criterion set. Let I_g = {i | t_i > 0, i ∈ S_k} and I_s = {j | t_j < 0, j ∈ S_k}. An example of a preference question corresponding to a trade-off vector T = (t_1, ..., t_d) is: "Are you willing to sacrifice t_j units in criterion j for all j ∈ I_s, in order to have t_i units of increase in criterion i for all i ∈ I_g?" The DM is required to answer a series of the above questions for all partitioned subsets of a utility efficient nonbasic variable, except in the trivial case where t_i ≥ 0 for i = 1, ..., d.

As noted in Section 2.2, termination of the procedure is not assured by the preference questions with Type I trade-off vectors. We utilize the other type of trade-off vector to assure the termination of the algorithm. Assuming that λ̂ = (λ̂_1, ..., λ̂_p) is the current estimate of λ^0, the p-dimensional trade-off vector T_1^i = (t_1, 0, ..., 0, t_i, 0, ..., 0) is called the Type II trade-off vector between criterion 1 (the reference criterion) and criterion i if (t_1, t_i) = (-Mλ̂_i, Mλ̂_1) or (t_1, t_i) = (Mλ̂_i, -Mλ̂_1), where M is some positive number. Note that λ̂_j > 0 for j = 1, ..., p since λ̂ is a middle-most point of the polyhedral set. By definition, the Type II trade-off vector has only two non-zero elements. Since only two criteria have non-zero elements, the evaluation of a Type II trade-off vector by the DM can be performed with less difficulty than the evaluation of a Type I trade-off vector, which contains more than two non-zero elements.

Example. Let λ̂ = (λ̂_1, ..., λ̂_p) be the current estimate of λ^0. Then T_1^2 is either M(λ̂_2, -λ̂_1, 0, ..., 0) or M(-λ̂_2, λ̂_1, 0, ..., 0), with criterion 1 being the reference criterion; similarly, T_1^3 is either M(λ̂_3, 0, -λ̂_1, 0, ..., 0) or M(-λ̂_3, 0, λ̂_1, 0, ..., 0). M is a positive real number. Note that M ≥ 1 magnifies Δu to a certain extent. Consider the case when |Δu(T_1^i)| ≈ 0. In this case, it is not easy for the DM to express his/her preference. The use of a carefully chosen M helps the DM to express his/her preference over T_1^i by magnifying the magnitude of Δu(T_1^i). One possible rule for the selection of M is as follows. Suppose f_1^U is the known upper bound for objective 1 and f_i^L is the known lower bound for objective i. Further, let f_1 and f_i be the current objective function values for objectives 1 and i, respectively. Suppose t_1 = M λ̂_i and t_i = -M λ̂_1. Then objective 1 and objective i change by M λ̂_i and -M λ̂_1, respectively, when the trade-off vector T_1^i is taken. But the increase in objective 1 is bounded by f_1^U. Hence the maximum value of M with respect to the upper bound of objective 1 is M_1 = (f_1^U - f_1)/λ̂_i. Similarly, the maximum value M can take with respect to the lower bound of objective i is M_2 = (f_i - f_i^L)/λ̂_1. In order to satisfy both bounds, M = min{M_1, M_2}. If M < 1, then set M = 1.

Theorem 1. Let T_1^i, i = 2, ..., p, be the Type II trade-off vector between criterion 1, which is the reference criterion, and criterion i, and let λ̂ = (λ̂_1, ..., λ̂_p) be the current estimate of λ^0. Then the (p - 1)-dimensional cutting plane corresponding to Δu(T_1^i) passes through λ̂ in the weight space.

Proof. Without loss of generality it is assumed that Δu(T_1^i) ≥ 0, or equivalently that λT_1^i ≥ 0.
Rewriting λT_1^i ≥ 0 as λ_1 λ̂_i - λ_i λ̂_1 ≥ 0 and substituting (λ̂_1, λ̂_i) for (λ_1, λ_i), the equality relation holds. Q.E.D.

The cutting plane corresponding to Δu(T_1^i) ≥ 0 therefore passes through λ̂. Since the current estimate of λ^0 is the middle-most point in the feasible weight space, a cutting plane passing through λ̂ is expected to cut off approximately half of the current feasible space. Hence adding this type of cutting plane leads to a considerable reduction in the feasible weight space. Furthermore, the use of the Type II trade-off vector assures a strict reduction of the current weight space: since it passes through a relative interior point of the feasible weight space, a cutting plane from the Type II trade-off vector divides the feasible weight space into two mutually exclusive non-empty sets.
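The construction of a Type II trade-off vector together with the selection rule for M can be sketched as follows. The reference criterion is taken to be criterion 1 (index 0 in the code), and the bound notation follows the reconstruction above; the function name and example numbers are ours. Note that the returned vector satisfies λ̂ · T = 0, as Theorem 1 requires.

import numpy as np

def type_two_tradeoff(lam_hat, i, f_hat, f_upper_1, f_lower_i):
    """Type II trade-off vector between the reference criterion (index 0) and
    criterion i, with the magnification factor M chosen as in Section 2.5:
    M1 keeps criterion 1 within its known upper bound, M2 keeps criterion i
    above its known lower bound, and M = max(1, min(M1, M2))."""
    p = len(lam_hat)
    M1 = (f_upper_1 - f_hat[0]) / lam_hat[i]     # room left in the reference criterion
    M2 = (f_hat[i] - f_lower_i) / lam_hat[0]     # room left in criterion i
    M = max(1.0, min(M1, M2))
    T = np.zeros(p)
    T[0] = M * lam_hat[i]                        # gain in the reference criterion
    T[i] = -M * lam_hat[0]                       # sacrifice in criterion i
    return T                                     # satisfies lam_hat @ T == 0

lam_hat = np.array([0.4, 0.35, 0.25])            # current middle-most estimate
print(type_two_tradeoff(lam_hat, i=2, f_hat=[10.0, 6.0, 8.0],
                        f_upper_1=20.0, f_lower_i=2.0))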
2.6. Strength of preference and weight constraint generation
Due to the partitioning of the criterion set, the expected number of preference questions for this method may be larger than for the existing methods if no other measure is taken. The concept of strength of preference [10] is utilized in a slightly modified way to reduce the number of interactions with the DM. It is assumed that the DM has an implicit preference zone, β, in evaluating the strength of the preference for a given trade-off vector, where β is bounded by two positive real numbers, β_l and β_u, such that β_l ≤ β ≤ β_u. Based on the preference zone concept and the net utility change caused by accepting a trade-off, different responses given by the DM lead to different characterizations of the trade-off vector T.

Definition 1. Let β be the preference zone used by the DM to evaluate the strength of the preference for a given trade-off vector. Suppose two positive real numbers, β_l and β_u, are respectively the greatest lower bound and the smallest upper bound of β, such that β_l ≤ β ≤ β_u. Then the strength of preference is defined as follows.

(a) A trade-off vector T is "strongly preferred" if and only if -Δu(T) ≤ -β_u.
(b) A trade-off vector T is "preferred" if and only if β_l ≤ Δu(T) ≤ β_u.
(c) A trade-off vector T is "weakly preferred" if and only if -β_l < -Δu(T) < 0.
(d) A trade-off vector T is "strongly not preferred" if and only if Δu(T) ≤ -β_u.
(e) A trade-off vector T is "not preferred" if and only if -β_u ≤ Δu(T) ≤ -β_l.
(f) A trade-off vector T is "weakly not preferred" if and only if -β_l < Δu(T) < 0.

Each of the prefer-type responses results in a new constraint -Δu(T) ≤ 0; similarly, each of the do-not-prefer-type responses leads to Δu(T) ≤ 0. Let g_sp(λ) ≤ 0, g_wp(λ) ≤ 0, g_snp(λ) ≤ 0, and g_wnp(λ) ≤ 0 denote the weight constraints corresponding to the strongly prefer, weakly prefer, strongly do not prefer, and weakly do not prefer type responses, respectively. Thus, based on the strength of preference, it is possible to generate the following additional constraints on the true weight vector λ^0 without asking the DM a preference question:

(a) g_sp(λ) - g_wp(λ) ≤ 0
(b) g_sp(λ) - g_wnp(λ) ≤ 0
(c) g_snp(λ) - g_wp(λ) ≤ 0
(d) g_snp(λ) - g_wnp(λ) ≤ 0
The utilization of the strength of preference along with the Type II trade-off vector results in savings in the number of preference questions generated and in the convergence time to the true weight vector. While Malakooti and Ravindran [10] utilize a positive real number as a preference boundary to distinguish strongly prefer and weakly prefer type responses, the IPCS method utilizes the preference zone. The use of the preference zone instead of the preference boundary avoids inconsistent results by the DM when Δu(T) ≈ β.
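The conversion of responses into weight constraints, including the implied strong-minus-weak constraints described above, can be sketched as follows. The response codes ('sp', 'p', 'wp', 'snp', 'np', 'wnp') and the convention that the resulting rows G satisfy G λ ≤ 0 are our choices for illustration, not notation from the paper.

import numpy as np

def weight_constraint_rows(responses):
    """Turn (trade-off vector, response) pairs into rows G of the weight
    polytope G @ lambda <= 0.  'prefer'-type responses give  -T lambda <= 0,
    'not prefer'-type responses give  T lambda <= 0, and every (strong, weak)
    pair additionally gives  g_strong(lambda) - g_weak(lambda) <= 0."""
    rows, strong, weak = [], [], []
    for T, answer in responses:                    # answer: 'sp', 'p', 'wp', 'snp', 'np', 'wnp'
        T = np.asarray(T, float)
        row = -T if answer in ('sp', 'p', 'wp') else T
        rows.append(row)
        if answer in ('sp', 'snp'):
            strong.append(row)
        elif answer in ('wp', 'wnp'):
            weak.append(row)
    for gs in strong:                              # implied constraints, no question asked
        for gw in weak:
            rows.append(gs - gw)
    return np.array(rows)

# A strongly preferred and a weakly not-preferred trade-off on p = 3 criteria
print(weight_constraint_rows([([2.0, -1.0, 0.0], 'sp'),
                              ([0.0, 1.0, -1.5], 'wnp')]))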
2.7. True weight vector neighborhood and termination
Since the problem is assumed to have p criteria, at most (p - 1) Type II trade-off vectors can be presented to the DM with a given weight vector estimate. The currently estimated weight vector λ̂ is said to be an indifferent weight vector if and only if all (p - 1) preference questions with Type II trade-off vectors result in DM "indifference". The true weight vector λ^0 is then an indifferent weight vector. It is almost impossible for the DM to be absolutely accurate in responding to the preference question with a trade-off vector. Thus the DM should be allowed some degree of vagueness in evaluating a trade-off question. Due to this lack of preciseness, if the remaining weight space is small enough then the DM cannot improve the weight vector estimate any further. Based
on the above observation, the true weight vector neighborhood is defined as the set of indifferent weight vectors, Λ_N = {λ ∈ R^p | λ ∈ Λ, and λ is an indifferent weight vector}. If all (p - 1) responses from the DM result in indifference for the Type II trade-off preference questions, then we terminate the algorithm with λ̂ being the best estimate for the true weight vector. This is referred to as termination criterion II. Since λ^0 belongs to it, the true weight vector neighborhood is not empty. Termination criterion II refers to the case where the feasible weight space is small enough that its middle-most point λ̂ is so close to λ^0 that the DM cannot provide accurate responses with respect to the true weight vector λ^0. If termination criterion I or II is met at solution x* with f(x*) = (c^1 x*, ..., c^p x*), then x* with f(x*) is called the best compromise solution to (2). The best compromise solution may not be unique because two different weight vector estimates, λ^1 ∈ Λ_N and λ^2 ∈ Λ_N, may result in different optimal solutions to (2), depending on the degree of preciseness of the DM. Since Λ_N is assumed to be a very small region and the utility function is assumed to be linear, the two solutions should be very close. The procedure utilizes the strength of preference along with the Type II trade-off vector to assure fast convergence.
3. ALGORITHM
Overall, the algorithm has two phases: the initialization phase and the main phase. During the initialization phase, the algorithm goes through some preliminary tasks before proceeding to the main phase. These tasks include scaling the objective functions, partitioning the criterion set, and determining the best achievable values for all criteria. During the main phase we start with an efficient solution on hand and ask the DM whether she/he prefers a given trade-off. Based on the response, the feasible weight space is reduced. Each step is presented below in a step-by-step manner with comments and explanations between the steps.

3.1. Initialization

Step 1. Present the best achievable values for each criterion. Solve p single objective optimization problems with each criterion as the objective function to be maximized.
Step 2. Partition the criterion set. (a) Ask the DM to specify the PVS (proper vector size) for the trade-off vectors. (b) Ask the DM to specify the reference criterion. (c) Based on the PVS and the reference criterion, partition the criterion set. If the DM cannot determine the PVS, then set the PVS equal to five. If the reference criterion is unspecified, arbitrarily choose criterion one.
Step 3. Scale each objective function. The objective functions are rescaled by applying one of the approaches in Section 1.2: (a) normalization, (b) using range equalization factors, and (c) using 10 raised to an appropriate power.
Step 4. Optimize the composite objective function with equal weights assigned to each objective function. Initially λ^1 = (1/p, ..., 1/p) and f_{p+1}(x) = λ^1 f(x). Maximize f_{p+1}(x) over X. The resulting solution, x*, is efficient.
Step 5. Refinement of the initial estimate of λ^0. One may skip this refinement step and start the main procedure with an arbitrary feasible weight vector estimate for λ^0. Initially the feasible weight space is Λ^1 = {λ ∈ R^p | Σ_i λ_i = 1, λ_i ≥ 0 for i = 1, ..., p}. Let λ^1 be the middle-most point of Λ^1. With λ^1, pose the Type II trade-off questions to the DM. If all (p - 1) responses are "indifferent", terminate the procedure (termination criterion II). Otherwise update the weight estimate λ^1 and the feasible weight set Λ^1. Repeat Step 5 an appropriate number of times.
Step 6. Optimize the composite objective function over X. Determine the composite objective function f_{p+1}(x) with the updated weight estimate λ^1, using f_{p+1}(x) = λ^1 f(x). Optimize f_{p+1}(x) over X. The resulting solution, x*, is efficient.
Step 7. Let q = 0, where q is an iteration index.

3.2. Main procedure

Step 1. Increase the iteration counter by one: let q = q + 1.
Step 2. Pick a utility efficient nonbasic variable x_j, i.e. one with z* = min{Σ_{i=1}^{p} w_j^i λ_i | λ ∈ Λ^q} < 0. If no such x_j exists, then stop (termination criterion I). Otherwise go to Step 3. For each nonbasic x_j, solve sub-problem(Eff): z* = min{Σ_{i=1}^{p} w_j^i λ_i | λ ∈ Λ^q}. If z* ≥ 0, then x_j is utility inefficient; otherwise x_j is utility efficient. This step is repeated until a utility efficient nonbasic variable is found. If no nonbasic variable is utility efficient, then stop the procedure with the current solution as the best compromise solution (termination criterion I).
Step 3. Interaction with the DM. Two types of interaction are performed in this algorithm: one with the Type I trade-off vector and the other with the Type II trade-off vector.
Step 3.1. Interaction with the Type I trade-off vector.
(a) Pick a utility efficient nonbasic variable determined in Step 2, say x_j.
(b) For each partitioned group, present its negative reduced costs to the DM as the trade-off vector, and ask the DM whether he/she likes the given trade-off.
(c) Convert the DM's response into the appropriate mathematical relationship and test for inconsistency. If the response is inconsistent, then correct the DM's inconsistency.
(d) Based on the strength of preference, generate additional constraints.
(e) Employ the redundancy test to remove all redundant constraints.

For each prefer-type response, add a weight constraint of the form

    Σ_{i=1}^{p_l} w_j^i λ_i ≤ 0,                                               (7)

where p_l is the number of criteria in the given partitioned set. Similarly, for each do-not-prefer-type response, add a weight constraint of the form

    Σ_{i=1}^{p_l} (-1) w_j^i λ_i ≤ 0.                                          (8)
For each non-indifferent response, test for inconsistency by solving sub-problem(consistency). If sub-problem(consistency) is infeasible, the DM is inconsistent. If the DM is inconsistent, then find the candidate responses for inconsistency and take the correction measure suggested in Section 2.3. Once the correction method is followed, add the new constraint to the current constraint set and generate constraints using the preference strength. Then test for any redundancy of the new constraints. Let g_s(λ) ≤ 0 and g_w(λ) ≤ 0 be two weight constraints associated with a "strong" response and a "weak" response, respectively. Finally, generate the constraint g_s(λ) - g_w(λ) ≤ 0 for every combination of the strong and the weak responses.
Step 3.2. Interaction with the Type II trade-off vector. After interaction with the trade-off vectors derived from the reduced costs
of a utility efficient (UE) nonbasic variable, interact with the DM using the Type II trade-off vector. If termination criterion II is not satisfied, then go through the following steps.
(a) Convert the DM's response into the appropriate mathematical relationship and test for inconsistency. If the response is inconsistent, then correct the DM's inconsistency.
(b) Based on the strength of preference, generate additional constraints.
(c) Employ the redundancy test to remove all redundant constraints.
Step 4. Determine the new estimate of λ^0 by solving sub-problem(lambda).
Step 5. Optimize the new composite objective function. Go back to Step 1.
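To show how the pieces fit together, the following self-contained sketch simulates only the weight-space part of the procedure: middle-most estimation, Type II questions answered by a hidden "true" weight vector, cutting planes through the estimate, and termination criterion II. It deliberately omits the Type I (reduced-cost) questions, the strength of preference, and the redundancy and consistency tests, so it illustrates the loop structure rather than the full IPCS method; all names and the stopping tolerance are ours.

import numpy as np
from scipy.optimize import linprog

def middle_most(G, p):
    """Middle-most point of {lam >= 0, sum lam = 1, G lam <= 0} (max-min slack)."""
    G = np.zeros((0, p)) if G is None else np.atleast_2d(G)
    r = G.shape[0]
    A_ub = np.vstack([np.hstack([G, np.ones((r, 1))]),
                      np.hstack([-np.eye(p), np.ones((p, 1))])])
    c = np.zeros(p + 1); c[-1] = -1.0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(r + p),
                  A_eq=np.hstack([np.ones((1, p)), [[0.0]]]), b_eq=[1.0],
                  bounds=[(0, None)] * p + [(None, None)], method="highs")
    return res.x[:p]

def ipcs_weight_search(true_lam, eps=1e-3, max_iter=50):
    """Skeleton of the weight-space search: estimate the middle-most point,
    pose Type II questions (simulated here by the hidden true weights), add a
    cutting plane through the estimate for each non-indifferent answer, and
    stop when all p-1 answers are indifferent (termination criterion II)."""
    p = len(true_lam)
    G = None
    for _ in range(max_iter):
        lam_hat = middle_most(G, p)
        new_rows, all_indifferent = [], True
        for i in range(1, p):                           # one Type II question per criterion
            T = np.zeros(p); T[0], T[i] = lam_hat[i], -lam_hat[0]
            du = float(np.dot(true_lam, T))             # simulated DM answer
            if abs(du) <= eps:
                continue
            all_indifferent = False
            new_rows.append(-T if du > 0 else T)        # prefer: -T lam <= 0, else T lam <= 0
        if all_indifferent:
            return lam_hat
        G = np.vstack([G, new_rows]) if G is not None else np.array(new_rows)
    return lam_hat

# Converges to (approximately) the hidden weights [0.5, 0.3, 0.2]
print(ipcs_weight_search(np.array([0.5, 0.3, 0.2])))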
3.3. Convergence

Utilization of the strength of preference and the Type II trade-off vector assures convergence to the true weight vector neighborhood. Due to the partitioning of the criterion set, termination criterion I does not guarantee convergence of the algorithm. The new cutting planes generated from the preference responses over the Type II trade-off vectors pass through the current weight vector estimate λ̂. Unless termination criterion II is satisfied, in which case all (p - 1) responses to the preference questions over the Type II trade-off vectors are indifferent, the feasible weight space is strictly reduced. Let H' be the cutting plane resulting from a preference question over a Type II trade-off vector. Clearly, the feasible weight space Λ is a convex set and the middle-most point λ̂ is a relative interior point of Λ. H' divides Λ into two mutually exclusive nonempty sets, Λ+ and Λ-. Obviously, either λ^0 ∈ Λ+ or λ^0 ∈ Λ-. Hence, from iteration to iteration, the feasible weight space Λ is strictly reduced. Since λ̂ is the middle-most point, H' cuts off approximately half of the feasible weight space. This approach is conservative in the sense that, no matter how the DM responds, the minimum reduction of the feasible weight space is maximized. The IPCS method terminates after a finite number of iterations. Finite convergence of the algorithm can be proven from the finiteness of the hyper-volume of the initial weight space and that of Λ_N. Note that a λ ∈ Λ_N cannot be improved upon as an estimate, due to the DM's vagueness in preference. It is assumed that the hyper-volume of Λ_N, denoted Vol_H(Λ_N), is a finite positive real number. In the following theorem, only the feasible weight space reduction from responses over the Type II trade-off vectors is considered, for simplicity of the proof. Even though the preference responses over the Type I trade-off vectors and the additional constraints resulting from the strength of preference further reduce the feasible weight space, they do not affect the generality of the proof.

Theorem 2. The IPCS procedure terminates in a finite number of iterations.

Proof. Consider the initial weight space Λ_init = {λ ∈ R^p | λ ≥ 0, Σ_i λ_i = 1}. Clearly the hyper-volume of Λ_init is finite; denote it by Vol_H(Λ_init), and similarly define Vol_H(Λ_N) for Λ_N. Consider the IPCS procedure at an iteration, say iteration k. Let Λ^k be the feasible weight set at iteration k, and let λ̂^k ∈ Λ^k be the weight vector estimate at iteration k, which is the middle-most point in Λ^k. If all nonbasic variables are utility inefficient or all responses to the preference questions over the Type II trade-off vectors are indifferent, the procedure terminates at iteration k, proving finite convergence because k is finite. Otherwise, λ̂^k is not in Λ_N; then at least one of the responses to the preference questions over the Type II trade-off vectors is not indifferent. This implies one inequality constraining the feasible weight space. Due to the construction of the Type II trade-off vector, the cutting plane corresponding to the constraining inequality passes through the point λ̂^k, which is a relative interior point of the remaining feasible weight space. The remaining feasible weight space is divided into two mutually exclusive non-empty sets; let Λ^{k+1} be the one of those sets such that λ^0 ∈ Λ^{k+1}. Letting φ < 1 be a positive real number such that Vol_H(Λ^{k+1}) ≤ φ Vol_H(Λ^k), the relation φ^j Vol_H(Λ_init) < Vol_H(Λ_N) clearly holds for some finite integer j > 0, so the weight space cannot be strictly reduced indefinitely and the procedure terminates in a finite number of iterations. Q.E.D.
4. COMPUTATIONAL PERFORMANCE OF THE ALGORITHM
A FORTRAN code of the method has been implemented on an IBM RISC System/6000 workstation. The test results on 360 randomly generated problems are reported in this section. Problems of form (1) were generated for the tests. Matrix A, with 70% density, was randomly generated using a random number generator for each of the problems tested. The right-hand side vector b was determined by multiplying the sum of the coefficients in each row of A by a constant (m/n). The cost coefficient matrix C was also randomly generated such that 20% of the coefficients were negative values. The weight parameters λ_i, i = 1, ..., p, used in the linearly approximated utility function were randomly generated and normalized so that Σ_i λ_i = 1. Problems of twelve different sizes were generated using sample sizes of 30. Note that the number of Type II questions at an iteration was (p - 1). For each of the test problems, the number of questions, the number of iterations, and the solution time were recorded, and statistics on these measures are presented in Tables 1-3. The CPU time was recorded in hundredths of a second.

Table 1 displays the results of the method for the randomly generated problems with five criteria and PVS = 3. The table shows that most of the questions required by the method are Type II questions, which are easier to answer than other questions since a Type II question involves only two criteria at a time. Table 1 also shows the number of iterations and the CPU time required by the method. Tables 2 and 3 show the results of the method for test problems with eleven criteria. The data in Table 2 were derived from solving problems using five partitioned criteria subsets.
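A generator in the spirit of this experimental set-up might look as follows; the exact random distributions are not stated in the paper, so uniform draws are assumed, and the function name is illustrative.

import numpy as np

def generate_test_problem(m, n, p, seed=0):
    """Random MOLP of form (1) in the spirit of Section 4: A has roughly 70%
    non-zero entries, b is each row sum of A scaled by m/n, about 20% of the
    cost coefficients are negative, and the hidden true weights sum to one.
    (Uniform distributions are an assumption, not taken from the paper.)"""
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.0, 10.0, (m, n)) * (rng.random((m, n)) < 0.70)
    b = A.sum(axis=1) * (m / n)
    signs = np.where(rng.random((p, n)) < 0.20, -1.0, 1.0)
    C = rng.uniform(0.0, 10.0, (p, n)) * signs
    true_lam = rng.random(p)
    return A, b, C, true_lam / true_lam.sum()

A, b, C, lam = generate_test_problem(m=10, n=10, p=5, seed=42)
print(A.shape, b.shape, C.shape, lam.sum())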
Table 1. Computational results for the IPCS method with p = 5 and PVS = 3

                          No. of questions asked        Number of iterations      CPU time
No.   Problem size        Total    Type I:Type II       Average    SD             Average    SD
1     10 x 10             15.1     2.2/12.9             3.2        0.06           1.9        0.08
2     10 x 30             18.5     2.9/15.6             3.9        0.09           4.7        0.18
3     20 x 30             20.9     3.4/17.5             4.4        0.04           9.6        0.15
4     20 x 50             24.1     4.8/19.3             4.8        0.06           20.1       0.46
5     30 x 30             23.0     3.8/19.2             4.8        0.05           15.8       0.26
6     30 x 50             24.4     4.1/20.3             5.1        0.04           33.3       0.46

Table 2. Computational results for the IPCS method with p = 11 and PVS = 3

                          No. of questions asked        Number of iterations      CPU time
No.   Problem size        Total    Type I:Type II ratio Average    SD             Average    SD
1     10 x 10             16.6     0.34                 5.1        0.10           80.1       2.05
2     10 x 30             70.1     0.33                 5.3        0.11           98.8       2.56
3     20 x 30             80.3     0.35                 6.0        0.05           114.8      1.38
4     20 x 50             90.8     0.35                 6.7        0.17           178.1      5.92
5     30 x 30             109.2    0.35                 8.1        0.18           183.2      5.71
6     30 x 50             102.6    0.36                 1.5        0.22           222.5      9.31

Table 3. Computational results for the IPCS method with p = 11 and PVS = 6

                          No. of questions asked        Number of iterations      CPU time
No.   Problem size        Total    Type I:Type II ratio Average    SD             Average    SD
1     10 x 10             49.6     0.08                 4.6        0.07           15.4       0.45
2     10 x 30             41.9     0.07                 3.9        0.06           15.8       0.51
3     20 x 30             46.6     0.08                 4.3        0.05           28.5       0.58
4     20 x 50             46.6     0.08                 4.3        0.06           43.7       0.93
5     30 x 30             43.8     0.08                 4.1        0.06           37.0       0.82
6     30 x 50             46.6     0.08                 4.3        0.06           60.5       1.18
The data in Table 3, on the other hand, were derived by solving the same problems as before while using only two partitioned criteria subsets. We note that the number of questions posed to the DM for problems with PVS = 3 was approximately twice the number of questions posed for problems with PVS = 6. Also, the number of iterations needed for problems with PVS = 3 was approximately 1.5 times the number of iterations needed for problems with PVS = 6. It appears that the need for more partitioned subsets requires more questions due to the nature of the Type II questions. From these limited experiments, it is noted that there is a trade-off relation between the reduced cognitive burden on the DM and the total number of questions to which the DM must respond.

5. CONCLUSIONS
Utilization of both the partitioning of the criterion set and the Type II trade-off vector are unique features of the IPCS method. The IPCS method has some advantages over the existing methods, especially when a problem involves a large number of criteria and hence intense trade-off evaluation, which imposes a heavy cognitive burden on the DM. The DM faces easier preference questions with the IPCS method because the criteria set is partitioned. The reduced burden in turn improves the consistency of the DM's responses. Furthermore, the use of the Type II trade-off vectors and the strength of preference reduces the number of preference questions presented to the DM. The computational performance of the algorithm has been presented in this paper. A large number of problems were solved with the number of objectives varying from five to eleven. The number of questions asked, the number of iterations, and the CPU time for various problems have also been reported.

REFERENCES

1. S. Zionts, A survey of multiple criteria integer programming methods. Ann. Discr. Math. 5, 389-398 (1979).
2. R. E. Rosenthal, Concepts, theory, and techniques: principles of multiobjective optimization. Decis. Sci. 16, 133-152 (1985).
3. S. Zionts and J. Wallenius, An interactive multiple objective linear programming method for a class of underlying nonlinear utility functions. Mgmt Sci. 29, 519-523 (1983).
4. P. Korhonen and J. Laakso, A visual interactive method for solving the multiple criteria problem. Eur. J. Ops Res. 24, 277-287 (1986).
5. C. W. Churchman and R. Ackoff, An approximate measure of value. Ops Res. 2, 172-181 (1954).
6. S. Zionts and J. Wallenius, An interactive programming method for solving the multiple criteria problem. Mgmt Sci. 22, 652-663 (1976).
7. R. E. Steuer, Multiple Criteria Optimization: Theory, Computation, and Application. Wiley, New York (1986).
8. S. M. Belenson and K. C. Kapur, An algorithm for solving multicriterion linear programming problems with examples. Ops Res. Q. 24, 65-77 (1973).
9. G. Miller, The magical number seven plus or minus two: some limits on our capacity for processing information. Psychol. Rev. 63, 81-97 (1956).
10. B. Malakooti and A. Ravindran, Interactive paired comparison simplex method. Ann. Ops Res. 5, 575-597 (1985).
11. G. L. Thompson, F. M. Tonge and S. Zionts, Techniques for removing nonbinding constraints and extraneous variables from linear programming problems. Mgmt Sci. 12, 588-608 (1966).