Computers & Operations Research 32 (2005) 2235 – 2254
Linear programming models for estimating weights in the analytic hierarchy process

Bala Chandran (a), Bruce Golden (b,*), Edward Wasil (c)

(a) Department of Industrial Engineering and Operations Research, University of California, Berkeley, CA 94720, USA
(b) R.H. Smith School of Business, University of Maryland, College Park, MD 20742, USA
(c) Kogod School of Business, American University, Washington, DC 20016, USA

* Corresponding author. Tel.: +1-301-405-2232; fax: +1-301-405-3364. E-mail address: [email protected] (B. Golden).
Abstract

We present an approach based on linear programming (LP) that estimates the weights for a pairwise comparison matrix generated within the framework of the analytic hierarchy process. Our approach makes sense for a number of reasons, which we discuss. We apply our LP approach to several sample problems and compare our results to those produced by other, widely used methods. In addition, we extend our linear program to include applications where the pairwise comparison matrix is constructed from interval judgments.
© 2004 Elsevier Ltd. All rights reserved.

Keywords: Analytic hierarchy process; Linear programming; Interval AHP; Sensitivity analysis
1. Introduction

In the late 1970s, Saaty [1,2] developed the analytic hierarchy process (AHP) as a robust approach to multicriteria decision making. In the last 25 years, the AHP has been applied in more than 30 diverse areas to rank, select, evaluate, and benchmark decision alternatives (see [3,4]).

In the AHP, the decision maker models a problem as a hierarchy of criteria, subcriteria, and alternatives. After the hierarchy is constructed, the decision maker assesses the importance of each element at each level of the hierarchy. This is accomplished by generating entries in a pairwise comparison matrix where elements are compared to each other. For each pairwise comparison matrix, the decision maker typically uses the eigenvector method (EM) (more about this method in the next section) to generate a priority vector that gives the estimated, relative weights of the elements at each level of the hierarchy. Weights across various levels of the hierarchy are then aggregated using the principle of hierarchic composition to produce a final weight for each alternative.
In this paper, we present a novel approach based on linear programming (LP) to generate a priority vector within the framework of the AHP. The rest of this paper is organized as follows. In Section 2, we review two common methods for deriving priority vectors. In Sections 3 and 4, we formulate our linear program for deriving priority vectors, discuss the advantages of our LP approach, and present possible extensions, in particular, to the recently studied interval AHP. In Section 5, we apply the LP approach to five pairwise comparison matrices and discuss the results. In Section 6, we present our conclusions.

2. Estimating weights: traditional methods

Over the years, several methods have emerged for estimating the weights from a matrix of pairwise comparisons, including EM and logarithmic least squares (LLS). A discussion of the advantages of the competing methods is provided by Harker and Vargas [5].

EM was developed by Saaty [2] and is the most widely used method. The popular AHP software package Expert Choice [6] uses EM to generate priority vectors. EM solves an eigenvalue problem associated with an n × n pairwise comparison matrix in the following way. Let A = (a_ij) for i, j = 1, 2, ..., n denote a square pairwise comparison matrix, where a_ij gives the importance of element i relative to element j. Each entry in matrix A is positive (a_ij > 0) and reciprocal (a_ij = 1/a_ji for all i, j = 1, 2, ..., n). The decision maker wants to compute a vector of weights (w_1, w_2, ..., w_n) associated with A. If the matrix A is consistent (that is, a_ij = a_ik a_kj for all i, j, k = 1, 2, ..., n), then A contains no errors (the weights are already known) and we have

    a_ij = w_i/w_j,    i, j = 1, 2, ..., n.                                  (1)

Multiplying both sides of (1) by w_j and summing over all j, we obtain

    Σ_{j=1}^{n} a_ij w_j = n w_i,    i = 1, 2, ..., n,                       (2)

which, in matrix notation, is equivalent to

    Aw = nw.                                                                 (3)

The vector w is the principal right eigenvector of the matrix A corresponding to the eigenvalue n. If the vector of weights is not known, then it can be estimated from the pairwise comparison matrix Â generated by the decision maker by solving

    Â ŵ = λ ŵ                                                                (4)

for ŵ. The matrix Â contains the pairwise judgments of the decision maker and approximates the matrix A whose entries are unknown. In (4), λ is an eigenvalue of Â and ŵ is the estimated vector of weights. Saaty [2] uses the largest eigenvalue λ_max of Â when solving for ŵ in

    Â ŵ = λ_max ŵ.                                                           (5)
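As a concrete illustration of (5), the short sketch below computes ŵ and λ_max for a small positive reciprocal matrix with NumPy. It is only a minimal illustration of EM, not the implementation used in Expert Choice; the 3 × 3 matrix is the one that appears later in Fig. 1, and the printed weights should come out close to (0.55, 0.24, 0.21).

```python
import numpy as np

def eigenvector_method(A):
    """Eigenvector method (EM): return the normalized principal right
    eigenvector of A and the associated largest eigenvalue lambda_max."""
    eigenvalues, eigenvectors = np.linalg.eig(A)
    k = np.argmax(eigenvalues.real)        # index of the principal eigenvalue
    lam_max = eigenvalues[k].real
    w = np.abs(eigenvectors[:, k].real)    # the Perron vector has entries of one sign
    return w / w.sum(), lam_max            # normalize the weights to sum to one

A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 1.0],
              [1/3, 1.0, 1.0]])
w_hat, lam_max = eigenvector_method(A)
print(np.round(w_hat, 3), round(lam_max, 3))
```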
Saaty has shown that λ_max is always greater than or equal to n and that if its value is close to n, then the estimated vector of weights ŵ solves (3) approximately. In addition, Saaty used λ_max to develop a measure of the consistency of the matrix Â. The consistency index (CI) is given by

    CI = (λ_max − n)/(n − 1).                                                (6)

The consistency ratio (CR) is given by

    CR = CI/RI.                                                              (7)

The random index (RI) is the average CI of a large number of randomly generated matrices. RI depends on the order of the matrix. A CR of 0.10 or less is considered acceptable (see [2]).

The LLS method has also been used to estimate the vector of weights. In LLS, the weights w_i, for i = 1, ..., n, are chosen to minimize the objective

    Σ_{i=1}^{n} Σ_{j=1}^{n} (ln a_ij − ln w_i + ln w_j)².                    (8)

Given that a_ij = 1/a_ji for all i, j = 1, 2, ..., n, the LLS solution is quite simple: w_i is given by the geometric mean of row i (see [2]).
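The sketch below implements CI, CR, and the LLS (row geometric mean) weights for a positive reciprocal matrix. The random index values are the commonly tabulated Saaty figures for matrices of order 3 through 7; published RI tables differ slightly, so treat them as indicative.

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}   # commonly cited random indices

def consistency_ratio(lam_max, n):
    """CI and CR from Eqs. (6) and (7)."""
    ci = (lam_max - n) / (n - 1)
    return ci, ci / RI[n]

def lls_weights(A):
    """LLS solution of (8): w_i is the geometric mean of row i, normalized."""
    w = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return w / w.sum()
```

For the matrix used in the previous sketch, lls_weights returns weights of roughly (0.55, 0.24, 0.21) as well.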
3. LP approach

In this section, we develop a two-stage LP approach for generating a priority vector. In the first stage, we formulate a linear program that provides a consistency bound for a specified pairwise comparison matrix. In the second stage, we use the consistency bound in a linear program whose solution is a priority vector. We illustrate our approach by constructing models for a small pairwise comparison matrix.

3.1. First stage: linear program to establish the consistency bound

Let the equation

    w_i/w_j = a_ij ε_ij,    i, j = 1, 2, ..., n                              (9)

define an error ε_ij in the estimate of the relative preference a_ij. If the decision maker is consistent, then, after taking the natural logarithm of (9), we have ln ε_ij = 0.

Next, we define the variables for our linear program. The constants are given by n = number of rows (columns) in the square matrix A and a_ij = entry for row i and column j in the matrix A. The decision variables are given by w_i = weight of element i and ε_ij = error factor in estimating a_ij. We use three transformed decision variables in our model: x_i = ln(w_i), y_ij = ln(ε_ij), and z_ij = |y_ij|. The first-stage linear program is given by the following:

    Minimize   Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} z_ij                              (10)

    subject to
    x_i − x_j − y_ij = ln a_ij,    i, j = 1, 2, ..., n, i ≠ j,               (11)
    z_ij ≥ y_ij,                   i, j = 1, 2, ..., n, i < j,               (12)
    z_ij ≥ y_ji,                   i, j = 1, 2, ..., n, i < j,               (13)
    x_1 = 0,                                                                 (14)
    x_i − x_j ≥ 0,                 i, j = 1, 2, ..., n, a_ij > 1,            (15)
    x_i − x_j ≥ 0,                 i, j = 1, 2, ..., n, a_ik ≥ a_jk for all k, a_iq > a_jq for some q,   (16)
    z_ij ≥ 0,                      i, j = 1, 2, ..., n,                      (17)
    x_i, y_ij unrestricted,        i, j = 1, 2, ..., n.                      (18)
We obtain constraints (11) by taking the natural logarithm of (9). In comparison matrix A, if a_ij is overestimated (that is, the decision maker's judgment of entry i versus entry j is greater than the true value), then a_ji is underestimated. We then have

    ε_ij = 1/ε_ji,    i, j = 1, 2, ..., n                                    (19)

or

    y_ij = −y_ji,     i, j = 1, 2, ..., n.                                   (20)

By obtaining the greater of y_ij and y_ji, constraints (12) and (13) identify for each i and j the element that is overestimated and the magnitude of the error. Since the solution set to constraints (11)–(13) is infinitely large, we can arbitrarily fix the value of any w_i without loss of generality. This is done in constraint (14) by setting w_1 = 1. Note that the final weights can be normalized to sum to one.

There are two desirable properties of a pairwise comparison matrix, element dominance (ED) and row dominance (RD), that we would like to model in our linear program. A solution method preserves rank weakly if a_ij ≥ 1 implies w_i ≥ w_j. This property is known as ED or weak rank preservation [7]. If a_ij is exactly equal to 1, then an argument could be made for either w_i ≥ w_j or w_j ≥ w_i. Therefore, we modify the definition of weak rank preservation in the following way: ED is preserved if a_ij > 1 implies w_i ≥ w_j. In our first-stage formulation, ED is explicitly enforced through constraints (15). EM and LLS do not preserve ED. We point out that if the comparison matrix has cardinal inconsistency, that is, a_ij > 1, a_jk > 1, and a_ki > 1, then the only feasible solution is w_i = w_j = w_k. However, such a comparison matrix would be highly inconsistent. We see that the ED constraints in (15) have the additional benefit of detecting cardinal inconsistency.

A solution method preserves rank strongly if a_ik ≥ a_jk for all k implies w_i ≥ w_j. This property is known as RD or strong rank preservation [7]. If a_ik = a_jk for all k, then an argument could be made for both w_i ≥ w_j and w_j ≥ w_i. Therefore, we modify the definition of strong rank preservation as follows: RD is preserved if a_ik ≥ a_jk for all k and a_ik > a_jk for some k implies w_i ≥ w_j.
In our first-stage linear program, RD is explicitly enforced through constraints (16). Both EM and LLS guarantee RD (see [7]). We point out that the x_i and y_ij decision variables in (18) are unrestricted since they are logarithms of positive, real numbers.

The objective function (10) minimizes the sum of logarithms of positive errors in natural logarithm space. In the nontransformed space, the objective function minimizes the product of the overestimated errors (ε_ij > 1). Therefore, the objective function minimizes the geometric mean of all errors greater than one. Let z* be the optimal objective function value of the first-stage linear program. Given a perfectly consistent matrix, there is no error in the estimate and z* is equal to zero (since ε_ij = 1 for all i, j = 1, 2, ..., n or y_ij = 0 for all i, j = 1, 2, ..., n). The notion of minimizing the geometric mean of errors fits well with the concept of multiplicative errors in the AHP.

The objective function is, in some sense, a measure of the inconsistency in the pairwise comparison matrix, that is, the greater the value of the objective function, the more inconsistent is the matrix. Since the objective function minimizes the sum of n(n − 1)/2 decision variables (namely, z_ij for i < j), we define the CI within the LP framework as follows:

    CI_LP = 2z*/(n(n − 1)).                                                  (6')

CI_LP is the average value of z_ij for elements above the diagonal in the comparison matrix. In preliminary computational experiments, CI_LP and CI (see Eq. (6)) seem to be highly correlated. We hope to explore this connection in greater detail in future work.

3.2. Second stage: linear program to generate a priority vector

When we solve the first-stage linear program, the solution set consists of all priority vectors that minimize the product of all errors ε_ij. It is possible that there are multiple optimal solutions to the first-stage model. In the second stage, we solve a linear program that selects from this set of alternative optima the priority vector that minimizes the maximum of the errors ε_ij. The second-stage linear program is given by the following:

    Minimize   z_max                                                         (21)

    subject to
    Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} z_ij = z*,                                   (22)
    x_i − x_j − y_ij = ln a_ij,    i, j = 1, 2, ..., n, i ≠ j,               (23)
    z_ij ≥ y_ij,                   i, j = 1, 2, ..., n, i < j,               (24)
    z_ij ≥ y_ji,                   i, j = 1, 2, ..., n, i < j,               (25)
    z_max ≥ z_ij,                  i, j = 1, 2, ..., n, i < j,               (26)
    x_1 = 0,                                                                 (27)
    x_i − x_j ≥ 0,                 i, j = 1, 2, ..., n, a_ij > 1,            (28)
    x_i − x_j ≥ 0,                 i, j = 1, 2, ..., n, a_ik ≥ a_jk for all k, a_iq > a_jq for some q,   (29)
    z_ij ≥ 0,                      i, j = 1, 2, ..., n,                      (30)
    x_i, y_ij unrestricted,        i, j = 1, 2, ..., n,                      (31)
    z_max ≥ 0.                                                               (32)
Constraint (22) ensures that only those solution vectors that are optimal in the first-stage linear program are feasible in the second-stage model. Recall that z* is the optimal objective function value of the first-stage model. Constraints (26) find z_max, the maximum value of the errors z_ij. The objective function (21) minimizes z_max. Constraint (32) is the nonnegativity constraint for z_max (although this constraint is redundant). All other constraints in the second-stage model are identical to the corresponding constraints in the first-stage model.

3.3. Illustrative linear programs

We apply our two-stage LP approach to the 3 × 3 pairwise comparison matrix given in Fig. 1. The decision maker needs to specify only the values in the upper triangular part of the matrix, as the matrix is reciprocal.

      1      2      3
     1/2     1      1
     1/3     1      1

    Fig. 1. 3 × 3 pairwise comparison matrix.

The first-stage model for the matrix in Fig. 1 is given by the following:

    Minimize   z_12 + z_13 + z_23                                            (33)

    subject to
    x_1 − x_2 − y_12 = 0.693,                                                (34)
    x_2 − x_1 − y_21 = −0.693,                                               (35)
    x_1 − x_3 − y_13 = 1.099,                                                (36)
    x_3 − x_1 − y_31 = −1.099,                                               (37)
    x_2 − x_3 − y_23 = 0,                                                    (38)
    x_3 − x_2 − y_32 = 0,                                                    (39)
    z_12 − y_12 ≥ 0,                                                         (40)
    z_12 − y_21 ≥ 0,                                                         (41)
    z_13 − y_13 ≥ 0,                                                         (42)
    z_13 − y_31 ≥ 0,                                                         (43)
    z_23 − y_23 ≥ 0,                                                         (44)
    z_23 − y_32 ≥ 0,                                                         (45)
    x_1 − x_2 ≥ 0,                                                           (46)
    x_1 − x_3 ≥ 0,                                                           (47)
    x_1 − x_2 ≥ 0,                                                           (48)
    x_1 − x_3 ≥ 0,                                                           (49)
    x_2 − x_3 ≥ 0,                                                           (50)
    x_1 = 0,                                                                 (51)
    z_ij ≥ 0, x_i, y_ij unrestricted,    i, j = 1, 2, 3.                     (52)
Constraints (34)–(39) enforce the comparison ratios. Constraints (40)–(45) model the absolute values of the errors. Constraints (46) and (47) enforce ED (element 1 dominates element 2; element 1 dominates element 3). Constraints (48)–(50) enforce RD (row 1 dominates row 2; row 1 dominates row 3; row 2 dominates row 3). Constraint (51) sets the weight of the first element to 1. We point out that constraints (48) and (49) are redundant. When we solve the first-stage model using LINDO [8], we obtain z* = 0.406.

The second-stage model for the matrix in Fig. 1 is given by the following:

    Minimize   z_max                                                         (53)

    subject to
    z_12 + z_13 + z_23 = 0.406,                                              (54)
    x_1 − x_2 − y_12 = 0.693,                                                (55)
    x_2 − x_1 − y_21 = −0.693,                                               (56)
    x_1 − x_3 − y_13 = 1.099,                                                (57)
    x_3 − x_1 − y_31 = −1.099,                                               (58)
    x_2 − x_3 − y_23 = 0,                                                    (59)
    x_3 − x_2 − y_32 = 0,                                                    (60)
    z_12 − y_12 ≥ 0,                                                         (61)
    z_12 − y_21 ≥ 0,                                                         (62)
    z_13 − y_13 ≥ 0,                                                         (63)
    z_13 − y_31 ≥ 0,                                                         (64)
    z_23 − y_23 ≥ 0,                                                         (65)
    z_23 − y_32 ≥ 0,                                                         (66)
    z_max − z_12 ≥ 0,                                                        (67)
    z_max − z_13 ≥ 0,                                                        (68)
    z_max − z_23 ≥ 0,                                                        (69)
    x_1 − x_2 ≥ 0,                                                           (70)
    x_1 − x_3 ≥ 0,                                                           (71)
    x_1 − x_2 ≥ 0,                                                           (72)
    x_1 − x_3 ≥ 0,                                                           (73)
    x_2 − x_3 ≥ 0,                                                           (74)
    x_1 = 0,                                                                 (75)
    z_max ≥ 0,                                                               (76)
    z_ij ≥ 0, x_i, y_ij unrestricted,    i, j = 1, 2, 3.                     (77)

When we solve the second-stage model using LINDO [8], we obtain z*_max = 0.135 and the priority vector (0.55, 0.24, 0.21). This priority vector agrees with the vector generated by EM in Expert Choice [6].
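For readers who want to reproduce this small example without LINDO, the sketch below builds and solves both stages with scipy.optimize.linprog. It restates the model compactly by substituting the y_ij variables out (so that z_ij ≥ |x_i − x_j − ln a_ij|), includes the ED constraints (15), and omits the RD constraints (16), which do not change the optimum for this particular matrix. The function name and variable ordering are our own choices; for the matrix in Fig. 1 it should return z* ≈ 0.406, z_max ≈ 0.135, and weights close to (0.55, 0.24, 0.21).

```python
import numpy as np
from scipy.optimize import linprog

def two_stage_lp_weights(A):
    """Two-stage LP of Sections 3.1-3.2 with the y_ij variables substituted out,
    so that z_ij >= |x_i - x_j - ln a_ij|.  ED constraints (15) are included;
    RD constraints (16) are omitted in this sketch."""
    n = A.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    m = len(pairs)
    nv = n + m + 1                              # variables: x_1..x_n, z_ij (i<j), z_max

    def zero():
        return [0.0] * nv

    A_ub, b_ub = [], []
    for k, (i, j) in enumerate(pairs):          # z_ij >= +/-(x_i - x_j - ln a_ij)
        for s in (1.0, -1.0):
            r = zero()
            r[i], r[j], r[n + k] = s, -s, -1.0
            A_ub.append(r)
            b_ub.append(s * np.log(A[i, j]))
    for i in range(n):                          # ED (15): a_ij > 1  =>  x_i - x_j >= 0
        for j in range(n):
            if A[i, j] > 1.0:
                r = zero()
                r[j], r[i] = 1.0, -1.0
                A_ub.append(r)
                b_ub.append(0.0)
    x1_fixed = zero()
    x1_fixed[0] = 1.0                           # constraint (14): x_1 = 0
    bounds = [(None, None)] * n + [(0, None)] * (m + 1)

    c1 = zero()                                 # stage 1: minimize the sum of z_ij
    for k in range(m):
        c1[n + k] = 1.0
    s1 = linprog(c1, A_ub=A_ub, b_ub=b_ub, A_eq=[x1_fixed], b_eq=[0.0],
                 bounds=bounds, method="highs")
    z_star = s1.fun

    c2 = zero()                                 # stage 2: minimize z_max
    c2[-1] = 1.0
    A_ub2, b_ub2 = list(A_ub), list(b_ub)
    for k in range(m):                          # z_ij - z_max <= 0
        r = zero()
        r[n + k], r[-1] = 1.0, -1.0
        A_ub2.append(r)
        b_ub2.append(0.0)
    s2 = linprog(c2, A_ub=A_ub2, b_ub=b_ub2, A_eq=[x1_fixed, c1],
                 b_eq=[0.0, z_star], bounds=bounds, method="highs")

    w = np.exp(s2.x[:n])
    return w / w.sum(), z_star, s2.fun

A = np.array([[1.0, 2.0, 3.0], [0.5, 1.0, 1.0], [1/3, 1.0, 1.0]])
print(two_stage_lp_weights(A))
```

The same constraint generator extends directly to the larger matrices of Section 5; only the RD constraints (16) would need to be added to match the models reported there. With the HiGHS backend, recent SciPy versions also report the dual values (marginals) of the constraints on the result object, which is the information the sensitivity analysis of Section 4.2 relies on.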
4. Advantages of the LP approach

In this section, we discuss several advantages of our LP approach, including the use of dual variables to identify inconsistencies in the decision maker's pairwise comparisons and extensions to handle interval judgments.

4.1. Simplicity

Our linear programs are straightforward and easy to understand and formulate. Furthermore, both linear programs can be solved in very little computational time using readily available software such as LINDO [8]. In fact, the two-stage linear program is no harder, computationally, than a single-stage linear program. Once an optimal solution is obtained in the first stage, the additional second-stage constraints may be added, and the computations can continue from the first-stage extreme point solution. This process can be automated. In addition, we remark that EM requires the solution of a nonlinear program. To be fair, it is a rather easy one (namely, Maximize λ subject to Aw = λw and eᵀw = 1). Nonetheless, from a theoretical point of view, linear programs are easier than nonlinear programs.

4.2. Sensitivity analysis

Every linear program allows a decision maker to perform sensitivity analysis on inputs to the model. In particular, a decision maker might be interested in answering the following questions. Which entry in the pairwise comparison matrix should be changed to reduce inconsistency? How much should the entry be changed? When traditional approaches like EM and LLS are used, it is difficult to answer these questions.

Suppose that a decision maker estimates an entry egregiously or incorrectly inputs the entry into the pairwise comparison matrix. Using EM and LLS, it would be difficult for the decision maker to identify the offending entry by simple inspection (we point out that Expert Choice [6] can automatically locate inconsistencies among a decision maker's judgments and recommend revised entries). It is easy for our LP approach to answer these questions. In the first-stage linear program, the values of the dual variables at optimality provide an indication of the egregious or incorrect entries in the pairwise comparison matrix.

In LP, we know that a dual variable with value k at optimality has the following interpretation: if we increase the right-hand side of the dual variable's corresponding constraint by one unit, then the objective function value increases by k units. Since the objective function value in our first-stage linear program can be thought of as a measure of inconsistency (see Section 3.1), the dual variable that corresponds to each constraint in (11) measures the amount by which the inconsistency value increases if the corresponding right-hand side constant (that is, ln a_ij) increases by one unit. Since we are in natural logarithm space, a dual variable with value k at optimality says that the objective function value (inconsistency value) increases by k units when the corresponding a_ij increases by a factor of e, the base of the natural logarithm. This is useful information for a decision maker. It is now possible to identify which a_ij to change in order to decrease the inconsistency value by the greatest amount. In addition, a dual variable with a negative value at optimality indicates that a_ij should be increased, while a positive value indicates that a_ij should be decreased.

We now consider the dual variables associated with the ED constraints (15). Specifically, when the value of the dual variable at optimality is greater than zero for an ED constraint, the constraint is binding. This indicates that the pairwise comparison entry corresponding to that particular constraint might be flawed. This set of dual variables can help a decision maker detect cardinal inconsistency in a pairwise comparison matrix (detecting cardinal inconsistency is not possible using EM and LLS). We illustrate this capability of our LP approach in Section 5.

4.3. Modeling interval judgments

In a pairwise comparison matrix, typically a_ij is a single number that estimates w_i/w_j. Suppose that, instead of a single number, an interval is specified with a lower bound ℓ_ij and an upper bound u_ij on the estimate. For example, consider the 3 × 3 pairwise comparison matrix given in Fig. 2. We see that entry a_12 is a number between 5 and 7. Since the matrix is reciprocal, a_21 is a number between 1/7 and 1/5.

      1             [5, 7]      [2, 4]
     [1/7, 1/5]      1          [1/3, 1/2]
     [1/4, 1/2]     [2, 3]       1

    Fig. 2. 3 × 3 pairwise comparison matrix with lower and upper bounds [ℓ_ij, u_ij] for each entry.

Arbel and Vargas [9] treat the interval bounds as hard constraints. They develop two techniques to generate priority vectors when interval judgments are used: preference simulation and preference programming. In preference simulation, several comparison matrices are obtained by sampling from the specified intervals. The EM approach is then applied to each matrix to produce a priority vector. The average of the feasible priority vectors gives the final set of weights. Of course, this approach can be extremely inefficient and computationally burdensome when most of the priority vectors are infeasible. This can happen as a consequence of several tight interval judgments. Preference programming uses linear inequalities and equations of the form

    ℓ_ij ≤ w_i/w_j ≤ u_ij,    i, j = 1, 2, ..., n, i < j,                    (78)
    Σ_{i=1}^{n} w_i = 1,                                                     (79)
    w_i ≥ 0,                  i = 1, 2, ..., n,                              (80)

where ℓ_ij and u_ij are the lower and upper bounds of the specified interval. If a solution to this set of equations exists, it defines an n-dimensional priority space. The arithmetic mean of the vertices of this feasible region becomes the final priority vector. No attempt is made to identify the best vector in the feasible region.

The first-stage linear program specified by (10)–(18) can be revised to handle the interval AHP problem in the following way. Each entry a_ij is the geometric mean of the interval bounds, that is,
a_ij = (ℓ_ij × u_ij)^(1/2). We use the geometric mean in order to preserve the inverse reciprocal property of the matrix: (ℓ_ij × u_ij)^(1/2) = 1/((1/ℓ_ij) × (1/u_ij))^(1/2). In the first-stage linear program, we replace constraints (15) and (16) with the following constraints:

    x_i − x_j ≥ ln ℓ_ij,    i, j = 1, 2, ..., n, i < j,                      (81)
    x_i − x_j ≤ ln u_ij,    i, j = 1, 2, ..., n, i < j.                      (82)
If ℓ_ij > 1, then the priority vector is bound by this value and generates weights such that w_i ≥ w_j. Thus, when ℓ_ij > 1, a constraint in (81) behaves like an element dominance constraint for a_ij > 1. Similarly, when u_ij < 1, a constraint in (82) behaves like an ED constraint for a_ij < 1. The first-stage model for handling interval judgments has objective function (10) and constraints (11)–(14), (81), (82), (17), and (18).

The first-stage linear program that models the interval judgments shown in Fig. 2 is given by the following:

    Minimize   z_12 + z_13 + z_23                                            (83)
    subject to
    x_1 − x_2 − y_12 = 1.778,                                                (84)
    x_2 − x_1 − y_21 = −1.778,                                               (85)
    x_1 − x_3 − y_13 = 1.040,                                                (86)
    x_3 − x_1 − y_31 = −1.040,                                               (87)
    x_2 − x_3 − y_23 = −0.896,                                               (88)
    x_3 − x_2 − y_32 = 0.896,                                                (89)
    z_12 − y_12 ≥ 0,                                                         (90)
    z_12 − y_21 ≥ 0,                                                         (91)
    z_13 − y_13 ≥ 0,                                                         (92)
    z_13 − y_31 ≥ 0,                                                         (93)
    z_23 − y_23 ≥ 0,                                                         (94)
    z_23 − y_32 ≥ 0,                                                         (95)
    x_1 − x_2 ≥ 1.609,                                                       (96)
    x_1 − x_2 ≤ 1.946,                                                       (97)
    x_1 − x_3 ≥ 0.693,                                                       (98)
    x_1 − x_3 ≤ 1.386,                                                       (99)
    x_2 − x_3 ≥ −1.099,                                                      (100)
    x_2 − x_3 ≤ −0.693,                                                      (101)
    x_1 = 0,                                                                 (102)
    z_ij ≥ 0, x_i, y_ij unrestricted,    i, j = 1, 2, 3.                     (103)

Constraints (84)–(89) enforce the comparison ratios. Constraints (90)–(95) model the absolute values of the errors. Constraints (96)–(101) enforce the bounds on the ratios of the weights. Constraint (102) sets the weight of the first element to 1. We do not formulate the second-stage linear program due to space considerations.

4.4. Mixed pairwise comparison matrices

Our LP approach can easily handle a mixture of single entries and interval bounds in a pairwise comparison matrix. That is, some entries are single numbers a_ij and some entries have interval bounds of the form [ℓ_ij, u_ij]. An example is the matrix given in Fig. 3.

      1             [8, 9]      2
     [1/9, 1/8]      1          [1/7, 1/5]
      1/2           [5, 7]       1

    Fig. 3. 3 × 3 mixed pairwise comparison matrix.

The first-stage linear program that models the mixed pairwise comparison matrix shown in Fig. 3 is given by the following:
    Minimize   z_12 + z_13 + z_23                                            (104)

    subject to
    x_1 − x_2 − y_12 = 2.138,                                                (105)
    x_2 − x_1 − y_21 = −2.138,                                               (106)
    x_1 − x_3 − y_13 = 0.693,                                                (107)
    x_3 − x_1 − y_31 = −0.693,                                               (108)
    x_2 − x_3 − y_23 = −1.778,                                               (109)
    x_3 − x_2 − y_32 = 1.778,                                                (110)
    z_12 − y_12 ≥ 0,                                                         (111)
    z_12 − y_21 ≥ 0,                                                         (112)
    z_13 − y_13 ≥ 0,                                                         (113)
    z_13 − y_31 ≥ 0,                                                         (114)
    z_23 − y_23 ≥ 0,                                                         (115)
    z_23 − y_32 ≥ 0,                                                         (116)
    x_1 − x_2 ≥ 2.079,                                                       (117)
    x_1 − x_2 ≤ 2.197,                                                       (118)
    x_2 − x_3 ≥ −1.946,                                                      (119)
    x_2 − x_3 ≤ −1.609,                                                      (120)
    x_1 − x_3 ≥ 0,                                                           (121)
    x_1 = 0,                                                                 (122)
    z_ij ≥ 0, x_i, y_ij unrestricted,    i, j = 1, 2, 3.                     (123)
Constraints (105)–(110) enforce the comparison ratios. Constraints (111)–(116) model the absolute values of the errors. Constraints (117)–(120) enforce the bounds on the ratios of the weights. Constraint (121) enforces ED (there are no RD constraints). Constraint (122) sets the weight of the first element to 1. We do not formulate the second-stage linear program due to space considerations.
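To connect this formulation with its inputs, the sketch below derives the constraint data of the Fig. 3 model from the upper-triangle judgments: the logarithms of the (geometric-mean) entries used in constraints (105)–(110), the interval bounds of (117)–(120), and the single ED constraint (121). The helper name and data layout are our own illustrative choices, not part of the original model.

```python
import math

# Upper-triangle judgments of the mixed matrix in Fig. 3 (the matrix is reciprocal):
# a plain number is a single judgment, a tuple is an interval [l_ij, u_ij].
judgments = {(1, 2): (8, 9), (1, 3): 2, (2, 3): (1/7, 1/5)}

def first_stage_data(judgments):
    """Translate single and interval judgments into first-stage model data:
    ln a_ij for the ratio constraints, [ln l_ij, ln u_ij] bounds for (81)-(82),
    and ED constraints for single entries different from one."""
    ratio_rhs, bound_rows, ed_rows = {}, [], []
    for (i, j), a in judgments.items():
        if isinstance(a, tuple):
            lo, up = a
            ratio_rhs[(i, j)] = 0.5 * (math.log(lo) + math.log(up))   # ln of geometric mean
            bound_rows.append((i, j, math.log(lo), math.log(up)))     # ln l_ij <= x_i - x_j <= ln u_ij
        else:
            ratio_rhs[(i, j)] = math.log(a)
            if a > 1:
                ed_rows.append((i, j))                                 # ED: x_i - x_j >= 0
            elif a < 1:
                ed_rows.append((j, i))
    return ratio_rhs, bound_rows, ed_rows

rhs, bnds, ed = first_stage_data(judgments)
print({k: round(v, 3) for k, v in rhs.items()})   # {(1, 2): 2.138, (1, 3): 0.693, (2, 3): -1.778}
print([(i, j, round(lo, 3), round(up, 3)) for i, j, lo, up in bnds])
print(ed)                                          # [(1, 3)]
```

The printed numbers should reproduce the constants appearing in constraints (105)–(121) above.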
4.5. Modeling group decisions

An important application of AHP involves group decision making (e.g., [10]). Suppose there are n decision makers. The most common approach used in AHP is for each decision maker to fill in a comparison matrix independently such that a^k_ij denotes the comparison of element i to element j for decision maker k (k = 1, 2, ..., n). The individual judgments of the n decision makers are combined using the geometric mean to produce entries

    a_ij = [a^1_ij × a^2_ij × ··· × a^n_ij]^(1/n),

which collectively determine an overall comparison matrix A. EM is applied to A to obtain the priority vector.

An alternative direction is to take advantage of the LP approach to mixed pairwise comparison matrices, as discussed in Section 4.4. Instead of computing the geometric mean a_ij, we can compute interval bounds [ℓ_ij, u_ij] where

    ℓ_ij = min{a^1_ij, a^2_ij, ..., a^n_ij}   and   u_ij = max{a^1_ij, a^2_ij, ..., a^n_ij}   for i < j.

If ℓ_ij = u_ij, we use a single number, rather than an interval, in order to avoid over-constraining the linear program. This approach is extremely flexible. For example, if there are many decision makers (n is large), one can eliminate the high and low values (i.e., eliminate outliers) and compute interval bounds [ℓ_ij, u_ij] or a single number from the remaining n − 2 values.

4.6. Other advantages

The ability to ensure ED and RD via LP constraints is a major advantage offered by the LP approach. In addition, one can think of ED as a mechanism for providing limited protection against rank reversal. If a_ij > 1, then, irrespective of the number of alternatives added or deleted, w_i ≥ w_j as a result of the ED constraints.

5. Computational experiment: five pairwise comparison matrices

In this section, we formulate our first- and second-stage linear programs, solve the models, and discuss results for five square, reciprocal, pairwise comparison matrices that range in size from four to seven rows.

5.1. Matrix 1

In Fig. 4, we give Matrix 1: a 7 × 7 pairwise comparison matrix that exhibits cardinal inconsistency. That is, for three elements i, j, and k, a_ij > 1, a_jk > 1, and a_ki > 1. The weights generated by EM, LLS, and a second-stage linear program that contains both ED and RD constraints are shown in Table 1. We point out that this matrix has a consistency ratio of 0.10. Recall that both EM and LLS guarantee RD. A close inspection of Matrix 1 reveals that the key sources of inconsistency are the judgments involving elements 4, 6, and 7.

      1      5      1      4      2      6      7
     1/5     1     1/8     1     1/3     4      2
      1      8      1      5      3      3      3
     1/4     1     1/5     1     1/2    1/2     2
     1/2     3     1/3     2      1      7      2
     1/6    1/4    1/3     2     1/7     1     1/2
     1/7    1/2    1/3    1/2    1/2     2      1

    Fig. 4. Matrix 1.
Table 1. Priority vectors for Matrix 1

    Weight    EM (RD)    LLS (RD)    Second-stage LP model (ED and RD)
    w1        0.291      0.312       0.303
    w2        0.078      0.073       0.061
    w3        0.300      0.293       0.303
    w4        0.064      0.064       0.061
    w5        0.159      0.157       0.152
    w6        0.051      0.044       0.061
    w7        0.058      0.057       0.061

    ED: element dominance. RD: row dominance.
Fig. 5. Optimal values of the dual variables (first-stage linear program with ED and RD constraints) for Matrix 1 corresponding to each element dominance constraint (shown only for a_ij > 1): every dual value is zero except the one associated with the ED constraint on a_64, which is positive (value 4).
We see that element 4 is less important than element 6 (a_46 = 1/2), element 6 is less important than element 7 (a_67 = 1/2), and element 7 is less important than element 4 (a_74 = 1/2). Given this instance of cardinal inconsistency, the optimal solution to the second-stage model with ED constraints and RD constraints has w_4 = w_6 = w_7. In general, when an ED constraint is binding, the two weights corresponding to that constraint are equal at optimality in the LP model.

It is difficult to detect cardinal inconsistency in a matrix by simple inspection alone. However, we can detect inconsistency by examining the optimal values of the dual variables corresponding to the ED constraints given in (15). In Fig. 5, we show the optimal values of the dual variables corresponding to the ED constraints in the first-stage model with ED and RD constraints for Matrix 1. We see that only the ED constraint corresponding to a_64 is binding (the optimal value of the dual variable is positive). Hence, we suspect that a_64 = 2 and a_46 = 1/2 are incorrect judgments. Expert Choice can also be used to identify sources of cardinal inconsistency. The software identifies a_26 as the "most inconsistent judgment."

Furthermore, in Fig. 6, we show the optimal values of the dual variables corresponding to the weight constraints in (11) for the first-stage model with ED and RD constraints of Matrix 1. A nonzero value indicates that an entry in the pairwise comparison matrix might be increased in order to reduce inconsistency. The optimal values of the dual variables corresponding to a_46 and a_67 (among others) are nonzero. Thus, we can reduce inconsistency by increasing the value of either entry in Matrix 1.

We point out that, for the first-stage model with ED and RD constraints of Matrix 1, the optimal values of the dual variables corresponding to the RD constraints in (16) are all zero. In this case, none of these constraints is binding.
Fig. 6. Optimal values of the dual variables (first-stage linear program with ED and RD constraints) for Matrix 1 corresponding to each a_ij constraint; lower-triangle values are the negative of the upper-triangle values. Several upper-triangle entries (including those for a_46 and a_67) take the value −2; the remaining entries are zero.
      1       2       2.5     8      5
     1/2      1      1/1.5    7      5
     1/2.5    1.5     1       5      3
     1/8     1/7     1/5      1     1/2
     1/5     1/5     1/3      2      1

    Fig. 7. Matrix 2.
Fig. 8. Optimal values of the dual variables (first-stage linear program with ED and RD constraints) for Matrix 2 corresponding to each a_ij constraint; lower-triangle values are the negative of the upper-triangle values. The duals for a_14, a_15, a_23, a_34, and a_45 equal −2; all other entries are zero.
5.2. Matrix 2

Our LP approach has the ability to model the situation in which a decision maker compares two alternatives i and j directly, and wants to enforce that alternative i is more important than alternative j in the final ranking. Our LP approach can preserve this ordering through ED constraints. For example, suppose a father and daughter are using AHP to help decide on the right college for her to attend. The father may want to ensure that "cost" is more important than any factor other than "academic quality."

In Fig. 7, we give Matrix 2: a 5 × 5 pairwise comparison matrix for which the decision maker has specified that w_2 ≤ w_3. The weights generated by EM, LLS, and our second-stage linear program are presented in Table 2. This matrix has a consistency ratio of 0.03. We observe that EM and LLS violate the ED constraint since w_2 > w_3.

In Fig. 8, we show the optimal values of the dual variables corresponding to the weight constraints in (11) for the first-stage model with ED and RD constraints of Matrix 2. We observe that the main sources of inconsistency in this matrix come from entries a_14, a_15, a_23, a_34, and a_45. Thus, the inconsistency of Matrix 2 can be reduced by increasing the value of any one of these five a_ij entries.
Table 2. Priority vectors for Matrix 2

    Weight    EM (RD)    LLS (RD)    Second-stage LP model (ED and RD)
    w1        0.419      0.422       0.441
    w2        0.242      0.239       0.221
    w3        0.229      0.227       0.221
    w4        0.041      0.041       0.044
    w5        0.070      0.071       0.074

    ED: element dominance. RD: row dominance.
      1            [2, 5]      [2, 4]      [1, 3]
     [1/5, 1/2]     1          [1, 3]      [1, 2]
     [1/4, 1/2]    [1/3, 1]     1          [1/2, 1]
     [1/3, 1]      [1/2, 1]    [1, 2]       1

    Fig. 9. Matrix 3.

Table 3. Priority vectors for Matrix 3

              Preference simulation (a)                        Preference           Second-stage
    Weight    Minimum   Average   Maximum   Std. dev.          programming (a)      LP model
    w1        0.369     0.470     0.552     0.037              0.469                0.425
    w2        0.150     0.214     0.290     0.026              0.201                0.212
    w3        0.093     0.132     0.189     0.016              0.146                0.150
    w4        0.133     0.184     0.260     0.023              0.185                0.212

    (a) Results from Arbel and Vargas [9].
5.3. Matrix 3

In Fig. 9, we give Matrix 3: a 4 × 4 pairwise comparison matrix with upper and lower bounds [ℓ_ij, u_ij] specified for each entry. In Table 3, we present weights that were generated by preference simulation and preference programming (these results are due to Arbel and Vargas [9]) and by our second-stage model. In preference simulation, the feasible priority vectors are averaged to give the final set of weights (this is denoted by Average in Table 3; we also show the minimum value and maximum value for each w_i from the feasible priority vectors that were generated by the simulation).
For our first-stage model, the optimal values of the dual variables corresponding to the lower and upper bound constraints in (81) and (82) are all zero. However, if some of the values of the dual variables were nonzero, this would enable the decision maker to determine which of the bounds are too tight and which of the bounds could be changed in order to reduce inconsistency in the matrix.

5.4. Matrix 4

In Fig. 10, we give Matrix 4: a 5 × 5 pairwise comparison matrix that has a mixture of single a_ij entries and interval entries where upper and lower bounds [ℓ_ij, u_ij] are specified.

      1                [2, 4]      4           [4.5, 7.5]     1
     [1/4, 1/2]         1          1            2             [1/5, 1/3]
      1/4               1          1           [1, 2]          1/2
     [1/7.5, 1/4.5]     1/2       [1/2, 1]      1              1/3
      1                [3, 5]      2            3              1

    Fig. 10. Matrix 4.

In Table 4, we present weights that were generated by EM and by our second-stage model with ED constraints for every a_ij > 1 entry in the matrix. Of course, EM was not designed to handle an interval entry, so it was necessary to convert every interval entry into a single a_ij entry. We accomplished the conversion by computing the geometric mean of every lower bound and upper bound and then used the geometric mean as the single entry in the matrix. We observe that the weights generated by EM violate one of the four interval constraints (the interval [1/5, 1/3] is violated).

Table 4. Priority vectors for Matrix 4

    Weight    EM       Second-stage LP model
    w1        0.377    0.413
    w2        0.117    0.103
    w3        0.116    0.103
    w4        0.076    0.071
    w5        0.314    0.310

5.5. Group AHP example

To illustrate the material in Section 4.5, we conducted an experiment. Four decision makers (graduate students) were given five geometric figures (taken from Gass [11, Chapter 24]) and were asked to compare (by visual inspection) the area of figure i to the area of figure j for i < j. Lower bounds and upper bounds were determined as specified in Section 4.5. The lower bounds are provided in Fig. 11 and the upper bounds are shown in Fig. 12. Since ℓ_34 = u_34 = 4.0000, we use a single number for a_34, rather than an interval.

We now seek to compare the results generated by EM, LLS, and the LP model. To apply EM and LLS, we must first compute the geometric means

    a_ij = [a^1_ij × a^2_ij × a^3_ij × a^4_ij]^(1/4).
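A sketch of the aggregation step is given below. The four individual judgment matrices from the experiment are not reproduced in the paper, so the function is shown generically: it takes any list of positive reciprocal matrices and returns the elementwise geometric mean (used by EM and LLS) together with the elementwise minimum and maximum (the interval bounds [ℓ_ij, u_ij] used by the LP model).

```python
import numpy as np

def combine_group_judgments(matrices):
    """Aggregate the reciprocal comparison matrices of several decision makers,
    as described in Sections 4.5 and 5.5."""
    stack = np.stack(matrices)                       # shape (k, n, n)
    geo_mean = np.exp(np.log(stack).mean(axis=0))    # a_ij = (a_ij^1 * ... * a_ij^k)^(1/k)
    lower, upper = stack.min(axis=0), stack.max(axis=0)
    return geo_mean, lower, upper
```

Entries where the minimum and maximum coincide (such as a_34 = 4.0000 in Figs. 11 and 12) would be passed to the LP as single numbers, as noted in Section 4.5.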
      1.0000    2.0000    1.5000    4.5000    0.5000
      0.2500    1.0000    0.5000    2.0000    0.1250
      0.5000    1.3333    1.0000    4.0000    0.2500
      0.1250    0.3333    0.2500    1.0000    0.0625
      1.3333    2.5000    1.5000    6.0000    1.0000

    Fig. 11. Lower bounds from the group.
      1.0000    4.0000    2.0000     8.0000    0.7500
      0.5000    1.0000    0.7500     3.0000    0.4000
      0.6667    2.0000    1.0000     4.0000    0.6667
      0.2222    0.5000    0.2500     1.0000    0.1667
      2.0000    8.0000    4.0000    16.0000    1.0000

    Fig. 12. Upper bounds from the group.
      1.0000    2.7832    1.6119     6.7007    0.6148
      0.3593    1.0000    0.5533     2.2134    0.2296
      0.6204    1.8072    1.0000     4.0000    0.4889
      0.1492    0.4518    0.2500     1.0000    0.0991
      1.6266    4.3559    2.0453    10.0908    1.0000

    Fig. 13. Geometric mean of the group.

Table 5. Priority vectors for the geometry experiment

    Weight    EM       LLS      Second-stage LP model    Actual geometric areas
    w1        0.272    0.272    0.277                    0.273
    w2        0.096    0.096    0.095                    0.091
    w3        0.178    0.178    0.172                    0.182
    w4        0.042    0.042    0.041                    0.045
    w5        0.412    0.412    0.414                    0.409
The results are displayed in Fig. 13. The LP model uses the interval bounds and one single number. The three priority vectors and the actual geometric areas (normalized to sum to one) are presented in Table 5. They are remarkably similar.

6. Conclusions

In this paper, we have presented an intuitively appealing LP approach for estimating priority vectors in the AHP. Our LP approach has several advantages over more traditional approaches. One advantage is that users are more likely to understand the output of an LP model than to be familiar with eigenvectors or LLS. A second involves sensitivity analysis: our measure of inconsistency has a more intuitive interpretation than that in EM. A third is that the LP approach can model pairwise comparison matrices that have single number entries, interval entries, or a mixture of both types of entries (to our knowledge, we are the first to consider matrices of this type). We have demonstrated an extension of this approach to AHP-based group decision making. Finally, the new approach ensures ED and RD via LP constraints.

We point out that this paper is not an attempt to resolve the debate as to which is the correct approach to use in deriving priority vectors. Instead, we simply present an alternative approach that has some interesting and desirable properties. Also, we do not claim that adding ED and RD constraints is always the right thing to do, though it certainly seems reasonable. If the decision maker specifically indicates a preference for one alternative over another (recall the college selection example in Section 5.2), a method that produces a solution with the weights reversed might not be very helpful or insightful. By including these constraints and obtaining the optimal values of the corresponding dual variables, the decision maker can easily identify inconsistencies in the pairwise comparison matrix. Ultimately, the choice of which constraints to include or omit depends on the importance of rank preservation to the decision maker.

Finally, we formulated LP models for several pairwise comparison matrices and compared the LP results to those produced by two widely used methods. In general, the weights generated by our linear programs are similar to those produced by EM and LLS. In addition, we observed that the rankings of alternatives are nearly the same for all three methods. Our LP approach is easy to implement and is computationally inexpensive.

Acknowledgements

The authors thank Larry Bodin for reading a draft of this paper and providing helpful comments and suggestions.

References

[1] Saaty T. A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology 1977;15:234–81.
[2] Saaty T. The analytic hierarchy process. New York: McGraw-Hill; 1980.
[3] Wasil E, Golden B. Celebrating 25 years of AHP-based decision making. Computers and Operations Research 2003;30:1419–20.
[4] Golden B, Wasil E, Harker P. The analytic hierarchy process: applications and studies. Berlin: Springer; 1989.
[5] Harker P, Vargas L. The theory of ratio scale estimation: Saaty's analytic hierarchy process. Management Science 1987;33:1383–403.
[6] Expert Choice. Arlington, VA: Expert Choice; 2003.
[7] Saaty T, Vargas L. Inconsistency and rank preservation. Journal of Mathematical Psychology 1984;28:205–14.
[8] LINDO. Chicago, IL: LINDO Systems; 2003.
[9] Arbel A, Vargas L. Preference simulation and preference programming: robustness issues in priority derivation. European Journal of Operational Research 1993;69:200–9.
[10] Condon E, Golden B, Wasil E. Visualizing group decisions in the analytic hierarchy process. Computers and Operations Research 2003;30:1435–45.
[11] Gass S. Decision making, models and algorithms: a first course. New York: Wiley; 1985.