Mathl. Comput. Modelling Vol. 28, No. 10, pp. 59-71, 1998
© 1998 Elsevier Science Ltd. All rights reserved. Printed in Great Britain.
PII: S0895-7177(98)00155-1
0895-7177/98 $19.00 + 0.00
The Hadamard Product of a Positive Reciprocal Matrix
and Some Results in AHP

CUIPING WEI AND ZHIMIN ZHANG
Institute of Operations Research, Qufu Normal University
Qufu, Shandong, 273165, P.R. China

H. ZHOU
Department of Mathematics and Computer Sciences
Georgia State University, Atlanta, GA 30303-3083, U.S.A.

(Received and accepted February 1998)
Abstract. We define four types of standard matrices, prove that every positive reciprocal matrix is the Hadamard product of one type of standard matrix and a corresponding consistent matrix, and conclude that the set of EM, LDM, LLSM, REM, and GEM perturbation matrices is equal to the set of the corresponding type of standard matrices. We also provide a necessary and sufficient condition for the priority vectors of an arbitrary positive reciprocal matrix in EM, LDM, LLSM, REM, and GEM to be equal. We conclude the paper by applying this decomposition to sensitivity analysis: we compute the priority vector of a positive reciprocal matrix obtained as the Hadamard product of a perturbed matrix and a perturbation matrix. © 1998 Elsevier Science Ltd. All rights reserved.
Keywords: Analytic hierarchy process, Priority vector, Hadamard product, Standard matrix, Sensitivity analysis, EM, LDM, LLSM, REM, GEM.
1. INTRODUCTION

The Analytic Hierarchy Process (AHP), a practicable multicriteria decision theory proposed by Saaty in the middle of the 1970s [1-4], has spread rapidly and been widely applied in fields such as mathematical psychology, economics, business, and government administration [5]. The priority problem with respect to a common criterion is the basis of the theory of AHP. The critical step in solving the problem is to derive a priority vector, for determining the preference ranking among a set of alternatives, from a pairwise comparison matrix, which is a positive reciprocal matrix. That is, one must find a map from the set of positive reciprocal matrices into the set of priority vectors. Obviously, this kind of map is not unique; more than 20 methods are recorded in [6]. Among them, the eigenvalue method EM [1,3,4] is a classical and effective method, and has many advantages over other methods. All the priority vectors induced by these methods are equal when the pairwise comparison matrix is a consistent matrix. Otherwise, there will probably be different perturbations in different methods. In [7], Vargas proved that every positive reciprocal matrix can be decomposed uniquely into the Hadamard product of a consistent matrix and an inconsistent reciprocal matrix. The consistent matrix has the same principal eigenvector as the original matrix. The inconsistent matrix has the same principal eigenvalue as the original one and the principal right eigenvector $(1/n, 1/n, \ldots, 1/n)^T$. This matrix was named the standard matrix (which will be named the standard-I matrix in this paper) by Wang and Xu in [8]. Applying this decomposition, Vargas [7] performed the analysis of sensitivity on the priority vector of the reciprocal matrix obtained by the EM method. The analysis of sensitivity tries to answer the
following question: when the entries of the reciprocal matrix change slightly, how much do the entries of the priority vector change? In this paper, we discuss the related theories for LDM, LLSM, REM, and GEM by defining the standard-II, -III, and -IV matrices, and derive three other Hadamard decompositions. We provide necessary and sufficient conditions, in terms of the standard forms, for the priority vectors of any two methods among EM, LDM, LLSM, REM, and GEM to be equal. By applying these decompositions, we perform the analysis of sensitivity for the priority vector of a positive reciprocal matrix obtained by the LDM, LLSM, REM, and GEM methods. For the convenience of readers, we also provide related background information on LDM, LLSM, REM, and GEM in the Appendix.

2. NOTATIONS AND DEFINITIONS
Throughout this paper, let $\Omega = \{1, 2, \ldots, n\}$, let $R_+^n$ be the set of $n$-dimensional positive column vectors, let $W^T$ be the transpose of a matrix $W$, and let $Q = \{w \in R_+^n : w = (w_1, w_2, \ldots, w_n)^T, \sum_{i=1}^n w_i = 1\}$. We assume that every priority vector [2] is normalized, i.e., is an element of $Q$. We denote by $A \circ B = (a_{ij} \times b_{ij})$ the Hadamard product of $A = (a_{ij})$ and $B = (b_{ij})$.

DEFINITION 2.1. Let $A = (a_{ij})$ be an $n \times n$ matrix. $A$ is said to be a positive reciprocal matrix if $a_{ij} > 0$ and $a_{ij} = 1/a_{ji}$, for $i, j \in \Omega$.

Let $MR_n$ be the set of positive reciprocal $n \times n$ matrices.

DEFINITION 2.2. Let $A = (a_{ij}) \in MR_n$. $A$ is said to be a consistent matrix if $a_{ik}a_{kj} = a_{ij}$, for $i, j, k \in \Omega$.

We denote by $MC_n$ the set of $n \times n$ consistent matrices.
DEFINITION 2.3. Let $E = (e_{ij}) \in MR_n$.

(i) $E$ is said to be a standard-I matrix if $\sum_{j=1}^n e_{ij} = \sum_{j=1}^n e_{kj}$, for any $i, k \in \Omega$.
(ii) $E$ is said to be a standard-II matrix if $\sum_{j=1}^n e_{ij} = \sum_{j=1}^n e_{ji}$, for any $i \in \Omega$.
(iii) $E$ is said to be a standard-III matrix if $\prod_{j=1}^n e_{ij} = \prod_{j=1}^n e_{kj}$, for any $i, k \in \Omega$.
(iv) $E$ is said to be a standard-IV matrix if $\sum_{j=i+1}^n e_{ij} = n - i$, for $i = 1, 2, \ldots, n-1$.
We denote by $E_I$, $E_{II}$, $E_{III}$, and $E_{IV}$ the sets of standard-I, -II, -III, and -IV matrices, respectively. In AHP, it is a very important step to calculate the normalized priority vector of alternatives with respect to a common criterion. The first method of calculating the priority vector is called the Eigenvalue Method EM [2]. Up till now, there have been more than 20 methods, such as LDM, LLSM, REM, GEM, CGEM, QPM, and LPM [3,6,8,9-13]. We are particularly interested in the following set $\Lambda = \{\mathrm{EM, LDM, LLSM, REM, GEM}\}$ of methods, because they have very nice properties. A priority vector of a positive reciprocal matrix $A$ generated by a method $\alpha \in \Lambda$ is called the $\alpha$ priority vector of $A$, denoted by $\alpha(A)$.

DEFINITION 2.4. Let $A \in MR_n$ and $\alpha \in \Lambda$. If $w = (w_1, w_2, \ldots, w_n)^T$ is the $\alpha$ priority vector of $A$, and $W = (w_i/w_j) \in MC_n$, then $E_A^\alpha = A \circ W^T$ is said to be the $\alpha$ perturbation matrix of $A$, and $A = E_A^\alpha \circ W$ is said to be the Hadamard decomposition of $A$ by the matrix $E_A^\alpha$ and the consistent matrix $W$. We shall call $E_{(\alpha)} = \{E_A^\alpha : A \in MR_n\}$ the set of $\alpha$ perturbation matrices.

3. MAIN RESULTS
Let $A \in MR_n$. If the normalized principal right eigenvector $w = (w_1, w_2, \ldots, w_n)^T$ is taken as the priority vector of $A$, then we call the priority method the eigenvalue method (EM). It was proved in [7] and [8, pp. 121-123] that every positive reciprocal matrix can be expressed as the Hadamard product of a unique standard-I matrix and a consistent matrix. The EM perturbation matrix of a positive reciprocal matrix is a standard-I matrix. Conversely, any standard-I matrix is an EM perturbation matrix of some positive reciprocal matrix. In this paper, we will discuss the related problems for LDM, LLSM, REM, and GEM.
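The standard-I decomposition can be made concrete with a small numpy sketch (the matrix $A$ below is an arbitrary illustrative example, not taken from the paper): it computes the EM priority vector by power iteration, forms the consistent matrix $W = (w_i/w_j)$ and the perturbation matrix $E = W^T \circ A$, and checks that $E$ has equal row sums.

```python
import numpy as np

def em_decomposition(A, iters=200):
    """Hadamard-decompose a positive reciprocal matrix A as A = E o W,
    where W = (w_i/w_j) comes from the EM priority vector (principal
    right eigenvector) and E = W^T o A is the EM perturbation matrix."""
    n = A.shape[0]
    w = np.ones(n) / n
    for _ in range(iters):          # power iteration for the principal eigenvector
        w = A @ w
        w /= w.sum()                # keep w normalized (an element of Q)
    W = np.outer(w, 1.0 / w)        # consistent matrix (w_i / w_j)
    E = W.T * A                     # Hadamard product W^T o A
    return w, W, E

# An arbitrary 4x4 positive reciprocal (not consistent) example matrix.
A = np.array([[1.0, 2.0, 4.0, 3.0],
              [1/2, 1.0, 2.0, 2.0],
              [1/4, 1/2, 1.0, 1.0],
              [1/3, 1/2, 1.0, 1.0]])
w, W, E = em_decomposition(A)
# A = E o W reproduces A, and E is standard-I: all row sums of E coincide
# (each row sum equals the principal eigenvalue of A).
assert np.allclose(E * W, A)
assert np.allclose(E.sum(axis=1), E.sum(axis=1)[0])
```

The equal-row-sum check is exactly the standard-I condition of Definition 2.3(i): $\sum_j e_{ij} = (1/w_i)\sum_j a_{ij}w_j = \lambda_{\max}$ for every $i$.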
3.1. The Least Deviation Method

Let $A = (a_{ij}) \in MR_n$. The least deviation method (LDM) [9] is to find the vector $w = (w_1, w_2, \ldots, w_n)^T$ which minimizes the function

$$F(w) = \sum_{i=1}^n \sum_{j=1}^n \left( a_{ij}\frac{w_j}{w_i} + a_{ji}\frac{w_i}{w_j} - 2 \right),$$

with the constraint $w \in Q$. It was proved [8, pp. 187-191] that $F(w)$ has a unique minimum solution in $Q$, which is also the unique solution of the following system of equations in $Q$:

$$\sum_{j=1}^n a_{ij}\frac{w_j}{w_i} = \sum_{j=1}^n a_{ji}\frac{w_i}{w_j}, \qquad i \in \Omega. \tag{1}$$

Based on this result, we have the following theorem.

THEOREM 3.1. The LDM perturbation matrix of a positive reciprocal matrix is a standard-II matrix; conversely, any standard-II matrix is an LDM perturbation matrix of some positive reciprocal matrix. Furthermore, the Hadamard decomposition of a positive reciprocal matrix $A$ by a consistent matrix $W$ and a standard-II matrix $E$ is unique.

PROOF.
(a) Let $A = (a_{ij})$ be a positive reciprocal matrix and let $w = (w_1, w_2, \ldots, w_n)^T$ be the LDM priority vector of $A$. Then $w$ satisfies (1). Let $W = (w_i/w_j)_{n \times n}$, and let $E = (e_{ij}) = W^T \circ A$ be the LDM perturbation matrix of $A$. Then $a_{ij} = (w_i/w_j)e_{ij}$, for $i, j \in \Omega$. By (1), we obtain

$$\sum_{j=1}^n e_{ij} = \sum_{j=1}^n e_{ji}, \qquad i \in \Omega.$$

Thus, $E$ is a standard-II matrix.

(b) Now let $\bar{E} = (\bar{e}_{ij})$ be a standard-II matrix. We pick any normalized vector $\bar{w} = (\bar{w}_1, \bar{w}_2, \ldots, \bar{w}_n)^T$, construct a consistent matrix $\bar{W} = (\bar{w}_i/\bar{w}_j) \in MC_n$, and construct the positive reciprocal matrix $A$ by $A = \bar{E} \circ \bar{W}$. Then $a_{ij} = (\bar{w}_i/\bar{w}_j)\bar{e}_{ij}$, for $i, j \in \Omega$, and

$$\sum_{j=1}^n a_{ij}\frac{\bar{w}_j}{\bar{w}_i} = \sum_{j=1}^n a_{ji}\frac{\bar{w}_i}{\bar{w}_j}, \qquad i \in \Omega. \tag{2}$$

We shall prove that $\bar{E}$ is the LDM perturbation matrix of $A$. Assume that $w = (w_1, w_2, \ldots, w_n)^T$ is the LDM priority vector of $A$. Then by (1), we have

$$\sum_{j=1}^n a_{ij}\frac{w_j}{w_i} = \sum_{j=1}^n a_{ji}\frac{w_i}{w_j}, \qquad i \in \Omega. \tag{3}$$

Since $A \in MR_n$, the system of equations (1) has a unique solution in $Q$. From (2) and (3) and this uniqueness, we have $\bar{w}_iw_j/\bar{w}_jw_i = 1$ for any $i, j \in \Omega$, i.e., $\bar{w}_i/\bar{w}_j = w_i/w_j$ for $i, j \in \Omega$, and so $\bar{W} = W$. We have thus found that $(\bar{w}_1, \bar{w}_2, \ldots, \bar{w}_n)^T$ is the LDM priority vector of $A$. Therefore, $\bar{E}$ is the LDM perturbation matrix of $A$.

(c) Let $A = \hat{E} \circ \hat{W}$ be any other Hadamard decomposition, where $\hat{E}$ is a standard-II matrix and $\hat{W}$ is a consistent matrix. There exists a normalized vector $\hat{w} = (\hat{w}_1, \hat{w}_2, \ldots, \hat{w}_n)^T$ such that $\hat{W} = (\hat{w}_i/\hat{w}_j)_{n \times n}$. We can use the same argument as in (b) to prove that $\hat{w}$ is in fact the LDM priority vector of $A$, and therefore $\hat{E}$ is the LDM perturbation matrix of $A$. ∎
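Equation (1) has no closed-form solution in general. The sketch below solves it numerically with a simple damped fixed-point iteration; this scheme and the example matrix are illustrative assumptions, not the algorithm of [8]. For a $3 \times 3$ matrix the result also matches the LLSM vector of row geometric means, as Corollary 2 of this paper states.

```python
import numpy as np

def ldm_weights(A, iters=2000):
    """Solve the LDM stationarity system (1) by a damped fixed-point
    iteration: w_i <- sqrt(w_i * g_i(w)), where
    g_i(w) = sqrt(sum_j a_ij w_j / sum_j a_ji / w_j)
    balances the two sides of (1). Illustrative scheme, not from [8]."""
    n = A.shape[0]
    w = np.ones(n) / n
    for _ in range(iters):
        g = np.sqrt((A @ w) / (A.T @ (1.0 / w)))
        w = np.sqrt(w * g)          # geometric damping; fixed points unchanged
        w /= w.sum()
    return w

A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])
w = ldm_weights(A)
# Residual of system (1): sum_j a_ij w_j/w_i - sum_j a_ji w_i/w_j.
residual = (A @ w) / w - (A.T @ (1.0 / w)) * w
assert np.max(np.abs(residual)) < 1e-6
# For 3x3 matrices the LDM and LLSM vectors coincide (Corollary 2):
gm = np.prod(A, axis=1) ** (1.0 / 3)
assert np.allclose(w, gm / gm.sum(), atol=1e-5)
```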
3.2. The Logarithmic Least Squares Method

Let $A = (a_{ij}) \in MR_n$. The logarithmic least squares method (LLSM) (see [1,3,14], and also [8, pp. 177-179]) is to find a vector $w = (w_1, w_2, \ldots, w_n)^T$ which minimizes the function

$$G(w) = \sum_{i=1}^n \sum_{j=1}^n (\ln w_i - \ln w_j - \ln a_{ij})^2,$$

under the constraint $w \in Q$.
THEOREM 3.2. The LLSM perturbation matrix of a positive reciprocal matrix is a standard-III matrix; conversely, any standard-III matrix is an LLSM perturbation matrix of some positive reciprocal matrix. Furthermore, the Hadamard decomposition of a positive reciprocal matrix $A$ by a consistent matrix $W$ and a standard-III matrix $E$ is unique.

PROOF.
(a) Let $w = (w_1, w_2, \ldots, w_n)^T$ be the LLSM priority vector of a positive reciprocal matrix $A$. Then, from [8], for each $k \in \Omega$,

$$w_k = \frac{\left(\prod_{j=1}^n a_{kj}\right)^{1/n}}{\sum_{i=1}^n \left(\prod_{j=1}^n a_{ij}\right)^{1/n}}. \tag{5}$$

Let $W = (w_i/w_j)_{n \times n}$, and let $E = (e_{ij}) = W^T \circ A$ be the LLSM perturbation matrix of $A$. Then $a_{ij} = (w_i/w_j)e_{ij}$ for $i, j \in \Omega$, and hence, from (5), we can obtain

$$\prod_{j=1}^n e_{ij} = \frac{\prod_{j=1}^n w_j}{w_i^n}\prod_{j=1}^n a_{ij}, \qquad i \in \Omega.$$

Since, by (5), $w_i^n$ is proportional to $\prod_{j=1}^n a_{ij}$ with a proportionality constant independent of $i$, the right-hand sides of the above system of equations are the same constant. It follows that $\prod_{j=1}^n e_{ij} = \prod_{j=1}^n e_{kj}$ for $i, k \in \Omega$, i.e., $E$ is a standard-III matrix.

(b) Now let $\bar{E} = (\bar{e}_{ij})$ be a standard-III matrix, i.e., $\prod_{j=1}^n \bar{e}_{ij} = \prod_{j=1}^n \bar{e}_{kj}$ for $i, k \in \Omega$. We pick any normalized vector $\bar{w} = (\bar{w}_1, \bar{w}_2, \ldots, \bar{w}_n)^T$, construct a consistent matrix $\bar{W} = (\bar{w}_i/\bar{w}_j) \in MC_n$, and construct a positive reciprocal matrix $A$ by $A = \bar{E} \circ \bar{W}$. Then $a_{ij} = (\bar{w}_i/\bar{w}_j)\bar{e}_{ij}$ for $i, j \in \Omega$. From (5), the LLSM priority vector $w = (w_1, w_2, \ldots, w_n)^T$ of the positive reciprocal matrix $A$ satisfies

$$\frac{w_i}{w_j} = \left(\frac{\prod_{k=1}^n a_{ik}}{\prod_{k=1}^n a_{jk}}\right)^{1/n} = \frac{\bar{w}_i}{\bar{w}_j}, \qquad i, j \in \Omega.$$

Thus, $\bar{w}_i/\bar{w}_j = w_i/w_j$ for $i, j \in \Omega$, and so $\bar{W} = W$. Hence, $\bar{E}$ is the LLSM perturbation matrix of $A$.

(c) Let $A = \hat{E} \circ \hat{W}$ be any other Hadamard decomposition, where $\hat{E}$ is a standard-III matrix and $\hat{W}$ is a consistent matrix. By [8], there exists a normalized vector $\hat{w} = (\hat{w}_1, \hat{w}_2, \ldots, \hat{w}_n)^T$ such that $\hat{W} = (\hat{w}_i/\hat{w}_j)_{n \times n}$. We can use the same argument as in (b) to prove that $\hat{w}$ is in fact the LLSM priority vector of $A$. Therefore, $\hat{E}$ is the LLSM perturbation matrix of $A$. ∎
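Equation (5) makes the LLSM decomposition directly computable; the short numpy sketch below (with an arbitrary illustrative matrix) verifies that the resulting perturbation matrix is standard-III, i.e., every row of $E$ has the same product.

```python
import numpy as np

def llsm_decomposition(A):
    """LLSM priority vector by equation (5) (normalized row geometric
    means), plus the Hadamard decomposition A = E o W with E = W^T o A."""
    n = A.shape[0]
    gm = np.prod(A, axis=1) ** (1.0 / n)   # (prod_j a_kj)^(1/n)
    w = gm / gm.sum()                      # equation (5)
    W = np.outer(w, 1.0 / w)               # consistent matrix (w_i / w_j)
    E = W.T * A                            # LLSM perturbation matrix
    return w, E

# An arbitrary 4x4 positive reciprocal example matrix.
A = np.array([[1.0, 2.0, 4.0, 3.0],
              [1/2, 1.0, 2.0, 2.0],
              [1/4, 1/2, 1.0, 1.0],
              [1/3, 1/2, 1.0, 1.0]])
w, E = llsm_decomposition(A)
# E is standard-III: every row of E has the same product.
rowprods = np.prod(E, axis=1)
assert np.allclose(rowprods, rowprods[0])
```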
3.3. The Relative Entropy Method

Let $A \in MR_n$. If the priority vector is defined by the minimum solution of the following function [11]:

$$R(w) = \sum_{i=1}^n \left(\ln w_i - \ln\left(\prod_{j=1}^n a_{ij}\right)^{1/n}\right)w_i,$$

under the constraint $w \in Q$, then the priority method is called the relative entropy method (REM).
From [11], we know that the REM priority vector $w = (w_1, w_2, \ldots, w_n)^T$ of $A$ is determined by

$$w_t = \frac{\left(\prod_{j=1}^n a_{tj}\right)^{1/n}}{\sum_{k=1}^n \left(\prod_{j=1}^n a_{kj}\right)^{1/n}}, \qquad t \in \Omega. \tag{6}$$

Comparing with the LLSM, this priority vector is equal to that of $A$ in LLSM. Hence, we can obtain the following theorem.
THEOREM 3.3. The REM perturbation matrix of a positive reciprocal matrix is a standard-III matrix; conversely, any standard-III matrix is a REM perturbation matrix of some positive reciprocal matrix. Furthermore, the Hadamard decomposition of a positive reciprocal matrix $A$ by a consistent matrix $W$ and a standard-III matrix $E$ is unique.
3.4. The Gradational Eigenvector Method

Let $A \in MR_n$. Consider the entries on the upper triangle of $A$. If the priority vector is defined by the normalized principal right eigenvector of $\hat{A} = (\hat{a}_{ij})$, where

$$\hat{a}_{ij} = \begin{cases} a_{ij}, & \text{if } i < j, \\ i, & \text{if } i = j, \\ 0, & \text{if } i > j, \end{cases}$$

then the method is called the gradational eigenvector method (GEM) [9]. The recurrence relation for the components of the GEM priority vector $w = (w_1, w_2, \ldots, w_n)^T$ of $A$ is given [9] by the following equations:

$$w_i = \frac{1}{n-i}\sum_{j=i+1}^n a_{ij}w_j, \qquad i = n-1, \ldots, 1. \tag{7}$$

THEOREM 3.4. The GEM perturbation matrix of a positive reciprocal matrix is a standard-IV matrix; conversely, any standard-IV matrix is a GEM perturbation matrix of some positive reciprocal matrix. Furthermore, the Hadamard decomposition of a positive reciprocal matrix $A$ by a consistent matrix $W$ and a standard-IV matrix $E$ is unique.
PROOF.

(a) Let $w = (w_1, w_2, \ldots, w_n)^T$ be the GEM priority vector of $A \in MR_n$. Then $w$ satisfies the recurrence relation (7). Let $W = (w_i/w_j)_{n \times n}$ and let $E = (e_{ij})_{n \times n} = W^T \circ A$ be the GEM perturbation matrix of $A$. Then $a_{ij} = (w_i/w_j)e_{ij}$ for $i, j \in \Omega$. By (7), we obtain

$$\sum_{j=i+1}^n e_{ij} = n - i, \qquad i = n-1, n-2, \ldots, 1.$$

Thus, $E$ is a standard-IV matrix.

(b) Now let $\bar{E} = (\bar{e}_{ij})$ be a standard-IV matrix; then $\sum_{j=i+1}^n \bar{e}_{ij} = n - i$ for $i = 1, 2, \ldots, n-1$. We select any consistent matrix $\bar{W} = (\bar{w}_i/\bar{w}_j) \in MC_n$ and construct the positive reciprocal matrix $A$ by $A = \bar{E} \circ \bar{W}$. Then $a_{ij} = (\bar{w}_i/\bar{w}_j)\bar{e}_{ij}$, for $i, j \in \Omega$. Let $W = (w_i/w_j)$ be the matrix determined by the GEM priority vector $w = (w_1, w_2, \ldots, w_n)^T$ of $A$. We shall prove that $W = \bar{W}$. Because both $\bar{w} = (\bar{w}_1, \bar{w}_2, \ldots, \bar{w}_n)^T \in Q$ and $w \in Q$, it is sufficient to prove that, for each $i, l \in \Omega$,

$$\frac{w_i}{w_l} = \frac{\bar{w}_i}{\bar{w}_l}. \tag{*}$$

This will be proved by induction.
If $n - i = 1$, then $i = n-1$. From (7), we have

$$w_{n-1} = a_{(n-1)n}w_n = \bar{e}_{(n-1)n}\frac{\bar{w}_{n-1}}{\bar{w}_n}w_n.$$

Since $\bar{e}_{(n-1)n} = 1$, it follows that $w_{n-1}/w_n = \bar{w}_{n-1}/\bar{w}_n$.

Suppose now that the expressions (*) hold for $n - i < k$, i.e., for each $m$ and $l$ with $n - k < m, l \le n$, $w_m/w_l = \bar{w}_m/\bar{w}_l$. Let $n - i = k$; then $i = n - k$. From (7),

$$w_{n-k} = \frac{1}{k}\sum_{j=n-k+1}^n a_{(n-k)j}w_j.$$

Then, for each $l$ with $n - k < l \le n$,

$$\frac{w_{n-k}}{w_l} = \frac{1}{k}\sum_{j=n-k+1}^n \bar{e}_{(n-k)j}\frac{\bar{w}_{n-k}}{\bar{w}_j}\cdot\frac{w_j}{w_l}.$$

By the inductive assumption, $w_j/w_l = \bar{w}_j/\bar{w}_l$, and hence

$$\frac{w_{n-k}}{w_l} = \frac{1}{k}\left(\sum_{j=n-k+1}^n \bar{e}_{(n-k)j}\right)\frac{\bar{w}_{n-k}}{\bar{w}_l}.$$

Since $\sum_{j=n-k+1}^n \bar{e}_{(n-k)j} = k$, for each $l$ with $n - k < l \le n$,

$$\frac{w_{n-k}}{w_l} = \frac{\bar{w}_{n-k}}{\bar{w}_l}.$$

Hence, we have proved that $W = \bar{W}$. Therefore, $\bar{E}$ is the GEM perturbation matrix of $A$.

(c) Let $A = \hat{E} \circ \hat{W}$ be any other Hadamard decomposition, where $\hat{E}$ is a standard-IV matrix and $\hat{W}$ is a consistent matrix. There exists a normalized vector $\hat{w} = (\hat{w}_1, \hat{w}_2, \ldots, \hat{w}_n)^T$ such that $\hat{W} = (\hat{w}_i/\hat{w}_j)_{n \times n}$. We can use the same argument as in (b) to prove that $\hat{w}$ is in fact the GEM priority vector of $A$, and therefore $\hat{E}$ is the GEM perturbation matrix of $A$. ∎
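The recurrence (7) and the standard-IV property can also be checked numerically. The sketch below (an arbitrary illustrative matrix, computed with numpy) builds the GEM vector backwards from $w_n$ and verifies $\sum_{j>i} e_{ij} = n - i$.

```python
import numpy as np

def gem_weights(A):
    """GEM priority vector via the recurrence (7):
    w_i = (1/(n-i)) * sum_{j>i} a_ij w_j, computed backwards from w_n."""
    n = A.shape[0]
    w = np.zeros(n)
    w[n - 1] = 1.0                  # w_n is free; fix it and normalize later
    for i in range(n - 2, -1, -1):  # i = n-1, ..., 1 in the paper's indexing
        w[i] = A[i, i + 1:] @ w[i + 1:] / (n - 1 - i)
    return w / w.sum()

# An arbitrary 4x4 positive reciprocal example matrix.
A = np.array([[1.0, 2.0, 4.0, 3.0],
              [1/2, 1.0, 2.0, 2.0],
              [1/4, 1/2, 1.0, 1.0],
              [1/3, 1/2, 1.0, 1.0]])
w = gem_weights(A)
E = np.outer(1.0 / w, w) * A        # GEM perturbation matrix E = W^T o A
# Standard-IV property: strict upper-triangular row sums of E equal n - i.
n = A.shape[0]
for i in range(n - 1):              # 0-based row i corresponds to i+1 above
    assert abs(E[i, i + 1:].sum() - (n - 1 - i)) < 1e-12
```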
Since each of the EM, LDM, LLSM (REM), and GEM priority vectors of a positive reciprocal matrix $A$ is unique, it follows that there exist unique $E_A^I$, $E_A^{II}$, $E_A^{III}$, and $E_A^{IV}$ such that $A = W_1 \circ E_A^I = W_2 \circ E_A^{II} = W_3 \circ E_A^{III} = W_4 \circ E_A^{IV}$, where $W_1$, $W_2$, $W_3$, and $W_4$ are the consistent matrices generated by the EM, LDM, LLSM (REM), and GEM priority vectors of $A$, respectively. We call $E_A^I$, $E_A^{II}$, $E_A^{III}$, and $E_A^{IV}$ the standard form of $A$ in EM, LDM, LLSM (REM), and GEM, respectively. Clearly, if $A$ is a consistent matrix, then $E_A^I = E_A^{II} = E_A^{III} = E_A^{IV} = (e_{ij})$, where $e_{ij} = 1$, for each $i, j \in \Omega$.

COROLLARY 1. Let $A \in MR_n$. Then, any two priority vectors of $A$ in EM, LDM, LLSM, REM, and GEM are equal if and only if $A$ has the same standard form in the two corresponding methods.
Applying Corollary 1, we can get the following corollary.

COROLLARY 2. All the priority vectors of any positive reciprocal $3 \times 3$ matrix in EM, LDM, LLSM, and REM are equal.

PROOF. Let $A$ be an arbitrary positive reciprocal $3 \times 3$ matrix, and let $E_A^{II}$ be the standard-II matrix of $A$. Since $E_A^{II}$ is also a positive reciprocal matrix, it has the following form:

$$E_A^{II} = \begin{pmatrix} 1 & a & b \\ 1/a & 1 & c \\ 1/b & 1/c & 1 \end{pmatrix}.$$
Hence, the standard-II conditions give

$$\frac{1}{a} + c = a + \frac{1}{c}, \qquad \frac{1}{b} + \frac{1}{c} = b + c.$$

Hence, $a = c = 1/b$, and

$$E_A^{II} = \begin{pmatrix} 1 & a & 1/a \\ 1/a & 1 & a \\ a & 1/a & 1 \end{pmatrix}.$$

Clearly, $E_A^{II}$ is not only a standard-I matrix, but also a standard-III matrix. Since both $E_A^I$ and $E_A^{III}$ are unique, it follows that $E_A^{II} = E_A^I = E_A^{III}$. From Corollary 1, the EM, LDM, LLSM, and REM priority vectors of $A$ are all equal to each other. ∎
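Corollary 2 can be observed numerically: in the sketch below (an arbitrary, deliberately inconsistent $3 \times 3$ example), the EM vector obtained by power iteration coincides with the LLSM vector of normalized row geometric means.

```python
import numpy as np

def em(A, iters=500):
    """EM priority vector by power iteration."""
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

def llsm(A):
    """LLSM priority vector: normalized row geometric means (equation (5))."""
    gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return gm / gm.sum()

# An arbitrary positive reciprocal 3x3 matrix, deliberately inconsistent.
A = np.array([[1.0, 2.0, 5.0],
              [1/2, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
assert np.allclose(em(A), llsm(A))   # Corollary 2: the vectors coincide
```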
4. THE ANALYSIS OF SENSITIVITY

In practice, the pairwise comparison matrix is often perturbed. This is clear in planning stages where two processes are used: (a) the forward process, from the present to the future, and (b) the backward process, from the desired future to the present [15]. To study the stability of the results, we need to observe the sensitivity of the priorities as the factors which characterize the environment of the system in question change. We try to answer the following question: how large can the perturbations of the priority vector of a positive reciprocal matrix be when the entries of the matrix are slightly changed? If the perturbations of the priority vector are very pronounced, then the priority vector is not trustworthy. Thus, the analysis of sensitivity is very important for studying the feasibility of a priority method. Both the conclusions of Theorems 3.1-3.4 and the methods used to obtain them can be used to analyze the sensitivity of these priority methods.

Let $A = (a_{ij})$ and $B = (b_{ij})$ be two matrices in $MR_n$. The matrix $A$ is perturbed by the matrix $B$ in the form $A \circ B$. Then $A$ is called a perturbed matrix and $B$ is called a perturbation matrix. We are interested in finding a relationship between the priority vector of $A \circ B$ and the priority vectors of $A$ and $B$ when these priority vectors are all derived by the same method. Let the priority vector of a positive reciprocal matrix be derived by LDM. We have the following theorem.

THEOREM 4.1. Let $A \in MR_n$ be perturbed by a positive reciprocal matrix $B$ in the form $A \circ B$. Then we have the following.
(1) If $A = W_1 \circ E_1$, where $W_1$ is a consistent matrix and $E_1$ is in $E_{II}$, then the LDM priority vector of $A \circ B$ is the Hadamard product of the LDM priority vectors of $A$ and of $E_1 \circ B$, up to a multiplicative constant, i.e., $\mathrm{LDM}(A \circ B) = k(\mathrm{LDM}(A) \circ \mathrm{LDM}(E_1 \circ B))$, where $k$ is a positive constant.
(2) If either $A$ or $B$ is a consistent matrix, then $\mathrm{LDM}(A \circ B) = k(\mathrm{LDM}(A) \circ \mathrm{LDM}(B))$, where $k$ is a positive constant; in particular, if $A$ is a consistent matrix and $B$ is a standard-II matrix, then $\mathrm{LDM}(A \circ B) = \mathrm{LDM}(A)$.
(3) If both $A$ and $B$ are $3 \times 3$ positive reciprocal matrices (not necessarily consistent), then $\mathrm{LDM}(A \circ B) = k(\mathrm{LDM}(A) \circ \mathrm{LDM}(B))$, where $k$ is a positive constant.
PROOF. We first prove (2). Suppose that $A = (x_i/x_j)$ is consistent, where $x = (x_1, x_2, \ldots, x_n)^T$, and that $B$ is positive reciprocal. Then $B = W \circ E$, where $W = (w_i/w_j)$ is consistent, $w = (w_1, w_2, \ldots, w_n)^T$ is the LDM priority vector of $B$, and $E \in E_{II}$. We have $A \circ B = A \circ W \circ E$. Since $A \circ W$ is consistent and $E \in E_{II}$, it follows that $\mathrm{LDM}(A \circ B) = \mathrm{LDM}(A \circ W)$ by part (b) of the proof of Theorem 3.1. Since $\sum_{j=1}^n (x_i/x_j)(w_i/w_j)(x_jw_j/x_iw_i) = \sum_{j=1}^n (x_j/x_i)(w_j/w_i)(x_iw_i/x_jw_j)$ for $i \in \Omega$, it follows that $\mathrm{LDM}(A \circ W) = k(w_1x_1, w_2x_2, \ldots, w_nx_n)^T = k(\mathrm{LDM}(A) \circ \mathrm{LDM}(B))$, where $k = 1/\sum_{i=1}^n w_ix_i$. The proof of (2) is completed.

Now we prove (1). Let $A = W_1 \circ E_1$, where $W_1$ is a consistent matrix and $E_1 \in E_{II}$; then $A \circ B = W_1 \circ E_1 \circ B$. From (2) and the fact that $W_1$ is consistent, we have $\mathrm{LDM}(A \circ B) = \mathrm{LDM}(W_1 \circ E_1 \circ B) = k(\mathrm{LDM}(W_1) \circ \mathrm{LDM}(E_1 \circ B))$, where $k$ is a positive constant. Hence, (1) is proved.

We shall now prove (3). Let $A$ and $B$ be $3 \times 3$ positive reciprocal matrices (not necessarily consistent). Then $A = W_1 \circ E_1$ and $B = W_2 \circ E_2$, where $W_1$ and $W_2$ are consistent and $E_1$ and $E_2$ are in $E_{II}$. Then $E_1$ and $E_2$ have the form shown in Corollary 2, and $E_1 \circ E_2$ is still in $E_{II}$. We have $\mathrm{LDM}(A \circ B) = \mathrm{LDM}(W_1 \circ W_2 \circ E_1 \circ E_2) = \mathrm{LDM}(W_1 \circ W_2) = k(\mathrm{LDM}(W_1) \circ \mathrm{LDM}(W_2)) = k(\mathrm{LDM}(A) \circ \mathrm{LDM}(B))$, where $k$ is a positive constant. ∎
For LLSM (REM), we have the following conclusion on the sensitivity.

THEOREM 4.2. Let $A \in MR_n$ be perturbed by a positive reciprocal matrix $B$ in the form $A \circ B$. Then, $\mathrm{LLSM}(A \circ B) = k(\mathrm{LLSM}(A) \circ \mathrm{LLSM}(B))$, where $k$ is a positive constant.
PROOF. Let $x_1 = (x_{11}, x_{12}, \ldots, x_{1n})^T$ and $x_2 = (x_{21}, x_{22}, \ldots, x_{2n})^T$ be the LLSM priority vectors of $A$ and $B$, respectively. From Theorem 3.2, $A = W_1 \circ E_1$ and $B = W_2 \circ E_2$, where $W_1 = (x_{1i}/x_{1j})$ and $W_2 = (x_{2i}/x_{2j})$ are consistent, and $E_1, E_2 \in E_{III}$. It is easy to see that $W_1 \circ W_2$ is consistent and $E_1 \circ E_2 \in E_{III}$. Therefore, from (5), we have $\mathrm{LLSM}(A \circ B) = \mathrm{LLSM}(W_1 \circ W_2) = k(x_{11}x_{21}, x_{12}x_{22}, \ldots, x_{1n}x_{2n})^T = k(\mathrm{LLSM}(A) \circ \mathrm{LLSM}(B))$, where $k = 1/\sum_{i=1}^n x_{1i}x_{2i}$. ∎

If the priority vector is derived by GEM, then we have the following theorem.

THEOREM 4.3. Let $A \in MR_n$ be perturbed by a positive reciprocal matrix $B$ in the form $A \circ B$. Then, we have the following.

(1) If $A = W_1 \circ E_1$, where $W_1$ is consistent and $E_1 \in E_{IV}$, then $\mathrm{GEM}(A \circ B) = k(\mathrm{GEM}(A) \circ \mathrm{GEM}(E_1 \circ B))$, where $k$ is a positive constant.
(2) If either $A$ or $B$ is a consistent matrix, then $\mathrm{GEM}(A \circ B) = k(\mathrm{GEM}(A) \circ \mathrm{GEM}(B))$, where $k$ is a positive constant. In particular, if $A$ is a consistent matrix and $B$ is a standard-IV matrix, then $\mathrm{GEM}(A \circ B) = \mathrm{GEM}(A)$.

PROOF. We first prove (2). Suppose that $A = (x_i/x_j)$ is consistent, where $x = (x_1, x_2, \ldots, x_n)^T$, and that $B = W \circ E$, where $W = (w_i/w_j)$, $w = (w_1, w_2, \ldots, w_n)^T = \mathrm{GEM}(B)$, and $E \in E_{IV}$. From (7), it is easy to see that the GEM priority vector of $A$ is $x/\sum_{i=1}^n x_i$. Since $A \circ B = A \circ W \circ E$, where $A$ and $W$ are consistent and $E \in E_{IV}$, we have $\mathrm{GEM}(A \circ B) = \mathrm{GEM}(A \circ W)$. Since $A \circ W$ is consistent, the normalized vector of $x \circ w$ is the GEM priority vector of $A \circ W$. Thus, $\mathrm{GEM}(A \circ B) = \mathrm{GEM}(A \circ W) = k(\mathrm{GEM}(A) \circ \mathrm{GEM}(B))$, where $k = 1/\sum_{i=1}^n x_iw_i$. It is now easy to see that the proof of (1) follows from (2). ∎

Let $A \in MR_n$ be a perturbed matrix and $P \in MR_n$ a perturbation matrix. From the above arguments, we can obtain the $\alpha$ priority vector of $A \circ P$ by calculating the $\alpha$ priority vectors of $A$ and $E \circ P$, where $\alpha$ is one of EM, LDM, and GEM, and $E$ is the standard form of $A$ in $\alpha$. For LLSM, the calculation is more convenient, since $\mathrm{LLSM}(A \circ P) = k(\mathrm{LLSM}(A) \circ \mathrm{LLSM}(P))$. If $A$ is consistent and $P$ is the standard form of $A$ in $\alpha$, we can conclude that the $\alpha$ priority vector of $A \circ P$ is equal to that of $A$, where $\alpha$ can be one of EM, LDM, LLSM (REM), and GEM.
EXAMPLE. Let $A$ be a perturbed matrix and $P$ a perturbation matrix, and let $E$ denote the standard form of $A$ in each method. The priority vectors (components $w_1, \ldots, w_4$) computed by each method are as follows.

                  α(A)                          α(A ∘ P)                      α(A) ∘ α(E ∘ P)
         EM     LDM    LLSM    GEM      EM     LDM    LLSM    GEM      EM     LDM    LLSM    GEM
w1     .4212  .4257  .4259  .4220    .3913  .3985  .3977  .4000    .3913  .3985  .3977  .4000
w2     .2823  .2807  .2903  .3028    .2963  .2825  .2897  .2842    .2963  .2825  .2897  .2842
w3     .2156  .2129  .2129  .2202    .2061  .2222  .2201  .2526    .2061  .2223  .2201  .2526
w4     .0809  .0807  .0809  .0550    .1061  .0967  .0925  .0632    .1063  .0967  .0925  .0632
We can see that the small perturbations of the entries of $A$ yield small perturbations of the components of the priority vector of $A$, whether calculated by EM, LDM, LLSM (REM), or GEM.
APPENDIX

For the convenience of readers, we include background material in this section. If the matrix $A = (a_{ij})$ is not a consistent matrix, and the corresponding priority vector is $w = (w_1, w_2, \ldots, w_n)^T$, then the entries of the matrix $A$ should be functions of the corresponding components of $w$ and some other factors:

$$a_{ij} = f_{ij}(w_i, w_j, \alpha_i, \alpha_j, \ldots, u_i, u_j) = \eta_{ij}\frac{w_i}{w_j}, \qquad i, j \in \Omega, \tag{A.1}$$

where the $\eta_{ij} = \eta_{ij}(w_i, w_j, \alpha_i, \alpha_j, \ldots, u_i, u_j)$ are called perturbation functions, and $\alpha_i, \alpha_j, \ldots, u_i, u_j$ are some constants. Obviously, when every $\eta_{ij}$ approaches 1, the matrix $A$ tends to a consistent matrix, and $w_i/w_j$ is the ideal estimate of $a_{ij}$. In the method of EM, we ask for the priority vector under the condition $\sum_{j=1}^n \eta_{ij} = \lambda_{\max}$. We can also reasonably ask for the priority vector under the condition that every $\eta_{ij}$ is as close to 1 as possible. This can be achieved by minimizing the following function:

$$\sum_{i=1}^n \sum_{j=1}^n \ln^2 \eta_{ij}. \tag{A.2}$$
Substituting (A.1) into (A.2), we obtain the following optimization problem:

$$\min \sum_{i,j=1}^n (\ln a_{ij} - \ln w_i + \ln w_j)^2. \tag{A.3}$$
The solution $w$ of (A.3) is called the priority vector of the logarithmic least squares method (LLSM). Taking the partial derivative with respect to $w_k$, for $k \in \Omega$, of the function in (A.3), and setting

$$\frac{\partial}{\partial w_k}\left[\sum_{i,j=1}^n (\ln a_{ij} - \ln w_i + \ln w_j)^2\right] = 0, \qquad k \in \Omega,$$

we have

$$\sum_{j=1}^n (\ln a_{kj} - \ln w_k + \ln w_j)\left(-\frac{1}{w_k}\right) + \sum_{i=1}^n (\ln a_{ik} - \ln w_i + \ln w_k)\left(\frac{1}{w_k}\right) = 0,$$

for $k \in \Omega$. Therefore, using $a_{ik} = 1/a_{ki}$,

$$2n\ln w_k = 2\sum_{j=1}^n \left[\ln a_{kj} + \ln w_j\right], \qquad k \in \Omega,$$

i.e.,

$$\ln w_k^n = \ln \prod_{j=1}^n (w_ja_{kj}).$$

In other words,

$$w_k = \left(\prod_{j=1}^n w_j\right)^{1/n}\left(\prod_{j=1}^n a_{kj}\right)^{1/n} = t\left(\prod_{j=1}^n a_{kj}\right)^{1/n}, \qquad k \in \Omega, \tag{A.4}$$

where $t = (\prod_{j=1}^n w_j)^{1/n}$. Normalizing the solution obtained from (A.4), we have

$$w_k = \frac{\left(\prod_{j=1}^n a_{kj}\right)^{1/n}}{\sum_{i=1}^n \left(\prod_{j=1}^n a_{ij}\right)^{1/n}}, \qquad k \in \Omega. \tag{A.5}$$
If we ask the total deviation

$$F(w) = \sum_{i=1}^n \sum_{j=1}^n \left(\eta_{ij} + \frac{1}{\eta_{ij}} - 2\right) = \sum_{i=1}^n \sum_{j=1}^n \left(a_{ij}\frac{w_j}{w_i} + a_{ji}\frac{w_i}{w_j} - 2\right),$$

where $\eta_{ij} = a_{ij}(w_j/w_i)$ and $w = (w_1, w_2, \ldots, w_n)^T \in Q$, to be minimized, then the original matrix is as close as possible to the matrix $W = (w_i/w_j)$ with respect to consistency. This method is called the Least Deviation Method (LDM). The function $F(w)$ has a unique minimum solution in $Q$, which is also the unique solution of the following system of equations in $Q$:

$$\sum_{j=1}^n a_{ij}\frac{w_j}{w_i} = \sum_{j=1}^n a_{ji}\frac{w_i}{w_j}, \qquad i \in \Omega,$$

which can be proved as follows. First, we know that the continuous function $F(w)$ has a minimum solution in the bounded set $Q$: for any $w \in Q$, $F(w) \ge 0$, and when $w$ goes to the boundary of $Q$ (i.e., some $w_i$ goes to 0), $F(w)$ goes to $\infty$. Let $w^*$ be the solution of $\min F(w)$ under the constraint $\sum_{j=1}^n w_j = 1$ and $w_j > 0$, for $j \in \Omega$.
Construct the Lagrange function

$$L(w, \lambda) = F(w) + \lambda\left(\sum_{j=1}^n w_j - 1\right).$$

Then $w^*$ must satisfy the following system of equations (the first group is $w_i\,\partial L/\partial w_i = 0$):

$$\sum_{j=1}^n a_{ji}\frac{w_i}{w_j} - \sum_{j=1}^n a_{ij}\frac{w_j}{w_i} + \lambda w_i = 0, \qquad i \in \Omega, \qquad \sum_{j=1}^n w_j = 1.$$

Summing the first group of equations with respect to $i$, the two double sums cancel, so that $\lambda \sum_{i=1}^n w_i = 0$. Therefore, $\lambda = 0$ and

$$\sum_{j=1}^n a_{ij}\frac{w_j}{w_i} = \sum_{j=1}^n a_{ji}\frac{w_i}{w_j}, \qquad i \in \Omega, \qquad \sum_{j=1}^n w_j = 1, \quad w_j > 0, \quad j \in \Omega.$$

Finally, we need to prove that the solution of the above system of equations is unique. Let $x = (x_1, x_2, \ldots, x_n)^T$ and $y = (y_1, y_2, \ldots, y_n)^T$ be any two solutions. Let $t_i = y_i/x_i$, and let $t_l = \max_{1 \le i \le n} t_i$. If the $t_i$ are not all equal, then

$$\sum_{j=1}^n a_{lj}\frac{y_j}{y_l} = \sum_{j=1}^n a_{lj}\frac{t_jx_j}{t_lx_l} < \sum_{j=1}^n a_{lj}\frac{x_j}{x_l}, \qquad \sum_{j=1}^n a_{jl}\frac{y_l}{y_j} = \sum_{j=1}^n a_{jl}\frac{t_lx_l}{t_jx_j} > \sum_{j=1}^n a_{jl}\frac{x_l}{x_j},$$
which contradicts

$$\sum_{j=1}^n a_{lj}\frac{x_j}{x_l} = \sum_{j=1}^n a_{jl}\frac{x_l}{x_j}.$$

Therefore, $t_i = t_l$ for any $i \in \Omega$. We then have $x = y$, since both $x$ and $y$ are in $Q$. Applying the same method to the function $R(w)$ in the REM, i.e., making the partial derivative of the corresponding Lagrange function with respect to each $w_i$ equal to 0, and then solving for $w_i$, we obtain (6).

Let the entries of the upper triangle of the matrix $A = (a_{ij})$ be given. We construct a matrix $\hat{A} = (\hat{a}_{ij})$ by

$$\hat{a}_{ij} = \begin{cases} a_{ij}, & i < j, \\ 1, & i = j, \\ \dfrac{w_i}{w_j}, & i > j, \end{cases} \qquad i, j \in \Omega,$$
where $w = (w_1, w_2, \ldots, w_n)^T$ is the priority vector to be calculated. If $A$ is a consistent matrix, then $\hat{A} = A$; otherwise, $\hat{A}$ is neither consistent nor positive reciprocal. Let us calculate the right principal eigenvector $w$ of $\hat{A}$: $\hat{A}w = \lambda_{\max}w$. The $i$-th equation is

$$\sum_{j=1}^{i-1}\frac{w_i}{w_j}w_j + w_i + \sum_{j=i+1}^n a_{ij}w_j = \lambda_{\max}w_i,$$

i.e.,

$$iw_i + \sum_{j=i+1}^n a_{ij}w_j = \lambda_{\max}w_i,$$

which is equivalent to the eigenvalue and eigenvector problem $Uw = \lambda_{\max}w$, where $U = (u_{ij})$ and

$$u_{ij} = \begin{cases} a_{ij}, & i < j, \\ i, & i = j, \\ 0, & i > j, \end{cases} \qquad i, j \in \Omega.$$

This method is called the Gradational Eigenvector Method (GEM). The solution is $\lambda_{\max} = n$, and

$$w_i = \frac{1}{n-i}\sum_{j=i+1}^n a_{ij}w_j, \qquad \text{for } i = n-1, n-2, \ldots, 2, 1.$$

It is not hard to verify that if $A$ is a consistent matrix, then all the priority vectors (after normalizing) obtained by EM, LDM, LLSM, REM, and GEM are equal.
REFERENCES

1. T.L. Saaty, A scaling method for priorities in hierarchical structures, Journal of Mathematical Psychology 15 (3), 234-281 (1977).
2. T.L. Saaty, The Analytic Hierarchy Process, McGraw-Hill, (1980).
3. T.L. Saaty and L.G. Vargas, Comparison of eigenvalue, logarithmic least squares and least squares methods in estimating ratios, Mathl. Modelling 5 (5), 309-324 (1984).
4. T.L. Saaty, Rank according to Perron: A new insight, Mathematics Magazine 60, 211-213 (1987).
5. T.L. Saaty, How to make a decision: The AHP, European Journal of Operational Research 48, 9-26 (1990).
6. Yingming Wang, An overview of priority methods of comparison matrix, Journal of Decision Making and Decision Support Systems 5, 101-114 (1995).
7. L.G. Vargas, Analysis of sensitivity of reciprocal matrices, Applied Mathematics and Computation 12 (4), 301-320 (1983).
8. Lianfen Wang and Shubo Xu, Introduction to the Analytic Hierarchy Process, Publishing House of the People's University of China, (1989).
9. K.O. Cogger and P.L. Yu, Eigenweight vectors and least-distance approximation for revealed preference in pairwise weight ratios, Journal of Optimization Theory and Applications 46 (4), 483-491 (1985).
10. Baoqian Chen, Guiru Liu and Qiaozhu Cai, Priority problem of incomplete judgement matrices in the Analytic Hierarchy Process, Journal of Nankai University, Natural Science Edition (1), 38-46 (1989).
11. Gongyan Lei, A note on the application of relative entropy to the Analytic Hierarchy Process, Systems Engineering Theory and Practice 15, 65-68 (1995).
12. T.L. Saaty, Eigenvector and logarithmic least squares, European Journal of Operational Research 48, 156-160 (1990).
13. Lianfen Wang, Deduction and improvement on priority method of gradient eigenvector, Theory and Practice of System Engineering (3), 17-21 (1989).
14. G. Crawford and C. Williams, A note on the analysis of subjective judgement matrices, Journal of Mathematical Psychology 29, 387-405 (1985).
15. Guo Fei and Cuiping Wei, The priority method with regard to a common criterion in AHP, Huanghuai Journal, Natural Science Edition 10, 59-63 (1994).
16. J.R. Emshoff and T.L. Saaty, Applications of the analytic hierarchy process to long range planning processes, European Journal of Operational Research 10, 131-143 (1982).
17. R.E. Jensen, An alternative scaling method for priorities in hierarchical structures, Journal of Mathematical Psychology 28, 317-332 (1984).