Mathl. Comput. Modelling Vol. 15, No. 11, pp. 141-151, 1991
0895-7177/91 $3.00 + 0.00
Copyright © 1991 Pergamon Press plc
Printed in Great Britain. All rights reserved
THE EXTENDED ANALYTIC HIERARCHY DECISION METHOD

JOSEPH M. LAMBERT
Department of Computer Science, The Pennsylvania State University
University Park, Pennsylvania 16802, U.S.A.

(Received June 1991)
Abstract. This paper extends the Analytic Hierarchy Decision Model of T.L. Saaty by enlarging the set of pairwise comparison values to allow indecision or noncomparability between two alternatives.
1. INTRODUCTION

In this paper we present an extension of the Analytic Hierarchy Process of T.L. Saaty [1] by enlarging the set of pairwise comparison values to allow indecision or noncomparability between two alternatives. In Section 2, we present a quick overview of the Analytic Hierarchy Process. We follow with a section that gives the intuition for the extension of the model. Section 4 gives the theoretical foundation for the model extension. An application section then follows.

2. THE ANALYTIC HIERARCHY PROCESS
The hierarchy decision process of T.L. Saaty ranks discrete alternatives by using pairwise comparison of the alternatives. In order to rank n alternatives, the input consists of comparing each of the alternatives using a scale set

S = {1/9, 1/8, 1/7, 1/6, 1/5, 1/4, 1/3, 1/2, 1, 2, 3, 4, 5, 6, 7, 8, 9}.

The pairwise comparison of alternative i with alternative j is placed in the element a_ij of the Pairwise Comparison Alternative Matrix

A = | a_11  a_12  ...  a_1n |
    | a_21  a_22  ...  a_2n |
    | ...   ...   ...  ...  |
    | a_n1  a_n2  ...  a_nn |
The reciprocal value of this comparison is placed in the element a_ji of A in order to preserve consistency of judgement. Thus, given n alternatives, the user compares the relative importance of one alternative with respect to a second alternative, using the criteria of Table 1. Hence, if alternative one was strongly favored over alternative two, for example, then a_12 = 5. If the converse was true, and alternative two was strongly favored over alternative one, then a_12 is the reciprocal value 1/5. The Pairwise Comparison Alternative Matrix is called a reciprocal matrix for obvious reasons. The intuition behind the Analytic Hierarchy Process is that in a perfect world, the Pairwise Comparison Alternative Matrix A would be identical to the matrix

X = | x_1/x_1  x_1/x_2  ...  x_1/x_n |
    | x_2/x_1  x_2/x_2  ...  x_2/x_n |
    | ...      ...      ...  ...     |
    | x_n/x_1  x_n/x_2  ...  x_n/x_n |
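The reciprocal construction is mechanical. The following sketch (the judgment values are hypothetical, not taken from the paper) builds a Pairwise Comparison Alternative Matrix from the strict upper-triangular judgments alone:

```python
import numpy as np

def reciprocal_matrix(upper):
    """Build a Pairwise Comparison Alternative Matrix from the strict
    upper-triangular judgments a_ij (i < j), filling the lower triangle
    with reciprocals to preserve consistency of judgement."""
    n = len(upper) + 1  # upper[i] holds judgments of alternative i vs i+1, ..., n-1
    A = np.ones((n, n))
    for i, row in enumerate(upper):
        for k, a in enumerate(row):
            j = i + 1 + k
            A[i, j] = a
            A[j, i] = 1.0 / a
    return A

# Hypothetical judgments: alternative 1 strongly favored over 2 (a_12 = 5),
# moderately favored over 3 (a_13 = 3), alternative 3 slightly over 2.
A = reciprocal_matrix([[5, 3], [1/2]])
print(A)
```

Only the judgments above the diagonal need to be elicited from the user; the rest of the matrix is determined.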
Table 1. Meaning of Pairwise Comparison Scale.

Value              Definition                Explanation
1                  Equal importance          Two alternatives contribute equally
                                             to the objective
3                  Moderate importance       Experience and judgement slightly favor
                                             one alternative over another
5                  Essential or strong       Experience and judgement strongly favor
                   importance                one alternative over another
7                  Demonstrated              An alternative is strongly favored and its
                   importance                dominance is demonstrated in practice
9                  Absolute importance       The evidence favoring one alternative over
                                             another is of the highest possible order
2, 4, 6, 8         Intermediate values       When compromise is needed
                   between the two
                   adjacent judgements
Reciprocals of     If alternative i has one of the above numbers assigned to it when
above numbers      compared with alternative j, then j has the reciprocal value
                   compared with i
where x_i is the relative weight or fuzzy membership value of alternative i. Various methods have been proposed to extract values {w_i} from the matrix A which would be close approximations to the values {x_i}. In particular, Saaty recommends solving the Pairwise Comparison Alternative Matrix for its maximal eigenvalue, λ_max. The associated normalized eigenvector is then taken to be the approximate weight vector {x_i}, since

X (x_1, ..., x_n)^T = n (x_1, ..., x_n)^T

if the Pairwise Comparison Alternative Matrix used all real numbers rather than a limited scale set of cardinality seventeen. Other techniques suggested in Saaty [1] include summing the rows of the Pairwise Comparison Alternative Matrix and normalizing the result, since

Σ_{j=1}^{n} x_i/x_j = x_i Σ_{j=1}^{n} 1/x_j.

Column sums that are inverted and normalized also yield the membership vector, since

( Σ_{i=1}^{n} x_i/x_j )^{-1} = x_j / Σ_{i=1}^{n} x_i.
Finally, Saaty notes that the normalized geometric mean of the rows will also yield the membership set, since

( Π_{j=1}^{n} x_i/x_j )^{1/n} = x_i / ( Π_{j=1}^{n} x_j )^{1/n}.

Several investigators have noted that these evaluation techniques yield errors in the decision process. Hihn and Johnson [2] show that the four evaluation methods mentioned above and an additional twelve methods yield no clear basis for preferring one evaluation method over another. Moreover, Triantaphyllou and Mann [3] show in specific examples that the eigenvalue evaluation method can produce ranking errors. Lambert [4] shows that the limited cardinality of the input set S leads to imprecision in the outputs of the analytic hierarchy decision method. Thus fine grain results cannot be expected as output. However, the method remains a valuable heuristic that yields first approximations to a weight vector, and the technique is an important tool in the overall evaluation of decision planning. In the best of all possible worlds, the Pairwise Comparison Alternative Matrix would be as close to X as possible. One consistency criterion that Saaty considers crucial is the relationship

a_ik = a_ij a_jk,   for i, j, k = 1, ..., n.

He shows that a Pairwise Comparison Alternative Matrix is consistent if and only if λ_max = n. A statistical measure is then given for matrices that are close to consistent. This measure is a function of n. Basically, if λ_max is close to n, then the Pairwise Comparison Alternative Matrix is close to being consistent.
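Both the elementwise condition and its spectral characterization can be checked numerically; a small sketch (the matrices below are hypothetical):

```python
import numpy as np

def is_consistent(A, tol=1e-9):
    """Check Saaty's condition a_ik = a_ij * a_jk for all i, j, k."""
    n = len(A)
    return all(abs(A[i, k] - A[i, j] * A[j, k]) < tol
               for i in range(n) for j in range(n) for k in range(n))

def lambda_max(A):
    """Maximal eigenvalue of A (real for a reciprocal matrix)."""
    return max(np.linalg.eigvals(A).real)

x = np.array([1.0, 2.0, 4.0])
consistent = np.outer(x, 1.0 / x)    # X = (x_i / x_j) is always consistent
print(is_consistent(consistent), lambda_max(consistent))

perturbed = consistent.copy()
perturbed[0, 2] = 5.0                # break one judgment...
perturbed[2, 0] = 1.0 / 5.0          # ...while keeping reciprocity
print(is_consistent(perturbed), lambda_max(perturbed))
```

For the consistent matrix, λ_max comes out equal to n = 3; perturbing a single judgment pushes λ_max strictly above n, which is exactly the gap the statistical measure quantifies.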
3. MODEL EXTENSION: INTUITION AND EXAMPLES
In order to extend the Analytic Hierarchy Decision Model, we have to incorporate the concept of noncomparability or indecision. Given two alternatives, a judge may be unable to compare the alternatives. For example, a judge in a science fair may have a background in biochemistry and have a nonexistent background in computer algorithms. The judge, thus, does not feel that any comparison value can be made to pairwise compare a project in biology and another project in sorting algorithms. None of the seventeen values in S can be used. Thus, the set S must be extended. The value zero is not appropriate since its reciprocal is not defined. Other real numbers could be appropriate, but a relative scaling and ordering is immediately implied. Thus an extension by real numbers is inappropriate. The fact that alternatives are not comparable leads one to consider the complex number i and its reciprocal 1/i = -i, where i^2 = -1. The framework of the input for the model and the intuition for input and output of the model are unchanged. However, a discussion of the meaning of the output eventually must be made. We used the complex arithmetic EISPACK routines [5] to obtain our numerical results. Consider the simplest of cases. Given two alternatives which are incomparable, we obtain a Pairwise Comparison Alternative Matrix
A = |  1  i |
    | -i  1 |

Solving the eigenvalue problem, we obtain λ_max = 2 with associated eigenvector (i, 1)^T. We normalize this vector by adding the magnitudes of each component of the vector and then dividing each component in turn by that value. In this case we obtain (0.5i, 0.5)^T.
Before we attempt to analyse the meaning of the output vector, we should note that the matrix

A = | 1  -i |
    | i   1 |

should present a similar output vector, since incomparability should be independent of the location of i and -i. The eigenvalue method yields λ_max = 2 with associated normalized eigenvector (-0.5i, 0.5)^T. Thus, the vectors are not identical, but we do notice that each component of the respective vectors has magnitude 1/2. Moreover, if we look at a specific eigenvector we note that in the complex plane each component is orthogonal to the other. In other words, if the eigenvector has the form

( r_1 e^{iθ_1}, r_2 e^{iθ_2} )^T,

we have |θ_1 - θ_2| = π/2. Since complex numbers are not comparable, the initial results hold some promise. If we look at total incomparability matrices of dimensions 3, 4, and 5, that is, matrices with 1 on the diagonal, i in the upper triangular positions and -i in the lower triangular positions, we obtain the following maximum eigenvalues and associated normalized eigenvectors:
λ_max,3 = 2.732,   λ_max,4 = 3.414,   λ_max,5 = 4.077,

with, for example,

z_5 = (1/5) ( e^{-i1.2566}, e^{-i0.6283}, e^{i0}, e^{i0.6283}, e^{i1.2566} )^T.
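These maximum eigenvalues can be reproduced with a standard eigensolver (NumPy here, in place of the paper's complex EISPACK routines); the magnitudes of the components of each principal eigenvector come out equal to 1/n:

```python
import numpy as np

def total_incomparability(n):
    """1 on the diagonal, i in the upper triangle, -i in the lower triangle."""
    A = np.eye(n, dtype=complex)
    A = A + 1j * np.triu(np.ones((n, n)), k=1)
    A = A - 1j * np.tril(np.ones((n, n)), k=-1)
    return A

for n in (2, 3, 4, 5):
    vals, vecs = np.linalg.eig(total_incomparability(n))
    k = np.argmax(vals.real)
    v = vecs[:, k] / np.abs(vecs[:, k]).sum()   # normalize by sum of magnitudes
    print(n, round(vals[k].real, 3), np.round(np.abs(v), 3))
```

Note that these matrices are Hermitian, so all their eigenvalues are real; for n = 3 the maximal eigenvalue is exactly 1 + sqrt(3) = 2.732.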
We see that there are now three, four, or five axes of incomparability, a frightening thought. Moreover, the angle between adjacent axes is equal to π/n, where n is the dimension of the Pairwise Comparison Alternative Matrix. Since the values of λ_max < n, there is concern regarding the appropriateness of the model. Again we delay interpretation of the results until we discuss more cases. If we solve

A = |  1   1   i |
    |  1   1   i |
    | -i  -i   1 |

we see that in this case λ_max = 3 with a normalized eigenvector (i/3, i/3, 1/3)^T. The model now seems to be in step with the two dimension cases. A pattern begins to emerge, but another problem lurks before us. In the example

A = |  1   1   i |
    |  1   1  -i |
    | -i   i   1 |

we see λ_max = 2.
A quick analysis shows that for every nontrivial consistency pair, a_ij a_jk ≠ a_ik. Thus, one cannot arbitrarily assign i and -i. If we make the simple change to

A = |  1   1   i |
    |  1   1   i |
    | -i  -i   1 |

we see that in this case λ_max = 3 with a normalized eigenvector (i/3, i/3, 1/3)^T.
The model now seems to be in step with the two dimension cases. Thus, to ensure consistency, the values of i and -i must be used such that all values of i are placed in a row and the reciprocals are placed in the appropriate column. The converse would also be a valid scaling for consistency. At this stage, we will place -i in columns of the Pairwise Comparison Alternative Matrix and i in the rows of the Pairwise Comparison Alternative Matrix. As we will see in the theory section, either choice will yield a valid model, although we will use the choice made here to yield a uniform standard model. The difficulty with the completely incomparable matrices of dimensions 3, 4, and 5 is that each is inconsistent with regard to the Saaty consistency condition. In point of fact, if alternatives are judged to be mutually incomparable, the value in the Pairwise Comparison Alternative Matrix must be equality, or 1. This can be seen in the following example.
[5-square Extended Pairwise Comparison Alternative Matrix with entries 1, i, and -i, in which alternatives two and four are noncomparable with the remaining alternatives.]

Note that the entries in the Pairwise Comparison Alternative Matrix in row 2 column 4 and in row 4 column 2 are both one, denoting that alternatives two and four are undecidable by pairwise comparisons and thus, when compared, they take the value 1. In this case, λ_max(A) = 5.6194, with a normalized eigenvector.
We note that two of the components have the same direction, with the third component being essentially orthogonal to the others. Comparing two components that have the same direction is clean in the sense that the magnitudes of the complex numbers may be an appropriate measure. Let us look at two more matrices. Set B to be the matrix obtained from A by defining b_ij = |a_ij|. Thus B has the value 1 wherever i or -i was found in A. All other values are equal. One finds that λ_max(B) = 5.6194. This result is predicted by results in the theory section. We note that λ_max(A) = λ_max(B) = 5.6194 and the magnitudes of the associated eigenvectors are equal.
Finally, let C be the matrix obtained from A by discarding all noncomparable alternatives. We thus have the 3-square matrix C on the comparable alternatives. One finds that λ_max(C) = 3.916, with a normalized eigenvector

( .2599, .3274, .4126 )^T.

By rotating the normalized eigenvector associated with λ_max(A) (multiplying by a suitable unit complex number e^{iθ}), we obtain another normalized eigenvector, namely

( .1807, .1779i, .2074, .1779i, .2559 )^T.

Here we see that the three components corresponding to the comparable alternatives lie along the real axis and that the two noncomparable elements lie along the imaginary axis. The interpretation of the model outputs can take one of several paths:

1. Use the values of alternatives in the normalized eigenvector associated with λ_max(C), setting missing values to zero.
2. Use the values in the normalized eigenvector associated with λ_max(B), essentially using the magnitudes of the corresponding vector determined by A.
3. Use the values in the normalized eigenvector associated with λ_max(A), normalized such that the comparable alternatives are on the real axis.
Choice (1) clearly is biased toward comparable alternatives rather than noncomparable alternatives. The values of this judge or comparator would take on too much weight in combination with the judgements of other judges. This is unacceptable. Choice (2) has possible merit but clearly is inappropriate, since the answer reflects the matrix B and not A. Thus, we will use Choice (3) as the only appropriate interpretation. Unfortunately, this still does not give us a complete interpretation. The next step is to determine how to combine judgements in a hierarchy. Given a top level hierarchy, Saaty takes the normalized eigenvector of the highest level and multiplies it by the matrix whose columns correspond to the normalized eigenvectors of the lowest level associated in turn with each alternative in the highest level:

w = V u,

where u is the highest-level eigenvector and the columns of V are the lowest-level eigenvectors.
Assigning weights to the branches of a hierarchy tree, multiplying them, and collecting values for the lowest level leaves yields the same result. We note that when all values are real, the vector w is automatically normalized to 1. However, in the extended case, complex arithmetic can yield complex values in the vector w. In particular, values need no longer be conveniently placed on the real and imaginary axes. Moreover, vector components need no longer differ by an argument of π/2. In this model, we continue the Saaty technique, allowing complex vectors at all stages, until the setting of a final membership value by taking the real components of the vector. We see this in the following examples of hierarchical multiplication.
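The hierarchical composition and the final real-part step can be sketched as follows (all weight values here are hypothetical, not the paper's worked example):

```python
import numpy as np

# Highest-level weight vector over two criteria -- hypothetical values.
u = np.array([0.5, 0.5])

# Columns: normalized lowest-level eigenvectors, one per criterion.
# The purely imaginary entry marks an alternative that could not be
# compared under the first criterion; each column's magnitudes sum to 1.
V = np.array([[1/3,  0.25],
              [1/3,  0.25],
              [1j/3, 0.50]])

w = V @ u                    # composite (possibly complex) vector
membership = w.real          # final step: take the real components
print(np.round(w, 4))
print(np.round(membership, 4), membership.sum())
```

The real parts of the composite vector sum to less than 1 here; as discussed below, the deficit is attributable to the noncomparability of one alternative.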
At this final stage, an interpretation must be made that enables resolution of both cases above. The interpretation that we choose is that we assign to each component of the fuzzy membership vector the real part of that component. Thus, the weight vector is (1/3, 1/3, 0) in the first example and is (1/9, 1/9, 1/9) in the other. In both instances, the weight vector does not sum to unity, but this can be attributed to the noncomparability of alternatives. Note that this interpretation is a compromise in the sense that noncomparable alternatives still would receive zero values, but the weight values of the comparable alternatives are less than those in Choice (1). In effect, the choice is stating that all alternatives are assumed to be equal until comparisons are made. If two alternatives are not able to be compared, they are judged to be equal to one another and to all other alternatives. The values of the noncomparable alternatives in the normalized eigenvector associated with the maximal eigenvalue are orthogonal to the comparable values.

4. THEORETICAL FOUNDATIONS

In his treatise [1], Saaty used some properties of nonnegative matrices to set the foundation of his results. We do the same.
DEFINITION 1. A matrix B = {b_ij} is said to be nonnegative if b_ij ≥ 0 for all i, j. B is said to be positive if b_ij > 0 for all i, j. A matrix C = {c_ij} is said to dominate a matrix B if c_ij ≥ b_ij for all i, j. The conjugate matrix of a matrix B is the matrix B̄ = {b̄_ij}, where z̄ is the complex conjugate of z.

DEFINITION 2. A matrix B is said to be cogredient to a matrix C if there exists a permutation matrix P such that B = P^T C P, where P^T denotes the transpose of P. A nonnegative n-square matrix A is called reducible if A is cogredient to a matrix of the form

| B  C |
| 0  D |

where B and D are square submatrices. Otherwise A is said to be irreducible.

The following results can be found in [6], and they are the cornerstones of the work of Perron and Frobenius [7,8].
THEOREM 1. Let A be an irreducible nonnegative matrix.
(1) A has a real positive eigenvalue λ_max(A) such that λ_max(A) ≥ |μ| for any eigenvalue μ of A.
(2) The eigenvector associated with λ_max(A) is a positive vector, a vector all of whose components are nonnegative. Moreover, all components are strictly positive.
(3) λ_max(A) is a simple root of the characteristic equation of A, and hence the span of the eigenvectors associated with λ_max(A) has dimension 1.

The following theorem is due to Wielandt [9]. Its proof can be found in [6].

THEOREM 2. Given a matrix C = {c_ij} with c_ij complex valued. If an irreducible matrix A with maximal eigenvalue λ_max(A) dominates the nonnegative matrix |C| = {|c_ij|}, then for every eigenvalue μ of C we have

|μ| ≤ λ_max(A).

Equality holds if and only if C = e^{iφ} D A D^{-1}, where μ = λ_max(A) e^{iφ} and |D| = I_n, an n-square identity matrix.

DEFINITION 3. A matrix A is a reciprocal matrix if a_ji = 1/a_ij for all i, j. An Extended Pairwise Comparison Alternative Matrix is a reciprocal matrix taking values from the set

E = {1/9, 1/8, 1/7, 1/6, 1/5, 1/4, 1/3, 1/2, -i, i, 1, 2, 3, 4, 5, 6, 7, 8, 9}.
DEFINITION 4. An Extended Pairwise Comparison Alternative Matrix C is stable if values z = Re(z) + i Im(z) in its columns have Im(z) ≥ 0 and values in its rows have Im(z) ≤ 0, or if values z in its columns have Im(z) ≤ 0 and values in its rows have Im(z) ≥ 0. An n-square reciprocal matrix C is called consistent if c_ij = c_ik c_kj for i, j, k = 1, 2, ..., n.

Effectively, the definition of a stable Extended Pairwise Comparison Alternative Matrix forces any column with a value z such that Im(z) ≠ 0 to have all of its values in the set {1, i}, or all of its values in the set {1, -i}.

THEOREM 3. Let C be an n-square stable Extended Pairwise Comparison Alternative Matrix. Then:
1. A = |C| is a Pairwise Comparison Alternative Matrix.
2. C = D A D^{-1}, where |D| = I_n.
3. The eigenvalue of C with maximum modulus, λ_max(C), is real with λ_max(C) = λ_max(A) > 0. Moreover, λ_max(C) is a simple root of its characteristic equation and the span of its associated eigenvectors has dimension 1.
4. If Ax = λ_max(A) x, then C(Dx) = λ_max(C) (Dx) and span(Dx) contains all eigenvectors associated with λ_max(C).
5. If Cz = λ_max(C) z, then C̄z̄ = λ_max(C̄) z̄.

PROOF.
1. If Im(c_ij) ≠ 0, then |c_ij| = 1 ∈ S. If Im(c_ij) = 0, then c_ij ∈ S. Hence |C| is a Pairwise Comparison Alternative Matrix.
2. Since C is stable, we make the assumption that all columns with values with nonzero imaginary parts take on the values 1 and -i. Let J be a subset of {1, 2, ..., n} such that if j ∈ J then c_kj ∈ {1, -i} for k = 1, 2, ..., n. Then define the diagonal matrix D by d_kk = i if k ∈ J and d_kk = 1 if k ∉ J. The diagonal matrix D^{-1} takes on the reciprocal values of the diagonal of D. In particular, the values on its diagonal are 1 and -i. It is equivalent to show D^{-1} C D = A. Postmultiply C by D and we see that all rows of CD that have values with nonzero imaginary part have the value i in those positions of that row. Columns with nonzero imaginary part have values in the set {1, i}. All other values of CD have the same values as C. Premultiplying CD by D^{-1} changes all complex values in CD to 1, thus yielding A = |C|.
3. By Theorem 2, we have that λ_max(C) = |λ_max(C)| e^{iθ}. But then λ_max(C) is real and is positive by Theorem 1, since every Pairwise Comparison Alternative Matrix is irreducible. Since C is similar to A, i.e., C = PAP^{-1} for some invertible P, C has the same characteristic equation as A [10]. By Theorem 1, λ_max(A) is a simple root of its characteristic equation and the span of the eigenvectors associated with it has dimension 1.
4. The simple computation C(Dx) = DAD^{-1}(Dx) = D(Ax) = D(λ_max(A) x) = λ_max(C) (Dx) gives the result, coupled with the fact that the eigenspace has dimension 1 from (3) above.
5. C̄ is a square stable Extended Pairwise Comparison Alternative Matrix. Setting E = D^{-1}, where D was defined above, we see C̄ = E A E^{-1}. Thus λ_max(C̄) = λ_max(C) = λ_max(A) > 0. We know that if z is in the eigenspace of λ_max(C), then z = αDx for some α and a fixed x in the eigenspace of λ_max(A). Similarly, for any y in the eigenspace of λ_max(C̄) we can write y = βEx for some β and the same x. But a simple computation shows z̄ = ᾱD̄x = ᾱEx. Thus z̄ is an eigenvector associated with λ_max(C̄).
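Theorem 3 can be checked numerically. Below is a sketch with a hypothetical 3-square stable matrix (the judgment a_12 = 3 is invented for illustration) in which alternative 3 is noncomparable with the other two, so its column holds -i and its row holds i:

```python
import numpy as np

# Hypothetical stable Extended Pairwise Comparison Alternative Matrix.
C = np.array([[1,    3,   -1j],
              [1/3,  1,   -1j],
              [1j,   1j,   1 ]], dtype=complex)

A = np.abs(C)                 # Theorem 3(1): |C| is an ordinary real PCAM
D = np.diag([1, 1, 1j])       # |D| = I; d_33 = i for the noncomparable column

# Theorem 3(2): C = D A D^{-1}.
print(np.allclose(C, D @ A @ np.linalg.inv(D)))

# Theorem 3(3): the maximal eigenvalues of C and A agree and are real.
lam_A = max(np.linalg.eigvals(A).real)
print(np.isclose(max(np.linalg.eigvals(C).real), lam_A))

# Theorem 3(4): if Ax = lam_A x, then Dx is a principal eigenvector of C,
# so the whole computation can be carried out in real arithmetic.
vals, vecs = np.linalg.eig(A)
x = np.abs(vecs[:, np.argmax(vals.real)])
print(np.allclose(C @ (D @ x), lam_A * (D @ x)))
```

All three checks succeed, illustrating the practical content of the theorem: the complex problem reduces to a real eigenproblem plus a diagonal phase transformation.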
The result above includes an explicit representation for an eigenvector of C corresponding to λ_max(C) as a function of an eigenvector of A corresponding to λ_max(A). This enables all computations to be done with real matrices, a substantial advantage. Most importantly, it contains the dubious result that both C and C̄ are stable Extended Pairwise Comparison Alternative Matrices for the same problem. Thus it is important that a uniform convention be used to assign values to Extended Pairwise Comparison Alternative Matrices. In order to keep theoretical values of normalized eigenvectors in the positive quadrant of the complex plane, we use the set {1, -i} as the values for columns of the Extended Pairwise Comparison Alternative Matrix where we have noncomparable entities. This will then ensure
that the eigenvectors of C associated with its maximal eigenvalue will be in span(Ex), where x is a positive eigenvector associated with the maximal eigenvalue of A = |C|, and where |E| = I_n with the diagonal elements of E in the set {1, i}.

5. EXAMPLE APPLICATION
An evaluation committee is formed to evaluate five faculty members {f_1, f_2, f_3, f_4, f_5} in a university. Faculty are reviewed by a committee of five judges based on teaching, research, scholarship, and service. Unfortunately, each judge has a different perception of the weights to be placed on each category of evaluation. Moreover, each judge has different levels of knowledge about each faculty member. Using the usual pairwise comparison techniques, each judge uses the set S for inputs, and the five judges have the following weight vectors over the four categories.

If an external oracle were judging the respective faculty, the oracle would produce the following output:

[Oracle table: rows teaching, research, scholarship, service; columns f_1 through f_5; each entry an upward arrow, a downward arrow, or no arrow.]

where an upward directed arrow indicates exceptional performance in the category, a downward arrow indicates poor performance, and no arrow indicates acceptable performance. The judges are not as enlightened as the oracle, but with their limited knowledge come up with the following matrices, with teaching, research, scholarship, and service given in order for each judge. Within each matrix, faculty are arranged in order f_1 through f_5. Below each pattern of matrices are the normalized eigenvectors associated with the maximal eigenvalue. For the eigenvectors with complex components, we rotate them such that real components are on the real axis. We thus can use the real eigenvalue routines as explained in the theory section.
[Judges 1 through 5: four 5-square pairwise comparison matrices each, for teaching, research, scholarship, and service, followed by the associated normalized eigenvectors. Judges with incomplete knowledge of some faculty use the entries i and -i; a judge with no knowledge of the faculty produces matrices of all ones.]
Then, for each judge we compose the 5 × 4 matrix with columns being the normalized eigenvectors associated with teaching, research, scholarship, and service for each of the five faculty. Then multiply this matrix by the 4 × 1 vector consisting of the weights of teaching, research, scholarship, and service for that judge. For example, for Judge 1, we have

| .0889  .0421  .2307  .5303 |
| .0335  .4014  .2307  .1400 |
| .3839  .4014  .2307  .1400 |
| .1097  .1129  .2307  .1400 |
| .3839  .0421  .0769  .0496 |
This having been done for each judge, multiply each vector by .2 and add the results. We have made the assumption that each judge was fair, and hence all judges are weighted equally. If that were not the case, different weights could have been obtained by the pairwise comparison technique. We then obtain the following vector.
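The aggregation across equally weighted judges can be sketched as follows (the composite vectors below are hypothetical stand-ins for the five judges' computed values, which involve the paper's full matrices):

```python
import numpy as np

# Hypothetical composite vectors for the five judges; imaginary parts
# mark residue from noncomparable alternatives.
judge_vectors = [
    np.array([0.30, 0.25, 0.20, 0.15, 0.10], dtype=complex),
    np.array([0.15 + 0.10j, 0.30, 0.25, 0.15, 0.10]),
    np.array([0.25, 0.25, 0.15 + 0.05j, 0.20, 0.10]),
    np.array([0.20, 0.20, 0.20, 0.20, 0.20], dtype=complex),
    np.array([0.10, 0.35, 0.30, 0.15, 0.10], dtype=complex),
]

# Each judge is assumed fair, hence weighted equally by 0.2.
combined = sum(0.2 * v for v in judge_vectors)
membership = combined.real          # final membership vector
print(np.round(membership, 4))      # the components need not sum to 1
```

Unequal judge weights would simply replace the uniform 0.2 factors with a weight vector obtained by pairwise comparison of the judges themselves.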
Since this is the final membership vector, we use the real vector as the membership vector. The components of the real vector do not sum to 1, but the noncomparability of items contributed to the result. The judges, while not oracles, did reasonably well in evaluating the faculty, even though they had different criteria weights. The fact that each judge was honest in the decisions that they made in no small measure contributed to the correspondence with the oracle. The ranking of faculty is thus f_3, f_2, f_5, f_4, f_1, with the latter three being essentially indistinguishable. This result can be contrasted with the result that is obtained by having each judge use the protocol of discarding all noncomparable alternatives, and then using smaller Pairwise Comparison Alternative Matrices to obtain normalized membership vectors. The full membership vector is obtained by inserting zeroes in the noncomparable alternative components. Then, using the hierarchy process as detailed above, we obtain the vector
This latter vector ranks faculty f_3, f_2, f_5, f_1, f_4, with the last two being essentially indistinguishable. As stated in the overview section on the analytic hierarchy process, ranking errors can occur. A decision maker must group components of a membership vector that have close numerical scores. The extended process gives a more realistic measurement when indecision is present.

6. CONCLUDING REMARKS
This paper has presented an extension of the Analytic Hierarchy Process by adding the two values i and -i to the scale set of acceptable comparison values. Using the maximal eigenvalue and setting its associated normalized eigenvector as the fuzzy membership vector was shown to be intuitively pleasing and theoretically sound. It is interesting to note that while other techniques can be used to find the fuzzy membership vector in the non extended process, those techniques fail in the extended case. In particular, the row and column methods yield several additional directions of incomparability in the membership vector. Thus, the eigenvector technique is in some sense the 'correct way' to view the membership vector even in the non extended process. A final caveat must always be made when discussing decision processes based on incomplete information, and that is that this technique must always be used in conjunction with other decision techniques. No model should be used in isolation.

REFERENCES

1. T.L. Saaty, The Analytic Hierarchy Process, McGraw Hill, New York, (1980).
2. J.M. Hihn and C.R. Johnson, Evaluation techniques for paired ratio-comparison matrices in a hierarchical decision model, In Measurement in Economics, (Edited by W. Eichhorn), pp. 269-288, (1988).
3. E. Triantaphyllou and S.H. Mann, An evaluation of the eigenvalue approach for determining the membership values in fuzzy sets, Fuzzy Sets and Systems 35, 295-301 (1990).
4. J.M. Lambert, The fuzzy set membership problem using the hierarchy decision method, Fuzzy Sets and Systems (to appear).
5. B.T. Smith et al., Matrix Eigensystem Routines, EISPACK Guide, 2nd ed., Springer-Verlag, New York, (1976).
6. H. Minc, Nonnegative Matrices, John Wiley & Sons, New York, (1988).
7. O. Perron, Zur Theorie der Matrizen, Math. Ann. 64, 248-263 (1907).
8. G. Frobenius, Über Matrizen aus nicht negativen Elementen, pp. 456-477, S.-B. K. Preuss. Akad. Wiss., Berlin, (1912).
9. H. Wielandt, Unzerlegbare, nicht negative Matrizen, Math. Z. 52, 642-648 (1950).
10. G.W. Stewart, Introduction to Matrix Computations, Academic Press, New York, (1973).