European Journal of Operational Research 36 (1988) 27-35, North-Holland

Theory and Methodology

Judgemental modelling based on geometric least square

G. Islei and A.G. Lockett

Manchester Business School, Booth Street West, Manchester M15 6PB, United Kingdom

Abstract: The Analytic Hierarchy Process developed by T.L. Saaty has received widespread attention in the literature. However, this attention has not been without some criticisms, including questions over the meaning of its consistency measure and its large data requirements. This paper addresses these problems and presents a new method of calculating preference vectors. To the user the procedure is essentially the same, but the data requirements can be made less onerous and, at the same time, feedback is provided which permits a greater understanding of the data inputs. Preliminary results indicate the acceptability of the proposed methodology.

Keywords: Decision theory, multiple criteria, optimization

1. Introduction

The use of human judgement in decision models is receiving increasing attention, and a variety of approaches have been developed which cover a wide range of techniques and possible applications (e.g. see Roy, 1981; Sawaragi et al., 1987; Spronk and Zionts, 1984; Von Winterfeldt and Edwards, 1986). One method which has a growing presence in the literature is the Analytic Hierarchy Process developed by Saaty (1980). It has been widely documented in a variety of problem areas, e.g. Lockett (1986), Saaty (1979), Wind (1980), Saaty (1985), and it is also now possible to purchase a number of Personal Computer packages based on the methodology. Nevertheless, this widespread attention has not been without some criticisms of the method. Gear (1983) and Watson (1982) have raised questions concerning the meaning of the consistency measure and of the ratio scale. Another problem that is mentioned (e.g. see Lockett, 1986) is the amount of data that is required, much of which may be redundant. However, most participants rate the Saaty approach highly (Schoemaker, 1982). Hence we have a method which is generally well received by the user, but which can be onerous in its demands. The authors have used the Saaty approach (along with other methods) over a number of years and are well aware of its shortcomings as well as its strengths. It is to the major problems of large data requirements (Coombs, 1964) and of measuring consistency that this paper is addressed. This paper demonstrates a new method of calculating preferences which is based on the well-tried method of minimising least square deviation.

2. Outline of method

Received April 1987; revised July 1987

0377-2217/88/$3.50 © 1988, Elsevier Science Publishers B.V. (North-Holland)

Assume n attributes A_1, ..., A_n are given, and

we have a judgement matrix of the form:

        A_1    A_2    ...   A_n
A_1    a_11   a_12    ...   a_1n
A_2    a_21   a_22    ...   a_2n
...
A_n    a_n1   a_n2    ...   a_nn

with the entry a_ik representing the degree of preference of attribute A_i over A_k, i.e. A_i = a_ik A_k, i, k = 1, ..., n.

A judgement matrix is consistent if a_ij · a_jk = a_ik for all i, j, k = 1, ..., n. Saaty's technique assumes that the judgement matrix is reciprocal (i.e. a_ik = 1/a_ki) and requires that all entries of the upper triangular half of the matrix be given,

i.e. n(n-1)/2 comparisons have to be made. The normalised principal eigenvector is taken as the preference vector and the principal eigenvalue is used to measure consistency. However, our experience would suggest that:
(i) the requirement of providing all entries in the upper triangular half of the matrix is very demanding on the decision maker;
(ii) Saaty's definition of a consistency index/ratio provides a crude measure with limited statistical properties;
(iii) it is arguable whether empirical evidence justifies the assumption that judgements are reciprocal in real-life situations.
In this section we will introduce a method which takes account of these problems. Let us consider the following examples of attribute comparisons:

(I)
        A1     A2     A3
A1       1      3     1/4
A2      1/3     1     1/9
A3       4      9      1

(II)
        A1     A2     A3
A1       1      3     1/4
A2      1/3     1     1/12
A3       4     12      1

Our observations suggest that ratio comparisons supplied by the decision maker do not merely represent value estimates (i.e. sample points) but estimates of a functional relationship between these attributes (that is, a marginal rate of substitution). Accordingly it is quite obvious, for example, that the entries of the upper triangular half of matrix (II) represent the following equations:

A1 - 3 A2 = 0,          (1)
A1 - (1/4) A3 = 0,      (2)
A2 - (1/12) A3 = 0,     (3)

and normalising requires:

A1 + A2 + A3 = 1.       (c)

We shall use a geometric interpretation of the above equations (1)-(3) and the normalising constraint (c) in order to introduce a version for calculating preference vectors which can effectively reduce the number of comparisons required by the eigenvector method and also has desirable statistical properties.

[Figure 1. Consistent case: the planes A1 - 3 A2 = 0, A1 - (1/4) A3 = 0 and A2 - (1/12) A3 = 0 meet the normalising plane through (1,0,0), (0,1,0) and (0,0,1) in a single (circled) point.]
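As a minimal illustration (a Python sketch assuming numpy; the helper name is ours, not the paper's), the consistency condition a_ij · a_jk = a_ik can be checked mechanically for the two matrices above:

```python
import numpy as np

def is_consistent(A, tol=1e-9):
    """True iff a_ij * a_jk == a_ik holds for every triple (i, j, k)."""
    n = A.shape[0]
    return all(abs(A[i, j] * A[j, k] - A[i, k]) < tol
               for i in range(n) for j in range(n) for k in range(n))

# Matrices (I) and (II) from the text (reciprocal entries below the diagonal).
M_I  = np.array([[1, 3, 1/4], [1/3, 1, 1/9],  [4, 9,  1]])
M_II = np.array([[1, 3, 1/4], [1/3, 1, 1/12], [4, 12, 1]])

print(is_consistent(M_II))  # True  (3 * 1/12 = 1/4)
print(is_consistent(M_I))   # False (3 * 1/9 != 1/4)
```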

[Figure 2. Inconsistent case: the planes A1 - 3 A2 = 0, A1 - (1/4) A3 = 0 and A2 - (1/9) A3 = 0 intersect the normalising plane through (1,0,0), (0,1,0) and (0,0,1) in three distinct lines, which bound the shaded triangle of candidate solutions.]


It is not too difficult to realise that these equations represent planes in a three-dimensional space with coordinates A1, A2, A3. For example the equation A1 - 3 A2 = 0 represents the plane (see Figure 1) which is spanned by the straight line A1 - 3 A2 = 0 of the A1A2-plane and the A3-axis. The normalisation requirement (c) is represented by the plane through the points (1,0,0), (0,1,0), (0,0,1). Therefore, if the comparisons of the judgement matrix are consistent (as is the case for matrix (II)), the solution is the unique point of intersection of all those planes (1)-(3) and (c) (the circled point in Figure 1). Hence our overdetermined system of linear equations has a unique solution. However, if the comparisons are not consistent, then this is not the case. For example, matrix (I) is not consistent, with equations:

A1 - 3 A2 = 0,          (1')
A1 - (1/4) A3 = 0,      (2')
A2 - (1/9) A3 = 0,      (3')
A1 + A2 + A3 = 1.       (c)

Therefore the solution in this case should lie inside the shaded triangle of Figure 2. We propose as optimal result the point of the triangle which has least square distance from the planes (1')-(3'). (Some further discussion can be found in Appendix 1.)

3. Theoretical detail

More generally we have the following mathematical problem: Given an overdetermined system of linear equations

x_11 A_1 + x_12 A_2 + ... + x_1n A_n = d_1,      (1)
x_21 A_1 + x_22 A_2 + ... + x_2n A_n = d_2,      (2)
...
x_m1 A_1 + x_m2 A_2 + ... + x_mn A_n = d_m,      (m)   (m >= n)

with the constraint

x_1 A_1 + x_2 A_2 + ... + x_n A_n = d,           (c)

find the solution which meets the constraint (c) exactly and fulfils equations (1)-(m) optimally in the sense of having least square distance to the planes represented by equations (1)-(m). Therefore the general objective function is of the form

F(A_1, A_2, ..., A_n, λ) = Σ_{i=1}^{m} [ (x_i1 A_1 + x_i2 A_2 + ... + x_in A_n - d_i) / sqrt(x_i1^2 + x_i2^2 + ... + x_in^2) ]^2 + λ (x_1 A_1 + ... + x_n A_n - d),

where:
(i) the expression in the first bracket is the Euclidean distance of a point (A_1, A_2, ..., A_n) to the plane x_i1 A_1 + x_i2 A_2 + ... + x_in A_n - d_i = 0;
(ii) λ is a Lagrange multiplier;
(iii) the constraint in our optimisation problem cannot be abandoned, since the overdetermined system of linear equations resulting from judgement matrices is homogeneous;
(iv) the number of equations m for our problem is equal to the number of comparisons, and the number of attributes is equal to n.

If we now take into account that for our problem d_i = 0, x_i = 1 and d = 1, then the objective function can be written as

F(A_1, A_2, ..., A_n, λ) = Σ_{i=1}^{m} (x_i1 A_1 + x_i2 A_2 + ... + x_in A_n)^2 / (x_i1^2 + x_i2^2 + ... + x_in^2) + λ (A_1 + A_2 + ... + A_n - 1).    (b)

The solution to our problem is the point (A_1, A_2, ..., A_n, λ) where this function (b) has its minimum. Therefore the following conditions have to be satisfied:

∂F/∂A_1 = ∂F/∂A_2 = ... = ∂F/∂A_n = ∂F/∂λ = 0,

i.e. we obtain for the objective function (b) the following set of equations:

∂F/∂A_1 = Σ_{i=1}^{m} 2 x_i1 (x_i1 A_1 + x_i2 A_2 + ... + x_in A_n) / (x_i1^2 + x_i2^2 + ... + x_in^2) + λ = 0,    (1)
∂F/∂A_2 = Σ_{i=1}^{m} 2 x_i2 (x_i1 A_1 + x_i2 A_2 + ... + x_in A_n) / (x_i1^2 + x_i2^2 + ... + x_in^2) + λ = 0,    (2)
...
∂F/∂A_n = Σ_{i=1}^{m} 2 x_in (x_i1 A_1 + x_i2 A_2 + ... + x_in A_n) / (x_i1^2 + x_i2^2 + ... + x_in^2) + λ = 0,    (n)
∂F/∂λ = A_1 + A_2 + ... + A_n - 1 = 0.    (n+1)
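The stationarity system (1)-(n+1) is a bordered linear system and can be assembled and solved directly. The sketch below (Python with numpy; the function name is ours, not the paper's) represents each comparison by its plane coefficient vector and, for the consistent matrix (II), recovers the unique intersection point:

```python
import numpy as np

def solve_plane_system(planes, n):
    """Minimise the sum of squared Euclidean point-plane distances subject
    to A_1 + ... + A_n = 1, via the conditions dF/dA_j = 0, dF/dlambda = 0."""
    M = np.zeros((n, n))
    for v in planes:
        v = np.asarray(v, dtype=float)
        M += 2.0 * np.outer(v, v) / np.dot(v, v)  # rows (1)-(n) of the system
    kkt = np.zeros((n + 1, n + 1))
    kkt[:n, :n] = M
    kkt[:n, n] = 1.0   # the +lambda term in each stationarity equation
    kkt[n, :n] = 1.0   # equation (n+1): A_1 + ... + A_n = 1
    rhs = np.zeros(n + 1)
    rhs[n] = 1.0
    return np.linalg.solve(kkt, rhs)[:n]

# Planes (1)-(3) of the consistent matrix (II); the solution is the exact
# intersection point A1 = 0.1875, A2 = 0.0625, A3 = 0.75.
planes_II = [(1, -3, 0), (1, 0, -1/4), (0, 1, -1/12)]
w = solve_plane_system(planes_II, 3)
```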

Solving this set of simultaneous equations gives the optimal solution. Therefore our overdetermined system of linear equations with constraint has been reduced to a system of linear equations with a unique solution. (The particular objective function ensures that we have a unique solution.) Also, our geometric interpretation implies that a solution (A_1, A_2, ..., A_n) has the property A_1 ≥ 0, A_2 ≥ 0, ..., A_n ≥ 0.

The version developed so far is of course not comparable to Saaty's method without further adjustment. In the eigenvector method the relative preference of any two attributes A_i, A_k is not only a function of their direct comparison a_ik but also of their relative scores with respect to all other attributes (indirect comparisons). Therefore, in order to make our results compatible, we would require that the solution should be optimal not only with respect to all direct but also all indirect comparisons. The indirect comparisons relevant for our calculations (for a full discussion see Islei (1986)) can in fact be deduced from the consistency condition, namely a_ik = a_ij / a_kj (j = 1, ..., n), giving n indirect comparisons between A_i and A_k (using their respective scores in row i and row k). If we calculate the preference vector by including all indirect comparisons (and hence all direct comparisons) in our least square distance approach, then the solution is the most consistent one with respect to all comparisons. Therefore our result has optimal consistency with respect to a given judgement matrix. (This technique will be referred to as the Geometric Least Square method

(G.L.S.).) If we apply G.L.S. to the example of matrix (I) we obviously have three indirect comparisons between any two attributes (therefore in this case the total number of attributes is 3 and the number of comparisons used is equal to 9). This leads to the following system of equations:

 6.29 A1 -  1.94 A2 - 1.54 A3 + λ = 0,
-1.94 A1 + 11.21 A2 - 0.60 A3 + λ = 0,
-1.54 A1 -  0.60 A2 + 0.50 A3 + λ = 0,
      A1 +       A2 +      A3     = 1,

which has the solution A1 = 20.1%, A2 = 7.4%, A3 = 72.4%.

Since we want to compare our results with the eigenvector method we have assumed that the reciprocal condition is valid (i.e. a_ki = 1/a_ik). This assumption is of course not necessary for our method and can be abandoned if empirical evidence requires this.

4. An illustrative example

To give an illustration of how the optimal result changes if additional comparisons are successively included, we shall use the following judgement matrix of attributes:

       A1     A2     A3     A4     A5
A1      1      7      1      3      1
A2     1/7     1     1/5    1/3    1/8
A3      1      5      1      3     1/2
A4     1/3     3     1/3     1      5
A5      1      8      2     1/5     1

If we use the comparisons of the top row: comp.1: a12 = 7; comp.2: a13 = 1; comp.3: a14 = 3; comp.4: a15 = 1, we obtain: A1 = 28.8%, A2 = 4.1%, A3 = 28.8%, A4 = 9.6%, A5 = 28.8%. If we now add successively the direct comparisons comp.5: a23 = 1/5; comp.6: a24 = 1/3; comp.7: a25 = 1/8; comp.8: a34 = 3; comp.9: a35 = 1/2; comp.10: a45 = 5, we obtain the results shown in Table 1. The succession of inputs is in general of course arbitrary, but was chosen here to conform with the usual order. Given these successive results, a measure of consistency can be defined which is based on a Standard Error approach, as shown in Appendix 2. Also, our method permits the analysis to stop at any stage if the decision maker considers the result satisfactory.

Table 1
Sequential results for comparisons 1-n, n = 5, ..., 10 (in %)

Attr.    1-5     1-6     1-7     1-8     1-9     1-10
A1       29.5    29.3    29.0    28.8    28.2    29.9
A2        4.7     4.4     4.3     4.3     4.4     7.2
A3       27.0    26.2    24.9    25.7    22.9    24.9
A4        9.7    11.1    11.1    10.4    10.1    15.5
A5       29.0    29.0    30.8    30.9    34.5    22.5
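The G.L.S. calculation with direct plus indirect comparisons can be sketched as follows (Python with numpy; function name ours, not the paper's). Applied to matrix (I), it reproduces, to rounding, the solution of 20.1%, 7.4%, 72.4% quoted earlier:

```python
import numpy as np

def gls_preferences(A):
    """For every pair (i, k), i < k, take the n indirect comparisons
    c = a_ij / a_kj (j = 1, ..., n), form the plane A_i - c*A_k = 0 for
    each, and minimise the sum of squared point-plane distances subject
    to the weights summing to 1 (Lagrange multiplier)."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for i in range(n):
        for k in range(i + 1, n):
            for j in range(n):
                c = A[i, j] / A[k, j]
                v = np.zeros(n)
                v[i], v[k] = 1.0, -c
                M += 2.0 * np.outer(v, v) / (1.0 + c * c)
    # Bordered system: stationarity rows plus the normalisation row.
    kkt = np.zeros((n + 1, n + 1))
    kkt[:n, :n] = M
    kkt[:n, n] = 1.0
    kkt[n, :n] = 1.0
    rhs = np.zeros(n + 1)
    rhs[n] = 1.0
    return np.linalg.solve(kkt, rhs)[:n]

M_I = np.array([[1, 3, 1/4], [1/3, 1, 1/9], [4, 9, 1]])
w = gls_preferences(M_I)   # approximately (0.201, 0.074, 0.724)
```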


A cursory look at these results indicates that individual attributes show little variation, but that by adding comparison 10: a45 = 5 (i.e. going from the results of comp. 1-9 to those of comp. 1-10) some distinct noise is generated, causing rank reversal for the three leading attributes. This can be made more precise by carrying out an error analysis. For comparisons 1-9 we derive the Standard Error of Attributes:

A1: E(A1) = 0.94 for A1 = 28.2%,
A2: E(A2) = 0.21 for A2 = 4.4%,
A3: E(A3) = 3.89 for A3 = 22.9%,
A4: E(A4) = 0.70 for A4 = 10.1%,
A5: E(A5) = 4.91 for A5 = 34.5%,

where the distances [1] to the final solution are: D(a12) = -0.37, D(a13) = 3.75, D(a14) = -0.63, D(a15) = -4.49, D(a23) = -0.17, D(a24) = 0.99, D(a25) = 0.08, D(a34) = -2.31, D(a35) = 5.01.

For comparisons 1-10 we have the Standard Error of Attributes:

A1: E(A1) = 1.06 for A1 = 29.9%,
A2: E(A2) = 2.86 for A2 = 7.2%,
A3: E(A3) = 2.09 for A3 = 24.9%,
A4: E(A4) = 5.22 for A4 = 15.5%,
A5: E(A5) = 8.22 for A5 = 22.5%,

where the distances to the final solution are: D(a12) = -2.92, D(a13) = 3.54, D(a14) = -5.26, D(a15) = -5.21, D(a23) = 2.20, D(a24) = 1.94, D(a25) = 4.37, D(a34) = -6.86, D(a35) = 12.18, D(a45) = -19.04.

From the results of comp. 1-9 it is clear that the least consistent comparison in this case is a35 = 1/2, which causes some noise with respect to attributes A3 and A5. However, if we take a look at the results of comp. 1-10, then the noise generated by the 'outstanding' comparison a45 = 5 is considerably more severe. Not only are the distances of all comparisons to the final preference vector much larger, but the standard error of individual attributes is now very significant. Obviously this form of error analysis is very sensitive to individual data inputs and effectively highlights possible mistakes. For the model user this is of tremendous advantage, since it enables him to assess his consistency at every stage, thus facilitating any adjustments if desired. [2]

[1] Note that this distance is not precisely the Euclidean distance but is enlarged by a factor of 100 and retains a sign (indicating whether the input comparison a_ik has to be increased (+) or decreased (-) in order to be more compatible with the calculated solution).

[2] A more detailed comparison of G.L.S. with the eigenvector method can be found in Islei (1986).

5. Results in practice

The proposed method produces solutions in a sequential manner and the user may decide not to input all the data. In the previous section a simple example was used to illustrate our approach. In order to see how this would look in practice, a real-life case has been recalculated (see Lockett and Hetherington, 1983). It was from a pharmaceutical company in which eight managers were involved, and the recomputed results for one of them are presented in Table 2. The results show how the preferences change with additional information, and how the final result compares with the original result of the eigenvector method. For the complete matrix our error analysis gives the following results:

E(A1) = 1.50 for A1 = 9.80%,
E(A2) = 2.62 for A2 = 33.90%,
E(A3) = 1.46 for A3 = 6.05%,
E(A4) = 5.93 for A4 = 26.57%,
E(A5) = 2.85 for A5 = 14.51%,
E(A6) = 1.42 for A6 = 5.53%,
E(A7) = 1.00 for A7 = 3.64%,

and the distances to the final solution are: D(a12) = 3.0, D(a13) = -2.6, D(a14) = 5.9, D(a15) = 4.7, D(a16) = -2.2, D(a17) = -1.6, D(a23) = -1.2, D(a24) = -14.5, D(a25) = -3.0, D(a26) = -1.8, D(a27) = 0.1, D(a34) = 2.2, D(a35) = 1.1, D(a36) = 0.4, D(a37) = -1.5, D(a45) = -9.0, D(a46) = -1.7, D(a47) = 0.2, D(a56) = -2.6, D(a57) = -0.7, D(a67) = -1.7. This clearly indicates the noise w.r.t. attribute A4, and that the direct comparison a24 stands out.
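The signed distance used in the error analysis above is a one-line computation; a sketch (Python with numpy; the helper name is ours), checked against the comparisons 1-9 figures:

```python
import numpy as np

def signed_distance(w, i, k, a_ik):
    """Distance (x100, signed as in footnote [1]) from preference vector w
    to the plane A_i - a_ik*A_k = 0; positive means a_ik should increase."""
    return 100.0 * (w[i] - a_ik * w[k]) / np.sqrt(1.0 + a_ik ** 2)

# Preference vector after comparisons 1-9 (Table 1), as fractions.
w = np.array([0.282, 0.044, 0.229, 0.101, 0.345])
d12 = signed_distance(w, 0, 1, 7.0)  # about -0.37
d35 = signed_distance(w, 2, 4, 0.5)  # about 5.05 (5.01 from the unrounded vector)
```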

Table 2
Sequential data input results (percentage data)

Judgement matrix:

       A1     A2     A3     A4     A5     A6     A7
A1      1     1/5     3     1/7    1/3     3      5
A2      5      1      7      3      3      9      9
A3     1/3    1/7     1     1/7    1/3     1      3
A4      7     1/3     7      1      5      7      7
A5      3     1/3     3     1/5     1      5      5
A6     1/3    1/9     1     1/7    1/5     1      3
A7     1/5    1/9    1/3    1/7    1/5    1/3     1

Comp.    1-6     1-7     1-8     1-9     1-10    1-11    1-12    1-13
A1       5.93    6.59    8.23    8.21    8.53    9.18    9.09    9.26
A2      29.64   28.00   35.85   38.29   37.74   36.13   34.77   35.23
A3       1.98    2.79    4.93    5.36    5.44    5.61    5.24    5.36
A4      41.50   41.26   22.70   22.81   22.52   22.25   23.75   24.39
A5      17.79   17.84   21.23   17.83   17.72   17.66   17.81   16.27
A6       1.98    2.18    4.10    4.33    4.77    4.92    5.00    5.08
A7       1.19    1.35    2.96    3.17    3.26    4.26    4.34    4.41

Comp.    1-14    1-15    1-16    1-17    1-18    1-19    1-20    1-21    E.M.
A1       9.33    9.44    9.34    9.37    9.62    9.61    9.73    9.80    7.81
A2      35.39   35.32   34.02   34.27   34.32   34.20   34.03   33.90   38.32
A3       5.18    5.54    5.77    5.71    5.94    5.88    5.98    6.05    4.15
A4      24.50   24.75   27.86   27.80   26.98   26.85   26.69   26.57   30.74
A5      16.29   16.48   13.75   13.62   13.50   14.25   14.52   14.51   12.88
A6       4.90    5.00    5.41    5.36    5.43    4.97    5.00    5.53    3.77
A7       4.40    3.47    3.85    3.87    4.21    4.23    4.05    3.64    2.33

Ranks (Rk): comp. 1-6: A4 > A2 > A5 > A1 > A3 = A6 > A7; comp. 1-7: A4 > A2 > A5 > A1 > A3 > A6 > A7; from comp. 1-8 onwards, and for the E.M. result: A2 > A4 > A5 > A1 > A3 > A6 > A7.
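For reference, the E.M. columns come from Saaty's principal-eigenvector calculation, which can be sketched by power iteration (Python with numpy; function name ours). On a consistent matrix such as (II) from Section 2 it returns the exact weights:

```python
import numpy as np

def eigenvector_weights(A, iters=100):
    """Normalised principal right eigenvector of a positive judgement
    matrix, computed by power iteration."""
    w = np.ones(A.shape[0])
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

M_II = np.array([[1, 3, 1/4], [1/3, 1, 1/12], [4, 12, 1]])
w = eigenvector_weights(M_II)   # (0.1875, 0.0625, 0.75)
```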

Table 3
Overall comparisons (percentage data)

Manager:       A               B               C               D
          G.L.S.   E.M.   G.L.S.   E.M.   G.L.S.   E.M.   G.L.S.   E.M.
A1        27.62   37.88   30.02   37.34   20.91   23.28   28.65   37.22
A2        20.36   23.10   26.84   31.91   16.60   13.21   22.43   25.28
A3         4.80    2.39   12.94   10.03    7.78    4.53    9.22    6.99
A4        11.55    8.33    6.44    3.95    8.00    4.56   16.60   16.32
A5        17.85   15.32   13.39   11.01   12.87   15.77    9.64    6.95
A6        13.43   10.90    4.69    2.53   23.61   30.11    5.78    2.87
A7         4.39    2.07    5.69    3.23   10.23    8.54    7.69    4.37

Manager:       E               F               G               H
          G.L.S.   E.M.   G.L.S.   E.M.   G.L.S.   E.M.   G.L.S.   E.M.
A1        24.92   25.55   22.25   23.40   21.17   21.14    9.80    7.81
A2        32.40   39.06   36.38   41.76   30.25   38.66   33.90   38.32
A3         9.09    8.67    6.28    4.41    3.95    2.18    6.05    4.15
A4        11.09    9.20    6.66    5.03    8.14    7.00   26.57   30.74
A5         4.73    2.86    4.18    2.75    6.34    4.09   14.51   12.88
A6         6.23    4.42   14.55   13.83   17.09   16.58    5.53    3.77
A7        11.52   10.24    9.69    8.84   13.04   10.35    3.64    2.33

G.L.S. = Geometric Least Square Method; E.M. = Eigenvector Method.


Comparison of the final results for all 8 participants is shown in Table 3. From Table 2 it is clear how the preference results change with the acquisition of data. At this stage of our understanding we do not know how much data would be required to satisfy the decision maker, but this small example shows how the results typically settle down after only a small amount of input. Presenting these sequential weightings and ranks to the decision maker is the basis for an ongoing research project. However, the method clearly has great potential for overcoming many of the criticisms already mentioned. Also, if we move to the final results comparison as shown in Table 3, certain points are readily apparent: (i) in general, low preferences are increased and high preferences decreased, and hence another of the criticisms is now countered [3] (Islei, 1986); (ii) the results tend to stabilise rapidly, and therefore the number of comparisons required can be greatly reduced; (iii) a consistency measure can be defined which permits a more detailed statistical interpretation.

Discussion

In this paper we have presented a method of calculating preference vectors which overcomes many of the criticisms of the eigenvector method. Although to the user the process is very similar and retains its user-friendly characteristics, it has certain crucial differences:
(1) It is easier to alter and manipulate.
(2) Far less data may be required.
(3) A much better measure of inaccuracy can be derived.
Although it has not yet been used extensively in practice, preliminary results suggest that it is an improvement on the previous method. More importantly, it also offers potential as a research tool in decision making, allowing investigation into the behaviour

[3] It is worth remembering that Saaty limits input ratios (i.e. direct comparisons) a_ik between any two attributes A_i, A_k to 1/9 ≤ a_ik ≤ 9. However, in most applications Saaty's technique produces results with preference ratios well outside this limit (our method hardly ever does).

of the decision maker towards his/her own inherent uncertainty. This area of research is vital if we are not to go on producing yet more models of literature interest only. Theory and practice must go hand in hand. The tremendous potential offered by developments in Information Technology will only be realised through a better understanding of real decision making and the roles that models can play.

Appendix 1

Saaty's method of pairwise comparisons has been criticised because it does not address itself to the marginal rate of substitution but is based on a somewhat loosely defined weighting procedure (Watson, 1982). However, it is important to realise that when people make pairwise (i.e. relative) comparisons of attributes, they tend to implicitly use a functional relationship between them rather than merely deliver value estimates, i.e. sample points (Cogger and Yu, 1983), for some averaging model. Therefore we feel justified in giving the entries of a judgement matrix a functional interpretation. This interpretation is also conceptually similar to the functional representation used in utility theory and takes account of the above criticism.

Appendix 2

(a) We found that the following Standard Error approach provides many of the prerequisites for a satisfactory consistency measure. Suppose we have k successive outputs of the preference vector for the attributes A_1, A_2, ..., A_n, and let A_t^(m) denote the m-th output for attribute A_t. We define the Standard Error of A_t after k outputs as

E(A_t) = [ (1/(k-1)) Σ_{m=1}^{k-1} (A_t^(m) - A_t^(k))^2 ]^(1/2).

Note that this is precisely the formula for standard deviation, except that we do not use the square deviation from the mean but from the last output A_t^(k), which in our procedure is the best available estimate. This Standard Error can now be used as


basis of a comprehensive measure of consistency, enabling us for example to identify attributes which are subject to large variations.

(b) If, given a final preference vector, we wish to identify precisely which direct comparisons are the least consistent ones, we simply calculate the distance of this preference vector to the planes representing the direct comparisons. (For a detailed discussion see Islei (1986).)
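The Standard Error defined above can be sketched in plain Python (helper name ours); applied to the successive outputs of A4 in the Section 4 example (comparisons 1-4 through 1-9), it reproduces the quoted value of about 0.70:

```python
import math

def standard_error(outputs):
    """Standard Error after k outputs: like a standard deviation, but the
    squared deviations are taken from the LAST output (the best estimate)."""
    k = len(outputs)
    last = outputs[-1]
    return math.sqrt(sum((x - last) ** 2 for x in outputs[:-1]) / (k - 1))

# Successive outputs of A4 (in %), taken from the Section 4 example.
e_a4 = standard_error([9.6, 9.7, 11.1, 11.1, 10.4, 10.1])  # about 0.71
```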

References

Cogger, K.O., and Yu, P.L. (1983), "Eigen weight vectors and least distance approximation for revealed preference in pairwise weight ratios", ORSA/TIMS Conference.
Coombs, C.H. (1964), A Theory of Data, Wiley, New York.
Gear, T., and Belton, V. (1983), "On a short-coming of Saaty's method of analytic hierarchies", Omega 11, 228-230.
Islei, G. (1986), "An approach to measuring consistency of preference vector derivations using least square distance", in: J. Jahn and W. Krabs (eds.), Recent Advances and Historical Development of Vector Optimization, Lecture Notes in Economics and Mathematical Systems 294, Springer, Berlin, 265-284.
Lockett, A.G., and Hetherington, B. (1983), "Subjective data and MCDM", in: P. Hansen (ed.), Essays and Surveys on Multiple Criteria Decision Making, Springer, Berlin, 247-259.
Lockett, A.G., Stratford, M., Cox, B., Hetherington, B., and Yallup, P. (1986), "Modelling a research portfolio using AHP: A group decision process", R&D Management, May.
Roy, B., and Vincke, P. (1981), "Multicriteria analysis: Survey and new directions", European Journal of Operational Research 8, 207-218.
Saaty, T.L., and Vargas, L.G. (1979), "A note on estimating technological coefficients by hierarchical measurement", Socio-Economic Planning Sciences 13, 333-336.
Saaty, T.L. (1980), The Analytic Hierarchy Process, McGraw-Hill, New York.
Saaty, T.L., and Kearns, K.P. (1985), Analytical Planning, Pergamon Press, Oxford.
Sawaragi, Y., Inoue, K., and Nakayama, H. (eds.) (1987), Towards Interactive and Intelligent Decision Support Systems, Lecture Notes in Economics and Mathematical Systems 286, Springer, Berlin.
Schoemaker, P.J.H., and Waid, C.C. (1982), "An experimental comparison of various approaches to determine weights in additive utility models", Management Science 28, 182-195.
Spronk, J., and Zionts, S. (eds.) (1984), Special Issue on Multiple Criteria Decision Making, Management Science 30, 1265-1387.
Von Winterfeldt, D., and Edwards, W. (1986), Decision Analysis and Behavioral Research, Cambridge University Press, Cambridge.
Watson, S.R., and Freeling, A.N.S. (1982), "Assessing attribute weights", Omega 10, 582-583.
Wind, Y., and Saaty, T.L. (1980), "Marketing applications of the Analytic Hierarchy Process", Management Science 26, 641-658.