On rank reversal in decision analysis

Mathematical and Computer Modelling 49 (2009) 1221–1229

Ying-Ming Wang a,b,∗, Ying Luo c

a School of Economics and Management, Tongji University, Shanghai 200092, PR China
b School of Public Administration, Fuzhou University, Fuzhou 350002, PR China
c School of Management, Xiamen University, Xiamen 361005, PR China

∗ Corresponding author at: School of Economics and Management, Tongji University, Shanghai 200092, PR China. Tel.: +86 591 87893307; fax: +86 591 22866677. E-mail address: [email protected] (Y.-M. Wang).

doi:10.1016/j.mcm.2008.06.019

Article history: Received 15 March 2008; received in revised form 17 June 2008; accepted 19 June 2008

Keywords: Analytic hierarchy process; Rank reversal; Decision analysis; Data envelopment analysis; Cross-efficiency evaluation

Abstract

The analytic hierarchy process (AHP) has been criticized for the rank reversal phenomenon that can be caused by the addition or deletion of an alternative. This paper shows that the rank reversal phenomenon occurs not only in the AHP but also in many other decision making approaches, such as the Borda–Kendall (BK) method for aggregating ordinal preferences, the simple additive weighting (SAW) method, the technique for order preference by similarity to ideal solution (TOPSIS), and the cross-efficiency evaluation method in data envelopment analysis (DEA). Numerical examples are provided to illustrate the rank reversal phenomenon in these popular decision making approaches.

1. Introduction

The analytic hierarchy process (AHP), as a very popular multiple criteria decision making (MCDM) approach, has been considerably criticized for its possible rank reversal phenomenon, which means that the relative rankings of two decision alternatives can be reversed when a decision alternative is added or deleted. Such a phenomenon was first noticed and pointed out by Belton and Gear [2] and aroused a long-lasting debate on the validity of the AHP and the legitimacy of rank reversal [1–4,10–13,15,16,20–34,36–40]. The purpose of this paper is not to contribute further to that debate, but to show that the rank reversal phenomenon occurs not only in the AHP but also in many other decision making approaches such as the Borda–Kendall (BK) method for aggregating ordinal preferences, the simple additive weighting (SAW) method, the technique for order preference by similarity to ideal solution (TOPSIS), and the cross-efficiency evaluation method in data envelopment analysis (DEA).

The rest of the paper is organized as follows. Section 2 recalls the rank reversal phenomenon in the AHP. Section 3 illustrates the rank reversal phenomenon in BK, SAW, TOPSIS and the cross-efficiency evaluation of DEA through numerical examples. Section 4 concludes the paper.

2. Rank reversal in the AHP

Belton and Gear [2] showed that rank reversal might occur in the AHP when an exact replica or a copy of an alternative is introduced. They considered an example with three consistent comparison matrices over four alternatives A, B, C and D with respect to three criteria a, b and c, where D was a copy of B and the three criteria were assumed to be of equal importance.



Table 1
Pairwise comparison matrices of alternatives A, B, C and D with respect to three criteria and their local weights [2]

Criterion     Alternative     A      B      C      Local weights
Criterion a   A               1      1/9    1      1/11
              B               9      1      9      9/11
              C               1      1/9    1      1/11
Criterion b   A               1      9      9      9/11
              B               1/9    1      1      1/11
              C               1/9    1      1      1/11
Criterion c   A               1      8/9    8      8/18
              B               9/8    1      9      9/18
              C               1/8    1/9    1      1/18

Criterion     Alternative     A      B      C      D      Local weights
Criterion a   A               1      1/9    1      1/9    1/20
              B               9      1      9      1      9/20
              C               1      1/9    1      1/9    1/20
              D               9      1      9      1      9/20
Criterion b   A               1      9      9      9      9/12
              B               1/9    1      1      1      1/12
              C               1/9    1      1      1      1/12
              D               1/9    1      1      1      1/12
Criterion c   A               1      8/9    8      8/9    8/27
              B               9/8    1      9      1      9/27
              C               1/8    1/9    1      1/9    1/27
              D               9/8    1      9      1      9/27

Table 2
Global weights of the four alternatives A, B, C and D and their ranks

Alternative   Local weights                                    Global weights   Rank
              Criterion a    Criterion b    Criterion c
              (1/3)          (1/3)          (1/3)
A             1/11           9/11           8/18               0.4512           2
B             9/11           1/11           9/18               0.4697           1
C             1/11           1/11           1/18               0.0791           3

A             1/20           9/12           8/27               0.3654           1
B             9/20           1/12           9/27               0.2889           2
C             1/20           1/12           1/27               0.0568           4
D             9/20           1/12           9/27               0.2889           2

Table 3
Decision matrix of four alternatives A1–A4 with respect to four decision criteria [10]

Alternative   Criterion 1   Criterion 2   Criterion 3   Criterion 4
A1            1             9             1             3
A2            9             1             9             1
A3            8             1             4             5
A4            4             1             8             5

They first considered alternatives A, B and C and derived a ranking for them; they then considered the four alternatives together and obtained a new ranking, only to find that the ranking between A and B was reversed by the addition of D. Tables 1 and 2 show the comparison matrices and the local and global weights of the four decision alternatives. As can be seen from Table 2, the ranking between A and B is B ≻ A before D is introduced but becomes A ≻ B after D is added, where the symbol "≻" means "is superior to". The ranking is reversed by the addition of alternative D.

Such a phenomenon is referred to as rank reversal. It may occur not only when a copy of an alternative is added, but also when a new alternative is added or an existing alternative is removed. Dyer [10] provided an example, shown in Tables 3 and 4, illustrating rank reversal between A1 and A3 when a new alternative A4 is added. Troutt [36] provided an example, shown in Table 5, demonstrating rank reversal between A1 and A2 when alternative A3 is removed. There may be other AHP examples that lead to rank reversals.
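Because the comparison matrices in the Belton–Gear example are consistent, each local weight vector is just a normalized column of its matrix, and the equal criteria weights make each global weight a simple average of local weights. A minimal numpy sketch of this arithmetic (ours, not part of the original paper) reproduces the figures of Table 2:

```python
# Back-of-envelope check of the Belton-Gear example (Tables 1 and 2).
import numpy as np

# local weights of A, B, C under criteria a, b, c (columns of Table 1, normalized)
w3 = np.array([[1/11, 9/11, 8/18],
               [9/11, 1/11, 9/18],
               [1/11, 1/11, 1/18]])
print(np.round(w3.mean(axis=1), 4))   # [0.4512, 0.4697, 0.0791]: B > A > C

# add D, a copy of B; every local weight vector is re-normalized (Table 1, lower half)
w4 = np.array([[1/20, 9/12, 8/27],
               [9/20, 1/12, 9/27],
               [1/20, 1/12, 1/27],
               [9/20, 1/12, 9/27]])
print(np.round(w4.mean(axis=1), 4))   # [0.3654, 0.2889, 0.0568, 0.2889]: A > B
```

The re-normalization caused by adding D is exactly what shifts the balance between A and B.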

3. Rank reversal in other decision making approaches

In this section we show, through numerical examples, that the rank reversal phenomenon also occurs in many other decision making approaches such as the BK method for aggregating multiple ordinal preferences, the SAW and TOPSIS methods for multiple attribute decision making (MADM), and the cross-efficiency evaluation method of DEA.

Table 4
AHP weights and the rankings of the four alternatives A1–A4

Alternative   Local weights                                              Global weights   Rank
              Criterion 1   Criterion 2   Criterion 3   Criterion 4
              (1/4)         (1/4)         (1/4)         (1/4)
A1            1/18          9/11          1/14          3/9              0.3196           3
A2            9/18          1/11          9/14          1/9              0.3362           2
A3            8/18          1/11          4/14          5/9              0.3442           1

A1            1/22          9/12          1/22          3/14             0.2638           1
A2            9/22          1/12          9/22          1/14             0.2432           4
A3            8/22          1/12          4/22          5/14             0.2465           2
A4            4/22          1/12          8/22          5/14             0.2465           2

Table 5
Decision matrix and AHP weights of three alternatives with respect to two decision criteria [36]

Alternative   Decision matrix             Local weights                  Global weights   Rank
              Criterion 1   Criterion 2   Criterion 1    Criterion 2
                                          (3/10)         (7/10)
A1            57            3             57/100         3/10            0.381            1
A2            33            4             33/100         4/10            0.379            2
A3            10            3             10/100         3/10            0.240            3

A1            57            3             57/90          3/7             0.490            2
A2            33            4             33/90          4/7             0.510            1

Table 6
Voting among three political parties by 60 voters

Political   Number of voters                      BK total   BK average   Rank
parties     23     17     2      10     8         score      score
A           1      3      2      2      3         122        2.03         2
B           2      1      1      3      2         111        1.85         1
C           3      2      3      1      1         127        2.12         3

A           1      2      2      1      2         87         1.45         1
B           2      1      1      2      1         93         1.55         2

A           1      2      1      2      2         95         1.58         2
C           2      1      2      1      1         85         1.42         1

1 = the best, 3 = the worst.

3.1. Rank reversal in the Borda–Kendall method

The aggregation of ordinal preferences has wide applications in group decision making, social choice, committee elections and voting systems, and a large amount of research has been conducted in this area. Borda was the first to examine the ordinal ranking problem of choosing a candidate from an election; he proposed a method of marks that ranks candidates according to the sum of the ranks assigned by the voters to each candidate. Kendall [19] was the first to study the problem in a statistical framework, approaching it as a problem of estimation: if there is agreement among observers (voters, decision makers, and so on) and their judgments are accurate, how should we estimate the true ranking? The solution is to rank candidates according to the sum of ranks, which is precisely equivalent to Borda's method of marks. The method is therefore frequently referred to as the Borda–Kendall (BK) method [7] and, owing to its computational simplicity, is probably the most widely used technique for determining a consensus ranking.

Suppose there are m voters or electoral committees who vote on n candidates. Each candidate will receive some votes at different ranking places. The BK method gives the first ranking place a mark (rank or score) of one, the second ranking place a mark of two, and, in general, the jth ranking place a mark of j. Let $v_{ij}$ be the number of votes that candidate i receives at the jth ranking place. The total score of candidate i can then be computed as

$$Z_i = \sum_{j=1}^{n} j\,v_{ij}, \quad i = 1, \ldots, n. \qquad (1)$$

Based upon the total scores, candidates can be ranked, the best candidate being the one with the least total score. This very popular approach, however, is subject to the rank reversal phenomenon when a candidate is added to or dropped from consideration. Table 6 shows a voting problem in which 60 voters vote on three political parties; the problem was investigated by Hwang and Lin [17] and González-Pachón and Romero [14]. When the three political parties are voted on together, candidate B receives the least total score and is therefore identified as the best candidate, followed by candidates A and C. Obviously, C is the most undesirable candidate and can be dropped from the voting system.
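As a quick illustration of Eq. (1) on the data of Table 6, the following sketch (the variable names and helper functions are ours, not from the paper) computes the BK total scores and then re-scores the ballots after a candidate is dropped:

```python
# Sketch of BK scoring for the voting problem of Table 6.
from typing import Dict, List, Tuple

def bk_scores(groups: List[Tuple[int, Dict[str, int]]]) -> Dict[str, int]:
    """Total BK score per candidate: Eq. (1), summed over voter groups."""
    totals: Dict[str, int] = {}
    for size, ranks in groups:
        for cand, rank in ranks.items():
            totals[cand] = totals.get(cand, 0) + size * rank
    return totals

# (group size, ranks given by that group), as in Table 6
groups = [(23, {'A': 1, 'B': 2, 'C': 3}),
          (17, {'A': 3, 'B': 1, 'C': 2}),
          (2,  {'A': 2, 'B': 1, 'C': 3}),
          (10, {'A': 2, 'B': 3, 'C': 1}),
          (8,  {'A': 3, 'B': 2, 'C': 1})]
print(bk_scores(groups))   # {'A': 122, 'B': 111, 'C': 127}: B wins

def drop(groups, out):
    """Remove one candidate and re-rank each group's remaining candidates 1, 2, ..."""
    new = []
    for size, ranks in groups:
        kept = {c: r for c, r in ranks.items() if c != out}
        order = sorted(kept, key=kept.get)
        new.append((size, {c: i + 1 for i, c in enumerate(order)}))
    return new

print(bk_scores(drop(groups, 'C')))   # {'A': 87, 'B': 93}: A now beats B
```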


Table 7
Decision matrix of five alternatives with respect to four attributes

Alternative   Attribute 1   Attribute 2   Attribute 3   Attribute 4
A1            36            42            43            70
A2            25            50            45            80
A3            28            45            50            75
A4            24            40            47            100
A5 (New)      30            30            45            80

After candidate C is dropped, the two remaining candidates can only be ranked in either the first or the second place; that is, they can only be given a mark (rank or score) of one or two by each of the 60 voters. Suppose the 60 voters' preferences over the two candidates remain unchanged. Then candidate A receives a total score of 87, whereas candidate B receives a total score of 93. B is thus no longer a better candidate than A: the ranking between A and B is reversed with respect to the original ranking B ≻ A. Similarly, if B is dropped from the voting system, the ranking between A and C is reversed.

3.2. Rank reversal in the SAW method

The simple additive weighting (SAW) method is one of the most popular and widely used MADM approaches [18]. Consider an MADM problem with n alternatives, $A_1, \ldots, A_n$, and m decision attributes (criteria), $C_1, \ldots, C_m$. Each alternative is evaluated with respect to the m attributes. The assessment values of the n alternatives with respect to the m decision attributes form a decision matrix $X = (x_{ij})_{n \times m}$, which is normalized by the following equations to eliminate the dimensional units of the attributes:

$$z_{ij} = \frac{x_{ij} - x_j^{\min}}{x_j^{\max} - x_j^{\min}}, \quad i = 1, \ldots, n;\ j \in \Omega_b, \qquad (2)$$

$$z_{ij} = \frac{x_j^{\max} - x_{ij}}{x_j^{\max} - x_j^{\min}}, \quad i = 1, \ldots, n;\ j \in \Omega_c, \qquad (3)$$

where $z_{ij}$ are the normalized attribute values, $x_j^{\min} = \min_{1 \le i \le n}\{x_{ij}\}$, $x_j^{\max} = \max_{1 \le i \le n}\{x_{ij}\}$, and $\Omega_b$ and $\Omega_c$ are, respectively, the sets of benefit and cost attributes. Benefit attributes are those to be maximized, whereas cost attributes are those to be minimized.

Let $Z = (z_{ij})_{n \times m}$ be the normalized decision matrix and $W = (w_1, \ldots, w_m)^T$ be a normalized attribute weight vector satisfying $\sum_{j=1}^{m} w_j = 1$ and $w_j > 0$ $(j = 1, \ldots, m)$. According to the SAW method [18], the overall assessment value of each alternative $A_i$ is computed by

$$d_i = \sum_{j=1}^{m} z_{ij} w_j, \quad i = 1, \ldots, n, \qquad (4)$$

where $d_i$ is a linear function of the weight vector $W$. The greater the value of $d_i$, the better the alternative $A_i$; the best alternative is the one with the biggest overall assessment value. For brevity, Eq. (4) can be rewritten in vector form as

$$D = ZW, \qquad (5)$$

where $D = (d_1, \ldots, d_n)^T$ is the vector of the overall assessment values of the n alternatives.

It is observed that the SAW method is also subject to the rank reversal phenomenon when an alternative is added or removed. Tables 7 and 8 show an MADM example in which four alternatives A1–A4 are assessed with respect to four benefit attributes whose relative weights are assumed to be $W = (1/6, 1/3, 1/3, 1/6)^T$. The original data in Table 7 are normalized using Eq. (2). According to the SAW method, A3 is the best among the four alternatives, followed by A2. If A4 is removed from the set of alternatives, A2 becomes the best alternative and the ranking between A2 and A3 is reversed. If a new alternative A5 is added to the original set of alternatives, the ranking between A2 and A4 is reversed.

3.3. Rank reversal in the TOPSIS method

TOPSIS is a technique for order preference by similarity to ideal solution [6,18]. The ideal solution (IS), also called the positive-ideal solution, maximizes the benefit attributes or criteria and minimizes the cost attributes or criteria, whereas the negative-ideal solution (NIS), also called the anti-ideal solution, maximizes the cost attributes and minimizes the benefit attributes. The best alternative is the one closest to the ideal solution and farthest from the negative-ideal solution. The TOPSIS method is composed of the following six steps:


Table 8
Normalized decision matrix and overall assessment values of the five alternatives

Alternative   Normalized decision matrix                               Overall assessment   Rank
              Attribute 1   Attribute 2   Attribute 3   Attribute 4   value di
              (1/6)         (1/3)         (1/3)         (1/6)
A1            1             0.2           0             0              0.2333               4
A2            0.0833        1             0.2857        0.3333         0.4980               2
A3            0.3333        0.5           1             0.1667         0.5833               1
A4            0             0             0.5714        1              0.3571               3

A1            1             0             0             0              0.1667               3
A2            0             1             0.2857        1              0.5952               1
A3            0.2727        0.375         1             0.5            0.5871               2

A1            1             0.6           0             0              0.3667               4
A2            0.0833        1             0.2857        0.3333         0.4980               3
A3            0.3333        0.75          1             0.1667         0.6667               1
A4            0             0.5           0.5714        1              0.5238               2
A5 (New)      0.5           0             0.2857        0.3333         0.2341               5

(1) Calculate the normalized decision matrix. The normalized attribute value $r_{ij}$ is calculated by

$$r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{k=1}^{n} x_{kj}^2}}, \quad i = 1, \ldots, n;\ j = 1, \ldots, m. \qquad (6)$$

(2) Calculate the weighted normalized decision matrix. The weighted normalized attribute value $v_{ij}$ is calculated by

$$v_{ij} = w_j r_{ij}, \quad i = 1, \ldots, n;\ j = 1, \ldots, m, \qquad (7)$$

where $w_j$ is the weight of the jth attribute or criterion and $\sum_{j=1}^{m} w_j = 1$.

(3) Determine the ideal and negative-ideal solutions:

$$A^* = \{v_1^*, \ldots, v_m^*\} = \left\{\left(\max_i v_{ij} \mid j \in \Omega_b\right), \left(\min_i v_{ij} \mid j \in \Omega_c\right)\right\}, \qquad (8)$$

$$A^- = \{v_1^-, \ldots, v_m^-\} = \left\{\left(\min_i v_{ij} \mid j \in \Omega_b\right), \left(\max_i v_{ij} \mid j \in \Omega_c\right)\right\}, \qquad (9)$$

where $\Omega_b$ and $\Omega_c$ are the sets of benefit and cost attributes, respectively.

(4) Calculate the Euclidean distances of each alternative from the ideal solution and the negative-ideal solution, respectively:

$$D_i^* = \sqrt{\sum_{j=1}^{m} \left(v_{ij} - v_j^*\right)^2}, \quad i = 1, \ldots, n, \qquad (10)$$

$$D_i^- = \sqrt{\sum_{j=1}^{m} \left(v_{ij} - v_j^-\right)^2}, \quad i = 1, \ldots, n. \qquad (11)$$

(5) Calculate the relative closeness to the ideal solution. The relative closeness of alternative $A_i$ with respect to $A^*$ is defined as

$$C_i = \frac{D_i^-}{D_i^* + D_i^-}, \quad i = 1, \ldots, n. \qquad (12)$$

(6) Rank the alternatives according to their relative closeness to the ideal solution. The bigger $C_i$, the better the alternative $A_i$; the best alternative is the one with the biggest relative closeness to the ideal solution.

The TOPSIS method is also found to suffer from rank reversal when an alternative is introduced or removed. Consider the four alternatives A1 to A4 in Table 7, with the weight vector of the four attributes again assumed to be $W = (1/6, 1/3, 1/3, 1/6)^T$. According to the TOPSIS method, A2 is the best among the four alternatives, followed by A3. This ranking differs slightly from that obtained by the SAW method; it is no surprise that different methods may produce slightly different rankings. Now we drop A4 from the alternative set: A3 turns out to be the best, followed by A2. The results are shown in Table 9. The ranking between A2 and A3 is clearly reversed when A4 is removed. If we instead add a new alternative A6 = {30, 43, 40, 85} to the original set of four alternatives, the rankings between A2 and A3 as well as between A1 and A4 are both reversed (see Table 9). This shows that the TOPSIS method suffers from the rank reversal phenomenon when an alternative is added or removed.
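The six steps condense into a few lines of numpy. The sketch below (ours; all four attributes treated as benefit attributes, as in the example) reproduces the relative closeness values of the first block of Table 9:

```python
# TOPSIS on alternatives A1-A4 of Table 7.
import numpy as np

X = np.array([[36, 42, 43, 70],
              [25, 50, 45, 80],
              [28, 45, 50, 75],
              [24, 40, 47, 100]], dtype=float)
w = np.array([1/6, 1/3, 1/3, 1/6])

R = X / np.sqrt((X ** 2).sum(axis=0))   # step 1: vector normalization, Eq. (6)
V = R * w                               # step 2: weighted matrix, Eq. (7)
ideal = V.max(axis=0)                   # step 3: IS (all benefit attributes)
anti = V.min(axis=0)                    #         NIS
d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))   # step 4: Eq. (10)
d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))    #         Eq. (11)
C = d_neg / (d_pos + d_neg)             # step 5: relative closeness, Eq. (12)
print(np.round(C, 4))   # [0.4184 0.4858 0.4634 0.3915]: A2 first, A3 second
# Re-running with the A4 row deleted reverses A2 and A3, as Table 9 shows:
# unlike SAW's min-max scaling, here the column norms in Eq. (6) change.
```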


Table 9
Weighted normalized decision matrix and relative closeness of five alternatives

Alternative   Weighted normalized decision matrix                      Relative       Rank
              Attribute 1   Attribute 2   Attribute 3   Attribute 4   closeness Ci
A1            0.1047        0.1576        0.1547        0.0711         0.4184         3
A2            0.0727        0.1876        0.1619        0.0813         0.4858         1
A3            0.0815        0.1689        0.1799        0.0762         0.4634         2
A4            0.0698        0.1501        0.1691        0.1016         0.3915         4
IS            0.1047        0.1876        0.1799        0.1016         –              –
NIS           0.0698        0.1501        0.1547        0.0711         –              –

A1            0.1154        0.1765        0.1795        0.0897         0.4319         3
A2            0.0801        0.2102        0.1879        0.1025         0.4742         2
A3            0.0897        0.1891        0.2088        0.0961         0.5007         1
IS            0.1154        0.2102        0.2088        0.1025         –              –
NIS           0.0801        0.1765        0.1795        0.0897         –              –

A1            0.0928        0.1419        0.1420        0.0631         0.4261         4
A2            0.0644        0.1689        0.1486        0.0722         0.5086         2
A3            0.0722        0.1520        0.1652        0.0676         0.5262         1
A4            0.0619        0.1351        0.1553        0.0902         0.4317         3
A6 (New)      0.0773        0.1453        0.1321        0.0767         0.3348         5
IS            0.0928        0.1689        0.1652        0.0902         –              –
NIS           0.0619        0.1351        0.1321        0.0631         –              –

3.4. Rank reversal in the cross-efficiency evaluation of DEA

Cross-efficiency evaluation [8,35] has long been recommended as an alternative methodology for ranking decision making units (DMUs) in the data envelopment analysis (DEA) developed by Charnes et al. [5]. Its basic principle is the use of self- and peer-evaluations for the performance assessment of DMUs, and it is believed that the cross-efficiency evaluation can produce a unique ordering of DMUs [9].

Consider n DMUs that are to be evaluated in terms of m inputs and s outputs. Let $x_{ij}$ $(i = 1, \ldots, m)$ and $y_{rj}$ $(r = 1, \ldots, s)$ be the input and output values of DMU$_j$ $(j = 1, \ldots, n)$. Then the efficiencies of the n DMUs can be defined as

$$\theta_j = \frac{\sum_{r=1}^{s} u_r y_{rj}}{\sum_{i=1}^{m} v_i x_{ij}}, \quad j = 1, \ldots, n, \qquad (13)$$

where $v_i$ $(i = 1, \ldots, m)$ and $u_r$ $(r = 1, \ldots, s)$ are input and output weights. For a specific DMU, say DMU$_k$, $k \in \{1, \ldots, n\}$, its efficiency relative to the other DMUs can be measured by the following CCR model [5]:

$$\begin{aligned}
\text{Maximize} \quad & \theta_{kk} = \sum_{r=1}^{s} u_{rk} y_{rk} & (14)\\
\text{Subject to} \quad & \sum_{i=1}^{m} v_{ik} x_{ik} = 1,\\
& \sum_{r=1}^{s} u_{rk} y_{rj} - \sum_{i=1}^{m} v_{ik} x_{ij} \le 0, \quad j = 1, \ldots, n,\\
& u_{rk} \ge 0, \quad r = 1, \ldots, s,\\
& v_{ik} \ge 0, \quad i = 1, \ldots, m,
\end{aligned}$$

where $v_{ik}$ $(i = 1, \ldots, m)$ and $u_{rk}$ $(r = 1, \ldots, s)$ are decision variables. Let $u_{rk}^*$ $(r = 1, \ldots, s)$ and $v_{ik}^*$ $(i = 1, \ldots, m)$ be an optimal solution to model (14). Then $\theta_{kk}^* = \sum_{r=1}^{s} u_{rk}^* y_{rk}$ is referred to as the CCR-efficiency or simple efficiency of DMU$_k$, the best relative efficiency that DMU$_k$ can achieve, and $\theta_{jk} = \sum_{r=1}^{s} u_{rk}^* y_{rj} / \sum_{i=1}^{m} v_{ik}^* x_{ij}$ as a cross-efficiency of DMU$_j$, which reflects the peer-evaluation of DMU$_k$ to DMU$_j$ $(j = 1, \ldots, n;\ j \ne k)$. If $\theta_{kk}^* = 1$, then DMU$_k$ is referred to as DEA efficient or CCR efficient; otherwise, it is referred to as non-DEA efficient or DEA inefficient. All DEA efficient units determine an efficient frontier.

CCR model (14) is solved for each DMU individually. As a result, there are n sets of input and output weights for the n DMUs, and each DMU has $(n-1)$ cross-efficiencies plus one CCR-efficiency, as shown in Table 10, where $\theta_{kk}$ $(k = 1, \ldots, n)$ are the CCR-efficiencies of the n DMUs, i.e. $\theta_{kk} = \theta_{kk}^*$.

Since CCR model (14) may have multiple optimal solutions, this non-uniqueness could potentially hamper the use of cross-efficiency. To resolve this problem, Sexton et al. [35] suggested introducing a secondary goal to avoid the arbitrariness of cross-efficiency.


Table 10
Cross-efficiency matrix of n DMUs

DMU    Target DMU                          Average cross-efficiency
       1      2      ...    n
1      θ11    θ12    ...    θ1n            (1/n) Σ_{k=1}^{n} θ1k
2      θ21    θ22    ...    θ2n            (1/n) Σ_{k=1}^{n} θ2k
...    ...    ...    ...    ...            ...
n      θn1    θn2    ...    θnn            (1/n) Σ_{k=1}^{n} θnk

Table 11
Data for seven departments in a university

Department   Outputs                  Inputs                     CCR-efficiency
(DMU)        y1     y2     y3         x1     x2      x3
1            60     35     17         12     400     20          1
2            139    41     40         19     750     70          1
3            225    68     75         42     1500    70          1
4            90     12     17         15     600     100         0.8197
5            253    145    130        45     2000    250         1
6            132    45     45         19     730     50          1
7            305    159    97         41     2350    600         1

One of the most commonly used secondary goals is the so-called aggressive formulation for cross-efficiency evaluation suggested by Doyle and Green [8], shown in (15), which aims to minimize the cross-efficiencies of the other DMUs in some way:

$$\begin{aligned}
\text{Minimize} \quad & \sum_{r=1}^{s} u_{rk} \left( \sum_{j=1, j \ne k}^{n} y_{rj} \right) & (15)\\
\text{Subject to} \quad & \sum_{i=1}^{m} v_{ik} \left( \sum_{j=1, j \ne k}^{n} x_{ij} \right) = 1,\\
& \sum_{r=1}^{s} u_{rk} y_{rk} - \theta_{kk}^* \sum_{i=1}^{m} v_{ik} x_{ik} = 0,\\
& \sum_{r=1}^{s} u_{rk} y_{rj} - \sum_{i=1}^{m} v_{ik} x_{ij} \le 0, \quad j = 1, \ldots, n;\ j \ne k,\\
& u_{rk} \ge 0, \quad r = 1, \ldots, s,\\
& v_{ik} \ge 0, \quad i = 1, \ldots, m,
\end{aligned}$$

where $\theta_{kk}^*$ is the CCR-efficiency of DMU$_k$.

It is found that the cross-efficiency evaluation also suffers from the rank reversal phenomenon when a non-DEA efficient unit is added or removed. Consider the example investigated by Wong and Beasley [41], in which seven departments (DMUs) in a university are evaluated in terms of three inputs and three outputs, defined as follows:

x1: Number of academic staff.
x2: Academic staff salaries in thousands of pounds.
x3: Support staff salaries in thousands of pounds.
y1: Number of undergraduate students.
y2: Number of postgraduate students.
y3: Number of research papers.

Table 11 shows the input and output data for the seven departments. It is seen from the CCR-efficiencies in the last column of Table 11 that DMU4 is the only department rated as non-DEA efficient; the other six departments determine an efficient frontier. Table 12 shows the aggressive cross-efficiencies of the seven departments, obtained by solving model (15) for each of them. DMU4 is once again evaluated as the least efficient department, so we have sufficient reason to believe that DMU4 is the worst among the seven departments.

We now remove DMU4 from the set of DMUs. This removal certainly has no impact on the CCR-efficiencies of the other six departments, because DMU4 is non-DEA efficient and not on the efficient frontier. The removal, however, turns out to have a significant impact on their cross-efficiencies. Table 13 shows the aggressive cross-efficiencies of the remaining six departments after the removal of DMU4.


Table 12
Aggressive cross-efficiencies of the seven departments

Department   Target DMU                                                          Average            Ranking
(DMU)        1        2        3        4        5        6        7             cross-efficiency
1            1.0000   0.8452   0.9333   0.6874   0.6449   0.7933   0.7521        0.8081             2
2            0.3347   1.0000   0.6178   1.0000   0.8237   0.7008   0.5564        0.7191             4
3            0.5551   0.8481   1.0000   0.7349   0.8129   1.0000   0.4175        0.7669             3
4            0.0686   0.7551   0.2800   0.8197   0.3672   0.2359   0.2063        0.3904             7
5            0.3314   0.6620   0.3148   0.7649   0.8237   0.6988   0.8309        0.6576             5
6            0.5143   1.0000   0.8213   0.9506   1.0000   1.0000   0.6107        0.8424             1
7            0.1514   0.6044   0.1581   1.0000   0.5252   0.2460   1.0000        0.5264             6

Table 13
Aggressive cross-efficiencies of the six departments without DMU4

Department   Target DMU                                                 Average            Ranking
(DMU)        1        2        3        5        6        7             cross-efficiency
1            1.0000   0.8452   0.9333   0.6449   0.9333   0.7521        0.8515             1
2            0.3347   1.0000   0.6178   0.8237   0.8426   0.5564        0.6959             4
3            0.5551   0.8481   1.0000   0.8129   1.0000   0.4175        0.7723             3
5            0.3314   0.6620   0.3148   1.0000   0.4778   0.8309        0.6028             5
6            0.5143   1.0000   0.8213   1.0000   1.0000   0.6107        0.8244             2
7            0.1514   0.6044   0.1581   0.5252   0.2783   1.0000        0.4529             6

Table 14
Aggressive cross-efficiencies of the seven departments without output 3

Department   Target DMU                                                          Average            Ranking
(DMU)        1        2        3        4        5        6        7             cross-efficiency
1            1.0000   0.8452   0.9333   0.6878   1.0000   0.9333   0.7521        0.8788             1
2            0.3347   1.0000   0.6178   1.0000   0.7017   0.8426   0.5564        0.7219             4
3            0.5551   0.8481   1.0000   0.7351   0.5551   1.0000   0.4175        0.7301             3
4            0.0686   0.7551   0.2800   0.8197   0.2417   0.4413   0.2063        0.4018             7
5            0.3314   0.6620   0.3148   0.7646   1.0000   0.4778   0.8309        0.6259             5
6            0.5143   1.0000   0.8213   0.9507   0.7915   1.0000   0.6107        0.8126             2
7            0.1514   0.6044   0.1581   0.9985   0.9854   0.2783   1.0000        0.5966             6

It is clearly seen from Table 13 that the ranking between DMU1 and DMU6 is reversed after DMU4 is deleted from the set of DMUs. Before the removal of DMU4, DMU6 is evaluated as the most efficient department; after the deletion, DMU1 appears as the most efficient department. Such a rank reversal raises the question of whether DEA efficient units should be peer-evaluated by non-DEA efficient units, particularly by the worst DMU. This is a very crucial problem for the cross-efficiency evaluation, and particularly for the use of cross-efficiency evaluation to identify the most efficient DMU. Normally, once a DMU has been identified as the worst, it can be removed from further decision analysis; the above numerical examination, however, reveals the rank reversal phenomenon when we do so in the cross-efficiency evaluation of DEA.

In a recent paper, Pérez et al. [21] pointed out that rank reversal in the AHP can also be caused by the addition of indifferent criteria. This is also true of the cross-efficiency evaluation. For convenience, we consider an input or output unimportant if it makes no contribution to the CCR-efficiency. When an unimportant input or output is added or removed, the cross-efficiency evaluation may also suffer from the rank reversal phenomenon. Consider the numerical example in Table 11. It is easy to verify that output 3 makes no contribution to the CCR-efficiencies of the seven departments and is therefore considered unimportant. An unimportant input or output can be removed from the set of input or output indices without any impact on the CCR-efficiencies. Such a removal, however, is found to have significant impacts on the average cross-efficiencies of the seven departments. Table 14 shows the aggressive cross-efficiencies of the seven departments after output 3 is removed from the set of output indices.

By comparing the rankings in Tables 12 and 14, we find that the ranking between DMU1 and DMU6 is reversed when output 3 is eliminated from the outputs. Before the elimination, DMU6 is rated as the most efficient department by the aggressive formulation for cross-efficiency evaluation; after the elimination of output 3, DMU1 is identified as the most efficient department. Such a rank reversal raises the question of whether an unimportant input or output should be involved in the cross-efficiency evaluation at all. From the viewpoint of simple (CCR) efficiency, an unimportant input and/or output should not be involved in the efficiency assessment, because it contributes nothing to the CCR-efficiency. When we do this in the cross-efficiency evaluation, however, a rank reversal phenomenon is observed.

4. Conclusions

In this paper, we have shown that the rank reversal phenomenon occurs not only in the AHP but also in many other decision making approaches such as the BK method for aggregating multiple ordinal preferences, the SAW and TOPSIS


methods for MADM, and the cross-efficiency evaluation method of DEA, when a candidate or alternative is added or removed. The added candidate or alternative discussed in this paper is not limited to a copy of an existing one. Rank reversal in the BK method is caused by the changes of ordinal values when a candidate or alternative is added or removed; rank reversals in the SAW and TOPSIS methods are caused by the changes of normalized attribute values; and rank reversal in the cross-efficiency evaluation method of DEA is caused by the changes of cross-efficiencies under some target DMU(s). Barzilai and Golany [1] have proved that no normalization can prevent rank reversal. Normalization, however, is often necessary in most MADM approaches so that different dimensional units can be eliminated. It is clear from this paper that the rank reversal phenomenon is not a problem confined to the AHP: it occurs in many decision making approaches and might be a normal phenomenon.

Acknowledgements

The authors would like to thank an anonymous reviewer for the comments and suggestions, which have helped to improve the paper. The work described in this paper is supported by the National Natural Science Foundation of China (NSFC) under Grant No. 70771027.

References

[1] J. Barzilai, B. Golany, AHP rank reversal, normalization and aggregation rules, INFOR 32 (2) (1994) 57–63.
[2] V. Belton, T. Gear, On a shortcoming of Saaty's method of analytic hierarchies, Omega 11 (3) (1983) 228–230.
[3] V. Belton, T. Gear, The legitimacy of rank reversal—A comment, Omega 13 (3) (1985) 143–144.
[4] V. Belton, T. Gear, On the meaning of relative importance, Journal of Multi-Criteria Decision Analysis 6 (1997) 335–338.
[5] A. Charnes, W.W. Cooper, E. Rhodes, Measuring the efficiency of decision making units, European Journal of Operational Research 2 (1978) 429–444.
[6] S.J. Chen, C.L. Hwang, Fuzzy Multiple Attribute Decision Making: Methods and Applications, Springer-Verlag, Berlin, 1992.
[7] W.D. Cook, M. Kress, L.M. Seiford, A general framework for distance-based consensus in ordinal ranking models, European Journal of Operational Research 96 (1997) 392–397.
[8] J. Doyle, R. Green, Efficiency and cross-efficiency in DEA: Derivations, meanings and uses, Journal of the Operational Research Society 45 (1994) 567–578.
[9] J.R. Doyle, R.H. Green, Cross-evaluation in DEA: Improving discrimination among DMUs, INFOR 33 (1995) 205–222.
[10] J.S. Dyer, Remarks on the analytic hierarchy process, Management Science 36 (1990) 249–258.
[11] J.S. Dyer, A clarification of "Remarks on the analytic hierarchy process", Management Science 36 (1990) 274–275.
[12] E.H. Forman, AHP is intended for more than expected value calculations, Decision Sciences 36 (1990) 671–673.
[13] E.H. Forman, Facts and fictions about the analytic hierarchy process, Mathematical and Computer Modelling 17 (4–5) (1993) 19–26.
[14] J. González-Pachón, C. Romero, Distance-based consensus methods: A goal programming approach, Omega 27 (1999) 341–347.
[15] P.T. Harker, L.G. Vargas, The theory of ratio scale estimation: Saaty's analytic hierarchy process, Management Science 33 (1987) 1383–1403.
[16] P.T. Harker, L.G. Vargas, Reply to "Remarks on the analytic hierarchy process" by J.S. Dyer, Management Science 36 (1990) 269–273.
[17] C.L. Hwang, M.J. Lin, Group Decision Making under Multiple Criteria, in: Lecture Notes in Economics and Mathematical Systems, vol. 281, Springer, Berlin, 1987.
[18] C.L. Hwang, K. Yoon, Multiple Attribute Decision Making: Methods and Applications, Springer-Verlag, Berlin, 1981.
[19] M. Kendall, Rank Correlation Methods, 3rd ed., Hafner, New York, 1962.
[20] J. Pérez, Some comments on Saaty's AHP, Management Science 41 (1995) 1091–1095.
[21] J. Pérez, J.L. Jimeno, E. Mokotoff, Another potential shortcoming of AHP, Top 14 (1) (2006) 99–111.
[22] T.L. Saaty, Axiomatic foundation of the analytic hierarchy process, Management Science 32 (1986) 841–855.
[23] T.L. Saaty, Rank generation, preservation, and reversal in the analytic hierarchy decision process, Decision Sciences 18 (1987) 157–177.
[24] T.L. Saaty, An exposition of the AHP in reply to the paper "Remarks on the analytic hierarchy process", Management Science 36 (1990) 259–268.
[25] T.L. Saaty, Highlights and critical points in the theory and application of the analytic hierarchy process, European Journal of Operational Research 74 (1994) 426–447.
[26] T.L. Saaty, Decision making, new information, ranking and structure, Mathematical Modelling 8 (1987) 125–132.
[27] T.L. Saaty, M. Takizawa, Dependence and independence: From linear hierarchies to nonlinear networks, European Journal of Operational Research 26 (1986) 229–237.
[28] T.L. Saaty, L.G. Vargas, Experiments on rank preservation and reversal in relative measurement, Mathematical and Computer Modelling 17 (4–5) (1993) 13–18.
[29] T.L. Saaty, L.G. Vargas, The legitimacy of rank reversal, Omega 12 (5) (1984) 513–516.
[30] T.L. Saaty, L.G. Vargas, R.E. Wendell, Assessing attribute weights by ratios, Omega 11 (1983) 9–13.
[31] B. Schoner, E.U. Choo, W.C. Wedley, A comment on "Rank disagreement: A comparison of multi-criteria methodologies", Journal of Multi-Criteria Decision Analysis 6 (1997) 197–200.
[32] B. Schoner, W.C. Wedley, Ambiguous criteria weights in AHP: Consequences and solutions, Decision Sciences 20 (1989) 462–475.
[33] B. Schoner, W.C. Wedley, E.U. Choo, A rejoinder to Forman on AHP, with emphasis on the requirements of composite ratio scales, Decision Sciences 23 (1992) 509–517.
[34] S. Schenkerman, Avoiding rank reversal in AHP decision-support models, European Journal of Operational Research 74 (1994) 407–419.
[35] T.R. Sexton, R.H. Silkman, A.J. Hogan, Data envelopment analysis: Critique and extensions, in: R.H. Silkman (Ed.), Measuring Efficiency: An Assessment of Data Envelopment Analysis, Jossey-Bass, San Francisco, CA, 1986.
[36] M.D. Troutt, Rank reversal and the dependence of priorities on the underlying MAV function, Omega 16 (1988) 365–367.
[37] L.G. Vargas, Reply to Schenkerman's avoiding rank reversal in AHP decision support models, European Journal of Operational Research 74 (1994) 420–425.
[38] L.G. Vargas, Why the multiplicative AHP is invalid: A practical example, Journal of Multi-Criteria Decision Analysis 6 (3) (1997) 169–170.
[39] S.R. Watson, A.N.S. Freeling, Assessing attribute weights, Omega 10 (1982) 582–583.
[40] S.R. Watson, A.N.S. Freeling, Comment on: Assessing attribute weights by ratios, Omega 11 (1983) 13.
[41] Y.H.B. Wong, J.E. Beasley, Restricting weight flexibility in data envelopment analysis, Journal of the Operational Research Society 41 (9) (1990) 829–835.