Common benchmarking and ranking of units with DEA

José L. Ruiz, Inmaculada Sirvent
Centro de Investigación Operativa, Universidad Miguel Hernández, Avd. de la Universidad, s/n, 03202 Elche, Alicante, Spain

Omega (2016). Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/omega

Article history: Received 19 January 2015; Accepted 16 November 2015

Abstract

This paper develops a common framework for benchmarking and ranking units with DEA. In many DEA applications, decision making units (DMUs) experience similar circumstances, so benchmarking analyses in those situations should identify common best practices in their management plans. We propose a DEA-based approach for the benchmarking to be used when there is no need (nor wish) to allow for individual circumstances of the DMUs. This approach identifies a common best practice frontier as the facet of the DEA efficient frontier spanned by the technically efficient DMUs in a common reference group. The common reference group is selected as that which provides the closest targets. A model is developed which allows us to deal not only with the setting of targets but also with the measurement of efficiency, because we can define efficiency scores of the DMUs by using the common set of weights (CSW) it provides. Since these weights are common to all the DMUs, the resulting efficiency scores can be used to derive a ranking of units. We discuss the existence of alternative optimal solutions for the CSW and find the range of possible rankings for each DMU which would result from considering all these alternate optima. These ranking ranges allow us to gain insight into the robustness of the rankings. © 2015 Elsevier Ltd. All rights reserved.

Keywords: Benchmarking; Target setting; Efficiency measurement; Ranking; DEA

1. Introduction

Data Envelopment Analysis (DEA) evaluates the performance of decision making units (DMUs) involved in production processes. In DEA, the DMUs are assumed to be comparable (see Dyson et al. [20] for a discussion of homogeneity assumptions about the units under assessment), albeit they may have their own unique circumstances. Specifically, DEA may allow for the individual circumstances of the DMUs through DMU-specific input and output weights. It is usually argued that, aside from the factors affecting performance considered in the efficiency analysis, there are often considerable variations in goals, policies, etc., among DMUs, which may justify different weights for the same factor. The variation in weights in DEA may thus be justified by the different circumstances under which the DMUs operate, and which are not captured by the chosen set of input and output factors (see Roll et al. [34] for discussions). However, in many DEA applications the DMUs experience similar circumstances, so treating them as independent entities may not be appropriate. This means that using input and output weights that differ substantially across DMUs may not be warranted. In those situations, both inputs and outputs should be aggregated by using weights that are common to all the DMUs when calculating the classical efficiency ratios. As stated in Roll et al. [34], a common set of weights (CSW) "is the usual approach in all engineering, and most economic, efficiency analyses. In these cases it is assumed that all important factors affecting performance are included in the measurement system, and there is no need (nor wish) to allow for additional, individual, circumstances." In addition to the appeal of a fair and impartial evaluation, in the sense that each variable is attached the same weight in the assessments of all the DMUs, it should be noted that, unlike DEA, a CSW allows us to rank the DMUs. The fact that DEA uses different profiles of weights in the assessments of the different DMUs makes it impossible to derive an ordering of the units based on the resulting efficiency scores (see Cooper and Tone [15], Sinuany-Stern and Friedman [40], Kao and Hung [23] and Ramón et al. [30] for discussions). Moreover, poor discrimination is often found in the assessment of performance with DEA models, since many of the DMUs are classified as efficient or are rated near the maximum efficiency score. This can also be alleviated with a CSW. See Adler et al. [1], which provides a survey of ranking methods in the context of DEA. See also Angulo-Meza and Estellita Lins [4] and Podinovski and Thanassoulis [28], which review the problem of improving discrimination in DEA. The choice of a CSW may often raise serious difficulties (see Doyle and Green [19] for a discussion). In particular, DEA has been used to find a CSW, specifically as the coefficients of a supporting hyperplane of the DEA technology at some efficient DMUs. In

☆ This manuscript was processed by Associate Editor Prof. B. Lev.
* Corresponding author. Tel./fax: +34 966658714. E-mail address: [email protected] (J.L. Ruiz).

http://dx.doi.org/10.1016/j.omega.2015.11.007
0305-0483/© 2015 Elsevier Ltd. All rights reserved.

Please cite this article as: Ruiz JL, Sirvent I. Common benchmarking and ranking of units with DEA. Omega (2016), http://dx.doi.org/10.1016/j.omega.2015.11.007



many cases, the choice of such a hyperplane is made by minimizing the differences between the DEA efficiency scores and those that would result from the associated CSW. See Despotis [17], Kao and Hung [23], Liu and Peng [25,26]¹ and Cook and Zhu [14] (in the latter paper the objective is relaxed to groups of DMUs which operate in similar circumstances). Ganley and Cubbin [22] and Troutt [46] also find CSWs by somehow maximizing the resulting efficiency scores: maximizing the sum of the efficiency ratios of all the DMUs, in the former case, or maximizing the minimum efficiency ratio, in the latter.² Although these DEA-based approaches for deriving a CSW use a reference production technology, they are only concerned with the measurement of efficiency and the subsequent ranking of DMUs. However, in this context, it may be argued that the DMUs at which the hyperplane (associated with the CSW) supports the technology can also be used for purposes of benchmarking and target setting, in a similar manner as they are somehow used as referents when efficiency scores based on that CSW are calculated. Specifically, the facet of the DEA efficient frontier that those DMUs span could be seen as a common best practice frontier for the benchmarking. In contrast to the procedures for the choice of a CSW mentioned above, this paper is primarily focused on the development of a common framework for benchmarking. Nevertheless, the DEA-based approach we propose also provides a CSW as a by-product, which can be used for the measurement of efficiency and ranking. In DEA, the efficient DMUs form a piecewise linear frontier that can be seen as a best practice frontier in the context of benchmarking (see [13]). See Thanassoulis et al. [43] for a discussion of the issue of benchmarking in DEA; and also Cook et al. [12], Adler et al. [2] and Dai and Kuosmanen [16] for some references on DEA and benchmarking which include applications.
Here, it is assumed that we deal with a situation in which there is no need (nor wish) to allow for the individual circumstances of the DMUs. Therefore, as discussed above, the DMUs should be evaluated with input and output weights that are common to all of them. Likewise, it is also reasonable to assume that they should have common benchmarks and establish common best practices. From a methodological point of view, if a DEA-based approach is used for the benchmarking, assuming common weights means that only a facet of the DEA efficient frontier should be considered as the best practice frontier. The common best practice frontier will therefore be the facet of the DEA efficient frontier spanned by a set of technically efficient DMUs, which can be seen as a common reference group. Targets will result from projections of the DMUs onto this common best practice frontier. We select the common reference group as the one which provides the closest targets. Minimizing the gap between actual inputs/outputs and targets ensures the identification of best practices that are globally the most similar to the actual performances of the DMUs being evaluated. Thus, they may show the DMUs the easiest way to improve. The fact that the DMUs are all projected onto the same facet of the efficient frontier means that the setting of targets could involve the deterioration of some observed input/output level (provided that some of the others are improved). That is, efficient targets might suggest that improvements can be accomplished by reallocations between inputs and/or outputs. In other words, in the common benchmarking approach dominance does not prevail. As a consequence, although the approach proposed is developed in a DEA context, we depart here from the notion of technical

¹ A model similar to the ones proposed in these two papers can be found in Liu et al. [24]. However, that model provides CSWs that are not necessarily DEA weights and may lead to efficiency scores larger than 1.
² See also Roll and Golany [35] and Ramón et al. [31] for other papers dealing with DEA and CSWs.

efficiency used in standard DEA. This situation is similar to that of the DEA models with weight restrictions. Thanassoulis et al. [42] state that in those cases we actually move from technical efficiency to a kind of overall efficiency. Note also that the scores provided by the CSWs obtained by using the other existing DEA-based methods mentioned above do not measure technical efficiency either, for the same reasons. Technically, we follow a primal-dual approach based on a model that includes constraints of both the envelopment and the multiplier formulations of the DEA models. Thus, its optimal solutions allow us to deal not only with benchmarking and target setting but also with the measurement of efficiency. In particular, the optimal weights, which are the coefficients of the supporting hyperplane of the technology that contains the common best practice frontier, can be used to define efficiency scores for all the DMUs. As said before, since these weights are common to all the DMUs, the resulting efficiency scores allow us to derive a full ranking of units. This approach is in line with that used in Aparicio et al. [7], which is also concerned with minimizing the distance to the Pareto-efficient frontier of the production possibility set (PPS).³ However, that paper provides a self-evaluation of units (regarding technical efficiency), where each DMU can choose its own input and output weights, as opposed to the evaluation of the DMUs made here within a common benchmarking framework. As a result, Aparicio et al. [7] does not address the problem of ranking DMUs (in fact, it does not deal with the measurement of efficiency). Finally, we note that, if the identified common reference group spans a facet of the DEA efficient frontier which is not of full dimension, we will have alternative optimal solutions for the CSW.
To deal with this issue, we propose an approach that considers all these optimal solutions for the CSW, thus avoiding the need to introduce an additional criterion for the choice of weights among alternate optima. Specifically, we develop a couple of models that yield for each DMU a range of its possible rankings. These ranking ranges can help us gain insight into the robustness of the rankings. The paper unfolds as follows: in Section 2, we develop a model that allows us to set the closest targets on a common facet of the DEA efficient frontier. The optimal solutions for the weights of this model yield CSWs that can be used to define efficiency scores and rank the DMUs. These issues are addressed in Section 3, where we also discuss the existence of alternate optima for the CSW and provide ranges of possible rankings for each unit. Section 4 includes an empirical illustration. Section 5 concludes.

2. Closest targets on a common facet of the Pareto-efficient frontier

Throughout the paper, we consider that we have n DMUs which use m inputs to produce s outputs. These are denoted by $(X_j, Y_j)$, $j = 1, \ldots, n$. It is assumed that $X_j = (x_{1j}, \ldots, x_{mj})' \geq 0$, $X_j \neq 0$, and $Y_j = (y_{1j}, \ldots, y_{sj})' \geq 0$, $Y_j \neq 0$, $j = 1, \ldots, n$. We also assume a DEA constant returns to scale technology [9] for the measurement of relative efficiency and benchmarking. Thus, the production possibility set (PPS), $T = \{(X, Y) \mid X \text{ can produce } Y\}$, can be characterized as follows:

$$T = \Big\{ (X, Y) \,\Big|\, X \geq \sum_{j=1}^{n} \lambda_j X_j,\; Y \leq \sum_{j=1}^{n} \lambda_j Y_j,\; \lambda_j \geq 0 \Big\}.$$

The following model simultaneously provides for every DMU the closest targets on a (common) facet of the Pareto-efficient frontier of T by minimizing globally a weighted L1-distance to their actual inputs and outputs:

$$\begin{array}{llr}
\min & \displaystyle\sum_{j=1}^{n} \big\| (X_j, Y_j) - (\hat{X}_j, \hat{Y}_j) \big\|_1^{\omega} & \\
\text{s.t.} & \displaystyle\sum_{k \in E} \lambda_{kj} X_k = \hat{X}_j, \quad j = 1, \ldots, n & (1.1)\\
& \displaystyle\sum_{k \in E} \lambda_{kj} Y_k = \hat{Y}_j, \quad j = 1, \ldots, n & (1.2)\\
& -v' X_k + u' Y_k + d_k = 0, \quad k \in E & (1.3)\\
& \bar{X} v \geq 1_m & (1.4)\\
& \bar{Y} u \geq 1_s & (1.5)\\
& d_k \leq M b_k, \quad k \in E & (1.6)\\
& \displaystyle\sum_{j=1}^{n} \lambda_{kj} \leq M (1 - b_k), \quad k \in E & (1.7)\\
& d_k \geq 0,\; b_k \in \{0, 1\},\; \lambda_{kj} \geq 0,\; \hat{X}_j \geq 0_m,\; \hat{Y}_j \geq 0_s \quad \forall k, j &
\end{array} \qquad (1)$$

where

i) $\big\| (X_j, Y_j) - (\hat{X}_j, \hat{Y}_j) \big\|_1^{\omega} = \sum_{i=1}^{m} \dfrac{|x_{ij} - \hat{x}_{ij}|}{\bar{x}_i} + \sum_{r=1}^{s} \dfrac{|y_{rj} - \hat{y}_{rj}|}{\bar{y}_r}$, $j = 1, \ldots, n$, with $\bar{x}_i$, $i = 1, \ldots, m$, and $\bar{y}_r$, $r = 1, \ldots, s$, being the averages across DMUs of the corresponding inputs and outputs. This specification of the weighted L1-distance has already been used in the DEA literature (see [44]). Note that we use the same weighted L1-norm in the distances between the n DMUs and their projections. Moreover, the use of a weighted norm makes (1) a problem invariant to the units of measurement of inputs and outputs. Alternatively, we can normalize actual inputs and outputs by the corresponding averages.

ii) $\bar{X}$ and $\bar{Y}$ are the $m \times m$ and $s \times s$ diagonal matrices whose entries are the averages of the inputs and the averages of the outputs, respectively. (In (1.4) and (1.5) we use the general notation $1_n' = (1, \ldots, 1)_{1 \times n}$.)

iii) M is a big positive quantity, and

iv) E is the set of extreme efficient DMUs of T ([10]).⁴

The feasible set associated with the constraints of (1) allows us to consider benchmarks $(\hat{X}_j, \hat{Y}_j)$, $j = 1, \ldots, n$, for every DMU_j, which are all on a same facet of the Pareto-efficient frontier of T. (1.1) and (1.2) guarantee that $(\hat{X}_j, \hat{Y}_j)$, $j = 1, \ldots, n$, belong to T. With (1.3)–(1.5) we allow for all of the vectors of non-zero weights, $(v, u)$, which are the coefficients of a supporting hyperplane of T. (1.4) and (1.5) are actually the constraints $v_i \bar{x}_i \geq 1$, $i = 1, \ldots, m$, and $u_r \bar{y}_r \geq 1$, $r = 1, \ldots, s$, which would be those of the dual formulation of the invariant additive model when the slacks in the objective are weighted with $1/\bar{x}_i$, $i = 1, \ldots, m$, and $1/\bar{y}_r$, $r = 1, \ldots, s$, as is done here. These constraints secure non-zero weights. (1.6) and (1.7) are the key restrictions that link the two previous groups of constraints in order to ensure benchmarks on the Pareto-efficient frontier of T. If $\sum_{j=1}^{n} \lambda_{kj} > 0$, then (1.7) implies $b_k = 0$ and, consequently, $d_k = 0$ by virtue of (1.6). Thus, if DMU_k in E participates actively as a referent in the evaluation of some DMU_j, $j = 1, \ldots, n$, then it necessarily belongs to the hyperplane $-v' X + u' Y = 0$. Therefore, the feasible benchmarks considered in model (1), $(\hat{X}_j, \hat{Y}_j)$, $j = 1, \ldots, n$, are combinations of DMU_k's in E which are all on a same facet of the Pareto-efficient frontier, because these DMU_k's all belong to a common supporting hyperplane of T, $-v' X + u' Y = 0$, whose coefficients are all non-zero.

Solving (1) thus allows us to identify a common reference group of DMUs, $RG = \{ \text{DMU}_g \mid \lambda_{gj}^* > 0 \text{ for some } j = 1, \ldots, n \}$, which includes potential benchmarks for the rest of the units. The targets provided by (1) for every DMU_ℓ not in RG are actually the coordinates of the projection of this unit onto the facet of the DEA efficient frontier that the DMUs in RG span. Specifically, these targets can be found by using the optimal solutions of (1) as follows:

$$\sum_{g \in RG} \lambda_{g\ell}^* X_g = \hat{X}_\ell, \qquad \sum_{g \in RG} \lambda_{g\ell}^* Y_g = \hat{Y}_\ell. \qquad (2)$$

³ See also Portela et al. [29], Tone [45], Fukuyama et al. [21], Aparicio and Pastor [6] and Ruiz et al. [37] for other papers which deal with closest targets in DEA.
⁴ Following Charnes et al. [10], the DMUs in E and E′ are Pareto-efficient. E consists of the extreme efficient units, whereas those in E′ are Pareto-efficient units that can be expressed as a combination of DMUs in E. F is the set of weakly efficient units. NE, NE′ and NF are sets of inefficient DMUs, which are projected onto points on the frontier that are in E, E′ and F, respectively.
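Given optimal intensities λ* from (1), the targets in (2) and the weighted L1-deviations in the objective are simple array operations. A minimal numpy sketch with hypothetical data (the λ* matrix and the reference group are made up, not taken from the paper):

```python
import numpy as np

# Hypothetical data: 2 DMUs in the reference group RG, 3 DMUs overall,
# m = 2 inputs and s = 1 output. lam[g, l] are optimal intensities from (1).
lam = np.array([[0.5, 1.0, 0.0],
                [0.5, 0.0, 1.5]])          # shape (|RG|, n)
X_RG = np.array([[2.0, 1.0], [4.0, 3.0]])  # inputs of RG members, (|RG|, m)
Y_RG = np.array([[3.0], [5.0]])            # outputs of RG members, (|RG|, s)
X = np.array([[3.0, 2.0], [2.0, 2.0], [7.0, 8.0]])  # actual inputs, (n, m)
Y = np.array([[4.0], [3.0], [6.0]])                  # actual outputs, (n, s)

# Targets of (2): X_hat[l] = sum over g in RG of lam[g, l] * X_g; same for Y.
X_hat = lam.T @ X_RG
Y_hat = lam.T @ Y_RG

# Weighted L1-deviation of each DMU from its target, deflated by the
# input/output averages, as in the norm used in the objective of (1).
x_bar, y_bar = X.mean(axis=0), Y.mean(axis=0)
dist = (np.abs(X - X_hat) / x_bar).sum(axis=1) + \
       (np.abs(Y - Y_hat) / y_bar).sum(axis=1)
```

Note that, as in the text, the deviations `X - X_hat` may be negative: a DMU projected onto the common facet can be asked to worsen one variable while improving others.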

Note that these targets are globally the closest to the actual inputs and outputs of all the DMUs, in the sense that the objective function of (1) minimizes the sum of weighted L1-distances between the DMUs and their projections onto the efficient frontier. Therefore, model (1) identifies benchmark performances that are most similar to the actual performances of the DMUs, that is, those that show them the way to improvement with the least effort globally. It is also worth highlighting that, in order to ensure the feasibility of model (1), benchmarks are not required here to dominate the corresponding DMUs, because we force all of them to be projected onto the same facet of the frontier. This is why this model is not formulated in terms of non-negative slacks but uses the absolute values of the deviations between actual inputs and outputs and targets, which can be either positive or negative quantities (or zero). As a result, in this paper we are not concerned with technical efficiency. The idea behind this approach is the following: model (1) identifies a common best practice frontier as that which provides the benchmarks most similar to actual performances. This common best practice frontier is used for setting targets, which could imply the deterioration of some observed input and/or output level. That is, the establishment of common best practices may lead some DMUs to have to change their mixes as well as the volumes of their activities. Note that with this DEA-based approach, the identification of best practices implicitly entails the choice of a CSW, which determines a relative value system for the inputs and outputs considered. Thus, according to the relative worth of inputs and outputs determined by this CSW, we may have that, by worsening the level of one variable, some other variable can improve so as to more than compensate for the loss of value due to the worse level of the former variable.

Remark 1. The linearization of the objective of model (1)

Model (1) is a non-linear problem as a consequence of the use of absolute values of deviations in the objective. However, (1) can be reformulated without the absolute values, as explained next. We introduce the new decision variables $X_j^+, X_j^- \geq 0_m$ and $Y_j^+, Y_j^- \geq 0_s$, $j = 1, \ldots, n$, and add to the set of constraints of (1) the restrictions $X_j - \hat{X}_j = X_j^+ - X_j^-$ and $Y_j - \hat{Y}_j = Y_j^+ - Y_j^-$, $j = 1, \ldots, n$. Then, minimizing the non-linear objective in (1) is equivalent to minimizing the linear objective function

$$\sum_{j=1}^{n} \Big[ (X_j^+ + X_j^-)' \bar{X}^{-1} 1_m + (Y_j^+ + Y_j^-)' \bar{Y}^{-1} 1_s \Big]$$

subject to the resulting set of constraints. Thus, (1) becomes a mixed-integer linear programming model.

Remark 2. Solving (1) in practice

The formulation of model (1) seeks that the DMU_k's in E that participate actively as referents in the evaluation of some DMU_j, $j = 1, \ldots, n$, necessarily belong to a hyperplane that contains the




common best practice frontier. This is actually achieved by means of the constraints (1.6) and (1.7), which include the classical big M and binary variables. Nevertheless, (1) can be solved in practice by reformulating these constraints using Special Ordered Sets (SOS) [8]. An SOS Type 1 is a set of variables of which at most one may be non-zero. Therefore, if we remove (1.6) and (1.7) from the formulation and instead define an SOS Type 1 set, $S_k$, for each pair of variables $(\lambda_k, d_k)$, $k \in E$, where $\lambda_k = \sum_{j=1}^{n} \lambda_{kj}$, then it is ensured that $\sum_{j=1}^{n} \lambda_{kj}$ and $d_k$ cannot be simultaneously positive for the DMU_k's, $k \in E$. CPLEX Optimizer (and also LINGO) can solve LP problems with SOS. Taking into account Remarks 1 and 2, we need to solve the following formulation in order to find the optimal solution of (1):

$$\begin{array}{ll}
\min & \displaystyle\sum_{j=1}^{n} \Big[ (X_j^+ + X_j^-)' \bar{X}^{-1} 1_m + (Y_j^+ + Y_j^-)' \bar{Y}^{-1} 1_s \Big] \\
\text{s.t.} & \displaystyle\sum_{k \in E} \lambda_{kj} X_k = \hat{X}_j, \quad j = 1, \ldots, n \\
& \displaystyle\sum_{k \in E} \lambda_{kj} Y_k = \hat{Y}_j, \quad j = 1, \ldots, n \\
& X_j - \hat{X}_j = X_j^+ - X_j^-, \quad j = 1, \ldots, n \\
& Y_j - \hat{Y}_j = Y_j^+ - Y_j^-, \quad j = 1, \ldots, n \\
& -v' X_k + u' Y_k + d_k = 0, \quad k \in E \\
& \bar{X} v \geq 1_m \\
& \bar{Y} u \geq 1_s \\
& \lambda_k = \displaystyle\sum_{j=1}^{n} \lambda_{kj}, \quad k \in E \\
& d_k \geq 0, \quad k \in E \\
& \lambda_{kj} \geq 0,\; X_j^+, X_j^-, \hat{X}_j \geq 0_m,\; Y_j^+, Y_j^-, \hat{Y}_j \geq 0_s \quad \forall k, j \\
& \text{SOS1: } S_k = \{\lambda_k, d_k\}, \quad k \in E
\end{array} \qquad (3)$$
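To illustrate the two reformulation devices used above, here is a toy, self-contained sketch (not the paper's model (3); the data and sizes are made up): first the absolute-value split $x = x^+ - x^-$ of Remark 1 on a tiny L1 problem, then the big-M/binary complementarity of (1.6) and (1.7) that the SOS1 sets replace, using SciPy's LP/MILP interface (`milp` requires SciPy 1.9 or later):

```python
import numpy as np
from scipy.optimize import linprog, milp, LinearConstraint, Bounds

# --- Remark 1's trick: minimize sum_i |a_i - c| by writing a_i - c = p_i - n_i,
# p_i, n_i >= 0, and minimizing sum_i (p_i + n_i). Variables: [c, p, n].
a = np.array([1.0, 2.0, 7.0])
k = len(a)
obj = np.concatenate([[0.0], np.ones(2 * k)])               # min sum(p + n)
A_eq = np.hstack([np.ones((k, 1)), np.eye(k), -np.eye(k)])  # c + p - n = a
res_l1 = linprog(obj, A_eq=A_eq, b_eq=a,
                 bounds=[(None, None)] + [(0, None)] * (2 * k), method="highs")
c_opt = res_l1.x[0]   # the L1-optimal c is the median of a

# --- (1.6)-(1.7)'s trick: keep lam and d from being positive at once via a
# binary b and a big M: d <= M*b and lam <= M*(1 - b). Toy: maximize lam + d
# with lam <= 3, d <= 2; the complementarity forces one of them to 0.
M = 100.0
cost = np.array([-1.0, -1.0, 0.0])   # milp minimizes; variables [lam, d, b]
cons = LinearConstraint(np.array([[0.0, 1.0, -M],    # d - M*b <= 0
                                  [1.0, 0.0,  M]]),  # lam + M*b <= M
                        ub=np.array([0.0, M]))
res_mip = milp(c=cost, constraints=cons, integrality=np.array([0, 0, 1]),
               bounds=Bounds(lb=[0, 0, 0], ub=[3, 2, 1]))
lam_opt, d_opt, _ = res_mip.x
```

With the data above the L1 problem returns c = 2 (the median of {1, 2, 7}), and the toy MIP picks lam = 3, d = 0 rather than any point with both positive. Model (3) instead declares each pair {λ_k, d_k} an SOS1 set, which solvers like CPLEX handle directly without choosing an M.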

3. Efficiency measurement and ranking of units

The optimal solutions for the weights in (1), $(v^*, u^*)$, can be considered as CSWs in order to define efficiency scores for all the DMUs as usual:

$$\theta_j^* = \frac{u^{*\prime} Y_j}{v^{*\prime} X_j}, \quad j = 1, \ldots, n. \qquad (4)$$

As said in the introduction, there exist other DEA-based approaches in the literature which are aimed at determining a CSW to be used for the efficiency measurement and ranking of units. In many of them, the choice of the weight vector $(v, u)$ is made by somehow minimizing the deviations between the DEA efficiency scores and the scores that would result from those weights. Kao and Hung [23] derive a family of CSWs by minimizing the generalized family of DEA distance measures

$$D_p(u, v) = \Big[ \sum_{j=1}^{n} \big( \theta_j^* - \theta_j(u, v) \big)^p \Big]^{1/p}, \quad p \geq 1.$$

Despotis [17] minimizes a convex combination of these deviations measured in terms of both the $D_1$ and $D_\infty$ distances, $t \frac{1}{n} D_1(u, v) + (1 - t) D_\infty(u, v)$, where t is a parameter in [0, 1] whose specification leads to different CSWs. And Liu and Peng [25] propose another method based on minimizing the following sum of deviations, $\sum_{j \in E} \big( \Delta_j^I + \Delta_j^O \big)$, where $\Delta_j^I$ and $\Delta_j^O$ are such that $\Big( \sum_{r=1}^{s} u_r y_{rj} + \Delta_j^O \Big) \Big/ \Big( \sum_{i=1}^{m} v_i x_{ij} - \Delta_j^I \Big) = 1$ and E represents the total set of efficient DMUs, either extreme or not. It is shown that this procedure is eventually equivalent to finding the weight vector that maximizes the efficiency of an aggregated

DMU (see also Liu and Peng [26]). We can therefore see that these approaches seek a weight vector $(v, u)$, associated with a supporting hyperplane of the PPS, which yields the maximum efficiency scores globally in some sense (as in Ganley and Cubbin [22] and Troutt [46], also mentioned in the introduction). In contrast, our approach provides efficiency scores that result from the weight vectors $(v^*, u^*)$ associated with the hyperplane supported by the group of DMUs which can best play the role of common benchmarks for the remaining units (RG). Since the efficiency scores (4) are calculated with weights that are common to all the DMUs, they can also be used to derive a full ranking of units. In decision making processes in general, rankings often play a relevant role in the selection of alternatives based on the evaluations that are made. In areas like Higher Education, university rankings have experienced increasing popularity. The most visible international rankings are the Academic Ranking of World Universities (ARWU) by Shanghai Jiao Tong University, commonly known as the Shanghai index, and the World University Ranking by Times Higher Education (THES-QS). University rankings actually have some influence on the management of these institutions: on the choice of a convenient place by students, on recruitment decisions by employers, on university policies, motivating the competitiveness among them, etc. (see, e.g., De Witte and Hudrlikova [18] for an interesting discussion on this issue). The present paper is particularly concerned with the ranking of DMUs, as a continuation of the work previously developed on benchmarking. So far, a procedure has been proposed that could be seen as an alternative to the existing DEA-based methods for determining CSWs, which are used to define efficiency scores and produce rankings of units.
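As a small illustration of (4) and of the full ranking a CSW induces, a sketch with a hypothetical CSW $(v^*, u^*)$ and made-up data (none of it from the paper's empirical section):

```python
import numpy as np

def csw_scores_and_ranks(X, Y, v, u):
    """Efficiency scores (4), theta_j = u'Y_j / v'X_j, under a common set of
    weights, and the full ranking they induce (rank 1 = best)."""
    theta = (Y @ u) / (X @ v)
    order = np.argsort(-theta)              # DMUs from best to worst score
    ranks = np.empty(len(theta), dtype=int)
    ranks[order] = np.arange(1, len(theta) + 1)
    return theta, ranks

# Hypothetical data: 3 DMUs, 1 input, 1 output, and a CSW v = u = (1,).
X = np.array([[2.0], [4.0], [4.0]])
Y = np.array([[4.0], [4.0], [3.0]])
theta, ranks = csw_scores_and_ranks(X, Y, np.array([1.0]), np.array([1.0]))
```

Because the same (v, u) is applied to every DMU, the scores are directly comparable and yield a complete ordering, which DMU-specific DEA weights do not.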
However, in the next subsection we develop an approach which goes one step further, in the sense that it yields for each unit the range of possible rankings that would result from the efficiency scores of all the DMUs calculated with all the alternative optimal solutions of (1) for $(v, u)$. Thus, this approach considers all the CSWs provided by (1), instead of giving the single ranking that would result from the choice of a CSW according to some additional criterion. The ranking ranges allow us to analyze the robustness of the rankings against the alternate optima for the CSW.

3.1. Ranking ranges

In this subsection, we develop an approach aimed at analyzing how much the ranking of DMUs derived from (4) can change over the set of input and output weights associated with the optimal solutions of (1). As has been said, the CSWs provided by (1) are the coefficients of supporting hyperplanes of T at the facet spanned by the DMUs in RG, which is the one used as common best practice frontier in the benchmarking. If $|RG| < m + s - 1$, where $|RG|$ is the cardinality of RG, then that facet is not of full dimension and, therefore, there will be infinitely many supporting hyperplanes of T containing that facet of the DEA efficient frontier, each one associated with an optimal solution of (1) for $(v, u)$. As a consequence, the efficiency scores defined in (4), and the subsequent rankings of DMUs, might vary depending on the choice of CSW that is made. Actually, this will often happen in practice, because full dimensional efficient facets (FDEFs) rarely exist, as a result of insufficient variation in the data (see Olesen and Petersen [27] for discussions). Therefore, analyzing the robustness of the rankings that the efficiency scores (4) may yield becomes an issue of great interest. Obviously, the DMUs in RG will rank at the top regardless of which CSW provided by (1) is used, because they are all rated with the maximum efficiency score. However, the possible changes in the ranks of the other DMUs due to the potential choices of CSWs should be investigated.



To deal with this issue, we show here how to find for each DMU the range for its possible rankings that would result from considering all the alternate optima for the CSW (see Salo and Punkka [38] for ranking intervals in ratio efficiency analyses and Alcaraz et al. [5] for ranking ranges in the context of the cross-efficiency evaluation). Specifically, we find the best and the worst rankings that a given DMU0 not in RG could achieve if all the alternative optimal solutions for the CSW in (1) were considered. Thus, this is an approach that avoids the need to introduce an additional criterion in order to make a choice of weights among the alternate optima. We start by defining the following sets of DMUs associated with each of the optimal solutions ðv ; u Þ Definition 1. Let DMU0 be a given unit not in RG. For every   ðv ; u Þ, H0 ðv ; u Þ ¼ DMUℓ ; ℓ2 = RG=θℓ 4 θ0 . H0 ðv ; u Þ is the set of DMUs (not in RG) that outperform DMU0 with the CSW ðv ; u Þ. Definition 2. Let DMU0 be a given unit not in RG. For every   ðv ; u Þ, L0 ðv ; u Þ ¼ DMUℓ ; ℓ2 = RG=θℓ o θ0 . L0 ðv ; u Þ is defined in a similar manner as H0 ðv ; u Þ but considering instead the DMUs that perform worse than DMU0 with the CSW ðv ; u Þ. In order to determine the best ranking of DMU0 we need to find the CSW in (1) that gives rise to the minimum number of DMUs with a higher efficiency score than that of DMU0. Formally, Definition 3. The best ranking of a DMU0 not in RG is given by   H 0 ðv ; u Þ þ 1 ð5Þ r b0 ¼ jRGjþ Min   ðv ;u Þ

  where H 0 ðv ; u Þ and jRGj are, respectively, the cardinality of the   sets H 0 ðv ; u Þ and RG. The following proposition shows how to find rb0 Proposition 2. For every DMU0 not in RG, r b0 ¼ n  LE0 ; where Max

LE0

ð6Þ

is the optimal value of the problem P Iℓ LE0 ¼ k A E RG

ð7:1Þ

g A RG

ð7:2Þ

Xv Z 1m

ð7:3Þ

Yu Z 1s 0 θℓ ¼ uv0 XY ℓℓ

ℓ2 = RG

ð7:4Þ ð7:5Þ

ℓ2 = RG; ℓ a 0

ð7:6Þ

θℓ  θ0 r 1  Iℓ

ðv ;u Þ

L^E_0 = Max Σ_{ℓ∉RG, ℓ≠0} I_ℓ

s.t.:

 −v′X_k + u′Y_k + d_k ≤ 0,  k ∈ E∖RG   (7.1)
 −v′X_g + u′Y_g = 0,  g ∈ RG   (7.2)
 Xv ≥ 1_m   (7.3)
 Yu ≥ 1_s   (7.4)
 θ_ℓ = u′Y_ℓ / v′X_ℓ,  ℓ ∉ RG   (7.5)
 θ_ℓ − θ_0 ≤ 1 − I_ℓ,  ℓ ∉ RG, ℓ ≠ 0   (7.6)
 I_ℓ ∈ {0, 1},  ∀ ℓ ∉ RG, ℓ ≠ 0   (7)

where I_ℓ, ℓ ∉ RG, ℓ ≠ 0, are binary variables that, at optimum, indicate whether DMU_0 outperforms DMU_ℓ or not.

Proof. With (7.1)–(7.4), model (7) allows for all the optimal solutions of (1) for the CSWs. The constraints (7.6) are included in order to identify the DMUs not in RG, excluding DMU_0, which have efficiency scores lower than or equal to that of DMU_0 and, consequently, those with larger efficiency scores. Note that θ_ℓ − θ_0 ≤ 1. Hence, for every choice of CSWs (v, u), if θ_ℓ > θ_0 for some DMU_ℓ ∉ RG, ℓ ≠ 0, then I_ℓ will necessarily be 0, while if θ_ℓ ≤ θ_0 then I_ℓ can be either 0 or 1. Since we maximize the sum of the I_ℓ's in the objective of (7), at optimum θ_ℓ > θ_0 happens if, and only if, I_ℓ = 0, while θ_ℓ ≤ θ_0 is necessarily associated with I_ℓ = 1. Thus, model (7) finds the maximum number of DMU_ℓ's not in RG with efficiency scores lower than or equal to that of DMU_0 (excluding DMU_0), i.e., L^E_0 = Max_{(v,u)} [n − (|RG| + H_0(v, u) + 1)]. Therefore, L^E_0 = n − Min_{(v,u)} [|RG| + H_0(v, u) + 1]. Eventually, (6) holds, as |RG| does not depend on (v, u). □

Model (7) is a mixed-integer non-linear problem. The following holds:

Proposition 3. Model (7) has a global optimum.

Proof. Trivial, because the objective in (7) is a function that can only take a finite number of values and the feasible set is obviously non-empty. □

To find this global optimum, an approach based on a parametric mixed-integer non-linear problem can be used, in which L^E_0 serves as the integer parameter. Since L^E_0 belongs to {0, ..., n − (|RG| + 1)}, we need to find the maximum value of the parameter for which a feasible solution of (7) exists; that value will be L^E_0. To carry out that search we can start with the largest value of the parameter, L^E_0 = n − (|RG| + 1), and continue with L^E_0 = n − (|RG| + 1) − 1, n − (|RG| + 1) − 2, ..., until a feasible solution of (7) is found.

An alternative which simplifies the search significantly in practice is the following. The idea is to start with an initial ranking of DMUs provided by the optimal solution of (1) and realize that, if we have a solution of (7) with L^E_0 = p, then there exists a solution of that problem for every L^E_0 in {0, ..., p − 1} (we just need to change some of the I_ℓ's which are 1 in the former solution of (7) to 0). Suppose that the rank of DMU_0 is r according to the optimal solution of (1) that is found when it is solved in practice. Then, we know that there is a solution of (7) with L^E_0 = n − r. Thus, we have to check whether (7) is feasible for the values L^E_0 = (n − r) + 1, (n − r) + 2, ... If (7) becomes infeasible for the first time when L^E_0 = (n − r) + q, then L^E_0 = (n − r) + q − 1.

We can proceed in a similar manner in order to find the worst ranking of DMU_0. The most unfavorable scenario for DMU_0 is that in which we have the minimum number of DMUs that perform worse or, equivalently, the maximum number of DMUs that perform no worse than DMU_0. Formally,

Definition 4. The worst ranking of a DMU_0 not in RG is given by

 r^w_0 = n − Min_{(v,u)} |L_0(v, u)|   (8)

where |L_0(v, u)| and |RG| are, respectively, the cardinality of the sets L_0(v, u) and RG.

The following proposition shows how to find r^w_0.

Proposition 4. For every DMU_0 not in RG,

 r^w_0 = |RG| + H^E_0 + 1,   (9)

where H^E_0 is the optimal value of the problem

 H^E_0 = Max Σ_{ℓ∉RG, ℓ≠0} I_ℓ

 s.t.:

  −v′X_k + u′Y_k + d_k ≤ 0,  k ∈ E∖RG   (10.1)
  −v′X_g + u′Y_g = 0,  g ∈ RG   (10.2)
  Xv ≥ 1_m   (10.3)
  Yu ≥ 1_s   (10.4)
  θ_ℓ = u′Y_ℓ / v′X_ℓ,  ℓ ∉ RG   (10.5)
  θ_0 − θ_ℓ ≤ 1 − I_ℓ,  ℓ ∉ RG, ℓ ≠ 0   (10.6)
  I_ℓ ∈ {0, 1},  ∀ ℓ ∉ RG, ℓ ≠ 0   (10)

where I_ℓ, ℓ ∉ RG, ℓ ≠ 0, are binary variables that, at optimum, indicate whether DMU_ℓ outperforms DMU_0 or not.

Proof. The proof is very similar to that of Proposition 2. Note first that (10) is only the result of replacing (7.6) in (7) with θ_0 − θ_ℓ ≤ 1 − I_ℓ. Thus, at optimum θ_ℓ ≥ θ_0 is necessarily associated with I_ℓ = 1, while θ_ℓ < θ_0 with I_ℓ = 0 and, consequently, H^E_0 accounts for the maximum number of DMUs not in RG with an efficiency

Please cite this article as: Ruiz JL, Sirvent I. Common benchmarking and ranking of units with DEA. Omega (2016), http://dx.doi.org/ 10.1016/j.omega.2015.11.007i

J.L. Ruiz, I. Sirvent / Omega ∎ (∎∎∎∎) ∎∎∎–∎∎∎

score higher than or equal to that of DMU_0 (excluding DMU_0). That is, H^E_0 = Max_{(v,u)} [n − (|RG| + |L_0(v, u)| + 1)] = n − Min_{(v,u)} |L_0(v, u)| − (|RG| + 1), and so, r^w_0 = |RG| + H^E_0 + 1. □
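The feasibility scan over the integer parameter described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `feasible` is a hypothetical oracle standing in for solving the parametric mixed-integer problem with the parameter fixed at a given value.

```python
def scan_parameter(n, r, feasible):
    """Find the largest value of the integer parameter for which the
    parametric problem is feasible, starting from the value n - r that
    the initial ranking r from model (1) guarantees to be feasible.

    feasible(L) is a hypothetical stand-in for solving the parametric
    mixed-integer problem with the parameter fixed at L.
    """
    L = n - r  # a feasible value is known to exist here
    # check (n - r) + 1, (n - r) + 2, ... until the first infeasibility
    while L + 1 <= n - 1 and feasible(L + 1):
        L += 1
    return L
```

For instance, with n = 28 units, an initial rank r = 10, and an oracle that is feasible up to a parameter value of 20, the scan starts at 18 and stops at 20, avoiding the downward search from n − (|RG| + 1).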

Obviously, model (10) also has a global optimum, which can be found by using a parametric approach, as with (7).

The developments above provide a couple of values, r^b_0 and r^w_0, which determine the range of possible rankings for each DMU_0. This is useful information regarding the performance of the different units, which results from an analysis of efficiency that is carried out by using as common benchmarks the DMUs in RG. The situation is the following: as said before, the DMUs in RG obviously rank at the top. As for the remaining units, if, for example, r^w_j < r^b_j', this means that DMU_j ranks higher than DMU_j' irrespective of the CSW that is chosen. If r^w_j < r^b_j' for every j' ∈ J, where J is a certain subset of DMUs, then we have that DMU_j outperforms the DMUs in that subset. Similar conclusions can be drawn for DMUs that are in two different groups when these relationships are identified. We illustrate this in the next section.

We end this section on the ranking of units by pointing out that, as a result of using an approach based on DEA (the CSWs (v, u) are actually the coefficients of a supporting hyperplane that contains a facet of the DEA efficient frontier), the rank of a given DMU that results from the efficiency scores (4) should be seen as reflecting its relative position in the presence of the DMUs considered in the sample. If a new DMU were added to the sample, then the rankings could change. That is, there exists the possibility of rank reversal. Obviously, this also happens in other DEA-based approaches for ranking, such as the cross-efficiency evaluation [19,39], the super-efficiency score [3] and the AHP/DEA methodology in Sinuany-Stern et al. [41] (except in the one input–one output case in the latter paper), aside from the other existing methods for determining a CSW already mentioned in this paper. In fact, as discussed in Wang and Luo [47], the rank reversal phenomenon occurs in many decision making approaches such as the Analytic Hierarchy Process (AHP), the Borda–Kendall (BK) method for aggregating ordinal preferences, the simple additive weighting (SAW) method and the technique for order preference by similarity to ideal solution (TOPSIS).
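The pairwise dominance check just described, r^w_j < r^b_{j'}, can be sketched as follows; the unit names and ranking ranges below are made up for illustration, not taken from the paper.

```python
def dominance_pairs(best, worst):
    """Return the pairs (j, k) such that DMU j ranks higher than DMU k
    irrespective of the optimal CSW chosen, i.e. the worst ranking of j
    is still better (smaller) than the best ranking of k."""
    return sorted((j, k)
                  for j in best for k in best
                  if j != k and worst[j] < best[k])

# Hypothetical ranking ranges [best, worst] for three units.
best = {"A": 1, "B": 3, "C": 5}
worst = {"A": 2, "B": 4, "C": 6}
pairs = dominance_pairs(best, worst)  # A dominates B and C; B dominates C
```

Units whose ranges overlap yield no pair, reflecting that their relative order depends on the optimal CSW that is chosen.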

4. Empirical illustration

To illustrate the proposed approach, we use the data in Coelli et al. [11], which relate to the performance of 28 international airlines from North America, Europe and Asia–Australia during the year 1990. This data set has already been used for illustration purposes in other DEA papers (see Ray [32,33], Aparicio et al. [7] and Ruiz [36]). For each airline, two outputs and four inputs were considered. The outputs were passenger–kilometers flown (PASS) and freight tonne–kilometers flown (CARGO). The inputs were the number of employees (LAB), fuel (FUEL, in millions of gallons), other inputs (MATL, in millions of U.S. dollar equivalents), excluding labor and fuel expenses, and capital (CAP), computed as the sum of the maximum takeoff weights of all aircraft flown multiplied by the number of days flown. The DEA efficiency analysis of these data reveals that 9 airlines are technically efficient: JAL, QANTAS, SAUDIA, SINGAPORE, FINNAIR, LUFTHANSA, SWISSAIR, PORTUGAL and AM.WESTERN. The study carried out here can be seen as one in which the analysts wish the airlines to be evaluated on a common basis of analysis (in contrast to DEA). That is, this analysis does not consider individual circumstances of the airlines, which are evaluated within a common benchmarking framework. The optimal solution of model (1) shows that the set of efficient airlines RG = {JAL,

Table 1
Benchmarking: intensities of the airlines in the common reference group.
(Columns: JAL, SINGAPORE, FINNAIR, AM. WEST — the intensities of each referent in the benchmarking of each of the 28 airlines. Bottom row, # Referent: 15, 17, 15 and 13, respectively.)
a Airlines in the common reference group.

SINGAPORE, FINNAIR, AM.WESTERN} is the common reference group that allows us to find the closest targets globally on the facet of the DEA efficient frontier they span. This means that QANTAS, SAUDIA, LUFTHANSA, SWISSAIR and PORTUGAL could have taken advantage of the DEA self-evaluation to be rated as (technically) efficient, because they become inefficient when they are evaluated with respect to the same set of reference airlines. Table 1 records the intensities (the λ_kj's) associated with each of the airlines in RG in the benchmarking of each of the other airlines, while Table 2 reports the actual data and the targets found for each airline on the common best practice frontier. In Table 1 we can see that JAL, SINGAPORE, FINNAIR and AM.WESTERN have participated in the benchmarking of 15, 17, 15 and 13, respectively, of the remaining airlines. AM.WESTERN has played a role as benchmark for the airlines from North America (as well as for NIPPON). SINGAPORE and FINNAIR have been the referents for the European airlines, while the airlines from Asia–Australia have been benchmarked against these two latter airlines together with JAL. As for the targets, we first notice that Table 2 obviously does not report those of the airlines in RG, because targets and actual inputs/outputs coincide in that case. However, in the case of the technically efficient airlines not in RG, note that the targets suggest that improvements will be accomplished by reallocations between inputs and/or outputs on the DEA efficient frontier. This is because they are located on facets of the frontier different from that determined by the airlines in RG. In particular, LUFTHANSA is suggested to implement important changes both in inputs and outputs. For PORTUGAL, the setting of targets raises the need for substitutions between inputs (increasing FUEL and CAP at the expense of significantly reducing LAB and the other assorted inputs).
Note, in contrast, that the targets for QANTAS seem to be less demanding.
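Targets on the common best practice frontier are obtained by combining the inputs/outputs of the referents in RG with the intensities of Table 1. The sketch below uses made-up referent names and figures, not values from the paper, to show the intensity-weighted combination.

```python
# Hypothetical referents and two of their inputs (LAB, FUEL).
referents = {
    "REF1": {"LAB": 10000.0, "FUEL": 500.0},
    "REF2": {"LAB": 4000.0, "FUEL": 120.0},
}

def combine(intensities, data):
    """Intensity-weighted combination of the referents' input/output
    vectors, as in the lambda columns of Table 1."""
    keys = next(iter(data.values()))
    return {k: sum(lam * data[ref][k] for ref, lam in intensities.items())
            for k in keys}

# Target for a hypothetical inefficient unit with intensities 0.5 and 0.25.
target = combine({"REF1": 0.5, "REF2": 0.25}, referents)
```

A unit benchmarked against these two referents with those intensities would thus be set a LAB target of 6000 and a FUEL target of 280.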


Table 2
Target setting: actual inputs/outputs and targets.
(For each of the 28 airlines, the actual values of LAB, FUEL, MATL, CAP, PASS and CARGO together with the corresponding targets on the common best practice frontier.)
a Airlines in the common reference group.

It is also worth highlighting that the results provided by model (1) appear to show a better performance of the airlines from North America, because the targets in those cases are closer to the actual inputs/outputs (with the exceptions of EASTERN and USAIR). See, in particular, TWA, PANAM, CANADIAN or AIR CANADA (in the latter case, the differences between data and targets can be explained as technical inefficiency). In contrast, SAUDIA and NIPPON are probably the airlines exhibiting the worst performance; the differences between actual inputs/outputs and targets reveal that improvements will require significant efforts.

The approach proposed also allows us to measure efficiency and, in particular, to provide a ranking of airlines based on the resulting efficiency scores. In order to do so, we first note that the facet spanned by the airlines in RG is not of full dimension: |RG| = 4 while m + s − 1 = 5. Thus, we will have infinitely many optimal solutions for the

Table 3
Rankings.

                                         Liu and Peng's [25] approach
Airline        Best  Worst  Difference   Eff. scores   Ranking
NIPPON          19    19     0           0.8220         9
CATHAY          17    18     1           0.7126        15
GARUDA          28    28     0           0.2772        28
JALa             1     1     0           0.8525         5
MALAYSIA        26    26     0           0.4102        26
QANTAS          11    15     4           0.7368        13
SAUDIA          16    18     2           0.4743        24
SINGAPOREa       1     1     0           1              1
AUSTRIA         25    25     0           0.3929        27
BRITISH         23    23     0           0.5961        19
FINNAIRa         1     1     0           0.7205        14
IBERIA          21    21     0           0.4879        23
LUFTHANSA       16    17     1           0.5867        21
SAS             24    24     0           0.5259        22
SWISSAIR        12    13     1           0.6122        18
PORTUGAL        27    27     0           0.4200        25
AIR CANADA      14    15     1           0.6697        16
AM. WESTa        1     1     0           0.8508         6
AMERICAN         7     7     0           0.8249         8
CANADIAN        11    14     3           0.7506        12
CONTINENTAL      9     9     0           0.8608         3
DELTA           11    13     2           0.7754        11
EASTERN         20    20     0           0.6560        17
NORTHWEST        8     8     0           0.8543         4
PANAM            5     6     1           0.8729         2
TWA              5     6     1           0.8491         7
UNITED          10    10     0           0.8205        10
USAIR           22    22     0           0.5897        20

a Airlines in the common reference group.

weight vector provided by model (1). Table 3 reports the ranges of possible rankings for each airline, [r^b_0, r^w_0], as defined in (5) and (8), together with the differences r^w_0 − r^b_0. These allow us to analyze the robustness of the rankings provided against the alternate optima for the CSWs. Obviously, the airlines in RG always rank at the top. As for the remaining airlines, the large number of zeros under the column "Difference" reveals that, in spite of the existence of multiple CSWs, the rankings that the efficiency scores (4) may yield are robust. Note, in particular, that for 13 out of the 24 airlines not in RG the rankings remain unchanged for the different CSWs. Therefore, we can have a clear picture of the ranking of airlines regarding the measurement of efficiency. For example, Table 3 shows that, after the airlines in RG, PANAM and TWA would rank either 5th or 6th, followed by AMERICAN, NORTHWEST, CONTINENTAL and UNITED, in this order. Thus, we can state that these airlines complete the top 10, irrespective of the optimal CSW that is chosen. Again, the airlines from North America appear to show a better performance. Finally, we note that the ranking is uniquely determined from position 19 on. In particular, SAS, AUSTRIA, MALAYSIA, PORTUGAL and GARUDA are the bottom 5 airlines (in this order).

Table 3 also reports a couple of columns with the results provided by the approach in Liu and Peng [25] previously mentioned, specifically the efficiency scores and the ranking of airlines associated with the CSW determined by that approach. In contrast to the approach we propose, it can be seen that the efficiency scores provided by Liu and Peng's [25] approach are obtained with weights associated with a facet of the DEA efficient frontier supported by only one airline, SINGAPORE, which is the only one rated 1 (while in our approach RG consists of 4 airlines).
Note, in any case, that the approach in Liu and Peng [25] pursues a different objective: maximizing the efficiency scores. Obviously, there are important differences in the rankings of some airlines provided by both approaches. However, some of the conclusions we have drawn




with ours remain valid when Liu and Peng's results are analyzed. For example, the top 10 airlines are about the same (with the exception of NIPPON, which enters the top 10 according to Liu and Peng at the expense of FINNAIR), and the same happens with those that exhibit the poorest performance (PORTUGAL, MALAYSIA, AUSTRIA and GARUDA).
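The top-10 claim above can be checked directly from the ranking ranges reported in Table 3: an airline is in the top 10 for every optimal CSW whenever even its worst ranking does not exceed 10. A small sketch using the ranges of some of the airlines from Table 3:

```python
# Ranking ranges (best, worst) of some non-RG airlines from Table 3;
# the four airlines in RG all rank 1.
ranges = {
    "PANAM": (5, 6), "TWA": (5, 6), "AMERICAN": (7, 7),
    "NORTHWEST": (8, 8), "CONTINENTAL": (9, 9), "UNITED": (10, 10),
    "CANADIAN": (11, 14), "QANTAS": (11, 15), "DELTA": (11, 13),
}

# Airlines guaranteed a top-10 position irrespective of the CSW chosen.
guaranteed_top10 = sorted(a for a, (b, w) in ranges.items() if w <= 10)
```

Together with the four airlines in RG, the six airlines selected complete the top 10, in agreement with the discussion of Table 3.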

5. Conclusions

In management, organizations use benchmarking for the evaluation of their processes in comparison to the best practices of others within a peer group of firms in an industry or sector. In the best-practice benchmarking process, the identification of the best firms enables the setting of targets, which allows these organizations to learn from others and develop plans for improving some aspects of their own performance. Ranking firms on the basis of the evaluations also provides useful information for decision making; usually, a higher ranking means better performance.

In this paper, we propose a DEA-based approach for benchmarking and ranking decision making units. The DMUs involved in production processes often experience similar circumstances, so benchmarking analyses in those situations should identify common referents and establish common best practices. This approach is thus to be used when there is no need (nor wish) to allow for individual circumstances of the DMUs. In particular, this means that input and output weights should be common to all units in the evaluations, in contrast to DEA. DEA, however, has been used in the literature to find the CSW to be used in a ratio efficiency analysis, which suggests that it could also be used for the development of a common framework for benchmarking. The proposed approach identifies a common best practice frontier as the facet of the DEA efficient frontier generated by some technically efficient DMUs in a common reference group. This common reference group is selected as that which provides the closest targets, that is, by minimizing the gap between actual performances and best practices. The model developed also provides CSWs, which can be used to define efficiency scores and rank the DMUs.
As future research, we would like to investigate a possible extension of this common benchmarking approach to the case where DMUs can be put into groups whose members experience similar circumstances, in line with the work in Cook and Zhu [14].

Acknowledgements

We are very grateful to Juan F. Monge and José L. Sainz-Pardo for their help with computational aspects.

References

[1] Adler N, Friedman L, Sinuany-Stern Z. Review of ranking methods in the data envelopment analysis context. European Journal of Operational Research 2002;140(2):249–265.
[2] Adler N, Liebert V, Yazhemsky E. Benchmarking airports from a managerial perspective. Omega 2013;41(2):442–458.
[3] Andersen P, Petersen NC. A procedure for ranking efficient units in data envelopment analysis. Management Science 1993;39(10):1261–1264.
[4] Angulo-Meza L, Estellita Lins MP. Review of methods for increasing discrimination in data envelopment analysis. Annals of Operations Research 2002;116(1–4):225–242.
[5] Alcaraz J, Ramón N, Ruiz JL, Sirvent I. Ranking ranges in cross-efficiency evaluations. European Journal of Operational Research 2013;226(3):516–521.
[6] Aparicio J, Pastor JT. Closest targets and strong monotonicity on the strongly efficient frontier in DEA. Omega 2014;44:51–57.
[7] Aparicio J, Ruiz JL, Sirvent I. Closest targets and minimum distance to the Pareto-efficient frontier in DEA. Journal of Productivity Analysis 2007;28(3):209–218.
[8] Beale EML, Tomlin JA. Special facilities in a general mathematical programming system for non-convex problems using ordered sets of variables. In: Lawrence J, editor. Proceedings of the fifth international conference on operational research. London: Tavistock Publications; 1970. p. 447–454.
[9] Charnes A, Cooper WW, Rhodes E. Measuring the efficiency of decision making units. European Journal of Operational Research 1978;2(6):429–444.
[10] Charnes A, Cooper WW, Thrall RM. A structure for classifying and characterizing efficiency and inefficiency in data envelopment analysis. Journal of Productivity Analysis 1991;2(3):197–237.
[11] Coelli TE, Grifell-Tatjé E, Perelman S. Capacity utilization and profitability: a decomposition of short-run profit efficiency. International Journal of Production Economics 2002;79(3):261–278.
[12] Cook WD, Seiford LM, Zhu J. Models for performance benchmarking: measuring the effect of e-business activities on banking performance. Omega 2004;32(4):313–322.
[13] Cook WD, Tone K, Zhu J. Data envelopment analysis: prior to choosing a model. Omega 2014;44:1–4.
[14] Cook WD, Zhu J. Within-group common weights in DEA: an analysis of power plant efficiency. European Journal of Operational Research 2007;178(1):207–216.
[15] Cooper WW, Tone K. Measures of inefficiency in data envelopment analysis and stochastic frontier estimation. European Journal of Operational Research 1997;99(1):72–88.
[16] Dai X, Kuosmanen T. Best-practice benchmarking using clustering methods: application to energy regulation. Omega 2014;42(1):179–188.
[17] Despotis DK. Improving the discriminating power of DEA: focus on globally efficient units. Journal of the Operational Research Society 2002;53(3):314–323.
[18] De Witte K, Hudrlikova L. What about excellence in teaching? A benevolent ranking of universities. Scientometrics 2013;96:337–364.
[19] Doyle JR, Green RH. Efficiency and cross-efficiency in DEA: derivations, meanings and uses. Journal of the Operational Research Society 1994;45(5):567–578.
[20] Dyson RG, Allen R, Camanho AS, Podinovski VV, Sarrico CS, Shale EA. Pitfalls and protocols in DEA. European Journal of Operational Research 2001;132(2):245–259.
[21] Fukuyama H, Masaki H, Sekitani K, Shi J. Distance optimization approach to ratio-form efficiency measures in data envelopment analysis. Journal of Productivity Analysis 2014;42(2):175–186.
[22] Ganley JA, Cubbin JS. Public sector efficiency measurement: applications of data envelopment analysis. Amsterdam: North-Holland; 1992.
[23] Kao C, Hung HT. Data envelopment analysis with common weights: the compromise solution approach. Journal of the Operational Research Society 2005;56(10):1196–1203.
[24] Liu FHF, Peng HH, Chang HW. Ranking DEA efficient units with the most compromising common weights. In: Zhang XS, Liu DG, Wu LY, editors. Operations research and its applications, 6; 2006. p. 219–234.
[25] Liu FHF, Peng HH. Ranking of units on the DEA frontier with common weights. Computers and Operations Research 2008;35(5):1624–1637.
[26] Liu FHF, Peng HH. A systematic procedure to obtain a preferable and robust ranking of units. Computers and Operations Research 2009;36(4):1012–1025.
[27] Olesen O, Petersen NC. Indicators of ill-conditioned data sets and model misspecification in data envelopment analysis: an extended facet approach. Management Science 1996;42:205–219.
[28] Podinovski VV, Thanassoulis E. Improving discrimination in data envelopment analysis: some practical suggestions. Journal of Productivity Analysis 2007;28(1–2):117–126.
[29] Portela MCAS, Borges PC, Thanassoulis E. Finding closest targets in non-oriented DEA models: the case of convex and non-convex technologies. Journal of Productivity Analysis 2003;19(2–3):251–269.
[30] Ramón N, Ruiz JL, Sirvent I. On the choice of weights profiles in cross-efficiency evaluations. European Journal of Operational Research 2010;207(3):1564–1572.
[31] Ramón N, Ruiz JL, Sirvent I. Reducing differences between profiles of weights: a "peer-restricted" cross-efficiency evaluation. Omega 2011;39(6):634–641.
[32] Ray SC. Data envelopment analysis: theory and techniques for economics and operations research. Cambridge: Cambridge University Press; 2004.
[33] Ray SC. The directional distance function and measurement of super-efficiency: an application to airlines data. Journal of the Operational Research Society 2008;59(6):788–797.
[34] Roll Y, Cook WD, Golany B. Controlling factor weights in data envelopment analysis. IIE Transactions 1991;23(1):2–9.
[35] Roll Y, Golany B. Alternate methods of treating factor weights in DEA. Omega 1993;21(1):99–109.
[36] Ruiz JL. Cross-efficiency evaluation with directional distance functions. European Journal of Operational Research 2013;228(1):181–189.
[37] Ruiz JL, Segura JV, Sirvent I. Benchmarking and target setting with expert preferences: an application to the evaluation of educational performance of Spanish universities. European Journal of Operational Research 2015;242(2):594–605.
[38] Salo A, Punkka A. Ranking intervals and dominance relations for ratio-based efficiency analysis. Management Science 2011;57(1):200–214.
[39] Sexton TR, Silkman RH, Hogan AJ. Data envelopment analysis: critique and extensions. In: Silkman RH, editor. Measuring efficiency: an assessment of data envelopment analysis. San Francisco: Jossey-Bass; 1986. p. 73–105.
[40] Sinuany-Stern Z, Friedman L. DEA and the discriminant analysis of ratios for ranking units. European Journal of Operational Research 1998;111(3):470–478.
[41] Sinuany-Stern Z, Mehrez A, Hadad Y. An AHP/DEA methodology for ranking decision making units. International Transactions in Operational Research 2000;7:109–124.
[42] Thanassoulis E, Portela MCAS, Allen R. Incorporating value judgments in DEA. In: Cooper WW, Seiford LM, Zhu J, editors. Handbook on data envelopment analysis. Boston: Kluwer Academic Publishers; 2004.
[43] Thanassoulis E, Portela MCS, Despic O. Data envelopment analysis: the mathematical programming approach to efficiency analysis. In: Fried HO, Lovell CAK, Schmidt SS, editors. The measurement of productive efficiency and productivity growth. New York: Oxford University Press; 2008.
[44] Thrall RM. Duality, classification and slacks in DEA. Annals of Operations Research 1996;66:109–138.
[45] Tone K. Variations on the theme of slacks-based measure of efficiency in DEA. European Journal of Operational Research 2010;200(3):901–907.
[46] Troutt MD. Derivation of the maximin efficiency ratio model from the maximum decisional efficiency principle. Annals of Operations Research 1997;73:323–338.
[47] Wang YM, Luo Y. On rank reversal in decision analysis. Mathematical and Computer Modelling 2009;49(5–6):1221–1229.
