How to generate regularly behaved production data? A Monte Carlo experimentation on DEA scale efficiency measurement

European Journal of Operational Research 199 (2009) 303–310

Interfaces with Other Disciplines

Sergio Perelman (a), Daniel Santín (b)

(a) CREPP, HEC-ULg Management School, Université de Liège, Bd. du Rectorat 7 (B31), B-4000 Liège, Belgium
(b) Departamento de Economía Aplicada VI, Universidad Complutense de Madrid, Campus de Somosaguas, 28223 Pozuelo de Alarcón, Madrid, Spain

Article history: Received 14 September 2006; accepted 17 November 2008; available online 25 November 2008.

Keywords: Parametric distance function; DEA; Technical efficiency; Scale efficiency; Monte Carlo experiments.

Abstract

Monte Carlo experimentation is a well-known approach used to test the performance of alternative methodologies under different hypotheses. In the frontier analysis framework, whatever the parametric or non-parametric methods tested, experiments to date have been developed assuming single output multi-input production functions. The data generated have mostly assumed a Cobb–Douglas technology. Among other drawbacks, this simple framework does not allow the evaluation of DEA performance on scale efficiency measurement. The aim of this paper is twofold. On the one hand, we show how reliable two-output two-input production data can be generated using a parametric output distance function approach. A variable returns to scale translog technology satisfying regularity conditions is used for this purpose. On the other hand, we evaluate the accuracy of DEA technical and scale efficiency measurement when sample size and output ratios vary. Our Monte Carlo experiment shows that the correlation between true and estimated scale efficiency is dramatically low when DEA analysis is performed with small samples and wide output ratio variations. © 2008 Elsevier B.V. All rights reserved.

1. Introduction

Most real-world problems involve multi-output multi-input processes. This is especially observed in the public sector, where policy makers usually face multi-dimensional decisions and budget trade-offs, but it is also the case in financial services, agriculture, transportation and other areas. In the field of frontier analysis, a number of non-parametric and parametric methods have been proposed to build best practice frontiers and to measure technical and scale efficiencies across decision making units (DMUs). In order to shed light on the properties of one methodology, or in order to test the potential advantages and disadvantages of competing techniques, Monte Carlo experimentation appears to be the statistical referee most often selected. Bowlin et al. (1985), Banker et al. (1987), Gong and Sickles (1992), Banker et al. (1993) and Thanassoulis (1993) initiated the tradition of comparing the performance of non-parametric, mainly DEA (data envelopment analysis), approaches vs. parametric frontier approaches. In more recent years, several papers involving Monte Carlo experiments have concerned DEA issues: Pedraja-Chaparro et al. (1997) study the benefits of weight restrictions, Ruggiero (1998) and Yu (1998) analyze the introduction of non-discretionary inputs, Zhang and Bartels (1998) investigate the effect of sample size on mean technical efficiency scores, Holland and Lee (2002) measure the influence of random noise and Steinmann and Simar (2003) assess the comparability of estimated inter-group mean efficiencies. These are only a few examples; the complete list of Monte Carlo studies and experimental designs also includes sensitivity analysis of the number of DMUs employed, random noise and efficiency–inefficiency distributions, functional forms, the number of replications, and so on. Our goal is not to present here a survey of these studies or of their main conclusions, but to make the observation that, to our knowledge and without exception, these studies were performed in a single output multi-input framework and mostly using a Cobb–Douglas technology in order to generate the data. As most real-world production activities are of a complex multi-dimensional nature, simple experimental designs imply a potential limitation on the generalization of the results presented.

Nevertheless, in a seminal paper, Lovell et al. (1994) introduced a methodology that allows the estimation of a parametric production function in a multi-output multi-input setting. For this purpose, they used an output distance function and a translog technology. However, if authors performing Monte Carlo experiments have neglected parametric distance functions and, more generally, flexible translog technologies in generating testable production data, the reason must be found in the difficulties encountered in imposing microeconomic behavioral regularity conditions upon them, mainly monotonicity and convexity constraints. In a recent paper, O'Donnell and Coelli (2005) address this issue. They show how well-behaved curvature conditions on distance functions can be imposed in empirical studies using a Bayesian approach.

The aim of this paper is closely related to theirs. First, we show how reliable data for Monte Carlo experiments can be generated using parametric distance functions and a flexible translog technology. We derive the complete set of necessary and sufficient conditions to generate data in the case of a simple two-input two-output production function, which may be generalized to higher dimensionality problems. Second, the translog distance function technology allows us to impose variable returns to scale elasticities and therefore to explore the economic concept of scale efficiency measurement. This is an important issue in DEA studies and it is rare to see any DEA application that does not report scale efficiency results. However, to date, most of the Monte Carlo studies exploring DEA properties have relied on the Cobb–Douglas, fixed returns to scale, technology for data generation and have not been able to validate the accuracy of scale efficiency results in relation to varying sample characteristics. Finally, we carry out in this paper a Monte Carlo experiment in order to show the steps involved in the data generation process. This simulation analyzes how the DEA average technical and scale efficiency estimations approximate the true designed values when varying sample size and output composition.

The sections of the paper are organized as follows. Section 2 reviews the main properties and characteristics of parametric output distance functions. In Section 3 we derive the necessary and sufficient conditions for the monotonicity and convexity properties to be fulfilled and Section 4 illustrates how these conditions apply in the two-output two-input setting. Section 5 describes the steps to follow in the experimental design and Section 6 presents a complete example, including details of the data generation process and experimentation results regarding technical and scale efficiency. The final section focuses on the main conclusions and directions for further research.

2. The translog output distance function

Defining a vector of inputs x = (x_1, ..., x_K) ∈ R_+^K and a vector of outputs y = (y_1, ..., y_M) ∈ R_+^M, the feasible multi-input multi-output production technology can be described by the output possibility set P(x) = {y : x can produce y}, which is assumed to satisfy the set of axioms described in Färe and Primont (1995). This technology can also be represented by the output distance function proposed by Shepard (1970):

D_O(x, y) = \inf\{\theta : \theta > 0, \; (x, y/\theta) \in P(x)\}.

If D_O(x, y) ≤ 1 then (x, y) belongs to the production set P(x). In addition, D_O(x, y) = 1 if y is located on the outer boundary of the output possibility set. Regularity conditions assume that P(x) is non-decreasing, linearly homogeneous and convex in outputs, and non-decreasing and quasi-convex in inputs. In order to estimate the distance function in a parametric setting, a translog functional form is assumed. According to Coelli and Perelman (1999, 2000), this specification fulfills a set of desirable characteristics: it is flexible, easy to derive and allows the imposition of homogeneity. The translog distance function specification adopted here for the case of K inputs and M outputs is

\ln D_{Oi}(x, y) = \alpha_0 + \sum_{m=1}^{M} \alpha_m \ln y_{mi} + \tfrac{1}{2} \sum_{m=1}^{M} \sum_{n=1}^{M} \alpha_{mn} \ln y_{mi} \ln y_{ni} + \sum_{k=1}^{K} \beta_k \ln x_{ki} + \tfrac{1}{2} \sum_{k=1}^{K} \sum_{l=1}^{K} \beta_{kl} \ln x_{ki} \ln x_{li} + \sum_{k=1}^{K} \sum_{m=1}^{M} \delta_{km} \ln x_{ki} \ln y_{mi}, \qquad i = 1, 2, \dots, N,   (1)

where i denotes the ith unit (DMU) in the sample. In order to obtain the production frontier surface, we set D_O(x, y) = 1, which implies ln D_O(x, y) = 0. The parameters of the above output distance function must satisfy a number of restrictions. Symmetry requires

\alpha_{mn} = \alpha_{nm}, \quad m, n = 1, 2, \dots, M; \qquad \beta_{kl} = \beta_{lk}, \quad k, l = 1, 2, \dots, K,

and linear homogeneity of degree +1 in outputs can be imposed in the following way:

\sum_{m=1}^{M} \alpha_m = 1; \qquad \sum_{n=1}^{M} \alpha_{mn} = 0, \quad m = 1, 2, \dots, M; \qquad \sum_{m=1}^{M} \delta_{km} = 0, \quad k = 1, 2, \dots, K.

This latter restriction indicates that distances with respect to the boundary of the production set are measured by radial expansions. Following Shepard (1970), homogeneity in outputs implies:

D_O(x, \omega y) = \omega D_O(x, y), \quad \text{for any } \omega > 0.

Furthermore, according to Lovell et al. (1994), normalizing the output distance function by one of the outputs is equivalent to setting ω = 1/y_M, imposing homogeneity of degree +1, as follows:

D_O(x, y/y_M) = D_O(x, y)/y_M.

For unit i, we can rewrite the above expression as

\ln(D_{Oi}(x, y)/y_{Mi}) = TL(x_i, y_i/y_{Mi}; \alpha, \beta, \delta), \quad i = 1, 2, \dots, N,

where

TL(x_i, y_i/y_{Mi}; \alpha, \beta, \delta) = \alpha_0 + \sum_{m=1}^{M-1} \alpha_m \ln(y_{mi}/y_{Mi}) + \tfrac{1}{2} \sum_{m=1}^{M-1} \sum_{n=1}^{M-1} \alpha_{mn} \ln(y_{mi}/y_{Mi}) \ln(y_{ni}/y_{Mi}) + \sum_{k=1}^{K} \beta_k \ln x_{ki} + \tfrac{1}{2} \sum_{k=1}^{K} \sum_{l=1}^{K} \beta_{kl} \ln x_{ki} \ln x_{li} + \sum_{k=1}^{K} \sum_{m=1}^{M-1} \delta_{km} \ln x_{ki} \ln(y_{mi}/y_{Mi}).   (2)

And rearranging terms:

-\ln(y_{Mi}) = TL(x_i, y_i/y_{Mi}; \alpha, \beta, \delta) - \ln D_{Oi}(x, y),

where -ln D_Oi(x, y) corresponds to the radial distance function from the boundary. Hence we can set u_i = -ln D_Oi(x, y) and add a term v_i capturing noise to obtain the Battese and Coelli (1988) version of the traditional stochastic frontier model proposed by Aigner et al. (1977) and Meeusen and van den Broeck (1977):

-\ln(y_{Mi}) = TL(x_i, y_i/y_{Mi}; \alpha, \beta, \delta) + \varepsilon_i, \quad i = 1, 2, \dots, N, \qquad \varepsilon_i = v_i - u_i,

where u_i = -ln D_Oi(x, y), the distance to the boundary set, is a non-negative random term assumed to be independently distributed as truncations at zero of the N(μ, σ_u²) distribution, and the term v_i is assumed to be a two-sided random (stochastic) disturbance designated to account for statistical noise and distributed iid N(0, σ_v²). Both terms are distributed independently of each other (σ_uv = 0).
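Before turning to scale efficiency, a small sketch may help fix notation (it is ours, not part of the paper): it evaluates ln D_O(x, y) for arbitrary parameter arrays in the form of Eq. (1). The parameter values below are placeholders, chosen only to respect the symmetry and homogeneity restrictions; α_0 and the evaluation point are arbitrary.

```python
import numpy as np

def ln_distance_translog(x, y, a0, a, A, b, B, d):
    """ln D_O(x, y) for the translog output distance function of Eq. (1).

    x : (K,) inputs, y : (M,) outputs
    a0 : alpha_0;  a : (M,) alpha_m;  A : (M, M) symmetric alpha_mn
    b  : (K,) beta_k;  B : (K, K) symmetric beta_kl;  d : (K, M) delta_km
    """
    lx, ly = np.log(x), np.log(y)
    return (a0 + a @ ly + 0.5 * ly @ A @ ly
            + b @ lx + 0.5 * lx @ B @ lx
            + lx @ d @ ly)

# Hypothetical two-input two-output parameter set obeying symmetry and the
# homogeneity restrictions (rows of A and d sum to zero, sum(a) = 1).
a0 = -1.0
a  = np.array([0.5, 0.5])
A  = np.array([[0.25, -0.25], [-0.25, 0.25]])
b  = np.array([-1.5, -0.6])
B  = np.array([[0.4, -0.1], [-0.1, 0.1]])
d  = np.array([[0.05, -0.05], [-0.05, 0.05]])

x = np.array([10.0, 10.0])
y = np.array([2.0, 2.0])
# Values below 0 indicate a point inside the frontier; 0 indicates a frontier point.
print(ln_distance_translog(x, y, a0, a, A, b, B, d))
```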

2.1. Scale efficiency

Following Balk (2001), the scale elasticity value at any data point of the output distance function described in Eq. (1) is

\phi_{Oi}(x_i, y_i) = -\sum_{k=1}^{K} \frac{\partial \ln D_{Oi}(x_i, y_i)}{\partial \ln x_k},   (3)

where φ_Oi(x_i, y_i) denotes the (output distance function based) scale elasticity value of DMU i at point (x_i, y_i). Its value will be greater than, equal to or lower than 1 for increasing, constant or decreasing returns to scale, respectively. In other words, compared with the simple Cobb–Douglas constant elasticity technology, the translog function allows for a variable returns to scale technology, as is the case under DEA. But as Ray (1998) noted, the magnitude of scale elasticity does not directly reveal anything about the level of scale efficiency. Following Balk (2001), the corresponding scale efficiency value can be obtained through the following expression:

\ln SE_{Oi}(x_i, y_i) = -\frac{(\phi_{Oi}(x_i, y_i) - 1)^2}{2\beta},   (4)

where SE_Oi(x_i, y_i) denotes the output scale efficiency and β = Σ_{k=1}^{K} Σ_{l=1}^{K} β_kl; its value will be 1 for local constant returns to scale and lower than 1 otherwise.

3. Regularity conditions in a well-behaved output distance function: A review

One serious drawback of applied production studies is that most of the time the underlying technology is unknown. However, we learn from microeconomic foundations that a well-behaved production function must fulfill a number of desirable properties. Färe and Primont (1995) provide the general regularity properties for output distance functions: monotonicity (non-decreasing in outputs), convexity and homogeneity of degree +1 in outputs, and non-increasing and quasi-convex in inputs.[2] These technological constraints rely on economic theory, but real circumstances or a legal framework could relax these assumptions or make them more restrictive.[3] Regardless of whether or not these microeconomic properties hold in real production situations, they impose desirable assumptions for the design of experimental data generation. Following O'Donnell and Coelli (2005), monotonicity and curvature conditions involve constraints on the distance function partial derivatives with respect to inputs[4]:

s_k = \frac{\partial \ln D}{\partial \ln x_k} = \beta_k + \sum_{l=1}^{K} \beta_{kl} \ln x_l + \sum_{m=1}^{M} \delta_{km} \ln y_m,

and with respect to outputs:

r_m = \frac{\partial \ln D}{\partial \ln y_m} = \alpha_m + \sum_{n=1}^{M} \alpha_{mn} \ln y_n + \sum_{k=1}^{K} \delta_{km} \ln x_k.

[2] O'Donnell and Coelli (2005, Footnote 1) contribute a clarification of a typographical error in Färe and Primont (1995).
[3] What we have in mind here is the case of input congestion, which breaks the monotonicity assumption, e.g. the case of a highly regulated firm whose manager hires new workers regardless of productivity and efficiency issues. It could also happen that the assumption of homogeneity of degree +1 in outputs is unrealistic, e.g. if an industry is not interested in producing more of one specific output.
[4] From now on, the notation is simplified by eliminating the i subindexes and denoting D_O as D.

Monotonicity implies two conditions on the distance function partial derivatives. For D to be non-increasing in x, it is required that:

f_k = \frac{\partial D}{\partial x_k} = \frac{\partial \ln D}{\partial \ln x_k} \frac{D}{x_k} = s_k \frac{D}{x_k} \le 0 \iff s_k \le 0,

while for D to be non-decreasing in y, it is required that:

h_m = \frac{\partial D}{\partial y_m} = \frac{\partial \ln D}{\partial \ln y_m} \frac{D}{y_m} = r_m \frac{D}{y_m} \ge 0 \iff r_m \ge 0.

For quasi-convexity in x, it is necessary to evaluate the corresponding bordered Hessian matrix on inputs:

F = \begin{bmatrix} 0 & f_1 & \cdots & f_K \\ f_1 & f_{11} & \cdots & f_{1K} \\ \vdots & \vdots & \ddots & \vdots \\ f_K & f_{1K} & \cdots & f_{KK} \end{bmatrix},

where

f_{kl} = \frac{\partial^2 D}{\partial x_k \partial x_l} = \frac{\partial f_k}{\partial x_l} = \frac{\partial (s_k D / x_k)}{\partial x_l} = (\beta_{kl} + s_k s_l - \delta_{kl} s_k) \frac{D}{x_k x_l},

with δ_kl = 1 if k = l and 0 otherwise. For D to be quasi-convex in x over the non-negative orthant (the n-dimensional analogue of the non-negative quadrant), a sufficient condition is that all principal minors of F are negative. Finally, for convexity in y, we evaluate the Hessian matrix on outputs:

H = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1M} \\ h_{12} & h_{22} & \cdots & h_{2M} \\ \vdots & \vdots & \ddots & \vdots \\ h_{1M} & h_{2M} & \cdots & h_{MM} \end{bmatrix},

where

h_{mn} = \frac{\partial^2 D}{\partial y_m \partial y_n} = \frac{\partial h_m}{\partial y_n} = \frac{\partial (r_m D / y_m)}{\partial y_n} = (\alpha_{mn} + r_m r_n - \delta_{mn} r_m) \frac{D}{y_m y_n}.

According to Lau (1978), the function D will be convex in y over the non-negative orthant if and only if H is positive semi-definite. Thus, D will be convex in y if and only if all the principal minors of H are non-negative.

4. Necessary and sufficient conditions to generate regular data in a two-input two-output setting

In this section we are concerned with the generation of data in the simplest two-input two-output case, for which the output distance function can be defined as follows:

\ln D_O = \alpha_0 + \alpha_1 \ln y_1 + \alpha_2 \ln y_2 + 0.5\alpha_{11} (\ln y_1)^2 + 0.5\alpha_{22} (\ln y_2)^2 + 0.5\alpha_{12} \ln y_1 \ln y_2 + 0.5\alpha_{21} \ln y_2 \ln y_1 + \beta_1 \ln x_1 + \beta_2 \ln x_2 + 0.5\beta_{11} (\ln x_1)^2 + 0.5\beta_{22} (\ln x_2)^2 + 0.5\beta_{12} \ln x_1 \ln x_2 + 0.5\beta_{21} \ln x_2 \ln x_1 + \delta_{11} \ln x_1 \ln y_1 + \delta_{12} \ln x_1 \ln y_2 + \delta_{21} \ln x_2 \ln y_1 + \delta_{22} \ln x_2 \ln y_2.   (5)

Moreover, homogeneity of degree +1 requires α_1 + α_2 = 1, α_11 + α_12 = 0, α_22 + α_21 = 0, δ_11 + δ_12 = 0 and δ_21 + δ_22 = 0, and symmetry requires α_12 = α_21 and β_12 = β_21. Furthermore, in order to avoid negative logarithm values, we assume that input and output quantities are greater than or equal to one: y_1, y_2, x_1, x_2 ≥ 1. After taking these restrictions into account in Eq. (5), and choosing y_1 as numeraire, we obtain:

\ln D - \ln y_1 = \alpha_0 + \alpha_2 \ln(y_2/y_1) + 0.5\alpha_{22} [\ln(y_2/y_1)]^2 + \beta_1 \ln x_1 + \beta_2 \ln x_2 + 0.5\beta_{11} (\ln x_1)^2 + 0.5\beta_{22} (\ln x_2)^2 + \beta_{12} \ln x_1 \ln x_2 + \delta_{12} \ln x_1 \ln(y_2/y_1) + \delta_{22} \ln x_2 \ln(y_2/y_1).   (6)

Therefore, the monotonicity conditions on inputs can be written as follows:

s_1 = \frac{\partial \ln D}{\partial \ln x_1} = \beta_1 + \beta_{11} \ln x_1 + \beta_{12} \ln x_2 + \delta_{11} \ln y_1 + \delta_{12} \ln y_2 \le 0,   (7)
s_2 = \frac{\partial \ln D}{\partial \ln x_2} = \beta_2 + \beta_{22} \ln x_2 + \beta_{12} \ln x_1 + \delta_{21} \ln y_1 + \delta_{22} \ln y_2 \le 0.   (8)

For quasi-convexity in inputs, we must calculate the input bordered Hessian:

F_D = \begin{bmatrix} 0 & f_1 & f_2 \\ f_1 & f_{11} & f_{12} \\ f_2 & f_{21} & f_{22} \end{bmatrix},

where

f_1 = \frac{\partial D}{\partial x_1} = s_1 (D/x_1), \qquad f_2 = \frac{\partial D}{\partial x_2} = s_2 (D/x_2),
f_{11} = \frac{\partial^2 D}{\partial x_1 \partial x_1} = (\beta_{11} + s_1 s_1 - s_1)(D/x_1 x_1),
f_{12} = f_{21} = \frac{\partial^2 D}{\partial x_1 \partial x_2} = (\beta_{12} + s_1 s_2)(D/x_1 x_2),
f_{22} = \frac{\partial^2 D}{\partial x_2 \partial x_2} = (\beta_{22} + s_2 s_2 - s_2)(D/x_2 x_2).

For D to be quasi-convex in x over the non-negative orthant, a sufficient condition is that all the principal minors of F are negative. Therefore, the first principal minor |F_1| = -f_1^2 will always be negative. The second principal minor can be defined as follows:

|F_2| = f_1 f_{12} f_2 - f_1 f_1 f_{22} + f_2 f_1 f_{21} - f_2 f_{11} f_2 = 2 f_1 f_2 f_{12} - f_1^2 f_{22} - f_2^2 f_{11} < 0.

Therefore, the expression above is equivalent to:

2 s_1 s_2 \frac{D^3}{x_1^2 x_2^2}(\beta_{12} + s_1 s_2) - s_1^2 \frac{D^3}{x_1^2 x_2^2}(\beta_{22} + s_2^2 - s_2) - s_2^2 \frac{D^3}{x_1^2 x_2^2}(\beta_{11} + s_1^2 - s_1) < 0.

Operating on this expression and multiplying the result by -1 to change the sign of the inequality, we obtain the necessary and sufficient condition to guarantee quasi-convexity in inputs:

\beta_{22} s_1^2 - s_2 s_1^2 + \beta_{11} s_2^2 - s_1 s_2^2 - 2\beta_{12} s_1 s_2 > 0.   (9)

In Eq. (9), the term -s_2 s_1^2 - s_1 s_2^2 is always positive by construction. Therefore, it is easy to show that one sufficient condition to guarantee quasi-convexity in inputs is that β_22, β_11 > 0 and β_12 < 0. Another sufficient condition to guarantee quasi-convexity in inputs is to impose the negativity of all input parameters (β_1, β_2, β_11, β_22, β_12 < 0). Under this assumption, all the terms with a negative coefficient in Eq. (9), as well as the term -2β_12 s_1 s_2, are positive. Therefore, it is easy to show that s_2 < β_22 and s_1 < β_11 and, given that positive values always compensate for negative ones, the second principal minors will therefore be less than zero, as expected.

The imposition of the beta values is one of the main decisions that the experimental researcher has to make here, as this will have direct consequences on the design of scale elasticity and the computation of scale efficiency scores. From (3) and (4), and for the two-output two-input case, the scale elasticity of the output distance function is given by

\phi_{Oi}(x_i, y_i) = -(s_1 + s_2),   (10)

and the logarithm of the scale efficiency value is computed as follows:

\ln SE_{Oi}(x_i, y_i) = -\frac{(\phi_{Oi}(x_i, y_i) - 1)^2}{2(\beta_{11} + \beta_{22} + \beta_{12})}.   (11)

According to Balk (2001, p. 168), we notice that for SE_Oi(x_i, y_i) < 1 (increasing or decreasing returns to scale inefficiency) and for SE_Oi(x_i, y_i) = 1 (constant returns to scale efficiency) it is required that (β_11 + β_22 + β_12) > 0. The monotonicity conditions on outputs are

r_1 = \frac{\partial \ln D}{\partial \ln y_1} = \alpha_1 + \alpha_{11} \ln y_1 + \alpha_{12} \ln y_2 + \delta_{11} \ln x_1 + \delta_{21} \ln x_2 \ge 0,   (12)
r_2 = \frac{\partial \ln D}{\partial \ln y_2} = \alpha_2 + \alpha_{22} \ln y_2 + \alpha_{21} \ln y_1 + \delta_{12} \ln x_1 + \delta_{22} \ln x_2 \ge 0.   (13)

Convexity in outputs over the non-negative orthant requires that the Hessian matrix in outputs be positive semi-definite. The two-output Hessian matrix and its components are as follows:

H = \begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{bmatrix},

where

h_{11} = \frac{\partial^2 D}{\partial y_1 \partial y_1} = \frac{\partial (r_1 D / y_1)}{\partial y_1} = (\alpha_{11} + r_1 r_1 - r_1)(D/y_1 y_1),
h_{12} = \frac{\partial^2 D}{\partial y_1 \partial y_2} = \frac{\partial (r_1 D / y_1)}{\partial y_2} = (\alpha_{12} + r_1 r_2)(D/y_1 y_2),
h_{21} = \frac{\partial^2 D}{\partial y_2 \partial y_1} = \frac{\partial (r_2 D / y_2)}{\partial y_1} = (\alpha_{21} + r_2 r_1)(D/y_2 y_1),
h_{22} = \frac{\partial^2 D}{\partial y_2 \partial y_2} = \frac{\partial (r_2 D / y_2)}{\partial y_2} = (\alpha_{22} + r_2 r_2 - r_2)(D/y_2 y_2).

According to O'Donnell and Coelli (2005, p. 501), in the case of two outputs the Hessian matrix will be positive semi-definite if and only if

\alpha_{11} \ge r_1 r_2.   (14)
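The conditions above can be verified numerically for any candidate parameter set and data point. The following sketch is our own reading aid, not the authors' code; it evaluates the first-order log-derivatives and conditions (7)–(9) and (12)–(14) for the two-input two-output translog. The parameter values in the example call anticipate the numerical illustration of Section 4.1 (with the signs as reconstructed there); the evaluation point is an arbitrary placeholder.

```python
import numpy as np

def regularity_check(lx1, lx2, ly1, ly2, p):
    """Monotonicity, quasi-convexity in inputs and convexity in outputs
    for the two-input two-output translog of Eq. (5).
    p: dict of parameters; symmetry (a12 = a21, b12 = b21) is assumed."""
    # Eqs. (7), (8), (12), (13): first-order log-derivatives
    s1 = p["b1"] + p["b11"] * lx1 + p["b12"] * lx2 + p["d11"] * ly1 + p["d12"] * ly2
    s2 = p["b2"] + p["b22"] * lx2 + p["b12"] * lx1 + p["d21"] * ly1 + p["d22"] * ly2
    r1 = p["a1"] + p["a11"] * ly1 + p["a12"] * ly2 + p["d11"] * lx1 + p["d21"] * lx2
    r2 = p["a2"] + p["a22"] * ly2 + p["a12"] * ly1 + p["d12"] * lx1 + p["d22"] * lx2
    phi = -(s1 + s2)                                                     # Eq. (10)
    ln_se = -(phi - 1.0) ** 2 / (2 * (p["b11"] + p["b22"] + p["b12"]))   # Eq. (11)
    return {
        "monotonic_x": s1 <= 0 and s2 <= 0,                              # Eqs. (7)-(8)
        "monotonic_y": r1 >= 0 and r2 >= 0,                              # Eqs. (12)-(13)
        "quasiconvex_x": (p["b22"] * s1**2 - s2 * s1**2                  # Eq. (9)
                          + p["b11"] * s2**2 - s1 * s2**2
                          - 2 * p["b12"] * s1 * s2) > 0,
        "convex_y": p["a11"] >= r1 * r2,                                 # Eq. (14)
        "scale_elasticity": phi,
        "scale_efficiency": float(np.exp(ln_se)),
    }

params = dict(a1=0.5, a2=0.5, a11=0.25, a22=0.25, a12=-0.25,
              b1=-1.5, b2=-0.6, b11=0.4, b22=0.1, b12=-0.1,
              d11=0.05, d12=-0.05, d21=-0.05, d22=0.05)
print(regularity_check(np.log(20.0), np.log(10.0), np.log(4.0), np.log(3.0), params))
```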

4.1. A numerical illustration

Given the empirical or theoretical problem we want to address, it is possible to easily impose values for the different parameters fulfilling the necessary and sufficient regularity conditions. However, in practice, many methods can be followed in order to perform the experimental design, and it is not the aim of this paper to explore all these possibilities. Our main objective is to point out that the relevant variables involved in the data generation process are the translog parameters as well as the variations in the ratios of outputs and in the ratios of inputs (or differences in outputs and differences in inputs, if variables are expressed in logarithms). For illustration purposes, let us now assume that we want to explore DEA properties using the translog output distance function defined in Eq. (5) and that, as a starting point, we make the following choices: β_11, β_22 > 0, β_12 < 0 (guaranteeing that Eq. (9) is fulfilled), α_11 = α_22 = -α_12 and δ_11 = δ_22 = -δ_12 = -δ_21. Furthermore, we want to evaluate how DEA measures scale inefficiency under possible local constant, decreasing or increasing returns to scale. In order to deal with this problem, we must impose parameter values and plausible input and output logarithm values fulfilling conditions (7), (8), (12), (13) and (14), as follows:


s_1 = \beta_1 + \beta_{11} \ln x_1 + \beta_{12} \ln x_2 + \delta_{11} (\ln y_1 - \ln y_2) \le 0,   (7')
s_2 = \beta_2 + \beta_{22} \ln x_2 + \beta_{12} \ln x_1 + \delta_{11} (\ln y_2 - \ln y_1) \le 0,   (8')
r_1 = \alpha_1 + \alpha_{11} (\ln y_1 - \ln y_2) + \delta_{11} (\ln x_1 - \ln x_2) \ge 0,   (12')
r_2 = \alpha_2 + \alpha_{22} (\ln y_2 - \ln y_1) + \delta_{11} (\ln x_2 - \ln x_1) \ge 0,   (13')
\alpha_{11} \ge r_1 r_2.   (14')

To do this, we arbitrarily define β_1 = -1.5, β_2 = -0.6, β_11 = 0.4, β_22 = 0.1, β_12 = -0.1, δ_11 = δ_22 = 0.05, δ_12 = δ_21 = -0.05, α_1 = α_2 = 0.5 and α_11 = α_22 = 0.25. Thus, the above conditions correspond to

s_1 = -1.5 + 0.1(\ln x_1 - \ln x_2) + 0.3 \ln x_1 + 0.05(\ln y_1 - \ln y_2) \le 0,
s_2 = -0.6 + 0.1(\ln x_2 - \ln x_1) + 0.05(\ln y_2 - \ln y_1) \le 0,
r_1 = 0.5 + 0.25(\ln y_1 - \ln y_2) + 0.05(\ln x_1 - \ln x_2) \ge 0,
r_2 = 0.5 + 0.25(\ln y_2 - \ln y_1) + 0.05(\ln x_2 - \ln x_1) \ge 0,
0.25 \ge r_1 r_2.

These conditions must be fulfilled in order to assure regularity. In this example, the imposition of α_11 = 0.25 and the term 0.3 ln x_1 force the absolute difference between the logarithms of the outputs and between the logarithms of the inputs to belong to a set of plausible values. In practice, when one ratio is fixed it will condition the other. For example, if we set x_1 ≤ 50.22, once we fix the ratio of outputs, the plausible ratio of inputs is automatically derived. We can illustrate this result with several examples:

1. If |ln y_2 - ln y_1| = 1, then |ln x_2 - ln x_1| ≤ 2.75; equivalently, if (ln y_2 - ln y_1) ∈ [-1, +1], then (ln x_2 - ln x_1) ∈ [-2.75, +2.75].
2. If |ln y_2 - ln y_1| = 1.5, then |ln x_2 - ln x_1| ≤ 2.5.
3. If |ln y_2 - ln y_1| = 0.5, then |ln x_2 - ln x_1| ≤ 3.

In this case,[5] if we generate the input and output data in logarithms using a uniform distribution over the defined interval guaranteeing the regularity conditions, the real values will also be constrained inside the exponential of the interval. However, as we suggested above, it is possible to proceed the other way around: to generate in a first step plausible exogenous output and input logarithm intervals (output ratios and input ratios), and in a second step to derive the range of valid parameters and input and output values fulfilling the regularity conditions.

At this point, it is worth noting again that the necessary and sufficient conditions provided in this paper for the input, output and parameter values of the regular distance function, as well as the example presented in the following section, are given mainly for illustration purposes. Our main objective is to illustrate how to use the translog output distance function in order to generate regular data easily for Monte Carlo experimentation purposes.

[5] Note the importance of the 0.3 ln x_1 term in the s_1 equation. For example, the condition that |ln y_2 - ln y_1| = 1.5 implies |ln x_2 - ln x_1| ≤ 2.5 is only fulfilled when x_1 ≤ 50.22. Operating in this case, s_1 = -1.5 + 0.1(2.5) + 0.3(3.9164) + 0.05(1.5) = -0.0000759939 < 0.

5. Steps in experimental design for generating regular data

In order to carry out the experimental design, the first step is the selection of the parameters of the translog production function described in Eq. (5), as well as the definition of a meaningful distribution of the output and input ratios, and of their logarithms, fulfilling Eqs. (7), (8), (12), (13) and (14). As we have indicated previously, the parameters will impose the maximum and minimum values for the exogenous inputs and outputs in logarithms, as well as the range of scale elasticity and scale inefficiency defined by Eqs. (10) and (11). Now, if we choose ln y_1 as numeraire, we can calculate a value for -ln y_1^* on the production frontier using:

-\ln(y_1^*) = \alpha_0 + \alpha_1 \ln(y_2/y_1) + \tfrac{1}{2}\alpha_{11} [\ln(y_2/y_1)]^2 + \beta_1 \ln x_1 + \beta_2 \ln x_2 + \tfrac{1}{2}\beta_{11} (\ln x_1)^2 + \tfrac{1}{2}\beta_{22} (\ln x_2)^2 + \beta_{12} \ln x_1 \ln x_2 + \delta_{12} \ln x_1 \ln(y_2/y_1) + \delta_{22} \ln x_2 \ln(y_2/y_1),   (15)

where the value of α_0 must be imposed with the restriction that -ln y_1^* - α_0 < 0 for all DMUs, in order to avoid negative production values. Second, we can calculate ln y_1^* and ln y_2^*, and hence y_1^* and y_2^*, where the asterisks represent the output values on the production frontier. Third, the distribution of technical inefficiency values has to be defined within the interval [1; ∞). A recommended possibility is to generate ln D = u ~ |N(0, σ_u²)|, where efficient units receive D = 1 (ln D = 0). The fourth step consists of generating a normal distribution for the random noise v, v ~ N(0, σ_v²), by definition distributed independently of the inefficiency term D. Here it is possible to relax the hypothesis of a radial random disturbance affecting the two outputs in the same way, and to generate two independent random noise terms, v_1 ~ N(0, σ_{v1}²) and v_2 ~ N(0, σ_{v2}²), one for each output.[6] The fifth step is to generate the observed outputs capturing technical inefficiency. In order to do this, we multiply the output values on the frontier, y_1^* and y_2^*, by 1/exp(ln D) in order to generate outputs that take potential inefficiency into account:

y_1^{**} = y_1^* \cdot \frac{1}{\exp(\ln D)} \quad \text{and} \quad y_2^{**} = y_2^* \cdot \frac{1}{\exp(\ln D)}.

The sixth and last step is to introduce random noise, independently for each output, in order to obtain the final output values that we will employ in the Monte Carlo experiment:

y_1 = y_1^{**} \cdot \frac{1}{\exp(v_1)} \quad \text{and} \quad y_2 = y_2^{**} \cdot \frac{1}{\exp(v_2)}.
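The six steps can be strung together in a few lines. The following is a minimal end-to-end sketch (our own illustration, not the authors' code): the translog parameter values mirror those reported in Section 6, while α_0, the noise scales and the uniform draw of the output log-difference are assumptions made for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: translog parameters (signs as reconstructed from the regularity
# discussion; a0 is a placeholder) and exogenous draws.
p = dict(a0=-1.0, a1=0.5, a11=0.5, b1=-1.5, b2=-0.6,
         b11=0.4, b22=0.1, b12=-0.1, d12=-0.05, d22=0.05)
n = 200
lx1, lx2 = np.log(rng.uniform(5, 50, n)), np.log(rng.uniform(5, 50, n))
dly = rng.uniform(-1.5, 1.5, n)            # ln(y2/y1), Experiment 1 style

# Step 2: frontier value of -ln(y1*) from Eq. (15), then y1*, y2*.
neg_ln_y1 = (p["a0"] + p["a1"] * dly + 0.5 * p["a11"] * dly**2
             + p["b1"] * lx1 + p["b2"] * lx2
             + 0.5 * p["b11"] * lx1**2 + 0.5 * p["b22"] * lx2**2
             + p["b12"] * lx1 * lx2
             + p["d12"] * lx1 * dly + p["d22"] * lx2 * dly)
y1_star = np.exp(-neg_ln_y1)
y2_star = y1_star * np.exp(dly)

# Steps 3-4: half-normal inefficiency (a share of DMUs kept on the frontier)
# and independent normal noise for each output.
u = np.abs(rng.normal(0.0, 0.3, n))
u[rng.random(n) < 0.25] = 0.0              # 25% of DMUs on the true frontier
v1, v2 = rng.normal(0, 0.1, n), rng.normal(0, 0.1, n)

# Steps 5-6: contract frontier outputs by the inefficiency factor, then add noise.
y1 = y1_star / np.exp(u) / np.exp(v1)
y2 = y2_star / np.exp(u) / np.exp(v2)
x1, x2 = np.exp(lx1), np.exp(lx2)
```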

From this well-behaved production function, we can extract the required number of samples in order to perform Monte Carlo experimentation in a multi-input multi-output setting. The proposed methodology can be generalized to include more dimensions. For example, in the case of three outputs it would be necessary to generate exogenously two ratios of outputs, say ln(y_2/y_1) and ln(y_3/y_1), analogously to the two-input two-output case discussed here, which would impose the range of output parameter values, and so on.[7]

6. Data generation and experiment results

In order to illustrate the ideas developed above we performed a Monte Carlo experiment. Our main purpose was to evaluate the accuracy of DEA technical and scale efficiency measurement when sample size and output ratios vary.

In order to conduct the Monte Carlo experiment, we first needed to define the production function.

[6] This assumption allows us to explore DEA properties when random shocks affect each output differently. In any case, in the data generating process the researcher may decide whether to make this assumption or to include a single random term affecting both outputs identically.
[7] Increasing the number of inputs and/or outputs also increases the number of regularity conditions the generated data have to fulfill. The procedure described here can be extended in a straightforward way to higher multi-input multi-output dimensions, although generating regular data in these cases could become cumbersome.


Table 1
Monte Carlo experiments. Two-output two-input production function.(a)

Sample size (DMUs) | Generated data: TE | Generated data: SE | DEA: CRS | DEA: VRS | DEA: SE | Spearman's correlation: CRS | Spearman's correlation: VRS | Spearman's correlation: SE

Experiment 1: Output differences |ln y2 - ln y1| <= 1.5 (b)
20  | 0.8520 (0.0246) | 0.9314 (0.0203) | 0.9008 (0.0242) | 0.9485 (0.0189) | 0.9495 (0.0182) | 0.7073 (0.1277) | 0.5469 (0.1811) | 0.2497 (0.2190)
40  | 0.8497 (0.0166) | 0.9310 (0.0146) | 0.8710 (0.0168) | 0.9241 (0.0155) | 0.9429 (0.0118) | 0.7809 (0.0714) | 0.7261 (0.0874) | 0.3333 (0.1881)
80  | 0.8481 (0.0131) | 0.9304 (0.0109) | 0.8513 (0.0132) | 0.9022 (0.0136) | 0.9444 (0.0089) | 0.8273 (0.0275) | 0.7749 (0.0775) | 0.3633 (0.1088)
120 | 0.8475 (0.0076) | 0.9277 (0.0079) | 0.8341 (0.0106) | 0.8862 (0.0070) | 0.9421 (0.0075) | 0.8416 (0.0212) | 0.8475 (0.0529) | 0.4282 (0.0921)
200 | 0.8494 (0.0055) | 0.9344 (0.0052) | 0.8265 (0.0093) | 0.8747 (0.0065) | 0.9459 (0.0067) | 0.8540 (0.0177) | 0.8907 (0.0296) | 0.4790 (0.0602)
400 | 0.8427 (0.0041) | 0.9334 (0.0030) | 0.8133 (0.0069) | 0.8564 (0.0045) | 0.9585 (0.0048) | 0.8820 (0.0117) | 0.9220 (0.0167) | 0.5163 (0.0505)

Experiment 2: Output differences |ln y2 - ln y1| = 0 (b)
20  | 0.8485 (0.0259) | 0.9359 (0.0202) | 0.8550 (0.0271) | 0.9167 (0.0236) | 0.9339 (0.0217) | 0.7988 (0.1088) | 0.6922 (0.1370) | 0.3606 (0.2429)
40  | 0.8427 (0.0181) | 0.9424 (0.0151) | 0.8317 (0.0228) | 0.8875 (0.0185) | 0.9390 (0.0154) | 0.8624 (0.0519) | 0.8178 (0.0780) | 0.4874 (0.1381)
80  | 0.8481 (0.0131) | 0.9386 (0.0084) | 0.8250 (0.0132) | 0.8768 (0.0130) | 0.9428 (0.0100) | 0.8685 (0.0341) | 0.8562 (0.0554) | 0.5402 (0.1116)
120 | 0.8475 (0.0103) | 0.9353 (0.0077) | 0.8156 (0.0097) | 0.8664 (0.0089) | 0.9447 (0.0089) | 0.8719 (0.0337) | 0.8978 (0.0374) | 0.6751 (0.0700)
200 | 0.8468 (0.0086) | 0.9413 (0.0065) | 0.8134 (0.0090) | 0.8593 (0.0058) | 0.9477 (0.0055) | 0.8885 (0.0251) | 0.9285 (0.0195) | 0.7149 (0.0492)
400 | 0.8482 (0.0027) | 0.9389 (0.0037) | 0.8093 (0.0032) | 0.8512 (0.0028) | 0.9508 (0.0046) | 0.8907 (0.0115) | 0.9538 (0.0140) | 0.7760 (0.0288)

(a) Standard deviations in brackets.
(b) Generated output (in logs) differences, before addition of the random noises v1 and v2.

For the two-input two-output case, we used the translog output distance function described in Eq. (5). In accordance with Eq. (15), we defined α_0 = 1, α_1 = 0.5, α_11 = 0.5, β_1 = -1.5, β_2 = -0.6, β_11 = 0.4, β_22 = 0.1, β_12 = -0.1, δ_11 = δ_22 = 0.05 and δ_12 = δ_21 = -0.05. The second step was to define the exogenous ratios between the two outputs and between the two inputs. We defined two scenarios. In the first specification (Experiment 1) we let |ln y_2 - ln y_1| ≤ 1.5 and |ln x_2 - ln x_1| ≤ 2.5; the endowments of inputs x_1 and x_2 were generated randomly and independently using a uniform distribution over the interval [5, 50], which assures that |ln x_2 - ln x_1| ≤ |ln 50 - ln 5| ≈ 2.3. In the second specification (Experiment 2) we let |ln y_2 - ln y_1| = 0 and |ln x_2 - ln x_1| ≤ 2.5. As we clarify below, although the two outputs on the frontier were equal, in the DEA estimations they showed small differences due to the different random noise terms. Once parameters and output and input logarithms had been generated, we calculated the output values on the frontier, y_1^* and y_2^*.

Third, in order to generate inefficiency, we used a half-normal distribution, ln D = u ~ |N(0, 0.3)|, so that the true distance or technical efficiency could be easily calculated as 1/exp(ln D). In both experiments we allowed 25% of the decision making units to be on the true frontier.[8] Regarding the random statistical perturbation in the production function, we independently generated two random noise terms for each output, v_1 ~ N(0, 0.01) and v_2 ~ N(0, 0.01), allowing random shocks to affect each output in different quantity and direction. With this information we generated the observed outputs y_1 and y_2, as described in Section 5.
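A compact sketch of how the two exogenous scenarios can be drawn is given below (our own illustration; the distribution of the output log-difference within its admissible interval is an assumption, since the paper only states the bounds):

```python
import numpy as np

def draw_scenario(experiment, n, rng):
    """Draw the exogenous inputs and the output log-difference for one replication."""
    x1 = rng.uniform(5, 50, n)          # input endowments, both experiments
    x2 = rng.uniform(5, 50, n)          # |ln x2 - ln x1| <= ln 10 ~ 2.3 by construction
    if experiment == 1:
        dly = rng.uniform(-1.5, 1.5, n) # |ln y2 - ln y1| <= 1.5 on the frontier
    else:
        dly = np.zeros(n)               # Experiment 2: equal frontier outputs
    return np.log(x1), np.log(x2), dly

lx1, lx2, dly = draw_scenario(1, 40, np.random.default_rng(1))
```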

[8] We follow here Bardham et al. (1998), who also place 25% of DMUs on the production frontier, giving a number of reasons for making such an assumption. This assumption can be relaxed or made more restrictive depending on the research objectives, e.g. 30% in Holland and Lee (2002), 12.5% in Ruggiero (1998), 10% in Muñiz et al. (2006), and 0% of DMUs on the production frontier in Pedraja-Chaparro et al. (1999).


Finally, note that under the defined output distance function, the scale elasticity values φ_Oi(x_i, y_i) = -(s_1 + s_2) belong to the [0.9264; 1.6171] interval. As a consequence, the output scale efficiency, SE_Oi(x_i, y_i), belongs to the [0.6212; 1] interval, with 1 indicating CRS scale efficiency.

Each experiment was replicated 100 times for each sample size ranging from 20 to 400 DMUs. In each experiment we solved the constant and variable returns to scale DEA output-oriented models (CRS and VRS, respectively, from now on) proposed by Charnes et al. (1978) and Banker et al. (1984). The average technical (TE) and scale (SE) efficiency scores generated and the DEA-CRS, DEA-VRS and DEA-SE scores computed, as well as the Spearman correlation coefficients between them, are reported in Table 1. But before coming to these results, it is worth noting that our expectation is that technical efficiency scores computed under DEA-VRS will be better correlated with the generated data scores than DEA-CRS, because by construction we impose a variable returns to scale technology, going from increasing to decreasing scale elasticities, on the flexible functional form used to generate the data. Nevertheless, we are interested in DEA-CRS scores for two reasons. On the one hand, under DEA they have to be computed anyway in order to measure scale efficiencies. On the other hand, we are interested in estimating the potential bias implied by the computation of DEA-CRS technical efficiency scores when the true technology is characterized by variable returns to scale.

The results presented in Table 1 can be summarized as follows. First, we can observe in both experiments that, as expected, the estimated average technical efficiencies decrease as the sample size increases and VRS scores are systematically higher than their CRS counterparts. Moreover, mean technical efficiencies are always slightly higher in Experiment 1 than in Experiment 2. Second, DEA-VRS overestimates technical efficiency for small samples but converges to the generated average scores for larger samples. On the contrary, DEA-CRS overestimates less for smaller samples, but underestimates for larger samples.

Table 2
Monte Carlo experiments. Identification of true technical and scale efficient DMUs by DEA. Two-output two-input production function.

Sample size (DMUs) | Generated data: TE = 1 (%) | Generated data: SE = 1 (%) | True TE = 1, DEA TE = 1 (%): CRS | True TE = 1, DEA TE = 1 (%): VRS | True TE < 1, DEA TE = 1 (%): CRS | True TE < 1, DEA TE = 1 (%): VRS | True SE = 1, DEA SE = 1 (%)

Experiment 1: Output differences |ln y2 - ln y1| <= 1.5 (a)
20  | 25.00 | 33.15 | 86.00 | 99.40 | 35.93 | 61.93 | 26.65
40  | 25.00 | 34.78 | 80.10 | 98.60 | 19.47 | 40.30 | 27.65
80  | 25.00 | 34.99 | 69.35 | 96.50 | 11.25 | 28.12 | 29.00
120 | 25.00 | 34.36 | 61.97 | 92.23 |  7.71 | 21.47 | 29.18
200 | 25.00 | 34.41 | 54.04 | 86.94 |  5.22 | 15.33 | 30.70
400 | 25.00 | 33.98 | 42.18 | 76.59 |  2.66 |  9.08 | 31.86

Experiment 2: Output differences |ln y2 - ln y1| = 0 (a)
20  | 25.00 | 46.10 | 76.40 | 100   | 13.40 | 35.93 | 31.80
40  | 25.00 | 45.80 | 63.30 | 97.00 |  7.03 | 23.57 | 38.45
80  | 25.00 | 46.85 | 51.55 | 88.80 |  3.27 | 14.40 | 43.79
120 | 25.00 | 47.25 | 42.73 | 82.20 |  2.31 | 10.53 | 46.43
200 | 25.00 | 47.03 | 34.32 | 72.30 |  1.30 |  7.17 | 50.03
400 | 25.00 | 47.25 | 23.70 | 56.60 |  0.01 |  3.23 | 57.33

(a) Generated output (in logs) differences, before addition of the random noises v1 and v2.

indicated by the Spearman’s rank correlation coefficients between true technical efficiency and DEA-CRS and DEA-VRS computed scores. Both correlation coefficients increase with sample size, VRS faster than CRS. For small samples CRS overcomes VRS but the reversal arrives for larger samples (120 DMU and more). This is a surprising result given that the data is generated assuming a VRS technology. This is, however, the case for high average scale efficiency scores such as those assumed for this experimentation. Third, average DEA scale efficiency (SE) scores are remarkably close, for both experiments and independently of sample sizes, to average scores in the generated data. Rank correlation coefficients are, by contrast, highly sensitive to sample sizes and vary dramatically across both experiments. For instance, for the 40 DMU sample size, Experiment 2 yields a correlation coefficient of 0.4874 as against 0.3333 with Experiment 1. Even for larger samples, this difference in accuracy persists. The explanation of these results can be found, at least partially, checking for the accuracy of DEA to detect full efficient DMUs in the generated data. Table 2 reports the proportion of true technical efficient (TE = 1) and inefficient (TE < 1) DMUs qualified as technical efficient under DEA, and the proportion of true scale efficient (SE = 1) qualified as scale efficient by DEA. First, we observe that by construction the percentage of scale efficient DMU is higher in Experiment 1 than in Experiment 2 for an identical proportion (25%) of technical efficient DMU in generated data. Second, as expected, in both experiments DEA-VRS performs better than DEA-CRS to detect efficient DMUs and the percentage of exact predictions decreases when the sample size grows. Third, the proportion of mistakes done by DEA models in identifying true technical inefficient DMUs as efficient decreases dramatically when sample size increases. As a consequence of that, the percentage of units correctly identified as scale efficient increases with sample size (in last column of Table 2), that is a result in relation with the evolution of rank correlation coefficients shown in Table 1. Finally, from the comparison of both experiments it appears that Experiment 2, characterized by a smaller range of variation in the output ratios, allows a better detection by DEA of the true production technology. Certainly these results will be sensitive to other driving factors not addressed in this experimental design, such as the importance and variance of random noise on each output or the range of scale elasticity and scale efficiency variation allowed. We are also aware that the results will be sensitive to the main assumptions on the technology, e.g. the axiom of strong disposability underlying the output distance function proposed in this paper. Nevertheless,


even with the very simplified experiment we ran here, it appears that the results obtained using a traditional Cobb–Douglas setting cannot be directly extrapolated and generalized to the flexible multi-output multi-input case, at least in small samples.
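For completeness, here is a minimal sketch (ours, not the authors' code) of the output-oriented DEA models used in the experiment, written with scipy's linear programming routine. The commented usage lines at the bottom refer to hypothetical arrays produced by the data-generation sketch of Section 5.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import spearmanr

def dea_output_te(X, Y, vrs=False):
    """Output-oriented DEA technical efficiency scores (1 = efficient).

    X : (K, N) inputs, Y : (M, N) outputs. Returns TE = 1 / phi for each DMU,
    where phi is the maximal radial output expansion (Charnes et al., 1978,
    for CRS; Banker et al., 1984, adds the convexity constraint for VRS).
    """
    K, N = X.shape
    M = Y.shape[0]
    te = np.empty(N)
    for o in range(N):
        # Variables: [phi, lambda_1, ..., lambda_N]; maximize phi.
        c = np.r_[-1.0, np.zeros(N)]
        A_ub = np.zeros((M + K, N + 1))
        b_ub = np.zeros(M + K)
        A_ub[:M, 0] = Y[:, o]          # phi * y_o - Y @ lam <= 0
        A_ub[:M, 1:] = -Y
        A_ub[M:, 1:] = X               # X @ lam <= x_o
        b_ub[M:] = X[:, o]
        A_eq = np.r_[0.0, np.ones(N)].reshape(1, -1) if vrs else None
        b_eq = np.array([1.0]) if vrs else None
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (N + 1), method="highs")
        te[o] = 1.0 / res.x[0]
    return te

# Hypothetical usage on generated data (x1, x2, y1, y2, true_te assumed available):
# te_crs = dea_output_te(np.vstack([x1, x2]), np.vstack([y1, y2]), vrs=False)
# te_vrs = dea_output_te(np.vstack([x1, x2]), np.vstack([y1, y2]), vrs=True)
# se = te_crs / te_vrs                  # DEA scale efficiency
# rho, _ = spearmanr(true_te, te_vrs)   # rank correlation vs generated scores
```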

7. Conclusions and further research

In the field of frontier analysis, several studies have compared the performance of non-parametric and parametric approaches by using Monte Carlo experiments based on a single output Cobb–Douglas technology for data generation. The main explanation as to why authors have not considered multi-output multi-input distance functions and more flexible technologies, e.g. translog, lies in the difficulties encountered in generating data satisfying microeconomic regularity conditions, such as monotonicity and convexity. Moreover, the lack of flexibility of the data generation processes in previous research has been a serious handicap in evaluating, among other aspects, DEA performance in scale inefficiency measurement.

We have shown how parametric output distance functions allow the generation of random data for production processes characterized by multi-output multi-input dimensions. In addition, extending the O'Donnell and Coelli (2005) results, we have derived the necessary and sufficient conditions under which second order translog technologies fulfill regularity conditions. Furthermore, we have shown how a valid range of parameters and of output and input ratio values can be derived which satisfy the regularity conditions. These values would also be useful for practitioners, who could use them as a rule of thumb in empirical studies estimating parametric output distance function technologies.

Finally, the two simple Monte Carlo experiments we carried out show how conclusions obtained in a two-output two-input production function setting differ with output ratios and sample size. In particular, it appears that the correlation between scale efficiency scores, computed with DEA and in the generated data, is lower when the range of output ratios is wider and when the sample size is smaller.

More research is still needed in order to measure the potential bias regarding DEA properties introduced in previous Monte Carlo experimentation based exclusively on single output data, especially where small samples are involved.

Another line of investigation would be to derive regularity conditions for other multi-output multi-input parametric forms, like the quadratic function introduced by Färe et al. (2005) or the hyperbolic distance function introduced by Cuesta and Zofío (2005), in order to use them for data generation purposes. Moreover, we think that the methodology presented here could also be extended to panel data with a time trend, in order to evaluate and enhance the accuracy of Malmquist productivity growth index computations and their decomposition.

Acknowledgements

We thank Luis Orea, Tim Coelli, Chris O'Donnell and two anonymous referees for useful discussion and suggestions. We are also grateful to participants at the IV North American Productivity Workshop and the IX European Workshop on Efficiency and Productivity for their comments. A preliminary version of this article was published in the Working Papers Collection of Fundación de las Cajas de Ahorros (FUNCAS, Spanish Saving Banks Foundation) as Working Paper number 217.

References

Aigner, D.J., Lovell, C.A.K., Schmidt, P., 1977. Formulation and estimation of stochastic production function models. Journal of Econometrics 6, 21–37.
Balk, B.M., 2001. Scale efficiency and productivity change. Journal of Productivity Analysis 15, 159–183.
Banker, R.D., Charnes, A., Cooper, W.W., 1984. Some models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science 30, 1078–1092.
Banker, R., Charnes, A., Cooper, W., Maindiratta, A., 1987. A comparison of data envelopment analysis and translog estimates of production frontiers using simulated observations from a known technology. In: Dogramaci, A., Färe, R. (Eds.), Applications in Modern Production Theory: Inefficiency and Productivity. Kluwer Academic Publishers, Boston, pp. 33–55.
Banker, R., Gadh, V., Gorr, W., 1993. A Monte Carlo comparison of two production frontier estimation methods: corrected ordinary least squares and data envelopment analysis. European Journal of Operational Research 67, 332–343.
Bardham, I.R., Cooper, W.W., Kumbhakar, S.C., 1998. A simulation study of joint uses of data envelopment analysis and statistical regressions for production function estimation and efficiency evaluation. Journal of Productivity Analysis 9, 249–278.
Battese, G.E., Coelli, T.J., 1988. Prediction of firm-level technical efficiencies with a generalised frontier production function and panel data. Journal of Econometrics 38, 387–399.
Bowlin, W., Charnes, A., Cooper, W., Sherman, H., 1985. Data envelopment analysis and regression approaches to efficiency estimation and evaluation. Annals of Operations Research 2, 113–138.
Charnes, A., Cooper, W.W., Rhodes, E., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2 (6), 429–444.
Coelli, T., Perelman, S., 1999. A comparison of parametric and non-parametric distance functions: With application to European railways. European Journal of Operational Research 117, 326–339.
Coelli, T., Perelman, S., 2000. Technical efficiency of European railways: A distance function approach. Applied Economics 32, 1967–1976.
Cuesta, R.A., Zofío, J.L., 2005. Hyperbolic efficiency and parametric distance functions: With application to Spanish savings banks. Journal of Productivity Analysis 24, 31–45.
Färe, R., Primont, D., 1995. Multi-output Production and Duality: Theory and Applications. Kluwer Academic Publishers.
Färe, R., Grosskopf, S., Noh, D.W., Weber, W., 2005. Characteristics of polluting technology: Theory and practice. Journal of Econometrics 126, 469–492.
Gong, B.H., Sickles, R., 1992. Finite sample evidence on the performance of stochastic frontiers and data envelopment analysis using panel data. Journal of Econometrics 51, 259–284.
Holland, D.S., Lee, S.T., 2002. Impacts of random noise and specification on estimates of capacity derived from data envelopment analysis. European Journal of Operational Research 137, 10–21.
Lau, L.J., 1978. Testing and imposing monotonicity, convexity and quasi-convexity constraints. In: Fuss, M., McFadden, D. (Eds.), Production Economics: A Dual Approach to Theory and Applications. North Holland, Amsterdam.
Lovell, C.A.K., Richardson, S., Travers, P., Wood, L.L., 1994. Resources and functionings: A new view of inequality in Australia. In: Eichhorn, W. (Ed.), Models and Measurement of Welfare and Inequality. Springer-Verlag, Berlin, pp. 787–807.
Meeusen, W., van den Broeck, J., 1977. Efficiency estimation from Cobb–Douglas production functions with composed error. International Economic Review 18, 435–444.
Muñiz, M., Paradi, J., Ruggiero, J., Yang, Z., 2006. Evaluating alternative DEA models used to control for non-discretionary inputs. Computers and Operations Research 33, 1173–1183.
O'Donnell, C.J., Coelli, T.J., 2005. A Bayesian approach to imposing curvature on distance functions. Journal of Econometrics 126, 493–523.
Pedraja-Chaparro, F., Salinas-Jimenez, J., Smith, P., 1997. On the role of weight restrictions in data envelopment analysis. Journal of Productivity Analysis 8, 215–230.
Pedraja-Chaparro, F., Salinas-Jimenez, J., Smith, P., 1999. On the quality of the data envelopment analysis model. Journal of the Operational Research Society 50, 636–644.
Ray, S.C., 1998. Measuring scale efficiency from a translog production function. Journal of Productivity Analysis 11, 183–194.
Ruggiero, J., 1998. Non-discretionary inputs in data envelopment analysis. European Journal of Operational Research 111, 461–469.
Shepard, R.W., 1970. Theory of Cost and Production Functions. Princeton University Press, Princeton.
Steinmann, L., Simar, L., 2003. On the comparability of efficiency scores in nonparametric frontier models. Discussion Paper 0318, Institut de Statistique, Université Catholique de Louvain.
Thanassoulis, E., 1993. A comparison of regression analysis and data envelopment analysis as alternative methods for assessing performance. Journal of the Operational Research Society 44, 1129–1145.
Yu, C., 1998. The effects of exogenous variables in efficiency measurement: A Monte Carlo study. European Journal of Operational Research 105, 569–580.
Zhang, Y., Bartels, R., 1998. The effect of sample size on the mean efficiency in DEA with an application to electricity distribution in Australia, Sweden and New Zealand. Journal of Productivity Analysis 9, 187–204.