European Journal of Operational Research 154 (2004) 515–525 www.elsevier.com/locate/dsw

Competitive effects on teaching hospitals

Shawna Grosskopf a, Dimitri Margaritis b,*, Vivian Valdmanis c

a Department of Economics, Oregon State University, Corvallis, OR 97331-3612, USA
b Department of Economics, University of Waikato, Private Bag 3105, Hamilton, New Zealand
c Health Policy Unit, London School of Hygiene and Tropical Medicine, London WC1E 7HT, UK

* Corresponding author. Tel.: +64-7-838-4454; fax: +64-7-838-4331. E-mail addresses: [email protected] (S. Grosskopf), [email protected] (D. Margaritis), [email protected] (V. Valdmanis).

Abstract

In this study we assess the performance of US teaching hospitals operating in 1995. Since teaching hospitals must increasingly compete with non-teaching hospitals for managed care contracts based on price, decreasing costs can only come from either reducing inefficiencies or decreasing the 'public good' production of teaching and research. We use a data envelopment analysis (DEA) approach to measure the relative technical and scale efficiencies of a sample of 254 US teaching hospitals. The next step of our research is to assess, in a bivariate context, the effect market competition has on the teaching hospitals in our sample. We find that competition (as measured by the number of managed care contracts per hospital and the number of patients covered by these contracts per hospital) has positive effects on the teaching hospitals: as competition increases, so does a teaching hospital's relative efficiency. We also regress each hospital's relative efficiency scores on ownership form, organization structure, teaching effort, and competitive market variables. We find that increased competition leads to higher efficiency without compromising teaching intensity.

© 2003 Elsevier B.V. All rights reserved.

Keywords: Data envelopment analysis; Teaching hospitals; Efficiency; Competition

1. Introduction

There have been numerous changes in the US health care industry due to both public policy changes and market reforms. One of the most dramatic changes is the dominance of managed care in the financing of patient care (based on discounted fees and capitated payments), which has indirectly changed hospitals and providers from price makers to price takers. This creates pressure for hospitals to become more price competitive in order to attract managed care contracts and, in turn, their patients. We define managed care as a system that either pays hospitals on a capitated basis (a payment to contracted providers on a per member per month basis) or pays hospitals a discounted price. Included in the managed care system we use here are agreements between hospitals and health maintenance organizations (HMOs) or preferred provider organizations (PPOs). Whereas managed care may be credited with reducing hospital and health care cost inflation, hospital decision-makers complain that other important hospital objectives may be jeopardized by these financial constraints.

One specific type of hospital that is particularly hard hit by restrictive budgets is the teaching hospital. In addition to providing direct patient care, the teaching hospital is responsible for training the US medical work force and engaging in medical research. Both these endeavors are important, yet very seldom fully remunerated. Several recent studies dealing with teaching hospitals point out the potential economic problems these hospitals may face due to the proliferation of managed care. Pardes (1997) suggests that in light of managed care, teaching hospitals should respond by becoming as efficient as possible, which would include decreasing costs, reducing the number of medical residents, and providing more primary care. Lehrer and Burgess (1995) find that attending physicians' productivity is lower in teaching hospitals than in non-teaching hospitals, which is corroborated by Kralewski et al. (1997), who find that the productivity of clinical practitioners is lower in teaching than in non-teaching clinics. If medical labor is less productive in teaching hospitals, it may be concluded that these hospitals are less efficient than their non-teaching counterparts, and thus more costly. Therefore, teaching hospitals may not be well equipped to compete with non-teaching hospitals for managed care contracts.

Teaching hospitals also face a decreasing commitment from the public sector in financing the costs attributed to teaching, so teaching must rely increasingly on clinical earnings for subsidization (Fried et al., 1995). To maintain clinical earnings, teaching hospitals must be able to compete for managed care contracts with non-teaching hospitals, something that may be difficult since teaching hospitals are at an inherent cost disadvantage (Rutledge, 1997; Pardes, 1997). However, researchers agree that providing teaching is not an excuse for inefficient behavior. The issue becomes a debate between two hypotheses: that teaching hospitals cannot compete with their non-teaching counterparts because of legitimate teaching costs, or that teaching hospitals are inefficient due to poor management and thus there is room for increases in productivity (Kralewski et al., 1997).

The purpose of this paper is to provide some evidence concerning these hypotheses. We are particularly interested in determining whether competition has beneficial effects on teaching hospital performance. We begin by modeling teaching hospital performance in terms of technical efficiency using 1995 American Hospital Association (AHA) data. We then investigate whether competition has any discernible impact on performance, and whether that impact comes at the cost of teaching intensity and dedication. In other words, does market pressure result in more efficient hospitals providing the social good of medical education?

2. Methodology

We use non-parametric frontier methods (DEA) to evaluate the technical efficiency of the teaching hospitals in our sample. This approach uses linear programming techniques to construct a best practice frontier and measures deviations of individual observations (in this case hospitals) from that frontier. The frontier is determined by the hospitals in the sample, yielding a measure of efficiency that is relative rather than absolute. In this paper we take the input orientation, i.e., we take outputs as given and seek to minimize inputs. The best practice frontier is traced out by the hospitals in the sample that use the fewest inputs to produce a given level of outputs. More formally, the frontier is the isoquant [1] or lower boundary of the input requirement set L(y), where

L(y) = {x ∈ R^N_+ : x can produce y}, y ∈ R^M_+,

and y is the vector of outputs produced by input vector x. In Fig. 1, all feasible input combinations that yield output vector y make up L(y). The best practice frontier is determined by the benchmark hospitals B, C, and D.

[1] The isoquant is defined as follows: Isoq L(y) = {x ∈ L(y) : λx ∉ L(y) for 0 < λ < 1}.

[Fig. 1. Best practice frontier. The figure plots the input requirement set L(y) in (x1, x2) input space; hospitals B, C and D lie on the frontier, hospital A lies in its interior, and O denotes the origin.]

Other hospitals that do not perform as well as the benchmark hospitals (those on the best practice frontier) lie inside the frontier. Inefficiency for these hospitals is measured as the distance between their observed input bundle and the best practice frontier. Formally, we solve the following problem:

F_i(y, x) = min{λ : λx ∈ L(y)},

where F_i(y, x) refers to Farrell input-based technical efficiency and λ is the proportional scaling factor. In Fig. 1, hospital A is inefficient relative to the best practice frontier constructed from hospitals B, C and D, where all four hospitals produce the same level of outputs. As can be seen, hospital A could proportionally contract its inputs to OC/OA of their observed levels; in other words, hospital A could reduce inputs by the fraction (1 − OC/OA) and still maintain its current output level. The formal specification of the linear programming problem used to construct the frontier of the technology can be found in Appendix A. See also Färe et al. (1994).

The benefits of this approach that make it particularly well suited for the study of hospitals include the following. First, the approach is flexible: no functional form needs to be imposed on the hospitals' production technology, nor is any assumption of optimizing behavior required. The latter is an appealing methodological feature since teaching hospitals typically are neither cost-minimizers nor profit-maximizers; in fact, these deviations from typical economic behavior may be due to their organizational missions, which include providing social goods such as teaching and research. A second benefit is that data on input and output prices and costs, which are often absent from hospital data sources, are not required. For example, attending physicians are often paid directly by a third party and therefore input prices for physicians are absent. Third, this approach allows us to include multiple inputs and outputs in modeling the underlying technology.
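To make the computation concrete, the following is a minimal sketch (not the authors' implementation) of how the input-oriented Farrell measure can be obtained with an off-the-shelf LP solver. The function name, the toy data and the use of scipy.optimize.linprog are our own illustrative choices; the constraints mirror the activity-analysis formulation given in Appendix A.

```python
# Sketch: input-oriented DEA efficiency scores, F_i = min{lambda : lambda*x in L(y)}.
# X is a K x N matrix of inputs, Y a K x M matrix of outputs (one row per hospital).
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, vrs=False):
    """Return one Farrell input-based efficiency score per unit (CRS by default)."""
    K, N = X.shape
    _, M = Y.shape
    scores = np.empty(K)
    for o in range(K):
        c = np.r_[np.zeros(K), 1.0]            # decision vector [z_1..z_K, lambda]; minimize lambda
        A_out = np.c_[-Y.T, np.zeros(M)]       # sum_k z_k y_km >= y_om  <=>  -Y'z <= -y_o
        A_in = np.c_[X.T, -X[o]]               # sum_k z_k x_kn <= lambda * x_on
        A_ub = np.vstack([A_out, A_in])
        b_ub = np.r_[-Y[o], np.zeros(N)]
        A_eq = np.r_[np.ones(K), 0.0].reshape(1, -1) if vrs else None   # VRS adds sum_k z_k = 1
        b_eq = np.array([1.0]) if vrs else None
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (K + 1), method="highs")
        scores[o] = res.fun
    return scores

# Toy example: five hospitals, two inputs, one (identical) output level.
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0], [2.0, 4.0]])
Y = np.ones((5, 1))
crs = dea_input_efficiency(X, Y)
vrs = dea_input_efficiency(X, Y, vrs=True)
print(np.round(crs, 3), np.round(vrs, 3), np.round(crs / vrs, 3))  # last term: scale efficiency
```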

3. Data and results

The data used for this study are from the 1995 American Hospital Association Survey of Hospitals. The sample we use consists of 254 teaching hospitals. We select only the teaching hospitals that do not have missing values for inputs or outputs, and we define teaching hospitals as hospitals with medical residents. For our specification of outputs we include the number of inpatients (PATIENTS), the number of inpatient plus outpatient surgeries (SURGERIES), and the number of outpatient visits plus emergency room visits (OUTPAT/ER). As inputs we include the number of fully licensed and staffed beds (BEDS), full time equivalent (FTE) physicians with staffing privileges (FTEMD), FTE registered nurses (FTERN), FTE licensed practical nurses (FTELPN), FTE medical residents (FTERES), and FTE other personnel (FTEOTH). Descriptive statistics of our input and output variables are provided in Table 1. As seen in Table 1, our sample of teaching hospitals is quite varied: standard deviations exceed means for physicians and residents, which are the key distinguishing features of teaching hospitals on the input side. The range of all the variables is large.

Table 1
Descriptive statistics, inputs/outputs, teaching hospitals 1995 (N = 254)

Variable      Mean        Standard deviation   Min    Max
Inputs
BEDS (a)      343.27      228.77               18     1299
FTEMD         54.76       64.76                1      442
FTERN         359.06      278.03               11     1357
FTELPN        59.43       47.07                0      247
FTERES        62.29       102.78               1      673
FTEOTHER      1102.22     802.52               16     4769
Outputs
PATIENTS      10,592      8845.71              152    48,125
SURGERIES     7422        6875.30              0      34,732
OUTPAT/ER     182,432     157,605.10           0      1,068,684

(a) Fully licensed and staffed.

Based on these data, we compute F_i(y, x) for each teaching hospital in our sample. The mean technical efficiency score equals 0.60. This means that, on average, hospitals in this sample could reduce inputs by 40% and maintain production of current output levels (Table 2, Panel A). We also assess technical efficiency under the assumption of variable returns to scale (a measure of short-run efficiency) and scale efficiency, which is a measure of the size of the hospital. Constant returns to scale refers to the "right" size of the hospital and corresponds to the minimum of the U-shaped cost curve. The mean scores equal 0.71 and 0.85 for technical efficiency under variable returns to scale and scale efficiency, respectively. These findings suggest that the teaching hospitals in our sample are not only inefficient in the use of inputs, but are also the wrong "size". Upon further examination, we find that all of the 125 teaching hospitals which are not scale efficient are operating at decreasing returns to scale. (The descriptive results among 'inefficient' hospitals are given in Table 2, Panel B.)

Table 2
Descriptive statistics of efficiency measures, N = 252 (Panel A), and of inefficient hospitals, N = 221 (Panel B)

Variable                              Mean   Standard deviation   Min    Max
(A) N = 252
CRS technical efficiency score (a)    0.60   0.257                0.04   1.00
VRS technical efficiency score        0.71   0.256                0.10   1.00
Scale efficiency score                0.85   0.169                0.10   1.00
(B) N = 221
CRS technical efficiency score        0.54   0.216                0.04   0.98 (N = 221)
VRS technical efficiency score        0.61   0.221                0.10   0.98 (N = 189)
Scale efficiency score                0.82   0.170                0.10   0.99 (N = 207)

(a) Constant returns to scale (CRS) technical efficiency is decomposed into measures of variable returns to scale (VRS) technical efficiency and scale efficiency.
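As a rough numerical illustration of this decomposition (treating the sample means in Table 2 as if they were a single hypothetical hospital's scores, which is only approximate since the identity in Appendix A holds hospital by hospital rather than for the means):

S_i(y, x) = F_i(y, x | C) / F_i(y, x | V) ≈ 0.60 / 0.71 ≈ 0.85.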

In Table 3, we provide the descriptive statistics of our market competition variables [2]: the number of Health Maintenance Organization (HMO) contracts with each hospital, the number of Preferred Provider Organization (PPO) contracts with each hospital, and the number of patients covered by a contracting HMO plan (CAPCOV) [3]. We chose to use the number of patients covered by a capitated plan rather than a proportion because some patients who are included as covered are outpatients and there was no clear way to apportion coverage across outpatients and inpatients. On average, hospitals in our sample have almost twice as many PPO contracts (which pay hospitals a discounted fee) as HMO contracts (which pay hospitals a capitated or fixed rate per day). The mean value of 13,408 for CAPCOV indicates the size of the plan that contracts with the hospital. We assume that the larger the plan, the more influence the plan has on the hospital's reimbursements.

Table 3
Descriptive statistics, market variables

Variable         Mean        Standard deviation   Min   Max       Median
HMO contracts    9.04        8.83                 0     51        7
PPO contracts    19.41       35.04                0     349       10
CAP COV          13,408.02   29,760.71            0     185,643   1090

Based on these market variables, we divide the teaching hospitals in our sample into 'competitive' versus 'non-competitive' hospitals. We define as competitive those hospitals having either more than seven HMO contracts, more than 10 PPO contracts, or contracts with plans enrolling at least 1090 capitated covered lives; non-competitive hospitals are those below the median number of contracts or capitated covered lives. (We use the median number of contracts and CAPCOV as the cut-off; a small illustrative sketch of this rule appears after the footnotes below.) [4] We argue that hospitals which attract managed care contracts and patients covered by capitated payment schemes do so by offering lower prices. This strategy is pursued because decision makers for managed care programs are likely to be better informed about the prices or costs of a particular hospital's services, its quality, and the availability of alternative providers than are individuals or their health professionals. Thus, the elasticity of demand for the services of a particular hospital by managed care programs could be significantly higher than that of other demanders of the hospital's services; the elasticity of demand for the hospital's services would therefore be an increasing function of the proportion of managed care patients.

Table 4 displays the relationship between market competition and efficiency scores. For all three market variables, we find that hospitals facing more 'competitive pressure' have, on average, higher efficiency scores at statistically significant levels. These findings suggest that external market pressure can have positive effects on efficiency: hospitals that face more competitive pressure can charge lower prices for their services by keeping costs down.

We also want to know whether the increased hospital efficiency associated with competitive factors is achieved at the expense of teaching dedication or teaching intensity. In Table 5, we summarize the relationship between our two proxies of the hospital teaching function, (1) teaching dedication (the number of residents per physician, RES/MD) and (2) teaching intensity (the number of residents per bed, RES/BED) [5], and our three 'competition' variables. We find that teaching hospitals facing more market competition have statistically significantly higher levels of teaching dedication. These results are interesting because they imply that hospitals which are on average more technically efficient use relatively more residents per attending physician as inputs in treating patients. This finding is consistent with the results of Hosek and Palmer (1983) and Lehrer and Burgess (1995), who argue that substitution between physicians and residents exists and that resident productivity goes up with higher levels of teaching dedication. The results for teaching intensity are similar but with less statistical significance. It appears that competitive hospitals substitute residents for physicians, but do not necessarily have more residents per bed.

[2] We also looked at the performance change between 1994 and 1995 using a matched sample of 180 teaching hospitals. The Malmquist index indicated an average increase in productivity of 1.84%, driven by an improvement in efficiency (3.51%). This improvement in efficiency may be a response by teaching hospitals to the increased competitive pressures in the health care delivery industry.
[3] These data first appeared in the 1995 AHA data set.
[4] Of course this classification is rather arbitrary, but our results appear to be fairly robust to alternative specifications. More rigorous classifications into various types of industry competition, as for example in Shepherd (1982), are not feasible due to data limitations.
[5] These definitions are consistent with the definitions in Campbell et al. (1991) and Rogowski and Newhouse (1992).
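The sketch below illustrates the median cut-off rule. It is an assumption-laden illustration rather than the authors' code: the DataFrame `df` and its column names (HMO_CON, PPO_CON, CAPCOV) are hypothetical, and the handling of ties at the median follows our reading of the text ("more than" the median number of contracts, "at least" the median covered lives).

```python
# Sketch: split hospitals into 'competitive' vs 'non-competitive' groups using
# the sample medians (7 HMO contracts, 10 PPO contracts, 1090 covered lives in
# Table 3) as cut-offs.
import pandas as pd

def label_competitive(df: pd.DataFrame) -> pd.Series:
    med = df[["HMO_CON", "PPO_CON", "CAPCOV"]].median()
    competitive = (
        (df["HMO_CON"] > med["HMO_CON"])
        | (df["PPO_CON"] > med["PPO_CON"])
        | (df["CAPCOV"] >= med["CAPCOV"])
    )
    return competitive.map({True: "competitive", False: "non-competitive"})
```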

Table 4
Market variables and efficiency

#HMO CON    Mean technical efficiency score
Low         0.574
High        0.752
Test        Value    P >
F-test      27.99    0.001
Median      27.38    0.0001
KSA         2.51     0.0001

#PPO CON    Mean technical efficiency score
Low         0.569
High        0.748
Test        Value    P >
F-test      29.52    0.0001
Median      24.03    0.0001
KSA         2.68     0.0001

CAP COV     Mean technical efficiency score
Low         0.588
High        0.784
Test        Value    P >
F-test      23.06    0.0001
Median      19.90    0.0001
KSA         2.46     0.0001
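For readers who wish to run this style of comparison on their own data, the sketch below shows one way to compute the three group tests reported in Tables 4–6 with scipy. It is our illustrative reading, not the authors' code: we interpret the F-test as a one-way ANOVA on the group means, "Median" as Mood's median test, and "KSA" as an (asymptotic) two-sample Kolmogorov-Smirnov test; the arrays `low` and `high` are hypothetical vectors of efficiency scores for the two groups.

```python
# Sketch: compare efficiency scores of 'low' vs 'high' competition hospitals.
import numpy as np
from scipy import stats

def compare_groups(low: np.ndarray, high: np.ndarray) -> dict:
    f_res = stats.f_oneway(low, high)          # F-test on the group means
    med_res = stats.median_test(low, high)     # (statistic, p-value, grand median, table)
    ks_res = stats.ks_2samp(low, high)         # two-sample Kolmogorov-Smirnov test
    return {
        "F-test": (f_res.statistic, f_res.pvalue),
        "Median": (med_res[0], med_res[1]),
        "KSA": (ks_res.statistic, ks_res.pvalue),
    }
```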

Teaching hospitals vary among themselves in terms of standards and the severity of patients' case mix. Two proxy variables for standards and case mix are whether the hospital is a member of the Council of Teaching Hospitals (COTH) and whether the hospital has a medical school affiliation (MDSCH), respectively (Hao and Pegels, 1994). In order for a hospital to join COTH, it must demonstrate that resources exist to maintain a teaching program while fulfilling the requirements for patient care specified by the Joint Commission of Accreditation for Health Care Organizations (JCAHCO); a hospital must be JCAHCO accredited in order to treat Medicare patients. Similarly, affiliation with a medical school is associated with a higher case-mix of patients due to the tertiary care provided by these medical school hospitals. These tertiary hospitals ensure that residents and medical students gain exposure to the treatment of seriously ill and injured patients. Therefore, we follow Hao and Pegels' suggestion of using these variables as proxies for quality standards and case mix of patients.

We compare these proxy variables with the hospitals' efficiency scores, teaching dedication, and teaching intensity (Table 6). We find that COTH members and hospitals with medical school affiliation are relatively less efficient than hospitals without such memberships, and that they maintain higher levels of teaching dedication and intensity. These results suggest that, since COTH hospitals follow more stringent standards in order to qualify for membership, it may be these higher input requirements that lead to the measured inefficiency (Hao and Pegels, 1994). Similarly, if medical school affiliation is associated with a more severe case mix of patients who require more sophisticated treatment, then the case mix, rather than the teaching function per se, may be driving up inefficiency. The size of these inefficiencies is likely to be significantly smaller if output measures were adjusted for case mix severity and/or standard of quality (see footnote 7 below).

Table 5
Market variables and teaching

#PPO CON    Average teaching dedication (RES/MD)   Average teaching intensity (RES/BED)
Low         1.37                                   0.158
High        6.42                                   0.183
Test        Value      P >                         Value      P >
F-test      15.16      0.0001                      0.553      0.458
Median      7.77       0.0055                      0.695      0.404
KSA         1.88       0.0020                      1.260      0.080

#HMO CON    Average teaching dedication (RES/MD)   Average teaching intensity (RES/BED)
Low         1.61                                   0.149
High        6.25                                   0.210
Test        Value      P >                         Value      P >
F-test      11.74      0.0007                      3.37       0.06
Median      23.57      0.0001                      6.61       0.01
KSA         2.64       0.0001                      1.51       0.02

#CAP COV    Average teaching dedication (RES/MD)   Average teaching intensity (RES/BED)
Low         2.066                                  0.160
High        6.702                                  0.193
Test        Value      P >                         Value      P >
F-test      8.114      0.005                       0.688      0.41
Median      14.325     0.0001                      0.724      0.40
KSA         2.15       0.0002                      1.35       0.05

Table 6
Organizational features and efficiency

COTH     N     Mean technical efficiency score   Mean teaching dedication (RES/MD)   Mean teaching intensity (RES/BED)
Yes      79    0.522                             3.41                                0.319
No       175   0.665                             2.55                                0.096
Test           Value      P >                    Value      P >                      Value      P >
F-test         19.25      0.0001                 0.429      0.513                    62.602     0.0001
Median         24.09      0.0001                 4.117      0.042                    59.460     0.0001
KSA            2.46       0.0001                 2.09       0.0003                   3.97       0.0001

MDSCH    N     Mean technical efficiency score   Mean teaching dedication             Mean teaching intensity
Yes      179   0.600                             2.99                                 0.196
No       75    0.668                             2.38                                 0.093
Test           Value      P >                    Value      P >                       Value      P >
F-test         4.069      0.05                   0.211      0.646                     10.68      0.001
Median         3.236      0.07                   3.185      0.074                     25.80      0.0001
KSA            1.570      0.01                   1.699      0.006                     2.73       0.001


So far we have looked at efficiency scores separately by a variety of hospital market variables. Next, we account simultaneously for the effects of variables that are potential correlates of hospital efficiency. This is implemented by regressing the teaching hospital's efficiency score on a set of market and organizational variables. In this analysis we include our previously defined market variables, CAPCOV, PPO contracts and HMO contracts, along with teaching intensity (RES/BED) and teaching dedication (RES/MD), as well as organizational features such as accreditation by the Joint Commission for Accreditation of Health Care Organizations (JCAHCO) [6], medical school affiliation (MDSCH), membership in COTH, and the ownership form of the hospital. Ownership is included in order to determine whether increased efficiency is achieved through property rights motivations. Hospitals are characterized as either non-federal public, not-for-profit, for-profit, or federal public hospitals. (The federal group is the omitted reference category.) [7]

The regression results are presented in Table 7 [8]. The results should be interpreted as providing information on correlation rather than causality. We find positive and statistically significant effects of the not-for-profit and for-profit hospital ownership forms on efficiency, which is consistent with property rights theory. The size of the capitation plan (CAPCOV), teaching dedication (RES/MD) and COTH membership also have significant effects on efficiency. The direction of the effect of these variables on efficiency is similar to that reported in the bivariate context of Table 6 [9]. From these results we surmise that teaching hospital differences in efficiency are related to a number of market and organizational factors. One interesting result of note is our finding that while competition in terms of HMO plan size increases the level of relative efficiency, it does not do so at the expense of the hospital's teaching dedication. Once we control for at least some of these multiple influences in our regression equation, we find that medical school affiliation and the number of HMO and PPO contracts no longer have a significant relation with efficiency. Multicollinearity across these variables may have led to this result; indeed, all these variables had statistically significant simple correlations with each other. However, the regression results are fairly robust (in terms of both magnitudes and signs of coefficients) to alternative specifications, including the 'best regressor subset' regression reported in column (Coln) 1A of Table 7.

We also applied the same regression model to variable returns to scale (VRS) technical efficiency and to scale efficiency. In the VRS technical efficiency model, we find that organizational features of the hospital are significant in describing efficiency. Specifically, the results suggest that private hospitals use fewer inputs to produce outputs, a finding consistent with property rights theory. Hospitals with higher levels of teaching dedication or JCAHCO accreditation are also relatively more efficient, but COTH membership does not exhibit the same effect. We surmise that hospitals can attain higher levels of short-run efficiency while maintaining both teaching dedication and JCAHCO requirements for patient care. Turning next to our analysis of scale efficiency, we find that public and not-for-profit hospitals are significantly less scale efficient than federal hospitals; the latter are closer to the 'right' size than other ownership forms. The CAPCOV variable is positively related to scale efficiency, suggesting that hospitals facing competitive pressure adjust their size in order to operate more efficiently. A presumed difference in patients' case mix at medical school affiliated hospitals, where patients are often more seriously ill and/or injured, may justify their larger size, whereas COTH affiliation alone does not.

[6] This is a general accreditation for hospitals which specifies the minimum criteria for safe and proper patient care.
[7] We also analyzed our data set using an alternative specification of outputs. In the alternative model, we use the percent of Medicare inpatients to total inpatients and the percent of Medicaid inpatients to total inpatients as additional outputs. By including a more detailed breakdown of our inpatient outputs, we hope to more fully capture case mix differences among the hospitals in our sample. As might be expected, we find statistically significant differences between the mean efficiency scores of the two models, as well as some differences in the regression results. A common feature of the regression results of the two models is that the competition proxy CAPCOV is statistically significant. However, the sample size in the second specification decreases by 65%, from 254 hospitals to 88 hospitals. Given the dramatic decrease in sample size, we choose not to fully compare the results from the first specification with the second specification.
[8] Standard errors, and therefore p-values of estimated coefficients, have been adjusted using White's heteroscedasticity-consistent covariance matrix estimator. We also estimated the same model using a tobit regression and found no differences in results between the tobit and the OLS.
[9] The reader should keep in mind the possible effects of endogeneity bias. For example, COTH may not be strictly exogenous, as it is possible for a hospital to change its membership affiliation. The presence of an endogenous variable in a regression model can also bias the coefficients of the exogenous variables. We ran simple regressions in addition to the multiple regressions to ascertain consistency of the sign and significance of the determinants of hospital efficiency. The simple regression results indicate that all competition variables, as well as RES/MD and private ownership, have a positive and significant effect on efficiency. The effect of COTH and MDSCH was negative and significant.
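As a minimal sketch of this estimation step (not the authors' code), the regression in Table 7 can be reproduced in spirit with ordinary least squares and White's heteroscedasticity-consistent standard errors, as described in footnote 8. The DataFrame `df`, its column names and the helper function are hypothetical, and the tobit check mentioned in footnote 8 is not shown.

```python
# Sketch: OLS of the DEA efficiency score on ownership, market and teaching
# variables, with White (HC) standard errors (cf. Table 7 and footnote 8).
import pandas as pd
import statsmodels.api as sm

def efficiency_regression(df: pd.DataFrame, dependent: str = "CRS_EFF"):
    regressors = ["PUBLIC", "NOT_FOR_PROFIT", "FOR_PROFIT",   # federal group omitted
                  "CAPCOV", "HMO_CON", "PPO_CON",
                  "RES_MD", "RES_BED", "MDSCH", "COTH", "JCAHCO"]
    X = sm.add_constant(df[regressors])
    y = df[dependent]
    # cov_type="HC0" requests White's heteroscedasticity-consistent covariance matrix
    return sm.OLS(y, X).fit(cov_type="HC0")

# Example usage (with a suitably prepared df):
# print(efficiency_regression(df).summary())
```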

Table 7
Regression results. Dependent variable = efficiency score (level of statistical significance in parentheses)

                   CRS technical efficiency            VRS technical efficiency   Scale efficiency
Variable           Coln 1A          Coln 1             Coln 2                     Coln 3
Intercept          0.319 (0.000)    0.369 (0.001)      0.208 (0.019)              0.939 (0.000)
Public             -                0.051 (0.423)      0.096 (0.182)              -0.309 (0.001)
Not-for-profit     0.224 (0.000)    0.225 (0.000)      0.382 (0.000)              -0.062 (0.060)
For-profit         0.258 (0.003)    0.257 (0.003)      0.398 (0.001)              0.111 (0.196)
CAP COV            0.084 (0.008)    0.071 (0.032)      0.030 (0.582)              0.091 (0.029)
HMO CON            -                0.023 (0.509)      0.013 (0.786)              0.031 (0.385)
PPO CON            -                0.028 (0.400)      -0.040 (0.416)             -0.022 (0.538)
RES/MD             0.003 (0.000)    0.003 (0.000)      0.003 (0.091)              0.002 (0.135)
RES/BED            -                -0.060 (0.161)     -0.063 (0.513)             0.040 (0.563)
MDSCH              -                -0.009 (0.826)     0.060 (0.220)              -0.070 (0.055)
COTH               -0.085 (0.001)   -0.060 (0.037)     -0.070 (0.099)             0.058 (0.071)
JCAHCO             -                0.025 (0.677)      0.241 (0.003)              0.091 (0.115)
R2                 0.35             0.36               0.59                       0.343
Adj. R2            0.34             0.33               0.55                       0.271
F                  26.56            12.37              31.66                      11.49

* Significant at the 90% level. ** Significant at the 95% level. *** Significant at the 99% level.


4. Conclusion

In this paper we investigate whether the variation in performance across hospitals can be related to or explained by institutional characteristics and market factors. Our performance measure, Farrell-based technical efficiency, is estimated for a sample of 254 teaching hospitals operating in 1995 using AHA data. Using bivariate statistical analysis, we find significant variation in efficiency when the sample is disaggregated into 'low' and 'high' competition classifications based on the number of HMO and PPO contracts and the number of patients covered by HMO plans. These proxies for competition suggest that greater competition is positively associated with efficiency and teaching dedication, and (weakly) with teaching intensity. In addition, similar analysis suggests that medical school affiliation and accreditation are positively related to teaching dedication and teaching intensity and negatively related to efficiency. The latter effect is likely to change should a quality-adjusted output measure be included.

When the competition variables are included among other explanatory variables in a regression in which technical efficiency is the dependent variable, only one of our 'competition' proxies is significant (CAPCOV), suggesting that an increase in the number of patients under HMO coverage is associated with improved efficiency. We also find significant positive effects of teaching dedication (RES/MD) and non-public ownership, and negative effects of COTH affiliation, on efficiency. This suggests to us that competition in the form of managed care does appear to reduce resource use, as we might expect. The finding that teaching dedication is positively related to performance while medical school affiliation is not significantly related to performance suggests that some teaching hospitals find it possible to perform well while maintaining high teaching dedication, even with a medical school affiliation.

For future research, we would like to follow up this work with panel data to get a better picture of competitive effects over time in this fast-changing sector. This would allow us to look at whether less efficient hospitals eventually exit or are taken over, or whether they 'catch up' with the more efficient hospitals.

Appendix A

This appendix includes a more formal treatment of the specification of our performance measures. An even more detailed discussion of these concepts may be found in Färe et al. (1994). We denote inputs by x = (x_1, ..., x_N) ∈ R^N_+ and the outputs produced by the hospital by y = (y_1, ..., y_M) ∈ R^M_+. The technology, expressed by the input requirement set, consists of all input vectors that can produce a given output vector, i.e.

L(y) = {x ∈ R^N_+ : x can produce y}, y ∈ R^M_+.    (A.1)

These are the sets depicted in Fig. 1, i.e., the best practice frontier as well as the points interior to the frontier. We assume that there are k = 1, ..., K observations of inputs and outputs, and we use activity analysis to model our reference technology. In particular, we have

L(y|C) = {x : Σ_{k=1}^{K} z_k y_km ≥ y_m, m = 1, ..., M;
          Σ_{k=1}^{K} z_k x_kn ≤ x_n, n = 1, ..., N;
          z_k ≥ 0, k = 1, ..., K}.    (A.2)

The right-hand sides of the inequalities represent all of the outputs and inputs that are feasible given the observed inputs and outputs on the left-hand sides. The z's are usually referred to as intensity variables and serve to construct convex combinations of the observed data points. In Fig. 1, these convex combinations are the linear segments connecting the observed points, as well as the points interior to the frontier. The fact that the intensity variables are restricted only to be non-negative in L(y|C) implies that the technology satisfies constant returns to scale (hence the 'C'). If we add the constraint that these intensity variables sum to one, i.e. Σ_k z_k = 1, then the technology satisfies what we refer to as variable returns to scale ('V', which allows for increasing, constant and decreasing returns to scale). We refer to the resulting reference technology as L(y|V), i.e.

L(y|V) = {x : Σ_{k=1}^{K} z_k y_km ≥ y_m, m = 1, ..., M;
          Σ_{k=1}^{K} z_k x_kn ≤ x_n, n = 1, ..., N;
          z_k ≥ 0, k = 1, ..., K; Σ_{k=1}^{K} z_k = 1}.    (A.3)

Relative to these two technologies we compute the performance of each hospital by solving the following linear programming problems for each observation k = 1, ..., K:

F_i(y^k, x^k | C) = min{λ : λx^k ∈ L(y|C)},    (A.4)

F_i(y^k, x^k | V) = min{λ : λx^k ∈ L(y|V)}.    (A.5)

The full problem is specified by adding the technology constraints from (A.2) and (A.3) [10]. These are Farrell (1957) type efficiency measures (hence the 'F'). The lambdas scale the observed input vector x^k toward the origin until the best practice frontier is attained, as in Fig. 1. Values of these efficiency measures will be less than or equal to one, with one signifying efficiency (best practice). One minus the value of the efficiency score gives the proportion by which inputs must be reduced to achieve best practice; costs would be reduced by the same percentage.

Our measure of scale efficiency is constructed as the ratio of these two efficiency measures, i.e.,

S_i(y^k, x^k) = F_i(y^k, x^k | C) / F_i(y^k, x^k | V),    (A.6)

which is computed for each hospital, k = 1, ..., K. Values of S will be less than or equal to one. One minus the value of the scale efficiency score gives the proportion by which inputs could be reduced if the hospital were operating at the most productive scale size, consistent with constant returns to scale (capturing production at maximum average product or minimum average cost).

[10] That is what the '∈ L(y|·)' means.

References

Campbell, C.K., Gillespie, K., Romeis, J., 1991. The effects of residency training programs on the financial performance of veterans affairs medical centers. Inquiry 28 (3), 288–299.
Färe, R., Grosskopf, S., Lovell, C.A.K., 1994. Production Frontiers. Cambridge University Press, New York.
Farrell, M.J., 1957. The measurement of productive efficiency. Journal of the Royal Statistical Society, Series A (General) 120 (Part 3), 253–281.


Fried, B., Pink, G., Baker, G.R., Deber, R., 1995. Managing health services organizations with an educational mission: The case of Canada. Journal of Health Administration Education 12, 173–185.
Hao, S., Pegels, C.C., 1994. Evaluating relative efficiencies of veterans affairs medical centers using data envelopment, ratio, and multiple regression analysis. Journal of Medical Systems 18, 55–67.
Hosek, J.R., Palmer, A.R., 1983. Teaching and hospital costs. Journal of Health Economics 2, 29–46.
Kralewski, J., Kephart, K., Hakanson, S., 1997. Untangling the costs of family practice residency training. Minnesota Medicine 80, 19–22.
Lehrer, L., Burgess, J., 1995. Teaching and hospital production: The use of regression estimates. Health Economics 4, 113–125.
Pardes, H., 1997. The future of medical schools and teaching hospitals in the era of managed care. Academic Medicine 72, 97–102.
Rogowski, J., Newhouse, J., 1992. Estimating the indirect costs of teaching. Journal of Health Economics 11, 153–171.
Rutledge, R., 1997. Can medical school-affiliated hospitals compete with private hospitals in the age of managed care. Journal of the American College of Surgeons 185 (2), 207–217.
Shepherd, W.G., 1982. Causes of increased competition in the US economy. Review of Economics and Statistics 64, 613–626.