Economics of Education Review, Vol. 11, No. 2, pp. 153-160, 1992. Printed in Great Britain.
0272-7757/92 $5.00 + 0.00 © 1992 Pergamon Press Ltd
Academic Research Productivity, Department Size and Organization: Further Results, Comment

JOHN GOLDEN* and FRED V. CARSTENSEN†

*Department of Economics, Allegheny College, Meadville, PA 16335, U.S.A.; and †Department of Economics, U-63, University of Connecticut, Storrs, CT 06269, U.S.A.
Abstract - Jordan et al. (1989, Econ. Educ. Rev. 8, 345-352) provided evidence from 2058 PhD-awarding departments showing that per capita publication increases, up to a point, with department size and is higher at private institutions than elsewhere. This article demonstrates that the impact of both private vs public affiliation and department size on per capita publication plunges after controlling for both research support and the department's faculty rating.
INTRODUCTION

Recently in this Review, Jordan et al. (1989) provided evidence from 2058 PhD-awarding departments and 23 academic disciplines showing that per capita publication increases, up to a point, with department size and is higher at private institutions than elsewhere. The explanatory power of their model, however, was quite small. Not surprisingly, they called for an additional study to "pin down precisely" why size and public vs private status affect productivity. This comment takes up that challenge: it moves a step closer to explaining these findings. The basic model of Jordan et al. and two expanded versions appear in the next section. Its successor discusses the data used, their limitations, as well as those of this paper. Regression results appear in Tables 1-3 and are briefly discussed. A concluding section summarizes our findings and their implications, and suggests areas for future research.
THE MODELS
The variables used by Jordan et al. form Model I:

AVEPUBSi = a + b SZi + c SZSQi + d PUBDUMMYi + Ui.
AVEPUBSi is the ith department's per capita faculty publication performance per year; SZi is the size of the ith department's faculty; and SZSQi is department size squared. One would expect b > 0 and c < 0 if publication increases, up to a point, with faculty size. PUBDUMMYi takes the value one if the ith department is housed in a public university, zero if located in a private school. Jordan et al. cite d < 0 as evidence of the relative efficiency of private institutions. In the rest of this section we argue that a department's research vs teaching/service orientation better explains its per capita research output than does either its public/private status or size. Since institutional goals can differ greatly, it follows that one must adjust for both the quantity and quality of all educational inputs and outputs to determine efficiency differences between publics and privates. The university is a multi-product firm, producing research, teaching and service outputs. The key point here is that the emphasis upon these outputs can differ greatly among PhD-awarding universities. Public institutions, as Garvin (1980) observes, might place more emphasis on teaching large numbers of undergraduates and performing service activities. Elite private institutions, where research support is typically great and service activities slight, often choose small, select student bodies and high-quality
† Author to whom correspondence should be addressed. [Manuscript received 18 March 1991; revision accepted for publication 14 October 1991.]
faculties. This suggests three specific reasons for higher publication per capita at departments housed in private universities. First, both research capital and assistants per faculty member might be more plentiful. Second, the fraction of the typical faculty member's work-time available for research would likely be larger. Third, academics who are "research stars" probably select jobs that are light on teaching and service demands and stress scholarly success, i.e. positions at those private institutions. For example, Graves et al. (1982) found that 1974-1978 per capita publication output of economics departments varied directly with both full professors' average salary and the secretary/faculty ratio and inversely with the teaching load. (Unfortunately their study controlled for neither faculty size nor public/private status.)

The issue then is how to gauge differences in research orientation among PhD-awarding departments. There are two types of measures available: objective and subjective criteria. One objective criterion is the percentage of a department's faculty sponsored by leading research foundations and institutes. A high-quality faculty, reduced course loads and strong research support facilities would all raise this grant sponsorship proportion. Indeed, the grant itself may provide funds that permit purchase of needed capital and research assistants. Further, stiff competition guarantees that those who win grants are usually of high caliber. In light of the above discussion, we add an additional explanatory variable, the proportion of the ith department's faculty sponsored by a major research grant (PRESUPi), to the framework of Jordan et al. to form Model IIA:

AVEPUBSi = a + b SZi + c SZSQi + d PUBDUMMYi + e PRESUPi + Ei.

If per capita departmental publication rates at private institutions exceed those at public institutions for reasons other than those captured by department size, size squared and research support, then d < 0.
If the dependent variable varies directly with the department's research support proportion (PRESUPi), e > 0. The expected signs of the size variables are as they were in Model I (i.e. b > 0 and c < 0). Note that if the higher publication rate of faculty at private institutions were chiefly due to the greater grant support at such universities, then d = 0 and e > 0. Ei is a randomly distributed error term.

A key subjective measure of a department's research orientation is its graduate faculty rating. This is a collective assessment of the quality of its faculty by members of other PhD-awarding departments within its discipline. Prestigious departments harbor "star" researchers with important editorial ties; also, they often house important journals that serve as publication outlets for their faculty. To determine the independent impact of faculty quality on AVEPUBSi, we add the ith department's faculty rating (FACRATINGi) to the previous equation to form Model IIB:

AVEPUBSi = a + b SZi + c SZSQi + d PUBDUMMYi + e PRESUPi + f FACRATINGi + Wi.

We anticipate that f > 0 for the reasons outlined above. The signs of all other parameters should be the same as in Model IIA. Since PRESUPi and FACRATINGi are positively correlated, we would expect that inclusion of the latter variable would reduce the former's impact. Wi is a random error term.

DATA AND LIMITATIONS

The same data source employed by Jordan et al. was used to estimate the regression models in this paper. Figures for PRESUP, however, were not available for 17% of their sample. Publication data pertain to articles published during 1978-1979 for all fields except the social sciences, where the publication period extends from 1978 to 1980. PRESUPi is the fraction of the ith department's faculty who held research grants from the National Science Foundation, National Institutes of Health, or the Alcohol, Drug Abuse and Mental Health Administration at any time during FY1978-1980. FACRATINGi is the average rating that the ith department's faculty received during a 1981 survey of academic scholars in its respective discipline. Scores ranged from five (distinguished) to zero (not sufficient for doctoral education). (For further details see Conference Board of Associated Research Councils, 1982.) There are four important limitations of this paper that we now address.
First, the publication data overlap imperfectly with those for research support.
Ideally the latter should precede the former by a few years. Our assumption here is that the actual grant data (for FY1978-1980) serve as a proxy for those of some ideal period (say, FY1976-1978). One can reasonably presume a positive correlation between PRESUPi values from one 3-year period to the next. Second, the "timing" problem is more severe using the faculty ratings of 1981 to explain variations in AVEPUBS data for 1978-1980. We would prefer values of FACRATING for the mid-1970s had such a study been conducted. Further, one can argue that publication performance more likely affects graduate faculty ratings than the other way around. Fortunately, the latter change at a glacial pace (cf. Cartter, 1966; Roose and Andersen, 1970; Conference Board of Associated Research Councils, 1982). We assume that the flow of publications during the late 1970s caused only marginal changes in the absolute levels of the 1981 ratings from (unmeasured) mid-1970s levels. Third, the data base does not permit us to test whether private institutions are more efficient than public ones. The possibility remains that privates enjoy both a comparative advantage in research and an absolute advantage in providing research, teaching and other activities. Indeed, both grants and the graduate faculty ratings might be viewed as outputs produced at less cost by privates. Cohn et al. (1989), however, show that privates are not particularly more efficient than publics in generating research. Fourth, to the extent that research grants, publication performance and faculty ratings are determined jointly, estimating a single equation for AVEPUBS using OLS introduces additional potential bias. Mindful of these limitations, we turn to the parameter estimates in the next section.
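The mechanics of the three nested specifications can be sketched with ordinary least squares on synthetic data. Everything below (sample size, distributions, coefficient values) is invented for illustration only and is not drawn from the paper's data; the point is merely how Models I, IIA and IIB nest and how explanatory power changes as regressors are added.

```python
# Illustrative OLS estimation of Models I, IIA and IIB on synthetic data.
# All variable values here are simulated; names follow the paper's notation.
import numpy as np

rng = np.random.default_rng(0)
n = 500

SZ = rng.uniform(5, 60, n)                       # department faculty size
PUBDUMMY = rng.integers(0, 2, n).astype(float)   # 1 = public, 0 = private
PRESUP = np.clip(rng.beta(2, 5, n) - 0.05 * PUBDUMMY, 0, 1)  # grant share
FACRATING = np.clip(2.5 + 3.0 * PRESUP + rng.normal(0, 0.5, n), 0, 5)

# Hypothetical data-generating process: research support and faculty quality
# drive per capita publication; size enters with diminishing returns.
AVEPUBS = (0.2 + 0.03 * SZ - 0.0003 * SZ**2
           + 1.5 * PRESUP + 0.4 * FACRATING + rng.normal(0, 0.4, n))

def ols(y, *cols):
    """OLS with an intercept; returns coefficients and adjusted R-squared."""
    X = np.column_stack([np.ones_like(y)] + list(cols))
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    k = X.shape[1]
    adj_r2 = 1 - (ss_res / (len(y) - k)) / (ss_tot / (len(y) - 1))
    return beta, adj_r2

b1, r2_I   = ols(AVEPUBS, SZ, SZ**2, PUBDUMMY)                      # Model I
b2, r2_IIA = ols(AVEPUBS, SZ, SZ**2, PUBDUMMY, PRESUP)              # Model IIA
b3, r2_IIB = ols(AVEPUBS, SZ, SZ**2, PUBDUMMY, PRESUP, FACRATING)   # Model IIB

print(f"Adj. R2 - Model I: {r2_I:.3f}, Model IIA: {r2_IIA:.3f}, "
      f"Model IIB: {r2_IIB:.3f}")
```

Because PRESUP and FACRATING enter the simulated data-generating process, the adjusted R-squared rises across the nested models, mirroring the pattern the paper reports when moving from Model I to Models IIA and IIB.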
EMPIRICAL RESULTS AND IMPLICATIONS
Table 1 shows aggregate regression results and is divided into three parts. The top equation cluster pertains to Model I. Along the first line are estimates based on the full sample of 1710 departments from 22 disciplines. (History does not appear since PRESUPi data were not available.) All estimates are significant at the 1% level and have the same signs as in Jordan et al. While results from the social and natural sciences generally had the expected signs, their explanatory power is small.
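The interior maximum implied by b > 0 and c < 0 can be made concrete with a small calculation. The coefficients below are purely hypothetical, chosen only to show the arithmetic, and are not estimates from Table 1:

```python
# With AVEPUBS = a + b*SZ + c*SZ^2, b > 0 and c < 0, per capita publication
# peaks where the derivative b + 2*c*SZ equals zero, i.e. at SZ* = -b/(2c).
# Hypothetical coefficients for illustration only:
b, c = 0.03, -0.0003
peak_size = -b / (2 * c)
print(f"Implied publication-maximizing department size: {peak_size:.0f} faculty")
```

With these made-up values the quadratic peaks at a faculty of 50; with actual estimates the same formula locates the size beyond which per capita publication declines.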
The next cluster in Table 1 corresponds to Model IIA. Once PRESUP is added, its coefficient is highly significant; those for SZ and SZSQ retain significance, while that for PUBDUMMY drops drastically. This suggests that grant support is a key factor that enables private institutions to outpublish public ones. In each of the remaining equations of this second cluster, PRESUP plays an important role. Note, though, that whenever any of the other parameters achieves significance at the 5% level it has the anticipated sign. Further, observe the sharp rise in the explanatory power of each equation of Model IIA relative to its Model I counterpart.

The final group in Table 1 corresponds to Model IIB, which includes FACRATING. Both this last variable and PRESUP are positive in all five equations. PUBDUMMY generally lacks significance, except in engineering, where it is positive. Excluding the social sciences, SZ either loses significance or has the wrong sign. Prestigious departments tend to be larger than average, and apparently this is why the size variables played such an important role in Model I. The adjusted R-square of each equation in Model IIB is slightly, but significantly, greater than its Model IIA counterpart.

Model I results for separate disciplines are not reported here since they are similar to those found in Jordan et al. Only six field regressions were significant; only for physics did the adjusted R-square (barely) exceed 0.20. Table 2 lists Model IIA estimates for each of the 22 separate disciplines. Twelve of these regressions were significant at the 5% level and PRESUP plays a mighty role in most cases. As with the aggregate results for Model IIA, when SZ, SZSQ or PUBDUMMY is significant, it has the "correct" sign. Table 3 features Model IIB estimates for each discipline. All but two regressions are highly significant and FACRATING dominates these equations with its strong, positive sign.
In four of the five cases where PRESUP remains significant its coefficient is positive, as expected. In contrast, wherever SZ, SZSQ or PUBDUMMY is different from zero, each has the wrong sign! That the addition of PRESUP and FACRATING breaks the impact of SZ and PUBDUMMY is our key finding. Caution is appropriate, though. Results vary widely depending on the variables included in the model specification. Further, the reader should recall the limitations discussed
[Table 1. Aggregate regression results. Dependent variable: publications per faculty member per year (AVEPUBS). Independent variables: intercept, SIZE, SIZE², PUBDUMMY, PROP RES SUP and FACULTY RATING; adjusted R² and F statistics also reported. Three equation clusters (Models I, IIA and IIB) are each estimated for five samples: all 22 fields, engineering, biosciences, physical sciences and social sciences (n = 1698-1710, 272-273, 427-434, 499-503 and roughly 500, respectively, across clusters). The absolute values of t-statistics are given in parentheses; * indicates significance at the 1% level, ** at the 5% level.]
[Table 2. Model IIA regression results for separate disciplines. Dependent variable: publication per faculty member per year (AVEPUBS). Independent variables: intercept, SIZE, SIZE², PUBDUMMY and PROP RES SUP; adjusted R² and F statistics also reported. Disciplines: Anthropology (n = 59), Biochemistry (n = 107), Botany (n = 36), Cell Biology (n = 66), Chem. Engineering (n = 55), Chemistry (n = 137), Civil Engineering (n = 60), Comp. Sciences (n = 45), Economics (n = 91), Elec. Engineering (n = 87), Geography (n = 38), Geoscience (n = 61), Mathematics (n = 110), Mech. Engineering (n = 71), Microbiology (n = 88), Physics (n = 116), Physiology (n = 73), Polit. Sciences (n = 80), Psychology (n = 146), Sociology (n = 86), Statistics (n = 34), Zoology (n = 64). The absolute values of t-statistics are given in parentheses; * shows significance at the 5% level, ** at the 1% level.]
[Table 3. Model IIB regression results for separate disciplines. Dependent variable: publication per faculty member per year (AVEPUBS). Independent variables: intercept, SIZE, SIZE², PUBDUMMY, PROP RES SUP and FACULTY RATING; adjusted R² and F statistics also reported.]
earlier. Log versions of each regression in Tables 1-3 yielded results so similar to those shown that they are not reported.

CONCLUSION

The impact of both private vs public affiliation and department size on per capita publication plunges after controlling for both research support and the department's faculty rating. This is consistent with the view that departments in private schools emphasize an output mix stressing research over teaching and service activities. It follows that private institutions may not be more efficient in their resource use than are public universities; the latter may produce more teaching and service outputs per faculty member, provide fewer support facilities and pay lower salaries. Is this the case? Unfortunately, the Conference Board of Associated Research Councils did not provide data to answer this question; it invites further research. Also, a simultaneous-equation approach, in which per capita research grants, departmental publication and faculty ratings are determined jointly, might prove fruitful.
Acknowledgements - The authors thank Joe Cary for his help checking data. This paper also benefited from the suggestions of an anonymous referee; the usual caveat applies.
REFERENCES

CARTTER, A.M. (1966) An Assessment of Quality in Graduate Education. Washington, DC: American Council on Education.
COHN, E., RHINE, S.L.W. and SANTOS, M.C. (1989) Institutions of higher education as multi-product firms: economies of scale and scope. Rev. Econ. Statist. 71, 284-290.
CONFERENCE BOARD OF ASSOCIATED RESEARCH COUNCILS (1982) An Assessment of Research-Doctoral Programs (Edited by JONES, L.V., HARMON, L. and COGGESHALL, P.E.). Washington, DC: National Academy Press.
GARVIN, D.A. (1980) The Economics of University Behavior. New York: Academic Press.
GRAVES, P.E., MARCHAND, J.R. and THOMPSON, R. (1982) Economics departmental rankings: research incentives, constraints, and efficiency. Am. Econ. Rev. 72, 1131-1141.
JORDAN, J.M., MEADOR, M. and WALTERS, S.J.K. (1989) Academic research productivity, department size and organization: further results. Econ. Educ. Rev. 8, 345-352.
ROOSE, K.D. and ANDERSEN, C.J. (1970) A Rating of Graduate Programs. Washington, DC: American Council on Education.