Economics of Education Review 29 (2010) 842–856
The effects of accountability on higher education

Marcelo Rezende

Board of Governors of the Federal Reserve System, 20th and Constitution Avenue NW, Washington, DC 20551, USA

Article history: Received 5 June 2008; Accepted 1 March 2010.
JEL classification: I20; I28.
Keywords: Economic impact; Educational economics

Abstract

This paper analyzes the effects of a higher education accountability system in Brazil. For each discipline, colleges were assigned a grade that depended on the scores of their students on the ENC, an annual mandatory exam. These grades were then disclosed to the public, and colleges were rewarded or penalized based on them. I find that the ENC had a positive effect on faculty education and on the proportion of full-time faculty, and that it increased the numbers of vacancies offered, applicants and enrollments. Colleges were affected differently depending on grades, ownership and academic organization, changing the distribution of students among them.
1. Introduction

Accountability systems have become widespread among policies intended to improve school quality. They typically require students to take a standardized test. Grades aggregated by school are then disclosed to the public, and schools are rewarded or penalized depending on them. The impact of these policies on school quality is twofold. On one hand, rewards and penalties have a direct effect on incentives to invest in quality. On the other hand, grades inform applicants, thereby increasing demand for high-quality schools and driving low-quality ones from the market, which provides additional incentives to invest in quality.1 Examples of these policies are found mostly in the United States, in particular since the No Child Left Behind Act of 2001, but they exist in other countries too. In contrast with its widespread application in primary and secondary education, and despite the strength of the incentives it provides, accountability is rarely applied to higher education.
E-mail address: [email protected].
1 Accountability usually implies information disclosure because penalties or rewards are based on school performance, which is publicly announced. For this reason the effects of information disclosure and of accountability itself may become indiscernible.
For this reason, empirical analysis of the effects of accountability on higher education is also scarce compared to research on primary and secondary education. However, research on this subject is particularly important given the returns to higher education.2 Moreover, it has become especially timely since a federal commission was established to consider the expansion of standardized testing into higher education in the United States (Arenson, 2006). This paper analyzes the effect of an accountability policy on Brazilian colleges. I study the consequences of the National Exam of Courses (ENC henceforth), created in 1996 by the Brazilian Ministry of Education.3 The ENC was an annual exam, mandatory for every senior college student in a certain list of majors.4 For each discipline, colleges were assigned a grade, ranging from A to E, which depended on the average score of their students. These grades were then published by the Ministry of Education and widely publicized in the media, thus giving applicants information about college quality.
2 Card (1999) and Katz and Autor (1999) survey the literature on returns to schooling and wage structure and discuss the research on the returns to higher education.
3 ENC is the acronym of Exame Nacional de Cursos in Portuguese.
4 Unless otherwise noted, I refer to colleges, universities and other higher education institutions generally as colleges.
Besides informing applicants, the ENC belonged to an accountability system that rewarded and penalized colleges depending on their performance. Colleges with good grades would be allowed to admit more students and open new campuses, while those with bad grades would be temporarily prohibited from admitting new students. However, the system was weakened because its main threat – suspending the accreditation of college–discipline pairs – was not enforced. Threatened colleges obtained exemptions from those standards in court, so the consequences of poor performance on the exam were not as drastic as initially planned (Souza, 2005). Thus, the ENC was considered mainly a mandatory information disclosure policy.5
In this paper I study the impact of the ENC on faculty characteristics, vacancies, applications, and enrollments using a unique panel dataset in which the unit of observation is a college–municipality–discipline–year quartet. The effect is measured by comparing college–municipality–discipline triplets before and after the ENC covers their discipline, which happened in different years for different disciplines. The empirical results suggest that the ENC had a positive effect on faculty characteristics, consistent with other research on the topic (Pitoli & Picchetti, 2005) and with the incentives it provided. Colleges were encouraged to invest in the quality of their faculties both because of the indirect effect on ENC grades, whereby a better faculty might translate into better teaching and hence better grades, and, more likely, because the Ministry of Education considered faculty quality part of the criteria for applying the penalties mentioned above. Indeed, I find that the ENC caused colleges to increase the percentage of faculty members with Ph.D.s by 1.19 percentage points, a large effect considering that the sample mean is 20.05%. Similar results are found for other measures of faculty education. The percentage of full-time teachers also increased by 1.03 percentage points from its mean of 41.52%.
The ENC had a substantial impact on the number of vacancies offered, applications and enrollments, and this impact depended heavily on colleges' academic organization and on the grade received. For instance, colleges that offered a single discipline increased the number of vacancies by more than 30% after receiving a grade of A, and this effect decreased with declining grade, reaching 7.8% for a grade of E. Universities, the most complex institutions, also responded positively to the ENC, but the impact was actually larger the worse the grade: they increased the number of vacancies after receiving grades of A and E by 12.0% and 30.7%, respectively. Applicants and enrollments followed a similar pattern.
5 As the Minister of Education in the period of the ENC, Paulo Renato Souza, observed: “The most important thing, without a doubt, was the reaction of society to the information about the evaluation of higher education institutions.” (Souza, 2005, page 165, author’s translation from Portuguese).
The number of applicants to universities increased by 45.3% and 70.6% in case of grades of A and E, respectively. The number of applicants to single-discipline colleges increased by 36.5% in case of a grade of A, declined monotonically the lower the grade until a grade of D, reaching 24.2%, and then increased again in case of a grade of E. Enrollments increased by 18.8% in universities receiving a grade of A and by 36.9% in those receiving a grade of E. For single-discipline colleges, enrollments increased by 36.6% in case of a grade of A and by 19.1% in case of a grade of E. In summary, with the exception of universities, the coefficients across different types of academic organization declined the lower the grade for the three dependent variables considered.
This paper is directly related to the literature on school accountability systems. Examples of these policies and research on their effects include Chicago Public Schools' accountability policy (Jacob, 2005; Neal & Schanzenbach, in press), Chile's 900 Schools Program (Chay, McEwan & Urquiola, 2005), the Dallas School Accountability and Incentive Program (Ladd, 1999), Florida's A+ Plan (Figlio & Rouse, 2006) and FCAT (Figlio, 2006), Virginia's Standards of Learning (Figlio & Winicki, 2005), the Wisconsin Knowledge and Concepts Examination (Sims, 2008), and the analyses of different policies by Kane and Staiger (2002), Hanushek and Raymond (2005), Springer (2008) and, for an anonymous state, by Donovan, Figlio and Rush (2006). This research, however, has been restricted to policies applied to elementary and secondary education, and I contribute to it by studying the impact of a college accountability system. The ENC disclosed information about colleges to applicants, and therefore this paper is also related to research on the effects of college rankings. For instance, public college resources are found to increase with information provided by college rankings (Jin & Whalley, 2007). I contribute to this literature by analyzing the impact of information on vacancies, applications and enrollments.6
This paper is organized as follows. Section 2 describes the Brazilian college market and the ENC. Section 3 presents the data. Section 4 analyzes the effect of the ENC on faculty characteristics. Section 5 studies its impact on the number of vacancies, applicants and enrollments. Section 6 concludes.

2. The Brazilian college market and the ENC

2.1. The Brazilian college market in the mid-nineties

The Brazilian college market has grown significantly since the mid-nineties, both in the number of students and in the number of colleges, as shown in Figs. 1 and 2.7 College enrollment stayed at roughly the same level from the early eighties until the mid-nineties, but then more than doubled within a decade. The number of colleges evolved similarly.
6 This paper contributes more generally to the literature on the impact of information disclosure about product quality. It has identified substantial effects of information disclosure on the quality of salad dressings (Mathios, 2000) and restaurant hygiene levels (Jin & Leslie, 2003), on the prices of child care services (Chipty & Witte, 1998), on the decisions of doctors and hospitals to treat severely ill patients (Dranove et al., 2003), and on consumer learning about the quality of HMOs (Dafny & Dranove, 2008).
7 The data in these figures are taken from various issues of the Census of Higher Education prepared by the Ministry of Education.
Fig. 1. College enrollment (millions).

Fig. 2. Number of institutions.

After remaining constant until the mid-nineties, the number of private colleges almost tripled in a decade, while the number of public ones remained roughly constant. Several factors contributed to this growth, but the expansion of high-school education and the deregulation of the college market after 1994 were critical. The number of students graduating from high school grew from 639 thousand in 1990 to 1.836 million in 2000. Most of this increase was concentrated in public schools. During the nineties, public spending on education increased and some states and municipalities eliminated grade repetition or restricted the years of school in which students could flunk.8 These reforms were exogenous to changes in the college market and increased the number of students graduating from high school, which in turn contributed to the expansion of college education.
The college market grew mostly, however, due to a regulatory reform in the mid-nineties. Before that reform, entry and expansion in this market were regulated by the Federal Council of Education (CFE).9 The CFE was responsible for allowing new colleges to open and for allowing incumbents to change their academic organization status, offer new majors and increase the maximum number of students they could admit.10 Universities did not need CFE approval to add new disciplines or vacancies; for other institutions, such changes could occur only if the CFE approved them or granted them university status. The CFE, however, limited the number of institutions offering the same discipline within geographic areas, required institutions to prove that there was excess demand when they requested approval to offer new disciplines or to increase the number of vacancies, and processed these requests very slowly.11 The CFE was closed in 1994 due to suspicions that it restricted entry into the market in favor of incumbents. In 1995 it was replaced by the National Council of Education (CNE).12 Regulation shifted from authorizations to open or increase the number of disciplines or vacancies towards periodic accreditation and accountability. The new rules simplified and made less stringent the requirements for institutions to open or increase the number of disciplines or vacancies, and they required every major program in every institution to be periodically reaccredited, usually every 3 years, based on evaluations of quality that used the ENC as one of the main criteria.13 Additional changes in the regulation of higher education in Brazil relaxed restrictions on the curricula of disciplines, allowing more differentiation among colleges and consequently fostering entry and expansion in the market even further.14
8 The state of São Paulo, which has the most colleges in the country, is among them.
9 Conselho Federal de Educação in Portuguese.
10 From the most complex to the simplest, colleges are classified by academic organization as universities, university centers, integrated colleges, colleges, institutes of higher education, and centers of technological education.
2.2. The ENC

In order to measure the quality of colleges in this newly deregulated market, the Brazilian government created the ENC in 1996. The ENC was an annual exam, mandatory for every college senior majoring in a specific list of disciplines. Students were required to take the exam in order to obtain their degrees, although the granting of the degree did not depend on the ENC score.15 Incentives for students to perform well on the exam were twofold. First, they could show their scores when applying for jobs or to graduate schools. Second, starting in 2001, students with the highest performance in each discipline were awarded graduate fellowships.
The number of disciplines covered by the exam increased gradually, from three in the first year, 1996, to 26 in the last year, 2003. Table 1 shows the disciplines evaluated and the respective numbers of college–discipline pairs with valid grades. The initial choice of disciplines was based on the Ministry of Education's assessment of the tradition and importance of careers, the resistance to the exam across disciplines, and the number of students in those disciplines. For these reasons, business administration, civil engineering, and law were already chosen in the first year (Souza, 2005). In the following years the Ministry of Education also favored health-related disciplines, such as Medicine and Nursing, as well as those responsible for the education of school teachers, such as Mathematics and Education. Although the choice of disciplines followed these general principles, the choices were not known by either colleges or applicants until shortly before they became effective.
11 In 1995, there were around five thousand pending applications for new disciplines and more than one hundred for changes to university status.
12 Conselho Nacional de Educação in Portuguese.
13 See Law 9131 of November 24, 1995.
14 See Law 9394 of December 20, 1996.
15 Students who did not take the ENC in their senior year needed to take the exam in the next year in order to obtain their degrees.
Table 1
Number of observations with valid scores.

| Discipline | 1996 | 1997 | 1998 | 1999 | 2000 | 2001 | 2002 | 2003 | Total |
| Business administration | 305 | 345 | 371 | 418 | 441 | 492 | 606 | 744 | 3722 |
| Civil engineering | 90 | 104 | 105 | 110 | 117 | 123 | 128 | 133 | 910 |
| Law | 171 | 193 | 207 | 220 | 249 | 272 | 297 | 331 | 1940 |
| Chemical engineering | – | 42 | 43 | 44 | 46 | 47 | 49 | 50 | 321 |
| Dentistry | – | 82 | 81 | 85 | 90 | 102 | 113 | 127 | 680 |
| Veterinary medicine | – | 37 | 38 | 43 | 50 | 59 | 76 | 82 | 385 |
| Electrical engineering | – | – | 77 | 82 | 86 | 88 | 95 | 110 | 538 |
| Journalism | – | – | 79 | 92 | 97 | 113 | 130 | 153 | 664 |
| Languages | – | – | 342 | 361 | 375 | 412 | 463 | 500 | 2453 |
| Mathematics | – | – | 264 | 275 | 277 | 309 | 314 | 360 | 1799 |
| Economics | – | – | – | 163 | 173 | 174 | 179 | 198 | 887 |
| Mechanical engineering | – | – | – | 70 | 72 | 73 | 77 | 82 | 374 |
| Medicine | – | – | – | 81 | 81 | 83 | 86 | 90 | 421 |
| Agronomy | – | – | – | – | 68 | 72 | 73 | 82 | 295 |
| Biology | – | – | – | – | 220 | 253 | 285 | 294 | 1052 |
| Chemistry | – | – | – | – | 99 | 102 | 112 | 128 | 441 |
| Physics | – | – | – | – | 65 | 68 | 79 | 85 | 297 |
| Psychology | – | – | – | – | 117 | 123 | 135 | 156 | 531 |
| Education | – | – | – | – | – | 459 | 554 | 690 | 1703 |
| Pharmacy | – | – | – | – | – | 84 | 106 | 122 | 312 |
| Accounting | – | – | – | – | – | – | 396 | 448 | 844 |
| Architecture | – | – | – | – | – | – | 96 | 108 | 204 |
| History | – | – | – | – | – | – | 279 | 274 | 553 |
| Nursing | – | – | – | – | – | – | 144 | 158 | 302 |
| Geography | – | – | – | – | – | – | – | 224 | 224 |
| Phonoaudiology | – | – | – | – | – | – | – | 69 | 69 |
| Total | 566 | 803 | 1607 | 2044 | 2723 | 3508 | 4872 | 5798 | 21,921 |
New disciplines were announced every year in June, 1 year before the exam in which they were introduced; the exam itself was also held in June. The ability of colleges to respond in anticipation of the inclusion of new disciplines in the ENC was therefore limited. Although the general principles described above might have been known to colleges, the exact list of disciplines that would eventually be covered, and the timing of that coverage, was never known in advance, leaving colleges with no precise information to motivate such responses. In any case, these concerns may affect the interpretation of my empirical results, and I return to them in Section 4.
The content of the ENC was specific to each discipline. The exam was offered annually on a single day in June, lasted 4 h and contained essay and multiple-choice questions. In December, students received a document by mail containing their scores on the essay section, the multiple-choice section and the exam as a whole. To facilitate comparison, the document also included the average scores of students in Brazil, the region, the state and the college. Colleges were assigned a grade for each discipline they offered, ranging from A to E, depending on their students' average score.16 These grades were then published by the Ministry of Education in December and widely publicized in the media, mostly in newspapers. December was early enough to guide students, who would typically be finishing applications by then and making their enrollment decisions in the next couple of months. Newspapers usually spread information about the grades both through their own articles and through paid advertisements from highly ranked private institutions. Newspaper coverage of the ENC typically discussed the performance of colleges in their states, explicitly mentioning their grades.17
Besides informing applicants, the ENC was part of an accountability system that rewarded and penalized colleges depending on their performance. College–discipline pairs that obtained grades of A or B were less constrained in increasing the number of students admitted and in opening new campuses. On the other hand, college–discipline pairs that received grades of D or E for 3 consecutive years and were considered unqualified in two out of three criteria (faculty qualification, educational project and facilities) would not be allowed to admit new students until they improved their quality.18
16 From 1996 until 2000, college–municipality–discipline triplets were assigned A, B, C or D if their averages were above the 88th, 70th, 30th, or 12th percentiles, respectively, and E if they were below the 12th percentile. From 2001 on, these bounds were replaced by the sample mean plus one standard deviation, the mean plus one half standard deviation, the mean minus one half standard deviation, and the mean minus one standard deviation, respectively.
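To make the grading rule in footnote 16 concrete, the sketch below implements both cutoff schemes. It is a minimal illustration under my own naming, not the Ministry's actual procedure; the function name and the input Series are hypothetical.

```python
# Minimal sketch of the ENC grading rule in footnote 16 (hypothetical names).
# avg_scores: one average ENC score per college-municipality-discipline triplet.
import pandas as pd

def assign_grades(avg_scores: pd.Series, year: int) -> pd.Series:
    if year <= 2000:
        # 1996-2000: grade cutoffs at the 12th, 30th, 70th and 88th percentiles.
        cuts = avg_scores.quantile([0.12, 0.30, 0.70, 0.88]).tolist()
    else:
        # 2001-2003: cutoffs at the mean minus one, minus one half, plus one
        # half and plus one standard deviation.
        m, s = avg_scores.mean(), avg_scores.std()
        cuts = [m - s, m - 0.5 * s, m + 0.5 * s, m + s]
    # Lowest bin is E, highest is A.
    return pd.cut(avg_scores, bins=[-float("inf")] + cuts + [float("inf")],
                  labels=["E", "D", "C", "B", "A"])
```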
17 As an example of the reach of news about the ENC and of the effectiveness of newspapers in informing readers about it, a few weeks before the exam was offered for the first time, Folha de São Paulo, the newspaper with the largest circulation in the country, published a survey in which 95% of its readers answered that they were adequately informed about the ENC (Souza, 2005). The percentage of applicants who read newspapers is large: only 7.5% of students who took the ENEM (Exame Nacional do Ensino Médio, or National Secondary Education Examination, a standardized test used in college applications) in 2003 answered in its questionnaire that they did not read newspapers, and 24% answered that they read them frequently (Instituto Nacional de Estudos e Pesquisas Anísio Teixeira, 2003).
18 In 1998, 101 college–discipline pairs in Business Administration, Civil Engineering and Law were in this situation but eventually corrected their deficiencies.
If they did not, they would not be reaccredited and would then be closed.19 This accountability system, however, was weakened because its main threat – suspending the accreditation of college–discipline pairs – was not enforced. In 2001, the Ministry of Education suspended the accreditation of 12 colleges, but they all obtained court injunctions entitling them to maintain their status (Souza, 2005). Moreover, the number of colleges in this situation was small, suggesting that this threat was remote. Thus, the role of the ENC – other than disclosing information – was weakened.
An important issue for analyzing the impact of the ENC is whether colleges' responses to the regulatory changes discussed in the previous subsection are correlated with the dates the ENC started covering the respective disciplines. The impact of the ENC is identified by comparing college–municipality–discipline triplet characteristics before and after the exam started covering the respective discipline. If the timing of the ENC and these responses are correlated, then this identification strategy is not valid. The characteristics of the reform, however, suggest that this is not the case. Unlike the ENC itself, the regulatory changes did not formally discriminate across disciplines, although the Ministry of Education was stricter when authorizing and accrediting programs in law, education, engineering and health-related disciplines. Moreover, even though the regulatory changes did not differentiate among disciplines, their effects on entry and expansion would not necessarily be the same if the previous constraints affected supply differently across disciplines. For instance, if these constraints were binding for veterinary medicine but not for chemical engineering, then one might observe entry and expansion of colleges offering the former but not the latter, even if the regulatory changes were the same for both disciplines. However, even though disciplines might respond differently to the same regulatory changes, these responses should not coincide over time and across disciplines with the implementation of the ENC and its own effects. The ENC started covering different disciplines over an interval of 7 years, and such a coincidence would therefore require colleges and applicants to respond to the regulatory reforms in synchronicity with the ENC across the same disciplines over the same time interval, which is highly unlikely. The timing of the ENC thus allows its effects to be identified separately from those of the other regulatory changes.

3. The data

The data used here are publicly provided by the Ministry of Education of Brazil. The unit of observation is a college–municipality–discipline–year quartet. Two different sets of dependent variables are used to measure faculty and student body characteristics. Faculty education is measured by the fractions of Ph.D.s, M.A.s, B.A.s with specialization courses, and B.A.s without specialization courses.
19 The Ministry of Education guaranteed that students from colleges in this situation would be allowed to transfer to other colleges.
The data also report the distribution of the faculty by the number of weekly hours they are employed by the college: the fraction employed for 40 h (full time), for 20–39 h, and for up to 19 h. Faculty education and the fraction of full-time faculty – or of faculty with more time devoted to the college – are assumed to be positively related to college quality.20 These data are available from 1999 to 2002 only. Data on the student body, namely the number of vacancies offered, applicants and enrollments, range from 1991 to 2005. Thus, for any discipline, these data include at least 5 years of observations before the ENC started and 2 years after it ended.21 Table 2 summarizes these data.
The data on the ENC contain grades and related variables from 1996 to 2003. The grade itself is given for each observation with a valid grade, ranging from A to E. Some observations were not assigned a valid grade by the Ministry of Education, either due to a small number or a low percentage of graduating students taking the exam. Each observation also includes the number of students taking the exam and this number as a percentage of the total number of graduating students. This percentage is an important piece of the data because it accounts for one mechanism colleges might use to game this accountability system: similar to what has been documented for schools subject to accountability, colleges may select only their best students to take the ENC in order to obtain a better grade than if all their students took it.22 However, it is highly unlikely that this practice was common, since students needed to take the exam in order to obtain their degrees.23 In fact, anecdotal evidence suggests that this sort of gaming was uncommon and therefore should not affect the estimations. On average, exam attendance was very high (97.39%), and the cases in which it was low were mostly due to student protests against the ENC, unrelated to colleges' initiatives. Moreover, whether used to restrict the sample or included directly in the regressions, this percentage does not significantly affect the results.
20 Evidence on the effects of faculty composition on student performance, however, is ambiguous. Ehrenberg and Zhang (2005) provided evidence that graduation rates are negatively affected by the fraction of part-time and non-tenure-track faculty employed at 4-year colleges. Bettinger and Long (in press) studied the impact of adjunct instructors, compared to full-time faculty members, on course completion rates in 4-year public colleges in Ohio. They found that it can be positive or negative, depending on the field of study. Hoffmann and Oreopoulos (2009) found that, in a sample from a Canadian university, students' grades and course dropout rates were not affected by objective characteristics of faculty members such as whether they teach full time or part time, time devoted to research, tenure or salary.
21 Colleges are obliged to announce the number of vacancies offered for each discipline, and students must choose their major when applying to college. Thus, vacancies offered, applicants and enrollment are measured separately for each distinct college–municipality–discipline triplet. See Law 9394 of December 20, 1996.
22 See Haney (2000), Deere and Strayer (2001), Figlio and Getzler (2002), Figlio (2006), Jacob (2005), and Cullen and Reback (2006) for evidence of schools changing the pool of students taking exams in order to obtain better grades.
23 Indeed, in 1998 a few colleges were accused of restricting their pool of students taking the ENC. However, these cases were detected before the exam date and the excluded students were able to register and take it (Souza, 2005).
Table 2
Summary statistics of individual-level variables.

| Variable | Mean | Standard deviation | Minimum | Maximum | Number of observations |
| Number of senior students present | 77.49 | 100.15 | 0 | 3730 | 12,858 |
| Percentage of senior students | 97.39 | 11.67 | 0 | 100 | 12,857 |
| Number of faculty members | 43.42 | 45.36 | 1 | 831 | 9268 |
| Fraction of Ph.D.s | 20.05 | 22.70 | 0 | 100 | 9269 |
| Fraction of M.A.s | 35.60 | 18.19 | 0 | 100 | 9269 |
| Fraction of specialized | 34.71 | 24.52 | 0 | 100 | 9269 |
| Fraction of 40 h faculty | 41.52 | 35.90 | 0 | 100 | 9269 |
| Fraction of 20–39 h faculty | 21.87 | 19.12 | 0 | 100 | 9269 |
| Vacancies | 123.11 | 137.84 | 0 | 5425 | 20,930 |
| Applicants | 636.84 | 1120.94 | 0 | 32,623 | 20,930 |
| Enrollments | 100.29 | 110.43 | 0 | 2786 | 20,916 |
The data on the ENC grades and faculty characteristics have two important limitations. First, they do not distinguish campuses in the same municipality: if a college has more than one campus24 offering the same discipline in a municipality, then the data from the different campuses are aggregated into a single observation.25 Second, data on faculty characteristics are available only for disciplines already covered by the ENC and for college–municipality–discipline–year quartets with students graduating in the corresponding year. Thus, these data are not available for college–municipality–discipline triplets that already had students but did not yet have a senior class; and even for triplets that already had a senior class, the data are available only if the respective discipline was already covered by the ENC. This matters because the impact of the ENC on faculty characteristics is identified by comparing them before and after it covers a discipline. However, since the ENC grades were released only in December – the end of the academic year in Brazil – observations from the first year a discipline is covered actually contain characteristics from before grade disclosure. Thus, the impact of the ENC on faculty characteristics can be estimated by comparing first-year observations with subsequent ones.26
A secondary dataset contains municipality-level observations of important control variables. It contains annual observations from 1991 to 2005 on colleges and from 1995 to 2005 on high-school students graduating in a given year. Information on colleges includes their number per ownership category and academic organization.27 The data on high-school students graduating per year divide them between public and private high-school students.28
24 Most students attend college in Brazil in their home municipalities or nearby ones, commuting from their residences. Moreover, most students live with their families and very few live on campus.
25 For a few quartets, the dataset reports two observations for the same quartet. Since it does not contain information about the distinction between them, I dropped those observations from the sample. The results, however, are robust to the inclusion of those observations.
26 The data on vacancies, applicants and enrollments are not limited by any of these restrictions.
27 The ownership categories separate colleges into for-profit private, nonprofit private, federal, state, and municipal colleges.
28 The data on high-school students start after the data on vacancies, applicants and enrollments. Moreover, information on high-school students was missing for a significant number of municipality–year pairs. However, as shown in Section 5, the regression results obtained when the sample is restricted to observations with information on high-school students are roughly the same as those obtained with the unrestricted sample.
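As a concrete picture of the data structure just described, the sketch below assembles the estimation panel; the file and column names are hypothetical, since the paper does not document them.

```python
# Hypothetical sketch: one row per college-municipality-discipline-year quartet,
# merged with the secondary municipality-level dataset of control variables.
import pandas as pd

quartets = pd.read_csv("enc_quartets.csv")           # outcomes, grades, faculty data
controls = pd.read_csv("municipality_controls.csv")  # colleges by ownership, HS graduates

panel = quartets.merge(controls, on=["municipality_id", "year"], how="left")

# Footnote 25: a few quartets appear twice with no distinguishing information;
# following the paper, drop both copies of any duplicated quartet.
keys = ["college_id", "municipality_id", "discipline", "year"]
panel = panel[~panel.duplicated(keys, keep=False)]
```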
4. The effect of the ENC on faculties' characteristics

4.1. Empirical method

The impact of the ENC is identified by comparing college–municipality–discipline triplets' characteristics before and after it started covering a discipline. I control for state–year and college–municipality–discipline fixed effects. This approach mitigates the potential biases in the estimates of the impact of the ENC caused by any time-specific or discipline-specific unobservable variables that may affect both the start of the ENC and faculty characteristics.29 The regressions in this section have the basic form:

$$Y_{ijkt} = \alpha + \beta\,ENC_{kt} + \gamma X_{jt} + \theta_{ijk} + \delta_t + \varepsilon_{ijkt} \qquad (1)$$

where $Y_{ijkt}$ is a measure of faculty characteristics of college $i$ in municipality $j$ in discipline $k$ and year $t$; $\alpha$, $\beta$ and $\gamma$ are the coefficients to be estimated; $ENC_{kt}$ is a dummy variable that equals one if discipline $k$ is covered by the ENC in period $t$ and zero otherwise; $X_{jt}$ is a vector containing the data on colleges and high-school students, which varies across municipalities and over time; $\theta_{ijk}$ and $\delta_t$ are college–municipality–discipline and time fixed effects, respectively; and $\varepsilon_{ijkt}$ represents a college–municipality–discipline–year specific unobservable error.30 The main coefficient of interest is $\beta$, which captures the effect of the ENC on faculty characteristics. In some specifications, the $ENC_{kt}$ dummy variable is replaced by a five-dimensional vector of dummy variables in order to take into account the grades received on the ENC.
29 For example, an increasing skill premium or increasing college competition may have fostered both the creation of the ENC and the improvement of faculty education over time. Analogously, the relative values of bachelor's degrees in different disciplines may have determined both the year the ENC started covering these disciplines and the moment the quality of their faculties improved. Indeed, as discussed in Section 2, these factors may actually have contributed to the creation of the ENC.
30 The dataset also includes the academic organization of the institution, the number of its students taking the exam and that number as a percentage of the total number of its students graduating. However, academic organization varies minimally in the samples employed, and the other two variables do not greatly affect the estimation results. Therefore, these data were not used in the regressions presented in the paper.
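A minimal sketch of how specification (1) could be estimated, assuming a pandas DataFrame with the hypothetical column names below; it illustrates the fixed-effects structure with the linearmodels package and is not the author's actual code.

```python
# Sketch of Eq. (1): triplet fixed effects (theta_ijk), state-year fixed effects
# (which subsume the time effects delta_t), and standard errors clustered by
# municipality-year pairs. All names are hypothetical.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("enc_panel.csv")
df["triplet"] = (df["college_id"].astype(str) + "-"
                 + df["municipality_id"].astype(str) + "-" + df["discipline"])
df["state_year"] = df["state"] + "-" + df["year"].astype(str)
df["muni_year"] = df["municipality_id"].astype(str) + "-" + df["year"].astype(str)
df = df.set_index(["triplet", "year"])  # entity and time index for PanelOLS

# enc = 1 if discipline k is covered by the ENC in year t (the beta of interest);
# hs_graduates and n_colleges stand in for the municipality-level controls X_jt.
mod = PanelOLS.from_formula(
    "pct_phd ~ enc + hs_graduates + n_colleges + C(state_year) + EntityEffects",
    data=df,
    drop_absorbed=True,  # drop any regressor absorbed by the fixed effects
)
res = mod.fit(cov_type="clustered", clusters=pd.factorize(df["muni_year"])[0])
print(res.params["enc"], res.std_errors["enc"])  # beta-hat and its clustered s.e.
```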
Table 3
The impact of the ENC on faculty education and weekly hours.

| | (1) Ph.D. | (2) M.A. or Ph.D. | (3) Specialization, M.A. or Ph.D. | (4) 40 h | (5) More than 20 h | (6) Ph.D. | (7) M.A. or Ph.D. | (8) Ph.D. |
| ENC | 1.185 (0.255)*** | 1.112 (0.398)*** | 0.733 (0.250)*** | 1.030 (0.479)** | 0.871 (0.551) | 0.480 (0.301) | 2.527 (0.519)*** | 0.528 (0.450) |
| ENC*federal | | | | | | 2.333 (0.645)*** | −5.046 (0.765)*** | 2.290 (0.657)*** |
| ENC*state | | | | | | 0.626 (0.667) | −1.925 (1.052)* | 0.622 (0.669) |
| ENC*municipal | | | | | | 1.676 (1.099) | 1.981 (2.482) | 1.725 (1.107) |
| B-grade | | | | | | | | 0.108 (0.396) |
| C-grade | | | | | | | | −0.025 (0.394) |
| D-grade | | | | | | | | −0.129 (0.415) |
| E-grade | | | | | | | | −0.355 (0.498) |
| R-squared | 0.958 | 0.924 | 0.806 | 0.936 | 0.894 | 0.958 | 0.924 | 0.958 |
| Number of college–municipality–discipline triplets | 3488 | 3488 | 3488 | 3488 | 3488 | 3488 | 3488 | 3488 |
| Number of observations | 9268 | 9268 | 9268 | 9268 | 9268 | 9268 | 9268 | 9268 |

Note: All specifications include municipality-level regressors and state–year and college–municipality–discipline fixed effects. Observations are clustered by municipality–year pairs. Standard errors in parentheses. * Denotes significance at the 90% level. ** Denotes significance at the 95% level. *** Denotes significance at the 99% level.
In this vector, the five variables equal one if the college–municipality–discipline triplet was assigned a grade of A, B, C, D or E, respectively, and zero otherwise; the case in which a discipline or triplet was not covered is left as the control group.

4.2. Results

This subsection presents the estimates of the impact of the ENC on faculty characteristics. The first five columns of Table 3 correspond to alternative measures of faculty quality: the percentages of faculty members with (1) Ph.D.s, (2) M.A.s or Ph.D.s, and (3) specialization courses, M.A.s or Ph.D.s, and the percentages of faculty members employed at the college for (4) 40 weekly hours and (5) at least 20 weekly hours. Columns 6 and 7 use the same dependent variables as columns 1 and 2 but also include interaction terms between the ENC and college ownership among the regressors. Column 8 repeats the specification in column 6 and also includes the ENC grade in the previous year as a regressor. The specifications include the municipality-level variables described above as well as state–year and college–municipality–discipline fixed effects among their independent variables. Standard errors are clustered by municipality–year pairs. I also restrict the data to college–municipality–discipline triplets with valid grades in every year the ENC covered the corresponding discipline.31
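The grade-dummy regressors described above can be constructed as in the following sketch; the column names are hypothetical.

```python
# Sketch: one dummy per ENC grade (A-E). Observations whose discipline is not
# yet covered by the ENC carry a missing grade, so all five dummies are zero
# there, leaving "not covered" as the omitted control group.
import pandas as pd

grade_dummies = pd.get_dummies(df["enc_grade"], prefix="grade")  # grade_A ... grade_E
df = df.join(grade_dummies)
```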
31 Colleges not satisfying this requirement are either entrants, colleges that did not have students graduating in one of the years the ENC was required, or colleges that had their students tested but did not obtain a valid grade, most likely because they had few students taking the exam. This restriction to the data is justified because these colleges were typically changing their characteristics – in particular those related to their faculty – for reasons other than the ENC, and therefore add noise to the estimation. Moreover, this restriction does not significantly affect my estimates.
The coefficient I am mainly interested in is the one on the ENC. It is positive and statistically significant in the first three columns, indicating that the ENC increased faculty education. The estimate in column 1 implies that the ENC increased the percentage of faculty members with Ph.D.s by 1.19 percentage points. This is a large effect considering that the sample mean is 20.05% (see Table 2). The impact of the ENC on other measures of faculty education is also significant, although smaller. According to column 2, the ENC causes the percentage of faculty members with M.A. or Ph.D. degrees to increase by 1.11 points, which is still relevant considering that the sample mean is 55.65%. It also causes the percentage of faculty members with specialization courses, M.A.s or Ph.D.s to increase by 0.73 points according to column 3, where the mean is 90.36%. Columns 4 and 5 show the results when the dependent variable is the percentage of faculty members by number of weekly hours of employment. Column 4 shows that the percentage of faculty members employed for 40 h (full time) increases by 1.03 percentage points, which is significant considering that the mean is 41.52%. The effect of the ENC is similar for faculty employed for at least 20 h, according to column 5.32 The ENC therefore had a significant impact on different measures of faculty quality.
32 Analogous but weaker results were obtained when the number of weekly hours spent teaching was used as the dependent variable. These results, not shown in the paper, are available upon request.
This impact is consistent both with the effects of information disclosure, which gave colleges incentives to improve because applicants were able to identify college quality, and with the effects of accountability, which provided similar incentives because rewards and penalties depended on grades and on faculty qualifications.
Columns 6 and 7 repeat the specifications of columns 1 and 2, respectively, but also include interaction terms between the ENC and the ownership dummies. The latter refer to federal, state, and municipal colleges, with private colleges as the reference group. The coefficients on the ENC and on the interaction terms indicate that ownership shapes colleges' reactions to the exam. The coefficients on the ENC are positive but differ in magnitude from their counterparts in columns 1 and 2. The coefficients on the interaction terms for federal and state colleges are positive when the dependent variable is the percentage of Ph.D.s (column 6) and negative when it is the percentage of M.A.s or Ph.D.s (column 7). The coefficients on the interaction term for federal colleges in particular are larger and statistically significant in both columns. The opposite signs of these coefficients are initially counterintuitive, but they are consistent with the salary policy the Ministry of Education implemented in federal universities during this period, which favored Ph.D.s over M.A.s.33
Column 8 adds the ENC grades in the previous year to the independent variables of the specification used in column 6. Dummy variables for grades of B, C, D and E are included in the regression, with the grade of A as the reference group. These grades are added to check whether the previous results are robust and to test whether the incentives to hire more or less educated faculty vary with the grade. The coefficients on the grades are not statistically significant, which is not surprising because this specification measures the impact of grades on faculty characteristics on an annual basis over a period of only 4 years. The annual frequency of grades may be too high for them to have a significant impact on faculty characteristics, considering that colleges may not have been able to respond to grades that quickly due to labor regulations, employment contracts and the characteristics of their own recruiting processes.
Two concerns may be raised about the interpretation of these results. First, new disciplines were announced 1 year before the exam in which they were introduced, and colleges might thus have responded to these announcements before a discipline started being covered by the ENC, biasing the results. Unfortunately, data on faculty characteristics are not available for earlier years, preventing me from controlling for such responses. However, a significant part of the reaction to the ENC should be concentrated in the year it starts covering the relevant discipline, because this is when grades actually started being published.
33 From 1994 until 2002 the salary of an entry-level Ph.D. increased by 121%, while the salary of an entry-level M.A. increased by 79%. These adjustments were the same across disciplines, since the salary structure is the same for every discipline in every federal university.
Moreover, an anticipated response should bias the coefficient of the ENC towards zero in a regression analysis. Thus, a positive and significant estimate of the effect of the ENC on measures of faculty quality should still be interpreted as evidence of a positive impact. Second, one might also question whether good faculty or effort was diverted from untested to tested areas. In this case, regression estimates would overstate the effects of the ENC because such diversion would lower the pre-accountability faculty characteristics. Diversion, however, should be limited because faculty members are typically not transferable across disciplines, since professors are usually required to have degrees in the discipline they teach.

5. The effects of the ENC grades on vacancies offered, applications, and enrollments

5.1. Empirical method

The regressions used to measure the impact of the ENC on vacancies, applications, and enrollments are the same as (1) but are now augmented with dummies for the ENC grades received by college–municipality–discipline triplets. In order to analyze how the relation between grades and academic organization affects colleges' and students' responses, in some specifications I also include interaction terms between these two sets of variables. Although ownership might also affect colleges' and students' responses to the ENC, the estimates obtained when interaction terms between grades and ownership were included were less meaningful than those from alternative specifications, so I do not report them here. In order to avoid excessive heterogeneity among disciplines, the sample is restricted to disciplines that were eventually covered by the ENC. Observations were also limited to college–municipality–discipline triplets that were already open in 1991 and that obtained valid grades in every year since the ENC started covering the respective discipline. Later in the paper I show that the results are robust to the inclusion of triplets that opened after 1991.

5.2. Results

Table 4 presents the regression results using the natural logarithms of the numbers of vacancies, applicants, and enrollments as dependent variables. The estimates in the first column indicate that the ENC increased the number of vacancies offered by colleges in general, but that this effect is stronger the lower the grade received. Although the expected effect of different grades on vacancies is ambiguous, because the best colleges may be reluctant to increase their student bodies, this result is still intriguing: low-grade colleges willing to increase their student bodies should find it harder to do so, simply because they should be in lower demand. However, the estimates in columns 2 and 3 suggest that applications – which depend the least on colleges' supply decisions – and enrollments also respond positively to worse grades. In order to better understand the impact of grades on these variables, I therefore now investigate how colleges with different academic organizations responded to the exam.
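As a reading aid (my addition, not part of the original): the dependent variables in Tables 4–7 are in logs, and the text reads a coefficient such as 0.322 directly as a 32.2% increase. For a dummy regressor this is the standard small-coefficient approximation of the exact implied percentage change:

$$\%\Delta = 100\,(e^{\hat{\beta}} - 1) \approx 100\,\hat{\beta} \quad \text{for small } \hat{\beta}; \qquad \text{e.g. } 100\,(e^{0.322} - 1) \approx 38.0\%.$$

The approximation understates the exact implied change, more so for the larger coefficients.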
Table 4
The impact of the ENC on vacancies, applications, and enrollments.

| | Vacancies | Applications | Enrollments |
| A-grade | 0.136 (0.026)* | 0.450 (0.049)* | 0.201 (0.036)* |
| B-grade | 0.195 (0.031)* | 0.497 (0.055)* | 0.214 (0.038)* |
| C-grade | 0.218 (0.024)* | 0.488 (0.042)* | 0.250 (0.032)* |
| D-grade | 0.318 (0.033)* | 0.580 (0.058)* | 0.341 (0.045)* |
| E-grade | 0.243 (0.036)* | 0.592 (0.059)* | 0.296 (0.044)* |
| R-squared | 0.658 | 0.589 | 0.528 |
| Number of college–municipality–discipline triplets | 316 | 316 | 316 |
| Number of observations | 9159 | 9143 | 9110 |

Note: The dependent variables are ln(vacancies), ln(applications), and ln(enrollment), respectively. All specifications include municipality-level regressors and state–year and college–municipality–discipline fixed effects. Observations are clustered by municipality–year pairs. Standard errors in parentheses. * Denotes significance at the 99% level.
In the next three tables I repeat the specification from Table 4, but replace the grade dummies with interaction terms between the grades and the five categories of academic organization. Table 5 presents regression results using vacancies as the dependent variable.34 The negative relation between grades and vacancies is now restricted to universities, suggesting that these institutions alone drive the results in Table 4. Indeed, the coefficients for all other categories decline the worse the grade, consistent with lower grades causing lower demand by applicants. This pattern repeats in the first columns of Tables 6 and 7, where the dependent variables are applicants and enrollments, respectively.
The estimates imply a substantial impact of the ENC on vacancies, applicants and enrollments that differs both across grades and across academic organizations. I focus the discussion in this section mostly on universities and single-discipline colleges because their coefficients were precisely estimated, they are numerous and they differ in complexity. Also, when there is no risk of confusion, I refer to single-discipline colleges simply as colleges. Universities, the most complex institutions, increased the number of vacancies by 12% when they received a grade of A, while those receiving a grade of D or E increased vacancies by at least 30%. Colleges, which are much smaller institutions and offer a single discipline, responded to a grade of A by increasing the number of vacancies by 32.2%, and increased them by less than 11% when they received grades of D or E. The number of applicants to universities increased by 45% in case of a grade of A and by more than 68% in case of a grade of D or E. The number of applicants to colleges increased by 36.5% in case of a grade of A and declined monotonically the lower the grade until a grade of D, where the response was 24.2%; strikingly, however, it increased by 39.8% in case of a grade of E.
34 Due to the small number of institutes of higher education, their respective coefficients were estimated only when the whole sample was used, in column 5.
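The grade-by-academic-organization interactions in Tables 5–7 amount to one dummy per (grade, organization) pair; a minimal sketch with hypothetical column names:

```python
# Sketch: dummies for each grade x academic organization cell, e.g.
# "A*university", "E*college". Rows with a missing grade (not yet covered)
# get all-zero dummies and again serve as the control group.
import pandas as pd

cell = df["enc_grade"].str.cat(df["academic_org"], sep="*")
df = df.join(pd.get_dummies(cell))
```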
This same pattern of declining responses from A to D, with a large response for E, is observed for university centers as well, possibly because institutions receiving a grade of E have a large variance in their student bodies over time for reasons unrelated to the ENC. With the exception of universities, the coefficients across different types of academic organization declined with lower grades. Enrollments followed a similar pattern: they increased by 18.8% in universities receiving a grade of A and by more than 36% in those receiving either a D or an E. For colleges, enrollments increased by 36.6% in case of a grade of A and by less than 20% in case of a grade of D or E.
The different impact of grades on universities can be explained by the fact that the best institutions in Brazil are mostly universities, and these traditionally limit the number of students in order to maintain the quality of the education offered, while the worst institutions are less concerned with quality. Therefore, institutions other than the best ones were more likely to admit more students after receiving bad grades, causing the negative relation found in the estimates. Indeed, as discussed in Section 2, one of the motivations for the ENC was precisely that future growth might be concentrated at low-quality institutions, or that it might occur at the expense of quality. Moreover, unlike other institutions, universities can increase the number of vacancies without approval from the Ministry of Education; other institutions that might have intended to admit more students were thus possibly constrained.
Two additional facts about universities may help to explain why grades affected them differently. First, they are on average larger and older, and therefore most of them, and their disciplines, already had an established reputation before the ENC. Thus they should respond the least to grades, because the grades might not add as much information about their quality as they do for other institutions. Second, on average universities also offer more disciplines. Thus, even if the ENC were equally informative for universities and other institutions, university applicants could use more grades from other disciplines to assess the quality of each discipline, and therefore the individual impact of a grade on its respective discipline might be lower for universities than for other institutions.
Table 5
The impact of the ENC on vacancies.

| | Baseline | Controls for programs | First grade spec. effects | Controls for HS students | Whole sample | Private only |
| A-grade*university | 0.120 (0.028)*** | 0.117 (0.028)*** | 0.125 (0.028)*** | 0.131 (0.029)*** | 0.159 (0.019)*** | −0.116 (0.469) |
| B-grade*university | 0.164 (0.030)*** | 0.163 (0.030)*** | 0.134 (0.030)*** | 0.177 (0.031)*** | 0.208 (0.021)*** | 0.558 (0.164)*** |
| C-grade*university | 0.218 (0.028)*** | 0.220 (0.028)*** | 0.181 (0.028)*** | 0.226 (0.030)*** | 0.227 (0.018)*** | 0.481 (0.107)*** |
| D-grade*university | 0.350 (0.042)*** | 0.354 (0.042)*** | 0.302 (0.042)*** | 0.367 (0.044)*** | 0.261 (0.029)*** | 0.750 (0.121)*** |
| E-grade*university | 0.307 (0.045)*** | 0.306 (0.045)*** | 0.259 (0.048)*** | 0.339 (0.047)*** | 0.243 (0.029)*** | 1.228 (0.142)*** |
| A-grade*university center | 1.368 (0.298)*** | 1.346 (0.298)*** | 1.311 (0.286)*** | 1.199 (0.327)*** | 0.483 (0.087)*** | 1.425 (0.379)*** |
| B-grade*university center | 0.666 (0.219)*** | 0.658 (0.220)*** | 0.657 (0.211)*** | 0.590 (0.220)*** | 0.364 (0.077)*** | 0.737 (0.247)*** |
| C-grade*university center | 0.495 (0.065)*** | 0.483 (0.066)*** | 0.488 (0.063)*** | 0.437 (0.073)*** | 0.335 (0.039)*** | 0.615 (0.085)*** |
| D-grade*university center | 0.612 (0.104)*** | 0.607 (0.105)*** | 0.608 (0.101)*** | 0.531 (0.126)*** | 0.280 (0.063)*** | 0.743 (0.134)*** |
| E-grade*university center | 0.683 (0.723) | 0.666 (0.743) | 0.638 (0.721) | 0.720 (0.700) | 0.412 (0.114)*** | −0.098 (0.170) |
| A-grade*integrated college | 0.451 (0.111)*** | 0.465 (0.113)*** | 0.553 (0.116)*** | 0.414 (0.124)*** | 0.051 (0.113) | 0.601 (0.158)*** |
| B-grade*integrated college | 0.355 (0.348) | 0.362 (0.349) | 0.341 (0.316) | 0.341 (0.353) | 0.219 (0.124)* | 0.695 (0.472) |
| C-grade*integrated college | 0.168 (0.083)** | 0.165 (0.082)** | 0.191 (0.084)** | 0.135 (0.089) | 0.173 (0.040)*** | 0.230 (0.099)** |
| D-grade*integrated college | 0.321 (0.065)*** | 0.312 (0.065)*** | 0.337 (0.067)*** | 0.263 (0.074)*** | 0.292 (0.042)*** | 0.287 (0.095)*** |
| E-grade*integrated college | 0.061 (0.073) | 0.052 (0.074) | 0.156 (0.066)** | 0.011 (0.086) | 0.217 (0.060)*** | −0.024 (0.091) |
| A-grade*college | 0.322 (0.080)*** | 0.303 (0.077)*** | 0.342 (0.089)*** | 0.283 (0.073)*** | 0.237 (0.034)*** | 0.226 (0.084)*** |
| B-grade*college | 0.290 (0.053)*** | 0.280 (0.054)*** | 0.302 (0.050)*** | 0.206 (0.051)*** | 0.236 (0.028)*** | 0.249 (0.082)*** |
| C-grade*college | 0.193 (0.032)*** | 0.187 (0.032)*** | 0.189 (0.032)*** | 0.150 (0.033)*** | 0.216 (0.019)*** | 0.213 (0.053)*** |
| D-grade*college | 0.108 (0.035)*** | 0.105 (0.035)*** | 0.133 (0.037)*** | 0.119 (0.039)*** | 0.236 (0.030)*** | 0.165 (0.058)*** |
| E-grade*college | 0.078 (0.042)* | 0.058 (0.042) | 0.105 (0.044)** | 0.056 (0.040) | 0.154 (0.027)*** | 0.121 (0.053)** |
| C-grade*inst. higher education | | | | | 0.180 (0.076)** | |
| E-grade*inst. higher education | | | | | 0.249 (0.082)*** | |
| ln(number of programs) | | 0.101 (0.020)*** | | | | |
| R-squared | 0.662 | 0.662 | 0.651 | 0.675 | 0.642 | 0.632 |
| Number of college–municipality–discipline triplets | 316 | 316 | 316 | 316 | 977 | 167 |
| Number of observations | 9159 | 9159 | 9159 | 6051 | 20,169 | 2949 |

Note: All specifications include state–year and college–municipality–discipline fixed effects and market structure controls. Observations are clustered by municipality–year pairs. Standard errors in parentheses. * Denotes significance at the 90% level. ** Denotes significance at the 95% level. *** Denotes significance at the 99% level.
In order to evaluate these reputation effects, the second column of Tables 5–7 includes the number of programs offered by the respective college–municipality pair in the regression.35 The coefficient on this variable is statistically significant for vacancies and enrollments, but not for applicants, suggesting that the number of majors does not have an important effect on applicants' demand.
35 Measures of reputation or age unfortunately are not available. The number of programs of a college–municipality pair is at least as large as its number of college–municipality–discipline triplets because a college may offer more than one program in the same discipline in the same municipality. That would be the case, for instance, if it offers a day program and an evening program.
Table 6
The impact of the ENC on applicants.

| | Baseline | Controls for programs | First grade spec. effects | Controls for HS students | Whole sample | Private only |
| A-grade*university | 0.453 (0.050)*** | 0.453 (0.050)*** | 0.502 (0.050)*** | 0.428 (0.053)*** | 0.547 (0.039)*** | 0.026 (0.373) |
| B-grade*university | 0.531 (0.055)*** | 0.531 (0.055)*** | 0.491 (0.054)*** | 0.562 (0.056)*** | 0.576 (0.038)*** | 0.796 (0.187)*** |
| C-grade*university | 0.548 (0.046)*** | 0.548 (0.046)*** | 0.488 (0.045)*** | 0.603 (0.047)*** | 0.515 (0.031)*** | 1.015 (0.152)*** |
| D-grade*university | 0.682 (0.068)*** | 0.681 (0.068)*** | 0.594 (0.065)*** | 0.745 (0.068)*** | 0.538 (0.048)*** | 1.139 (0.165)*** |
| E-grade*university | 0.706 (0.064)*** | 0.706 (0.064)*** | 0.540 (0.065)*** | 0.746 (0.066)*** | 0.581 (0.046)*** | 1.536 (0.194)*** |
| A-grade*university center | 1.182 (0.614)* | 1.189 (0.614)* | 1.046 (0.545)* | 1.378 (0.637)** | 0.632 (0.110)*** | 2.144 (0.854)** |
| B-grade*university center | 0.399 (0.308) | 0.401 (0.308) | 0.452 (0.308) | 0.530 (0.333) | 0.393 (0.122)*** | 0.881 (0.400)** |
| C-grade*university center | 0.347 (0.116)*** | 0.350 (0.116)*** | 0.312 (0.105)*** | 0.564 (0.120)*** | 0.311 (0.072)*** | 0.884 (0.133)*** |
| D-grade*university center | 0.311 (0.190) | 0.312 (0.190) | 0.322 (0.195)* | 0.462 (0.205)** | 0.244 (0.120)** | 0.905 (0.216)*** |
| E-grade*university center | 0.677 (0.398)* | 0.682 (0.393)* | 0.538 (0.339) | 0.985 (0.332)*** | 0.608 (0.201)*** | 0.704 (0.216)*** |
| A-grade*integrated college | 1.219 (0.226)*** | 1.215 (0.223)*** | 1.430 (0.274)*** | 1.375 (0.268)*** | 0.368 (0.195)* | 1.662 (0.255)*** |
| B-grade*integrated college | 0.459 (0.352) | 0.457 (0.352) | 0.427 (0.346) | 0.669 (0.366)* | 0.308 (0.158)* | 1.116 (0.453)** |
| C-grade*integrated college | 0.315 (0.139)** | 0.316 (0.139)** | 0.267 (0.130)** | 0.461 (0.155)*** | 0.393 (0.072)*** | 0.682 (0.183)*** |
| D-grade*integrated college | 0.489 (0.133)*** | 0.491 (0.133)*** | 0.412 (0.116)*** | 0.626 (0.151)*** | 0.406 (0.100)*** | 0.744 (0.187)*** |
| E-grade*integrated college | 0.092 (0.173) | 0.095 (0.173) | 0.000 (0.137) | 0.169 (0.177) | 0.122 (0.142) | 0.169 (0.159) |
| A-grade*college | 0.365 (0.115)*** | 0.371 (0.116)*** | 0.291 (0.117)** | 0.626 (0.175)*** | 0.383 (0.078)*** | 0.395 (0.202)* |
| B-grade*college | 0.285 (0.136)** | 0.288 (0.136)** | 0.279 (0.127)** | 0.459 (0.143)*** | 0.458 (0.058)*** | 0.675 (0.220)*** |
| C-grade*college | 0.278 (0.082)*** | 0.280 (0.082)*** | 0.271 (0.075)*** | 0.454 (0.088)*** | 0.319 (0.041)*** | 0.663 (0.141)*** |
| D-grade*college | 0.242 (0.111)** | 0.244 (0.111)** | 0.305 (0.109)*** | 0.410 (0.117)*** | 0.306 (0.053)*** | 0.455 (0.156)*** |
| E-grade*college | 0.398 (0.134)*** | 0.404 (0.134)*** | 0.477 (0.123)*** | 0.487 (0.141)*** | 0.299 (0.057)*** | 0.741 (0.207)*** |
| C-grade*inst. higher education | | | | | 0.140 (0.114) | |
| E-grade*inst. higher education | | | | | 0.392 (0.093)*** | |
| ln(number of programs) | | −0.031 (0.047) | | | | |
| R-squared | 0.591 | 0.591 | 0.595 | 0.634 | 0.628 | 0.557 |
| Number of college–municipality–discipline triplets | 316 | 316 | 316 | 316 | 977 | 167 |
| Number of observations | 9143 | 9143 | 9143 | 6037 | 20,098 | 2942 |

Note: All specifications include state–year and college–municipality–discipline fixed effects and market structure controls. Observations are clustered by municipality–year pairs. Standard errors in parentheses. * Denotes significance at the 90% level. ** Denotes significance at the 95% level. *** Denotes significance at the 99% level.
Allowing for different reactions to the ENC depending on characteristics, however, may not be enough to properly identify the causal effect of the ENC, in particular if unobservable effects correlated with quality but unrelated to the exam also affected the dependent variables. In order to investigate more deeply whether differences in quality determine institutions' and students' behavior, I now separate the college–municipality–discipline triplets into five groups according to the first grade they received, from A to E. The ENC grades are correlated over time, as shown in Table 8. The mode of the distribution of grades in a year conditional on the grade in the previous year is always that grade itself, with the exception of D, whose mode is C. Also, across the five grades, the fraction of grades that repeat the previous year's grade is at least 35%, as shown on the main diagonal of that table.
Table 7
The impact of the ENC on enrollments.

Specifications: (1) baseline; (2) controls for programs; (3) first grade specific effects; (4) controls for HS students; (5) whole sample; (6) private only.

                                          (1)                (2)                (3)                (4)                (5)                (6)
A-grade × university                      0.188 (0.036)***   0.187 (0.036)***   0.220 (0.036)***   0.163 (0.037)***   0.279 (0.024)***   0.040 (0.411)
B-grade × university                      0.198 (0.037)***   0.198 (0.037)***   0.171 (0.036)***   0.199 (0.039)***   0.269 (0.025)***   0.787 (0.195)***
C-grade × university                      0.251 (0.036)***   0.252 (0.036)***   0.205 (0.035)***   0.265 (0.038)***   0.275 (0.023)***   0.664 (0.135)***
D-grade × university                      0.378 (0.056)***   0.380 (0.056)***   0.321 (0.055)***   0.425 (0.058)***   0.332 (0.036)***   0.842 (0.165)***
E-grade × university                      0.369 (0.054)***   0.369 (0.054)***   0.319 (0.052)***   0.412 (0.058)***   0.309 (0.034)***   1.333 (0.132)***
A-grade × university center               1.241 (0.418)***   1.228 (0.418)***   1.217 (0.395)***   1.084 (0.441)***   0.497 (0.082)***   1.659 (0.526)***
B-grade × university center               0.444 (0.295)      0.439 (0.295)      0.451 (0.292)      0.409 (0.285)      0.325 (0.102)***   0.727 (0.326)**
C-grade × university center               0.372 (0.143)***   0.364 (0.143)**    0.362 (0.134)***   0.392 (0.142)***   0.264 (0.063)***   0.726 (0.109)***
D-grade × university center               0.428 (0.156)***   0.425 (0.155)***   0.403 (0.148)***   0.386 (0.153)**    0.185 (0.091)**    0.794 (0.167)***
E-grade × university center               0.115 (0.149)      0.104 (0.161)      0.071 (0.190)      0.243 (0.197)      0.328 (0.174)*     0.437 (0.119)***
A-grade × integrated college              0.288 (0.374)      0.297 (0.375)      0.390 (0.392)      0.410 (0.310)      0.156 (0.123)      0.613 (0.425)
B-grade × integrated college              0.109 (0.218)      0.113 (0.219)      0.054 (0.216)      0.156 (0.230)      0.181 (0.092)*     0.540 (0.254)**
C-grade × integrated college              0.140 (0.105)      0.139 (0.105)      0.134 (0.106)      0.153 (0.110)      0.215 (0.054)***   0.355 (0.117)***
D-grade × integrated college              0.317 (0.093)***   0.311 (0.093)***   0.326 (0.088)***   0.358 (0.109)***   0.338 (0.059)***   0.376 (0.122)***
E-grade × integrated college              0.013 (0.091)      0.008 (0.093)      0.112 (0.077)      0.015 (0.103)      0.120 (0.091)      0.041 (0.117)
A-grade × college                         0.366 (0.106)***   0.354 (0.105)***   0.357 (0.104)***   0.364 (0.112)***   0.301 (0.048)***   0.526 (0.305)*
B-grade × college                         0.360 (0.083)***   0.354 (0.083)***   0.340 (0.075)***   0.285 (0.081)***   0.294 (0.038)***   0.493 (0.161)***
C-grade × college                         0.278 (0.040)***   0.275 (0.040)***   0.242 (0.040)***   0.227 (0.044)***   0.248 (0.025)***   0.421 (0.071)***
D-grade × college                         0.175 (0.050)***   0.173 (0.050)***   0.201 (0.054)***   0.144 (0.055)***   0.200 (0.034)***   0.282 (0.078)***
E-grade × college                         0.191 (0.070)***   0.179 (0.069)***   0.232 (0.066)***   0.113 (0.061)*     0.147 (0.040)***   0.281 (0.113)**
C-grade × inst. higher education          –                  –                  –                  –                  0.145 (0.083)*     –
E-grade × inst. higher education          –                  –                  –                  –                  0.287 (0.082)***   –
ln(number of programs)                    –                  0.064 (0.031)**    –                  –                  –                  –
R-squared                                 0.530              0.531              0.520              0.540              0.521              0.516
Number of college–municipality–
  discipline triples                      316                316                316                316                977                167
Number of observations                    9110               9110               9110   	        6019               19,975             2926

Note: All specifications include state–year and college–municipality–discipline fixed effects and market structure controls. Observations are clustered by municipality–year pairs. Standard errors in parentheses. *, **, and *** denote significance at the 90%, 95%, and 99% confidence levels, respectively.
Similarly, across the five grades, grades within one grade of the previous year's grade are never less than 90% of the total. The fact that grades are correlated over time suggests that separating institutions according to their first grade divides them according to some characteristic related to the quality of education that is constant over time and that may also determine how vacancies, applicants and enrollments trend over the period analyzed. These trends can be independent of the ENC and may therefore cause a spurious correlation between the moment the exam starts covering disciplines and the dependent variables. To account for this possibility, I assign year fixed effects, from 1991 to 2005, to each first grade group, and in column 3 of Tables 5–7 I replace the state–year fixed effects employed in columns 1 and 2 with these first grade–year fixed effects. These fixed effects should therefore capture differences in vacancies, applicants and enrollments over time and across institutions of different qualities that are unrelated to the impact of the ENC. The effects of the ENC themselves should not be captured by these fixed effects; they remain measured by the academic organization–grade dummies.
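In reduced form, the column 3 specification can be sketched as follows. The notation is mine rather than the paper's, so the symbols, and the log form of the dependent variable, should be read as illustrative assumptions:

\[
\ln y_{cmdt} = \alpha_{cmd} + \gamma_{g(cmd),\,t} + \sum_{k \in \{A,\dots,E\}} \sum_{o} \beta_{k,o}\, \mathbf{1}\{\text{grade}_{cmdt}=k\}\, \mathbf{1}\{\text{org}_{c}=o\} + X_{mt}'\delta + \varepsilon_{cmdt},
\]

where \(y_{cmdt}\) denotes vacancies, applicants or enrollments of college \(c\) in municipality \(m\) and discipline \(d\) in year \(t\); \(\alpha_{cmd}\) is the college–municipality–discipline fixed effect; \(\gamma_{g(cmd),t}\) is the first grade–year fixed effect that replaces the state–year effect \(\gamma_{s(m),t}\) of columns 1 and 2; \(o\) runs over academic organizations; and \(X_{mt}\) collects the market structure controls.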
Table 8
Yearly transition of grades.

Base-year grade     A       B       C       D       E     (following-year grade)
A                   0.63    0.22    0.12    0.02    0.01
B                   0.18    0.38    0.36    0.06    0.03
C                   0.04    0.15    0.58    0.17    0.06
D                   0.01    0.04    0.44    0.35    0.17
E                   0.01    0.03    0.22    0.31    0.44

Note: Rows and columns represent the grades in the base year and in the following year, respectively. An entry represents the fraction of grades assigned in the following year conditional on the grade in the base year. Observations are restricted to pairs of years in which both grades were valid.
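As a quick arithmetic check of the persistence claims discussed in the text, the short sketch below recomputes the row modes and diagonal shares from the Table 8 entries. The code is purely illustrative and not part of the original analysis; the matrix values are transcribed from the table above.

```python
import numpy as np

# Transition matrix from Table 8: rows index the grade in the base year
# (A-E), columns the grade in the following year (A-E).
grades = ["A", "B", "C", "D", "E"]
P = np.array([
    [0.63, 0.22, 0.12, 0.02, 0.01],
    [0.18, 0.38, 0.36, 0.06, 0.03],
    [0.04, 0.15, 0.58, 0.17, 0.06],
    [0.01, 0.04, 0.44, 0.35, 0.17],
    [0.01, 0.03, 0.22, 0.31, 0.44],
])

for i, g in enumerate(grades):
    mode = grades[int(np.argmax(P[i]))]  # most likely following-year grade
    print(f"{g}: mode of next grade = {mode}, repeats with share {P[i, i]:.0%}")
# Every grade's mode is the grade itself except D, whose mode is C, and the
# repeat share on the main diagonal is at least 35% for all five grades.
```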
Column 3 in Tables 5–7 shows that time effects specific to college quality do affect the estimates, in particular the coefficients on grades received by universities. The first grade–year fixed effects attenuate the negative relation between grades and the dependent variables for universities: in column 3 the A-grade dummies for these institutions are higher, and the E-grade dummies lower, than the same coefficients in column 1 in all three tables. However, the negative relation remains in this specification. For example, the effects on the number of applicants to universities are now closer together across grades: grades of A and E increase it by 50.2% and 54.0%, respectively, against 45.3% and 70.6% previously. For other institutions, including first grade–year fixed effects does not cause any significant change in the results. The main exception is the number of applicants to colleges, where a similar negative relation with grades becomes more pronounced. The positive relation between grades and enrollments for colleges is also attenuated: grades of A and E now increase enrollments by 35.7% and 23.2%, respectively, against 36.6% and 19.1% previously. In summary, these results suggest that institutions of different qualities were evolving differently independently of the ENC, but the estimated impact of this accountability system remains substantial even in the most conservative estimates.

Columns 4 to 6 of Tables 5–7 test whether the results are robust to changes in the sample. In column 4 the regression includes the number of students graduating from high school in each municipality and year, broken down between public and private high schools. This variable is not included in the previous regressions because it is available only from 1995 onward and therefore restricts the sample when included. Even though it shortens the sample and may be an important explanatory variable, the results are very robust to its inclusion. Column 5 adds to the sample observations from college–municipality–discipline triplets that were not observed every year since 1991, mostly colleges that entered the market after that year. Compared to the results in column 1, differences in coefficients are slightly attenuated both across academic organizations and across grades. These changes can be attributed to the fact that the new sample is more than twice as large as the original and consists mostly of new entrants, which changes the average characteristics of institutions. The results, however, still imply a substantial impact of the ENC on vacancies, applicants and enrollments.

The impact of the ENC may also depend on ownership. First, private colleges may respond to demand shocks by changing tuition, unlike public colleges, which do not charge it.36 Thus, an increase in demand should have a weaker impact on vacancies, applicants and enrollments for private colleges. Second, public colleges would be less compelled to respond to a positive demand shock by admitting more students, since additional students must be funded by public transfers, while private institutions may raise additional revenues from tuition.37 In order to account for these differences, I now restrict the original sample to private institutions. According to column 6, the impact of the ENC on the three variables is mostly stronger across different grades and types of academic organization compared to the baseline results, but the main patterns discussed previously remain.

I perform one last robustness check, of whether serial correlation among residuals affects the regression results. Bertrand, Duflo, and Mullainathan (2004) showed that when the residuals in differences-in-differences specifications are serially correlated, conventional standard errors understate the standard deviation of the estimators. In order to account for this possibility, I follow a method presented in that paper for cases when a policy is implemented across different groups over time, as is the case of the ENC.38 This method, however, estimates the impact of the ENC on the three dependent variables and its standard error without distinguishing the impact of different grades. The estimates of the impact of the ENC on vacancies, applicants and enrollments are all positive, consistent with the previous results. Standard errors are large, however, and do not allow rejection of the null hypothesis of no policy impact.39 But, as the authors observed, the power of this method is very low and declines further in small samples. In their simulations with valid policy effects and the same number of states as there are disciplines in my data, the null hypothesis was rejected less than 10% of the time. Here, the panel used in the last stage of the estimation contains only twenty disciplines and two periods, yielding a total of forty observations. The ability of this method to reject the hypothesis of no impact of the ENC with such a small sample is thus very low. Hence, the estimates of the impact of the ENC are in line with the previous results, although the standard errors are inconclusive about their significance.
36 Winston (1999) ordered institutions in the United States in a hierarchy determined by their donated wealth. The wealthier the institution, the better the quality and the lower the price of its educational services, which helps it select the best applicants. In Brazil, private institutions typically have little donated wealth, and most of their costs are covered by tuition. Thus, public institutions subsidize their students more and are on average in higher demand than private ones.
37 Some government transfers to public colleges and their faculty salaries are related to the number of students or hours of teaching, but not in such a direct way.
38 The method is described in Section IV.C of that paper and is performed as follows. The three dependent variables are first regressed on ownership–academic organization pair dummies and market structure variables. The residuals of these regressions are then averaged by discipline–year pairs and regressed on discipline and year fixed effects. Then, for each discipline, the residuals from these regressions are averaged across the years before and after the ENC started covering the respective discipline. The resulting two-period panel of disciplines is used to estimate the coefficients and standard errors, where the dependent variables are the three newly created vectors of residuals.
39 The estimates of the ENC coefficient (and standard errors) are 0.002 (0.027), 0.016 (0.080), and 0.009 (0.042) for the regressions using aggregate residuals derived from vacancies, applicants and enrollments, respectively.
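A minimal sketch of the three-stage aggregation described in footnote 38, assuming a pandas DataFrame df with one row per college–municipality–discipline triplet and year, and a dict enc_start mapping each discipline to the year the ENC first covered it. All column names (ln_applicants, own_org, market_share, discipline, year) are hypothetical placeholders, not names from the paper:

```python
import pandas as pd
import statsmodels.formula.api as smf

def bdm_effect(df: pd.DataFrame, enc_start: dict, depvar: str):
    # Stage 1: regress the dependent variable on ownership-academic
    # organization pair dummies and market structure controls; keep residuals.
    r1 = smf.ols(f"{depvar} ~ C(own_org) + market_share", data=df).fit()
    d = df.assign(res=r1.resid)

    # Stage 2: average residuals by discipline-year cell, then take out
    # discipline and year fixed effects.
    cell = d.groupby(["discipline", "year"], as_index=False)["res"].mean()
    r2 = smf.ols("res ~ C(discipline) + C(year)", data=cell).fit()
    cell = cell.assign(res2=r2.resid)

    # Stage 3: average the stage-2 residuals within each discipline before
    # and after the ENC covered it, yielding a two-period discipline panel.
    cell["post"] = (cell["year"] >= cell["discipline"].map(enc_start)).astype(int)
    panel = cell.groupby(["discipline", "post"], as_index=False)["res2"].mean()

    # Final regression: the ENC effect is the coefficient on the post dummy.
    final = smf.ols("res2 ~ post", data=panel).fit()
    return final.params["post"], final.bse["post"]

# Hypothetical usage: effect, se = bdm_effect(df, enc_start, "ln_applicants")
```

The last stage runs on a panel of twenty disciplines and two periods, which is why the standard errors it produces are so imprecise.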
Two conclusions emerge from the analysis in this section. First, the ENC had a positive impact on the number of vacancies offered, applicants and enrollments for colleges in general. This impact depended on the academic organization and the grades received by these institutions: it was larger the better the grades for institutions of every academic organization except universities. Second, because of the different impact on universities, the ENC also affected the distribution of students enrolled in low- and high-performing institutions by academic organization. It contributed to decreasing the number of students enrolling in high-performing universities relative to other, simpler and smaller high-performing institutions, and the opposite was true for low-performing institutions.

6. Conclusions

In recent years, public policy aimed at improving education quality has been based primarily on accountability systems, particularly after the No Child Left Behind Act of 2001 in the United States. Those systems, however, have mostly been implemented in basic education, and as a consequence, research on the effects of accountability on higher education is scarce. This paper studied the effects of an accountability policy in the Brazilian college market. This policy was found to improve faculty characteristics and to increase the number of vacancies offered, applicants and enrollments.

The results on the relation between accountability, academic organization and enrollment raise issues for further research and public policy. Enrollments in universities, the most complex institutions, responded differently to the ENC than enrollments in other institutions. This effect on the distribution of students may have an important impact on competition among institutions as well. Interestingly, this effect was not considered in the discussions that preceded the ENC, but it certainly matters for public policy and therefore demands further research. The evidence of the impact of the ENC also raises further questions about how it affected colleges' and students' decisions. The events after the start of the ENC suggest that information disclosure was the most important part of the system, although most results are consistent both with the effects of information disclosure and with the effects of rewards and penalties. These questions, which are also relevant to the many industries where similar policies have been applied, are likewise fields for future research.

Acknowledgements

I thank the associate editor Eric Bettinger, an anonymous referee, Bill Bassett, David Figlio, Ali Hortaçsu, Fang Lai, Robin Lumsdaine, Jonah Rockoff, Chad Syverson, Bruce
Weinberg, and seminar participants at Getulio Vargas Foundation, Ibmec-Rio, PUC-Rio, the 2007 American Education Finance Association Conference, the 2007 North American Summer Meeting of the Econometric Society, the Fall 2007 NBER Higher Education Working Group Meeting, the 2008 Latin American and Caribbean Economic Association Meeting, and the 2008 Brazilian Econometric Society Meeting for comments. I am also grateful to CAPES and the Claudio Haddad Dissertation Fund for financial support. The views expressed herein are my own and do not necessarily reflect those of the Board of Governors or the staff of the Federal Reserve System.
References

Arenson, K. W. (2006). Panel explores standard tests for colleges. New York Times.
Bertrand, M., Duflo, E., & Mullainathan, S. (2004). How much should we trust differences-in-differences estimates? Quarterly Journal of Economics, Vol. 119(February (1)), 249–275.
Bettinger, E., & Long, B. T. (in press). Does cheaper mean better? The impact of using adjunct instructors on student outcomes. Review of Economics and Statistics.
Card, D. (1999). The causal effect of education on earnings. In O. Ashenfelter, & D. Card (Eds.), Handbook of Labor Economics, Vol. 3A (pp. 1801–1863). Amsterdam: North Holland.
Chay, K. Y., McEwan, P. J., & Urquiola, M. (2005). The central role of noise in evaluating interventions that use test scores to rank schools. American Economic Review, Vol. 95(September (4)), 1237–1258.
Chipty, T., & Witte, A. D. (1998). Effects of information provision in a vertically differentiated market. NBER working paper no. 6493, April.
Cullen, J. B., & Reback, R. (2006). Tinkering toward accolades: School gaming under a performance accountability system. In T. J. Gronberg, & D. W. Jansen (Eds.), Improving School Accountability: Check-Ups or Choice, Advances in Applied Microeconomics (pp. 1–34). Amsterdam: Elsevier Science.
Dafny, L., & Dranove, D. (2008). Do report cards tell consumers anything they don't already know? The case of Medicare HMOs. RAND Journal of Economics, Vol. 39(Autumn (3)), 790–821.
Deere, D., & Strayer, W. (2001). Putting schools to the test: School accountability, incentives, and behavior. Private Enterprise Research Center working paper no. 114, Texas A&M University, March.
Donovan, C., Figlio, D. N., & Rush, M. (2006). Cramming: The effects of school accountability on college study habits and performance. Working paper, University of Florida, April.
Dranove, D., Kessler, D., McClellan, M., & Satterthwaite, M. (2003). Is more information better? The effects of "report cards" on health care providers. Journal of Political Economy, Vol. 111(June (3)), 555–588.
Ehrenberg, R. G., & Zhang, L. (2005). Do tenured and tenure-track faculty matter? Journal of Human Resources, Vol. 40(Summer (3)), 647–659.
Figlio, D. N. (2006). Testing, crime and punishment. Journal of Public Economics, Vol. 90(May (4–5)), 837–851.
Figlio, D. N., & Getzler, L. S. (2002). Accountability, ability, and disability: Gaming the system. NBER working paper no. 9307, October.
Figlio, D. N., & Rouse, C. E. (2006). Do accountability and voucher threats improve low-performing schools? Journal of Public Economics, Vol. 90(January (1–2)), 239–255.
Figlio, D. N., & Winicki, J. (2005). Food for thought: The effects of school accountability plans on school nutrition. Journal of Public Economics, Vol. 89(February (2–3)), 381–394.
Haney, W. M. (2000). The myth of the Texas miracle in education. Education Policy Analysis Archives, Vol. 8(August (41)). http://epaa.asu.edu/epaa/v8n41/
Hanushek, E. A., & Raymond, M. E. (2005). Does school accountability lead to improved student performance? Journal of Policy Analysis and Management, Vol. 24(Spring (2)), 297–327.
Hoffmann, F., & Oreopoulos, P. (2009). Professor qualities and student achievement. Review of Economics and Statistics, Vol. 91(February (1)), 83–92.
Instituto Nacional de Estudos e Pesquisas Anísio Teixeira. (2003). ENEM: Relatório Pedagógico 2003. Brasília.
Jacob, B. (2005). Accountability, incentives and behavior: Evidence from school reform in Chicago. Journal of Public Economics, Vol. 89(June (5–6)), 761–796.
Jin, G., & Leslie, P. (2003). The effect of information on product quality: Evidence from restaurant hygiene grade cards. Quarterly Journal of Economics, Vol. 118(May (2)), 409–451.
Jin, G., & Whalley, A. (2007). The power of attention: Do rankings affect the financial resources of public colleges? NBER working paper no. 12941, February.
Kane, T. J., & Staiger, D. O. (2002). The promise and pitfalls of using imprecise school accountability measures. Journal of Economic Perspectives, Vol. 16(Autumn (4)), 91–114.
Katz, L., & Autor, D. (1999). Changes in the wage structure and earnings inequality. In O. Ashenfelter, & D. Card (Eds.), Handbook of Labor Economics, Vol. 3A (pp. 1463–1555). Amsterdam: North Holland.
Ladd, H. F. (1999). The Dallas School Accountability and Incentive Program: An evaluation of its impacts on student outcomes. Economics of Education Review, Vol. 18(February (1)), 1–16.
Mathios, A. D. (2000). The impact of mandatory information disclosure laws on product choices: An analysis of the salad dressing market. Journal of Law and Economics, Vol. 43(October), 651–677.
Neal, D., & Schanzenbach, D. (in press). Left behind by design: Proficiency counts and test-based accountability. Review of Economics and Statistics.
Pitoli, A., & Picchetti, P. (2005). O Problema da Assimetria de Informação no Mercado de Cursos Superiores – O Papel do Provão. Working paper, Universidade de São Paulo, April.
Sims, D. P. (2008). Strategic responses to school accountability measures: It's all in the timing. Economics of Education Review, Vol. 27(February (1)), 58–68.
Souza, P. R. (2005). A Revolução Gerenciada: Educação no Brasil, 1995–2002. São Paulo: Prentice Hall.
Springer, M. G. (2008). The influence of an NCLB accountability plan on the distribution of student test score gains. Economics of Education Review, Vol. 27(October (5)), 556–563.
Winston, G. C. (1999). Subsidies, hierarchy and peers: The awkward economics of higher education. Journal of Economic Perspectives, Vol. 13(Winter (1)), 13–36.