Economics of Education Review 31 (2012) 587–600

The impact of international students on measured learning and standards in Australian higher education

Gigi Foster
UNSW School of Economics, Australian School of Business, Sydney 2052, Australia

Article history: Received 16 May 2011; received in revised form 21 March 2012; accepted 22 March 2012.

JEL classification: I21; J15

Keywords: Educational economics; Higher education; Spillovers; International students; Australia

Abstract

International students, who are also often from non-English language speaking backgrounds (NESB students), are an important source of revenue for Australian universities. Yet little large-scale evidence exists about their performance once they arrive. Do these students perform worse than other students in Australian undergraduate classrooms? What happens to other students’ performance when these students are added to classrooms? I provide new empirical evidence on these questions using recent administrative panel data from the business schools of two Australian Technology Network universities. Results show strong and highly statistically significant main effects and spillover effects, raising concerns about the integration of international NESB students into the Australian tertiary environment.

1. Introduction

Foreign students make up an increasing share of Australian universities’ revenues, particularly in business schools.1 April 2010 data from Australian Education International show an 11.9% year-to-date increase in enrolments of international students in Australian higher education programs (AEI, 2010). Enrolment growth in the ‘Management and Commerce’ area, which attracts significant numbers of international students, was 12.6% over this same period. Estimates by Access Economics for the 2007–2008 financial year put the contribution of international students at approximately 1% of Australian GDP (AE, 2009). Travel services related to education, as a sole category, totalled approximately 27% of Australian service exports in 2007–2008, according to the Australian Bureau of Statistics data cited in that same report.

1 Throughout the paper, I use the term ‘school’ to refer to a collection of departments in related disciplines, known more commonly in Australia as a ‘faculty’.

Because international students pay full fees to the institutions in which they enrol, they may be seen as a revenue boon for both Australian universities and the broader economy. Yet international students face several potential disadvantages relative to local students in terms of their likelihood of academic success. Differences in social and academic culture, academic aptitude or preparation, as well as inadequate language fluency, may all contribute to worse performance by foreign students (Bradley, 2000; Cheng & Leong, 1993; Lebcir & Wells, 2008; Stoynoff, 1997; Zhang & Brunton, 2007). Speaking English in the home, while uncommon amongst the international student population in Australia, overlaps only imperfectly with domestic student status and hence varies at least partly independently of whether a student is international. Unlike being foreign, English fluency is a clear academic skill, and as such is used directly in writing papers, reading texts and understanding lectures. It would be appealing to partial out the effect of being of non-English-speaking background per se from the effects on university performance of other factors related to being an international student (e.g., cultural factors, baseline academic preparation, or underlying aptitude).


Beyond the question of whether there are sheer performance differentials between international, non-English-speaking-background (NESB), and other student groups, educational equity concerns would also lead us to ask how the infusion of international and NESB students into Australian higher education impacts upon the marks of other students. Recent research (Gould, Lavy, & Paserman, 2000; Jensen & Rasmussen, 2011) has shown that concentrations of immigrant children in classrooms are robustly negatively related to natives’ ultimate educational outcomes.2 These authors do not offer an explicit causal explanation for their finding, but do note that the immigrants in question were socio-economically stressed, implying that this stress was somehow at play in the process(es) that led to the effect they find. In the Australian context, overseas students are newcomers too, and likely to be ‘stressed’ compared to native students in a variety of ways: almost certainly linguistically and culturally, and perhaps also in terms of their academic background and/or inherent academic ability.

While increased cultural diversity may aid learning, sharing a discussion group with peers of lesser ability, worse preparation, or foreign cultures, or who are less able to express themselves in English, may produce a lower grade due to lower-quality or less efficient peer-to-peer and/or tutor-to-student interactions in class.3 These effects would most logically appear in an environment where interactive activities are common, such as a section, or ‘tutorial’ in Australian terminology.

There may also be spillover effects unrelated to learning. In particular, it is well known both from anecdote and in the education literature that many university courses are graded on a curve.4 A strict course-specific grading curve immediately implies that sharing a course with students who perform at a lower level than oneself – in the absence of any other effects – will produce a mechanical rise in one’s own grade. This rise does not reflect better learning outcomes, but is only a mechanical reflection of the mix of co-students combined with lecturers’ needs to keep grade distributions across years and across courses roughly equivalent.
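To make the mechanical channel concrete, consider one stylized form of relative grading (an illustration only; the data used below do not reveal how any particular course was actually marked). If a lecturer rescales raw performance $r_i$ so that the course mean always equals a target $\mu$, student $i$’s reported mark is

$$m_i \;=\; r_i \;-\; \bar{r} \;+\; \mu,$$

where $\bar{r}$ is the course-average raw performance. Adding lower-performing students to the course lowers $\bar{r}$ and therefore raises $m_i$ for everyone else, even though no one’s raw performance has changed.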

2 The specific native/immigrant demographic dichotomy appears to be important to the generalizability of these findings. Research using black–white concentrations in U.S. schools (Guryan, 2004), for example, found no effect of black student concentrations on white drop-out rates.

3 Many results in the literature on peer effects in higher education classrooms would imply this type of effect (see, e.g., Arcidiacono, Foster, Goodpaster, & Kinsler, in press; Foster & Frijters, 2010; Winston & Zimmerman, 2004). Naturally, teacher effects based on immigrant status may occur as well, and have been found empirically: using data from a large public American university, Borjas (2000) finds that foreign-born teaching assistants were associated with worse outcomes for domestic students, compared to native TAs. Jacobs and Friedman (1988) find no overall effect, but some indication of the relevance of tutors’ English language skills to students’ performance, when investigating the same question using different U.S. data. Finally, such effects may flow through differential expectations held by teachers for different types of students, as contended by van Ewijk (2011). Teacher information is not available in the data set used in this paper.

4 This was not always the case. See Small (1973), who also suggests that the popularity of ‘criterion-based’ assessment (e.g., graduating students who are alleged to possess certain ‘graduate qualities’ or ‘attributes’) reflects a desire to return from grading on the curve towards a set of more absolute assessment standards, as was more commonly used in the last century and in antiquity.

Mechanical ‘spillovers’ of this sort would arise even from a milder form of relative grading, such as when instructors hesitate to fail more than some threshold percentage of students enrolled in a course, perhaps in order not to earn a reputation for being overly strict, or in order not to be noticed and possibly called to account by the university administration. Hence, if international and/or NESB students perform worse than other students and/or have negative spillovers in the tutorial classroom, and if additionally course grades are set in a manner that involves any measure of relative rather than absolute judgment, then the net impact on grades of adding these students to a given course is theoretically ambiguous.

Results show that both international students and NESB students perform significantly worse than other students, even controlling for selection into courses. Both effects are large and persistent. Adding international or domestic NESB students to a tutorial classroom leads to a reduction in most students’ marks, and there is a particularly strong negative association between international NESB student concentrations in tutorial classrooms and the marks of students from English-speaking backgrounds. Finally, conditional on student covariates, tutorial composition effects, course fixed effects, and many other controls, the impact on marks of a high percentage of NESB students in a course is positive. I argue that this finding may reflect supply-side influences such as downward adjustments to the difficulty of material or grading standards. Taken as a whole, the evidence in this paper strongly indicates the need for better integration of students from non-English speaking backgrounds – particularly international NESB students.

1.1. International students in Australia

Australia’s geographical position makes it an attractive higher-education option for Asian students wishing to obtain English-language degrees. However, none of Australia’s universities appear in the top twenty or so institutions worldwide, according to the most popular university-ranking schemes (e.g., the Academic Ranking of World Universities, the QS World University Rankings, and the Times Higher Education World University Rankings). This means that the ‘best and brightest’ Asian students heading overseas for their undergraduate degrees are, on average, likely to matriculate at an institution in the U.S. or Europe rather than in Australia. As home-grown Asian universities rise in stature, Asian students in the ‘second tier’ of aptitude and/or general preparation for tertiary study face the choice of whether to study domestically, or instead to go abroad and enrol in a good but not elite institution. Australia’s geographic proximity to Asia makes it an attractive target for students from that second tier downward. The approximately 41 universities in Australia range greatly in quality, and those whose data are analyzed in the present study are part of the Australian Technology Network and are placed near the middle of the pack, according to rankings posted by the Australian Education Network. There is no clear a priori reason to expect that international students studying in these institutions (the majority of whom are Asian) will be innately better or worse prepared for university study than domestic Australian students.


There has been very little large-scale quantitative analysis of the academic performance of international students in the Australian context. In the largest study to date using Australian higher education data, Olsen and Burgess (2006) find no performance differentials between international and local students. However, that study examined only pass rates rather than the full distribution of marks, and was based on aggregated data where other factors, such as additional demographics, course type, and learning context, were not controlled.

The university performance of international students relative to domestic students should logically relate to three classes of phenomena, once English language background is controlled: (1) their basic academic aptitude, relative to that of domestic students; (2) the quality of their preparation for university in their home country, relative to that provided domestically; and (3) the amount of additional effort required for an international student to overcome cultural and other obstacles that the domestic student need not face in order to perform at an equivalent level. The present paper does not attempt to disentangle these three potential causes, but rather asks about associations overall between international student status and marks in the case of Australia, while looking independently at English language background effects. Any remaining association found between international student status and marks or classroom spillovers could be due to any or all of the above causes, including the quality of prior preparation in English expression.

1.2. English language skills

Insufficient language fluency has been proposed by many authors as the major barrier confronting international students. Lebcir and Wells (2008) suggest that English skill is one of the important drivers of international students’ academic performance. Using data from a U.S. university, Lee and Anderson (2007) find that there is a positive correlation between general language proficiency, which is measured by the TOEFL score (a standardized test of English fluency similar to the IELTS, used in Australian undergraduate admissions decisions regarding foreign students), and students’ writing performance, an important input into success for much undergraduate assessment. As English is used as the teaching language in the Australian undergraduate classroom, it is reasonable to expect the ability to speak and understand English to be positively correlated with a student’s performance. Based on interviews with students at the University of Adelaide, Plewa and Sherman (2007) find that both local and international students with good language skills blame students with relatively poor language skills for lack of creativity and slow progress in their groups.

However, there is not complete agreement on this point. Some researchers (Johnson, 1988; Light & Xu, 1987; Picard, 2007) argue that there is no significant correlation in undergraduate student samples between English language proficiency as measured on international tests like the TOEFL or IELTS and either performance or intelligence, particularly when the threshold language requirement (e.g., TOEFL score) is high.


While the results of such tests offer a convenient measure for educational institutions wishing to assess the language fluency of overseas candidates, standardized test results may not be a reliable indicator of a student’s true English skills. The simple indicator that is used in the present paper and is not mechanically tied to intelligence is whether a student speaks English in the home. Importantly, this variable (like a TOEFL score) will not pick up differences between students that relate to their level of practice or familiarity with the everyday use of English.

2. Data and methodology

This paper exploits a new panel data set on Australian students enrolled in undergraduate programs within the business schools of two universities in the Australian Technology Network. Data are available at the student-tutorial level for the universe of students enrolled and taking courses in these programs at any point during the autumn and spring semesters of 2008, 2009, or 2010. As noted briefly above, a ‘tutorial’ in Australia is equivalent to a ‘section’ in North America. Frequently taught by people termed ‘casual staff’5 or graduate students, the tutorial is an opportunity for students to discuss and work on course material and receive more personalized assistance than is possible in lectures. Tutorials occur weekly, as do lectures, for a particular undergraduate course; a typical course includes 3 h of total face time (e.g., 2 h of lectures and 1 h of tutorials) each week for the duration of the semester, which is between 13 and 15 weeks. The final mark that serves as the dependent variable in this paper is awarded at the course level, but any given student was enrolled in both the course itself and a particular tutorial. The multi-level tertiary learning structure combined with the panel nature of the data make it possible to analyze simultaneously the effects on final course mark of both course-specific information and tutorial-specific information, in addition to student-specific controls like demographics.

To create the data set, information from the enrolment systems of each institution was merged with data from students’ applications to university, resulting in a final data set that includes detailed demographics (such as age, gender, and other observable characteristics, including international student status and whether the student speaks English in the home) as well as detailed information about which courses and tutorials each student took in each covered semester, and what final percentage marks were achieved in each.6 Throughout all analysis presented in this paper, the outcome modeled is this final course mark, measured out of 100.7

5 These are fixed-term and/or non-tenure-track academics, roughly equivalent to the American ‘adjunct’ or ‘part-time’ faculty member. 6 The term ‘mark’ is used in Australia to denote the percentage out of 100 that a student earns in a course. The term ‘grade’ is used to refer to the letter grade associated with that percentage. In Australia, the letter grades most commonly used and the range of marks to which they apply are Fail (<50), Pass (50–64), Credit (65–74), Distinction (75–84), and High Distinction (>84). ‘Marks’ (percentage scores) are used in all analysis in this paper, and ‘grades’ (with the Australian meaning) are not used.
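A literal rendering of the mark-to-grade mapping described in footnote 6, as a minimal sketch (the function name and the handling of non-integer boundary marks are my own; marks are assumed to lie on the 0–100 scale):

```python
def letter_grade(mark: float) -> str:
    """Map a final course mark (out of 100) to the Australian letter grades in footnote 6."""
    if mark < 50:
        return "Fail"
    if mark < 65:
        return "Pass"
    if mark < 75:
        return "Credit"
    if mark < 85:
        return "Distinction"
    return "High Distinction"

# The sample's average mark of 61.03 (Table 1) falls in the Pass band.
assert letter_grade(61.03) == "Pass"
```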


2.1. Methodology

From the data described above, all student-tutorial observations are kept for which the student was not the sole student enrolled; for which the course enrollment exceeded the enrollment in the tutorial (meaning that courses with only one section were dropped); and for which all core observable variables were available. A dummy indicating that university matriculation took place directly following graduation from an Australian high school in the state of residence – as opposed to following attendance at another high school, or as a consequence of transferring from another institution or entering university at an older age – is available for a subset of students. This dummy along with a dummy indicating the absence of this information are both included as control variables in the analysis. Other control variables include course-level effects (either varying across semesters, or fixed across semesters, depending on the model); a dummy array for the location of the tutorial;8 a dummy array for the exact day and time that the tutorial began each week; course size and tutorial size; a dummy array for the program of study pursued by the student; and, as a general student-ability control in many models, the average GPA earned by the student in courses that semester not including the course in which performance is being predicted. Further controls that are constructed and included in the analysis are institution of study, semester-by-year effects, course load, discipline group (for regressions excluding course-specific fixed effects), gender, age, and a dummy indicating that the student was new to university that semester.

As the primary regressors of interest in the analysis of spillovers, four variables are constructed at the student-by-tutorial level: the percentages of international students and NESB students within the course, and the same percentages within the tutorial classroom, all of which exclude the student himself. As noted above, the dependent variable throughout all analysis in this paper is final course mark, measured out of 100. I first examine the performance differentials (both uncorrected and regression-corrected for other observable characteristics) of international, NESB, and other students across all observed courses. Then, using the computed percentages of international and NESB students in each course and each tutorial, I isolate the association between higher concentrations of both types of students and the marks of other students sharing their tutorial and their course.
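The construction of these four leave-one-out percentages can be sketched as follows (a minimal illustration with hypothetical column and variable names, not the paper's actual data set; single-student tutorials are dropped beforehand, as described above, so the denominators are always positive):

```python
import pandas as pd

# Toy student-tutorial observations with hypothetical column names.
df = pd.DataFrame({
    "student_id":  [1, 2, 3, 4, 5, 6],
    "course_id":   ["ACCT101"] * 6,
    "tutorial_id": ["T1", "T1", "T1", "T2", "T2", "T2"],
    "intl":        [1, 0, 0, 1, 1, 0],   # 1 = international student
    "nesb":        [1, 1, 0, 1, 1, 0],   # 1 = non-English speaking background
})

def leave_one_out_share(data: pd.DataFrame, group_col: str, flag_col: str) -> pd.Series:
    """Share of flagged peers in the student's group, excluding the student's own row."""
    grouped = data.groupby(group_col)[flag_col]
    flagged = grouped.transform("sum")    # flagged students in the group, including self
    size = grouped.transform("count")     # group size, including self
    return (flagged - data[flag_col]) / (size - 1)

for flag in ("intl", "nesb"):
    df[f"pct_{flag}_tutorial"] = leave_one_out_share(df, "tutorial_id", flag)
    df[f"pct_{flag}_course"] = leave_one_out_share(df, "course_id", flag)

print(df[["student_id", "pct_intl_tutorial", "pct_intl_course",
          "pct_nesb_tutorial", "pct_nesb_course"]])
```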

7 The final course mark is typically a composite of marks on ‘continuous assessment’ (e.g., papers, midterms, and quizzes) and the mark on the final examination. Although course-specific information about assessment mix is not available in the data used here, the standard convention in most undergraduate courses run by Australian business schools is to hold a final examination which is worth between 40 and 70% of the course mark. 8 These variables indicate the actual physical room in which the tutorial was held – e.g., Building 5, Room 301. This array of controls is intended to pick up effects on marks due to shared phenomena at the tutorial level such as poor lighting, stuffiness, or drafts, that may influence final course marks.

The possibility of selection into courses and/or tutorials is an important concern. The mechanical fact that observations in the data set disproportionately represent large courses means that most of the data used in the estimation comes from core courses (for example, Introductory Accounting), which students by and large cannot avoid taking at some point during their undergraduate careers. Different programs of study do require slightly different core courses, but each student’s program of study can be observed and controlled using dummy variables. Course-specific fixed effects (e.g., a fixed effect for ‘Introductory Accounting’) can also be included in models featuring course composition variables on the right-hand side, due to the variation across semesters in the composition of any given course.

Selection into tutorials is more problematic. There are significant constraints placed on most students’ schedules, and generally no information is available in advance about which tutorials will be taught by which teachers. However, students are by and large able to select their tutorials subject to the constraints they face, with early enrollees likely to have a wider array of choices than later enrollees. This may result in later enrollees, who may also be less conscientious students, being clustered into certain ‘left over’ tutorials. If it is furthermore the case that international or NESB students are also likely to be late enrollees, such that the percentage of such students within any given tutorial is negatively related to the average unobserved conscientiousness of its students, then we would expect downward pressure on the estimate of the impact of tutorial-level concentrations of international or NESB students on marks. To address these concerns, the physical location of the tutorial, the day of the week it was held, and its exact start time are all controlled using dummy variable arrays in the models. However, tutorial-level fixed effects cannot be controlled, because doing so would prevent the simultaneous inclusion of the tutorial-specific composition variables that are the estimation targets of interest. Any causal interpretation of the results found in regard to tutorial composition effects should therefore be viewed as conditional upon the comprehensiveness with which the set of controls included capture the unobserved selection mechanisms at play.9
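To fix ideas, a stylized version of the main estimating equation (my notation; the paper does not write the specification out explicitly) for student $i$ observed in tutorial $t$ of course $c$ in semester $s$ is

$$\text{mark}_{itcs} \;=\; \beta_1\,\text{Intl}_i \;+\; \beta_2\,\text{NESB}_i \;+\; \gamma'\,P_{-i,tcs} \;+\; \delta'\,X_{is} \;+\; \theta_{cs} \;+\; \lambda'\,Z_{t} \;+\; \varepsilon_{itcs},$$

where $P_{-i,tcs}$ stacks the leave-one-out concentration variables (percent international and percent NESB in the tutorial and in the course), $X_{is}$ collects the student-level controls (including, in some models, GPA in all other courses that semester), $\theta_{cs}$ is a course-by-semester (or, in the composition models, course) fixed effect, $Z_t$ collects the tutorial size, location, and day/time dummy arrays, and standard errors are clustered at the student level.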

9 Although this would not permit the estimation of the association with marks of being international or NESB oneself, another approach to purifying the composition effect estimates of selection bias in this context would be to control for student-specific fixed effects, thereby estimating the effects of classroom composition only within student. While this drastically reduces the underlying variation in the explanatory variables used to estimate the composition effects at both the course level and the tutorial level, it is conceptually appealing due to the much lower possibility of any significant selection issue remaining to contaminate the estimates. Following the suggestion of a referee, I attempted to do this. With more than 120,000 observations and close to 18,000 fixed effects (including student, course, and tutorial location, day, and time effects), however, the model was inestimable on the full data set using conventional software. Estimation was only possible by running the model separately for different sub-samples of the data (defined by time-frames, genders, and demographics). In these specifications, the positive linear course-composition effects shown in the main results below were in evidence for several sub-samples, though they were not always significant and for some sub-samples they were of the opposite sign; and the tutorial-composition effects were highly variable in terms of both sign and significance. I take from this exercise that insufficient within-student variation remains in the sub-samples, after absorbing this large number of fixed effects, to estimate interpretable composition effects.


Table 1. Student-tutorial sample: summary statistics.

Panel A                               Mean      Std dev    Min    Max
Mark                                  61.03     (14.80)    1      100
International (yes = 1)               .30       –          0      1
NESB (yes = 1)                        .47       –          0      1
NESB international (yes = 1)          .29       –          0      1
Percent international in tutorial     .30       (.25)      0      1
Percent NESB in tutorial              .47       (.25)      0      1
Percent NESB intl in tutorial         .29       (.25)      0      1
Percent international in course       .30       (.19)      0      1
Percent NESB in course                .47       (.19)      0      1
Percent NESB intl in course           .29       (.19)      0      1
Number of students in tutorial        25.95     (10.02)    2      102
Number of students in course          346.15    (237.25)   3      1017

Panel B (correlations)                Mark      % intl in tutorial   % NESB in tutorial   % intl in course   % NESB in course
Mark                                  1.00
Percent international in tutorial     −.09***   1.00
Percent NESB in tutorial              −.07***   .86***               1.00
Percent NESB intl in tutorial         −.09***   .99***               .87***
Percent international in course       −.08***   .77***               .68***               1.00
Percent NESB in course                −.07***   .68***               .76***               .89***             1.00
Percent NESB intl in course           −.08***   .76***               .69***               1.00***            .91***

Statistics are calculated across the 122,694 student-tutorial level observations for the sample used to produce the final column of Table 4. ‘NESB’ stands for ‘non-English language speaking background’; ‘intl’ stands for ‘international’.
*** p < 0.001.

2.2. Data description

Table 1 shows simple summary statistics and correlations at the student-tutorial level for the main analysis sample of 122,694 observations, representing 15,249 students. Panel A shows that the average final course mark in the sample is just over 61%, and the percentages of international and NESB students experienced by students in classrooms average 30% and 47% respectively. A tutorial consists of about 26 students, on average; the average enrolment in a course is almost 350 students. Panel B shows strong negative raw correlations between mark and both international student status and NESB status at the student-tutorial level. There are also strong negative correlations at this level between mark and all variables capturing the percentages of international and NESB students in classrooms. Finally, it is clear from the patterns of correlations in this panel that whereas virtually all international students in the sample are also NESB, there is a reasonable percentage of NESB students who are not international. This percentage is 35% in the sample (2551 of the 7291 NESB students are domestic; see Table 2).

3. Results: main effects

In the data used for this paper, international and NESB students generally fare worse in terms of raw marks.10

10 Fig. 3 in Appendix A shows simple histograms of marks, at the student-by-tutorial level, for international versus non-international students. Fig. 4 then shows analogous histograms for NESB versus non-NESB students. Both comparisons illustrate that the density is thicker both below 50 (a bare pass) and just above 50 for both international and NESB students than for other students, while marks of the latter groups cluster more heavily in the upper ranges.

Table 2 shows that raw marks vary significantly by international and NESB status at the student level, with both international and NESB students performing worse on average.

The patterns shown in Table 2 persist when other aspects of the student and his learning environment are controlled. The first column of Table 3 shows robust evidence of lower marks for international and NESB students – by 2.5–3.5 points each on the 1-to-100 marking scale – even controlling for institution, semester, student demographics (age and gender), new student status, university entrant status (same-state high school leaver or other entrant), course load, program of study,11 course and tutorial size, tutorial location and day/time, and discipline group of the course. Moreover, these effects are not due to selection into courses, nor do they disappear after a student’s first semester.

11 It is important to note that the Australian higher education system by and large does not follow the liberal arts tradition. Australian university students come to university having already identified and enrolled in a particular program leading to a particular degree, such as Bachelor of Business (Marketing) or Bachelor of Accounting, and must enrol in a far narrower suite of courses, prescribed by their program, than a typical American undergraduate. Most Australian undergraduate degrees require three years of full-time study to complete, and require the student to enrol in very few courses managed outside the school (i.e., the collection of departments, such as Economics, Marketing, and Management) running the program. This is why the sample used in this paper represents a fairly complete picture of the courses taken by students enrolled in undergraduate programs run by the business schools of the two institutions.


Table 2. Student-level averages.

Panel A                          International               Domestic
                                 Mean      Std dev   N       Mean      Std dev   N
Mark                             57.70     (11.32)   4872    61.55     (12.35)   10,377
Age                              22.45     (2.52)    4872    22.05     (4.97)    10,377
Gender (female = 1)              .52       –         4872    .54       –         10,377
New student (yes = 1)            .31       –         4872    .24       –         10,377
Number of tutorials in sample    7.75      (4.25)    4872    8.30      (4.62)    10,377

Panel B                          NESB                        ESB
                                 Mean      Std dev   N       Mean      Std dev   N
Mark                             58.42     (11.51)   7291    62.07     (12.48)   7958
Age                              22.42     (3.61)    7291    21.95     (4.91)    7958
Gender (female = 1)              .54       –         7291    .53       –         7958
New student (yes = 1)            .27       –         7291    .25       –         7958
Number of tutorials in sample    8.07      (4.40)    7291    8.17      (4.62)    7958

Panel C                          NESB international          ESB domestic, NESB domestic, and ESB international
                                 Mean      Std dev   N       Mean      Std dev   N
Mark                             57.75     (11.22)   4740    61.49     (12.39)   10,509
Age                              22.44     (2.50)    4740    22.05     (4.95)    10,509
Gender (female = 1)              .52       –         4740    .54       –         10,509
New student (yes = 1)            .29       –         4740    .25       –         10,509
Number of tutorials in sample    7.79      (4.24)    4740    8.27      (4.63)    10,509

Statistics are calculated across the 15,249 student-level observations used to produce the final column of Table 4. ‘NESB’ stands for ‘non-English language speaking background’; ‘ESB’ stands for ‘English language speaking background’.

Table 3. Baseline marks equations.

                                     (1)         (2)         (3)         (4)         (5)
International                        −2.779***   −2.291***   −2.355***               −2.561***
                                     (0.27)      (0.29)      (0.29)                  (0.28)
International* new student                       −2.128***   −0.553
                                                 (0.40)      (0.41)
NESB                                 −3.315***   −3.484***   −3.368***               −3.387***
                                     (0.24)      (0.25)      (0.25)                  (0.24)
NESB* new student                                0.722*      0.364
                                                 (0.36)      (0.35)
NESB* international                                                      −5.449***
                                                                         (0.24)
NESB* domestic                                                           −3.395***
                                                                         (0.24)
Female                               1.913***    1.916***    1.953***    1.963***    1.921***
                                     (0.18)      (0.18)      (0.17)      (0.17)      (0.18)
Age                                  0.073**     0.074**     0.079**     0.078**     0.056*
                                     (0.02)      (0.02)      (0.02)      (0.02)      (0.03)
New student                          −0.084      0.393       −0.641*     −0.768***   −0.439*
                                     (0.16)      (0.22)      (0.26)      (0.19)      (0.18)
Course-by-semester fixed effects?    No          No          Yes         Yes         Yes
Adj. R-sq                            0.096       0.097       0.162       0.161       0.366
Obs                                  125,214     125,214     125,214     125,214     122,694

The dependent variable is final course mark. Institution, semester by year, program of study, high school leaver status, course load, and tutorial size, location, and starting time and day effects are controlled in all regressions. Discipline group is also controlled in columns 1 and 2, using nine categories (Accounting, Banking, Business, Economics, Law, Marketing, Mathematics, Management, and Other), as is course size. Columns 3–5 control for course-by-semester fixed effects. Column 5 also adds a control for the GPA of the student calculated across all other courses in the current semester. ‘NESB’ stands for ‘non-English language speaking background’. Standard errors are clustered at the student level. Full results appear in Appendix A.
* p < 0.05. ** p < 0.01. *** p < 0.001.


Columns 2 and 3 of Table 3 include interactions of international and NESB student status with whether the student is new to university. Working from the results in column 3, where student selection into particular courses is controlled using course-by-semester fixed effects, an international student in his first semester at university is estimated to perform about 2.9 marks lower than a seasoned non-NESB domestic student, and even after the first semester, about 80% of this differential remains. For NESB students, the story is even worse: in the first semester the average gap in marks between NESB and seasoned non-NESB domestic students is about 3 points, and the gap widens to almost 3.4 points after the first semester.

In column 4 of Table 3, the same basic pattern is confirmed using indicators constructed differently: one indicator for NESB international students, and one for NESB domestic students, with the left-out category being English-speaking students of all types. As expected, NESB international students perform even worse relative to English-speaking students than do NESB domestic students, and both groups of NESB students experience significantly lower marks than students who speak English in the home.12

Finally, in column 5 of Table 3, in addition to course-by-semester fixed effects, one final control variable is added to the specification in column 1: the average final mark obtained by the student in all other courses in that semester. This reduces the sample size slightly since all students enrolled in only one course in a given semester are excluded, but the result is the same: international and NESB students both perform significantly worse than other students. The similar pattern of results in this column compared to column 1 is also suggestive of differences in the skills and abilities required to succeed in different courses observed in the sample. If the attributes required for success were identical across courses, then no fixed student characteristics should remain significant in this equation once the balance of GPA that semester is controlled.13
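The magnitudes quoted for column 3 follow directly from the reported point estimates (approximate, given rounding):

$$\underbrace{-2.355}_{\text{International}} \;+\; \underbrace{(-0.553)}_{\text{International}\times\text{new}} \;\approx\; -2.9 \quad\text{(first semester)}, \qquad \frac{2.355}{2.908} \;\approx\; 0.81,$$

so roughly 80% of the international differential remains after the first semester. Similarly for NESB students, $-3.368 + 0.364 \approx -3.0$ in the first semester, widening to $-3.368$ (almost 3.4 points) thereafter.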

12 These significant downward performance effects persist if we instead predict simple passage of courses, using a logistic MLE model.

13 One other possibility is that marks are largely a function of course-specific luck, but GPA in other courses is highly significant in the equation (see full results in Appendix A), making this seem unlikely. Student-by-semester ‘luck’ effects are certainly possible, however.

14 See Appendix A for graphical distributions of these variables in Figs. 5 and 6. It appears in the data that international student concentrations in tutorials are more uneven (with zero international students in 15% of tutorials) than NESB student concentrations because of program-of-study and course selection effects. Specifically, international students are far less likely to study law, and more likely to study accounting, banking, business, and marketing, than other students. This may be the result of visa regulations, university marketing messages to potential overseas students, or other factors.


4. Results: composition effects

This section focusses on variables that capture the percentages of NESB and international students in the tutorial and course in which the student earning a given mark was enrolled.14 Student-by-course mark equations are re-estimated including these additional covariates constructed at the level of the tutorial and course in which the final mark was observed. Tutorial concentration effects are included to capture any learning spillovers that may occur within classrooms, whereas course concentration variables are intended to capture any course-wide spillovers from international and/or NESB student concentrations onto all students’ marks (for example, through relative marking). Because of the significant overlap between international and NESB student status, it is impossible to estimate independent effects of both sets of concentration variables in the same regression. To address this problem, additional specifications are then run with concentration variables at both the tutorial and course levels constructed for NESB domestic and NESB international students, with the left-out category being English-speaking students of all types (both international ESB students, of whom there are very few, and domestic ESB students). All regressions include course fixed effects, as well as extensive arrays of controls at the tutorial and student levels. Table 4 presents these results.
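For concreteness, one way to estimate a specification of this kind with course fixed effects and student-clustered standard errors is sketched below (an illustration only: the column names and synthetic data are hypothetical, the paper does not state which software was used, and a full implementation would also include the tutorial location, day/time, program-of-study, and other control arrays described above):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration only: hypothetical column names and random data, not the paper's data set.
rng = np.random.default_rng(0)
n = 500
background = rng.choice(["esb", "nesb_domestic", "nesb_international"], size=n)
df = pd.DataFrame({
    "student_id": rng.integers(0, 100, size=n),
    "course_id": rng.choice(["ACCT101", "ECON101", "MKTG101"], size=n),
    "nesb_intl": (background == "nesb_international").astype(int),
    "nesb_dom": (background == "nesb_domestic").astype(int),
    "pct_nesb_intl_tut": rng.uniform(0, 1, size=n),
    "pct_nesb_intl_course": rng.uniform(0, 1, size=n),
    "pct_nesb_dom_tut": rng.uniform(0, 1, size=n),
    "pct_nesb_dom_course": rng.uniform(0, 1, size=n),
    "gpa_other": rng.uniform(40, 90, size=n),
})
df["mark"] = 60 + rng.normal(0, 10, size=n)  # placeholder outcome with no built-in effects

# Course fixed effects via C(course_id); standard errors clustered at the student level.
formula = (
    "mark ~ nesb_intl + nesb_dom"
    " + pct_nesb_intl_tut + pct_nesb_intl_course"
    " + pct_nesb_dom_tut + pct_nesb_dom_course"
    " + gpa_other + C(course_id)"
)
fit = smf.ols(formula, data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["student_id"]}
)
print(fit.summary())
```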

Table 4. Adding composition effects on marks.

                                    (1)          (2)          (3)          (4)
International                       −2.390***    −2.467***
                                    (0.28)       (0.28)
NESB                                −3.359***    −3.346***
                                    (0.24)       (0.24)
% intl in tutorial                  −3.198***
                                    (0.28)
% intl in course                    5.227***
                                    (0.84)
% NESB in tutorial                               −3.010***
                                                 (0.29)
% NESB in course                                 5.425***
                                                 (0.87)
NESB* international                                           −5.456***    −5.456***
                                                              (0.24)       (0.24)
NESB* domestic                                                −3.436***    −3.430***
                                                              (0.25)       (0.25)
% NESB intl in tutorial                                       −3.674***    −3.404***
                                                              (0.31)       (0.75)
(% NESB intl in tutorial)²                                                  −0.220
                                                                            (0.81)
% NESB intl in course                                         5.883***     −0.936
                                                              (0.90)       (2.24)
(% NESB intl in course)²                                                    8.261**
                                                                            (2.53)
% NESB domestic in tutorial                                   −0.952*      3.643***
                                                              (0.44)       (0.87)
(% NESB domestic in tutorial)²                                              −8.349***
                                                                            (1.34)
% NESB domestic in course                                     4.406**      −3.928
                                                              (1.54)       (4.12)
(% NESB domestic in course)²                                                20.231*
                                                                            (9.91)
Adj. R-sq                           0.359        0.358        0.358        0.358
Obs                                 122,694      122,694      122,694      122,694

The dependent variable is final course mark. Institution, semester by year, program of study, GPA of the student calculated across all other courses in the current semester, course load, course fixed effects, and tutorial size, location, and starting time and day effects are controlled in all regressions. ‘NESB’ stands for ‘non-English language speaking background’; ‘intl’ stands for ‘international’. Standard errors are clustered at the student level. Full results appear in Appendix A.
* p < 0.05. ** p < 0.01. *** p < 0.001.

15 Specifically, this pattern indicates that international students are present in higher concentrations in courses where worse marks are observed.


Column 1 of Table 4 shows the impact of concentrations of international students in the tutorial and the course. The results in column 1 suggest that being in a tutorial with a larger percentage of international students is significantly worse for one’s mark. Going from a tutorial with the average percentage of international students (30%) to one with 10 percentage points more international students is estimated to yield about a .32 point reduction in final course mark. This implies that in-class learning may be diminished with the addition of international students to a tutorial classroom. The percentage of international students in the course as a whole, by contrast, is estimated to matter positively for marks. Column 1 of Table 4 shows that holding constant own international and NESB student status, the percentage of international students in the tutorial, and all other controls, the marks of students in courses where the percentage of international students is 10 percentage points higher than average are estimated to be .52 points higher on average than those of students in courses with the average percentage of international students. If we do not control for course fixed effects, this buoying effect is not as strong, implying that student selection into certain courses is important in this context.15 This is preliminary evidence in support of a relative marking effect or other supply-side influence on student performance.

An identical analysis is undertaken next on the impact on final course marks of NESB student concentrations, and results are shown in column 2 of Table 4. Results indicate similarly-sized negative effects on marks from higher concentrations of NESB students in tutorials, as well as similarly-sized positive effects on marks from high concentrations of NESB students in courses. Naturally, however, since a large percentage of NESB students are also international, either the English language aspect or the international student aspect may be driving these effects. This pattern again appears to support a supply-side effect on marks.

To further investigate these patterns, an analogous set of regressions is run where concentration variables for two sets of students – international NESB students, and domestic NESB students – are included simultaneously. Results are reported in columns 3 and 4 of Table 4. The negative association of own NESB status with marks, regardless of whether one is international or domestic, is once again clearly apparent in column 3. Furthermore, the negative effect of international and NESB students within a tutorial remains, and is estimated to be driven by students who are both international and from non-English language speaking backgrounds. High domestic NESB student percentages within tutorials still negatively impact student marks, but international NESB students yield a particularly strong negative spillover within the tutorial classroom. In regard to course-wide concentration effects, the results in column 3 of Table 4 indicate that both international and NESB student percentages within the course appear to produce an upward buoying effect on marks.
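The magnitudes quoted for column 1 are simply the coefficient estimates scaled by the 0.10 (10 percentage point) change in the relevant share, since the concentration variables are measured as proportions between 0 and 1 (Table 1):

$$0.10 \times (-3.198) \;\approx\; -0.32 \ \text{marks (tutorial)}, \qquad 0.10 \times 5.227 \;\approx\; +0.52 \ \text{marks (course)}.$$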


Fig. 1. Average marks in courses, by percent international students.

Finally, to examine whether there are significant nonlinearities in these effects, column 4 of Table 4 shows results from a model in which squared concentration variables are included in addition to allowing main linear effects.16 Results show that the negative effect of the percentage of NESB international students within the tutorial is fairly well captured by a linear model: the squared term is insignificant. By contrast, the positive effect on marks associated with a higher percentage of international NESB students in the course as a whole is indeed nonlinear: the squared term is positive and highly significant, whereas the main effect is not. This pattern suggests that the effect of course-level percentages will be lower when only a small fraction of the course’s enrolment is made up of international NESB students than when the course already has a large fraction of such students, whereas adding such students to tutorials has the same negative effect on marks whether it is the first or the last such student to be added. This makes intuitive sense under the interpretation that effects stemming from tutorial percentages reflect endogenous learning dynamics resulting from piecemeal changes in the tutorial classroom environment, whereas effects stemming from course-wide concentrations reflect wholesale changes to material, lecture delivery, or marking that become more and more necessary to implement as more and more students who warrant them are present.

If international and NESB students perform worse on an individual basis than other students, then in the absence of grading to a curve one would logically expect that courses in which there are large proportions of such students should post lower average marks. Figs. 1 and 2 show average marks in courses plotted against the percentage of each type of student in the course. The size of each bubble is proportional to the total enrolment in the course. These figures show little evidence of a downward adjustment in average marks as the percentage of international or NESB students in a course rises above about 20%.
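A plot of the kind shown in Figs. 1 and 2 can be produced from course-level aggregates along the following lines (a sketch with purely illustrative numbers and hypothetical column names; the paper's course-level data are not reproduced here):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical course-level aggregates: share international, average mark, total enrolment.
courses = pd.DataFrame({
    "pct_intl":  [0.05, 0.20, 0.35, 0.50, 0.70],
    "avg_mark":  [63.0, 61.5, 60.8, 61.2, 60.9],
    "enrolment": [120, 450, 800, 300, 150],
})

plt.scatter(
    courses["pct_intl"], courses["avg_mark"],
    s=courses["enrolment"],          # bubble size proportional to course enrolment
    alpha=0.5, edgecolors="black",
)
plt.xlabel("Proportion of international students in course")
plt.ylabel("Average final course mark")
plt.title("Average marks by percent international (cf. Fig. 1)")
plt.show()
```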

16 I thank an anonymous referee for suggesting this specification.


Fig. 2. Average marks in courses, by percent NESB students.

One possible explanation for this, consistent with the regression results reported above, is that the downward pressure on the grade distribution that results when international and/or NESB students are added to classrooms is partly compensated for by downward adjustments to grading standards or delivered material, buoying up the marks distribution as a whole. The desire to keep certain aspects of marks distributions looking similar across course offerings may provide a free ride to students – potentially, both foreign and domestic – in courses with high percentages of international and/or NESB students.


Yet does this buoying effect apply equally to all types of students? To examine this question, Table 5 presents estimation results from models of average marks within a tutorial amongst different groups of students, regressed against the percentages of international NESB and domestic NESB students in the tutorial and the course of which it was part. Columns 1 and 2 use average marks of students of English-speaking backgrounds within the given tutorial as the dependent variable; columns 3 and 4 use average marks of NESB international students; and columns 5 and 6 use average marks of NESB domestic students. Aggregated versions of all variables used in prior regressions are included, and columns 2, 4, and 6 additionally include squared concentration variables in order to capture nonlinearities in the effects of interest. All regressions include course fixed effects, course and tutorial size variables, and tutorial day, time and location variable arrays, and are weighted by size of tutorial.

Table 5. Average marks equations for student subgroups.

Dep. var:                        Average marks of            Average marks of           Average marks of
                                 ESB students                NESB intl students         NESB domestic students
                                 (1)         (2)             (3)          (4)           (5)          (6)
% NESB intl in tutorial          −5.563***   −5.489***       −1.015***    −0.250        −5.310***    −3.313***
                                 (0.14)      (0.33)          (0.19)       (0.54)        (0.33)       (0.79)
(% NESB intl in tutorial)²                   −0.146                       −0.633                     −2.370*
                                             (0.42)                       (0.53)                     (0.94)
% NESB intl in course            3.826***    −1.509          8.375***     8.563***      1.288        4.067
                                 (0.41)      (0.95)          (0.54)       (1.55)        (1.15)       (2.67)
(% NESB intl in course)²                     7.307***                     −0.212                     −6.033
                                             (1.16)                       (1.59)                     (3.10)
% NESB domestic in tutorial      −1.934***   0.737           −0.412       5.162***      −0.797       3.923***
                                 (0.20)      (0.40)          (0.32)       (0.63)        (0.45)       (1.03)
(% NESB domestic in tutorial)²               −5.101***                    −10.699***                 −7.360***
                                             (0.67)                       (1.00)                     (1.42)
% NESB domestic in course        5.931***    1.858           −0.670       −12.417***    4.171*       −2.937
                                 (0.71)      (1.82)          (1.08)       (3.08)        (1.92)       (6.50)
(% NESB domestic in course)²                 10.321*                      35.010***                  13.488
                                             (4.68)                       (7.76)                     (15.06)
Adj. R-sq                        0.673       0.674           0.657        0.659         0.575        0.576
Obs                              66,580      66,580          36,608       36,608        21,913       21,913

In all cases, the dependent variable is the average (within tutorial) final course mark for the indicated group of students. Regressions reported in columns 2, 4, and 6 are respectively equivalent to those reported in columns 1, 3, and 5, except that the former include squared concentration variables. Course fixed effects are controlled in all regressions. Also controlled are effects for institution, semester-by-year, course size, tutorial size, location, day of week and start time, and tutorial-wide averages of the following items across all students of the given demographic type whose average mark within that tutorial is being predicted: gender; age; new student status; course load; program of study indicators for the seven programs of study with the largest percentages of international students; indicators for high school leaver status and the absence of high school information as used in prior regressions; and average GPA in other concurrent courses. ‘NESB’ stands for ‘non-English language speaking background’; ‘ESB’ stands for ‘English language speaking background’; and ‘intl’ stands for ‘international’.
* p < 0.05. *** p < 0.001.


The previous negative spillover effect of international NESB students at the tutorial level is still in evidence in Table 5, but this effect is strongest by far on the marks of other types of students: those from English-speaking backgrounds and those who are domestic NESB students. Furthermore, underscoring previous results, high concentrations of domestic NESB students in the tutorial are found to be associated with lower marks, but this effect is far smaller than the effect of international NESB student concentrations within tutorials and is also significant only with respect to the marks of domestic ESB students. Hence, of all three student types, international NESB students appear to feel the smallest degree of negative spillovers from high concentrations of NESB students in tutorials. When added to tutorials, they also deliver the largest negative spillovers onto other students.

Course-wide concentration effects display an interesting pattern. The percentage of international NESB students in a course is estimated to have a very large positive impact on the tutorial-level average marks of international NESB students themselves, and a smaller but still significant positive impact on the marks of ESB students. The course-wide percentage of international NESB students does not matter significantly for the marks of domestic NESB students. This implies that the upward buoying effect, presumably due to downward adjustments in marking standards, material selection, or delivery when more international NESB students are added to courses, affects only the average marks of the NESB international students themselves (primarily) and of English-speaking students (to a lesser extent). A similar pattern of influence is found for the concentrations of domestic NESB students in the course: a positive effect for students of the same type (i.e., domestic NESB) and for ESB students, with international NESB students far less affected by concentrations of domestic NESB students in the course.

Turning to the even-numbered columns of Table 5, the nonlinear pattern found in previous results is also in evidence here for domestic ESB students. Such students feel a similar negative effect from each international NESB student added to a tutorial, whereas the upward buoying effect of the course-wide percentage of international NESB students becomes stronger the more of such students are added to a course. By contrast, international NESB students themselves feel a fairly linear upward buoying effect as their percentage in a course increases.

Overall, these patterns point to an upward buoying influence on most students’ marks when a given course has a high enrolment of students from non-English speaking backgrounds. Logic suggests that this is not due to learning – after all, these effects are found from variation in student percentages at the course level, not at the tutorial level, and NESB students of all types exhibit lower baseline marks than other students. Instead, two explanations seem most congruent with the facts. First, it may be that lecturers adjust the presentation style or the difficulty of material endogenously when faced with large concentrations of such students in a given course, making it easier to obtain a higher mark in the course. Second, at the stage where assignments, final exams or term papers are marked, a large number of papers that are poorly written may make it difficult for markers to uphold the same standards that they would apply if papers were better written. This may mean both that all poorly-written papers are marked more leniently than they would be otherwise, and that a relatively well-written paper (most likely from a student of English speaking background) is given a higher mark than it would have been given, had it not stood in comparison to so many poorly-written papers.

5. Conclusion

Using new multi-institutional panel data on Australian undergraduates studying in business schools, I find strong statistical evidence that both international students and students from non-English speaking backgrounds earn persistently lower marks than other students, even controlling for selection into courses. Both effects are large and persist into students’ later semesters at university, but NESB status predicts more of a reduction in own mark than international status and becomes even more of a hindrance after the first semester.

The paper’s second main result is that adding international or domestic NESB students to a tutorial leads to a reduction in most students’ marks, and this is particularly the case for international NESB student concentrations. This effect is strongest as felt by students from English-speaking backgrounds. Given that the main effect on one’s own marks of being an international or NESB student is negative, strong, and statistically significant, a logical inference is that placing these lower-performing students into tutorial classrooms with other students would have, if not a negative effect, at least a non-positive effect on those other students’ learning. I therefore interpret the negative effect of tutorial-level percentages of international NESB students as a direct negative spillover within tutorial classrooms, one that may stem from changes in tutorial dynamics due to the mix of students present that, on balance, hinder learning. While English fluency problems could partly explain this result, the pattern of estimates – i.e., far stronger negative spillover effects from international NESB student concentrations than from domestic NESB student concentrations – is consistent with the hypothesis that cultural factors associated with NESB status other than English language competency play an important role in shaping the learning context at the tutorial level.

The third main result in the paper is that the presence of more NESB students in a course buoys up the marks of all students in that course. Conditional on student covariates, tutorial composition effects, course fixed effects, and an array of other student and tutorial-level controls, the impact on marks of a high percentage of NESB students in a course is positive. Based on the foregoing evidence regarding negative performance differentials and negative spillovers within tutorial classrooms, this positive effect of course percentages seems unlikely to reflect a learning effect. Rather, it may reflect supply-side influences, such as downward adjustments to the difficulty of material or grading standards applied when large concentrations of such students are present in a course. Both material selection and final decisions about course marks typically occur for the entire course all at once, rather than tutorial-by-tutorial. In either of those activities, an adjustment by the teacher to accommodate a larger fraction of lower-performing students is quite plausible.

Taken as a whole, the evidence in this paper strongly indicates the need for better integration of students from non-English speaking backgrounds – particularly international NESB students – into the Australian tertiary education environment. I conclude that Australian universities should consider carefully whether they are directing sufficient attention and resources towards improving the English fluency and cultural assimilation of these students.


Acknowledgments

.03 .02 .01 0

Density

.04

1

.03

I thank the University of South Australia, the University of Technology Sydney, and the Australian Research Council (Grant number DP0878138). I acknowledge the research assistance of Fei Qiao and Sam Trezise. Special thanks are due to Peter Antony and Graeme Poole for extracting the data used in this paper.

Appendix A.

Fig. 3 shows simple histograms of marks, at the student-by-tutorial level, for international versus non-international students. Fig. 4 then shows analogous histograms for NESB versus non-NESB students. Fig. 5 shows the distribution of the percentage of international students in tutorials, across all tutorials in the main analysis sample; Fig. 6 shows the analogous distribution for the percentage of NESB students in tutorials. Tables 6–8 show complete results of the regressions appearing in Tables 3–5, respectively, with the exception of results for the arrays of fixed effects, which are too voluminous to report.

Fig. 3. Marks: international (bottom) versus non-international (top) students.
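The quantities shown in these appendix figures are simple descriptives of the estimation sample. The sketch below indicates how they could be built from a student-by-tutorial panel; the column names (mark, intl, nesb, tutorial_id) are hypothetical, while intlpc_t and nesbpc_t mirror the axis labels appearing in Figs. 5 and 6.

```python
# Hypothetical sketch of the descriptives behind Figs. 3-6; not the paper's
# actual code or data layout.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("marks_panel.csv")  # one row per student-by-tutorial observation

# Tutorial-level shares of international and NESB students (cf. Figs. 5 and 6);
# histogramming these shares over distinct tutorials reproduces those figures.
df["intlpc_t"] = df.groupby("tutorial_id")["intl"].transform("mean")
df["nesbpc_t"] = df.groupby("tutorial_id")["nesb"].transform("mean")

# Mark distributions for international versus non-international students (cf. Fig. 3).
fig, axes = plt.subplots(2, 1, sharex=True)
df.loc[df["intl"] == 0, "mark"].hist(ax=axes[0], bins=50, density=True)
df.loc[df["intl"] == 1, "mark"].hist(ax=axes[1], bins=50, density=True)
axes[0].set_title("Non-international students")
axes[1].set_title("International students")
axes[1].set_xlabel("mark")
plt.tight_layout()
plt.show()
```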

Table 6
Baseline marks equations: full results.

Variable | (1) | (2) | (3) | (4) | (5)
International | −2.779*** (0.27) | −2.291*** (0.29) | −2.355*** (0.29) | | −2.561*** (0.28)
International* new student | | −2.128*** (0.40) | −0.553 (0.41) | |
NESB | −3.315*** (0.24) | −3.484*** (0.25) | −3.368*** (0.25) | | −3.387*** (0.24)
NESB* new student | | 0.722* (0.36) | 0.364 (0.35) | |
NESB* international | | | | −5.449*** (0.24) |
NESB* domestic | | | | −3.395*** (0.24) |
Female | 1.913*** (0.18) | 1.916*** (0.18) | 1.953*** (0.17) | 1.963*** (0.17) | 1.921*** (0.18)
Age | 0.073** (0.02) | 0.074** (0.02) | 0.079** (0.02) | 0.078** (0.02) | 0.056* (0.03)
New student | −0.084 (0.16) | 0.393 (0.22) | −0.641* (0.26) | −0.768*** (0.19) | −0.439* (0.18)
In-state high school | −0.698 (0.63) | −0.709 (0.63) | −0.770 (0.62) | −0.758 (0.62) | −0.760 (0.62)
No high school information | 0.885 (0.59) | 0.831 (0.59) | 0.338 (0.60) | 0.145 (0.60) | 0.476 (0.60)
GPA in other courses | | | | | −2.309*** (0.02)
Tutorial size | 0.033*** (0.01) | 0.034*** (0.01) | 0.043*** (0.01) | 0.043*** (0.01) | 0.053*** (0.01)
Course size | −0.005*** (0.00) | −0.005*** (0.00) | 0.645 (0.33) | 0.643 (0.33) | 0.047 (0.30)
Courseload | 0.841*** (0.09) | 0.820*** (0.09) | 0.827*** (0.09) | 0.789*** (0.09) | 0.806*** (0.09)
Institution | −8.674* (4.21) | −9.044* (4.21) | 288.055* (139.61) | 287.341* (140.18) | 7.703 (5652.85)
Constant | 63.642*** (3.58) | 63.653*** (3.58) | −249.535 (154.27) | −248.711 (154.89) | 43.864 (7412.93)
Course-by-semester fixed effects? | No | No | Yes | Yes | Yes
Adj. R-sq | 0.096 | 0.097 | 0.162 | 0.161 | 0.366
Obs | 125,214 | 125,214 | 125,214 | 125,214 | 122,694

This table replicates Table 3 but provides estimation results for more variables. All coefficient estimates except for those associated with fixed effect arrays are reported. Standard errors are clustered at the student level. * p < 0.05. ** p < 0.01. *** p < 0.001.

Fig. 4. Marks: NESB (bottom) versus non-NESB (top) students.

Fig. 5. Proportion of international students in tutorials.

Fig. 6. Proportion of NESB students in tutorials.

Table 7
Adding composition effects on marks: full results.

Variable | (1) | (2) | (3) | (4)
International | −2.467*** (0.28) | −2.390*** (0.28) | |
NESB | −3.346*** (0.24) | −3.359*** (0.24) | |
% intl in tutorial | −3.010*** (0.29) | | |
% intl in course | 5.425*** (0.87) | | |
% NESB in tutorial | | −3.198*** (0.28) | |
% NESB in course | | 5.227*** (0.84) | |
NESB* international | | | −5.456*** (0.24) | −5.456*** (0.24)
NESB* domestic | | | −3.436*** (0.25) | −3.430*** (0.25)
% NESB intl in tutorial | | | −3.674*** (0.31) | −3.404*** (0.75)
(% NESB intl in tutorial)² | | | | −0.220 (0.81)
% NESB intl in course | | | 5.883*** (0.90) | −0.936 (2.24)
(% NESB intl in course)² | | | | 8.261** (2.53)
% NESB domestic in tutorial | | | −0.952* (0.44) | 3.643*** (0.87)
(% NESB domestic in tutorial)² | | | | −8.349*** (1.34)
% NESB domestic in course | | | 4.406** (1.54) | −3.928 (4.12)
(% NESB domestic in course)² | | | | 20.231* (9.91)
Female | 1.920*** (0.18) | 1.921*** (0.18) | 1.931*** (0.18) | 1.930*** (0.18)
Age | 0.057* (0.03) | 0.056* (0.03) | 0.056* (0.03) | 0.056* (0.03)
New student | −0.441** (0.17) | −0.476** (0.17) | −0.515** (0.17) | −0.533** (0.17)
In-state high school | −0.676 (0.63) | −0.675 (0.63) | −0.673 (0.63) | −0.679 (0.63)
No high school information | 0.617 (0.59) | 0.651 (0.59) | 0.470 (0.59) | 0.485 (0.59)
GPA in other courses | −2.345*** (0.02) | −2.345*** (0.02) | −2.345*** (0.02) | −2.346*** (0.02)
Tutorial size | 0.043*** (0.01) | 0.042*** (0.01) | 0.043*** (0.01) | 0.038*** (0.01)
Course size | 0.001 (0.00) | 0.001 (0.00) | 0.001 (0.00) | −0.000 (0.00)
Courseload | 0.798*** (0.09) | 0.804*** (0.09) | 0.761*** (0.09) | 0.762*** (0.09)
Institution | 2.628 (3689.93) | 2.543 (4875.37) | 2.742 (.) | 2.705 (.)
Constant | 53.547 (1453.61) | 52.500 (1890.37) | 52.514 (3968.09) | 53.786 (.)
Adj. R-sq | 0.358 | 0.358 | 0.358 | 0.359
Obs | 122,694 | 122,694 | 122,694 | 122,694

This table replicates Table 4 but provides estimation results for more variables. All coefficient estimates except for those associated with fixed effect arrays are reported. Standard errors are clustered at the student level, and the lack of clustered standard error estimates on some coefficients results from the saturation of the model. Standard errors can be recovered by assuming independence within student, and on the variables of main interest, they are roughly one-third the size of the clustered standard errors reported above. * p < 0.05. ** p < 0.01. *** p < 0.001.
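The note above contrasts student-clustered standard errors with standard errors computed under independence within student. The following hypothetical sketch makes that comparison concrete; it does not reproduce the paper's specification, and all variable names are assumed.

```python
# Hypothetical illustration: compare classical (independence) standard errors
# with student-clustered ones for a few composition and status variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("marks_panel.csv").dropna()
model = smf.ols(
    "mark ~ intl + nesb + pct_nesb_intl_tutorial + C(course_by_semester)",
    data=df,
)

fit_iid = model.fit()  # assumes independence across observations within student
fit_clu = model.fit(cov_type="cluster",
                    cov_kwds={"groups": df["student_id"]})  # clustered by student

terms = ["intl", "nesb", "pct_nesb_intl_tutorial"]
print(pd.DataFrame({"se_iid": fit_iid.bse[terms],
                    "se_clustered": fit_clu.bse[terms]}))
```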

Table 8
Average marks equations for student subgroups: full results.

Dependent variable: average marks of ESB students (columns (1) and (2)), of NESB international students (columns (3) and (4)), and of NESB domestic students (columns (5) and (6)).

Variable | (1) | (2) | (3) | (4) | (5) | (6)
% NESB intl in tutorial | | −5.489*** (0.33) | | −0.250 (0.54) | | −3.313*** (0.79)
(% NESB intl in tutorial)² | | −0.146 (0.42) | | −0.633 (0.53) | | −2.370* (0.94)
% NESB intl in course | | −1.509 (0.95) | | 8.563*** (1.55) | | 4.067 (2.67)
(% NESB intl in course)² | | 7.307*** (1.16) | | −0.212 (1.59) | | −6.033 (3.10)
% NESB domestic in tutorial | | 0.737 (0.40) | | 5.162*** (0.63) | | 3.923*** (1.03)
(% NESB domestic in tutorial)² | | −5.101*** (0.67) | | −10.699*** (1.00) | | −7.360*** (1.42)
% NESB domestic in course | | 1.858 (1.82) | | −12.417*** (3.08) | | −2.937 (6.50)
(% NESB domestic in course)² | | 10.321* (4.68) | | 35.010*** (7.76) | | 13.488 (15.06)
Female | 3.578*** (0.11) | 3.539*** (0.11) | 1.678*** (0.14) | 1.658*** (0.14) | 1.194*** (0.18) | 1.198*** (0.18)
Age | 0.308*** (0.01) | 0.309*** (0.01) | −0.202*** (0.03) | −0.200*** (0.03) | −0.083*** (0.02) | −0.084*** (0.02)
New student | −0.635*** (0.12) | −0.717*** (0.12) | −0.587*** (0.15) | −0.600*** (0.15) | −0.839*** (0.24) | −0.824*** (0.24)
In-state high school | −0.191 (0.30) | −0.266 (0.30) | | | 3.125*** (0.51) | 3.166*** (0.50)
No high school information | 0.180 (0.26) | 0.144 (0.26) | | | 5.000*** (0.46) | 4.957*** (0.46)
GPA in other courses | −2.534*** (0.01) | −2.534*** (0.01) | −2.602*** (0.02) | −2.609*** (0.02) | −2.462*** (0.03) | −2.461*** (0.03)
Tutorial size | 0.040*** (0.00) | 0.038*** (0.00) | 0.049*** (0.00) | 0.043*** (0.00) | 0.046*** (0.01) | 0.039*** (0.01)
Course size | −0.001** (0.00) | −0.001*** (0.00) | 0.002*** (0.00) | 0.002*** (0.00) | −0.000 (0.00) | −0.000 (0.00)
Course load | 1.170*** (0.05) | 1.185*** (0.05) | 2.684*** (0.13) | 2.579*** (0.13) | 1.645*** (0.09) | 1.645*** (0.09)
Institution | 13.817*** (2.09) | 13.065*** (2.09) | −1.707 (5.73) | −2.518 (5.72) | −8.478 (8.77) | −8.751 (8.77)
Constant | 44.784*** (0.97) | 46.038*** (0.98) | 49.574*** (2.12) | 51.246*** (2.15) | 75.834*** (7.67) | 76.052*** (7.69)
Adj. R-sq | 0.673 | 0.674 | 0.657 | 0.659 | 0.575 | 0.576
Obs | 66,580 | 66,580 | 36,608 | 36,608 | 21,913 | 21,913

Composition-effect coefficients reported for the linear specifications in columns (1), (3) and (5): −5.563*** (0.14); −1.015*** (0.19); −5.310*** (0.33); 3.826*** (0.41); −1.934*** (0.20); 5.931*** (0.71); 8.375*** (0.54); −0.412 (0.32); 1.288 (1.15); −0.670 (1.08); −0.797 (0.45); 4.171* (1.92).

This table replicates Table 5 but provides estimation results for more variables. All coefficient estimates except for those associated with fixed effect arrays are reported. * p < 0.05. ** p < 0.01. *** p < 0.001.
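Table 8 estimates the composition-effects equation separately for each student subgroup. The sketch below illustrates that sample-splitting step only; the group labels, column names, and regressor list are assumed rather than taken from the paper.

```python
# Hypothetical sketch of subgroup estimation: fit the composition-effects
# equation separately for ESB, NESB international, and NESB domestic students.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("marks_panel.csv").dropna()

spec = ("mark ~ pct_nesb_intl_tutorial + pct_nesb_intl_course"
        " + pct_nesb_dom_tutorial + pct_nesb_dom_course"
        " + female + age + new_student + gpa_other_courses")

for group, sub in df.groupby("student_group"):  # e.g. 'ESB', 'NESB_intl', 'NESB_dom'
    res = smf.ols(spec, data=sub).fit(
        cov_type="cluster", cov_kwds={"groups": sub["student_id"]}
    )
    print(group, res.params.round(3).to_dict())
```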
