Accepted manuscript, to appear in International Review of Economics Education (2014). doi:10.1016/j.iree.2014.05.002
Non-Response Bias in Student Evaluations of Teaching

Clifford Nowell*
Department of Economics
Weber State University
Ogden, UT 84408
[email protected]

Lewis R. Gale
Eberhardt School of Business
University of the Pacific
Stockton, CA 95211
[email protected]

and

Joe Kerkvliet
Department of Applied Economics
Oregon State University
Corvallis, OR 97331
[email protected]

*Corresponding author

Abstract: For as long as colleges and universities have been conducting student evaluations of teaching (SET), faculty have questioned the validity of the information collected. Substantial effort has been expended to improve the SET process, and researchers have communicated a consistent understanding of why students evaluate teachers as they do. Most of these conclusions have been based on an analysis of SET data gathered at the end of the semester from a sample of students who may not represent all students enrolled in the class. This clearly creates the potential for a sample selection bias that puts into question much of what we have learned about why students evaluate their instructors as they do.

In this paper, we gather a unique data set from individual students who provided either online SET forms or in-class SET forms. We use these responses to address concerns about the validity of SET data that arise from the low response rates often associated with online SET evaluations. The Heckman (1979) sample selection bias correction estimator is used to quantify the bias resulting from the low online response rate; we find this bias to be negligibly small.

Keywords: Student evaluation of teaching; Sample selection; Non-response bias

JEL Codes: A2, C5

1. Introduction

Obtaining a complete understanding of how students evaluate their instructors has been a difficult task for educators and university administrators. This difficulty stems both from a lack of student evaluation data and from the challenge of fully understanding what the available data actually measure. The problem is compounded by the fact that data are gathered via different methods and through different instruments. The result is that many faculty have little confidence in the efficacy of student evaluations and view the process with suspicion (Shao et al., 2007). The recent move towards conducting student evaluations of teaching (SET) online has likely exacerbated the situation, as the resultant reduction in response rates (Avery et al., 2006) has increased the unease of faculty (Sax et al., 2003), who believe the sample of students who complete evaluations may not be representative of all students in class.

Administering SET surveys online does present unique opportunities for research into why students rate their instructors as they do. Linking the SET to individual students removes many of the data limitations imposed when evaluations are completed anonymously. Whereas most studies of the determinants of SET ratings are done using class-level data, the recent explosion of online evaluations allows the determinants of SET ratings to be analyzed with individual data. One disadvantage for researchers using these data is that students are typically required to provide an identification number to access the SET form.

In this paper, we gather a unique data set from individual students who provided either online SET forms or in-class SET forms. We use these responses to address concerns about the validity of SET data that arise from the low response rates often associated with online SET evaluations. The Heckman (1979) sample selection bias correction estimator is used to quantify the bias resulting from the low online response rate; we find this bias to be negligibly small.

This issue of sample selection is important because faculty often fear that students who complete evaluations are not representative of all students in class, and thus tend to discount the information provided by SET data (Sax et al., 2003). Past research on sample selection (Becker and Powers, 2001) has attempted to find ways to correct for missing student data, yet this line of research has not addressed whether correcting for a non-random sample is actually necessary.

Our paper has six sections. Immediately following this introduction we present a brief description of the problem of sample selection. In Section 3, we present the empirical model used to jointly analyze students' decisions on whether to complete the SET questionnaire and how to rate the instructor if they do complete it. In Section 4, we estimate the model and discuss the results. In Section 5 we test for sample selection, and in Section 6 we present our conclusions.

2. The Theory of Sample Selection in SET Data

Nonrandom samples are common in many types of research, and are present in virtually all investigations of SET data. A sample selection bias occurs in SET data if unobserved factors that influence the SET rating are correlated with the probability of an individual completing the SET survey. If this is the case, coefficient estimates from ordinary least squares (OLS) regressions are biased, and estimating a sample selection model will yield a more accurate understanding of why students view their instructors as they do.

The problem of sample selection in SET data can be summarized by recognizing that the regression equation typically used to analyze SET responses is written as

$SET_i = X_i\beta + \varepsilon_i$,

where $SET_i$ represents the numerical evaluation of student $i$, $X_i$ represents the explanatory variables used to explain a student's SET rating, $\beta$ represents the parameters to be estimated, and $\varepsilon_i$ represents the error term.

Potential biases stem from the fact that we only observe SET responses in cases where students actually complete the SET. Completion occurs when the expected net benefits from completing the evaluation are positive. These net benefits, $COMPLETE_i^*$, will vary for each student such that

$COMPLETE_i^* = Z_i\gamma + u_i$,

where $Z_i$ represents the determinants of these net benefits and $u_i$ represents the errors. Although these net benefits are not observed, if $COMPLETE_i^* > 0$ the student will complete the evaluation and we are able to observe $SET_i$. In cases where $COMPLETE_i^* > 0$ we define $COMPLETE_i = 1$, and when $COMPLETE_i^* < 0$, $COMPLETE_i = 0$.

When estimating $SET_i = X_i\beta + \varepsilon_i$ on the observed responses, bias is present because, conditional on $COMPLETE_i^* > 0$, the expected value of the error term in the SET equation is

$E[\varepsilon_i \mid COMPLETE_i^* > 0] = E[\varepsilon_i \mid u_i > -Z_i\gamma] \neq 0$.

This occurs when $\varepsilon_i$ is correlated with $u_i$. If $u_i$ is correlated with either $X_i$ or $\varepsilon_i$, OLS regression on the observed responses will yield biased results so long as $Z_i$ is not a subset of $X_i$. That is, a sample selection bias may occur if the errors in the SET equation are related to the errors in the COMPLETE equation and the explanatory variables in the COMPLETE equation are not a subset of the explanatory variables in the SET equation.
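The mechanics of this bias can be illustrated with a short simulation. The sketch below is purely illustrative: the sample size, coefficient values, and error correlation are assumptions for exposition, not estimates from our data. When $\varepsilon_i$ and $u_i$ are correlated and the selection index depends on a regressor of interest, OLS on the observed subsample does not recover the true slope.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)  # a regressor in the SET equation (e.g., grade)
z = rng.normal(size=n)  # a variable that shifts completion only

# Errors of the SET and COMPLETE equations, correlated at 0.8.
eps, u = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=n).T

beta = 1.0
set_rating = 2.0 + beta * x + eps  # latent SET rating of every student
completed = (0.5 * x + z + u) > 0  # rating is observed only when True

# OLS slope on the selected subsample: cov(x, SET) / var(x).
xs, ys = x[completed], set_rating[completed]
ols_slope = np.cov(xs, ys, ddof=1)[0, 1] / np.var(xs, ddof=1)
print(f"true slope: {beta:.2f}, OLS slope on completers: {ols_slope:.2f}")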

The theory of sample selection was brought to the attention of economists by Heckman (1979), who viewed sample selection as a specification error. Heckman (1979) developed a two-step procedure, commonly used to correct for selection bias, which we use in this paper. In theory, estimating a sample selection model to test whether the parameters estimated in the standard OLS model are biased is simple; in practice, it is extremely difficult to implement. There must exist an instrumental variable in $Z_i$ that has substantial explanatory power in predicting whether a student completes the evaluation while at the same time having very little power in predicting how a student actually rates the instructor on the SET form. Without an adequate instrument, or if the equation predicting whether a student completes the SET survey is mis-specified, the resulting OLS regression predicting how students rate their instructors is unreliable and potentially less accurate than the uncorrected OLS estimates (Shadish et al., 2002). If an adequate instrument is present, efficiency can be gained by using a sample selection model that recognizes the link between the two equations.

One possible instrumental variable that may be used to test for a selection bias in SET ratings is how the SET form is administered. Past research has shown that the percentage of students who complete online evaluations is significantly lower than the percentage who complete evaluations conducted in class (Nulty, 2008). Recent studies (Nulty, 2008; Nowell et al., 2010) report that response rates for evaluations conducted online are typically below 50%, while response rates for in-class surveys typically exceed 75%.

Because research strongly supports the notion that the method of evaluation has little or no impact on how students rate their instructors (Dommeyer et al., 2004; Nowell, 2007) but is strongly correlated with whether a student completes an evaluation (Anderson et al., 2006), the method of evaluation may serve as the basis for an instrumental variable that can be used to estimate a sample selection model.

3. Data and Estimation

In order to test for sample selection, we asked students enrolled in courses at a large public university in the USA about their classroom experiences. All students were enrolled in the business school, taking classes in economics, finance, and quantitative analysis. The survey was conducted in 45 separate courses, taught by 18 different faculty members, in lower-division classes taken by freshmen and sophomores and in upper-division classes taken by juniors and seniors. Enrollment in these classes totaled 1242 students. The process of administering the SET form was identical for all classes taught in the business school. To conduct the evaluation, a faculty member went to each class and read a set of instructions explaining the importance of the evaluation process. After this information was conveyed, students in approximately half of the classes were told that their evaluation would be done online and were given a URL which linked to the evaluation.

Students who completed the evaluation online were required to provide their identification number to access the evaluation form, which is standard procedure for all online evaluations where the experiment was conducted. Students were told that identification numbers would not be reported to individual departments or to their instructor. Faculty teaching the courses were not given access to any information that could identify an individual student. Based on the student identification number we were able to link SET responses to demographic information such as the student's grade in class, major, and GPA. When evaluations were completed online, each student was sent three e-mail reminders explaining how to complete the evaluation. These reminder messages also contained a link to the evaluation itself.

When evaluations were conducted in-class, identical information was given on the importance of the evaluation process, and students were immediately given a paper-and-pencil version of the instructor evaluation form that was identical to the form used in the online evaluations. After students completed the questionnaire, they were asked to put the evaluation form aside. At this point, students were asked if they were willing to participate in a research project regarding how teaching instruction is evaluated. Students were asked to write their student identification number on a separate piece of paper and attach the paper to their instructor evaluation form. In 6.3% of the in-class surveys, students did not provide identification numbers. A two-tailed t-test revealed that the average instructor rating of students who provided their identification number was not significantly different, at the .05 level, from the average instructor rating of the students who did not.
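This comparison is a standard two-sample t-test. As a minimal sketch of the calculation (the arrays below are hypothetical stand-ins, not our data; in the study they would hold the in-class average ratings of students who did and did not supply an identification number):

import numpy as np
from scipy.stats import ttest_ind

# Hypothetical stand-in samples of average instructor ratings (1-7 scale).
with_id = np.array([5.6, 6.0, 4.8, 5.2, 6.4, 5.9, 5.1])
without_id = np.array([5.4, 6.2, 5.0, 5.8])

# Two-tailed test of equal mean ratings for the two groups.
t_stat, p_value = ttest_ind(with_id, without_id)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.3f}")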

Because the timing of when students were asked to provide their identification number differed between the in-class and online surveys, a potential bias is created. Scott and Sechrest (1993) recognize that this is common in surveys conducted in higher education when it is important to match individual responses to academic information such as grades or major field of study. They caution that as anonymity is reduced the potential for dishonest answers increases, and a decrease in response rates may result. This decrease in the response rate may explain some of the selection bias found in SET data.

The instructor evaluation form asked students about five dimensions of instructor quality: organization, willingness to respond to students, availability, respect for students, and the overall contribution of the instructor. Students rated their instructors on a scale of 1 (low) to 7 (high) in each category. Means and standard deviations of the data gathered are reported in Table One.


3.1 Estimation of the Participation Equation

The overall response rate for evaluations conducted online was 30.1%; for evaluations conducted in-class it was 68.7%. We estimate equation (1), predicting COMPLETE with a probit model, and include explanatory variables to account for student, classroom, and teacher effects as well as the potential instrumental variable ONLINE:

$COMPLETE_i^* = \gamma_0 + \gamma_1 GRADE_i + \gamma_2 GPA_i + \gamma_3 QUANT_i + \gamma_4 UPPER_i + \gamma_5 ENROLL_i + \gamma_6 ONLINE_i + \gamma_7 (GRADE_i \times ONLINE_i) + \gamma_8 (ENROLL_i \times ONLINE_i) + \gamma_9 (UPPER_i \times ONLINE_i) + \gamma_{10} (QUANT_i \times ONLINE_i) + \sum_{j=1}^{17} \delta_j T_{ji} + u_i \quad (1)$
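Our estimates were produced in LIMDEP (Greene, 2007); as an illustration, equation (1) can be estimated with any probit routine. A minimal sketch using Python's statsmodels, assuming a data frame with one row per enrolled student and columns named as in Table One (the file name and the T1 through T17 dummy columns are assumptions):

import pandas as pd
import statsmodels.formula.api as smf

# One row per enrolled student; COMPLETE, GRADE, GPA, QUANT, UPPER,
# ENROLL, ONLINE, and teacher dummies T1..T17 named as in Table One.
df = pd.read_csv("set_survey.csv")  # hypothetical file name

teachers = " + ".join(f"T{j}" for j in range(1, 18))
rhs = ("GRADE + GPA + QUANT + UPPER + ENROLL + ONLINE"
       " + ONLINE:GRADE + ONLINE:ENROLL + ONLINE:UPPER + ONLINE:QUANT"
       f" + {teachers}")

# Probit for the participation decision, equation (1).
participation = smf.probit(f"COMPLETE ~ {rhs}", data=df).fit()
print(participation.summary())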

To measure student effects, we use the grade the student earned in the class (GRADE), which we expect to be positively correlated with completing SET surveys. We also include a student's grade point average (GPA), which may be an indicator of student commitment. A priori, we believe both of these variables measure a student's interest in the subject, and we expect them to be positively correlated with completing the SET form.

an

We also collected data on whether the class was a required quantitative analysis class

te

d

M

(QUANT = 1), or was in finance or economics (QUANT = 0). We include a dummy variable to

Ac ce p

control for whether the class was taught at the junior/senior level (UPPER = 1) as opposed to

the freshman/sophomore level (UPPER = 0), and a variable to reflect the number of students

11

Page 12 of 34

ip t

enrolled in the course (ENROLL). We hypothesize that in classes with fewer students enrolled,

cr

and in classes taught at the junior/senior level, students will be more engaged and therefore

us

more likely to complete the SET survey. Quantitative Analysis classes are generally viewed as more difficult, and this may increase attendance in these classes, or it may result in a greater

an

number of students leaving the class before the end of the semester.

Ac ce p

te

d

M

We also include a variable indicating when the evaluation was conducted online (ONLINE

=1), and four interaction terms, GRADE*ONLINE, ENROLL*ONLINE, UPPER*ONLINE, and

12

Page 13 of 34

ip t

QUANT*ONLINE to test for possible interaction effects between class specific variables and

cr

whether the SET evaluation was conducted online.

Because prior research has found differences in SET completion rates based on gender and ethnicity (Sax et al., 2003), we examined the impact of these demographic variables on completion. Students voluntarily report ethnicity data when they are admitted to the university. In our sample, 67.6% of respondents indicated they were white, 9.8% indicated they were non-white, and 22.6% did not provide information on their ethnicity. Perhaps because of the large amount of missing data, differences across the ethnic groups were not significant. In our sample, 64.3% of respondents were male and 35.7% were female, a ratio not significantly different from that of all students in the School of Business where the survey was conducted. Gender was not a significant predictor of whether a person completed the SET or of how the instructor was rated. Additional demographic information, such as age and family background, was not gathered in the survey. If these omitted demographic variables are significantly related to completion of the SET form, or to how the SET form is filled out, a specification error may result.

We include dummy variables for each of the teachers whose classes were included in the study, using seventeen dummy variables (T1, T2, T3, ..., T17) to account for the eighteen different teachers. By using dummy variables in this manner we are able to avoid the difficult problem of trying to measure teacher quality, an obvious determinant both of whether a student is present in class on the day the evaluation is conducted and of the level of student satisfaction. The drawback of this dummy-variable approach is that it provides no information on which teacher characteristics yield greater participation in SET surveys.

3.2 Estimation of the SET Equation

SET ratings are known to vary systematically with grades, class size, subject matter, and faculty performance. In order to understand how students evaluate their instructors we follow the practice of most researchers (McPherson, 2006; Nowell, 2007; Weinberg et al., 2009) and use regression analysis to relate the SET rating to a wide set of explanatory variables. We estimate

$SET_i = \beta_0 + \beta_1 GRADE_i + \beta_2 GPA_i + \beta_3 QUANT_i + \beta_4 UPPER_i + \beta_5 ENROLL_i + \beta_6 ONLINE_i + \beta_7 (GRADE_i \times ONLINE_i) + \beta_8 (ENROLL_i \times ONLINE_i) + \beta_9 (UPPER_i \times ONLINE_i) + \beta_{10} (QUANT_i \times ONLINE_i) + \sum_{j=1}^{17} \theta_j T_{ji} + \varepsilon_i \quad (2)$

The dependent variable in equation (2), $SET_i$, is the average of student responses to the five questions relating to teacher quality: organization, willingness to respond to students, availability, respect for students, and the overall contribution of the instructor. We use the same explanatory variables to estimate SET ratings as we did to estimate response rates. Because the dependent variable is the average of five questions, we use ordinary least squares regression to analyze the data.
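Continuing the sketch from Section 3.1 (same assumed data frame df and right-hand side rhs), equation (2) is then an OLS regression on the subsample of students who completed the form:

import statsmodels.formula.api as smf

# SET is observed only for students who completed an evaluation.
responders = df[df["COMPLETE"] == 1]
set_ols = smf.ols(f"SET ~ {rhs}", data=responders).fit()
print(set_ols.summary())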

4. Results

Table Two presents the results from estimating both the participation and the SET equations. Consistent with Avery et al. (2006), we find that when evaluations are administered online, fewer students complete the evaluation. We also find that students who earn higher grades are more likely to complete evaluations. Student grade point average and class size are not significant determinants of whether students complete evaluations, regardless of whether the evaluation is conducted online or in-class.

The estimated coefficient on the interaction term GRADE*ONLINE is negative and significant, indicating that when evaluations are conducted online, the influence of higher grades on the probability of SET completion is noticeably smaller than when the evaluation is conducted in-class, although, on balance, students with higher grades are still more likely to complete evaluations in both settings. Because the coefficient on UPPER*ONLINE is positive and significant, it also appears that students in upper-division classes that are evaluated online are more likely to complete the evaluation than students in lower-division classes that are evaluated online.

We confirm that the variable ONLINE has significant power in explaining COMPLETE by conducting a likelihood ratio test of the null hypothesis $H_0\!: \gamma_6 = \gamma_7 = \gamma_8 = \gamma_9 = \gamma_{10} = 0$, that is, that the coefficients on ONLINE and its four interaction terms are jointly zero. The result of this test, shown in Table Two, confirms that the likelihood ratio statistic (LR) is large enough to reject $H_0$ with a p-value of less than .01.

Looking at the estimated results for the SET equation in Table Two, we find that student evaluations of teaching are positively related to grades and negatively related to the class being upper-division. The finding that grades and SET ratings are positively related is common (Denson et al., 2010). The finding that SET ratings are lower in upper-division classes may imply that, all else equal, as students progress in their studies they have higher expectations of faculty. The estimated coefficient on ONLINE is not significant at the 5% level, but the estimated coefficient on the interaction term ONLINE*GRADE is positive and significant. We find that GPA does not influence SET ratings, even though some past studies have found this variable to be influential (Isely and Singh, 2005).

Finally, to complete our analysis of SET ratings, we use a likelihood ratio test to examine the effect of removing the variable ONLINE from equation (2). We test $H_0\!: \beta_6 = \beta_7 = \beta_8 = \beta_9 = \beta_{10} = 0$. As shown in Table Two, the likelihood ratio statistic for this $\chi^2$ test is 8.30. Based on this test statistic we fail to reject $H_0$ at a significance level of .10. From this test we conclude that the predictive power of the equation is not improved when we allow for different effects created by how the evaluation is administered.
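Both likelihood ratio statistics follow the same recipe: twice the gap between the maximized log-likelihoods of the unrestricted model and the model with ONLINE and its four interactions removed, compared against a $\chi^2$ distribution with five degrees of freedom. A sketch, continuing from the earlier snippets (set_ols, responders, and teachers as defined there):

import statsmodels.formula.api as smf
from scipy.stats import chi2

# Restricted SET equation: drop ONLINE and its four interaction terms.
restricted = smf.ols(f"SET ~ GRADE + GPA + QUANT + UPPER + ENROLL + {teachers}",
                     data=responders).fit()

lr_stat = 2 * (set_ols.llf - restricted.llf)  # likelihood ratio statistic
p_value = chi2.sf(lr_stat, df=5)              # five restrictions
print(f"LR = {lr_stat:.2f}, p = {p_value:.3f}")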

The results presented in Table Two show that although conducting evaluations online influences the sample of students who complete the evaluation, it does not significantly influence how the evaluation is filled out. Based on this outcome, we believe ONLINE is a valid instrument to use in a sample selection model that jointly estimates the participation and SET rating equations. This approach provides the opportunity to investigate whether a sample selection problem is likely to exist when students complete SET forms, regardless of whether the form is administered online or in-class.

5. Sample Selection

Estimation of the sample selection model is done with the econometric software program LIMDEP (Greene, 2007). The Heckman procedure starts by estimating the probit model given by equation (1) and saving the inverse Mills ratio, $\lambda_i$. The inverse Mills ratio is given by

$\lambda_i = \frac{\phi(Z_i\hat{\gamma})}{\Phi(Z_i\hat{\gamma})}$,

where $\phi(Z_i\hat{\gamma})$ represents the standard normal probability density function evaluated at each observation $i$, and $\Phi(Z_i\hat{\gamma})$ represents the value of the cumulative normal distribution function evaluated at observation $i$.

Using this result, the SET equation estimated in the Heckman procedure is given by

$SET_i = X_i\beta + \beta_\lambda \lambda_i + v_i$,

where $\lambda_i$ is the inverse Mills ratio. The inverse Mills ratio, calculated from the estimated probabilities in equation (1), accounts for the potential sample selection bias created through the truncation of $COMPLETE_i^*$, and is included to reflect the fact that the data we observe are not randomly drawn from the population of students in which we are interested. Rather, the data are drawn only from students who complete an evaluation. If the estimated coefficient $\beta_\lambda$ associated with the inverse Mills ratio is significant, we can conclude that a selection bias is present when the SET equation is estimated through traditional ordinary least squares regression.
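Our estimates come from LIMDEP's implementation; the logic of the two-step procedure can be sketched as follows, continuing from the earlier snippets. Note that a full implementation would also adjust the second-step standard errors for the estimated regressor, which this sketch omits.

import statsmodels.formula.api as smf
from scipy.stats import norm

# Step 1: linear index Z*gamma-hat from the participation probit, equation (1).
# For statsmodels discrete models, fittedvalues is the linear predictor.
index = participation.fittedvalues

# Inverse Mills ratio lambda_i = phi(index) / Phi(index) for every student.
df["mills"] = norm.pdf(index) / norm.cdf(index)

# Step 2: OLS of SET on X plus the Mills ratio, for completers only.
# ONLINE and its interactions are excluded here; they identify the model
# through the participation equation.
responders = df[df["COMPLETE"] == 1]
heckman = smf.ols(
    f"SET ~ GRADE + GPA + QUANT + UPPER + ENROLL + {teachers} + mills",
    data=responders,
).fit()
print(heckman.params["mills"])  # coefficient on the inverse Mills ratio (-0.20 in Table Three)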

Because the first step of the Heckman procedure reproduces the results from the probit equation shown in Table Two, we focus our discussion on the results from estimating the SET equation, which are shown in Table Three. The results from estimating the SET equation with OLS and with the Heckman procedure are similar. In both cases grades are positively related to higher student evaluations, and upper-division courses are negatively related to SET ratings. The estimated coefficient on ENROLL is negative in both models, but is significant only in the sample selection model.

Looking at the teacher dummy variables, we see that in the OLS model the estimated coefficients on T14, T15, and T17 are insignificant, indicating that, all else equal, the evaluations given to these teachers are no different from those given to the omitted teacher. All other teacher dummy variables are positive and significant. The sample selection model produces similar results except for the coefficient on T3, which is estimated to be positive and significant in the OLS estimation and positive but insignificant in the sample selection model. Because the estimated coefficients on the teacher dummy variables can be used to assess the relative performance of faculty (Nowell et al., 2010), the similarity of the estimates in the two models indicates that both models would result in a similar assessment of relative faculty performance.

The value of the estimated coefficient on $\lambda_i$ equals -0.20, and it contributes little to the explanatory power of the equation. This indicates that sample selection is likely not a significant issue, and it suggests that the bias resulting from the low response rate found in the student evaluation process is likely to be unimportant.

6. Conclusions

In this paper, we replicate findings which demonstrate that when student evaluations of teaching are administered online, fewer students complete evaluations. Further, we show that students who complete evaluations online systematically differ from those who complete evaluations conducted in class. Our findings indicate that even though the group of people completing evaluations online differs from the group completing evaluations in-class, the determinants of their instructor ratings do not differ.

Based on these two facts, we are able to test whether a bias is present when SET data are gathered from the subset of students who complete the SET form at the end of the semester. We find no evidence that such a bias exists. The determinants of how students rate their professors should not be questioned simply because many students are absent and do not complete the SET questionnaire. This paper is important because we are able to show that SET data collected from a non-random sample of students at the end of the semester do appear to be representative of the class as a whole.

References

1. Anderson, J., Brown, G. and Spaeth, S. (2006) Online student evaluations and response rates reconsidered, Innovate, 2(6). Available online at http://www.innovateonline.info/index.php?view=article&id=301.

2. Avery, R. J., Bryant, W. K., Mathios, A., Kang, H. and Bell, D. (2006) Electronic course evaluations: does an online delivery system influence student evaluations?, Journal of Economic Education, 37(1), 21-37.

3. Becker, W. E. and Powers, J. (2001) Student performance, attrition, and class size given missing student data, Economics of Education Review, 20, 377-388.

4. Denson, N., Loveday, T. and Dalton, H. (2010) Student evaluation of courses: what predicts satisfaction?, Higher Education Research & Development, 29(4), 339-356.

5. Dommeyer, C. J., Baum, P., Hanna, R. W. and Chapman, K. S. (2004) Gathering faculty teaching evaluations by in-class and online surveys: their effects on response rates and evaluations, Assessment & Evaluation in Higher Education, 29(5), 611-623.

6. Greene, W. (2007) LIMDEP Version 9.0, Plainview, NY: Econometric Software, Inc.

7. Heckman, J. J. (1979) Sample selection bias as a specification error, Econometrica, 47(1), 153-162.

8. Isely, P. and Singh, H. (2005) Do higher grades lead to favorable student evaluations?, Journal of Economic Education, 36(2), 29-42.

9. McPherson, M. A. (2006) Determinants of how students evaluate teachers, Journal of Economic Education, 37(1), 3-20.

10. Nowell, C. (2007) The impact of relative grade expectations on student evaluation of teaching, International Review of Economics Education, 6(2), 42-56.

11. Nowell, C., Gale, L. R. and Handley, B. (2010) Assessing faculty performance using student evaluations of teaching in an uncontrolled setting, Assessment & Evaluation in Higher Education, 35(4), 463-476.

12. Nulty, D. D. (2008) The adequacy of response rates to online and paper surveys: what can be done?, Assessment & Evaluation in Higher Education, 33(3), 301-313.

13. Sax, L. J., Gilmartin, S. K. and Bryant, A. N. (2003) Assessing response rates and nonresponse bias in web and paper surveys, Research in Higher Education, 44(4), 409-432.

14. Scott, A. G. and Sechrest, L. (1993) Survey research and response bias, Proceedings of the Survey Research Methods Section, American Statistical Association, 1, 238-243.

15. Shadish, W. R., Cook, T. D. and Campbell, D. T. (2002) Experimental and Quasi-Experimental Designs for Generalized Causal Inference, Boston, MA: Houghton Mifflin.

16. Shao, L. P., Anderson, P. A. and Newsome, M. (2007) Evaluating teaching effectiveness: where we are and where we should be, Assessment & Evaluation in Higher Education, 32(3), 355-371.

17. Weinberg, B. A., Hashimoto, M. and Fleisher, B. M. (2009) Evaluating teaching in higher education, Journal of Economic Education, 40(3), 227-261.

Table One: Descriptive Statistics

Variable    Description                                              Mean     Standard Deviation
SET*        Average SET score (1 = low; 7 = high)                    5.53     1.30
COMPLETE    1 if completed SET; 0 otherwise                          .466     .499
GRADE       Actual class grade                                       2.38     1.33
ONLINE      1 if SET administered online; 0 otherwise                .547     .497
QUANT       1 if the class subject was quantitative analysis;        .318     .460
            0 otherwise
ENROLL      The number of students enrolled in class                 35.0     13.2
PRESENT     The number of students in class that complete the SET    16.7     10.77
UPPER       1 if the class was taught at the upper division;         .309     .462
            0 otherwise
GPA         Student GPA                                              2.87     .79
T1-T17      1 if teacher j taught the class; 0 otherwise             (means range from .01 to .10)

* Data available only for students who complete the SET.

Table Two: Estimation Results of the Individual Equations

                    COMPLETE (probit)            SET (OLS)
Variable            Coefficient   z-value        Coefficient   t-value
Constant              -0.55        -1.34            4.08         9.91**
ONLINE                -0.89        -2.02*          -0.89        -1.67
GRADE                  0.59        10.28**          0.27         4.54**
ENROLL                 0.002        0.30           -0.007       -1.06
UPPER                  0.11         0.55           -0.53        -2.64**
QUANT                  0.16        -0.65           -0.16        -0.62
GPA                    0.26         1.06           -0.20        -0.32
ONLINE*GRADE          -0.25        -3.68**          0.20         2.16*
ONLINE*ENROLL          0.01         0.89            0.002        0.11
ONLINE*UPPER           0.62         2.34*           0.44         1.52
ONLINE*QUANT           0.04         0.16            0.04         0.16
T1                    -0.29        -1.18            1.70         6.12**
T2                    -0.77        -2.99**          1.82         6.94**
T3                    -0.62         0.02            1.12         2.08*
T4                    -1.04        -2.22*           3.02         2.79**
T5                    -0.18        -0.59            1.07         3.18**
T6                    -0.64        -2.56**          1.86         6.06**
T7                    -0.04        -0.14            1.28         3.57**
T8                    -0.66        -3.16**          2.14        10.39**
T9                    -0.42        -1.76            1.37         5.48**
T10                   -0.59        -2.21*           1.26         5.04**
T11                   -0.42        -1.46            1.47         4.85**
T12                   -0.46        -2.20*           1.73         7.46**
T13                   -0.40        -1.59            1.27         4.64**
T14                   -0.65        -1.49            1.29         1.94
T15                   -0.71        -2.77**         -0.34        -1.13
T16                   -0.39        -1.21            1.37         3.09**
T17                   -0.39        -1.49           -0.03        -0.12
Log-likelihood      -643.47                      -826.92
LR statistic         103.9**                        8.30
Sample size         1242                          577

* Significant at α < .05; ** Significant at α < .01. The LR statistic tests the joint null hypothesis that the coefficients on ONLINE and its four interaction terms equal zero.


Table Three: Sample Selection (Heckman) Estimates of the SET Equation

Variable                    Coefficient   t-value
Constant                       4.25         9.23**
GRADE                          0.29         3.94**
ENROLL                        -0.01        -2.59**
UPPER                         -0.45        -2.63**
QUANT                         -0.02        -0.10
GPA                           -0.006       -0.09
T1                             1.62         7.56**
T2                             1.84         5.18**
T3                             0.94         1.24
T4                             2.80         2.91**
T5                             0.95         4.60**
T6                             2.02         6.22**
T7                             1.08         4.19**
T8                             2.23         9.85**
T9                             1.45         5.80**
T10                            1.43         6.36**
T11                            1.27         5.04**
T12                            1.76         6.43**
T13                            1.29         5.35**
T14                            1.19         0.52
T15                            0.41         1.35
T16                            1.26         3.53**
T17
λ (inverse Mills ratio)       -0.20        -1.09
Log-likelihood              -829.06
LR statistic                 294.44**
Sample size                  577

* Significant at α < .05; ** Significant at α < .01. ONLINE and its interaction terms are excluded from the second-step SET equation; they enter only the participation equation.
