Internet and Higher Education 20 (2014) 35–50
Interaction, Internet self-efficacy, and self-regulated learning as predictors of student satisfaction in online education courses

Yu-Chun Kuo a,⁎, Andrew E. Walker b, Kerstin E.E. Schroder c, Brian R. Belland b

a School of Lifelong Learning, Jackson State University, 3825 Ridgewood Rd., Jackson, MS 39211, United States
b Instructional Technology & Learning Sciences, Utah State University, 2830 Old Main Hill, Logan, UT 84322, United States
c Department of Health Behavior, School of Public Health, University of Alabama at Birmingham, 1665 University Blvd., 227M RPHB, Birmingham, AL 35293, United States
Article history: Accepted 1 October 2013; available online 10 October 2013.

Keywords: Interactions; Internet self-efficacy; Self-regulated learning; Satisfaction; Online education; Hierarchical linear modeling (HLM)
Abstract

Student satisfaction is important in the evaluation of distance education courses, as it is related to the quality of online programs and to student performance. Interaction is a critical indicator of student satisfaction; however, its impact has not been tested in the context of other critical student- and class-level predictors. In this study, we tested a regression model for student satisfaction involving student characteristics (three types of interaction, Internet self-efficacy, and self-regulated learning) and class-level predictors (course category and academic program). Data were collected from a sample of 221 graduate and undergraduate students who responded to an online survey. The regression model was tested using hierarchical linear modeling (HLM). Learner–instructor interaction and learner–content interaction were significant predictors of student satisfaction, but learner–learner interaction was not. Learner–content interaction was the strongest predictor. Academic program moderated the effect of learner–content interaction on student satisfaction: the effect was stronger in Instructional Technology and Learning Sciences than in psychology; physical education; or family, consumer, and human development. In sum, the results suggest that improvements in learner–content interaction hold the most promise for enhancing student satisfaction, and that learner–learner interaction may be negligible in online course settings. Published by Elsevier Inc.
1. Introduction

According to the 2010 Sloan Survey of online learning, approximately 30% of university and college students take at least one course online (Allen & Seaman, 2010). Most studies of online education have found no significant differences in learning outcomes compared to traditional, classroom-based education (e.g., Allen, Bourhis, Burrell, & Mabry, 2002; Biner, Bink, Huffman, & Dean, 1997; Brown & Liedholm, 2002; Johnson, Aragon, Shaik, & Palma-Rivas, 2000). However, online courses differ considerably from traditional instruction in the way students interact with the instructor, their fellow students, and the content. In fully online settings, interaction would be very limited without the use of appropriate technologies. Limited interaction may in turn decrease students' course satisfaction and affect their performance (Chang & Smith, 2008; Noel-Levitz, 2011). Learners with high levels of interaction with the teacher and other learners are more engaged in online learning (Veletsianos, 2010). In contrast to traditional learning environments, online learning requires learners to be confident in performing Internet-related actions
and be willing and able to self-manage their learning process (Sun & Rueda, 2012; Tsai, Chuang, Liang, & Tsai, 2011). Learners with low confidence in using the Internet may be less engaged in learning activities and have fewer opportunities to interact with the instructor or classmates, leading to dissatisfaction with online learning (Liang & Tsai, 2008; Tsai et al., 2011). Moreover, online learning affords learners more freedom in how they participate in the learning process and interact with classmates. Therefore, their ability to regulate and monitor their own learning progress is critical. Learners who cannot regulate their learning process efficiently may experience dissatisfaction that leads to less engagement during online courses (Sun & Rueda, 2012).
2. The importance of student satisfaction

Studies examining cognitive learning outcomes (e.g., effectiveness of distance courses and student achievement) are common in distance education (Barnard, Paton, & Lan, 2008; Edvardsson & Oskarsson, 2008; Offir, Bezalel, & Barth, 2007; Wadsworth, Husman, Duggan, & Pennington, 2007). However, affective aspects such as student attitudes are equally important. In the late 1990s, Biner, Welsh, Barone, Summers, and Dean (1997) contended that of the attitudinal constructs, student satisfaction is worthy of investigation because it is critical to
academic achievement. More recently, Palmer and Koenig-Lewis (2012) also called for the study of affective variables in technology-enhanced environments. Chang and Smith (2008) and Noel-Levitz (2011) indicated that post-secondary students who are satisfied are more likely to be successful. Student satisfaction, which reflects how positively students perceive their learning experiences, is an important indicator of program- and student-related outcomes (Biner et al., 1997; Liao & Hsieh, 2011). For example, in program evaluation, student satisfaction is associated with program quality, student retention, and student success (Debourgh, 1999; Koseke & Koseke, 1991). High student satisfaction can lead to lower drop-out rates, higher persistence, and greater commitment to the program (Ali & Ahmad, 2011; Allen & Seaman, 2003; Debourgh, 1999; Koseke & Koseke, 1991; Noel-Levitz, 2011; Reinhart & Schneider, 2001; Yukselturk & Yildirim, 2008). Considering these potential benefits, student satisfaction should be studied to increase retention and recruitment of future students. In addition, studying student satisfaction enables institutions to target areas for improvement and facilitates strategic planning specific to online learners (Noel-Levitz, 2011). In this study, we investigate factors impacting student satisfaction while taking into account course differences.

3. Factors contributing to student satisfaction

Interaction has been consistently identified as an important predictor of student satisfaction (Ali & Ahmad, 2011; Bolliger & Martindale, 2004; Bray, Aoki, & Dlugosh, 2008; Dennen, Darabi, & Smith, 2007; Lee, 2012; Sahin, 2007; Yukselturk & Yildirim, 2008). The framework of this study is based on Moore and Kearsley's (1996) three types of interaction, with the addition of Internet self-efficacy, self-regulated learning, and additional factors (i.e.,
course category and program) that impact student satisfaction in online learning (Artino, 2007; Chu & Chu, 2010; Chu & Tsai, 2009; Peterson, 2011; Puzziferro, 2006; Rodriguez Robles, 2006).

3.1. Interaction

Interaction is important in all forms of education, regardless of whether technology is involved (Moore & Kearsley, 1996). Traditionally, interaction focused on classroom-based communication between the instructor and students (Anderson, 2003). The attributes and resources of the Internet and the World Wide Web (WWW) expand the capacity of online learning. One unique feature of online learning is its capacity to support interactive group processes (Jain, 2011). Interaction allows learners to link pre-existing knowledge with new information and make new meaning through analysis or integration (Juwah, 2006). The effective use of technology with proper pedagogy enhances the interactive process between students and instructors or content in online learning (Jain, 2011). Interaction is related to the quality of online learning (Han & Johnson, 2012), online collaborative learning (Kim & Lee, 2012; Rosmalen et al., 2008), low attrition (Juwah, 2006), and the effectiveness of online learning (Lee, 2012; Nandi, Hamilton, & Harland, 2012). Transactional distance theory provides a framework for describing interaction (Moore, 1989). Moving beyond physical separation alone, Moore (1989) conceived of distance as a pedagogical phenomenon that involves the procedures taken by teachers, learners, and organizations to overcome geographic distance. The concept of transaction was first proposed by Dewey (1916); it takes into account the interplay among environments, individuals, and behaviors. Transactional distance exists in any educational event, in face-to-face as well as distance environments: if there is a learner, a teacher, and a communication channel, then some transactional distance exists.
The most prominent framework of interaction in distance education includes three major aspects: learner–instructor interaction, learner–learner interaction, and learner–content interaction (Moore, 1989).
Expanding on Moore's model, other forms of interaction in online learning have been proposed, such as learner–interface interaction (Gunawardena, Lowe, & Anderson, 1997), learner–tutor interaction (Juwah, 2006), learner–designer interaction (Juwah, 2006), learner–task interaction (Herrington, Reeves, & Oliver, 2006), learner–tool interaction (Hirumi, 2011), and vicarious interaction (Sutton, 2001). Although a wide range of interactions has been proposed, this study focuses on Moore's three types. Learner–learner and learner–instructor interactions are forms of learner–human interaction, while learner–content interaction is a form of learner–non-human interaction (Hirumi, 2011). Learner–instructor interaction refers to two-way communication between the instructor of the course and learners (Moore & Kearsley, 1996). It can take many forms, such as guidance, support, evaluation, and encouragement (Moore, 1989). Learner–learner interaction involves two-way reciprocal communication among learners, with or without the presence of an instructor. By interacting with fellow students, students can exchange ideas with and get feedback from each other (Anderson, 2003; Moore, 1989). Student interest and motivation can be enhanced through peer interaction using asynchronous or synchronous tools (Moore, 1989). Engaging in peer interaction pushes students to construct ideas more deeply and can increase achievement (Anderson, 2003). Learner–content interaction refers to a one-way process of elaborating and reflecting on the subject matter or course content (Moore, 1989). Interaction of learners with content initiates an internal didactic conversation, which happens when learners talk or think to themselves about the information, knowledge, or ideas gained as part of a course experience.
Through this internal conversation, learners cognitively elaborate, organize, and reflect on the new knowledge they have obtained, integrating it with previous knowledge (Moore, 1989; Moore & Kearsley, 1996). Various forms of interaction have been recognized as important factors in promoting student satisfaction within online learning environments (Bernard et al., 2009; Bray et al., 2008; Burnett, 2001; Eom, 2009; Juwah, 2006; Moore & Kearsley, 1996; Northrup, Lee, & Burgess, 2002; Thurmond & Wambach, 2004), although some disagreements persist. Sher (2004) proposed that learner–instructor interaction and learner–learner interaction are significant contributors to satisfaction. Yukselturk and Yildirim (2008), in a study with online learners from Turkey, indicated that learner interaction with peers decreased throughout the learning process but learner interaction with instructors remained the same. Some research indicated that learner–instructor interaction is the best predictor of course satisfaction (Battalio, 2007; Bolliger & Martindale, 2004; Thurmond, 2003). Thurmond (2003) found learner–instructor interaction to be the most significant predictor of student satisfaction in a study involving undergraduate and graduate students participating in web-based nursing courses. Similarly, Bolliger and Martindale (2004) found learner–instructor interaction to be the most important factor impacting student satisfaction in a sample of graduate students enrolled in multiple online instructional technology courses at a regional university. Battalio (2007) described learner–instructor interaction as the only required interaction in student learning. Other research on online learning indicated that interaction among learners is more strongly predictive of learner satisfaction than the amount of learner interaction with the instructor (Jung, Choi, Lim, & Leem, 2002; Rodriguez Robles, 2006). For example, Jung et al.
(2002) found that undergraduate students in a collaborative interaction group reported higher satisfaction than students in the other two groups. Rodriguez Robles (2006) reported the same finding among adult learners. Interaction among learners enhanced satisfaction when an interactive course was offered (Lee & Rha, 2009). However, too much required collaboration among learners reduces student satisfaction (Berge, 1999; Bray et al., 2008). It remains unclear whether and under which circumstances these interaction dimensions play a role in predicting student satisfaction.
3.2. Internet self-efficacy
Referring to individuals' beliefs, confidence, and expectations in their ability to accomplish a specific task (Bandura, 1977), self-efficacy has been shown to impact motivation and learning outcomes (Liang & Tsai, 2008; Tsai, 2012). Self-efficacy toward diverse tasks has been found to influence academic achievement (Pajares, 1996; Pintrich & De Groot, 1990; Schunk, 1989) and teaching performance (Kao, Wu, & Tsai, 2011; Tella, 2011). Internet self-efficacy (ISE) refers to self-assessment of the ability to organize and execute Internet-related activities that elicit the desired results (Eastin & LaRose, 2000). With the growth of online education, it is increasingly important to consider Internet self-efficacy as a predictor of success in online education (Liang & Tsai, 2008; Tsai et al., 2011). Students may differ substantially in their Internet experiences and capabilities (Kaminski, Switzer, & Gloeckner, 2009; Zickuhr & Smith, 2011). Learners with low Internet self-efficacy may be less likely to fully engage with online systems or content due to lack of confidence (Livingstone & Helsper, 2010; Shi, Chen, & Tian, 2011). In turn, this can decrease students' intention to continue in online learning. In addition to Internet self-efficacy, researchers have proposed other types of self-efficacy relevant to web-based learning (Hodges, 2008). For example, academic self-efficacy (students' perception of academic learning) and computer self-efficacy (a judgment of one's ability to use a computer) are relevant self-efficacy constructs in web-based learning (Joo, Bong, & Choi, 2000; Torkzadeh, Chang, & Demirhan, 2006). Previous studies have found Internet self-efficacy to influence learner motivation (Alenezi, Karim, & Veloo, 2010; Liang & Wu, 2010), the learning process (Gangadharbatla, 2008; Tsai, 2012), and learning outcomes (Bucy & Tao, 2006; Tsai et al., 2011). For example, learners with high Internet self-efficacy are more likely to have good academic performance (DeTure, 2004; Joo et al., 2000; Thompson, Meriac, & Cope, 2002) and information searching skills (Hong, 2006; Rains, 2008; Tsai & Tsai, 2003), and to show positive attitudes toward Internet learning environments (Liang & Tsai, 2008). In Tsai's (2012) study with graduate students, high Internet self-efficacy was found to facilitate the development of skills. Liang and Tsai (2008) found that learners with high Internet self-efficacy preferred online learning environments that allowed them to use the Internet to explore problems, display various sources of problems, and elaborate knowledge through learning activities. The greater the ease with which learners can perform online tasks, the greater their ability to use Internet applications and participate in collaborative online activities (Gangadharbatla, 2008; Shi et al., 2011). Liang and Wu (2010) indicated that higher Internet self-efficacy led to higher motivation for web-based learning. Positive Internet attitudes and preferences for web-based learning environments can be predicted by Internet self-efficacy (Chu & Tsai, 2009; Joo et al., 2000). In two studies involving adult learners from community colleges and senior centers, Internet self-efficacy mediated the relationship between Internet usage and preferences (Chu & Tsai, 2009), and partially mediated learners' perceived learning and satisfaction (Chu & Chu, 2010). Direct research on the relationship between Internet self-efficacy and satisfaction in web-based learning is not conclusive. For example, Rodriguez Robles (2006) examined a predictive model of satisfaction of online adult learners in which Internet self-efficacy was not a significant predictor, similar to the findings of Puzziferro's (2006) study of online learners from liberal arts disciplines. In contrast, Internet self-efficacy was found to be correlated with and predictive of student satisfaction in a study involving online learners in education (Kuo, Walker, & Schroder, 2010). Given the importance of Internet self-efficacy in web-based learning and the scarcity of research on ISE and affective outcomes, Internet self-efficacy was proposed in this study as an important factor beyond interaction.

3.3. Self-regulated learning

Self-regulated learning is defined as the degree to which students are metacognitively, motivationally, and behaviorally active participants in their own learning (Zimmerman, 1989). Metacognitive processes refer to learners' ability to set up plans, schedules, or goals to monitor or evaluate their learning progress. Motivational processes indicate that learners are self-motivated and willing to take responsibility for their successes or failures (Moller & Huett, 2012). Self-regulated learning behavior includes seeking help from others to optimize learning (Zimmerman & Martinez-Pons, 1986, 1988) as well as the use of metacognitive strategies (e.g., Lee, Kim, & Grabowski, 2010). Self-regulated learning has been recognized as an influential component of academic achievement in traditional classroom learning (Pintrich & De Groot, 1990; Schunk, 2005; Zimmerman & Schunk, 1989). The influence of self-regulation in online learning environments has been demonstrated in recent studies (Barnard, Lan, To, Paton, & Lai, 2009; Nicol, 2009; Paraskeva, Mysirlaki, & Choustoulakis, 2009). Compared to traditional classroom learning, online learning is more student-centered, and students assume more responsibility and autonomy, especially in asynchronous learning environments (Artino, 2008). Online learning's flexibility, demanding nature, and learner-centeredness require students to employ more self-regulatory skills (Artino, 2007; Bothma & Monteith, 2004; Jonassen, Davidson, Collins, Campbell, & Haag, 1995; King, Harner, & Brown, 2000). The more self-regulatory skills learners possess, the more likely they are to be successful in online learning environments (Artino, 2008; Artino & Stephens, 2009; Barnard-Brak, Paton, & Lan, 2010; Hodges & Kim, 2010; Matuga, 2009; Shea & Bidjerano, 2010).
Many studies have focused on the influence of self-regulation on learning outcomes such as academic achievement or performance (Bell, 2006; Hargis, 2000; McManus, 2000; Shih & Gamon, 2001; Yukselturk & Bulut, 2005). However, little research has focused on how self-regulation is related to affective outcomes such as student satisfaction and attitudes (Artino, 2007; Peterson, 2011; Puzziferro, 2008). As one of the two processes in self-reflection, the self-reaction process is linked to self-satisfaction, which involves individuals' perceptions of satisfaction or dissatisfaction after learning, or their reactions to learning experiences regarding performance (Zimmerman, 2000). Consequently, self-regulation may affect student satisfaction. For example, task value and self-efficacy, two components of the motivation construct of self-regulated learning, positively predicted students' overall satisfaction with an online course in the U.S. Navy (Artino, 2007). Rehearsal, elaboration, metacognitive self-regulation, time management, and study environment had significant positive correlations with the level of satisfaction in Puzziferro's (2008) study of community college students enrolled in liberal arts online courses. Peterson (2011) investigated high school students taking online courses in various subjects to recover credits, and found that one of the self-regulatory attributes significantly predicted willingness to take online courses in the future. More research is needed to verify the relationship between self-regulated learning and satisfaction. Self-regulatory skills can be taught before a distance course starts, or by embedding support for skill development within the course (Barnard-Brak et al., 2010; Chang, 2005; Cho, 2004; Dembo, Junge, & Lynch, 2006; Yang & Park, 2012).
Metacognitive strategies in self-regulation are the focus of this study, since metacognitive processes are considered central to self-regulation (Brockett & Hiemstra, 1991; Corno, 1986; Corno & Mandinach, 1983; Lee et al., 2010). For instance, metacognitive self-regulation requires learners to adapt their cognitive strategies to task demands (Ross, Green, Salisbury-Glennon, & Tollefson, 2006). Song and Hill (2009) indicated that the use of metacognitive skills was beneficial to learners in online learning environments with flexible structures.
3.4. Other factors

Course category (i.e., undergraduate vs. graduate) and program (i.e., area of study) may directly or indirectly influence satisfaction in distance education environments (Beqiri, Chase, & Bishka, 2010; Kiriakidis, 2005; Macon, 2011). The literature reflects differing conclusions about the association between course category and satisfaction. Both Beqiri et al. (2010) and Price (1993) found that educational level was correlated with student satisfaction among distance learners. Price's research involved the use of instructional television, videotapes, and audiotapes. Beqiri et al. (2010) found that graduate students were more satisfied with online courses than undergraduate students. Conversely, Rodriguez Robles (2006) found that educational level was not a significant predictor of satisfaction in a study involving online adult learners. Educational level may also serve as a moderator of satisfaction. In a study with online students from several institutions, academic level was found to moderate the relationship between learner interaction and satisfaction (Kiriakidis, 2005): the correlation between interaction and satisfaction was stronger in master's-level courses than in bachelor's-level courses. There is limited research on the direct and indirect effects of subject area on satisfaction in distance learning courses (Macon, 2011). Satisfaction did not differ across majors among business students enrolled in online courses (Beqiri et al., 2010). Macon (2011) included subject area as a moderator variable in a meta-analysis and found that subject area influences student satisfaction in distance education and traditional classroom courses. Previous studies have identified factors related to student satisfaction; however, there is a scarcity of research examining how academic level or subject area influences satisfaction. Hence, this study included both variables and explored their direct and moderating effects on student satisfaction.

4.
The purpose of the study

This study examines a proposed regression model for student satisfaction in fully online learning settings that involves interaction, Internet self-efficacy, and self-regulation. Although studies indicate that interaction is a critical predictor of satisfaction in online learning environments (Battalio, 2007; Bolliger & Martindale, 2004; Bray et al., 2008; Chang & Smith, 2008; Chejlyk, 2006; Keeler, 2006; Rodriguez Robles, 2006; Thurmond, 2003), the literature does not indicate which of the three types of interaction best predicts student satisfaction. Few studies have investigated the relationship between Internet self-efficacy and satisfaction. Some showed that Internet self-efficacy is not significantly correlated with or predictive of satisfaction (Puzziferro, 2006; Rodriguez Robles, 2006), while others revealed a significantly positive correlation between Internet self-efficacy and satisfaction (Chu & Chu, 2010; Chu & Tsai, 2009; Kuo et al., 2010). Only three studies examined the relationship between self-regulated learning and satisfaction in online learning, all of which showed a significantly positive correlation (Artino, 2007; Peterson, 2011; Puzziferro, 2008). Given the small number of studies and the important roles of these variables, replication is needed to assess the relationships between Internet self-efficacy, self-regulation, and student satisfaction in online learning. Extension of existing research is also needed: Internet self-efficacy and self-regulation are typically used as sole predictors of student satisfaction (Artino, 2007; Puzziferro, 2008; Rodriguez Robles, 2006). To our knowledge, no prior studies have assessed the relationships between interaction, Internet self-efficacy, self-regulation, and student satisfaction together. In addition, we are interested in exploring the effect of two additional variables (course category and program) on student satisfaction.
Research indicates that course category, defined as undergraduate or graduate level, is related to satisfaction (Kiriakidis, 2005; Price, 1993; Rodriguez Robles, 2006). Program, defined as subject area, may serve as a potential moderator of satisfaction (Jain, 2011; Macon, 2011). Hence, the
factors examined were three types of interaction, self-regulation, Internet self-efficacy, course category, and program.

4.1. Research questions

1. To what extent does each predictor variable (learner–instructor interaction, learner–learner interaction, learner–content interaction, Internet self-efficacy, and self-regulated learning) correlate with student satisfaction?
2. To what extent do interaction, Internet self-efficacy, and self-regulated learning predict student satisfaction, and which variables are significant predictors of student satisfaction?
3. Of those variables that combine for the best prediction of student satisfaction, how much unique variance in student satisfaction do the significant predictors explain?
4. Do course category and program affect student satisfaction and moderate the effects of the three types of interaction, self-regulated learning, and Internet self-efficacy on student satisfaction?

5. Method

5.1. Sample

Participants included undergraduate and graduate students taking online classes offered by the College of Education of a mid-sized university in the Intermountain West in the Fall semester of 2009. The online courses were drawn from all seven programs in the College of Education (see Table 1). Of 990 enrolled students, 221 (22.32%) responded to the survey. Forty-one survey responses were deleted to meet the requirements of HLM analysis: students who were the only respondent in a course, students who responded for more than one course or attended a blended course, and students from courses outside the College of Education. In all, 180 responses from 26 courses were used in the analysis. This exceeds the minimum number of participants (N = 75) needed to test a regression model with five independent variables and allow for confident assumptions about observed relationships (Stevens, 2002).
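The HLM analysis used on this nested sample treats students (level 1) as clustered within courses (level 2). As a generic sketch only, not the authors' exact specification and with illustrative predictor names, a two-level random-intercept model with a cross-level moderator can be written as:

```latex
% Level 1: student i in course j, with a student-level predictor
Y_{ij} = \beta_{0j} + \beta_{1j}\,\mathrm{Interaction}_{ij} + r_{ij},
\qquad r_{ij} \sim N(0, \sigma^2)

% Level 2: course-level equations; Program_j moderates the level-1 slope
\beta_{0j} = \gamma_{00} + \gamma_{01}\,\mathrm{Program}_{j} + u_{0j},
\qquad u_{0j} \sim N(0, \tau_{00})

\beta_{1j} = \gamma_{10} + \gamma_{11}\,\mathrm{Program}_{j}
```

In this sketch, a significant \(\gamma_{11}\) corresponds to the kind of cross-level moderation reported in the abstract, where the effect of an interaction variable on satisfaction differs by program.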
Although the sample of 26 classes was slightly smaller than the level-2 sample size of 30 commonly recommended in educational research (Maas & Hox, 2005), HLM was necessary to account for clustering in the level-1 (student-level) data. Appendix E shows the number of student responses for each class, ranging from 2 to 31 students. Preliminary chi-square analyses were performed to determine the representativeness of the sample within the College of Education. The first analysis compared the number of participating courses from each program with the number of course offerings. The second analysis compared the number of student survey responses from each program with the number of enrolled students. The non-significant result of the first chi-square analysis, χ2(5) = 5.84, p = .32, indicates that the responding courses did not differ systematically from the offered courses. When the unit of analysis was changed to students, systematic differences were found, χ2(6) = 128.23, p < .001. Some programs (Special Education and Rehabilitation; School of Teacher Education and Leadership; and Health, Physical Education, and Recreation) had only a single participating course. For instance, 172 students were enrolled in five courses offered through the program of Health, Physical Education, and Recreation; however, only one of these five classes could be approached with the instructor's permission, and it yielded only two student responses. Courses with either small enrollments, or enrollments accounting for a small portion of the program's total course offerings, may account for some of these statistically significant differences. Overall, the sample is representative of the population in terms of the courses collected from each program, but not in terms of the number of student responses from each program. The greater proportions of respondents in two programs (Family, Consumer, and Human Development;
Table 1
Programs with the number of responses.

Program | Number of courses with student responses | Number of student survey responses
Instructional Technology & Learning Sciences | 4 | 29
Communicative Disorders and Deaf Education | 3 | 14
Family, Consumer, and Human Development | 9 | 84
Psychology | 7 | 42
Special Education and Rehabilitation | 1 | 6
School of Teacher Education & Leadership | 1 | 3
Health, Physical Education, and Recreation | 1 | 2
Total | 26 | 180
Instructional Technology and Learning Sciences) and the smaller proportions in two other programs (Communicative Disorders and Deaf Education; Health, Physical Education, and Recreation) may account for the significant chi-square. As illustrated in Table 2, there were more female than male respondents, consistent with previous studies in distance learning environments in which female respondents constituted the majority (60% to 89%) of online survey respondents (Chejlyk, 2006; Rodriguez Robles, 2006). As Table 2 also shows, the courses fell into three levels: undergraduate, undergraduate/graduate, and graduate. Eighty percent of the respondents were taking undergraduate-level courses, 11% were from graduate-level courses, and 9% were from undergraduate/graduate-level courses. Most students spent less than 5 h or 6–10 h per week online for the class; few respondents spent more than 10 h online or on Blackboard (see Table 2).

5.2. Measures

The survey (Appendix A) includes questions on demographics, the five predictor variables, and the outcome variable of student satisfaction. The Internet self-efficacy scale used in this study was developed by Eastin and LaRose (2000). It is an overall measure of general Internet use, with eight items on the extent to which participants feel confident in understanding terms or words relevant to Internet hardware and software, describing functions of Internet hardware, solving Internet problems, gathering data through the Internet, and learning advanced Internet skills. The self-regulated learning scale was adopted from the Metacognitive self-regulation subscale
Table 2
Description of participants.

                                  Frequency          Percent
Gender
  Male                            48                 27%
  Female                          132                73%
Marital status
  Married                         136                76%
  Single                          44                 24%
Age
  18–25                           74                 41%
  26–35                           62                 34%
  36–45                           28                 16%
  46–55                           16                 9%
  Above 56                        0                  0%
Course level
  Undergraduate level             144 (20 courses)   80%
  Undergraduate/graduate level    17 (4 courses)     9%
  Graduate level                  19 (2 courses)     11%
Hours spent online per week
  Less than 5 h                   85                 47%
  6–10 h                          65                 36%
  11–15 h                         11                 6%
  16–20 h                         10                 6%
  Above 20 h                      9                  5%
in the MSLQ developed by Pintrich, Smith, Garcia, and McKeachie (1993). It assesses the extent to which learners utilize planning, monitoring, and regulating strategies during learning. Scores from both the Internet self-efficacy subscale and the self-regulated learning subscale (see Table 3) were found to be valid and reliable in prior work (Internet self-efficacy: α = 0.93; self-regulation: α = 0.79). All three interaction subscales and the satisfaction subscale were adapted from an instrument used in prior research on student interaction and satisfaction in a blended learning environment (Kuo, Eastmond, Schroder, & Bennett, 2009). The instrument was reliable but lacked validity information (Kuo et al., 2009). The items were revised to fit the fully online environments of this study before a content validity survey was conducted. To assess the content validity of the interaction and satisfaction instruments, six professors with research expertise in online learning, experience teaching online classes, or both, rated the extent to which each item was necessary and adequate. A content validity ratio (CVR) was calculated based on the ratings from these six experts (Cohen & Swerdlik, 2004). The critical CVR value for six experts is 0.99; items with a CVR below 0.99 should be deleted. The CVR process involved two rounds of rating, during which some items were removed and slight wording changes were made based on the experts' feedback. To obtain reliability information for the interaction and satisfaction subscales and to test the feasibility of the data collection procedures, a pilot study with 111 respondents was conducted in summer 2009 (Kuo, Walker, Belland, & Schroder, 2013).
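The CVR screen described above follows Lawshe's formula, CVR = (n_e − N/2) / (N/2), where n_e is the number of experts rating an item essential and N is the panel size. A minimal sketch (the item ratings below are invented for illustration):

```python
def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

# With six experts, an item survives only if all six rate it essential,
# since CVR must reach the 0.99 critical value for a six-member panel.
assert content_validity_ratio(6, 6) == 1.0
assert round(content_validity_ratio(5, 6), 2) == 0.67  # below 0.99: deleted
```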
The pilot study was conducted to test the data collection procedures and to obtain reliability and content validity information for the interaction scales (learner–learner interaction: α = 0.99; learner–instructor interaction: α = 0.88; learner–content interaction: α = 0.92) and the satisfaction scale (α = 0.93) (see Table 3). The participants in the pilot study were undergraduate and graduate students enrolled in summer-session online courses in the College of Education, whose backgrounds were similar to those of the survey respondents in the current study. The items for the interaction, Internet self-efficacy, and self-regulation scales are shown in Appendix A.

5.3. Procedure

The researcher contacted course instructors about their willingness to include their online students in this survey. A recruitment email was sent to all instructors who taught online courses offered through the College of Education. Interested instructors were asked to distribute the online survey link to their students by whatever mechanism they normally used to contact them (e.g., email, Blackboard announcements, Blackboard discussion threads, or some alternative means). Data collection took place by means of an online survey at the end of fall 2009.

5.4. Data analysis

Data were analyzed with SPSS 16.0 and HLM 6.0 for Windows. Cronbach's alpha was used to determine the internal consistency of the items in each scale. Bivariate correlation analyses (Pearson product
40
Y.-C. Kuo et al. / Internet and Higher Education 20 (2014) 35–50
Table 3
Instruments.

Scale                            Scale type              Number of items    Validity             Reliability (Cronbach's alpha)
Learner–learner interaction      5-point Likert scale    8                  Good (CVR survey)    0.99
Learner–instructor interaction   5-point Likert scale    6                  Good (CVR survey)    0.88
Learner–content interaction      5-point Likert scale    3                  Good (CVR survey)    0.92
Internet self-efficacy scale     7-point Likert scale    8                  Good                 0.93
Self-regulation                  7-point Likert scale    12                 Good                 0.79
Satisfaction                     5-point Likert scale    5                  Good (CVR survey)    0.93

Note. CVR refers to content validity ratio. The reliability information is from the pilot study.
moment) were performed to examine the relationships among the three types of interaction, Internet self-efficacy, self-regulation, and student satisfaction. The Pearson correlation coefficients (r), ranging between −1 and +1, indicate the strength and direction of the relationship of each independent variable with student satisfaction. As a preliminary step toward HLM, a multiple regression analysis was performed by entering all predictors simultaneously to test for violations of methodological assumptions in the data. No extreme cases were identified as outliers through leverage and influence statistics. A test for multicollinearity was then performed, first through bivariate correlations and then through ordinary linear multiple regression. Based on the variance inflation factor (VIF) and tolerance values in the multiple regression, there was no evidence of multicollinearity, so HLM analysis was appropriate. HLM was performed to address the research questions regarding the extent to which the combined and individual independent variables predicted student satisfaction, the unique variance each predictor explained, and the direct and moderating effects of class-level predictors on student satisfaction. HLM is a statistical technique that takes into account clustering within structural units such as classrooms, in which participants' responses may not be independent from each other; this dependency in the outcome measures violates the assumptions of ordinary linear regression analyses. HLM takes the dependency within level-2 units (classrooms) into account and controls for the associated intraclass correlation. Student-level (level-1) data were nested within the specific classes (level-2) students attended (see Table 4). Student-level data (continuous variables) included the scores for the predictors and student satisfaction.
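The multicollinearity screen via variance inflation factors can be sketched as follows; the predictor matrix here is random stand-in data, not the study's dataset:

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor per column: 1 / (1 - R_j^2), where R_j^2
    comes from regressing column j on the remaining columns."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y, others = X[:, j], np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # intercept + other predictors
        resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
        r2 = 1 - resid.var() / y.var()
        out[j] = 1 / (1 - r2)
    return out

rng = np.random.default_rng(1)
X = rng.normal(size=(221, 5))      # five near-orthogonal stand-in predictors
assert np.all(vif(X) >= 1.0)       # VIF is bounded below by 1
assert np.all(vif(X) < 2.0)        # VIF near 1: no multicollinearity signal
```

A common rule of thumb flags VIF values above 5 or 10 (equivalently, tolerance = 1/VIF below 0.2 or 0.1) as evidence of multicollinearity.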
Class-level variables (categorical variables) included the course category (i.e., undergraduate-level courses vs. graduate-level courses) and the programs (i.e., subject areas) offering the course. The reasons to include course category and program as class-level variables are twofold. First, course category and program represent class characteristics. Students enrolled in undergraduate level courses may differ from those in graduate level courses. Second, the program or subject of study may affect student satisfaction as well as the impact of the three types of learner interaction on student satisfaction. The exploration of the direct and moderating effect of class level
Table 4
Student-level and class-level variables.

Student-level variables (level-1)
  1. Learner–learner interaction
  2. Learner–instructor interaction
  3. Learner–content interaction
  4. Internet self-efficacy
  5. Self-regulated learning
Class-level variables (level-2)
  1. Course category (undergraduate, graduate)
  2. Programs offering the course (Instructional Technology & Learning Sciences; Communicative Disorders and Deaf Education; Family, Consumer, and Human Development; Psychology; Special Education and Rehabilitation; School of Teacher Education & Leadership; Health, Physical Education, and Recreation)
variables might be important; therefore, both were investigated in this study. Appendices B (Model I), C (Model II), and D (Model III) present the three two-level hierarchical linear models used to answer the research questions of this study. For the HLM analysis, a null model without any student-level or class-level predictors was first estimated to determine the extent to which course differences (the clustering effect) explained variation in student satisfaction. The intraclass correlation coefficient (ICC) was 0.024, indicating that 2.4% of the total variance in student satisfaction was accounted for by between-group variance. However, even small ICCs cannot be ignored without causing substantial error when individual participants are treated as independent observations (Roberts, 2002). To address research questions two and three regarding the significant predictors, their unique contributions, and the extent to which the combination of predictors explains variance in student satisfaction, five variables (three types of interaction, Internet self-efficacy, and self-regulated learning) were entered as student-level predictors in an HLM analysis (Appendix B), without the inclusion of class-level predictors in Eq. (1). Eq. (2) depicts the random intercept and random slopes without the inclusion of any level-2 predictors. To address the question regarding the main and moderating effects of class-level predictors on student satisfaction, class-level predictors needed to be entered into the model. The categories of the two class-level predictors were re-grouped (see Table 5) to make the predictors meaningful and to reduce the number of dummy-coded variables for each predictor. Since each category of the two class-level predictors needed to contain at least two class units, a decision was made to combine course categories and programs. For example, course category was reduced to two categories (i.e., undergraduate vs. graduate).
Among the four undergraduate/graduate-level courses, two were moved to the graduate-level category and the other two to the undergraduate-level category, because only undergraduate students or only graduate students participated in each of these courses. As for the program predictor, there were originally seven programs. However, three programs (i.e., Special Education and Rehabilitation; School of Teacher Education & Leadership; Health, Physical Education, and Recreation) each had only one course with student responses. Based on the nature of course content, Instructional Technology and Learning Sciences, Special Education and Rehabilitation, and School of Teacher Education and Leadership were combined into one category. Health, Physical Education, and Recreation was combined with Psychology because the only included course from Health, Physical Education, and Recreation (Drugs and Human Behavior) was relevant to psychology. Table 5 shows the final four categories for the program predictor. Instructional Technology and Learning Sciences was selected as the reference group for two reasons: 1. The courses offered through Instructional Technology and Learning Sciences may involve more use of educational technologies than the courses offered through other programs. 2. Most of the current authors were from Instructional Technology and Learning Sciences and were particularly interested in how this program differed from the others. After the two class-level predictors were re-organized, HLM was performed again to include the two class-level predictors (i.e., course
Table 5
Class-level predictors: course category and program.

Program                                                          Number of courses with student responses    Course category
Category 1: Instructional Technology & Learning Sciences
  (combined with Special Education and Rehabilitation, and
  School of Teacher Education & Leadership)                      6                                           4 graduate-level; 2 undergraduate-level
Category 2: Communicative Disorders and Deaf Education           3                                           Undergraduate-level
Category 3: Family, Consumer, and Human Development              9                                           Undergraduate-level
Category 4: Psychology (combined with Health, Physical
  Education, and Recreation)                                     8                                           Undergraduate-level
category and program) into the model. The two class-level variables were added to assess their main effects on student satisfaction (i.e., the predictive effects of course category and program on the intercept) as well as their moderating effects on the relationships between significant student-level predictors and student satisfaction (i.e., their influence on the level-1 slopes). The details of this model (Eqs. (3) & (4) in Model II) are in Appendix C. In a final step, redundant level-2 predictors were removed from the model, keeping only the significant class-level predictor for the level-1 slopes in the final model (Model III). Eqs. (5) and (6) show the final model (Appendix D). Eq. (5) represents the level-1 (student-level) equation with five predictors. Eq. (6) represents the random intercept and random slopes. Three program dummy codes were entered as level-2 predictors of the slope of learner–content interaction, with Category 1 (Instructional Technology & Learning Sciences, combined with Special Education and Rehabilitation, and School of Teacher Education and Leadership) serving as the reference (comparison) group. Program dummy code 1 represents the contrast of Category 2 (Communicative Disorders and Deaf Education) against Category 1, program dummy code 2 represents the contrast of Category 3 (Family, Consumer, and Human Development) against Category 1, and program dummy code 3 represents the contrast of Category 4 (a combination of Psychology and Health, Physical Education, and Recreation) against Category 1. Table 6 shows an overview of the analyses performed in this study with the corresponding research questions.

6. Results

6.1. Descriptive analyses

Table 7 shows the average score and reliability information for each scale based on the sample collected in this study. Each subscale had an average score higher than the midpoint of its corresponding scale except for the learner–learner interaction scale (M = 2.90, SD = 1.22), which had a mean slightly below the midpoint of 3.
Among the three types of interaction, learner–content interaction (M = 4.08, SD = 0.99) had the highest mean score, followed by learner–instructor interaction (M = 3.66, SD = 0.94), with learner–learner interaction (M = 2.90, SD = 1.22) lowest. Both Internet self-efficacy (M = 5.32, SD = 1.77) and self-regulated learning (M = 4.35, SD = 1.01) had mean scores above the midpoint of 4. The mean score of student satisfaction was high (M = 4.24, SD = 0.79).
The Cronbach's alpha values for the six subscales were all larger than 0.80, indicating good reliability for each scale.

6.2. Correlation and HLM analyses

6.2.1. Research question 1: To what extent does each predictor variable correlate with student satisfaction?

All five independent variables showed a significant correlation with student satisfaction; as each independent variable increases, student satisfaction increases (see Table 8). Of the five independent variables, learner–content interaction (r = .684, p < .01) had the highest correlation with student satisfaction, followed by learner–instructor interaction (r = .392, p < .01). Learner–learner interaction (r = .177, p < .05) correlated least with student satisfaction among the three types of interaction. Consistent with previous studies, the correlations between the three types of interaction and student satisfaction were positive and significant (Chejlyk, 2006; McLaren, 2010; Rodriguez Robles, 2006; Sher, 2004; Wurtele, 2008). Both Internet self-efficacy (r = .181, p < .05) and self-regulated learning (r = .340, p < .01) showed low correlations with student satisfaction.

6.2.2. Research question 2: To what extent do interaction, Internet self-efficacy, and self-regulated learning predict student satisfaction, and which variables are significant predictors of student satisfaction?

According to the HLM analysis (see Model I in Table 9), among the five level-1 (student-level) predictors, learner–instructor interaction (β = .122, t = 2.24, p < .05) and learner–content interaction (β = .604, t = 8.762, p < .001) significantly predicted student satisfaction. In HLM, R² was used to express the total proportional reduction in prediction error after all predictors were entered; that is, the proportional reduction in prediction error reflects the variance explained by the entered predictors beyond the null model.
Based on the HLM analysis, R² was calculated for both level-1 (student level) and level-2 (class level). A 45.5% reduction in variance (see Table 10) was observed after the five level-1 predictors (the three types of interaction, Internet self-efficacy, and self-regulated learning) were entered into the equation. In other words, these five level-1 predictors explained almost half of the variance in student satisfaction, indicating that this five-predictor model of satisfaction is valuable and meaningful.
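The two quantities used above, the null-model intraclass correlation and the level-1 pseudo-R², are both simple functions of fitted variance components. A minimal sketch; the variance components below are illustrative values chosen only to reproduce the reported ICC of 0.024 and R² of 0.455, since the fitted variances themselves are not listed in the text:

```python
def icc(tau00: float, sigma2: float) -> float:
    """Intraclass correlation: between-class variance over total variance."""
    return tau00 / (tau00 + sigma2)

def pseudo_r2(sigma2_null: float, sigma2_model: float) -> float:
    """Proportional reduction in level-1 residual variance vs. the null model."""
    return (sigma2_null - sigma2_model) / sigma2_null

# Illustrative (hypothetical) variance components, not the study's estimates.
assert round(icc(0.015, 0.610), 3) == 0.024        # 2.4% between-course variance
assert round(pseudo_r2(0.620, 0.3379), 3) == 0.455  # 45.5% level-1 reduction
```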
Table 6
Research questions and corresponding analyses.

Research question                                                                Analysis
1. To what extent does each predictor variable (learner–instructor interaction,  Correlation analyses
   learner–learner interaction, learner–content interaction, Internet
   self-efficacy, and self-regulated learning) correlate with student
   satisfaction?
2. To what extent do interaction, Internet self-efficacy, and self-regulated     Hierarchical linear modeling (HLM) analyses with student-level predictors
   learning predict student satisfaction and which variables are significant
   predictors of student satisfaction?
3. Of those variables that combine for the best prediction of student            Hierarchical linear modeling (HLM) analyses with student-level predictors
   satisfaction, how much unique variance in student satisfaction do the
   significant predictors explain?
4. Do the class-level predictors (course category and program) affect student    Hierarchical linear modeling (HLM) analyses with the inclusion of class-level predictors
   satisfaction and moderate the effects of the three types of interaction,
   self-regulated learning, and Internet self-efficacy on student satisfaction?
Table 7
Descriptive information for each scale.

Subscale                  Number of items    Range    Midpoint    M       SD      α
Learner–learner           8                  1–5      3           2.90    1.22    0.94
Learner–instructor        6                  1–5      3           3.66    0.94    0.83
Learner–content           3                  1–5      3           4.08    0.99    0.92
Internet self-efficacy    8                  1–7      4           5.32    1.17    0.92
Self-regulated learning   12                 1–7      4           4.35    1.01    0.82
Satisfaction              5                  1–5      3           4.24    0.79    0.87

Note. α refers to Cronbach's alpha.
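The α column in Table 7 reports Cronbach's alpha. A minimal sketch of the computation on simulated item responses (the data are invented stand-ins, not the study's dataset):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy example: three items driven by one latent trait plus small noise,
# so internal consistency should be high.
rng = np.random.default_rng(0)
trait = rng.normal(size=200)
items = np.column_stack([trait + 0.3 * rng.normal(size=200) for _ in range(3)])
alpha = cronbach_alpha(items)
assert 0.8 < alpha < 1.0
```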
6.2.3. Research question 3: Of those variables that combine for the best prediction of student satisfaction, how much unique variance in student satisfaction do the significant predictors explain?

Based on the HLM analysis (see Table 10), learner–content interaction and learner–instructor interaction were the only two independent variables that contributed significantly to student satisfaction in the model with five level-1 predictors. Learner–content interaction alone contributed 24% incremental variance to the prediction of student satisfaction beyond the effects of the other student-level predictors, almost equal to the variance explained by the other four predictors combined (see Table 10). Learner–instructor interaction contributed 0.2% of the variance in student satisfaction beyond the model with the four other level-1 predictors. This result implies that learner–instructor interaction did not explain much of the residual variance left over from learner–content interaction, although it showed a significant bivariate correlation with student satisfaction (see Table 8). The lack of an effect of learner–instructor interaction in the multilevel regression analysis results from its substantial overlap with other, stronger predictors in the model.

6.2.4. Research question 4: Do course category and program affect student satisfaction and moderate the effects of the three types of interaction, self-regulated learning, and Internet self-efficacy on student satisfaction?

According to Model I (see Table 9), which contained five level-1 predictors, learner–instructor interaction and learner–content interaction were significant predictors. However, their variance components were not significant. The fact that the variance components of these predictor slopes remained non-significant indicated that there was little variance left to be explained by level-2 predictors.
However, we continued the analysis for the following reasons (Raudenbush & Bryk, 2002). First, the lack of significance might be partly due to the limited sample size, both in terms of class-level units and in terms of the number of students in the smaller class units; with a small sample size at level-2, effect sizes and variance components can be substantial even if they are not significant. Second, in our case, the large range in the course-specific bivariate correlations and regression coefficients of learner–instructor interaction (r²: 0.066–0.966; β: −0.053 to 0.671) and learner–content interaction (r²: 0.203–0.982; β: 0.123 to 0.876) on student satisfaction demonstrated that there were substantial differences in the slopes of learner–instructor interaction and learner–content interaction among class units (see Appendix F). Third, Internet self-efficacy and self-regulation were neither the primary focus nor significant predictors
for student satisfaction. Hence, the examination of the main and moderating effects of class-level predictors on the significant level-1 predictors and student satisfaction continued. Two analyses were performed by adding class-level predictors to the intercept and the significant slopes (i.e., learner–instructor interaction and learner–content interaction): one analysis added course category and the other added program (see Appendix G). Both analyses showed that course category (β = −.420, t = .190, p < .05) and program (program dummy 1: β = .003, t = .006, p > .05; program dummy 2: β = −.458, t = −2.86, p < .05; program dummy 3: β = −.491, t = −.28, p < .05) were significant for the slope of learner–content interaction, but not for the slope of learner–instructor interaction. Thus, the two class-level predictors were entered into the intercept and the slope of learner–content interaction to investigate the main and moderating effects of class-level predictors on student satisfaction (Model II). Based on this model, there were no main effects of the two class-level predictors on student satisfaction (see Table 9). That is, course category (β = .003, t = .014, p > .05) and program (program dummy 1: β = .040, t = .125, p > .05; program dummy 2: β = .103, t = .507, p > .05; program dummy 3: β = .146, t = .687, p > .05) did not predict student satisfaction at a statistically significant level. Before addressing the question of the moderating effects of class-level predictors on student satisfaction, the model with two class-level predictors was simplified in favor of parsimony, retaining only the level-2 predictor that showed a significant moderator effect. Course category (β = −.113, t = −.528, p > .05) was eliminated from the equation in the final model since it did not impact the slope of learner–content interaction (i.e., its moderator effects were found to be non-significant).
Therefore, only the moderator effects of academic program on the relationship between learner–content interaction and student satisfaction were tested in the final model (Model III), entered as level-2 predictors of the learner–content interaction slope (see Table 9). Two of the three program contrasts displayed significant moderator effects on the relationship between learner–content interaction and student satisfaction. Among students from Family, Consumer, and Human Development (β = −.499, t = −4.125, p < .001) and students from Psychology (β = −.514, t = −3.790, p < .001), learner–content interaction had a significantly lower impact on student satisfaction than among students of Instructional Technology and Learning Sciences. That is, compared to students in Instructional Technology and Learning Sciences, the positive effect of learner–content interaction on student satisfaction was weaker among students in Psychology or Family, Consumer, and Human Development. No significant difference was found between Instructional Technology and Learning Sciences and Communicative Disorders and Deaf Education (β = −.204, t = −.549, p > .05); the difference between the average slopes of learner–content interaction in Instructional Technology and Learning Sciences (0.876) and Communicative Disorders and Deaf Education (0.762) was small (0.114). The average slope of learner–content interaction in Instructional Technology and Learning Sciences (0.876) was much steeper than that of Family, Consumer, and Human Development (0.483) or Psychology (0.617).
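Under the final model, the learner–content interaction slope for each program category equals the reference-group slope plus the corresponding dummy-code coefficient. A sketch of this arithmetic using the Model III coefficients reported in Table 9:

```python
base_lc_slope = 0.987  # LC interaction slope for the reference group (Category 1)
dummy_effects = {
    "Communicative Disorders and Deaf Education": -0.204,  # dummy 1 (n.s.)
    "Family, Consumer, and Human Development":    -0.499,  # dummy 2 (p < .001)
    "Psychology":                                 -0.514,  # dummy 3 (p < .001)
}
slopes = {prog: round(base_lc_slope + b, 3) for prog, b in dummy_effects.items()}

# The two significant moderators roughly halve the learner-content effect
# relative to the reference group.
assert slopes["Family, Consumer, and Human Development"] == 0.488
assert slopes["Psychology"] == 0.473
assert slopes["Communicative Disorders and Deaf Education"] == 0.783
```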
Table 8
Correlations among independent variables and student satisfaction.

                          Learner–   Learner–     Learner–   Internet        Self-regulated   Satisfaction
                          learner    instructor   content    self-efficacy   learning
Learner–learner           –          .494**       .154*      .035            .157*            .177*
Learner–instructor                   –            .442**     .222**          .171*            .392**
Learner–content                                   –          .226**          .428**           .684**
Internet self-efficacy                                       –               .183*            .181*
Self-regulated learning                                                      –                .340**
Satisfaction                                                                                  –

Note. *p < .05, **p < .01.
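A Pearson correlation matrix of the kind shown in Table 8 can be computed directly; a minimal numpy sketch on random stand-in scores (not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)
scores = rng.normal(size=(221, 6))     # 221 respondents x 6 subscale scores
r = np.corrcoef(scores, rowvar=False)  # 6 x 6 Pearson correlation matrix

assert r.shape == (6, 6)
assert np.allclose(np.diag(r), 1.0)    # each scale correlates 1 with itself
assert np.allclose(r, r.T)             # the matrix is symmetric
```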
Table 9
Results of HLM.

Model I: Model with student-level predictors
                              Final estimation of effects        Variance components
Predictor                     Estimate    SE       t             σ²u       df    χ²       p
Intercept (β0), γ00           4.214       0.048    88.420***     0.0138    10    9.603    >0.500
LL interaction (β1), γ10      −0.021      0.040    −0.511        0.0047    10    8.487    >0.500
LI interaction (β2), γ20      0.122       0.055    2.237*        0.0059    10    12.404   0.258
LC interaction (β3), γ30      0.604       0.069    8.762***      0.0364    10    13.749   0.184
ISE (β4), γ40                 −0.003      0.043    −0.063        0.0122    10    18.500   0.047**
SRL (β5), γ50                 −0.025      0.071    −0.348        0.0598    10    30.976   0.001***

Model II: Model with student-level predictors and two class-level predictors
Predictor                     Estimate    SE       t             σ²u       df    χ²       p
Intercept (β0), γ00           4.120       0.099    41.282***     0.0135    6     10.797   0.094
  Course category (γ01)       0.003       0.215    0.014
  Program dummy 1 (γ02)       0.040       0.319    0.125
  Program dummy 2 (γ03)       0.103       0.204    0.507
  Program dummy 3 (γ04)       0.146       0.212    0.687
LL interaction (β1), γ10      0.0001      0.040    0.003         0.0029    10    8.624    >0.500
LI interaction (β2), γ20      0.081       0.053    1.518         0.0022    10    11.246   0.338
LC interaction (β3), γ30      1.039       0.150    6.923***      0.0010    6     7.719    0.259
  Course category (γ31)       −0.113      0.214    −0.528
  Program dummy 1 (γ32)       −0.141      0.390    −0.362
  Program dummy 2 (γ33)       −0.441      0.167    −2.647*
  Program dummy 3 (γ34)       −0.453      0.178    −2.545*
ISE (β4), γ40                 0.0003      0.043    0.008         0.0116    10    18.154   0.052
SRL (β5), γ50                 −0.061      0.075    −0.808        0.0737    10    36.558   0.000***

Model III: Final model with student-level predictors and one class-level predictor (program)
Predictor                     Estimate    SE       t             σ²u       df    χ²       p         % of reduction    Δχ²
Intercept (β0), γ00           4.128       0.088    47.107***     0.01262   7     10.797   0.147
  Program dummy 1 (γ01)       0.035       0.269    0.130
  Program dummy 2 (γ02)       0.099       0.107    0.921
  Program dummy 3 (γ03)       0.138       0.121    1.141
LL interaction (β1), γ10      −0.001      0.040    −0.028        0.0031    10    8.690    >0.500
LI interaction (β2), γ20      0.080       0.053    1.504         0.0021    10    11.328   0.332
LC interaction (β3), γ30      0.987       0.112    8.774***      0.0011    7     7.839    0.347     96.29%            5.746*
  Program dummy 1 (γ31)       −0.204      0.372    −0.549
  Program dummy 2 (γ32)       −0.499      0.121    −4.125***
  Program dummy 3 (γ33)       −0.514      0.136    −3.790***
ISE (β4), γ40                 0.002       0.042    0.052         0.0113    10    18.135   0.052
SRL (β5), γ50                 −0.061      0.075    −0.810        0.0729    10    36.838   0.000***

Note. LL interaction: learner–learner interaction; LI interaction: learner–instructor interaction; LC interaction: learner–content interaction; ISE: Internet self-efficacy; SRL: self-regulated learning. Program dummy code 1 contrasts Category 2 (Communicative Disorders and Deaf Education) against the reference group (Category 1: Instructional Technology & Learning Sciences); program dummy code 2 contrasts Category 3 (Family, Consumer, and Human Development) against Category 1; program dummy code 3 contrasts Category 4 (Psychology) against Category 1. The reference group was coded as 0 and the other groups as 1. *p < .05, **p < .01, ***p < .001.
Learner–instructor interaction became non-significant when class-level predictors were included. This might be due to the addition of predictors to the regression equation, which lowers the degrees of freedom for the analysis. The variance component of learner–content interaction was reduced by 96.29% in the final model with the inclusion of the three program dummy codes as predictors of the slope of learner–content interaction (see Table 9), a significant reduction (Δχ² = 5.746, p < .05).

7. Discussion

7.1. Student-level predictors of student satisfaction

Learner–content interaction was found to be the strongest student-level predictor of student satisfaction. The prominence of learner–content
interaction is consistent with both Chejlyk (2006) and Keeler (2006). This result supports the ideas of Moore (1989) and Moore and Kearsley (1996), both of whom highlighted the importance of learner–content interaction in online learning environments. Online learners are likely to spend most of their time on required readings or assignments and to digest the content through thinking, elaboration, or reflection, which constitute a person's internal intellectual communication with the content during learning. In our study, learner–learner interaction did not appear to have any effect on students' satisfaction, and the effects of learner–instructor interaction were relatively weak when class-level predictors were included in the model. In this regard, the results of this study differ from prior studies in which either learner–learner interaction or learner–instructor interaction was found to be the most important predictor in distance learning environments (Battalio, 2007; Bolliger & Martindale,
Table 10
Uniqueness of learner–instructor interaction and learner–content interaction.

Model                                                                                         R²       Uniqueness(a)
1. Full model: the model with five level-1 predictors against the null model                  0.455    –
2. The model with four level-1 predictors (without learner–content interaction)
   against the null model                                                                     0.215    0.240 (24%)
3. The model with four level-1 predictors (without learner–instructor interaction)
   against the null model                                                                     0.453    0.002 (0.2%)

(a) Uniqueness refers to the incremental variance explained in the full model by the specific predictor excluded in Model 2 and Model 3, respectively.
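The uniqueness values in Table 10 are simple differences in R² between the full model and each reduced model:

```python
r2_full = 0.455         # five level-1 predictors vs. null model
r2_without_lc = 0.215   # learner-content interaction dropped
r2_without_li = 0.453   # learner-instructor interaction dropped

uniqueness_lc = round(r2_full - r2_without_lc, 3)
uniqueness_li = round(r2_full - r2_without_li, 3)
assert uniqueness_lc == 0.240  # 24% unique to learner-content interaction
assert uniqueness_li == 0.002  # 0.2% unique to learner-instructor interaction
```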
2004; Jung et al., 2002; Rodriguez Robles, 2006; Thurmond, 2003). One possible explanation for these differences may reside in specific course requirements and instructional goal orientation. In online settings, learner–learner interaction may gain relevance when certain collaborative activities are required, such as group discussions, group projects, or idea sharing (Jung et al., 2002). If collaboration among learners is not required, then learner–learner interaction may not affect student satisfaction at all. In contrast to learner–learner interaction, interaction between the instructor and learner remains a basic component of the online course experience, with potentially strong impact on learner outcomes and learner satisfaction. The relevance and impact of learner–instructor interaction on student satisfaction likely depend on the intensity and frequency with which such interaction occurs (Burnett, Bonnici, Miksa, & Kim, 2007). At a minimum, interaction will occur when online learners have questions regarding the course content. In our study, learner–instructor interaction had a significant but not particularly strong impact on student satisfaction, which was even reduced to non-significance in the context of class-level predictors. However, as long as learner–instructor interaction remains a necessary component of online instruction, its potential impact on both student learning and student satisfaction cannot be ignored. Naturally, student satisfaction remains a secondary target of learner–instructor interaction, which may involve criticism, evaluation of student papers, discussion of grades, and other instructional activities that may not always be experienced as pleasant. The dual role of learner–instructor interaction as instructional medium and social exchange may explain its limited impact on student satisfaction. In addition, the three types of interaction may not be equally influential, depending on the course design.
Consistent with findings from Puzziferro (2006) and Rodriguez Robles (2006), Internet self-efficacy was not a significant predictor of student satisfaction, although the two were positively correlated. Most students in this study were regular online students and likely already possessed a certain level of Internet proficiency, which may explain the non-significant result. Further, contrary to the findings of Peterson (2011) and Puzziferro (2006) that self-regulated learning significantly predicted student satisfaction, self-regulated learning was not a significant predictor of student satisfaction in this study, even though the correlation between self-regulated learning and student satisfaction was significant. More research is required to determine under which conditions self-regulated learning may or may not predict student satisfaction, whether programs designed to enhance self-regulated learning may improve student outcomes, including course satisfaction, and in which settings such programs may be useful.

7.2. Class-level predictors of student satisfaction

Academic program was found to be a significant class-level moderator of the relationship between learner–content interaction and student satisfaction. The fact that learner–content interaction had a particularly strong impact on student satisfaction among students of Instructional Technology and Learning Sciences may not come as a surprise. First, program characteristics may shape the impact of learner–content interaction on student satisfaction. Instructional Technology and Learning Sciences courses may integrate more media into content design than many courses in the social sciences. Further, the content of Instructional Technology and Learning Sciences courses focuses on the integration of technology into teaching and learning, thus sensitizing its students to important characteristics of learner–content interaction.
Moreover, students in Instructional Technology and Learning Sciences may appreciate the various media tools and strategies used to facilitate learner–content interaction more than students in other academic programs. In Communicative Disorders and Deaf Education, media-rich content design may be highly valued as a way to enhance learning opportunities for individuals with special needs, which may explain why this program did not differ significantly from Instructional Technology and Learning Sciences. In addition, compared with faculty from other programs, faculty in Instructional Technology and Learning Sciences and in Communicative Disorders and Deaf Education may be better equipped to develop online content that motivates students to learn, or to use online technologies to increase the accessibility of online course materials. Encouraging faculty in other programs to attend technology training may help reduce the discrepancy among programs in the influence of learner–content interaction on satisfaction. Contrary to some previous research, course category neither affected satisfaction directly nor moderated the relationship between interaction and student satisfaction (Beqiri et al., 2010; Kiriakidis, 2005). More studies are needed to verify the direct and indirect influence of course category on student satisfaction.

8. Limitations and suggestions for future research

Several limitations should be noted. First, the participants in this study came from only one college at one university, so the results may not generalize well to other university settings. Second, the return rate (22.32%) was low, which has several consequences. While the minimum number of participants was reached, the results would be more reliable with additional participants. The proportion of responding courses among offered courses also varied across programs, ranging from 13% to 80%, so the analysis may be more representative of programs with higher response proportions. Finally, to meet minimum thresholds for HLM, several courses with limited participation were eliminated.
The level-2 sample size was slightly below the group size of 30 normally recommended in educational research (Maas & Hox, 2005). A small level-2 sample size may bias level-2 standard errors, so the level-2 coefficients should be interpreted cautiously. However, HLM was an appropriate method for analyzing data with students nested within classes, and it eliminated bias in the level-1 coefficients. Thus, beyond reliability and representativeness, more participants would have improved the HLM model. Third, this study required online students to fill out the survey based on only one class they selected. Students who took more than one class during the semester might have arbitrarily selected the course they liked most or least, which may have introduced bias. Fourth, self-reports were used to measure the three types of interaction because they are the most practical method of data collection; however, they may not have captured all aspects of the three types of interaction. In future research, a more diverse population in terms of disciplines and demographics should be studied. Additional class-level predictors should be explored, such as the use of teaching assistants, who may play an important role in all three types of interaction, or the fundamental design of the courses themselves (i.e., objectives, tasks, and assessment) (Lee & Rha, 2009). Student-level variables such as support services, class size, learning style, personality, gender, student autonomy, and other forms of interaction may also be included and examined in online learning environments (Biner et al., 1997; Juwah, 2006; Rodriguez Robles, 2006; Sahin, 2007).

9. Conclusion and implications

Learner–content interaction was found to be the most important predictor of student satisfaction in fully online learning. This result suggests that instructors and instructional designers should pay attention to content design and selection of appropriate delivery
technology in fully online settings. The online content should be (a) presented in an organized way and (b) easily accessible to online learners. A variety of media and technology tools can expand opportunities for learner–content interaction (Anderson, 2003). Embedding interactive videos in the content may be helpful, as the integration of rich media increases the likelihood of student interaction and satisfaction (Appiah, 2006; Havice, Davis, Foxx, & Havice, 2010; Pilarski, Johnstone, Pettepher, & Osheroff, 2008). In addition, online content that relates to students' personal experiences may help increase student interaction with course content (Moore & Kearsley, 1996). Instructors are encouraged to post messages on discussion boards regularly and to reply to student questions as soon as possible to increase their interaction with students. Assigning authentic tasks provides collaborative opportunities among learners (Herrington et al., 2006). A user-friendly learning management system is also recommended so that students can access online materials without difficulty. Instructional systems design processes such as learner and context analysis may provide guidance for improving interaction in online instruction, but this may not be applicable to all online courses. The suggestions regarding identifying instructional goals, determining
learning outcomes, and selecting evaluation methods may be applied in most online learning situations (Dick, Carey, & Carey, 2005). However, some suggestions may not be easily applied in online learning. For instance, it may not be possible to conduct a detailed learner characteristics analysis until an online course starts. Selection of the delivery method is vital; however, in some situations instructors are forced to use the standard learning management system to deliver the online course. To our knowledge, this is the first study examining the combined effect of three types of interaction, Internet self-efficacy, and self-regulated learning on student satisfaction through the application of HLM techniques. No prior study took cluster effects into account while examining the extent to which independent variables predict student satisfaction in distance learning settings. In addition, this study explored the direct and moderator effects of class-level predictors (course category and program) on student satisfaction, which had not been done in previous research. Course category (i.e., undergraduate, undergraduate/graduate level, graduate) did not contribute significantly to student satisfaction. However, program was found to significantly moderate the effect of learner–content interaction on satisfaction, in alignment with Macon's (2011) finding that subject area is a
Appendix A. The items for interaction, Internet self-efficacy, and self-regulation scales

Learner–learner interactions (5-point Likert scale, α = 0.93)
1. Overall, I had numerous interactions related to the course content with fellow students.
2. I got lots of feedback from my classmates.
3. I communicated with my classmates about the course content through different electronic means, such as email, discussion boards, instant messaging tools, etc.
4. I answered questions of my classmates through different electronic means, such as email, discussion board, instant messaging tools, etc.
5. I shared my thoughts or ideas about the lectures and its application with other students during this class.
6. I comment on other students' thoughts and ideas.
7. Group activities during class gave me chances to interact with my classmates.
8. Class projects led to interactions with my classmates.

Learner–instructor interactions (5-point Likert scale, α = 0.88)
1. I had numerous interactions with the instructor during the class.
2. I asked the instructor my questions through different electronic means, such as email, discussion board, instant messaging tools, etc.
3. The instructor regularly posted some questions for students to discuss on the discussion board.
4. The instructor replied my questions in a timely fashion.
5. I replied to messages from the instructor.
6. I received enough feedback from my instructor when I needed it.

Learner–content interactions (5-point Likert scale, α = 0.92)
1. Online course materials helped me to understand better the class content.
2. Online course materials stimulated my interest for this course.
3. Online course materials helped relate my personal experience to new concepts or new knowledge.
4. It was easy for me to access the online course materials.

Internet self-efficacy (7-point Likert scale, α = 0.93)
1. Understanding terms/words relating to Internet hardware.
2. Understanding terms/words relating to Internet software.
3. Describing functions of Internet hardware.
4. Trouble shooting Internet hardware.
5. Explaining why a task will not run on the Internet.
6. Using the Internet to gather data.
7. Confident learning advanced skills within a specific Internet program.
8. Turning to an on-line discussion group when help is needed.

Self-regulated learning (7-point Likert scale, α = 0.79)
1. During class time I often miss important points because I'm thinking of other things.
2. When reading for this course, I make up questions to help focus my reading.
3. When I become confused about something I'm reading for this class, I go back and try to figure it out.
4. If course materials are difficult to understand, I change the way I read the material.
5. Before I study new course material thoroughly, I often skim it to see how it is organized.
6. I ask myself questions to make sure I understand the material I have been studying in this class.
7. I try to change the way I study in order to fit the course requirements and instructor's teaching style.
8. I often find that I have been reading for class but don't know what it was all about.
9. I try to think through a topic and decide what I am supposed to learn from it rather than just reading it over when studying.
10. When studying for this course I try to determine which concepts I don't understand well.
11. When I study for this class, I set goals for myself in order to direct my activities in each study period.
12. If I get confused taking notes in class, I make sure I sort it out afterwards.

Satisfaction (5-point Likert scale, α = 0.93)
1. Overall, I am satisfied with this class.
2. This course contributed to my educational development.
3. This course contributed to my professional development.
4. I am satisfied with the level of interaction that happened in this course.
5. In the future, I would be willing to take a fully online course again.
significant moderator of satisfaction. Future studies attempting to predict student satisfaction are encouraged to take potential clustering effects into account and apply HLM to more accurately model any relationships.

Appendix B. Model I: Model with student-level predictors
Level 1:
Yij = β0j + β1j(learner–learner interaction) + β2j(learner–instructor interaction) + β3j(learner–content interaction) + β4j(Internet self-efficacy) + β5j(self-regulated learning) + eij   (1)

Level 2:
β0j = γ00 + μ0j
β1j = γ10 + μ1j
β2j = γ20 + μ2j
β3j = γ30 + μ3j
β4j = γ40 + μ4j
β5j = γ50 + μ5j   (2)
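To make the two-level structure of Model I concrete, the following sketch simulates its data-generating process. This is illustrative only: the `simulate_class` helper is hypothetical, the grand-mean coefficients in `GAMMA` are chosen merely to loosely echo the pattern of the study's estimates, and the variance components `TAU` and `SIGMA` are assumptions, not values from the paper.

```python
import random

random.seed(1)

# Assumed grand means (the gamma_k0 terms in Eq. (2)); values are illustrative.
GAMMA = {"intercept": 4.0, "ll": 0.0, "li": 0.2, "lc": 0.9, "ise": 0.0, "srl": 0.0}
TAU = 0.1    # assumed SD of the class-level random effects mu_kj
SIGMA = 0.4  # assumed SD of the student-level residual e_ij

def simulate_class(n_students):
    """One class j: draw random coefficients beta_kj around the grand means,
    then draw student outcomes Y_ij from the level-1 model in Eq. (1)."""
    beta = {k: g + random.gauss(0, TAU) for k, g in GAMMA.items()}
    students = []
    for _ in range(n_students):
        # Standardized student-level predictors for this student
        x = {k: random.gauss(0, 1) for k in ("ll", "li", "lc", "ise", "srl")}
        y = beta["intercept"] + sum(beta[k] * x[k] for k in x) + random.gauss(0, SIGMA)
        students.append((x, y))
    return students

# 26 classes, mirroring the number of courses in Appendix E
classes = [simulate_class(10) for _ in range(26)]
```

An HLM routine (any mixed-effects estimator) would then recover the γ coefficients and variance components from such nested data, which is the estimation task the paper performs on the real sample.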
Appendix C. Model II: Model with student-level predictors and two class-level predictors
Level 1:
Yij = β0j + β1j(learner–learner interaction) + β2j(learner–instructor interaction) + β3j(learner–content interaction) + β4j(Internet self-efficacy) + β5j(self-regulated learning) + eij   (3)

Level 2:
β0j = γ00 + γ01(course category) + γ02(program dummy 1) + γ03(program dummy 2) + γ04(program dummy 3) + μ0j
β1j = γ10 + μ1j
β2j = γ20 + μ2j
β3j = γ30 + γ31(course category) + γ32(program dummy 1) + γ33(program dummy 2) + γ34(program dummy 3) + μ3j
β4j = γ40 + μ4j
β5j = γ50 + μ5j   (4)

Appendix D. Model III: Final model with student-level predictors and one class-level predictor "program"

Level 1:
Yij = β0j + β1j(learner–learner interaction) + β2j(learner–instructor interaction) + β3j(learner–content interaction) + β4j(Internet self-efficacy) + β5j(self-regulated learning) + eij   (5)
Appendix E. List of courses with student responses

Instructional Technology & Learning Sciences
1. INST 5120/6120 Distance Education Projects: 7 enrolled, 3 responses
2. INST 5140/6140 Producing Distance Education Resources: 21 enrolled, 7 responses
3. INST 6310 Foundations of Educational Technology: 27 enrolled, 12 responses
4. INST 6325 Communication, Instruction, & the Learning Process: 29 enrolled, 7 responses

Communicative Disorders and Deaf Education
5. COMD 2910 Sign Language I (CI), section 1: 20 enrolled, 2 responses
6. COMD 3120 Disorders of Articulation & Phonology: 66 enrolled, 9 responses
7. COMD 5070 Speech Science: 51 enrolled, 3 responses

Family, Consumer, and Human Development
8. FCHD 1010 Balancing Work & Family (BSS): 84 enrolled, 16 responses
9. FCHD 2100 Family Resource Management: 35 enrolled, 2 responses
10. FCHD 2610 Child Guidance: 63 enrolled, 17 responses
11. FCHD 3100 Abuse & Neglect in Family Context: 39 enrolled, 2 responses
12. FCHD 3350 Family Finance (DSS): 129 enrolled, 31 responses
13. FCHD 3510 Infancy & Early Childhood: 29 enrolled, 4 responses
14. FCHD 3530 Adolescence: 26 enrolled, 6 responses
15. FCHD 4220 Family Crises & Interventions: 23 enrolled, 2 responses
16. FCHD 4230 Families and Social Policy: 19 enrolled, 4 responses

Psychology
17. PSY 1400 Analysis of Behavior: Basic Principles: 36 enrolled, 6 responses
18. PSY 2800 Psychological Statistics (QI): 31 enrolled, 21 responses
19. PSY 2950 Orientation to Psychology as a Career & Profession: 28 enrolled, 3 responses
20. PSY 3460 Physiological Psychology: 24 enrolled, 2 responses
21. PSY 3500 Scientific Thinking & Methods in Psychology (DSS/CI): 22 enrolled, 4 responses
22. PSY 4420 Cognitive Psychology (DSS): 16 enrolled, 2 responses
23. PSY 5330 Psychometrics: 11 enrolled, 4 responses

Special Education and Rehabilitation
24. SPED 4000 Education of Exceptional Individuals: 45 enrolled, 6 responses

School of Teacher Education & Leadership (Elementary/Secondary Education)
25. ELED 3000 Foundation Studies & Practicum in Teaching & Classroom Management Level II (CI): 15 enrolled, 3 responses

Health, Physical Education, and Recreation
26. HEP 3000 Drugs and Human Behavior: 34 enrolled, 2 responses

Total: 930 enrolled, 180 responses
Appendix F. Bivariate correlations and regression coefficients of learner–instructor interaction and learner–content interaction on student satisfaction

Learner–instructor interaction on student satisfaction

Course number   Bivariate correlation   r squared   Regression coefficient   Tolerance   VIF
2               0.257                   0.066       0.671                    0.235       4.263
3               0.588                   0.346       0.078                    0.362       2.764
4               0.880                   0.774       –                        0.005       199.381
6               0.250                   0.063       –                        0.076       13.189
8               0.334                   0.112       0.066                    0.799       1.251
10              0.481                   0.231       –                        0.096       10.457
12              0.359                   0.129       0.103                    0.787       1.271
14              0.584                   0.341       –                        0.139       7.202
17              0.664                   0.441       –                        0.000       5028.3
18              0.131                   0.017       −0.053                   0.596       1.677
24              0.983                   0.966       –                        0.019       52.319

Learner–content interaction on student satisfaction

Course number   Bivariate correlation   r squared   Regression coefficient   Tolerance   VIF
2               0.844                   0.712       0.876                    0.644       1.552
3               0.945                   0.893       –                        0.160       6.263
4               0.907                   0.823       –                        0.008       130.395
6               0.587                   0.345       –                        0.204       4.899
8               0.849                   0.721       0.776                    0.563       1.776
10              0.613                   0.376       –                        0.087       11.541
12              0.772                   0.596       0.551                    0.539       1.854
14              0.451                   0.203       0.123                    0.394       2.536
17              0.956                   0.914       –                        0.093       10.753
18              0.601                   0.361       0.617                    0.597       1.676
24              0.991                   0.982       –                        0.018       54.881

Note. The dash sign "–" refers to regression coefficients that cannot be interpreted due to multicollinearity. The regression was performed with five predictors for the courses with at least six participants.
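For reference, the tolerance and VIF columns follow the standard definitions: if R² is obtained from regressing one predictor on the remaining predictors, then tolerance = 1 − R² and VIF = 1/tolerance. A minimal helper illustrating the relationship (not from the paper; the function name is my own):

```python
def collinearity_stats(r_squared):
    """Given R^2 from regressing one predictor on the other predictors,
    return (tolerance, VIF) for that predictor."""
    tolerance = 1.0 - r_squared
    return tolerance, 1.0 / tolerance

# A tolerance near 0.005 (R^2 near 0.995) yields a VIF near 200, which is why
# the corresponding regression coefficients above are marked "-" as
# uninterpretable due to multicollinearity.
```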
Appendix G. Testing analysis results for course category and program in HLM

Course category (for the intercept and the slopes of LI and LC)

Parameter                                   Estimate   SE     df   t-Ratio
Intercept β0 (γ00)                          4.047      0.13   24   30.64***
Course category (γ01)                       0.199      0.14   24   1.42
Learner–learner interaction β1 (γ10)        0.001      0.04   25   0.03
Learner–instructor interaction β2 (γ20)     0.243      0.22   24   1.12
Course category (γ21)                       −0.153     0.22   24   −0.70
Learner–content interaction β3 (γ30)        0.944      0.19   24   5.05
Course category (γ31)                       −0.420     0.19   24   −2.21*
Internet self-efficacy β4 (γ40)             −0.000     0.04   25   −0.01
Self-regulated learning β5 (γ50)            −0.036     0.07   25   −0.49

Program (for the intercept and the slopes of LI and LC)

Parameter                                   Estimate   SE     df   t-Ratio
Intercept β0 (γ00)                          4.103      0.11   22   38.56***
Program dummy 1 (γ01)                       −0.038     0.30   22   −0.13
Program dummy 2 (γ02)                       0.133      0.13   22   1.07
Program dummy 3 (γ03)                       0.159      0.14   22   1.17
Learner–learner interaction β1 (γ10)        −0.001     0.04   25   −0.03
Learner–instructor interaction β2 (γ20)     0.138      0.17   22   0.817
Program dummy 1 (γ21)                       −0.223     0.26   22   −0.87
Program dummy 2 (γ22)                       −0.074     0.18   22   −0.43
Program dummy 3 (γ23)                       −0.003     0.20   22   −0.02
Learner–content interaction β3 (γ30)        0.949      0.15   22   6.30***
Program dummy 1 (γ31)                       0.003      0.44   22   0.01
Program dummy 2 (γ32)                       −0.458     0.16   22   −2.86*
Program dummy 3 (γ33)                       −0.491     0.17   22   −2.84*
Internet self-efficacy β4 (γ40)             0.003      0.04   25   0.06
Self-regulated learning β5 (γ50)            −0.056     0.08   25   −0.73

Note. LI refers to learner–instructor interaction and LC to learner–content interaction. *p < .05. ***p < .001.
Appendix D (continued). Model III, Level 2:

β0j = γ00 + γ01(program dummy 1) + γ02(program dummy 2) + γ03(program dummy 3) + μ0j
β1j = γ10 + μ1j
β2j = γ20 + μ2j
β3j = γ30 + γ31(program dummy 1) + γ32(program dummy 2) + γ33(program dummy 3) + μ3j
β4j = γ40 + μ4j
β5j = γ50 + μ5j   (6)
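The three program dummies in Eq. (6) are standard dummy codes for a four-category program variable, with one program serving as the reference category (all zeros). A sketch of this coding, with purely hypothetical program labels and an assumed reference choice:

```python
# Hypothetical program labels; the choice of "ITLS" as reference is an assumption
# for illustration, not taken from the paper.
PROGRAMS = ["ITLS", "COMD", "FCHD", "PSY"]

def dummy_code(program, reference="ITLS"):
    """Return (dummy 1, dummy 2, dummy 3) for a 4-level categorical predictor:
    one indicator per non-reference category."""
    others = [p for p in PROGRAMS if p != reference]
    return tuple(1 if program == p else 0 for p in others)

dummy_code("FCHD")  # → (0, 1, 0): second dummy set, others zero
dummy_code("ITLS")  # → (0, 0, 0): reference category
```

Under this coding, each γ on a dummy (e.g., γ32, γ33) measures how a program's learner–content slope differs from the reference program's, which is how the moderation effects in Appendix G are interpreted.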
References

Alenezi, A. R., Karim, A. M., & Veloo, A. (2010). An empirical investigation into the role of enjoyment, computer anxiety, computer self-efficacy and Internet experience in influencing the students' intention to use e-learning: A case study from Saudi Arabian government universities. The Turkish Online Journal of Educational Technology, 9(4), 22–34.
Ali, & Ahmad (2011). Key factors for determining students' satisfaction in distance learning courses: A study of Allama Iqbal Open University. Contemporary Educational Technology, 2(2), 118–134.
Allen, M., Bourhis, J., Burrell, N., & Mabry, E. (2002). Comparing student satisfaction with distance education to traditional classrooms in higher education: A meta-analysis. The American Journal of Distance Education, 16(2), 83–97.
Allen, I. E., & Seaman, J. (2003). Sizing the opportunity: The quality and extent of online education in the United States, 2002 and 2003. Retrieved from https://www.sloan-c.org/publications/survey/pdf/sizing_opportunity.pdf
Allen, I. E., & Seaman, J. (2010). Class differences: Online education in the United States. Retrieved from http://sloanconsortium.org/sites/default/files/class_differences.pdf
Anderson, T. (2003). Modes of interaction in distance education: Recent developments and research questions. In M. G. Moore, & W. G. Anderson (Eds.), Handbook of distance education (pp. 129–144). Mahwah, NJ: Erlbaum.
Appiah, O. (2006). Rich media, poor media: The impact of audio/video vs. text/picture testimonial ads on browsers' evaluations of commercial web sites and online products. Journal of Current Issues & Research in Advertising, 28(1), 73–86.
Artino, A. R. (2007). Online military training: Using a social cognitive view of motivation and self-regulation to understand students' satisfaction, perceived learning, and choice. Quarterly Review of Distance Education, 8(3), 191–202.
Artino, A. R. (2008). Promoting academic motivation and self-regulation: Practical guidelines for online instructors. TechTrends, 52(3), 37–45.
Artino, A. R., & Stephens, J. M. (2009). Academic motivation and self-regulation: A comparative analysis of undergraduate and graduate students learning online. Internet and Higher Education, 12, 146–151.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215.
Barnard, L., Lan, W. Y., To, Y. M., Paton, V. O., & Lai, S. L. (2009). Measuring self-regulation in online and blended learning environments. Internet and Higher Education, 12, 1–6.
Barnard, L., Paton, V., & Lan, W. (2008). Online self-regulatory learning behaviors as a mediator in the relationship between online course perceptions with achievement. International Review of Research in Open & Distance Learning, 9(2), 1–11.
Barnard-Brak, L., Paton, V. O., & Lan, W. Y. (2010). Self-regulation across time of first-generation online learners. Research in Learning Technology, 18(1), 61–70.
Battalio, J. (2007). Interaction online: A reevaluation. Quarterly Review of Distance Education, 8(4), 339–352.
Bell, P. D. (2006). Can factors related to self-regulated learning and epistemological beliefs predict learning achievement in undergraduate asynchronous web-based courses? Perspectives in Health Information Management, 3(7), 1–17.
Beqiri, M. S., Chase, N. M., & Bishka, A. (2010). Online course delivery: An empirical investigation of factors affecting student satisfaction. Journal of Education for Business, 85(2), 95–100.
Berge, Z. L. (1999). Interaction in post-secondary web-based learning. Educational Technology, 39(1), 5–11.
Bernard, R. M., Abrami, P. C., Borokhovski, E., Wade, C. A., Tamim, R. M., Surkes, M. A., et al. (2009). A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research, 79(3), 1243–1289.
Biner, P. M., Bink, M. L., Huffman, M. L., & Dean, R. S. (1997a). The impact of remote-site group size on student satisfaction and relative performance in interactive telecourses. The American Journal of Distance Education, 11(1), 23–33.
Biner, P. M., Welsh, K. D., Barone, N. M., Summers, M., & Dean, R. S. (1997b). The impact of remote-site group size on student satisfaction and relative performance in interactive telecourses. American Journal of Distance Education, 11(1), 23–33.
Bolliger, D. U., & Martindale, T. (2004). Key factors for determining student satisfaction in online courses. International Journal on E-Learning, 3(1), 61–67.
Bothma, F., & Monteith, J. (2004). Self-regulated learning as a prerequisite for successful distance learning. South Africa Journal of Education, 24(2), 141–147.
Bray, E., Aoki, K., & Dlugosh, L. (2008). Predictors of learning satisfaction in Japanese online distance learners. International Review of Research in Open & Distance Learning, 9(3), 1–24.
Brockett, R. G., & Hiemstra, R. (1991). Self-direction in adult learning: Perspectives on theory, research, and practice. New York, NY: Routledge.
Brown, B. W., & Liedholm, C. E. (2002). Can web courses replace the classroom in principles of microeconomics? American Economics Review, 92(2), 444–448.
Bucy, E., & Tao, C. C. (2006). Searching Google news: Interactivity, emotion, and the moderating role of Internet self-efficacy. Paper presented at the annual meeting of the International Communication Association, Dresden, Germany.
Burnett, K. (2001). Interaction and student retention, success and satisfaction in web-based learning. Retrieved from ERIC database. (ED459798)
Burnett, K. B., Bonnici, L. J., Miksa, S. D., & Kim, J. (2007). Frequency, intensity and topicality in online learning: An exploration of the interaction dimensions that contribute to student satisfaction in online learning. Journal of Education for Library and Information Science, 48(1), 21–35.
Chang, M. M. (2005). Applying self-regulated learning strategies in a web-based instruction: An investigation of motivation perception. Computer Assisted Language Learning, 18(3), 217–230.
Chang, S. H., & Smith, R. A. (2008). Effectiveness of personal interaction in a learner-centered paradigm distance education class based on student satisfaction. Journal of Research on Technology in Education, 40(4), 407–426.
Chejlyk, S. (2006). The effects of online course format and three components of student perceived interactions on overall course satisfaction. (Doctoral dissertation). Available from Dissertations and Theses database. (UMI No. 3213421)
Cho, M. H. (2004). The effects of design strategies for promoting students' self-regulated learning skills on students' self-regulation and achievements in online learning environments. Retrieved from ERIC database. (ED485062)
Chu, R. J., & Chu, A. Z. (2010). Multi-level analysis of peer support, Internet self-efficacy and e-learning outcomes: The contextual effects of collectivism and group potency. Computers & Education, 55, 145–154.
Chu, R. J., & Tsai, C. C. (2009). Self-directed learning readiness, Internet self-efficacy and preferences towards constructivist Internet-based learning environments among higher-aged adults. Journal of Computer Assisted Learning, 25, 489–501.
Cohen, R. J., & Swerdlik, M. E. (2004). Psychological testing and assessment: An introduction to tests and measurements. New York, NY: McGraw-Hill.
Corno, L. (1986). The metacognitive control components of self-regulated learning. Contemporary Educational Psychology, 11(4), 333–346.
Corno, L., & Mandinach, E. B. (1983). The role of cognitive engagement in classroom learning and motivation. Educational Psychologist, 18(2), 88–108.
Debourgh, G. (1999). Technology is the tool, teaching is the task: Student satisfaction in distance learning. Paper presented at the Society for Information and Technology & Teacher Education international conference, San Antonio, TX.
Dembo, M. H., Junge, L. G., & Lynch, R. (2006). Becoming a self-regulated learner: Implications for web-based education. In H. F. O'Neil, & R. S. Perez (Eds.), Web-based learning: Theory, research, and practice (pp. 185–202). Mahwah, NJ: Erlbaum.
Dennen, V. P., Darabi, A. A., & Smith, L. J. (2007). Instructor–learner interaction in online courses: The relative perceived importance of particular instructor actions on performance and satisfaction. Distance Education, 28(1), 65–79.
DeTure, M. (2004). Cognitive style and self-efficacy: Predicting student success in online distance education. American Journal of Distance Education, 18(1), 21–38.
Dewey, J. (1916). Democracy and education. New York, NY: Macmillan.
Dick, W., Carey, L., & Carey, J. O. (2005). The systematic design of instruction (6th ed.). Boston, MA: Allyn and Bacon.
Eastin, M. S., & LaRose, R. (2000). Internet self-efficacy and the psychology of the digital divide. Retrieved from http://jcmc.indiana.edu/vol6/issue1/eastin.html
Edvardsson, I. R., & Oskarsson, G. K. (2008). Distance education and academic achievement in business administration: The case of the University of Akureyri. International Review of Research in Open & Distance Learning, 9(3), 1–12.
Eom, S. (2009). Effects of interaction on students' perceived learning satisfaction in university online education: An empirical investigation. International Journal of Global Management Studies, 1(2), 60–74.
Gangadharbatla, H. (2008). Facebook me: Collective self-esteem, need to belong, and Internet self-efficacy as predictors of the iGeneration's attitudes toward social networking sites. Cyberpsychology, Behavior & Social Networking, 14(12), 717–722.
Gunawardena, L., Lowe, C., & Anderson, T. (1997). Interaction analysis of a global online debate and the development of a constructivist interaction analysis model for computer conferencing. Journal of Educational Computing Research, 17(4), 395–429.
Han, H., & Johnson, S. D. (2012). Relationship between students' emotional intelligence, social bond, and interactions in online learning. Educational Technology & Society, 15(1), 78–89.
Hargis, J. (2000). The self-regulated learner advantage: Learning science on the internet. Electronic Journal of Science Education, 4(4), 1–8.
Havice, P. A., Davis, T. T., Foxx, K. W., & Havice, W. L. (2010). The impact of rich media presentations on a distributed learning environment: Engagement and satisfaction of undergraduate students. Quarterly Review of Distance Education, 11(1), 53–58.
Herrington, J., Reeves, T. C., & Oliver, R. (2006). Authentic tasks online: A synergy among learner, task, and technology. Distance Education, 27(2), 233–247.
Hirumi, A. (2011). The design and sequencing of online and blended learning interactions: A framework for grounded design. Canadian Learning Journal, 16(2), 21–25.
Hodges, C. B. (2008). Self-efficacy in the context of online learning environments: A review of the literature and directions for research. Performance Improvement Quarterly, 20(3/4), 7–25.
Hodges, C. B., & Kim, C. (2010). Email, self-regulation, self-efficacy, and achievement in a college online mathematics course. Educational Computing Research, 43(2), 207–223.
Hong, T. (2006). The Internet and tobacco cessation: The roles of Internet self-efficacy and search task on the information-seeking process. Journal of Computer-Mediated Communication, 11, 536–556.
Jain, P. J. (2011). Interactions among online learners: A quantitative interdisciplinary study. Education, 131(3), 538–544.
Johnson, S. D., Aragon, S. R., Shaik, N., & Palma-Rivas, N. (2000). Comparative analysis of learner satisfaction and learning outcomes in online and face-to-face learning environments. Journal of Interactive Learning Research, 11(1), 29–49.
Jonassen, D. H., Davidson, M., Collins, M., Campbell, J., & Haag, B. B. (1995). Constructivism and computer-mediated communication in distance education. American Journal of Distance Education, 9(2), 7–25.
Joo, Y., Bong, M., & Choi, H. J. (2000). Self-efficacy for self-regulated learning, academic self-efficacy, and Internet self-efficacy in web-based instruction. Educational Technology Research & Development, 48, 5–17.
Jung, I., Choi, S., Lim, C., & Leem, J. (2002). Effects of different types of interaction on learning achievement, satisfaction and participation in web-based instruction. Innovations in Education & Teaching International, 39(2), 153–162.
Juwah, C. (Ed.). (2006). Interactions in online learning: Implications for theory and practice. New York, NY: Routledge.
Kaminski, K., Switzer, J., & Gloeckner, G. (2009). Workforce readiness: A study of university students' fluency with information technology. Computers & Education, 53(2), 228–233.
Kao, C. P., Wu, Y. T., & Tsai, C. C. (2011). Elementary school teachers' motivation toward web-based professional development, and the relationship with Internet self-efficacy and belief about web-based learning. Teaching and Teacher Education, 27, 406–415.
Keeler, L. C. (2006). Student satisfaction and types of interaction in distance education courses. Dissertation Abstracts International, 67(9) (UMI No. 3233345).
Kim, M., & Lee, E. (2012). A multidimensional analysis tool for visualizing online interactions. Educational Technology & Society, 15(3), 89–102.
King, F. B., Harner, M., & Brown, S. W. (2000). Self-regulatory behavior influences in distance learning. International Journal of Instructional Media, 27(2), 147–156.
Kiriakidis, P. (2005). A path analysis of factors that affect student satisfaction in the online learning environment. Dissertation Abstracts International, 66(7) (UMI No. 3183535).
Koseke, G. F., & Koseke, R. D. (1991). Student burnout as a mediator of the stress–outcome relationship. Research in Higher Education, 32(4), 415–431.
Kuo, Y. C., Eastmond, J. N., Schroder, K. E. E., & Bennett, L. J. (2009). Student perceptions of interactions and course satisfaction in a blended learning environment. Paper presented at the Educational Multimedia, Hypermedia & Telecommunications World Conference, Honolulu, HI.
Kuo, Y. C., Walker, A., Belland, B. R., & Schroder, K. E. E. (2013). A predictive study of student satisfaction in online education programs. The International Review of Research in Open and Distance Learning, 14(1), 16–39.
Kuo, Y. C., Walker, A., & Schroder, K. E. E. (2010). Interaction and other variables as predictors of student satisfaction in online learning environments. Paper presented at the annual meeting of the Society for Information Technology & Teacher Education (SITE), San Diego, California.
Lee, J. (2012). Patterns of interaction and participation in a large online course: Strategies for fostering sustainable discussion. Educational Technology & Society, 15(1), 260–272.
Lee, H. W., Kim, K. Y., & Grabowski, B. L. (2010). Improving self-regulation, learning strategy use, and achievement with metacognitive feedback. Educational Technology Research and Development, 58, 629–648. http://dx.doi.org/10.1007/s11423-010-9153-6
Lee, H. J., & Rha, I. (2009). Influence of structure and interaction on student achievement and satisfaction in web-based distance learning. Educational Technology & Society, 12(4), 372–382.
Liang, J. C., & Tsai, C. C. (2008). Internet self-efficacy and preferences toward constructivist Internet-based learning environments: A study of pre-school teachers in Taiwan. Educational Technology & Society, 11(1), 226–237.
Liang, J. C., & Wu, S. H. (2010). Nurses' motivations for web-based learning and the role of Internet self-efficacy. Innovations in Education and Teaching International, 47(1), 25–37.
Liao, P. W., & Hsieh, J. Y. (2011). What influences Internet-based learning? Social Behavior and Personality, 39(7), 887–896.
Livingstone, S., & Helsper, E. (2010). Balancing opportunities and risks in teenagers' use of the Internet: The role of online skills and Internet self-efficacy. New Media & Society, 12(2), 309–329.
Maas, C. J., & Hox, J. J. (2005). Sufficient sample sizes for multilevel modeling. Methodology, 1(3), 86–92.
Macon, D. K. (2011). Student satisfaction with online courses versus traditional courses: A meta-analysis. Dissertation Abstracts International, 72(5) (UMI No. 3447725).
Matuga, J. M. (2009). Self-regulation, goal orientation, and academic achievement of secondary students in online university courses. Educational Technology & Society, 12(3), 4–11.
McLaren, A. C. (2010). The effects of instructor–learner interactions on learner satisfaction in online masters courses (Doctoral dissertation). Available from Dissertations and Theses database. (UMI No. 3398368).
McManus, T. F. (2000). Individualizing instruction in a web based hypermedia learning environment: Non-linearity, advance organizers, and self-regulated learners. Journal of Interactive Learning Research, 11(3), 219–251.
Moller, L., & Huett, J. (2012). The next generation of distance education: Unconstrained learning. New York, NY: Springer.
Moore, M. G. (1989). Three types of interaction. The American Journal of Distance Education, 3(2), 1–6.
Moore, M. G., & Kearsley, G. (1996). Distance education: A systems view. New York, NY: Wadsworth.
Nandi, D., Hamilton, M., & Harland, J. (2012). Evaluating the quality of interaction in asynchronous discussion forums in fully online courses. Distance Education, 33(1), 5–30.
Nicol, D. (2009). Assessment for learner self-regulation: Enhancing achievement in the first year using learning technologies. Assessment & Evaluation in Higher Education, 34(3), 335–352.
Noel-Levitz (2011). National online learners priorities report. Retrieved from https://www.noellevitz.com/upload/Papers_and_Research/2011/PSOL_report%202011.pdf
Northrup, P., Lee, R., & Burgess, V. (2002). Learner perceptions of online interaction. Paper presented at the ED-MEDIA 2002 World Conference on Educational Multimedia, Hypermedia & Telecommunications, Denver, CO.
Offir, B., Bezalel, R., & Barth, I. (2007). Introverts, extroverts, and achievement in a distance learning environment. American Journal of Distance Education, 21(1), 3–20.
Pajares, F. (1996). Self-efficacy beliefs in academic settings. Review of Educational Research, 66(4), 543–578.
Palmer, A., & Koenig-Lewis, N. (2012). The effects of pre-enrolment emotions and peer group interaction on students' satisfaction. Journal of Marketing Management, 27, 1208–1231.
Paraskeva, Mysirlaki, & Choustoulakis (2009). Designing collaborative learning environments using educational scenarios based on self-regulation. International Journal of Advanced Corporate Learning, 2(1), 42–49.
Peterson, S. (2011). Self-regulation and online course satisfaction in high school. Dissertation Abstracts International, 71(10A) (UMI No. 3466080).
Pilarski, P. P., Johnstone, D. A., Pettepher, C. C., & Osheroff, N. (2008). From music to macromolecules: Using rich media/podcast lecture recordings to enhance the preclinical educational experience. Medical Teacher, 30(6), 630–632.
Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82(1), 33–40.
Pintrich, P. R., Smith, D. A., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801–813.
Price, M. L. (1993). Student satisfaction with distance education at the University of South Carolina as it correlates to medium of instruction, educational level, gender, working status, and reason for enrollment. Dissertation Abstracts International, 54(11).
Puzziferro, M. (2006). Online technologies self-efficacy, self-regulated learning, and experiential variables as predictors of final grade and satisfaction in college-level online courses (Doctoral dissertation). Available from Dissertations and Theses database. (UMI No. 3199984).
Puzziferro, M. (2008). Online technologies self-efficacy and self-regulated learning as predictors of final grade and satisfaction in college-level online courses. American Journal of Distance Education, 22(2), 72–89.
Rains, S. A. (2008). Seeking health information in the information age: The role of Internet self-efficacy. Western Journal of Communication, 72(1), 1–18.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods. Thousand Oaks, CA: Sage.
Reinhart, J., & Schneider, P. (2001). Student satisfaction, self-efficacy, and the perception of the two-way audio/video distance learning environment: A preliminary examination. Quarterly Review of Distance Education, 2(4), 357–365.
Roberts, J. K. (2002). The importance of the intraclass correlation in multilevel and hierarchical linear modeling designs. Multiple Linear Regression Viewpoints, 28, 19–31.
Rodriguez Robles, F. M. (2006). Learner characteristic, interaction and support service variables as predictors of satisfaction in web-based distance education. Dissertation Abstracts International, 67(7) (UMI No. 3224964).
Rosmalen, P. V., Sloep, P. B., Brouns, F., Kester, L., Berlanga, A., Bitter, M., et al. (2008). A model for online learner support based on selecting appropriate peer tutors. Journal of Computer Assisted Learning, 24, 483–493.
Ross, M. E., Green, S. B., Salisbury-Glennon, J. D., & Tollefson, N. (2006). College students' study strategies as a function of testing: An investigation into metacognitive self-regulation. Innovative Higher Education, 30(5), 361–375.
Sahin, I. (2007). Predicting student satisfaction in distance education and learning environments. Retrieved from ERIC database. (ED496541).
Schunk, D. H. (1989). Self-efficacy and achievement behaviors. Educational Psychology Review, 1(3), 173–208.
Schunk, D. (2005). Self-regulated learning: The educational legacy of Paul R. Pintrich. Educational Psychologist, 40(2), 85–94.
Shea, P., & Bidjerano, T. (2010). Learning presence: Towards a theory of self-efficacy, self-regulation, and the development of a communities of inquiry in online and blended learning environments. Computers & Education, 55, 1721–1731.
Sher, A. (2004). Assessing the relationship of student–instructor and student–student interaction with student learning and satisfaction in web-based distance learning programs (Doctoral dissertation). Available from Dissertations and Theses database. (UMI No. 3126415).
Shi, J., Chen, Z., & Tian, M. (2011). Internet self-efficacy, the need for cognition, and sensation seeking as predictors of problematic use of the Internet. CyberPsychology, Behavior, and Social Networking, 14(4), 213–234.
Shih, C. C., & Gamon, J. (2001). Web-based learning: Relationships among student motivation, attitudes, learning styles, and achievement. Journal of Agricultural Education, 42(4), 12–20.
Song, L., & Hill, J. R. (2009). Understanding adult learners' self-regulation in online environments: A qualitative study. International Journal of Instructional Media, 36(3), 263–274.
Stevens, J. P. (2002). Applied multivariate statistics for the social sciences. Mahwah, NJ: Erlbaum.
Sun, J., & Rueda, R. (2012). Situational interest, computer self-efficacy and self-regulation: Their impact on student engagement in distance education. British Journal of Educational Technology, 43(2), 191–204.
Sutton, L. A. (2001). The principle of vicarious interaction in computer-mediated communications. International Journal of Educational Telecommunications, 7(3), 223–242.
Tella, A. (2011). An assessment of mathematics teachers' Internet self-efficacy: Implications on teachers' delivery of mathematics instruction. International Journal of Mathematical Education in Science and Technology, 42(2), 155–174.
Thompson, L. F., Meriac, J. P., & Cope, J. G. (2002). Motivating online performance: The influences of goal setting and Internet self-efficacy. Social Science Computer Review, 20(2), 149–160.
Thurmond, V. A. (2003). Examination of interaction variables as predictors of students' satisfaction and willingness to enroll in future web-based courses while controlling for student characteristics. Retrieved from http://www.bookpump.com/dps/pdf-b/1121814b.pdf
Thurmond, V. A., & Wambach, K. (2004). Understanding interactions in distance education: A review of the literature. International Journal of Instructional Technology and Distance Learning, 1(1), 9–26.
Torkzadeh, G., Chang, C. J., & Demirhan, D. (2006). A contingency model of computer and Internet self-efficacy. Information & Management, 43(4), 541–550.
Tsai, C. C. (2012). The development of epistemic relativism versus social relativism via online peer assessment, and their relations with epistemological beliefs and Internet self-efficacy. Educational Technology & Society, 15(2), 309–316.
Tsai, C. C., Chuang, S. C., Liang, J. C., & Tsai, M. J. (2011). Self-efficacy in Internet-based learning environments: A literature review. Educational Technology & Society, 14(4), 222–240.
Tsai, M. J., & Tsai, C. C. (2003). Information searching strategies in web-based science learning: The role of Internet self-efficacy. Innovations in Education and Teaching International, 40(1), 43–50.
Veletsianos, G. (2010). Emerging technologies in distance education. Retrieved from http://www.aupress.ca/books/120177/ebook/99Z_Veletsianos_2010-Emerging_Technologies_in_Distance_Education.pdf
Wadsworth, L. M., Husman, J., Duggan, M. A., & Pennington, M. N. (2007). Online mathematics achievement: Effects of learning strategies and self-efficacy. Journal of Developmental Education, 30(3), 6–12.
Wurtele, S. M. (2008). Influence of faculty interaction on student satisfaction in online undergraduate courses (Doctoral dissertation). Available from Dissertations and Theses database. (UMI No. 3330340).
Yang, Y. C., & Park, E. (2012). Applying strategies of self-regulation and self-efficacy to the design and evaluation of online learning programs. Journal of Educational Technology Systems, 40(3), 323–335.
Yukselturk, E., & Bulut, S. (2005). Relationships among self-regulated learning components, motivational beliefs and computer programming achievement in an online learning environment. Mediterranean Journal of Educational Studies, 10(1), 91–112.
Yukselturk, E., & Yildirim, Z. (2008). Investigation of interaction, online support, course structure and flexibility as the contributing factors to students' satisfaction in an online certificate program. Educational Technology & Society, 11(4), 51–65.
Zickuhr, K., & Smith, A. (2011). 28% of American adults use mobile and social location-based services. Retrieved from http://www.pewinternet.org//media//Files/Reports/2011/PIP_Location-based-services.pdf
Zimmerman, B. J. (1989). A social cognitive view of self-regulated academic learning. Journal of Educational Psychology, 81(3), 329–339.
Zimmerman, B. J. (2000). Attaining self-regulation: A social cognitive perspective. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13–39). San Diego: Academic Press.
Zimmerman, B. J., & Martinez-Pons, M. (1986). Development of a structured interview for assessing student use of self-regulated learning strategies. American Educational Research Journal, 23(4), 614–628.
Zimmerman, B. J., & Martinez-Pons, M. (1988). Construct validation of a strategy model of student self-regulated learning. Journal of Educational Psychology, 80(3), 284–290.
Zimmerman, B. J., & Schunk, D. H. (1989). Self-regulated learning and academic achievement: Theory, research, and practice. New York, NY: Springer-Verlag.