Efficient learning using a virtual learning environment in a university class


Computers & Education 56 (2011) 495–504

Contents lists available at ScienceDirect

Computers & Education journal homepage: www.elsevier.com/locate/compedu

Daniel Stricker a,b,*, David Weibel a, Bartholomäus Wissmath a

a Department of Psychology, University of Bern, Muesmattstrasse 45, 3012 Bern, Switzerland
b Institute for Research in Open-, Distance and eLearning, Ueberlandstrasse 12, P.O. Box, 3900 Brig, Switzerland

Article history:
Received 15 February 2010
Received in revised form 17 September 2010
Accepted 18 September 2010

Keywords: Evaluation methodologies; Interactive learning environments; Media in education; Blended learning

Abstract

This study examines a blended learning setting in an undergraduate course in psychology. A virtual learning environment (VLE) complemented the face-to-face lecture. Usage was voluntary, and the VLE was designed to support the learning process of the students. Data from users (N = 80) and non-users (N = 82) from two cohorts were collected. Control variables such as demographic data, attitude towards the learning subject, computer literacy, motivation, learning effort, and available infrastructure were captured by means of a self-report. As a learning outcome, the grade in the final exam was included. For the VLE-users, the mean performance in the VLE was taken as a predictor for success in the final exam. Two different groups of VLE-users were observed and classified into 'light' and 'heavy' users. The results showed that among those students who had spent two or more hours per week on pre- and post-processing of the lectures, 'heavy' VLE-users performed better than non-users in the final exam. Additionally, the 'heavy' users' performance in the VLE was the best predictor for the grade in the final exam. We discuss the results in the context of self-regulated learning competence. © 2010 Elsevier Ltd. All rights reserved.

1. Introduction

In our study, the use of a virtual learning environment (VLE), an Internet-based self-assessment tool, within a university course is outlined. The VLE was provided to the students as an addition to the face-to-face lecture of the undergraduate course 'Introduction to Psychology of Perception'. Thus, in the sense of blended learning, a traditional university lecture was combined with an Internet-based test module. Usage was voluntary. The purpose of the VLE was to enhance students' learning process.

1.1. Student performance vs. student satisfaction

It is still unclear whether a blended learning environment can enhance efficient learning compared to traditional face-to-face education. From the student's point of view, learning is probably considered efficient if good grades can be achieved in a short time and with little effort. Several studies evaluate the efficiency of e-learning or blended learning by looking at ease of use and perceived usefulness of the e-learning environment (Liaw, 2008; Locatis, Vega, Bhagwat, Liu, & Conde, 2008; Sun, Tsai, Finger, Chen, & Yeh, 2008), perceived learning and user satisfaction (Abrantes, Seabra, & Lages, 2007; Liaw, 2008; Saadé, He, & Kira, 2007; So & Brush, 2008; Sun et al., 2008; Webb Boyd, 2008), or the attitude towards the e-learning environment (Liaw, Huang, & Chen, 2007; Tait, Tait, Thornton, & Edwards, 2008) as outcome variables. Additionally, other studies looking at the outcome directly found that the use of new technologies in e-learning or blended learning settings actually improved the learning outcome (Campbell, Gibson, Hall, Richards, & Callery, 2008; Connolly, MacArthur, Stansfield, & McLellan, 2007; McDaniel, Lister, Hanna, & Roy, 2007; O'Dwyer, Carey, & Kleiman, 2007; Pereira et al., 2007). Within these studies, the grade in the final exam was used to operationalise the learning outcome.
There are also meta-analyses comparing traditional learning settings to distance and e-learning (Bernard et al., 2004; Shachar & Neumann, 2010; U.S. Department of Education, 2009). Whereas Bernard et al. (2004) did not find a positive impact of distance learning on performance, more recent publications revealed different findings. Shachar and Neumann (2010) as well as the U.S. Department of Education (2009) provide evidence that students in a distance learning setting outperform their counterparts in ‘traditional’ learning environments. This could be because implications of earlier distance

* Corresponding author. Department of Psychology, University of Bern, Muesmattstrasse 45, 3012 Bern, Switzerland. Tel.: +41 31 631 3642; fax: +41 31 631 3606. E-mail addresses: [email protected] (D. Stricker), [email protected] (D. Weibel), [email protected] (B. Wissmath). 0360-1315/$ – see front matter © 2010 Elsevier Ltd. All rights reserved. doi:10.1016/j.compedu.2010.09.012


and e-learning research have been put into practice. However, compared to distance and e-learning, results for blended learning are not as numerous. For example, Pereira et al. (2007) tested the effectiveness of a blended learning approach in an anatomy course. Two cohorts were tested: the first cohort participated in the traditional teaching setting; the second cohort participated in the blended learning setting. The blended learning cohort received several new technologies to support their learning process, which resulted in a considerable improvement of the average mark. Following another methodological approach, Campbell et al. (2008) showed that online learning may enhance students' performance. Their study has two major advantages: they (1) compared full-time face-to-face, part-time face-to-face, and online students in the same cohorts and (2) analysed performance in seven different modules over three years. Results showed that online students performed consistently better than part-time face-to-face students, who in turn performed better than full-time face-to-face students. The authors rightly emphasise the comparison of online and part-time face-to-face students, because the students in these two groups were more experienced and more motivated than the full-time face-to-face students. These two examples demonstrate that the approaches used to investigate the effects of online and blended learning can vary strongly between studies. Bliuc, Goodyear, and Ellis (2007) give a broad overview of the most frequently used research methodologies in the context of e-learning and blended learning. However, the value of media comparison studies has often been criticised on methodological grounds (Clark & Jones, 2001; Phipps & Merisotis, 1999). In the study presented here, we try to overcome such methodological drawbacks. The data were gathered during two consecutive years for two different cohorts.
Only the first half of the learning material was made available for online training for the first cohort, and only the second half for the second cohort. Furthermore, three sources of information were combined to obtain a firm evaluation of the contribution of the VLE to the learning outcome. Data on VLE-usage were gathered by log file analysis. This allowed us to evaluate when and how often students used the VLE and, moreover, to assess the students' performance in the VLE. Additionally, a questionnaire provided information about general demographic data and about students' attitudes towards the learning subject, computer literacy, learning effort, motivation in general, available equipment, and learning habits. As the main dependent variable, the grade in the final exam was included to measure the learning outcome.

1.2. Predicting students' performance

Under what conditions do people acquire knowledge? This seems to depend on many variables. In the case of e-learning, Kerr, Rynearson, and Kerr (2006) identified four major user characteristics that predicted students' success in the context of online learning: reading and writing skills, independent learning, motivation, and computer literacy. Liaw (2008) identified user characteristics, environmental factors, and the behavioural intention of using e-learning as major influences on e-learning effectiveness. The effect of cognitive or learning styles on performance in different e-learning settings has been examined as well (Cook, 2005; Lu, Yu, & Liu, 2003; Workman, 2004). Lu et al. (2003) found no significant impact of graduate students' learning styles and learning patterns on learning performance, whereas Workman (2004) showed that cognitive styles significantly influence learning performance and perceived effectiveness.
The author found that people who are able to deal with abstract information and prefer collaborative learning performed better in a guided, interactive, and non-sequential learning environment than in a sequential, repetitive, and individual-focused one. According to Kerres (2000) as well as Jonassen, McAleese, and Duffy (1993), learners with low previous knowledge and low intrinsic motivation benefit from highly structured and sequential learning contents. In our study, we aim to compare VLE-users and non-users in terms of performance, which we measure by the grade in the final exam. Thus, we are interested in whether VLE-users attain similar or higher grades with less effort compared to non-users. Effort is operationalised by the amount of time spent on learning. In order to control for differences between the groups (users vs. non-users), several attributes such as attitude towards the learning subject, computer literacy, motivation, learning style, available equipment, and Internet connection were collected. As an independent measure of academic achievement, we additionally integrated the final grade in a statistics course as a control variable.

1.3. Hypotheses

We hypothesise that VLE-users get better grades in the final exam than non-users. The aim of the VLE is not only to deliver learning material, but also to provide the students with a framework that supports their learning progress. Therefore, we predict that, compared to non-users, VLE-users perform better in the final exam with less effort. We also expect that VLE-users perform better on the trained material, but not on the untrained material. The prognostic validity of students' performance in the VLE for predicting the grade in the final exam is checked by a multiple linear regression.
The students' overall performance in the VLE, the sum of problems solved in the VLE, motivation, attitude towards perception, and learning effort are included as predictors for the grade in the final exam. Among these variables, we expected the students' overall performance in the VLE to be the best predictor for the final grade.

2. Methods

2.1. Virtual learning environment

The VLE used for this study is structured as follows: after starting the tool, a list of available chapters appears. Each chapter consists of a number of subtopics. Each subtopic is introduced by a short description and comprises four to six questions. The descriptions are illustrated by figures and/or short movie clips. After the questions are answered, feedback is given to the user, which is also illustrated by figures and movies. Fig. 1 shows the conceptual organisation of the VLE. The whole learning unit 'Introduction to perception' consists of 84 subtopics with a total of 418 questions, divided into twelve chapters. Once the questions of a subtopic are answered correctly, it is no longer presented unless the user deletes his or her answers. The tool furthermore offers the possibility to get detailed performance feedback after each training session. Thus, access to performance measures for each problem, chapter, or topic is provided for the users. These performance measures give feedback about the users' strengths and weaknesses, thus allowing the structuring of further learning.

Fig. 1. Schematic depiction of the VLE design.

2.2. Subjects

A total of 409 students, recruited from two cohorts, took part in the final exams. Nine students were excluded because their final grade was more than two standard deviations below the mean performance. Of the remaining 400 students, 162 completed the questionnaire (a response rate of 40 percent). These 162 students were used for data analysis. Subjects were classified as VLE-users if they (1) regularly utilised the e-learning tool, (2) took part in the final exam, and (3) returned the questionnaire, which covered the different variables we assessed. In contrast, subjects were classified as non-users if they did not use the VLE but took part in the final exam and returned the questionnaire.

2.3. Instruments

After completing the final exam, all students received a questionnaire. Demographic information, questions about the infrastructure available at home (e.g. type of Internet access, type of operating system and web browser), learning style (e.g. learning in a learning group or not), learning effort (e.g. how much time per week was spent preparing for the lecture), attitude towards perception, and computer literacy were captured. Learning effort was measured in four levels: less than 1 h per week, 1 to less than 2 h per week, 2 to less than 3 h per week, and 3 or more hours per week. The attitude towards perception was measured using nine five-point bipolar items (cf. Thurstone, 1931). Exemplary items are: 'Perception is interesting vs. uninteresting', 'Perception is important vs. unimportant', or 'Perception is easy to understand vs. hard to understand'. Six items were used to assess computer literacy (cf. Richter, Naumann, & Groeben, 2001); a five-point scale was used (1 = 'I completely disagree'; 5 = 'I completely agree').
Exemplary items are: 'I think it is convenient to have a computer at hand for my study' or 'I can hardly imagine working without computers'. One question assessed the students' motivation: 'If I don't succeed in solving a problem on the first try, I keep on trying until I succeed.' The grade at the final exam of the course 'Statistical methods' was used as an independent variable to indicate academic achievement. Several studies have used grades in mathematics or statistics as indicators of general academic achievement (Beyth-Marom, Chayut, Roccas,


& Sagiv, 2003; Tabatabaei & Reichgelt, 2006). For example, Tabatabaei and Reichgelt (2006) found that performance in math courses is a good indicator for predicting students' grades in other courses. The assessment of the statistics grade allows for controlling whether VLE-users and non-users differ in terms of general academic achievement and thus enables us to control for a potential confounding variable. Moreover, the course 'Statistical methods' was held by the same lecturer as 'Introduction to Psychology of Perception'; thus, possible differences in the teaching styles of lecturers were minimised. The grade at the final exam of the lecture 'Introduction to Psychology of Perception' served as the objective performance measurement. It is important to note that the questions in the final exam differed from the questions presented in the VLE. VLE-users were therefore not familiar with the questions that were asked in the final exam.
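The internal consistencies reported for such multi-item scales in Section 3.2 are Cronbach's alpha values. For reference, alpha can be computed from a respondents-by-items score matrix as follows; the responses below are made up for illustration and are not the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of sum scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Made-up responses of six students to a four-item, five-point scale
scores = [[4, 5, 4, 5],
          [2, 2, 3, 2],
          [5, 4, 5, 5],
          [3, 3, 3, 4],
          [1, 2, 2, 1],
          [4, 4, 5, 4]]
print(round(cronbach_alpha(scores), 2))  # 0.96 for these synthetic data
```

Values above roughly .7, like the .84 and .72 reported below for the two scales, are conventionally taken as acceptable internal consistency.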

2.4. Procedure

The VLE was introduced in the lecture 'Introduction to Psychology of Perception', which is part of the undergraduate studies in psychology. All students were informed about the availability of the VLE. For ethical reasons, students were not randomly assigned to the VLE-user group and the non-user group; instead, the use of the VLE was voluntary. By creating a personal account for the VLE, every user consented to data being collected within the VLE. All information we captured was treated confidentially. After the final exam, all students received the questionnaire; the data from the VLE-usage, the questionnaire, and the grades were then merged and made anonymous. In order to gain insight into the effects caused by the VLE, and in order to be able to separate possible grouping effects from the VLE effects, the following evaluation design was chosen: during the first year, only the first six chapters were made available in the VLE; during the following second year, only the second half (chapters seven to twelve) was accessible. Fig. 2 illustrates this design. During both years the VLE was accessible to the whole cohort. Students who had to repeat the course were excluded from analysis. Thus, the students took part in this study either in the first or in the second year. This design allowed measuring the performance separately for trained and untrained material for each VLE-user.
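The cohort-dependent split into trained and untrained material can be sketched as follows. This is an illustrative reconstruction: the record layout, numbers, and function name are invented, not taken from the study's materials.

```python
# Hypothetical per-item exam records: (student, cohort, chapter, correct).
# Cohort 1 trained chapters 1-6 in the VLE; cohort 2 trained chapters 7-12.
records = [
    (1, 1, 2, 1), (1, 1, 5, 1), (1, 1, 8, 0), (1, 1, 11, 1),
    (2, 2, 2, 0), (2, 2, 5, 1), (2, 2, 8, 1), (2, 2, 11, 1),
]

TRAINED = {1: range(1, 7), 2: range(7, 13)}

def split_performance(records):
    """Proportion correct on trained vs. untrained items per student."""
    scores = {}
    for student, cohort, chapter, correct in records:
        key = "trained" if chapter in TRAINED[cohort] else "untrained"
        scores.setdefault(student, {"trained": [], "untrained": []})
        scores[student][key].append(correct)
    return {
        s: {k: sum(v) / len(v) for k, v in parts.items()}
        for s, parts in scores.items()
    }

print(split_performance(records))
# student 1: trained 1.0 (chapters 2 and 5), untrained 0.5 (chapters 8 and 11)
```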

3. Results

3.1. VLE-usage

Forty-four students were classified as VLE-users and 34 students as non-users in the first cohort. For the second cohort, we identified 38 VLE-users and 46 non-users. A Chi-square test revealed no association between the variables VLE-usage and cohort, χ²(1, N = 162) = 2.020, p = 0.155. A log file analysis showed that the VLE was used frequently. The VLE-users of the first cohort solved a total of 2458 problems from the beginning of the semester until the day of the exam. The VLE-users of the second cohort solved a total of 2165 problems. For both cohorts we found that two thirds of all registered activity in the VLE took place during the last week before the exam. The median session duration was 21 min 53 s for the first cohort and 21 min 07 s for the second cohort. On average, a VLE-user of the first cohort (N = 44) spent 106 min 04 s (ranging from 10 min 49 s to 6 h 11 min 3 s) in the VLE during the whole semester. For the second cohort (N = 38), a VLE-user on average spent 86 min 45 s (ranging from 4 min 33 s to 6 h 26 min 8 s) in the VLE during the whole semester. For both cohorts, a bimodal distribution of the sum of problems solved was observed. Fig. 3 displays the histograms for both cohorts. To address the bimodal nature of this variable for further analysis, the VLE-users were divided into light and heavy users by a median split of

Fig. 2. Outline of the experimental design. The first cohort had access to chapters 1–6 in the VLE; the second cohort had access to chapters 7–12. Both cohorts were tested on the whole topic at the final exam, and afterwards the questionnaire was delivered to all students.


Fig. 3. Histograms for sum of problems solved. A) first cohort, B) second cohort.

the sum of problems solved. Students who had solved fewer than 31 problems in total were classified as light users; all other users were classified as heavy users.

3.2. Comparison of light VLE-users, heavy VLE-users, and non-users

The percentage of females and males did not differ between the light user, heavy user, and non-user groups, χ²(2, N = 162) = .551, p = 0.759. There were 15 male and 65 female non-users, 9 male and 32 female light VLE-users, and 10 male and 31 female heavy VLE-users. No association between VLE-usage and type of Internet connection was found, χ²(8, N = 157) = 11.922, p = 0.155. The proportion of available computer systems (Windows or Macintosh) was the same for the three groups, χ²(4, N = 145) = 5.650, p = 0.227. The proportion of students who learn in groups and students who learn alone did not differ between the three groups, χ²(2, N = 161) = .745, p = 0.689. A Kruskal–Wallis test was used to compare light VLE-users, heavy VLE-users, and non-users regarding learning effort. The test revealed no significant difference between the three groups, χ²(2, N = 162) = 2.129, p = 0.345. Motivation did not differ between the three groups either, χ²(2, N = 160) = .066, p = 0.968. The two scales measuring attitude towards perception and computer literacy turned out to be reliable (Cronbach's alpha .84 and .72, respectively). In order to compare light VLE-users, heavy VLE-users, and non-users in terms of attitude towards perception and computer literacy, two Kruskal–Wallis tests were conducted. The results showed that neither the mean score of attitude towards perception nor the mean score of computer literacy differed between the three groups, χ²(2, N = 162) = .566, p = 0.753 and χ²(2, N = 162) = 2.805, p = 0.246, respectively. For 144 students who returned the questionnaire, the grades in statistics were available. The correlation between the grade in perception and the grade in statistics was r(143) = .491, p < 0.001.
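Group comparisons of this kind can be reproduced with scipy.stats. The sketch below re-computes the gender-by-usage chi-square test from the counts reported above, and illustrates a Kruskal–Wallis test on synthetic ordinal data (the actual effort ratings are not reproduced here):

```python
import numpy as np
from scipy import stats

# Gender counts per usage group, as reported in Section 3.2
# (rows: male / female; columns: non-users, light users, heavy users)
counts = np.array([[15, 9, 10],
                   [65, 32, 31]])
chi2, p, dof, _ = stats.chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")  # chi2(2) = 0.551, p = 0.759

# Kruskal-Wallis on an ordinal variable (e.g. learning effort, levels 1-4);
# these ratings are randomly generated, purely for illustration
rng = np.random.default_rng(0)
non_users = rng.integers(1, 5, size=80)
light = rng.integers(1, 5, size=41)
heavy = rng.integers(1, 5, size=41)
h, p_kw = stats.kruskal(non_users, light, heavy)
print(f"H(2) = {h:.3f}, p = {p_kw:.3f}")
```

Note that the chi-square result computed from the published counts matches the reported χ²(2) = .551, p = .759.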
A one-way ANOVA revealed no difference in the mean grades in statistics between light VLE-users, heavy VLE-users, and non-users, F(2, 141) = .595, p = 0.553. Furthermore, we calculated the correlation between motivation and learning effort (measured on the original 4-point scale). Motivation correlated significantly with learning effort for the light VLE-users (r(41) = .338, p = 0.031), whereas the correlation was significant neither for the heavy VLE-users (r(40) = .147, p = 0.366) nor for the non-users (r(79) = .076, p = 0.503).

3.3. Learning efficiency

Performance was assessed as the percentage of correctly answered items in the exam. The exam consisted of 20 items. The comparison between the light VLE-users, heavy VLE-users, and the 80 non-users in terms of performance on untrained material was tested by a Kruskal–Wallis test and revealed no difference, χ²(2, N = 162) = .203, p = 0.903. The comparison between the light VLE-users, heavy VLE-users, and the non-users in terms of performance on trained material was also tested by a Kruskal–Wallis test; the difference was marginally non-significant, χ²(2, N = 162) = 5.618, p = 0.06. A post-hoc test by Conover (1999), however, revealed that the heavy users performed significantly better than the non-users (p < 0.05). The post-hoc test was applied using BrightStat (Stricker, 2008). The three groups, however, did not differ significantly in terms of grade in perception, χ²(2, N = 162) = 1.583, p = 0.453. The mean grades in perception and the performances on trained and untrained material, as well as the corresponding standard errors, are displayed in Table 1. Grades range from 6.0 (best, equivalent to A) to 4.0 (sufficient, equivalent to E). Grades below 4.0 denote a failure (equivalent to F).

Table 1
Means and standard errors for light and heavy VLE-users as well as non-users for performance measurements.

Measure                          VLE-usage   n    Mean   Std. error
Grade in perception              Heavy       41   5.01   .097
                                 Light       41   4.95   .103
                                 No          80   4.94   .071
Grade in statistics              Heavy       37   4.55   .118
                                 Light       39   4.38   .129
                                 No          68   4.38   .107
Performance trained material     Heavy       41   .82    .024
                                 Light       41   .77    .025
                                 No          80   .76    .017
Performance untrained material   Heavy       41   .86    .018
                                 Light       41   .85    .02
                                 No          80   .86    .014

In order to check our main hypothesis that VLE-usage enhances learning efficiency, a two-way analysis of covariance (ANCOVA) was conducted. Learning effort and VLE-usage were included as independent variables, grade in perception as the dependent variable. For the ANCOVA, we recoded the learning effort variable into two categories: less than 2 h per week and two or more hours per week. VLE-usage contained three levels: non-users, light users, and heavy users. Grade in statistics served as a covariate, measuring the students' academic achievement. The grade in statistics was available for 144 students. As reported above, the grade in statistics correlated significantly with the grade in perception. Furthermore, the two-way interactions between the grade in statistics and the learning effort, F(1, 132) = 1.505, p = 0.222, and between the grade in statistics and VLE-usage, F(2, 132) = .373, p = 0.690, as well as the three-way interaction between grade in statistics, learning effort, and VLE-usage, F(2, 132) = .334, p = 0.716, were not significant. Hence, all preconditions were met to include the grade in statistics as a covariate in the analysis. The group sizes as well as the corresponding means and standard deviations are displayed in Table 2. The main effect of learning effort, F(1, 137) = 6.441, p = 0.012, as well as the two-way interaction, F(2, 137) = 4.693, p = 0.011, turned out to be significant. No main effect of VLE-usage was observed, F(2, 137) = 1.735, p = 0.180. Fig. 4 shows the two-way interaction.

Fig. 4. Interaction between VLE-usage and learning effort. VLE-users were divided into light and heavy users.
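The decisive usage-by-effort interaction can be illustrated as a nested-model F-test, which is one common way to compute ANCOVA effects. The data below are simulated with an interaction built in; they are not the study's data, although the design (144 students, a 2 × 3 layout, one covariate) reproduces the residual degrees of freedom of 137 reported above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated stand-in data: usage (0 = non, 1 = light, 2 = heavy),
# effort (0 = <2h, 1 = >=2h), statistics grade as covariate
n = 144
usage = rng.integers(0, 3, n)
effort = rng.integers(0, 2, n)
stat_grade = rng.normal(4.5, 0.6, n)
# outcome with a built-in usage x effort interaction, as found in the study
grade = 4.4 + 0.25 * stat_grade + 0.3 * effort * (usage == 2) \
        + rng.normal(0, 0.5, n)

def rss(X, y):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ((y - X @ beta) ** 2).sum()

ones = np.ones(n)
d_light = (usage == 1).astype(float)
d_heavy = (usage == 2).astype(float)
main = np.column_stack([ones, stat_grade, effort, d_light, d_heavy])
full = np.column_stack([main, effort * d_light, effort * d_heavy])

# F-test for the usage x effort interaction (2 extra parameters)
df_num, df_den = 2, n - full.shape[1]
F = ((rss(main, grade) - rss(full, grade)) / df_num) / (rss(full, grade) / df_den)
p = stats.f.sf(F, df_num, df_den)
print(f"interaction: F({df_num}, {df_den}) = {F:.2f}, p = {p:.4f}")
```

With seven model parameters and 144 cases, the residual degrees of freedom come out as 144 − 7 = 137, matching the F(2, 137) test in the text.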

3.4. Prognostic validity

A multiple linear regression was applied to test which variables best predict the VLE-users' grade in perception. The following independent variables were entered into the model using the stepwise method: learning effort (on the original 4-point scale), motivation, attitude towards perception, mean performance in the VLE, and the sum of problems solved. Because there were clearly two groups who used the VLE in two different ways, the multiple linear regression was run separately for light and heavy users. For the light users, learning effort was the only variable that entered the model. This variable explained 21.3 percent of the variance in the final grade, F(1, 39) = 10.554, p = 0.002, R² = .213. Tables 3 and 4 depict the descriptive statistics and correlations for light and heavy users. In Table 5, the result of the regression analysis is presented for the light users. In the case of the heavy users, the variables attitude towards perception and mean performance in the VLE entered the model. These two variables explained 31.2 percent of the variance in the final grade, F(2, 37) = 8.404, p = 0.001, R² = .312. Table 6 presents the result of the regression analysis in detail for the heavy users. The correlation between the sum of problems solved and the grade in perception was negative for the heavy users, although marginally non-significant, r(39) = -.260, p = 0.052.

Table 2
Descriptives for the 2 × 3 analysis of covariance.

VLE-usage   Learning effort   n    Mean   S.D.
Heavy       Low               16   4.94   .680
            High              21   5.26   .490
Light       Low                8   4.44   .863
            High              31   5.13   .482
No          Low               26   5.00   .632
            High              42   4.95   .603
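The stepwise variable entry used in Section 3.4 can be sketched as a forward-selection loop over OLS fits. The implementation and the simulated data below are ours, for illustration only; statistical packages implement the same idea with additional entry and removal criteria:

```python
import numpy as np
from scipy import stats

def forward_select(X, y, names, alpha_in=0.05):
    """Forward stepwise OLS: repeatedly add the not-yet-included predictor
    whose entry t-test has the smallest p-value below alpha_in."""
    n = len(y)
    selected = []
    while True:
        best = None
        for j in range(X.shape[1]):
            if names[j] in selected:
                continue
            cols = [names.index(s) for s in selected] + [j]
            Xc = np.column_stack([np.ones(n), X[:, cols]])
            beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
            resid = y - Xc @ beta
            dof = n - Xc.shape[1]
            sigma2 = resid @ resid / dof
            cov = sigma2 * np.linalg.inv(Xc.T @ Xc)
            t = beta[-1] / np.sqrt(cov[-1, -1])   # t-test of the new column
            pval = 2 * stats.t.sf(abs(t), dof)
            if pval < alpha_in and (best is None or pval < best[1]):
                best = (j, pval)
        if best is None:
            return selected
        selected.append(names[best[0]])

# Simulated stand-in data (not the study's): grade driven by effort only
rng = np.random.default_rng(2)
n = 41
effort = rng.normal(0, 1, n)
motivation = rng.normal(0, 1, n)
attitude = rng.normal(0, 1, n)
grade = 5 + 0.5 * effort + rng.normal(0, 0.5, n)
X = np.column_stack([effort, motivation, attitude])
print(forward_select(X, grade, ["effort", "motivation", "attitude"]))
```

In this simulation, effort is the dominant predictor and enters the model first, mirroring the pattern found for the light users.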


Table 3
Descriptive statistics for all variables used in the multiple linear regression analysis for heavy and light users. Light users are shown in brackets.

Variable                           Mean          S.D.          n
1. Grade in perception             5.08 (4.95)   .626 (.660)   40 (41)
2. Attitude towards perception     3.98 (3.90)   .706 (.643)   40 (41)
3. Learning effort                 2.68 (2.88)   .859 (.678)   40 (41)
4. Motivation                      3.88 (3.85)   .686 (.792)   40 (41)
5. Mean performance in VLE         .734 (.679)   .111 (.108)   40 (41)
6. Sum of problems solved in VLE   44.3 (12.4)   9.20 (7.29)   40 (41)

For the heavy users, learning effort was only associated with performance in the VLE and not with the grade in perception. In order to illustrate this, a path model was developed to show how the variables attitude towards perception, learning effort, and performance in the VLE are associated with the grade in perception (Fig. 5). A maximum likelihood method was used for parameter estimation. The model fits the data extremely well, χ²(3) = .332, p = 0.954, N = 40, GFI = .996, AGFI = .986, RMSEA < .001, PCLOSE = .960. In particular, the RMSEA and PCLOSE indicate a very good fit, even with the small sample size that was available (Browne & Cudeck, 1993). All paths between the variables are significant, and there is no significant path or covariance that is not shown in the model. The path model explains 30 percent of the variance in the grade in perception.

4. Discussion

In this study, the virtual learning environment (VLE) was introduced as an addition to the face-to-face lecture 'Introduction to Psychology of Perception'. The purpose of the VLE was to support the students' learning progress. It was hypothesised that, compared to non-users, VLE-users attain the same or better grades with less effort. Furthermore, it was expected that the overall performance in the VLE is the best predictor for the learning outcome in terms of the grade in the final exam. Because it was not possible to randomly assign the students to the user or the non-user group, we analysed whether the two groups differ in terms of the control variables attitude towards perception, computer literacy, motivation, learning style, available infrastructure, and gender distribution. No differences were found between users and non-users regarding these variables. Thus, possible grouping effects (for example due to a self-selection bias) can be ruled out. Additionally, users and non-users did not differ in terms of the grade in statistics, which was used as an indicator of academic achievement.
However, motivation was captured by only one question, which could be a limitation of our study. Therefore, one should be cautious about drawing conclusions about students' motivation and its effect on the learning outcome. The findings showed that the use of a VLE as an addition to a face-to-face lecture is beneficial. The results suggest, however, that students do not automatically benefit from using the VLE: among all participants, VLE-users did not perform better in the final exam than non-users. The e-learning tool was only useful when the students had spent a certain amount of time getting familiar with the basic concepts and key terms of the topic. Thus, having spent the same effort as non-users, VLE-users achieved a better grade in the final exam. In other words, when some effort was made (at least about 2 h per week), students benefited from the usage. This benefit was notable, since it led to an average increase of the final grade from 5 to 5.25. The results demonstrate that online learning, in addition to a face-to-face lecture, may enhance students' performance without increasing their workload too much. In contrast, it has been demonstrated several times that online learning increases the workload (Carr, 2000; Dutton, Dutton, & Perry, 2002), which in turn leads to a higher dropout rate for online students. However, this conclusion may only hold when a considerable part of a course is substituted by an online version, which was not the case here. Besides the VLE's effectiveness, the prognostic validity of the online performance concerning the grade in the final exam was also examined. A strong relationship between these two variables would indicate that the VLE gives the students appropriate feedback concerning their learning progress.
The regression analysis for the heavy users revealed that the permanent performance feedback in the VLE was very useful: it not only showed students their overall score and ranking in the cohort, but also predicted, to some extent, the learning outcome. In fact, only six of the 82 VLE-users did not pass the exam, and for only one of them did the mean performance in the VLE indicate success in the final exam.

The findings lead us to the assumption that students generally used the VLE in two different ways. On the one hand, some students did not spend much time using the VLE. On the other hand, there were students who generally did not spend much time on learning but used the VLE extensively. This group of students probably thought that merely using the VLE would suffice to pass the exam. The results of the multiple linear regression analysis provided some evidence for this assumption. For the heavy users, a negative correlation between the sum of problems solved and the final grade was found (see Table 4). We compared the number of problems solved during the last three weeks before the exam between students who spent two or more hours per week on learning (high-effort group) and students who spent less than 2 h per week (low-effort group). During this period, the low-effort group solved an average of 30.3 problems, compared with only 20.6 problems in the high-effort group. This difference is statistically significant, N(low-effort) = 27, N(high-effort) = 55, t(80) = 2.176, p = 0.033.

Table 4
Correlation table for all variables used in the multiple linear regression analysis for heavy and light users. Light users are shown in brackets (*p < 0.05, **p < 0.01).

                                    1.              2.              3.             4.             5.
1. Grade in perception
2. Attitude towards perception      .459** (.148)
3. Learning effort                  .094   (.462**) .081  (.030)
4. Motivation                       .142   (.321*)  .010  (.053)   .147  (.338*)
5. Mean performance in VLE          .329*  (.318*)  .022  (.020)   .308* (.165)   .130  (.316*)
6. Sum of problems solved in VLE    .260   (.085)   .233  (.371**) .041  (.001)   .050  (.037)   .029  (.060)
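The reported group comparison is an independent-samples t-test with pooled variance (df = 27 + 55 − 2 = 80). A minimal sketch with synthetic per-student counts: only the group sizes and means follow the paper, while the spread and the individual values are assumed for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical per-student counts; only the group sizes (27, 55) and the
# group means (30.3, 20.6) follow the paper, the spread is assumed.
low_effort = rng.normal(loc=30.3, scale=12.0, size=27)   # < 2 h per week
high_effort = rng.normal(loc=20.6, scale=12.0, size=55)  # >= 2 h per week

# Student's t-test with pooled variance: df = n1 + n2 - 2 = 80, as reported.
t_stat, p_value = stats.ttest_ind(low_effort, high_effort, equal_var=True)
df = len(low_effort) + len(high_effort) - 2
print(f"t({df}) = {t_stat:.3f}, p = {p_value:.3f}")
```

With real data, the two arrays would hold the observed number of problems solved per student in each effort group.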


Table 5
Standardised coefficients of the multiple linear regression’s final model for light users (R² = .213). Beta in for variables not in the model are also shown. Dependent variable is the final grade in perception.

                                  Beta   Beta in   t       p
Learning effort                   .462             3.249   .002
Attitude towards perception              .162      1.142   .261
Motivation                               .186      1.241   .222
Mean performance in VLE                  .248      1.769   .085
Sum of problems solved in VLE            .085       .591   .558
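Standardised coefficients (Beta) of the kind shown in Tables 5 and 6 are simply the slopes of an ordinary least-squares fit after z-scoring the outcome and each predictor. A minimal sketch with synthetic data (the variable names follow the paper; all values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 55  # hypothetical sample size

# Hypothetical predictor and outcome scores.
learning_effort = rng.normal(size=n)
attitude = rng.normal(size=n)
grade = 0.4 * learning_effort + 0.2 * attitude + rng.normal(scale=0.8, size=n)

def zscore(x):
    """Standardise to mean 0, sample standard deviation 1."""
    return (x - x.mean()) / x.std(ddof=1)

# OLS on z-scored variables: the fitted slopes are the standardised Betas
# (no intercept is needed because every z-scored variable has mean zero).
X = np.column_stack([zscore(learning_effort), zscore(attitude)])
y = zscore(grade)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["learning effort", "attitude"], betas.round(3))))
```

Because all variables share the same (unit) scale after standardisation, the Betas can be compared directly to judge the relative importance of predictors, as done in the text.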

For the light users, learning effort was the most important predictor of the grade in perception (see Tables 4 and 5), whereas for heavy users, learning effort was not associated with the learning outcome at all (see Tables 4 and 6). For heavy users, the attitude towards perception and the performance in the VLE were the most important factors in predicting the final grade in perception. The path model demonstrated that, for heavy users, the VLE is a mediator between learning effort and the grade in perception.

Taken together, we assume that at least some students tried to compensate for their lack of learning during the semester by using the VLE extensively before the exam. Most of them succeeded, but some did not. On the basis of the data, it cannot be decided whether the heavy users were particularly lazy or not. The heavy users did not differ from the light users in terms of their motivation, their attitude towards perception, their learning effort, or their grade in statistics. Hence, the question of why these two groups used the VLE so differently cannot be answered here. However, it must be mentioned that, in addition to perception and statistics, the students had to take six other exams. Maybe they had simply divided their learning efforts in different ways. In sum, the results confirm the conclusion of Littlejohn, Falconer, and Mcgill (2008) that the effectiveness of a resource is determined not by the type of the resource but by its appropriate use in the particular context.

4.1. Self-regulated learning

We have observed that some students probably used the VLE as a last-minute exam ‘booster’. It can therefore be argued that the VLE did not meet its original intent of providing the students with a tool to support their learning process during the semester. However, we cannot draw conclusions on this question, since we did not ask the students about their learning strategies.
One possible strategy is that students used the VLE to refine their knowledge and establish a more general view and a deeper understanding of the topic after having acquired the basic concepts and key terms by reading the textbook and attending the lectures. A second strategy might be that students used the VLE before the exam to acquire as much knowledge as possible in the shortest period of time. The significant correlation between motivation and learning effort for the light VLE-users indicates that those students who were intrinsically motivated were also those who managed to regulate their learning process without using the VLE extensively. Thus, it may be that the ‘light’ users followed the first strategy (using the VLE to achieve a deeper understanding), whereas the ‘heavy’ users followed the second strategy (using the VLE to gain as much knowledge as possible in a short period of time). A follow-up test would have been helpful to address the question of whether VLE usage leads to deeper processing of the learning material.

However, the appropriate use of a learning resource can be understood as part of self-regulated learning competence: the extent to which students are able to develop their own learning strategies in a particular educational setting. Self-regulated learning competence itself is defined as the degree to which learners are metacognitively, motivationally, and behaviourally active participants in their own learning (Zimmerman, 1986a, 1986b). The goal for learners is to be their own teachers (Schunk & Zimmerman, 1998; Torrano & González, 2004). Thus, good achievement via self-regulated learning requires a strong will to learn and excellent learning skills (Torrano & González, 2004). The observed behaviour of light and heavy users in the VLE, together with their performance in the final exam (as outlined in Fig. 4), possibly reflects differences in users’ self-regulated learning types (Chen, 2009). The VLE used for this study is not adaptive in the sense that it detects how a particular user behaves; it does not automatically adapt to the user’s knowledge, skills, and self-regulated learning competence. Such adaptive learning systems are, however, emerging and are a subject of future research (Chen, 2009).

4.2. Methodological considerations

Media comparison studies have often been criticised on methodological grounds (Clark & Jones, 2001; Phipps & Merisotis, 1999). Mainly two different approaches are used in performance-oriented e-learning and blended learning research. First, two consecutive cohorts are

Table 6
Standardised coefficients of the multiple linear regression’s final model for heavy users (R² = .31). Beta in for variables not in the model are also shown. Dependent variable is the final grade in perception.

                                  Beta   Beta in   t       p
Attitude towards perception       .452             3.315   .002
Mean performance in the VLE       .319             2.338   .025
Learning effort                          .045       .309   .759
Motivation                               .097       .703   .486
Sum of problems solved in VLE            .174      1.250   .219


Fig. 5. Path model for heavy users (*p < 0.05, ***p < 0.001).
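The mediated path shown in Fig. 5 (learning effort acting on the grade via VLE performance) corresponds to a standard linear mediation decomposition. A minimal Baron and Kenny style OLS sketch with synthetic data (all numbers are hypothetical; only the variable roles follow the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 55  # hypothetical sample size

# Synthetic data with a built-in indirect path: effort -> VLE -> grade.
effort = rng.normal(size=n)
vle_perf = 0.5 * effort + rng.normal(scale=0.8, size=n)
grade = 0.5 * vle_perf + rng.normal(scale=0.8, size=n)

def slope(x, y):
    """OLS slope of y on x (both variables centred first)."""
    x = x - x.mean()
    return float(x @ (y - y.mean()) / (x @ x))

c = slope(effort, grade)      # total effect of effort on grade
a = slope(effort, vle_perf)   # path a: effort -> mediator
# Paths b and c': regress grade on the mediator and effort jointly.
X = np.column_stack([vle_perf - vle_perf.mean(), effort - effort.mean()])
b, c_prime = np.linalg.lstsq(X, grade - grade.mean(), rcond=None)[0]
print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```

For OLS fits this decomposition is exact: the total effect c equals the direct effect c' plus the indirect effect a·b, which is what a path model such as Fig. 5 visualises.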

compared in terms of performance, with the virtual learning environment introduced to the second cohort (Pereira et al., 2007). Second, experimental and control groups are realised within the same cohort (Campbell et al., 2008; Connolly et al., 2007). Both approaches have their own advantages and drawbacks. The advantage of the first approach is that, within one cohort, all students are treated the same. The drawback is the amount of time that passes between the first and the second data collection, meaning that the two groups may differ in terms of control variables such as gender distribution, computer literacy, motivation, etc. The advantage of the second approach is that data are collected within a single course run. The main drawback of the second approach is that, in many cases, subjects cannot be assigned randomly to the control and experimental groups.

We have tried to overcome these concerns. In our study, the learning material in the VLE was split into two halves. Furthermore, we collected data for many control variables, such as computer literacy, motivation, and attitude towards perception. Finally, data were collected from three different sources: self-report, the VLE log file, and performance in the exam. Although our students were given the option of whether or not to use the VLE as an additional learning tool, the evaluation design allowed us to reveal the contribution of the VLE to the students’ performance. The results clearly demonstrated that mere VLE usage alone did not lead to better grades, but that factors such as self-regulated learning competence and attitude towards perception moderated the outcome. This is not a new finding (Cook, 2005; Dutton et al., 2002; Kerr et al., 2006), but owing to our evaluation design, we were able to confirm previous results in a holistic way.

References

Abrantes, J. L., Seabra, C., & Lages, L. F. (2007). Pedagogical affect, student interest, and learning performance. Journal of Business Research, 60, 960–964.
Bernard, R. M., Abrami, P. C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., et al. (2004). How does distance education compare with classroom instruction? A meta-analysis of the empirical literature. Review of Educational Research, 74(3), 379–439.
Beyth-Marom, R., Chayut, E., Roccas, S., & Sagiv, L. (2003). Internet-assisted versus traditional distance learning environments: factors affecting students’ preferences. Computers & Education, 41, 65–76.
Bliuc, M. A., Goodyear, P., & Ellis, R. A. (2007). Research focus and methodological choices in studies into students’ experiences of blended learning in higher education. The Internet and Higher Education, 10(4), 231–244.
Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen, & J. S. Long (Eds.), Testing structural equation models (pp. 136–162). Newbury Park, CA: SAGE Publications Inc.
Campbell, M., Gibson, W., Hall, A., Richards, D., & Callery, P. (2008). Online vs. face-to-face discussion in a web-based research methods course for postgraduate nursing students: a quasi-experimental study. International Journal of Nursing Studies, 45, 750–759.
Carr, S. (2000). As distance education comes of age, the challenge is keeping the students. Chronicle of Higher Education, 46(13), A39–A41.
Chen, C. M. (2009). Personalized e-learning system with self-regulated learning assisted mechanisms for promoting learning performance. Expert Systems with Applications, 36(5), 8816–8829.
Clark, R. A., & Jones, A. (2001). A comparison of traditional and online formats in a public speaking course. Communication Education, 50, 109–124.
Connolly, T. M., MacArthur, E., Stansfield, M., & McLellan, E. (2007). A quasi-experimental study of three online learning courses in computing. Computers and Education, 49, 345–359.
Conover, W. J. (1999). Practical nonparametric statistics (3rd ed.). John Wiley & Sons.
Cook, D. A. (2005). Learning and cognitive styles in web-based learning: theory, evidence, and application. Academic Medicine, 80, 266–278.
Dutton, J., Dutton, M., & Perry, J. (2002). How do online students differ from lecture students? Journal of Asynchronous Learning Networks, 6(1), 1–20.
Jonassen, D. H., McAleese, T. M. R., & Duffy, T. M. (1993). A manifesto for a constructivist approach to technology in higher education. In T. M. Duffy, J. Lowyck, & D. H. Jonassen (Eds.), Designing environments for constructive learning. Heidelberg: Springer Verlag.
Kerr, M. S., Rynearson, K., & Kerr, M. C. (2006). Student characteristics for online learning success. The Internet and Higher Education, 9, 91–105.
Kerres, M. (2000). Entwicklungslinien und Perspektiven mediendidaktischer Forschung. Zu Information und Kommunikation beim mediengestützten Lernen. Zeitschrift für Erziehungswissenschaft, 3(1), 111–129.
Liaw, S. S. (2008). Investigating students’ perceived satisfaction, behavioral intention, and effectiveness of e-learning: a case study of the blackboard system. Computers and Education, 51, 864–873.
Liaw, S. S., Huang, H. M., & Chen, G. S. (2007). Surveying instructor and learner attitudes toward e-learning. Computers and Education, 49, 1066–1080.
Littlejohn, A., Falconer, I., & Mcgill, L. (2008). Characterising effective e-learning resources. Computers and Education, 50, 757–771.
Locatis, C., Vega, A., Bhagwat, M., Liu, W. L., & Conde, J. (2008). A virtual computer lab for distance biomedical technology education. BMC Medical Education, 8, 12. doi:10.1186/1472-6920-8-12.
Lu, J., Yu, C. S., & Liu, C. (2003). Learning style, learning patterns, and learning performance in a WebCT-based MIS course. Information & Management, 40, 497–507.
McDaniel, C. N., Lister, B. C., Hanna, M. H., & Roy, H. (2007). Increased learning observed in redesigned introductory biology course that employed web-enhanced, interactive pedagogy. CBE-Life Sciences Education, 6, 243–249.
O’Dwyer, L. M., Carey, R., & Kleiman, G. (2007). A study of the effectiveness of the Louisiana Algebra I online course. Journal of Research on Technology in Education, 39(3), 289–306.
Pereira, J. A., Pleguezuelos, E., Merí, A., Molina-Ros, A., Molina-Tomás, M. C., & Masdeu, C. (2007). Effectiveness of using blended learning strategies for teaching and learning human anatomy. Medical Education, 41, 189–195.
Phipps, R., & Merisotis, J. (1999). What’s the difference? A review of contemporary research on the effectiveness of distance learning in higher education. Washington, DC: The Institute for Higher Education Policy.
Richter, T., Naumann, J., & Groeben, N. (2001). Das Inventar zur Computerbildung (INCOBI): Ein Instrument zur Erfassung von computer literacy und computerbezogenen Einstellung bei Studierenden der Geistes- und Sozialwissenschaften. Psychologie in Erziehung und Unterricht, 48, 1–13.
Saadé, R. G., He, X., & Kira, D. (2007). Exploring dimensions to online learning. Computers in Human Behavior, 23, 1721–1739.
Schunk, D. H., & Zimmerman, B. J. (1998). Self-regulated learning: From teaching to self-reflective practice. New York: Guilford.
Shachar, M., & Neumann, Y. (2010). Twenty years of research on the academic performance differences between traditional and distance learning: summative meta-analysis and trend examination. Journal of Online Learning and Teaching, 6(2), 318–334.


So, H. J., & Brush, T. A. (2008). Student perceptions of collaborative learning, social presence and satisfaction in a blended learning environment: relationships and critical factors. Computers and Education, 51, 318–336.
Stricker, D. (2008). BrightStat.com: free statistics online. Computer Methods and Programs in Biomedicine, 92, 135–143.
Sun, P. C., Tsai, R. J., Finger, G., Chen, Y. Y., & Yeh, D. (2008). What drives a successful e-Learning? An empirical investigation of the critical factors influencing learner satisfaction. Computers and Education, 50, 1183–1202.
Tabatabaei, M., & Reichgelt, H. (2006). Predictors of student success in a project management course. Issues in Information Systems, 7(1), 305–309.
Tait, M., Tait, D., Thornton, F., & Edwards, M. (2008). Development and evaluation of a critical care e-learning scenario. Nurse Education Today, 28(8), 971–981.
Thurstone, L. L. (1931). The measurement of attitudes. Journal of Abnormal and Social Psychology, 26, 249–269.
Torrano, F., & González, M. C. (2004). Self-regulated learning: current and future directions. Electronic Journal of Research in Educational Psychology, 2(1), 1–34.
U.S. Department of Education, Office of Planning, Evaluation and Policy Development. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC.
Webb Boyd, P. (2008). Analyzing students’ perceptions of their learning in online and hybrid first-year composition courses. Computers and Composition, 25, 224–243.
Workman, M. (2004). Performance and perceived effectiveness in computer-based and computer-aided education: do cognitive styles make a difference? Computers in Human Behavior, 20, 517–534.
Zimmerman, B. J. (1986a). A social cognitive view of self-regulated academic learning. Journal of Educational Psychology, 81(3), 329–339.
Zimmerman, B. J. (1986b). Development of self-regulated learning: which are the key subprocesses? Contemporary Educational Psychology, 11(4), 307–313.