Learning and Instruction 19 (2009) 433–444 www.elsevier.com/locate/learninstruc
Constructing knowledge with an agent-based instructional program: A comparison of cooperative and individual meaning making

Roxana Moreno
Educational Psychology Program, University of New Mexico, Simpson Hall 123, Albuquerque, NM 87131, USA
Abstract

Participants in the present study were 87 college students who learned about botany using an agent-based instructional program with three different learning approaches: individual, jigsaw, or cooperative learning. Results showed no differences among learning approaches on retention. Students in jigsaw groups reported higher cognitive load during learning than students who learned individually; scored lower on a problem-solving transfer test than students in individual and cooperative learning groups; and were less likely to produce elaborated explanations and co-construct knowledge with their peers than students in cooperative groups. Students in cooperative groups reported higher situational interest than their counterparts. Implications for cooperative and individual meaning making in agent-based instructional programs are discussed and future research directions are suggested.

© 2009 Elsevier Ltd. All rights reserved.

Keywords: Cooperative learning; Individual learning; Knowledge construction; Agent-based instruction
1. Introduction

There is a growing consensus among cognitive scientists and educators that learning is an active process in which students seek to make sense out of the to-be-learned information (Bruner, 1990; Jonassen, Peck, & Wilson, 1999). Interactive multimedia environments are particularly well suited to promote meaning making by presenting students with multiple knowledge representations and offering opportunities for constructing understandings, such as having students interact with objects by exploring, manipulating, and testing hypotheses (Bransford, Brown, & Cocking, 1999; Jacobson & Kozma, 2000; Lajoie, 2000). In previous research, we have investigated the effectiveness of various features of an interactive multimedia program called Design-A-Plant (Lester, Stone, & Stelling, 1999). In this learning environment, students fly to remote planets that have certain environmental conditions (such as low nutrients and heavy rain) and must design plants with root, stem, and leaf characteristics that
allow them to survive in such conditions (Moreno, Mayer, Spires, & Lester, 2001). A pedagogical agent interacts with the learner by posing guiding questions and providing principle-based feedback. The research on learning with the Design-A-Plant environment shows that students construct a deeper understanding of botany when they are able to interact on a one-on-one basis with a pedagogical agent who guides their cognitive processing as they manipulate and experiment with the multimedia materials (Moreno & Mayer, 2005). This finding supports a guided-activity principle for the design of interactive multimedia environments (Moreno & Mayer, 2007). The guided-activity principle has also been supported by other research programs using different multimedia learning environments (de Jong, 2005). With few exceptions, however, research on multimedia instruction has focused on students' individual interactions with a computer (Barker, Jones, Britton, & Messer, 2002; Dillenbourg, 1999; Vauras, Iiskala, Kajamies, Kinnunen, & Lehtinen, 2003; Woodul, Vitale, & Scott, 2000). This pattern comes as a surprise considering that one of the strengths of technology is to support activities that engage many learners in higher-order thinking, such as group discussions (Johnson & Johnson, 2004;
McClintock, 1999). The goal of the present study was to examine whether and how students' learning and perceptions about learning are affected when they interact with an agent-based multimedia environment either individually or in cooperation.

1.1. The case for individual learning

The case for individual learning with technology is supported by cognitive constructivism, which focuses on internal meaning making and the conceptual change processes of individual students (Piaget, 1970). According to this perspective, the role of technology is to provide instructional materials and environments where students can make intellectual choices for themselves as they construct knowledge in their minds (Greeno, Collins, & Resnick, 1996). For instance, computers can be very useful tools to support students' knowledge construction process by offering opportunities to select relevant information, manipulate different representations of knowledge, and conduct virtual experiments (Davis & Linn, 2000; Reiser, 2002). The role of technology, therefore, is to prime active cognitive processes in individual learners, such as those that facilitate the selection, organization, and integration of new information with their prior knowledge (Moreno & Mayer, 2007). Several reviews of the effects of individualized instructional programs show that they can promote learning and motivation provided that they offer student control, opportunities for activity and reflection, guidance, and informational feedback (Azevedo & Bernard, 1995; Mayer, 2005; Moreno & Mayer, 2007).
Examples of successful technologies for individual meaning making are BGuile, a computer-based environment in which students conduct authentic scientific investigations (Reiser et al., 2001); SimQuest, an interactive multimedia program in which students explore physics principles by running simulations (van der Meij & de Jong, 2006); the cognitive tutor programs developed by Carnegie Learning (van Lehn et al., 2007); and AutoTutor, an intelligent tutoring system that supports learning with mixed-initiative dialogues (Graesser, Chipman, Haynes, & Olney, 2005).

1.2. The case for learning together

During the past decade, the learning sciences have been increasingly influenced by several social theories, such as social constructivism, distributed learning, socially shared cognition, and socio-historical theory (Brown & Cole, 2000; Bruner, 1966; Lave, 1988; Palincsar & Herrenkohl, 1999; Salomon, 1993; Vygotsky, 1978). According to the group learning approach, learning is a social activity in which meaning is constructed through interaction and shared efforts to make sense of new information (Holt & Morris, 1993). Learning results not only from the reorganization of an individual's knowledge structures but also from the cooperative exchanges that he or she carries out with others. By bringing together diverse interests, expertise, and perspectives in the knowledge construction process, students can reach goals that surpass
those that can be achieved by individual members (Johnson & Johnson, 2004). Learning science in a group with a multimedia program can be viewed as a cooperative process where meaning and solutions are potentially negotiated and agreed upon among the group members (Springer, Stanne, & Donovan, 1999). There are many examples of successful technologies for cooperative meaning making, such as WISE, a web-based inquiry science environment (Linn, 2006); Biokids, a program that instructs children about biology through mobile technology platforms (Songer, 2007); and computer supported collaborative learning (CSCL), a knowledge-building community system (Hewitt & Scardamalia, 1998; Stahl, 2002). On the other hand, the use of agent-based instructional programs such as the one used in the present study has been largely limited to one-to-one instruction (Harrer, McLaren, Walker, Bollen, & Sewall, 2006). A contribution of this research is to extend the growing research on agent-based instruction by examining whether learning in cooperation with the Design-A-Plant program would promote deeper learning (as measured by problem-solving transfer) than learning individually from the same instructional program.

1.3. The present study

We examined the following three methods. The first was the traditional approach to agent-based learning described in the introduction, in which students construct knowledge about plant roots, stems, and leaves individually by solving a set of plant-design problems with the guidance of the explanatory feedback of the pedagogical agent. The second was a traditional cooperative learning approach, in which all members of a group learn about the three plant parts by solving together the same set of plant-design problems, with the same agent guidance presented to students who learn individually with the instructional program (Dillenbourg, 1999; Johnson & Johnson, 1999). Finally, the Design-A-Plant program was modified to suit the jigsaw method of learning.
The jigsaw method is a cooperative learning technique where, just as in a jigsaw puzzle, each student's part is essential for the completion and full understanding of the final learning product. More specifically, each student is asked to learn an assigned portion of the instructional materials and then teach it to the rest of the group members. By combining what each individual learns with the material learned by others, jigsaw members are able to form a coherent body of knowledge (Aronson, 2002; Aronson & Patnoe, 1997; Brown & Campione, 1994). In the Design-A-Plant program, the jigsaw method was implemented by segmenting its instructional materials into three modules (i.e., roots, stems, and leaves) and randomly assigning each member of the jigsaw groups to learn about one of the modules. Jigsaw participants learned about their assigned plant part by solving the same set of plant-design problems and with the same agent guidance presented to the individual and cooperative learning groups. Once the jigsaw members finished learning about their assigned plant part, they were asked to teach the other group members what they had learned.
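The random assignment of modules to jigsaw members described above can be sketched as follows. This is an illustrative Python sketch only, not software from the study; the function and variable names are our own.

```python
import random

# The three Design-A-Plant modules used to segment the jigsaw materials.
MODULES = ["roots", "stems", "leaves"]

def assign_jigsaw_modules(triad, rng=random):
    """Randomly map each member of a three-person group to one distinct module."""
    if len(triad) != len(MODULES):
        raise ValueError("jigsaw groups in this study were triads")
    modules = MODULES[:]
    rng.shuffle(modules)  # random permutation: each member gets a different part
    return dict(zip(triad, modules))
```

For example, `assign_jigsaw_modules(["A", "B", "C"])` returns a dictionary in which each of the three members is paired with exactly one of the three modules.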
The benefits of cooperative learning are highly dependent on the specific design of the cooperative learning groups, with group interdependence and individual accountability being necessary conditions for learning success (Rohrbeck, Ginsburg-Block, Fantuzzo, & Miller, 2003; Slavin, Hurley, & Chamberlain, 2003). In this study, group interdependence was ensured by giving different portions of the to-be-learned material (jigsaw group) or different roles (cooperative group) to the team members during learning (Kagan, 1994). In addition, individual accountability was ensured by instructing students that each member would be individually assessed on his/her learning after working with their peers on the task. Although in all three approaches students learn botany with identical multimedia materials (i.e., plant types, environmental conditions) and agent guidance, the learning processes they encourage differ in important ways. First, the group approaches differ from the individual approach in that cooperation among the group members is needed to achieve the learning objectives. Second, the two group approaches differ in the way in which their members cooperate. In cooperative learning, members cooperate as they learn about all topics from the beginning to the end of the lesson, whereas in jigsaw learning members cooperate after they acquire expertise in their assigned portion of the lesson, by teaching each other what they have learned. The purpose of this research was to investigate whether these differences would affect students' learning, learning perceptions, and knowledge construction. In particular, the present study sought answers to the following research questions: (a) How do the three learning approaches affect students' learning? (b) How do the three learning approaches affect students' learning perceptions? (c) How do the jigsaw and cooperative approaches affect students' knowledge construction?
To answer these questions, we randomly assigned college students to learn with the Design-A-Plant program using one of the three approaches. For each approach, learning was assessed by asking students to write down all plant parts presented by the program immediately after instruction (retention test) and to apply what they had learned to solve new plant-design problems (transfer test). In addition, students' perceptions about learning were assessed with a questionnaire in which the participants rated the perceived level of interest and difficulty of the task. Finally, to shed light on how the two group approaches may have affected students' knowledge construction, we analyzed the discourse produced during the group discussions (Coleman, 1998; Hogan, Nastasi, & Pressley, 2000; Kaartinen & Kumpulainen, 2002; Palincsar & Brown, 1984; Webb, 1989). Next, we discuss the hypotheses for each of the research questions examined in this study.

1.3.1. Hypotheses

Research Question 1: How do the three approaches affect students' learning? The most recent meta-analysis on the effects of social context (i.e., individual versus small group) on learning with computer technology indicates a small but positive effect of small group learning on students' individual
achievement (Lou, Abrami, & d'Apollonia, 2001). Likewise, a recent study investigating the effect of computer-mediated cooperation on solving ill-defined problems showed that cooperative dyads performed significantly better than students who worked alone (Uribe, Klein, & Sullivan, 2003). Based on these findings, and on the tenet of cooperative learning that by receiving and giving explanations students can fill in gaps in their understanding, correct misconceptions, and strengthen connections between new information and previous learning, it was predicted that the cooperative and jigsaw groups would outperform students who worked individually on the learning measures (Hypothesis 1). Predictions about learning differences between the two group approaches, however, were not clear. Although the jigsaw method has been applied in several classroom learning scenarios with slightly different variations, comparisons are typically made between jigsaw learning and regular classroom instruction (Artut & Tarim, 2007; Hänze & Berger, 2007; Souvignier & Kronenberger, 2007). Furthermore, to the best of our knowledge, no research has examined how the jigsaw method affects students' learning from interactive multimedia programs (R. Johnson, personal communication, April 13, 2005).

Research Question 2: How do the three approaches affect students' perceptions about learning? Research in cooperative learning has consistently shown that students in cooperative learning groups report higher motivation, self-efficacy, and pro-academic attitudes than students in control groups (Slavin, 1995). Advocates of cooperative learning argue that the characteristics of this method are highly motivating because the success of the group depends on helping and receiving help from group mates (Johnson & Johnson, 1999). These characteristics are commonly shared by the traditional cooperative learning and the jigsaw learning methods.
Therefore, it was predicted that students learning with the jigsaw and cooperative methods would report higher interest ratings than those learning individually (Hypothesis 2). On the other hand, the effects of the different learning approaches on students' perceived cognitive load (Paas, Tuovinen, Tabbers, & Van Gerven, 2003) were unclear. From a cognitive load theory perspective, when students spend their cognitive resources on activities that are conducive to the acquisition of new schemas, they are likely to experience germane cognitive load, a type of load that promotes learning (Paas et al., 2003; Sweller, 1999). For instance, when students share their ideas with others, they are encouraged to clarify and organize their own ideas, elaborate on what they know, discover flaws in their reasoning, and entertain alternative perspectives that may be equally valid to their own (Gauvain, 2001). The additional processing elicited by cooperative methods should increase students' invested effort and, in turn, learning. Although the jigsaw group was similar to the individual group in that students first had to learn about one of the modules alone, we speculated that the subsequent cooperative task of teaching and learning about all modules would result in comparatively higher invested effort for students in jigsaw conditions. Therefore, it was
predicted that the cooperative and jigsaw groups would report higher levels of perceived cognitive load than the individual group (Hypothesis 3). However, we did not find any theoretical or empirical grounds for predictions about potential differences between the cooperative and jigsaw groups on this outcome measure.

Research Question 3: How do the group approaches affect students' knowledge construction? The two cooperative learning models used in this study share conditions that have been shown to facilitate students' knowledge construction, namely, cooperation, interdependence, and individual accountability (Rohrbeck et al., 2003). Nevertheless, the two group methods' different emphasis on individual learning suggests that the learning affordances that each approach creates may differ. For instance, because the jigsaw method requires students to independently learn a piece of the puzzle, each group member relies exclusively on the developed expertise of his/her peers to put all the pieces together. A strong assumption underlying the jigsaw method is that students will develop sufficient expertise in their topic to be able to meaningfully teach the other group members about their assigned topic. If jigsaw members acquire the necessary expertise to teach others what they have learned (and possess the pedagogical and social skills needed to communicate effectively what was learned), then all students will be able to integrate the distributed expertise of the team into a mental model of the to-be-learned scientific system (Hewitt & Scardamalia, 1998). Yet, if any member of the team fails to construct a deep understanding of the assigned topic or to effectively communicate what was learned, the knowledge construction of the group will suffer. The cooperative approach to learning seems to minimize this risk because all students are given the opportunity to interact with the instructional program and receive feedback about all topics.
On the other hand, by asking students to attend to all puzzle pieces simultaneously, the cooperative approach may be overwhelming and hinder students from building deeper understandings of the to-be-learned principles (Paas et al., 2003; Sweller, 1999). Therefore, no specific prediction was stated regarding differences between the cooperative and jigsaw groups.

2. Method

2.1. Participants

The participants were 87 preservice teachers who were enrolled in an introductory educational psychology course at a southwestern university in the USA and were given credit for their participation in the study. The mean age of the participants was 24.29 years (SD = 7.73). Of the participants, 30 students were randomly assigned to each of the two group conditions, namely the jigsaw learning group (20 females and 10 males) and the cooperative learning group (21 females and 9 males), while 27 students were randomly assigned to learn individually (18 females and 9 males). Participants in the jigsaw and cooperative groups formed triads.
2.2. Measures

For each participant, the paper-and-pencil measures consisted of a consent form, a participant questionnaire, a retention test, a transfer test, a program-rating questionnaire, and a set of review sheets, each typed on 8.5 × 11 in. sheets of paper.

2.2.1. Demographics and prior botany knowledge

The questionnaire gathered information concerning the participant's name, gender, age, and prior botany knowledge, and was identical to the one used in the studies by Moreno and Mayer (2005) and Moreno et al. (2001) on learning botany with multimedia environments. The questionnaire measured participants' prior botany knowledge with the following two questions: (a) "Please put a check mark indicating your knowledge of botany" followed by five blanks; scores ranged from 0 (very little) to 4 (very much); and (b) "Please place a check mark next to the items that apply to you: ___ I have taken a class in botany. ___ I have houseplants. ___ I have eaten a plant or vegetable that I grew myself. ___ I have made my own mulch. ___ I know what a pistil is. ___ I know why plant leaves are green."

2.2.2. Retention test

The retention test consisted of the following three questions, all typed on the same sheet: (a) "Please write down all the types of roots that you can remember", (b) "Please write down all the types of stems that you can remember", and (c) "Please write down all the types of leaves that you can remember."

2.2.3. Transfer test

The transfer test was a problem-solving test consisting of four questions, each typed on a separate sheet. Students were asked to indicate at least one of the possible kinds of roots, stems, and leaves that would help a plant survive under certain weather conditions and to justify their choices.
The four problem-solving sheets, respectively, had the following statements at the top: (a) "Design a plant to live in an environment that has low sunlight." (b) "Design a plant to live in an environment that has low temperature and high water table." (c) "Design a plant to live in an environment that has high temperature." (d) "Design a plant to live in an environment that has heavy rainfall and low nutrients."

2.2.4. Scoring of the retention and transfer tests

The retention and transfer tests were independently scored by two researchers who were blind to students' learning approach, using the same rubrics developed by Moreno and Mayer (2005) and Moreno et al. (2001). Agreement between the two scorers was 93% on the retention test and 87% on the transfer test. Differences between raters were resolved through discussion. Scores ranged from 3 to 9 for the retention test and from 7 to 24 for the transfer test.
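Inter-rater agreement of the kind reported above is simply the proportion of test items on which the two scorers assigned identical scores. A minimal Python sketch with hypothetical data; the function name and the example scores are ours, not from the study:

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of items scored identically by two independent raters."""
    if len(rater_a) != len(rater_b):
        raise ValueError("both raters must score the same set of items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical scores from two raters for four test items: they disagree
# on the last item only, so agreement is 3/4 = 0.75.
agreement = percent_agreement([3, 2, 1, 0], [3, 2, 1, 1])
```

In practice, remaining disagreements would then be resolved through discussion, as the authors describe.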
2.2.5. Learning perceptions

Learning perceptions were measured with the program-rating questionnaire, which contained the following six questions, to be rated on a 10-point Likert-type response scale: (a) "How interesting was it to learn about this material?" rated from 1 (boring) to 10 (interesting); (b) "How entertaining was it to learn about this material?" rated from 1 (tiresome) to 10 (entertaining); (c) "If you had a chance to use this program with new environmental conditions, how eager would you be to do so?" rated from 1 (not eager) to 10 (very eager); (d) "How motivating was it to learn about this material?" rated from 1 (not at all motivating) to 10 (very motivating); (e) "How difficult was it to learn the material?" rated from 1 (easy) to 10 (difficult); and (f) "How much effort was required to learn the material?" rated from 1 (little) to 10 (much). An exploratory factor analysis of the program-rating questionnaire was performed using principal axis factoring with Varimax rotation. The analysis resulted in a two-factor structure that explained 71% of the total variance. The first factor was labeled perceived situational interest (Questions 1–4; internal consistency Cronbach's α = 0.89) and the second factor was labeled perceived cognitive load (Questions 5 and 6; internal consistency Cronbach's α = 0.92). Factor-based scores (the mean rating of the items that loaded on each factor) were then calculated for each participant.

2.2.6. Review material

A set of four review sheets was presented to the individual and jigsaw participants after they interacted with the multimedia program. The sheets listed the environmental conditions for the four planets visited during instruction and included a reproduction of the plant library (i.e., all root, stem, and leaf types) presented in the instructional program. The purpose of the review sheets was twofold. First, they were used to guide jigsaw participants as they taught their peers what they had learned.
By providing jigsaw participants with a sheet for each visited planet, we ensured that students did not miss teaching their peers any relevant principle covered by the program. Second, they were used to guide participants who learned individually as they self-explained their solutions to the problems. By asking students to review what they had learned, we ensured comparable time on task for the three conditions. This decision was based on the fact that students in cooperative learning groups spend significantly more time on task than students in the other two groups (Slavin, 1995; Uribe et al., 2003).

2.3. Apparatus: computerized material

The computerized materials consisted of a guided-discovery multimedia computer program about botany. The individual and cooperative versions were identical and consisted of the Design-A-Plant program developed by the IntelliMedia Initiative at the College of Engineering of North Carolina State University (Lester et al., 1999). The program includes a lifelike pedagogical agent who provides narrated advice to learners as they graphically assemble plants from
a library of plant structures such as roots, stems, and leaves for a total of four hypothetical planets with different weather conditions. For jigsaw participants, the Design-A-Plant program was modified to include three separate versions: one for learning about root design, one for learning about stem design, and one for learning about leaf design. All three versions of the jigsaw program presented the same hypothetical planets, weather conditions, library of plant parts, and pedagogical agent guidance as the individual and cooperative versions. However, in each version the program was modified so that students could only manipulate the plant parts that were assigned to them, with the corresponding guidance and feedback. Hence, they only had access to the assigned portion of the to-be-learned information. The multimedia programs were developed using Director 4.04 (Macromedia, 1995a) and SoundEdit 16 (Version 2; Macromedia, 1995b). The apparatus consisted of three PC computer systems, each including a 16-in. monitor, with Sony headphones for participants in the individual and jigsaw conditions.

2.4. Procedure

All participants were given the consent forms and the participant questionnaire. Using the same procedure described in Moreno and Mayer's (2004) study, any student who scored above the median on the botany knowledge survey was replaced with a new student (n = 7). Next, participants were randomly assigned to one of the three learning groups, namely the individual learning, jigsaw learning, and cooperative learning groups. All students were instructed to learn as much as they could from the multimedia program and told that their learning and learning perceptions would be individually assessed at the end of the study. Students assigned to the individual group were informed that they would be audiotaped in a subsequent review session in which they would be asked to self-explain the solutions that they had given to the problems presented by the instructional program.
All participants were assigned a computer station and instructed to start the program. Members of the jigsaw triads were randomly assigned to learn one of the three modules of the multimedia program (i.e., roots, stems, leaves) and informed that they would be audiotaped as they taught what they learned to their peers. Each jigsaw participant learned the content-matter related to his/her assigned module at a separate computer station. Members of the cooperative triads were randomly assigned to moderate the discussion and click on the final solution for the root, stem, or leaf design. The triads were also informed that they would be audiotaped while learning together with the multimedia program. Participants in cooperative groups shared the same computer station. Participants in the cooperative and individual conditions learned the content-matter related to all three modules. Once the computer program was over, participants in the individual group were given the set of review sheets to
self-explain their solutions to the problems, and jigsaw participants were given the set of review sheets to teach what they had learned to their peers. Students' self-explanations and group discussions were audiotaped. However, to answer Research Question 3, transcription and analyses of the protocols were only conducted for the two group conditions. To this end, the transcripts were converted into text-only format by two research assistants (reliability of parsing = 1.00) and transferred into the qualitative analysis software (ATLASti, 1995). Finally, all participants were asked to answer the program-rating questionnaire at their own pace. They were given 5 min to answer the retention test and 3 min to answer each of the four questions in the transfer test. Participants worked individually on the program-rating questionnaire and the retention and transfer tests. Fig. 1 shows the learning and assessment phases for each learning condition. As can be seen in the figure, the learning times were constrained to ensure that the overall cognitive engagement time of each learning condition was equivalent.

2.5. Protocol analyses

The protocol analyses of students' interactions were based on the macrocode and microcode schemes and procedures described by Hogan et al. (2000). The final schemes emerged inductively through several iterations on the protocols of four triads and were then applied to the remaining transcripts. This was done independently by two coders who were blind to condition. The initial coding agreement was 79%. Disagreements between the two coders were resolved through discussion and further review of the disputed segments.

2.5.1. Microcodes

The units of analysis for microcoding were students' statements, the smallest meaningful codable units of speech within a conversational turn. Conversational turns began when a student took the floor in a conversation, ended when another student took the floor, and typically included more than one statement.
Statements were then classified into three cognitive levels: retention, elaboration, and metacognition. Retention statements included recall of information provided in the multimedia program and typically reflected a reliance on recall rather than comprehension to solve problems, such as "I think we need a thick leaf. We chose that in the other planet where there was no rain." Elaboration statements included meaningful connections of the information provided in the multimedia program with prior knowledge, evidence that the student had derived scientific principles from the program, or any other evidence that demonstrated that s/he understood the information given by extending knowledge beyond what was taught in the program. An example is: "I always wondered why cacti have such short roots; I guess they need to be short to soak the little rain from the surface." Metacognition statements included any reference to the process of evaluating previous learning experiences or reflecting on the learning task. An example is: "We still need to go over the leaves before we can move on."
2.5.2. Macrocodes

The unit of analysis for the macrocodes was an interaction, which was defined as a series of conversational turns delimited by statements that initiate a new focus, such as moving from a discussion of root design to one of stem design for the same weather conditions, or moving from a discussion of root design for a certain weather condition to that of root design for a different weather condition. We coded the jigsaw and cooperative interactions into the following patterns: consensual, nonconsensual, responsive, and elaborative.

Interactions were coded as consensual when only one student contributed significantly to the discussion through elaborative or metacognitive statements. One or more peers served as a minimally verbally active audience by simply agreeing, having a neutral reaction, or repeating the other's statements verbatim. The following is an example of a consensual interaction:

Student 1: Every time you have a high water table you want the roots to be shallow and branching. If you have shallow, branching, and thick, they can withstand a high water table.
Student 2: OK, I get it.

Interactions were coded as nonconsensual when one student made contributing statements to the discussion and at least one other student challenged or questioned the statements. The following is an example of a nonconsensual interaction:

Student 1: Remember we were discussing the cactus? Roots were branching, shallow, and thick, to hold the water.
Student 2: Why would you hold water in the roots? I think the roots don't have to be thick. They are not designed to hold water like the leaves. But let's leave the roots for now and keep going.

Interactions were coded as responsive when two students both contributed substantive questions and responses to the discussion. The following is an example of a responsive interaction:

Student 1: Hmmm short, no bark because of the temperature. Thin or thick? I don't know. Do you?
Student 2: I think that maybe if the stem is thin it will help the plant survive the high wind.
Student 1: Well, but how about the low rain here?
Student 2: Oh yeah, it has to be thick to store water. Maybe thick and short then.

Lastly, interactions were coded as elaborative when all students contributed multiple substantive statements that built on or clarified another student's prior statement. Elaborations included linking a new idea to someone else's idea, correcting someone's idea with an undisputed statement, or disagreeing with an idea and offering a counterargument. An example of an elaborative interaction is:

Student 1: OK, the leaves have to be large because there is no sunlight. Remember the photosynthesis deal? High temperature is new though.
Student 2: I was thinking that the heat may scorch the leaves if they are big.
Student 3: Actually, we can make the leaves thin to perspire.
Student 1: Yeah, even if it is real hot, if the sun is not hitting the leaves, they'll be alright.
Student 2: Hmm, so are large thin leaves the best combination?
Student 1: I think so.
Student 3: I agree, let's check and see.

Fig. 1. Learning and assessment phases and approximate duration of each for the three learning groups.

Individual group. Learning: learned about the 3 modules by individually interacting with the pedagogical agent (approximately 30 min); reviewed the solutions to the 4 problems presented by the program using self-explanations (16 min). Assessment: completed the program-rating questionnaire (2 min), the retention test (5 min), and the problem-solving transfer test (12 min).

Cooperative group. Learning: learned about the 3 modules by interacting with the pedagogical agent and peers (approximately 45 min). Assessment: completed the program-rating questionnaire (2 min), the retention test (5 min), and the problem-solving transfer test (12 min).

Jigsaw group. Learning: learned about 1 module by individually interacting with the pedagogical agent (approximately 12 min); taught the assigned module and learned about the other two modules by interacting with peers (30 min). Assessment: completed the program-rating questionnaire (2 min), the retention test (5 min), and the problem-solving transfer test (12 min).
3. Results

Alpha was set at 0.05 for all statistical tests and was protected when conducting multiple tests; Fisher LSD post hoc tests were conducted as follow-up analyses. An ANOVA with learning group as between-subjects factor and the number of minutes engaged with the learning materials as the dependent variable revealed no significant differences among groups, F(2, 84) = 2.31, p = 0.11 (M = 47.99, 46.80, and 46.63 and SD = 2.51, 2.89, and 2.25 for the individual, cooperative, and jigsaw groups, respectively). Table 1 shows the mean scores and corresponding standard deviations for the three groups on measures of retention, transfer, situational interest, and cognitive load.

Table 1
Means (and SD) on retention and transfer tests, and on the two factors of the program-rating questionnaire (learning perceptions) as a function of the three learning groups.

Group         Retention test   Transfer test     Situational interest   Cognitive load
              M (SD)           M (SD)            M (SD)                 M (SD)
Individual    6.93 (1.73)      17.70a (4.05)     5.33d (2.51)           5.07e (1.63)
Jigsaw        7.23 (1.67)      15.11a,b (3.81)   5.30c (2.18)           6.17e (1.57)
Cooperative   7.13 (1.63)      18.70b (3.84)     6.90c,d (2.77)         5.27 (1.77)

Note. Means that share a superscript are statistically different from one another.

3.1. Research Question 1: How do the three approaches affect students' learning?

To answer this question, we conducted a MANCOVA with learning group as between-subjects factor, students' scores on the retention and transfer tests as dependent variables, and students' prior-knowledge score as a covariate. The analysis revealed a significant difference between groups on the dependent variables, Wilks's Λ = 0.83, F(4, 160) = 3.83, p = 0.005, partial η² = 0.09. Separate ANOVAs on each dependent variable revealed that the groups did not differ on retention, a finding that is probably due to the shallowness of this learning measure. On the other hand, there were significant differences among groups on the transfer test, F(2, 81) = 6.92, p = 0.002, partial η² = 0.15. Post hoc tests showed that the jigsaw group produced lower scores on the transfer test than the cooperative group (p = 0.001) and marginally lower scores than students who learned individually (p = 0.05).

3.2. Research Question 2: How do the three approaches affect students' learning perceptions?

To answer this question, we conducted a MANOVA with learning group as between-subjects factor and the situational interest and cognitive load scores as dependent measures. The analysis revealed a significant difference between groups on the dependent variables, Wilks's Λ = 0.84, F(4, 166) = 3.74, p = 0.006, partial η² = 0.08. Separate ANOVAs on each dependent variable revealed that the groups differed in both situational interest and cognitive load ratings, F(2, 84) = 3.95, p < 0.05, partial η² = 0.09, and F(2, 84) = 3.59, p < 0.05, partial η² = 0.08, respectively. Post hoc tests showed that the cooperative group produced higher interest ratings than the jigsaw group (p < 0.05) and marginally higher interest ratings than the individual group (p = 0.06). In addition, jigsaw participants produced higher cognitive load ratings than participants who learned individually (p < 0.05). No other differences were noted.

3.3. Research Question 3: How do the jigsaw and cooperative approaches affect students' knowledge construction?

To answer this question, we conducted two analyses on the protocol data. First, we conducted a MANOVA using learning group (cooperative and jigsaw) as between-subjects factor and the proportion of retention, elaboration, and metacognitive statements produced during the group discussions as dependent variables (means and corresponding standard deviations shown in Table 2).
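The group comparisons reported in this section follow a standard between-subjects design. As a minimal illustrative sketch (the data below are synthetic, loosely modeled on the Table 1 transfer values, and this is not the study's analysis code; the omnibus MANCOVA/MANOVA steps would require a multivariate routine such as the one in statsmodels), a single-measure one-way ANOVA across the three groups can be run with SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic transfer-test scores for three groups of 29 students each
# (87 students total, giving the same df = (2, 84) as in the paper);
# means/SDs are fabricated for illustration only.
individual = rng.normal(17.7, 4.0, 29)
jigsaw = rng.normal(15.1, 3.8, 29)
cooperative = rng.normal(18.7, 3.8, 29)

# One-way between-subjects ANOVA on the single transfer measure
f_stat, p_value = stats.f_oneway(individual, jigsaw, cooperative)
print(f"F(2, 84) = {f_stat:.2f}, p = {p_value:.3f}")
```

A significant omnibus F would then be followed up with pairwise post hoc comparisons, as the Fisher LSD tests are used in the analyses reported here.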
The analysis revealed a significant difference between groups on the dependent variables, Wilks's Λ = 0.19, F(2, 57) = 123.88, p < 0.001, partial η² = 0.81. Separate ANOVAs on each dependent variable revealed significant differences between the groups on the proportion of retention, elaboration, and metacognitive statements produced during their cooperation process, F(1, 58) = 31.67, p < 0.001, partial η² = 0.35, F(1, 58) = 246, p < 0.001, partial η² = 0.81, and F(1, 58) = 7.48, p = 0.008, partial η² = 0.11, respectively. The jigsaw group produced a larger proportion of retention and metacognitive statements than the cooperative group, which in turn produced a larger proportion of elaboration statements than the jigsaw group.

Table 2
Mean (and SD) proportion of retention, elaboration, and metacognition statements produced by jigsaw and cooperative groups.

Group         Retention      Elaboration    Metacognition
              M (SD)         M (SD)         M (SD)
Jigsaw        0.68a (0.10)   0.19b (0.04)   0.14c (0.08)
Cooperative   0.56a (0.06)   0.36b (0.04)   0.08c (0.07)

Note. Means that share a superscript are statistically different from one another.

Second, we conducted a MANOVA using learning group as between-subjects factor and the relative amount of consensual, nonconsensual, responsive, and elaborative interactions produced during group discussions as dependent variables. The analysis revealed significant differences between groups, Wilks's Λ = 0.16, F(3, 16) = 28.74, p < 0.001, partial η² = 0.84. Post hoc ANOVAs revealed that the cooperative group produced relatively more nonconsensual and elaborative interactions than the jigsaw group, F(1, 18) = 9.37, p = 0.007, partial η² = 0.34, and F(1, 18) = 33.98, p < 0.001, partial η² = 0.65, respectively, whereas the jigsaw group produced relatively more consensual interactions than the cooperative group, F(1, 18) = 20.72, p < 0.001, partial η² = 0.54 (means and standard deviations shown in Table 3).

Table 3
Mean (and SD) proportion of consensual, nonconsensual, responsive, and elaborative interactions produced by jigsaw and cooperative groups.

Group         Consensual     Nonconsensual   Responsive    Elaborative
              M (SD)         M (SD)          M (SD)        M (SD)
Jigsaw        0.60a (0.18)   0.00b (0.01)    0.40 (0.18)   0.00c (0.00)
Cooperative   0.15a (0.25)   0.10b (0.10)    0.42 (0.16)   0.33c (0.18)

Note. Means that share a superscript are statistically different from one another.

4. Discussion

The aim of this research was to compare the effects of learning botany with an agent-based multimedia program using three learning approaches: learning individually, learning with a traditional cooperative learning method, and learning with a jigsaw method. This work, although exploratory, contributes to the growing understanding of learning individually and learning together with agent-based multimedia programs. Most of what is known about the conditions that foster learning from technology-based environments is based on individual approaches to learning (Dillenbourg, 1999). Drawing from cooperative learning theory, we predicted that the cooperative and jigsaw groups would demonstrate higher performance on learning measures (Hypothesis 1) and report higher interest ratings (Hypothesis 2) than the individual group (Johnson & Johnson, 2004). In addition, as predicted by cognitive load theory, we expected the cooperative and jigsaw groups to report higher cognitive load ratings than the individual group (Hypothesis 3). We summarize our findings and their implications next.

4.1. Theoretical and practical implications

The present study showed that students who learned with the cooperative approach perceived the learning experience as more interesting than those who learned with individual or jigsaw approaches. This finding is consistent with the literature comparing cooperative and individual learning methods and with current efforts in developing cooperative agent-based environments for school and online learning (Linn, 2006; Songer, 2007; Stahl, 2002). The finding that the cooperative group gave higher interest ratings than the individual group supports Hypothesis 2. On the other hand, although the cooperative group outperformed the jigsaw group on the transfer measure, cooperative and individual groups produced comparable transfer scores. This finding fails to support Hypothesis 1. Furthermore, it is at odds with the small learning advantage found in past research for learning in small groups with technology (Lou, Abrami, & d'Apollonia, 2001). Several hypotheses can help explain this finding. The first is that, in order to make engagement time similar for all three learning groups, we asked students who learned individually to self-explain their solutions to the problems. It is possible that this additional review facilitated students' organization of the materials prior to the testing session that followed. For instance, several studies have shown that self-explanations can deepen students' understanding (Chi, de Leeuw, Chiu, & La Vancher, 1994; Moreno & Mayer, 2005). A second hypothesis is that the explanatory feedback presented by the pedagogical agent in the Design-A-Plant program may have overshadowed the benefits of cooperative learning by reducing the need to rely on peer discussions to construct knowledge together. Research shows that the amount of feedback available in instructional programs can moderate the effects of social interactions on students' performance (Lou et al., 2001). Additional research is needed to test this hypothesis by investigating whether the superiority of group performance over individual performance is more pronounced when no or minimal feedback is available from instruction.

As predicted by Hypothesis 3, students in jigsaw groups perceived higher cognitive load during the learning experience than those who learned individually. Moreover, they were less able to transfer what they learned to solve novel problems than the individual and cooperative groups, suggesting that the jigsaw method may not be an effective approach to agent-based multimedia learning. Nevertheless, this conclusion should be approached with caution because the jigsaw method is not as widely used as the cooperative method.
Although no data were collected from the participants in this regard, their instructor reported that students regularly engaged in traditional cooperative learning activities but had no experience working with the jigsaw method. Because the effects of cooperative learning are significantly more positive when students have group work experience or instruction (Abrami, Lou, Chambers, Poulsen, & Spence, 2000; Lou, Abrami, & Spence, 2000), the lower performance of this group should be interpreted as reflecting students’ best attempts to interact productively with an unfamiliar method. However, the analysis of students’ statements offers alternative interpretations for the lower transfer scores of the jigsaw group. First, the jigsaw approach promoted a relatively larger number of retention and metacognitive statements
whereas the cooperative approach promoted a relatively larger number of elaboration statements. This finding indicates that the jigsaw participants focused more on transmitting and receiving information and regulating their progress towards the learning goals than on elaborating on the instructional materials. A potential interpretation for the retention pattern is that the emphasis of the jigsaw approach on developing team members' expertise may have inadvertently prompted nonexpert students to passively accept the explanations of their peers rather than actively integrate the new knowledge with their own schemas. Likewise, the relatively larger number of metacognitive statements produced by the jigsaw group may be the result of the stronger interdependence created by this approach. Because each group member had developed expertise in only one topic, it is likely that students were more aware of their need to evaluate and regulate learning during cooperation to achieve the goal of learning about all topics. Finally, the analysis of students' interactions indicated that the jigsaw group produced relatively more consensual interactions than the cooperative group, whereas the cooperative group produced relatively more nonconsensual and elaborative interactions than the jigsaw group. The predominance of consensual interactions during jigsaw discussions seems to strengthen the idea that this learning approach promoted knowledge transmission rather than knowledge co-construction among the triad members.

4.2. Limitations and suggestions for future research

It is important to note that the present study was not aimed at advancing a general theory of how technology supports cooperation but rather at exploring how individual and different cooperative arrangements may affect students' learning and perceptions in agent-based environments.
Therefore, the conclusions should be limited to the particular characteristics and affordances that these learning environments offer to individuals and groups of learners. For instance, the Design-A-Plant program uses a guided-discovery method to help an individual learner construct knowledge; yet, it was not designed to provide the orientation and support that a group of learners may need to engage in effective cooperation (McLoughlin & Luca, 2001). Specialized intelligent agents may be needed to guide students' cooperation and further enhance agent-based instruction (Constantino-González, Suthers, & Santos, 2003). Although the research on agent scaffolds for group discussions is scant, a recent study of online learning found that pedagogical
approaches that fostered perspective taking and negotiation of meaning led to higher levels of knowledge construction (Häkkinen, Järvelä, & Mäkitalo, 2003). The question of which technologies should be integrated into agent-based multimedia environments to foster cooperative learning still needs to be answered by future research. In addition, the experimental nature of this study imposed limitations on some conditions for effective group cooperation. First, research on cooperative learning suggests that, to maximize the likelihood of meaningful and rich interactions, cooperative learning groups should be heterogeneous in gender, cultural and language background, and ability (Webb & Palincsar, 1996). Because the participants of this study had to be randomly assigned to the groups, it was not possible to manipulate the heterogeneity of group members. Second, past research in cooperative learning settings is typically situated in an authentic classroom environment, where students work together for several weeks to accomplish a learning objective (Box & Little, 2003; Huber, 2003). In contrast, although the participants of this study knew each other because they were all enrolled in the same undergraduate course, their cooperation consisted of a one-time experience in a research laboratory and the learning objective was not part of their curriculum. Because the effectiveness of cooperative learning activities also depends on the quality of the interactions and social skills students display during learning (O'Donnell & Dansereau, 1992; Webb & Palincsar, 1996), an additional limitation is that students were not given cooperative-skills training prior to participating in the study. This shortcoming may help explain why the cooperative groups did not outperform the individual learners on transfer and why participants who worked in the less familiar jigsaw format underperformed relative to their counterparts.
Future research should examine the extent to which the statement and interaction patterns observed in this study result from students' lack of experience with the jigsaw method or from the specific type of discourse that the jigsaw model itself elicits. One suggestion for overcoming our limitations and strengthening the design of the present study in future research is to embed the study in authentic classrooms using instructional materials that are relevant to students' curriculum, which would increase the external validity of the research. Alternatively, the present study could be replicated using either an equated quasi-experimental design, where the units of study are classrooms in which different knowledge-construction models are implemented, or a single-case design, which involves repeated measurement of the same student in different conditions over time. Our findings also suggest that the present study could be strengthened by ensuring that students have sufficient experience and skills to learn successfully with others. For cooperative learning to work, group members should have good interpersonal skills when interacting with peers, encourage the efforts of the group, and engage in effective academic argumentation (McLoughlin & Luca, 2001). These skills need to be explicitly taught and practiced
before groups are asked to tackle a particular task requiring knowledge construction and problem solving (Johnson & Johnson, 1999). Despite these limitations, this exploratory study offers insights that have merit when considering future research. Much is still to be learned about the cognitive and affective consequences of constructing knowledge together and alone in agent-based learning environments.

Acknowledgements

This material is based upon work supported by the U.S.A. National Science Foundation under Grant No. 0238385. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the funding agency.

References

Abrami, P. C., Lou, Y., Chambers, B., Poulsen, C., & Spence, J. (2000). Why should we group students within-class for learning? Educational Research and Evaluation, 6(2), 158–179.
Aronson, E. (2002). Building empathy, compassion, and achievement in the Jigsaw classroom. In J. Aronson (Ed.), Improving academic achievement (pp. 209–225). New York: Academic.
Aronson, E., & Patnoe, S. (1997). The jigsaw classroom: Building cooperation in the classroom (2nd ed.). New York: Longman.
Artut, P., & Tarim, K. (2007). The effectiveness of Jigsaw II on prospective elementary school teachers. Asia-Pacific Journal of Teacher Education, 35(2), 129–141.
ATLAS.ti. (1995). Statistics software for qualitative data analysis [Computer program]. Retrieved December 9, 2008, from http://www.atlasti.com
Azevedo, R., & Bernard, R. M. (1995). A meta-analysis of the effects of feedback in computer-based instruction. Journal of Educational Computing Research, 13(2), 111–127.
Barker, T., Jones, S., Britton, C., & Messer, D. (2002). The use of a cooperative student model of learner characteristics to configure a multimedia application. User Modeling and User-Adapted Interaction, 12, 207–241.
Box, J., & Little, D. (2003). Cooperative small-group instruction combined with advanced organizers and their relationship to self-concept and social studies achievement of elementary school students. Journal of Instructional Psychology, 30(4), 285–287.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (1999). How people learn. Washington, DC: National Academy Press.
Brown, A. L., & Campione, J. C. (1994). Guided discovery in a community of learners. In K. McGilly (Ed.), Integrating cognitive theory and classroom practice: Classroom lessons (pp. 229–270). Cambridge, MA: MIT Press.
Brown, K., & Cole, M. (2000). Socially-shared cognition: System design and the organization of collaborative research. In D. H. Jonassen, & S. L. Land (Eds.), Theoretical foundations of learning environments (pp. 197–214). Mahwah, NJ: Erlbaum.
Bruner, J. S. (1966). Toward a theory of instruction. Cambridge, MA: Harvard University Press.
Bruner, J. S. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.
Chi, M. T. H., de Leeuw, N., Chiu, M. H., & La Vancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439–477.
Coleman, E. B. (1998). Using explanatory knowledge during collaborative problem solving in science. The Journal of the Learning Sciences, 7(3&4), 387–427.
Constantino-González, M. D. L. A., Suthers, D. D., & Santos, J. G. E. D. L. (2003). Coaching web-based collaborative learning based on problem-based solution differences and participation. International Journal of Artificial Intelligence in Education, 13, 156–169.
Davis, E. A., & Linn, M. C. (2000). Scaffolding students' knowledge integration: Prompts for reflection in KIE. International Journal of Science Education, 22, 819–837.
Dillenbourg, P. (1999). What do you mean by collaborative learning? In P. Dillenbourg (Ed.), Collaborative-learning: Cognitive and computational approaches (pp. 1–19). Oxford, England: Elsevier.
Gauvain, M. (2001). The social context of cognitive development. New York: Guilford.
Graesser, A. C., Chipman, P., Haynes, B. C., & Olney, A. (2005). AutoTutor: An intelligent tutoring system with mixed-initiative dialogue. IEEE Transactions on Education, 48, 612–618.
Greeno, J. G., Collins, A. M., & Resnick, L. B. (1996). Cognition and learning. In D. Berliner, & R. Calfee (Eds.), Handbook of educational psychology (pp. 15–46). New York: Macmillan.
Häkkinen, P., Järvelä, S., & Mäkitalo, K. (2003). Sharing perspectives in virtual interaction: Review of methods of analysis. In B. Wasson, S. Ludvigsen, & U. Hoppe (Eds.), Designing for change in network learning environments (pp. 395–404). Dordrecht, The Netherlands: Kluwer.
Hänze, M., & Berger, R. (2007). Cooperative learning, motivational effects, and student characteristics: An experimental study comparing cooperative learning and direct instruction in 12th grade physics classes. Learning and Instruction, 17(1), 29–41.
Harrer, A., McLaren, B. M., Walker, E., Bollen, L., & Sewall, J. (2006). Creating cognitive tutors for collaborative learning: Steps toward realization. User Modeling and User-Adapted Interaction, 16, 175–209.
Hewitt, J., & Scardamalia, M. (1998). Design principles for distributed knowledge building processes. Educational Psychology Review, 10, 75–96.
Hogan, K., Nastasi, B., & Pressley, M. (2000). Discourse patterns and collaborative scientific reasoning in peer and teacher-guided discussions. Cognition and Instruction, 17(4), 379–432.
Holt, G. R., & Morris, A. W. (1993). Activity theory and the analysis of organization. Human Organization, 52, 97–109.
Huber, G. (2003). Processes of decision-making in small learning groups. Learning and Instruction, 13(3), 255–269.
Jacobson, M. J., & Kozma, R. B. (Eds.). (2000). Innovations in science and mathematics education: Advanced designs for technologies of learning. Mahwah, NJ: Erlbaum.
Johnson, D. W., & Johnson, R. T. (1999). Making cooperative learning work. Theory into Practice, 38, 67–73.
Johnson, D. W., & Johnson, R. T. (2004). Cooperation and the use of technology. In D. H. Jonassen (Ed.), Handbook of research on educational communications and technology (pp. 785–811). Mahwah, NJ: Erlbaum.
Jonassen, D. H., Peck, K. L., & Wilson, B. G. (1999). Learning with technology: A constructivist perspective. Upper Saddle River, NJ: Merrill.
de Jong, T. (2005). The guided discovery principle in multimedia learning. In R. Mayer (Ed.), Cambridge handbook of multimedia learning (pp. 215–228). New York: Cambridge University Press.
Kaartinen, S., & Kumpulainen, K. (2002). Collaborative inquiry and the construction of explanations in the learning of science. Learning and Instruction, 12, 189–212.
Kagan, S. (1994). Cooperative learning. San Juan Capistrano, CA: Kagan Cooperative Learning.
Lajoie, S. P. (Ed.). (2000). Computers as cognitive tools: No more walls. Mahwah, NJ: Erlbaum.
Lave, J. (1988). Cognition in practice. Cambridge, UK: Cambridge University Press.
van Lehn, K., Graesser, A. C., Jackson, G. T., Jordan, P., Olney, A., & Rose, C. P. (2007). When are tutorial dialogues more effective than reading? Cognitive Science, 31(1), 3–62.
Lester, J. C., Stone, B., & Stelling, G. (1999). Lifelike pedagogical agents for mixed-initiative problem solving in constructivist learning environments. User Modeling and User-Adapted Interaction, 9, 1–44.
Linn, M. C. (2006). WISE teachers: Using technology and inquiry for science instruction. In E. A. Ashburn, & R. E. Floden (Eds.), Meaningful learning using technology: What educators need to know (pp. 45–69). New York: Teachers College Press.
Lou, Y., Abrami, P. S., & d'Apollonia, S. (2001). Small group and individual learning with technology: A meta-analysis. Review of Educational Research, 71, 449–521.
Lou, Y., Abrami, P. S., & Spence, J. C. (2000). Effects of within-class grouping on student achievement: An exploratory model. The Journal of Educational Research, 94(2), 101–112.
Macromedia. (1995a). Director 4.0.4 [Computer program]. San Francisco: Author.
Macromedia. (1995b). SoundEdit 16 [Computer program]. San Francisco: Author.
Mayer, R. E. (2005). The Cambridge handbook of multimedia learning. New York: Cambridge University Press.
McClintock, R. (1999). The educators manifesto: Renewing the progressive bond with posterity through the social construction of digital learning communities. New York: Institute for Learning Technologies, Teachers College.
McLoughlin, C., & Luca, J. (2001). Investigating processes of social knowledge construction in online environments. In C. Montgomerie, & J. Viteli (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2001 (pp. 1287–1292). Chesapeake, VA: AACE.
van der Meij, J., & de Jong, T. (2006). Learning with multiple representations: Supporting students' learning with multiple representations in a dynamic simulation-based learning environment. Learning and Instruction, 16, 199–212.
Moreno, R., & Mayer, R. E. (2004). Personalized messages that promote science learning in virtual environments. Journal of Educational Psychology, 96, 165–173.
Moreno, R., & Mayer, R. E. (2005). Role of guidance, reflection, and interactivity in an agent-based multimedia program. Journal of Educational Psychology, 97, 117–128.
Moreno, R., & Mayer, R. E. (2007). Interactive multimodal learning environments. Educational Psychology Review, 19, 309–326.
Moreno, R., Mayer, R. E., Spires, H., & Lester, J. (2001). The case for social agency in computer-based teaching: Do students learn more deeply when they interact with animated pedagogical agents? Cognition and Instruction, 19, 177–213.
O'Donnell, A. M., & Dansereau, D. F. (1992). Scripted cooperation in student dyads: A method for analyzing and enhancing academic learning and performance. In R. Hertz-Lazarowitz, & N. Miller (Eds.), Interaction in cooperative groups: The theoretical anatomy of group learning (pp. 120–144). New York: Cambridge University Press.
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38, 1–4.
Paas, F., Tuovinen, J., Tabbers, H., & Van Gerven, P. W. M. (2003). Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist, 38, 63–71.
Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction, 1, 117–175.
Palincsar, A. S., & Herrenkohl, L. R. (1999). Designing collaborative contexts: Lessons from three research programs. In A. M. O'Donnell, & A. King (Eds.), Cognitive perspectives on peer learning (pp. 151–177). Mahwah, NJ: Erlbaum.
Piaget, J. (1970). Science of education and the psychology of the child. New York: Orion.
Reiser, R. A. (2002). A history of instructional design and technology. In R. A. Reiser, & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology (pp. 26–53). Upper Saddle River, NJ: Merrill Prentice Hall.
Reiser, B. J., Tabak, I., Sandoval, W. A., Smith, B., Steinmuller, F., & Leone, T. J. (2001). BGuILE: Strategic and conceptual scaffolds for scientific inquiry in biology classrooms. In S. M. Carver, & D. Klahr (Eds.), Cognition and instruction: Twenty five years of progress (pp. 263–305). Mahwah, NJ: Erlbaum.
Rohrbeck, C. A., Ginsburg-Block, M. D., Fantuzzo, J. W., & Miller, T. R. (2003). Peer-assisted learning interventions with elementary school students: A meta-analytic review. Journal of Educational Psychology, 94, 240–257.
Salomon, G. (1993). Distributed cognitions: Psychological and educational considerations. New York: Cambridge University Press.
Slavin, R. E. (1995). Cooperative learning: Theory, research, and practice (2nd ed.). Boston: Allyn & Bacon.
Slavin, R. E., Hurley, E. A., & Chamberlain, A. (2003). Cooperative learning and achievement: Theory and research. In W. M. Reynolds, & G. E. Miller (Eds.), Handbook of psychology: Educational psychology, Vol. 7 (pp. 177–198). New York: Wiley.
Songer, N. B. (2007). Digital resources versus cognitive tools: A discussion of learning science with technology. In S. Abell, & N. Lederman (Eds.), Handbook of research on science education (pp. 471–491). Mahwah, NJ: Erlbaum.
Souvignier, E., & Kronenberger, J. (2007). Cooperative learning in third graders' jigsaw groups for mathematics and science with and without questioning training. British Journal of Educational Psychology, 77(4), 755–771.
Springer, L., Stanne, M. E., & Donovan, S. S. (1999). Effects of small-group learning on undergraduates in science, mathematics, engineering, and technology: A meta-analysis. Review of Educational Research, 69(1), 21–51.
Stahl, G. (2002). Rediscovering CSCL. In T. Koschmann, R. Hall, & N. Miyake (Eds.), CSCL 2: Carrying forward the conversation (pp. 169–181). Hillsdale, NJ: Erlbaum.
Sweller, J. (1999). Instructional design in technical areas. Camberwell, Australia: ACER Press.
Uribe, D., Klein, J., & Sullivan, H. (2003). The effect of computer-mediated collaborative learning on solving ill-defined problems. Educational Technology Research and Development, 51(1), 5–19.
Vauras, M., Iiskala, T., Kajamies, A., Kinnunen, R., & Lehtinen, E. (2003). Shared-regulation and motivation of collaborating peers: A case analysis. Psychologia: An International Journal of Psychology in the Orient, 46, 19–37.
Vygotsky, L. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Webb, N. M. (1989). Peer interaction and learning in small groups. International Journal of Educational Research, 13, 21–40.
Webb, N. M., & Palincsar, A. S. (1996). Group processes in the classroom. In D. C. Berliner, & R. C. Calfee (Eds.), Handbook of educational psychology (pp. 841–876). New York: Macmillan.
Woodul, C. E., Vitale, M. E., & Scott, B. J. (2000). Using a cooperative multimedia learning environment to enhance learning and affective self-perceptions of at-risk students in grade 8. Journal of Educational Technology Systems, 28(3), 239–252.