Measurement and assessment in computer-supported collaborative learning


Computers in Human Behavior 26 (2010) 806–814


Carmen L.Z. Gress b,*, Meghann Fior a, Allyson F. Hadwin a,*, Philip H. Winne b

a Faculty of Education, University of Victoria, Victoria, BC, Canada
b Faculty of Education, Simon Fraser University, Burnaby, BC, Canada V5A 1S6

☆ Portions of this paper were presented at the Annual Meeting of the Canadian Society for the Study of Education, York University, Toronto, ON, May 26–30, 2006. Support for this research was provided by grants from the Social Sciences and Humanities Research Council of Canada to A. F. Hadwin (410-2001-1263), P. H. Winne (410-2001-1263), and P. H. Winne (principal investigator) and A. F. Hadwin (co-investigator) (512-2003-1012).
* Corresponding authors. Tel.: +1 604 268 6690; fax: +1 604 291 3203 (C.L.Z. Gress). E-mail addresses: [email protected] (C.L.Z. Gress), [email protected] (A.F. Hadwin).
0747-5632/$ - see front matter © 2007 Elsevier Ltd. All rights reserved. doi:10.1016/j.chb.2007.05.012

Article history: Available online 12 September 2007

Keywords: Computer-supported collaborative learning; Measurement; Assessment

Abstract

The overall goal of CSCL research is to design software tools and collaborative environments that facilitate social knowledge construction via a valuable assortment of methodologies, theoretical and operational definitions, and multiple structures [Hadwin, A. F., Gress, C. L. Z., & Page, J. (2006). Toward standards for reporting research: a review of the literature on computer-supported collaborative learning. In Paper presented at the 6th IEEE International Conference on Advanced Learning Technologies, Kerkrade, Netherlands; Lehtinen, E. (2003). Computer-supported collaborative learning: an approach to powerful learning environments. In E. De Corte, L. Verschaffel, N. Entwistle & J. Van Merriënboer (Eds.), Unravelling basic components and dimensions of powerful learning environments (pp. 35–53). Amsterdam, Netherlands: Elsevier]. Various CSCL tools attempt to support constructs associated with effective collaboration, such as awareness tools to support positive social interaction [Carroll, J. M., Neale, D. C., Isenhour, P. L., Rosson, M. B., & McCrickard, D. S. (2003). Notification and awareness: Synchronizing task-oriented collaborative activity. International Journal of Human–Computer Studies 58, 605] and negotiation tools to support group social skills and discussions [Beers, P. J., Boshuizen, H. P. A. E., Kirschner, P. A., & Gijselaers, W. H. (2005). Computer support for knowledge construction in collaborative learning environments. Computers in Human Behavior 21, 623–643], yet few studies developed or used pre-existing measures to evaluate these tools in relation to the above constructs. This paper describes a review of the measures used in CSCL to answer three fundamental questions: (a) What measures are utilized in CSCL research? (b) Do measures examine the effectiveness of attempts to facilitate, support, and sustain CSCL? And (c) When are the measures administered? Our review has six key findings: there is a plethora of self-report yet a paucity of baseline information about collaboration and collaborative activities, findings in the field are dominated by 'after collaboration' measurement, there is little replication and an over-reliance on text-based measures, and an insufficient collection of tools and measures for examining processes involved in CSCL.

© 2007 Elsevier Ltd. All rights reserved.

1. Introduction

Computer-supported collaborative learning (CSCL) is one of the more dynamic research directions in educational psychology. Computers and various software programs were incorporated into education to aid the administration and measurement of solo and collaborative learning activities because software can: (a) be individualized in design and use, (b) represent problems more realistically, (c) display each step of a difficult problem-solving task, (d) afford group discussion and collaboration across distances, and (e) provide immediate feedback for monitoring and evaluating student progress (Baker & Mayer, 1999; Baker & O'Neil, 2002; Schacter, Herl, Chung, Dennis, & O'Neil, 1999). Not surprisingly, the increased prevalence and benefits of computer use in collaboration have spawned new directions for research in the field of educational psychology and beyond, demonstrated by studies in the learning sciences, computer science, human–computer interaction, instructional psychology, educational technology, and education (Baker & Mayer, 1999; Hadwin, Winne, & Nesbit, 2005; Lehtinen, 2003).

The overall goal of CSCL research is to design software tools and collaborative environments that facilitate social knowledge construction via a valuable assortment of methodologies, theoretical and operational definitions, and multiple structures (Hadwin, Gress, & Page, 2006; Lehtinen, 2003). CSCL environments such as CSILE/Knowledge Forum (Lipponen, 2000; Salovaara & Järvelä, 2003; Scardamalia & Bereiter, 1996) and gStudy (Winne et al., 2006) promote multiple collaborative learning models that vary by task, purpose, tools, access to product, access to peers, and theoretical position (see Gress, Hadwin, Page, & Church, 2010). The development and examination of various innovative and interactive software tools aims to facilitate and support individual and shared construction of knowledge, skills, process products (such as notes, drafts, and collaborative conversations) and final products via cueing, prompting, coaching and providing immediate feedback on both process and product (Hadwin et al., 2006; Kirschner, Strijbos, Kreijns, & Beers, 2004; Koschmann, 2001; Lehtinen, 2003; Salovaara & Järvelä, 2003). To empirically demonstrate the beneficial nature of these collaborative environments and tools, a focus on measurement tools, methods, and analysis is essential (Puntambekar & Luckin, 2003).

2. Measurement in CSCL

Measurement in CSCL consists of observing, capturing, and summarizing complex individual and group behaviours, from which researchers make reasonable inferences about learning processes and products. Factors affecting measurement in CSCL include individual differences, context, tool use, collaborative activities, and the various theoretical backgrounds of the researchers and instructors. These inferences and interpretations form assessments, which play a central role in guiding and driving student learning toward knowledge acquisition and learning outcomes (Chapman, 2003; Knight & Knight, 1995; Macdonald, 2003). Assessment targets learners' outcomes and infuses instruction with objective information, to stimulate deeper knowledge and motivate personal goals in students and educators (Baker & O'Neil, 2002). Measurement and assessment in CSCL can take one of three forms: assessing the individual about the individual, assessing the individual about the group, and assessing the group as a whole.

We are interested in the measurement of individual and shared learning processes, the steps each learner takes and retakes as they progress towards a learning outcome, typically tracked by process products such as notes, drafts, discussions, and traces of learner-to-learner and learner-to-computer interactions. Of particular interest is measuring process products in real time and finding a way to summarize and present these products to learners, providing opportunities for them to monitor, evaluate, and adapt their learning during collaborative activities. For example, Puntambekar and Luckin (2003) and Baker and O'Neil (2002) suggested learners gain a better understanding of their learning processes when provided opportunities to reflect on their collaborative learning products, such as notes, conversations, drafts, group management skills, and so on. These reflection opportunities arise when instructors or software programs provide real-time analysis of the artifacts learners produce, such as chat records, drafts, and learning objects, and process statistics, such as traces of learner-software interactions (Hmelo-Silver, 2003). Process measurement and real-time analysis, however, is highly complex and challenging, as it requires (a) measuring the cognitive steps taken by the individual and the group in the collaborative process, (b) measuring individual differences in these steps, (c) designing meaningful assessments of the processes, (d) developing analytical methods for understanding and analyzing collaborative processes and products, which includes dealing with a wide variety of interaction types and developing means for automatically and efficiently processing collaborative process data (logs and tracings) and products (demonstrations of learned skills and content) so they can be viewed by learners, educators and researchers (Lehtinen, 2003; Martínez, Dimitriadis, Rubia, Gómez, & de la Fuente, 2003), and (e) adapting methods for different contexts (Puntambekar & Luckin, 2003).
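To make the notion of process statistics concrete, the sketch below shows one way raw learner-software traces could be rolled up into the kinds of summaries described above (time active, action frequencies, action sequences). The event format and action names are illustrative assumptions, not a format used in the studies reviewed here.

```python
from collections import Counter
from datetime import datetime

# Hypothetical trace format: each logged event is (ISO timestamp, learner id, action).
# Field names and action labels are assumptions made for this illustration only.
events = [
    ("2007-03-01T10:00:05", "s1", "open_document"),
    ("2007-03-01T10:01:10", "s1", "create_note"),
    ("2007-03-01T10:02:30", "s2", "post_chat"),
    ("2007-03-01T10:04:00", "s1", "post_chat"),
    ("2007-03-01T10:07:45", "s2", "create_note"),
]

def summarize(trace):
    """Return per-learner action frequencies, active time span, and action sequence."""
    per_learner = {}
    for ts, learner, action in trace:
        t = datetime.fromisoformat(ts)
        entry = per_learner.setdefault(
            learner, {"counts": Counter(), "first": t, "last": t, "sequence": []}
        )
        entry["counts"][action] += 1
        entry["first"] = min(entry["first"], t)
        entry["last"] = max(entry["last"], t)
        entry["sequence"].append(action)
    return {
        learner: {
            "frequencies": dict(e["counts"]),
            "minutes_active": round((e["last"] - e["first"]).total_seconds() / 60, 1),
            "sequence": e["sequence"],
        }
        for learner, e in per_learner.items()
    }

print(summarize(events))
```

A summary like this, recomputed as new events arrive, is the kind of real-time process feedback the paragraph above argues learners rarely receive.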


3. Purpose of this paper

This paper stems from a literature review that identified current methods of measuring and assessing learning processes in CSCL. We wanted our comprehensive review of measurement tools and methods used in CSCL to describe the current state of the literature by answering three fundamental questions: (a) What measures are utilized in CSCL research? (b) Do measures examine the effectiveness of attempts to facilitate, support, and sustain CSCL? And (c) When are the measures administered?

For example, collaboration typically includes student-centered small group activities, in which learners develop the necessary skills to share the responsibility of being active, critical, creative co-constructors of learning processes and products. Conditions shown to facilitate and influence collaboration include, for example, positive interdependence, positive social interaction, individual and group accountability, interpersonal and group social skills, and group processing (Kreijns, Kirschner, & Jochems, 2003). Various CSCL tools attempt to support these constructs, such as awareness tools to support positive social interaction (Carroll, Neale, Isenhour, Rosson, & McCrickard, 2003) and negotiation tools to support group social skills and discussions (Beers, Boshuizen, Kirschner, & Gijselaers, 2005), yet few studies evaluate these tools in relation to the above conditions. They focus instead on comparing collaborative products or investigating tool usability or tool effects on collaborative products.

This paper describes the findings of our review. First, we answer the three questions stated above. Second, framed by our coding to meet the first objective, we highlight key findings and discuss potential directions for CSCL research. Finally, we explore how future research in CSCL, in particular the Learning Kit project, might contribute to developing a systematic and thorough approach for measuring collaborative processes and products using gStudy (Winne, Hadwin, & Gress, 2010; Winne et al., 2006).

4. Method

We conducted an extensive literature search for all articles related to CSCL from January 1999 to September 2006 in five academic databases: Academic Search Elite, Computer Science Index (which includes IEEE and ED/ITLib, formerly the AACE Digital Library), ERIC, PsycArticles, and PsycInfo. Search terms included variations and combinations of computer, collaboration, and learning. After the search, we focused on empirical studies, including case studies, as long as the focus of the study was collaboration among learners rather than software usability (23 studies), resulting in 186 articles. We acknowledge that some studies may be missing from this analysis, but we felt 186 articles should provide a strong representation of the field.

Initially we critically reviewed and coded each article to delineate contextual aspects of the literature, clarifying five broad aspects of CSCL research (see Gress et al., 2010): (1) the focus of the article (for example, was it CSCL, computer-supported collaborative work, computer-supported problem solving, or computer-mediated communication); (2) whether or not the technology proposed was designed to provide a CSCL environment or whether the technology was add-on, such as email, or stand-alone chat; (3) models of collaboration, defining mode and purpose of communication, level of knowledge construction, group membership, and individual access to the group project; (4) collaborative tools; and (5) collaborative support. Discrepancies were resolved through discussion to consensus between researchers. For this paper, we added a sixth coding category, research methods and design. We were interested in three main attributes of existing CSCL research: constructs of interest, measures and


methods, and measurement timing. To aid us in this task, we coded the following information.

4.1. Research question

In this category, we identified the research questions or statements provided by the author(s): for example, "this paper describes a case study that investigated students' sociocognitive processes in a multimedia-based science learning task. By focusing on students' discursive, cognitive and collaborative activity on a micro analytic level, the study describes the nature of students' strategic activity and social interaction whilst handling and processing information from a multimedia CD-ROM science encyclopaedia" (Kumpulainen, Salovaara, & Mutanen, 2001, p. 481).

4.2. Constructs of interest

In this category we identified the particular constructs of interest, for example, anonymity in chat (Barreto & Ellemers, 2002), awareness (Carroll et al., 2003), knowledge construction (Beers et al., 2005), and efficacy (King, 2003).

4.3. Measures, methods, and timing

First, we identified all the various measures, such as questionnaires and interviews, and methods, such as observations, discourse analysis, or trace data, used in collaborative research. We then grouped these measures and methods according to assessment timing (before, during, or after collaborative task[s]). Within the three categories of measurement timing, we divided the measures and methods into seven main categories of measurement identified in the first step: self-report, interview, observation, trace data, discussion/dialogue, performance/product, and feedback. The purpose of delineating the measures and methods within the framework of assessment timing was to highlight what measures are used and when, and what constructs have been overlooked.

5. Results

As stated above, we conducted a review of the CSCL literature to answer three fundamental questions: (a) What measures are utilized in CSCL research? (b) Do measures examine the effectiveness of attempts to facilitate, support, and sustain CSCL? And (c) When are the measures administered? To answer these questions, we focused on identifying the measures used to assess collaborative constructs in CSCL, when they measure these constructs, and the constructs or information of interest.

5.1. What measures are utilized in CSCL research and when?

We evaluated 186 empirical articles and found these studies incorporated 340 measures (and methods) of collaborative constructs (see Table 1). The majority of these measures were self-report questionnaires (33%) and products of collaboration (19%) assessing individual differences associated with collaboration as well as collaborative learning and knowledge construction. The numbers of measures used to assess process data and discussions and dialogues were approximately equal at 12% each, followed by interview data (10%), observations (9%), and prediction and/or feedback (5%). These numbers suggest much of the information known about collaborative processes and products comes from the first two forms of measurement available in CSCL: self-report measures that assess the individual about the individual, and the individual about the group. This is not surprising considering the ease of administration, low cost, and flexible nature of self-report measures.

Table 1
Coding framework: categories, measures, and examples

Self-report
  Measures: Questionnaires, surveys, and inventories
  Examples: Community Classroom Index (Rovai, 2002), Learning Styles by Honey and Mumford (1986, as cited by Shaw and Marlow (1999)), and the College and University Classroom Environment Inventory (Joiner et al., 2002)

Interviews
  Measures: All discussions between researchers, assistants, and participants
  Examples: "the post-test interview asked the subjects to rate their experience with the CSMS program on a 1–10 scale including ease of use and the nature and quality of recordings provided" (Riley et al., 2005, p. 1013)

Observations
  Measures: All methods of visually examining and documenting actions and utterances of participants, either directly or by videotape recording
  Examples: "The discussion was videotaped and transcribed as the students tried to understand a case of pernicious anemia, a blood disorder that causes nervous system problems. The entire transcript was coded for the types of questions and statements in the discourse" (Hmelo-Silver, 2003, p. 408)

Process data
  Measures: Estimates of time, frequency, and sequence, as well as trace data which examined participants' actions via the computer during the collaborative tasks
  Examples: Kester and Paas's (2005) examination of time on task and path through a physics simulation and Joiner and Issroff's (2003) use of trace diagrams to analyze joint navigation in a virtual environment

Discussions and dialogues
  Measures: Engaged purposeful conversation and/or verbal expressions coded as either asynchronous or synchronous communication
  Examples: True asynchronous communication tools were text-based email (van der Meij et al., 2005), discussion forums (Scardamalia & Bereiter, 1996), and text-based chat rooms (Wang et al., 2000). Asynchronous tools in synchronous functions included chat tools similar to AOL or IM where the context appears to be real time but actually works like fast email text submissions (Linder & Rochon, 2003). True synchronous communication tools included chat tools like ICQ, where a user can see all changes made to the system at all times (Van Der Puil et al., 2004)

Performance and products
  Measures: All output produced by participants' collaborative activities
  Examples: Grades (Strijbos et al., 2004), concept maps (Stoyanov & Kommers, 1999), notes (Lipponen et al., 2003), and essays (Erkens et al., 2005)

Feedback and/or prediction
  Measures: 1. Feedback from participants to researchers on CSCL tools, 2. individual feedback from the teacher, teacher assistant, or researcher on individual or group work, 3. participants' feedback on collaborative group members' actions and performance in the CSCL environment, and 4. prediction by instructors on the quality of after collaboration products
  Examples: "user feedback was obtained in several ways including: (1) users emailing their comments directly from within the application (2) polling users informally and (3) eliciting comments through regular meetings with several of our user groups" (Lee & Girgensohn, 2002, p. 82)


Of the three approximate times during which collaboration was assessed (before, during, and after), our review reveals the majority of measures used to assess some aspect of collaboration were administered after collaborative activity (51%), followed by during the activity (35%), while only 14% of the studies gathered baseline data on the collaborative construct of interest before the actual activity. This suggests the lion's share of information gained from CSCL studies is measured after collaborative activity, i.e. after potential changes to an individual's learning processes and skills have occurred, and is based on the final products of collaboration rather than the process.

A more detailed look at the review data uncovered some predictable patterns. As expected, some measures are better suited to assessments at certain times than others, such as self-report questionnaires for before and after activities and process measurements or observations during collaborative activities. As we can see in Table 2, the majority of studies assessing collaboration in the before-collaboration time slot used self-report questionnaires (74% of before measures, 10% of total measures), followed by performance indicators (11% of before, 1.5% of total), interviews (6% of before, 1% of total), with predictions or initial CSCL feedback and coded discussion data tied for last (4% of before, 0.6% of total each). Studies measuring collaborative constructs during collaboration did so mostly via discussion or dialogues and process data such as timing, frequency, and/or traces (each approximately 32% of during measures, 11% of total), followed by observations (video or live) (22.5% of during, 8% of total), self-report (5% of during, 2% of total), interviews (4% of during, 1.5% of total), performance (3% of during, 1% of total), and feedback (1% of during, 0.3% of total). As stated previously, most studies incorporated collaborative measurement after collaborative activity. These consisted mainly of self-report questionnaires (41% of after measures, 21% of total), followed by performance and products (33% of after, 17% of total), interviews (15% of after, 8% of total), feedback (9% of after, 4% of total), and observations of after-collaboration behaviours and skills and discussion data (each 1% of after, 0.6% of total).
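The percentages reported above follow directly from the raw counts in Table 2; the short sketch below, with the counts transcribed from that table, makes the arithmetic explicit (the variable names are ours, not part of any reviewed study).

```python
# Raw counts from Table 2 (measure type -> [before, during, after]).
counts = {
    "Self-report":         [35, 6, 71],
    "Interview":           [3, 5, 26],
    "Observations":        [0, 27, 2],
    "Process data":        [0, 38, 0],
    "Discussion":          [2, 39, 2],
    "Performance/product": [5, 4, 57],
    "Prediction/feedback": [2, 1, 15],
}

grand_total = sum(sum(row) for row in counts.values())                       # 340 measures
timing_totals = [sum(row[i] for row in counts.values()) for i in range(3)]   # [47, 120, 173]

for label, total in zip(["before", "during", "after"], timing_totals):
    # Reproduces the 14% / 35% / 51% split reported in the text.
    print(f"{label}: {total} measures, {100 * total / grand_total:.0f}% of all measures")

# The "%n" columns express a measure type's share of one timing slot,
# e.g. self-report as a share of before-collaboration measures (74.5%):
print(f"self-report share of before measures: "
      f"{100 * counts['Self-report'][0] / timing_totals[0]:.1f}%")
```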

Table 3
Category, type, and frequency of constructs assessed before collaboration in 186 studies

Individual difference measures
  Attitude towards science (n = 1), Attitudes toward collaborative learning (n = 4), Collaborative experience (n = 5), Collaborative skills (n = 2), Computer efficacy and/or literacy (n = 8), Epistemological beliefs (n = 1), Inductive thinking (n = 1), Instructional plans or beliefs (n = 2), Leadership abilities (n = 1), Learning skills, processes, styles, or approaches (n = 4), Metacognitive strategies (n = 1), Motivation (n = 3), Social networks from prior collaboration (n = 1)

Standardized test
  Bielefeld-screening (n = 1), Woodcock Johnson (n = 1)

Baseline information
  GPA (n = 3), Prior knowledge (n = 11), Student writing (n = 1), Student reflections and predictions (n = 1), Teacher predictions of students (n = 1)

We were surprised by the low number of before collaboration measures. Less than a fifth of the 186 studies included any before collaboration baseline measures other than basic descriptive information. Across the 186 studies, there were only 12 instances of measuring collaborative constructs before collaboration: attitudes toward collaboration, skills for collaboration, prior experience, and social networks. This indicates a large gap waiting to be addressed. For example, we did not find a single study that reported on students' readiness to collaborate, although a few did lead the students in group discussions on collaboration prior to activities (e.g. Fischer, Bruhn, Grasel, & Mandl, 2002). Shumar and Renninger (2002) point out that researchers and laypersons typically consider non-contributing individuals in a virtual environment as "lurkers" who are "shirking their social responsibility" (p. 6). Instead, the authors suggest, some of those individuals may not be ready to collaborate. The current literature, however, does not suggest a way to assess the state of collaborative readiness. Nardi (2005) begins to address this issue in her work on communicative readiness, a state in which "fruitful communication is likely" (p. 91).

5.2. Do measures examine the effectiveness of facilitating, supporting, and sustaining CSCL?

5.2.1. Before collaboration

Our review revealed that the before collaboration measures can be collapsed into three categories: individual difference measures, standardized testing, and baseline information. As demonstrated in Table 3, individual difference measures included measures of individual and collaborative learning attitudes and skills, epistemological beliefs, computer skills and experience, motivation, instructional planning and beliefs, and so on. Standardized testing included the occasional pre-screening for learning disabilities or an achievement test. Baseline information included pre-tests on content or experience, grade point averages, examples of work pre-intervention, and predictions by learners and instructors on participant behaviours during collaboration.

5.2.2. During collaboration

The variety of constructs assessed during collaboration was as extensive as the disciplines and authors conducting the studies. Because the measures used were mostly observations, coding of discussions, and process-oriented data, the constructs of interest were at times less defined (due to the nature of qualitatively coding process data) than would typically be seen with self-report measures.

Table 2
Frequency of measures used in 186 CSCL studies categorized by measurement timing

Measure type            Before                  During                  After                   Totals
                        n     %n      %N        n     %n      %N        n     %n      %N        n     %N
Self-report             35    74.47   10.29     6     5.00    1.76      71    41.04   20.88     112   32.94
Interview               3     6.38    0.88      5     4.17    1.47      26    15.03   7.65      34    10.00
Observations            0     0.00    0.00      27    22.50   7.94      2     1.16    0.59      29    8.53
Process data            0     0.00    0.00      38    31.67   11.18     0     0.00    0.00      38    11.18
Discussion              2     4.26    0.59      39    32.50   11.47     2     1.16    0.59      43    12.65
Performance/product     5     10.64   1.47      4     3.33    1.18      57    32.95   16.76     66    19.41
Prediction/feedback     2     4.26    0.59      1     0.83    0.29      15    8.67    4.41      18    5.29
Total n                 47    100     13.82     120   100     35.29     173   100     50.88     340   100

Note: %n = percentage of measures within that timing slot; %N = percentage of all 340 measures.


It was difficult to estimate (for some studies) whether studies were investigating similar constructs under different labels. For this reason, we do not present the individual frequencies of each investigated construct; instead, we note that each of these constructs appears to be investigated between one and five times.

Self-report measures incorporated during collaboration assessed daily journal logs of all computer activities to compare across conditions (e.g., Gay, Sturgill, Martin, & Huttenlocher, 1999), collective efficacy (e.g., Gonzalez, Burke, Santuzzi, & Bradley, 2003), satisfaction in group settings (e.g., Xie & Salvendy, 2003), student opinions of contributions to CSCL (via the experience sampling method) (e.g., Leinonen, 2003), and opinions of the tools and communication in collaboration (e.g., Scott, Mandryk, & Inkpen, 2003). The instances of product measures during collaboration (and feedback given to students during collaboration) included student written opinions of their progress (e.g., Neo, 2003) and various versions of notes and drafts leading to final products (e.g., Leinonen, 2003). The 71 instances of interview, observation, and dialogue/discussion measures (combined, as it was not always clear which measure was used to assess which construct) collected information on four broad (at times overlapping) collaborative constructs: what makes collaboration successful, collaborative tools, social interaction and communication, and knowledge construction/skill acquisition.

5.2.2.1. Investigating successful collaboration. Within this category, constructs of interest included the levels of: control of interaction, reflecting, evaluating, explaining/defining/example providing, sharing resources, providing evidence/justifications/references to literature, turn taking, monitoring, questions for clarification, on and off task behaviours, coordination of working on a project, role behaviours, interference, effects of various scaffolds, awareness of other collaborators, quality, efficiency, motivation, and productivity (e.g., Baker & Mayer, 1999; Barile & Durso, 2002; Campbell, 2004; Fletcher, 2001; Hakkarainen, 1999; Hmelo-Silver, 2003; Kim, Kolko, & Greer, 2002; Kraut, Fussell, & Siegel, 2003; Pata, Lehtinen, & Sarapuu, 2006; Salovaara, Salo, Rahikainen, Lipponen, & Jarvela, 2001; Soller, 2004).

5.2.2.2. Collaborative tool use. Constructs of interest included obstacles to software tool use, construction of joint workspace, tool use to increase learning, expectations of tools, and the effect of physical orientation when designing tools collaboratively (e.g., Carletta, Anderson, & McEwan, 2000; Joiner & Issroff, 2003).

5.2.2.3. Social interactions and communication. These studies focused on social interactions and communication within collaborative settings; the constructs of interest included patterns of interactions, equality of participation, level of cooperation, dynamics, social skills, engagement, decision making, divergence of perspectives, aspects of teamwork, and solidarity (e.g., Guzdial & Turns, 2000; Hakkarainen, 1999; Lapadat, 2004; Lingnau, Hoppe, & Mannhaupt, 2003; Puntambekar, 2006; Salovaara et al., 2001; Suthers, Hundhausen, & Girardeau, 2003).

5.2.2.4. Knowledge construction and skill acquisition. These studies focused on knowledge construction, thinking skills, inquiry learning, self-regulated learning, metacognition, knowledge representation, cognitive processes, learning strategies, goals, strategies for collaboration, and student interpretation of contributions to collaboration (e.g., Hmelo-Silver, 2003; Järvelä & Salovaara, 2004; Järvenoja & Järvelä, 2005; Salovaara & Järvelä, 2003).

Among the 38 instances of process statistic methods, studies investigated levels of participation and engagement, interaction patterns between collaborators and idea units, productivity, time constructing learning objects, user actions, search time, decision time, time on task, sharing of learning objects, sequence of tool use, navigation of tools, and social networks (e.g., Chau, Zeng, Chen, Huang, & Hendriawan, 2003; Dewiyanti, Brand-Gruwel, & Jochems, 2005; Hakkarainen, 2003; Joiner & Issroff, 2003; Kester, Kirschner, & van Merriënboer, 2005; Lipponen, Rahikainen, Lallimo, & Hakkarainen, 2003; Nokelainen, Miettinen, Kurhila, Floreen, & Tirri, 2005; Rummel & Spada, 2005; Xie & Salvendy, 2003). This type of information was gathered from a variety of statistics such as (a) total time in a role, or to complete a task, make a single or a series of decisions, search for information, annotate or read, (b) frequencies of postings, emails, visited web pages, and annotations, and (c) sequence information on emails, computer movements, and interactions in the software.

5.3. After collaboration

As demonstrated in Table 4, measurement and analysis after collaboration typically examined (a) changes in perceptions or attitudes towards collaborative learning, CSCL software, group efficacy, or learning in general, (b) comparisons between products created by an individual versus a group, (c) comparisons between products across various supports and scaffolded contexts (e.g. role taking, problem solving, or inquiry learning), and (d) comparisons between products across various CSCL tools and environments. Studies using after-collaboration measures collected and reported on three categories of information: individual differences and individual assessment of group processes, standardized testing, and after collaboration products including assignments and feedback.

Individual difference measures and individual assessment of group processes, assessed via self-report questionnaires and interviews, were similar to pre-intervention measures (see Table 4). We noticed an increased focus on collaboration, including measures of task cohesion, collective efficacy, group effectiveness, and team disagreements. After collaboration products included an assortment of assignments, testing, and feedback.

Table 4
Category, type, and frequency of constructs assessed after collaboration in 186 studies

Individual difference measures
  Anxiety (general) (n = 1), Attitude towards science (n = 1), Attitudes toward collaborative learning (n = 6), Classroom community (n = 2), Collaborative experience (n = 18), Computer efficacy and/or literacy (n = 9), Epistemological beliefs (n = 2), Group efficacy, satisfaction, teamwork (n = 7), Instructional plans or beliefs (n = 2), Interpersonal attraction in collaboration (n = 1), Leadership abilities (n = 1), Learning skills, processes, styles, or approaches (n = 15), Mental effort exerted in collaboration (n = 2), Metacognitive strategies (n = 1), Motivation (n = 3), Social influences on collaboration (n = 1), Social networks after collaboration (n = 1)

Standardized test
  Cognitive tests (general) (n = 1), Woodcock Johnson (n = 1)

After collaboration
  After collaboration knowledge (n = 18), Assignments (n = 37), GPA (n = 9), Student feedback to instructors/researchers (n = 2), Student reflections (n = 3), Teacher feedback to students (n = 7), Teacher reflections (n = 5), Thoughts on software (n = 20)


Most common were collaborative and individual assignments (37 studies), followed by feedback on CSCL software, content knowledge, GPA, and other feedback, including feedback from the teachers to the students, student feedback to the teachers and researchers on collaborative activities, and research design (see Table 4).

6. Discussion

The purpose of this paper was to review and describe current methods of measuring CSCL. Our goal was twofold: find established measures that provide immediate and contextually relevant feedback to learners on their collaborative processes, and then research and adapt those measures for use within our CSCL environment. Based on theories of self and shared regulation (Hadwin, Oshige, Gress, & Winne, 2010; Winne & Hadwin, 1998; Zimmerman, 2000), we believe learners need to be able to monitor and evaluate their solo and collaborative processes and products during actual engagement in learning. To accomplish this, they need a way to 'see' their processes and evaluate their products, whether those products are learning objects such as notes, search histories, and chats, or workings toward final project goals such as drafts of a collaborative paper. For example, learners need to be able to (a) identify different aspects of their learning processes, such as preparing to collaborate by reading applicable material and negotiating knowledge construction with peers, (b) monitor those processes, which requires a way to record them without increasing the learners' to-do list or cognitive load (i.e., learners should be focused on learning), (c) evaluate their processes, such as determining if the interactions are fruitful, (d) incorporate feedback generated personally (by reviewing their own behaviours) and from others, and then (e) make adaptations to their behaviours; all while engaged in the collaborative processes.

6.1. Key findings

Our review of the measures currently used in CSCL suggests a number of key findings.

6.1.1. A plethora of self-report

Much of the information known about collaborative processes and products in general (i.e. regardless of measurement timing) is gathered via the first two forms of measurement available in CSCL: self-report measures that assess the individual about the individual, and the individual about the group.

6.1.2. Findings in the field of CSCL are dominated by 'after collaboration' measurement

Of the three approximate times during which collaboration was assessed (before, during, and after), our review revealed just over half of the instances of measuring CSCL were administered after collaborative activity, suggesting the majority of information gained from CSCL studies (a) is measured after collaborative activity, in other words, after potential changes to an individual's learning processes and skills have occurred, and (b) is based on the product, rather than the process, of collaboration.

6.1.3. A paucity of baseline information about collaboration and collaborative processes prior to CSCL

There was a low number of before collaboration measures, as less than a fifth of the 186 studies included any before collaboration baseline measures other than basic descriptive information. More importantly, there were only 12 instances of collaborative construct measurement before actual engagement in collaborative activity.


6.1.4. A lack of replication in CSCL studies

An extensive range of constructs was assessed in CSCL studies, perhaps as extensive as the numbers of disciplines and researchers involved. Although this highlights the many constructs associated with and important to CSCL, it also indicates a lack of replication and of examination of reliability across various contexts and CSCL models.

6.1.5. Insufficient tools and measures for examining processes involved in CSCL

Because the measures used were mostly observations, coding of discussions, and process-oriented data, the constructs of interest were at times less defined (perhaps due to the nature of qualitatively coding process data) and coding methods and frameworks were not always shared, understandably considering the lengthy detail required for such evaluations. Unfortunately, this leads to difficulties in determining whether studies were investigating similar constructs under different labels. What would be helpful is a standard metric or published set of descriptions that are transferable across studies.

6.1.6. There is an overreliance on conventional text-based measures rather than mining the potential of CSCL technologies to assess dynamic fine-grained details

As demonstrated in Table 1, the majority of measurement methods were self-report, observations, and content analysis of discussions. This highlights Hathorn and Ingram's (2002) concern that the emphasis of CSCL work has been on developing, describing and experimenting with CSCL as it relates to individual outcomes, rather than exploring and examining differences in the collaborative process. For example, questionnaires and interviews generate rank or ordinal data summarizing learners' awareness, perceptions, recollections, and biases about learning processes. This information is informative, yet these methods do not necessarily provide complete or accurate views of actual learning processes (Winne & Jamieson-Noel, 2002). An evaluation of discussions and dialogues, via methods such as content analysis or social network analysis (SNA), reveals thematic dialogues and displays patterns of collaboration in the form of sociograms or graphs that represent density and centrality of communicative acts, describing the sequences and articulations of learner collaboration (Martínez et al., 2003). Also, researchers can observe and code interactions between learners and interactions between learners and the computer and then graphically represent these behavioural logs, as demonstrated by the Chronologically Ordered Dialogue and Features Used (CORDFU; Luckin, 2003). A drawback of both methods, however, is that SNA and CORDFU are analysis procedures that occur after the fact. This provides a wealth of information to future collaborative studies, methods and scaffolds but does not inform current collaborators. If we want learners to successfully self-regulate their solo and collaborative activities, we need to design activities, CSCL tools, and analysis methods that provide feedback during the process in addition to feedback after (Butler & Winne, 1995; Hadwin et al., 2005).

6.2. Potential directions for measurement in CSCL

Kreijns et al. (2003) describe constructs they consider key to successful collaboration in computer-supported environments: interdependence, interaction, individual accountability, interpersonal and small group skills, and active group processing. We believe appropriate measurement and research on these constructs requires methods that assess process in real time.
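As an illustration of the after-the-fact analyses described in Section 6.1.6, the sketch below computes the density and centrality indices that sociogram-style SNA summaries report, using the networkx library on an invented list of coded learner-to-learner exchanges; none of the data or tooling is drawn from the reviewed studies.

```python
import networkx as nx

# Invented example: each tuple is (sender, addressee) for one coded communicative act.
coded_acts = [
    ("anna", "ben"), ("ben", "anna"), ("anna", "chris"),
    ("chris", "ben"), ("ben", "chris"), ("anna", "ben"),
]

g = nx.DiGraph()
for sender, addressee in coded_acts:
    # Accumulate repeated exchanges as edge weights.
    if g.has_edge(sender, addressee):
        g[sender][addressee]["weight"] += 1
    else:
        g.add_edge(sender, addressee, weight=1)

print("density:", nx.density(g))                            # how saturated the group's interaction is
print("in-degree centrality:", nx.in_degree_centrality(g))  # who is addressed most often
print("betweenness:", nx.betweenness_centrality(g))         # who brokers between otherwise unconnected peers
```

Summaries like these are informative but, as noted above, they are computed after the fact; the challenge is producing comparable indices while the collaboration is still under way.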
As demonstrated above, most measurement of learning processes in CSCL comes from discussions and observations. To translate the knowledge found in discussions and dialogues, we need methodologies that provide the same fine-grained, detailed information we receive from content analysis, but in real time, so that feedback can be provided to learners on their solo and collaborative learning processes while they are engaged in collaborative activities.


More recent methods may provide opportunities for learners to reflect during collaboration about their own collaborative activities via real-time data capturing and data analysis of detailed and accurate recordings or traces of complex cognitive processes (Chiu, Wu, & Huang, 2000; Hadwin et al., 2005; Jang, Steinfield, & Pfaff, 2002; Lipponen et al., 2003; Winne et al., 2006). Trace data can be collected unobtrusively, so it does not interrupt cognitive processing like a think-aloud can, nor does it depend on learners' memories or perceptions (Gress & Winne, 2007; Winne, Gupta, & Nesbit, 1994). Therefore, bolstered by advances in computer technology, CSCL software can now trace, in addition to facilitate, collaboration (Hadwin et al., 2005).

The future of computer-based learning for measuring learning processes lies in the development of measures that provide real-time feedback and therefore enhance student learning. In the following sections, we discuss gStudy (see Winne et al., 2010 for a full review) as an example of how a CSCL environment can structure and scaffold learning while providing opportunities for real-time feedback and encouraging student reflection, evaluation and adaptation.

6.3. The Learning Kit project: gStudy as a CSCL environment

In the Learning Kit project (Winne et al., 2006), our goal is to develop computer-supported learning environments and tools, such as gStudy, to facilitate individual and collaborative regulation of learning. Collaboration builds on solo learning because before a group interacts and while they collaborate, solo learning tactics provide a foundation for contributing a significant proportion of raw material – information – to the collaborative enterprise. Thus, collaboration tools in gStudy are designed to meet three primary goals: (a) to help students enhance their individual self-regulation of learning as they participate in collaborative learning activities; (b) to boost each group member's learning, development, and testing of strategies that help them collectively to collaborate better, that is, to promote shared regulation of learning; and (c) to provide methods for systematically researching the effectiveness of these tools in supporting productive individual and shared self-regulation, as well as the group's co-regulation of their collaborative processes.

An important feature of the gStudy software is the trace data it unobtrusively collects: detailed information about students' studying actions gathered by logging the time and context of every learning event. Traces recorded in gStudy are artifacts of tactics and strategies in a log of fine-grained, temporally identified data that can advance research about how learners go about learning. Collecting traces of student activity in computer-supported learning environments may provide information about the dynamic, situated nature of learning, as well as individual differences in engagement (Winne et al., 1994).

gStudy provides numerous ways to trace collaborative exchange and dialogue. In studying appropriation of self-regulated learning (SRL) skills, discourse analysis has been the main methodology. Trace data of the dialogues that occur in chat tools and coaching will provide data about incremental change across time, both in the content of the interactions and in users' reliance on scaffolded SRL tools within gStudy, because trace data logs the conversation as well as the context of the learner. From trace data, we know when and which windows within gStudy are open, and whether learners change views, add information, create new links, or browse their documents. Therefore we know the state of the learners' environment before, during, and after collaboration.
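gStudy's internal logging format is not documented here, so purely as a sketch of what "logging the time and context of every learning event" can look like, the hypothetical logger below records who acted, what they did, and which window or view was active; every name and field in it is an assumption for illustration, not gStudy's actual implementation.

```python
import json
import time

class TraceLog:
    """Hypothetical trace logger: records what happened, when, and in which context."""

    def __init__(self, path):
        self.path = path

    def record(self, learner, action, context):
        event = {
            "t": time.time(),      # when the event occurred
            "learner": learner,    # who acted
            "action": action,      # e.g. "open_note", "create_link", "send_chat"
            "context": context,    # e.g. which window/view/document was active
        }
        # Append one JSON object per line so the log can be streamed and replayed.
        with open(self.path, "a") as f:
            f.write(json.dumps(event) + "\n")

log = TraceLog("trace.jsonl")
log.record("s1", "open_note", {"window": "notes", "note_id": "n42"})
log.record("s1", "send_chat", {"window": "gChat", "chars": 57})
```

A log in this spirit is what makes it possible, in principle, to reconstruct the state of a learner's environment before, during, and after collaboration, as described above.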

While conventional analysis methods have contributed to an understanding of mechanisms involved in a basic shift in the appropriation of SRL, gStudy provides rich and sophisticated ways to collect data about "transaction" between students and context (Corno, 2006). For example, collaboration in gStudy is facilitated in asynchronous and near-synchronous1 modes. In near-synchronous mode, students engage in live text exchanges using an innovative design for a chat tool (gChat). Our chat tool moves seamlessly between (a) scaffolding each collaborator's tactics and roles in ways research indicates make collaboration more effective and (b) integrating with our coach tools to guide students to explore and share in regulating collaborative activities to make them more effective (see Morris, Church, Hadwin, Gress, & Winne, 2010). In addition, our chat tool allows learners to 'drag and drop' products made in gStudy, such as notes, glossary items, and study tactics. In asynchronous mode, collaborators have multiple methods for sharing learning and its products that are created along the timeline of a collaborative project. For example, teammates can asynchronously share one kit and all its diverse, richly linked contents (notes, documents, etc.). Or, learners can share their individual kits or specific objects from their kits to a group kit, the latter providing for a highly focused collaboration in contrast to the former "complete package" that includes all collaborators' contributions. Additionally, learners can study collaboratively outside of designated collaborative activities by sharing learning objects via kit exchange, shared kits, exporting and importing, or chat.

The gStudy collaborative environment affords learners the opportunity to give or receive modeling and feedback on their tasks. For example, in the Self-Regulation Empowerment Program (SREP) developed by Cleary and Zimmerman (2004), a tutor demonstrated and verbalized how he or she used SRL skills in studying. In gStudy, the tutor's demonstration can be shown via the open chat, or the tutor can verbalize their skills through the chat tool to provide modeling. Further, if these SRL skills were pre-stocked in gStudy, the guided chat would function as a model for the learner. In giving feedback to the student, the open chat tool allows a teacher to give feedback on the content that students send.

7. Conclusions

Currently the field of CSCL is dominated by analysis of processes and products to inform future collaborative activities instead of supporting current ones. We suggest tools and functions in gStudy offer new directions for research on collaborative learning. For example, via gStudy, learners can share one kit among group members, facilitating collective regulation of learning, or share multiple objects between kits to facilitate interdependence and self-regulation. gStudy traces allow researchers to examine the unfolding collaboration over time as groups collectively transition forward and backward through phases of self-regulated learning. Log-file traces allow researchers to track transitions between students as they add new objects, update old contents, and review what is currently within the kit. Rather than a log representing one student's SRL, it will represent the dynamic interactions between students and the environment (i.e., gStudy), as well as among the group members.
This will provide ways to summarize and present learning processes and products to learners, providing opportunities for them to monitor, evaluate, and adapt their learning during solo and collaborative activities.

1 True synchronous collaboration requires each participant to see each character typed and each move made during collaboration. See Gress et al. (2010) for more information on delineating synchronous and asynchronous tools.


Acknowledgement

Support for this research was provided by a SSHRC Standard Research Grant to Allyson F. Hadwin and a SSHRC-INE grant to Philip H. Winne (PI) [Hadwin, Co-Investigator] from the Social Sciences and Humanities Research Council of Canada (410-2001-1263; 512-2003-1012).

References Baker, E. L., & Mayer, R. E. (1999). Computer-based assessment of problem solving. Computers in Human Behavior, 15, 269–282. Baker, E. L., & O’Neil, H. F. J. (2002). Measuring problem solving in computer environments: Current and future states. Computers in Human Behavior, 18, 609–622. Barile, A. L., & Durso, F. T. (2002). Computer mediated communication in collaborative writing. Computers in Human Behavior, 18, 173–190. Barreto, M., & Ellemers, N. (2002). The impact of anonymity and group identification on progroup behavior in computer-mediated groups. Small Group Research, 33, 590. Beers, P. J., Boshuizen, H. P. A. E., Kirschner, P. A., & Gijselaers, W. H. (2005). Computer support for knowledge construction in collaborative learning environments. Computers in Human Behavior, 21, 623–643. Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65, 245–281. Campbell, J. D. (2004). Interaction in collaborative computer supported diagram development. Computers in Human Behavior, 20, 289–310. Carletta, J., Anderson, A. H., & McEwan, R. (2000). The effects of multimedia communication technology on non-collocated teams: A case study. Ergonomics, 43, 1237–1251. Carroll, J. M., Neale, D. C., Isenhour, P. L., Rosson, M. B., & McCrickard, D. S. (2003). Notification and awareness: Synchronizing task-oriented collaborative activity. International Journal of Human–Computer Studies, 58, 605. Chapman, E. (2003). Alternative approaches to assessing student engagement rates. Practical Assessment, Research & Evaluation, 8. Chau, M., Zeng, D., Chen, H., Huang, M., & Hendriawan, D. (2003). Design and evaluation of a multi-agent collaborative web mining system. Decision Support Systems, 35, 167. Chiu, C. H., Wu, W. S., & Huang, C. C. (2000). Collaborative concept mapping processes mediated by computer. Paper presented at the WebNet 2000 World Conference on the WWW and Internet Proceedings, Taiwan. Corno, L. (2006). Socially constructed self-regulated learning: Where social and self meet in the strategic regulation of learning [discussant’s comments]. Presentation at the American Educational Research Association, San Francisco, CA. Dewiyanti, S., Brand-Gruwel, S., & Jochems, W. (2005). Applying reflection and moderation in an asynchronous computer-supported collaborative learning environment in campus-based higher education. British Journal of Educational Technology, 36, 673–676. Erkens, G., Jaspers, J., Prangsma, M., & Kanselaar, G. (2005). Coordination processes in computer supported collaborative writing. Computers in Human Behavior, 21, 463–486. Fischer, F., Bruhn, J., Grasel, C., & Mandl, H. (2002). Fostering collaborative knowledge construction with visualization tools. Learning and Instruction, 12, 213–232. Fletcher, D. C. (2001). Second graders decide when to use electronic editing tools. Information Technology in Childhood Education Annual, 155–174. Gay, G., Sturgill, A., Martin, W., & Huttenlocher, D. (1999). Document-centered peer collaborations: An exploration of the educational uses of networked communication technologies. Journal of Computer-Mediated Communication, 4, 1–13. Gonzalez, M. G., Burke, M. J., Santuzzi, A. M., & Bradley, J. C. (2003). The impact of group process variables on the effectiveness of distance collaboration groups. Computers in Human Behavior, 19, 629–648. Gress, C. L. Z., Hadwin, A. F., Page, J., & Church, H. (2010). 
A review of computersupported collaborative learning: Informing standards for reporting CSCL research. Computers & Human Behavior. Gress, C. L. Z., & Winne, P. H. (2007). Measuring cognitive & metacognitive monitoring of study skills with trace data. Paper to be presented at the Annual Convention of the American Educational Research Association. Chicago, IL. Guzdial, M., & Turns, J. (2000). Effective discussion through a computer-mediated anchored forum. Journal of the Learning Sciences, 9, 437–469. Hadwin, A. F., Gress, C. L. Z., & Page, J. (2006). Toward standards for reporting research: A review of the literature on computer-supported collaborative learning. Paper presented at the 6th IEEE International Conference on Advanced Learning Technologies, Kerkrade, Netherlands. Hadwin, A. F., Oshige, M., Gress, C. L. Z., & Winne, P. H. (2010). Innovative ways for using gStudy to orchestrate and research social aspects of self-regulated learning. Computers & Human Behavior, 26, 794–805. Hadwin, A. F., Winne, P. H., & Nesbit, J. C. (2005). Roles for software technologies in advancing research and theory in educational psychology. British Journal of Educational Psychology, 75, 1–24.


Hakkarainen, K. (1999). The interaction of motivational orientation and knowledgeseeking inquiry in computer-supported collaborative learning. Journal of Educational Research, 21, 263–281. Hakkarainen, K. (2003). Emergence of progressive-inquiry culture in computersupported collaborative learning. Learning Environments Research, 6, 199–220. Hathorn, L. G., & Ingram, A. L. (2002). Cooperation and collaboration using computer-mediated communication. Journal of Educational Computing Research, 26, 325. Hmelo-Silver, C. E. (2003). Analyzing collaborative knowledge construction: Multiple methods for integrated understanding. Computers & Education, 41, 397. Jang, C.-Y., Steinfield, C., & Pfaff, B. (2002). Virtual team awareness and groupware support: An evaluation of the teamscope system. International Journal of Human–Computer Studies, 56, 109. Järvelä, S., & Salovaara, H. (2004). The interplay of motivational goals and cognitive strategies in a new pedagogical culture: A context-oriented and qualitative approach. European Psychologist, 9, 232. Järvenoja, H., & Järvelä, S. (2005). How students describe the sources of their emotional and motivational experiences during the learning process: A qualitative approach. Learning & Instruction, 15, 465–480. Joiner, K. F., Malone, J. A., & Haimes, D. H. (2002). Assessment of classroom environments in reformed calculus education. Learning Environments Research, 5, 51–76. Joiner, R., & Issroff, K. (2003). Tracing success: Graphical methods for analysing successful collaborative problem solving. Computers & Education, 41, 369. Kester, L., Kirschner, P. A., van Merri & Eumlnboer, J. J. G. (2005). The management of cognitive load during complex cognitive skill acquisition by means of computer-simulated problem solving. British Journal of Educational Psychology, 75, 71–85. Kester, L., & Paas, F. (2005). Instructional interventions to enhance collaboration in powerful learning environments. Computers in Human Behavior, 21, 689–696. Kim, S., Kolko, B. E., & Greer, T. H. (2002). Web-based problem solving learning: third-year medical students’ participation in end-of-life care virtual clinic. Computers in Human Behavior, 18, 761–772. King, K. P. (2003). The webquest as a means of enhancing computer efficacy. Illinois: ERIC. Kirschner, P., Strijbos, J.-W., Kreijns, K., & Beers, P. J. (2004). Designing electronic collaborative learning environments. Educational Technology Research & Development, 52, 47–66. Knight, B. A., & Knight, C. (1995). Cognitive theory and the use of computers in the primary classroom. British Journal of Educational Technology, 26, 141–148. Koschmann, T., (2001). Revisiting the Paradigms of Instructional Technology. In Proceedings of the annual conference of the australasian society for computers in learning in tertiary education (ASCILITE 2001) (pp. 15–22). Melbourne, Australia. Kraut, R. E., Fussell, S. R., & Siegel, J. (2003). Visual information as a conversational resource in collaborative physical tasks. Human–Computer Interaction, 18, 13–49. Kreijns, K., Kirschner, P. A., & Jochems, W. (2003). Identifying the pitfalls for social interaction in computer-supported collaborative learning environments: A review of the research. Computers in Human Behavior, 19, 335–353. Kumpulainen, K., Salovaara, H., & Mutanen, M. (2001). The nature of students’ sociocognitive activity in handling and processing multimedia-based science material in a small group learning task. Instructional Science, 29, 481–515. Lapadat (2004). 
Online teaching: Creating text-based environments for collaborative thinking. Alberta Journal of Educational Research, 50, 236–251. Lee, A., & Girgensohn, A. (2002). Design, experiences and user preferences for a webbased awareness tool. International Journal of Human–Computer Studies, 56, 75. Lehtinen, E. (2003). Computer-supported collaborative learning: An approach to powerful learning environments. In E. De Corte, L. Verschaffel, N. Entwistle, & J. Van Merriëboer (Eds.), Unravelling basic components and dimensions of powerful learning environments (pp. 35–53). Amsterdam, Netherlands: Elsevier. Leinonen (2003). Individual student’s interpretations for their contribution to the computer-mediated discussions. Journal of Interactive Learning Research, 14, 99–122. Linder, U., & Rochon, R. (2003). Using chat to support collaborative learning: Quality assurance strategies to promote success. Educational Media International, 40, 75–89. Lingnau, A., Hoppe, H. U., & Mannhaupt, G. (2003). Computer supported collaborative writing in an early learning classroom. Journal of Computer Assisted Learning, 19, 186–194. Lipponen, L. (2000). Towards knowledge building: From facts to explanations in primary students’ computer mediated discourse. Learning Environments Research, 3, 179–199. Lipponen, L., Rahikainen, M., Lallimo, J., & Hakkarainen, K. (2003). Patterns of participation and discourse in elementary students’ computer-supported collaborative learning. Learning and Instruction, 13, 487–509. Luckin, R. (2003). Between the lines: Documenting the multiple dimensions of computer-supported collaborations. Computers & Education, 41, 379. Macdonald, J. (2003). Assessing online collaborative learning: Process and product. Computers & Education, 40, 377–391. Martı´nez, A., Dimitriadis, Y., Rubia, B., Gómez, E., & de la Fuente, P. (2003). Combining qualitative evaluation and social network analysis for the study of classroom social interactions. Computers & Education, 41, 353–368. Morris, R., Church, H., Hadwin, A. F., Gress, C. L. Z., & Winne, P. H. (2010). The use of roles, scripts, and prompts to support CSLC in gStudy. Computers & Human Behavior, 26, 815–824.
Nardi, B. (2005). Beyond bandwidth: Dimensions of connection in interpersonal communication. Computer Supported Cooperative Work: The Journal of Collaborative Computing, 14, 91–130.
Neo (2003). Developing a collaborative learning environment using a web-based design. Journal of Computer Assisted Learning, 19, 462–473.
Nokelainen, P., Miettinen, M., Kurhila, J., Floreen, P., & Tirri, H. (2005). A shared document-based annotation tool to support learner-centred collaborative learning. British Journal of Educational Technology, 36, 757–770.
Pata, K., Lehtinen, E., & Sarapuu, T. (2006). Inter-relations of tutor's and peers' scaffolding and decision-making discourse acts. Instructional Science, 34, 313–341.
Puntambekar, S. (2006). Analyzing collaborative interactions: Divergence, shared understanding and construction of knowledge. Computers & Education, 47, 332–351.
Puntambekar, S., & Luckin, R. (2003). Documenting collaborative learning: What should be measured and how? Computers & Education, 41, 309–311.
Riley, W. T., Carson, S. C., Martin, N., Behar, A., Forman-Hoffman, V. L., & Jerome, A. (2005). Initial feasibility of a researcher configurable computerized self-monitoring system. Computers in Human Behavior, 21, 1005–1018.
Rovai, A. P. (2002). Development of an instrument to measure classroom community. Internet and Higher Education, 5, 197–211.
Rummel, N., & Spada, H. (2005). Learning to collaborate: An instructional approach to promoting collaborative problem solving in computer-mediated settings. Journal of the Learning Sciences, 14, 201–241.
Salovaara, H., & Järvelä, S. (2003). Students' strategic actions in computer-supported collaborative learning. Learning Environments Research, 6, 267–284.
Salovaara, H., Salo, P., Rahikainen, M., Lipponen, L., & Järvelä, S. (2001). Developing technology-supported inquiry practices in two comprehensive school classrooms. Paper presented at the ED-Media 2001 World Conference on Educational Multimedia, Hypermedia & Telecommunications, Tampere, Finland.
Scardamalia, M., & Bereiter, C. (1996). Student communities for the advancement of knowledge. Communications of the ACM, 39, 36–38.
Schacter, J., Herl, H. E., Chung, G. K. W. K., Dennis, R. A., & O'Neil, H. F., Jr. (1999). Computer-based performance assessments: A solution to the narrow measurement and reporting of problem-solving. Computers in Human Behavior, 15, 403–418.
Scott, S. D., Mandryk, R. L., & Inkpen, K. M. (2003). Understanding children's collaborative interactions in shared environments. Journal of Computer Assisted Learning, 19, 220–228.
Shaw, G., & Marlow, N. (1999). The role of student learning styles, gender, attitudes and perceptions on information and communication technology assisted learning. Computers & Education, 33, 223–234.

Shumar, W., & Renninger, K. A. (2002). On conceptualizing community. In K. A. Renninger & W. Shumar (Eds.), Building virtual communities (pp. 1–17). Cambridge, UK: Cambridge University Press.
Soller, A. (2004). Computational modeling and analysis of knowledge sharing in collaborative distance learning. User Modeling & User-Adapted Interaction, 14, 351–381.
Stoyanov, S., & Kommers, P. (1999). Agent-support for problem solving through concept-mapping. Journal of Interactive Learning Research, 10, 401–425.
Strijbos, J. W., Martens, R. L., & Jochems, W. M. G. (2004). Designing for interaction: Six steps to designing computer-supported group-based learning. Computers & Education, 42, 403–424.
Suthers, D. D., Hundhausen, C. D., & Girardeau, L. E. (2003). Comparing the roles of representations in face-to-face and online computer supported collaborative learning. Computers & Education, 41, 335–351.
van der Meij, H., de Vries, B., Boersma, K., Pieters, J., & Wegerif, R. (2005). An examination of interactional coherence in email use in elementary school. Computers in Human Behavior, 21, 417–439.
Van Der Puil, C., Andriessen, J., & Kanselaar, G. (2004). Exploring relational regulation in computer-mediated (collaborative) learning interaction: A developmental perspective. CyberPsychology & Behavior, 7, 183–195.
Wang, M., Laffey, J., Wangemann, P., Harris, C., & Tupper, T. (2000). How do youth and mentors experience project-based learning in the internet-based shared environment for expeditions (expeditions). Missouri, USA.
Winne, P. H., Gupta, L., & Nesbit, J. C. (1994). Exploring individual differences in studying strategies using graph theoretic statistics. The Alberta Journal of Educational Research, XL, 177–193.
Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 277–304). Hillsdale, NJ: Erlbaum.
Winne, P. H., Hadwin, A. F., & Gress, C. L. Z. (2010). The Learning Kit Project: Software tools for supporting and researching regulation of collaborative learning. Computers in Human Behavior, 26, 787–793.
Winne, P. H., & Jamieson-Noel, D. L. (2002). Exploring students' calibration of self-reports about study tactics and achievement. Contemporary Educational Psychology, 27, 551–572.
Winne, P. H., Nesbit, J. C., Kumar, V., Hadwin, A. F., Lajoie, S. P., Azevedo, R. A., et al. (2006). Supporting self-regulated learning with gStudy software: The Learning Kit Project. Technology, Instruction, Cognition and Learning, 3, 105–113.
Xie, Y., & Salvendy, G. (2003). Awareness support for asynchronous engineering collaboration. Human Factors & Ergonomics in Manufacturing, 13, 97–113.
Zimmerman, B. J. (2000). Attaining self-regulation: A social cognitive perspective. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 13–39). San Diego, CA: Elsevier Academic Press.