Computers & Education 41 (2003) 397–420
Analyzing collaborative knowledge construction: multiple methods for integrated understanding

Cindy E. Hmelo-Silver*
Department of Educational Psychology, Rutgers University, USA

Received 3 August 2002; received in revised form 15 April 2003; accepted 14 July 2003
Abstract

Documenting collaborative knowledge construction is critical for research in computer-supported collaborative learning. Because this is a multifaceted phenomenon, mixed methods are necessary to construct a good understanding of collaborative interactions; otherwise there is a risk of being overly reductionistic. In this paper I use quantitative methods of verbal data analysis, qualitative analysis, and techniques of data representation to characterize two successful knowledge-building interactions from a sociocultural perspective. In the first study, a computer simulation helped mediate the interaction; in the second, a student-constructed representation was an important mediator. A fine-grained turn-by-turn analysis of the group discussions was supplemented with qualitative analysis of larger units of dialogue. In addition, chronological representations of discourse features and tool-related activity were used in Study 2 to gain an integrated understanding of how a student-generated representation mediated collaborative knowledge construction. It is only by mixing methods that collaborative knowledge construction can be well characterized.
© 2003 Elsevier Ltd. All rights reserved.

Keywords: Collaborative learning; Research methodology; Post-secondary education; Problem-based learning; Simulation
Analyzing collaborative knowledge construction, central to sociocultural theories of learning, has much in common with the three blind men and the elephant from the Indian parable that describes their observations, each from their own point of view:
* Corresponding author. Present address: 10 Seminary Place, New Brunswick, NJ 08901-1183, USA. Tel.: +1-732-932-7496 ext. 8311. E-mail address: [email protected] (C.E. Hmelo-Silver).
doi:10.1016/j.compedu.2003.07.001
The First approached the Elephant,
And happening to fall
Against his broad and sturdy side,
At once began to bawl:
"God bless me! but the Elephant
Is very like a wall!"

The Second, feeling of the tusk
Cried, "Ho! what have we here,
So very round and smooth and sharp?
To me 'tis mighty clear
This wonder of an Elephant
Is very like a spear!"

The Third approached the animal,
And happening to take
The squirming trunk within his hands,
Thus boldly up he spake:
"I see," quoth he, "the Elephant
Is very like a snake!" (Saxe, n.d.)
Each of the men perceived only a small portion of the whole beast and thus could provide only a limited description. This is much like analyzing collaborative knowledge construction: as with the elephant, one needs to use multiple methods to understand the interaction. Documenting collaborative knowledge construction is critical for research in computer-supported collaborative learning (CSCL). Because this is a multifaceted phenomenon, mixed methods are needed to obtain an understanding of collaborative interactions and to avoid being overly reductionistic. In this paper, I demonstrate how multiple methods were used to analyze collaborative discourse in two tutorial groups.

Sociocultural theories of learning place great emphasis on analyzing discourse in order to understand learning, as well as stressing the importance of tools in mediating knowledge construction (Cole, 1996; Engeström, 1999; Palincsar, 1998; Pea, 1993). Discourse is an important practice that one must engage in to participate in a community of practice (Wenger, 1998). In this view, knowledge is constructed through social interactions and activity (Vygotsky, 1978). Collaborative discourse may be the primary mechanism for learning because learners' ideas are externalized and become objects for discussion, negotiation, and refinement, and are only later internalized (Chinn & Anderson, 2000; Vygotsky, 1978). Instructional interventions developed from this perspective redistribute the responsibility for generating and evaluating questions and explanations, placing a greater emphasis on student-centered discourse than in traditional classrooms (Greeno, Collins, & Resnick, 1996).

Both collaborative interactions and psychological tools mediate learning in specific contexts and are a critical feature of sociocultural theories of learning (Cole, 1996; Kozulin, 1998). Psychological tools are the cultural artifacts that help people regulate their thinking and interactions
(Kozulin, 1998). These include material objects such as rulers, equations, drawings, and computers, as well as symbolic tools such as language. Understanding collaborative knowledge construction requires making sense of the conversations that students engage in and the tools that mediate their learning. These sorts of everyday learning practices have been studied using a variety of techniques, including discourse and conversation analysis, ethnography, and other qualitative methods (e.g., Cazden, 1986; Cobb & Yackel, 1996; Koschmann, Glenn, & Conlee, 2000). Many of these methods focus on social and linguistic processes. Although these are rigorous methods, they do not always address important cognitive issues. Borrowing from the verbal data analysis tradition of Chi (1997), I extend this methodology to analyzing group interactions in order to quantify qualitative information. In addition, the quantified information and qualitative data can be used in complementary ways to understand mediated collaborative learning. The goal of the analyses reported in this paper is to reliably characterize the content and cognitive processes that occur as students are trying to learn, and the role that tools might play in mediating that learning.

Computer tools and other representations provide opportunities to study the role that both social factors and artifacts play in learning because interfaces can be designed to guide collaboration (Hmelo & Guzdial, 1996; Roschelle, 1996). They can help structure thinking by organizing and constraining activity. In addition, learners can construct representations that they use as tools in their thinking (Kozulin, 1998). These tools can enable learners to construct a joint problem space (JPS) as they use collaborative turn-taking structures to negotiate meaning and the production of visual representations that reflect their intermediate understandings. The JPS is a shared conceptual structure that supports learning and problem-solving activities (Roschelle, 1996). Within this space, problem features, goals, operators, and methods are integrated. Computer tools afford unique opportunities for convergence upon shared meanings as learners use the tools to display, confirm, and repair their shared understanding.

For example, Roschelle (1996) used conversation analysis to examine the conversation and action of two learners as they worked on a computer simulation. Convergence occurred as students used the simulation to display and negotiate their shared understanding. Luckin and colleagues took a different approach, examining how alternative ways of structuring hypermedia affected how students engaged in collaborative knowledge construction (Luckin et al., 2001). They coded all talk into task-oriented, non-task, and content categories with high reliability. Rather than looking at the summary frequencies of these categories, they plotted the occurrence of these kinds of talk together with the software features used in a chronologically ordered representation of discourse and features used (CORDFU) diagram. This analysis allowed them to explore the relation between the software's navigational features and collaborative knowledge construction; the results demonstrated how different types of navigational features affect content-related talk. These are just two examples of studies that address the important question of how students learn in a computer-based learning environment and the role of tools in collaborative knowledge construction.
There are many methodologies that can be used to analyze collaborative knowledge construction. In this paper, I describe two studies of collaboration, one with and one without technology, and provide examples of the techniques used to analyze interactions. These examples are important in understanding the characteristics of successful collaborative knowledge construction. For example, a fine-grained line-by-line coding allows the researcher to examine an entire corpus of discourse to identify important and representative cognitive and social processes
that can be reported as frequency counts. But this may be only one view of the elephant. Further qualitative analysis can be used to investigate larger phenomena that occur over greater units of time. Finally, the fine-grained analysis can also be represented in ways that make the chronological sequencing and tool use salient. Taken together, these three techniques permit a more comprehensive investigation than any single technique alone.
1. Study 1: the Oncology Thinking Cap: simulations as a collaborative context

This study focused on analyzing discourse to examine how students constructed a joint problem space (Roschelle, 1996) while using a simulation to learn about designing complex clinical trials (Hmelo, Nagarajan, & Day, 2000). The data collected for this study included transcripts of the videotaped sessions, computer-generated printouts of the students' trial designs and final results, and pre- and post-tests. The students in the study were fourth-year medical students who used the Oncology Thinking Cap, a simulation tool that allows investigators to model populations of cancer cells. To help students use the software to learn to design clinical trials, a special-purpose interface was developed, the Clinical Trial Wizard (shown in the Appendix). This interface organized the students' input into categories that were relevant to the trial design process, as well as providing access to relevant data displays and graphs (see Hmelo et al., 2001 for details and the results of an evaluation study).

This analysis used both quantitative and qualitative methods to examine the role of prior knowledge in the construction of a JPS. We used verbal data analysis methods in a comparative case study design. Six groups of four students each spent between two and three hours in one session designing a clinical trial to test a new cancer drug using the Clinical Trial Wizard and Oncology Thinking Cap (OncoTCAP) software (Hmelo et al., 2001). The students were able to run their simulation to get feedback and then modify their designs. The groups were formed randomly. Based on pre-test scores, the groups were divided into high- and low-knowledge groups. One high-knowledge (HK) group and one low-knowledge (LK) group were selected. The LK group had a pre-test score of 9.50 out of 24, whereas the HK group had a pre-test score of 17. Both groups did well at post-test (HK: 20.00; LK: 19.50). These two groups were studied in detail to examine how differences in knowledge affected collaborative knowledge construction. Elsewhere, we have analyzed these discussions for their scientific reasoning content (Hmelo, Nagarajan, & Day, 2002).

The transcriptions were subjected to a fine-grained analysis of collaborative activities, coded on a turn-by-turn basis. This study uses both fine-grained coding and coarser qualitative analysis to capture the general cognitive and social characteristics of JPS construction, as well as illustrative examples that demonstrate phenomena that go beyond the single turn. Together, they paint a rich picture of JPS construction. The number of conversational turns and trials conducted were counted. Because of the methodological focus of this special issue, the coding scheme is described in detail in Table 1. The categories were designed to capture thinking processes involved in the construction of a JPS. The categorical variables were coded on a turn-by-turn basis in the following categories: knowledge, metacognition, interpretation, and collaboration. Collaboration included the coding categories of conflict, questioning, and facilitator input.
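Although the coding itself is interpretive work, the bookkeeping behind the frequency counts is simple. The following sketch (in Python) illustrates one way turn-by-turn codes might be stored and tallied; it is a minimal illustration, not the study's actual analysis code, and the transcript rows are invented placeholders that use Table 1's category labels.

from collections import Counter
from dataclasses import dataclass

@dataclass
class CodedTurn:
    turn: int          # position in the transcript
    speaker: str
    major: str         # major category, e.g., "Metacognition"
    subcategory: str   # subcategory, e.g., "Monitoring"

# Invented placeholder data; a real transcript would have thousands of turns.
transcript = [
    CodedTurn(1, "Lou", "Questioning", "Plan-related"),
    CodedTurn(2, "Helen", "Questioning", "Software-related"),
    CodedTurn(3, "Maddy", "Metacognition", "Data-driven planning"),
]

# Tally major categories and the subcategories within them.
major_counts = Counter(t.major for t in transcript)
sub_counts = Counter((t.major, t.subcategory) for t in transcript)
print(major_counts)
print(sub_counts)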
Table 1
Study 1 coding definitions and examples

Knowledge
- Conceptual knowledge: Demonstration of knowledge gained prior to starting the clinical trial design task. Example: "That should, that NCI web page recommend 80% of the MTD."
- Prior experience: Actual experience in areas relevant to designing clinical trials prior to the simulation task. Example: "No, I just ah saw a lot of leukemia and ah. . . No, I have not seen anything like solid mass tumors."
- Local analogies: Comparisons made to other trials within the same task or experimental situation. Example: "On our eight week leg. It's not too much different on our eight-week leg."
- Regional analogies: Comparisons made to other trials outside the current task using the simulation. Example: "The probabilities of getting a bad run much higher in real life because the patients take off."

Metacognition
- Monitoring: Checking ongoing individual or group progress; includes awareness of understanding. Example: "O.K. Now we know a little bit more about what's going on here in a complete response."
- Reflection: Thinking about specific actions and their outcomes from previous trials to design the current trial. Example: "We thought that. . . you know if that was the case we would stop running them, just cut them down a dose. Is that right? You know I mean?"
- Theory-driven planning: References to future actions that derive from prior knowledge, experience, or existing theories. Example: "If we have to stop our trial because our toxicity is too high at three, we'll run it again at two."
- Data-driven planning: References to future actions that derive from the results of trials or data. Example: "So by the way we did it, we killed people. Look we had some dose modifications. And you may have been correct for dose, for two we should have decreased it and three taken them off."
- Unjustified planning: No justifications are provided for the plan or action. Example: "I kind of think that 20% of it was arbitrary."

Interpretation
- Low-level: Literal interpretations of particular screen displays. Example: "You can see the growth of liver and lung mets in this patient."
- High-level: Broader conclusions that are drawn on the basis of the range of prior literal data interpretations. Example: "Well, if they're not making it that far then and they're all dying of tumor that means that they're, they're not getting it per cycle. They're not getting enough drug."

Conflicts
- Conceptual: Disagreement over a method of inquiry, or about broader concepts involved in the task. Example: "No, I thought that Phase II is efficacy response. If the drug works."
- Task-specific: Disagreement over software use, specific values of parameters, or low-level interpretations of the data. Example: "Two weeks and four weeks. No. We don't want to repeat that."

Questioning
- Plan-related: Questions pertaining to the future course of action. Example: "So we do the next study twice as long?"
- Software-related: Questions pertaining to software use and literal interpretation of the data. Example: "What were the different lines? What's the patient display do?"
- Self-answered: Student asks and immediately answers own question. Example: "Did he get all? Yeah."
- General: Other open-ended questions related to the task. Example: "Do we have information about what a half-life is?"; "Can you see toxicity at 10 if it's really small?"
- Questioning facilitator: Questions posed to the facilitator by students. Example: "So your model, your model, has its ras mutated cells are more likely to be in another tissue, or if it hasn't spread?"

Responses
- Agreement with facilitator: Student shows agreement with the views of the facilitator; coded in context of the facilitator's statement. Example: Facilitator: ". . . I think they said that they try to minimize the number of different types of them." Student: "Yeah."
- Agreement with group member: Student agrees with the view of a group member. Example: Student 1: "But you'd know more about the drug." Student 2: "Yeah you'd know more about the kinetics of the drug."
- Seeking clarification: Student seeks verification for their ideas, or for specific values chosen for parameters. Example: "For pittamycin we're worrying about neuro and heme right?"
- Brief answer: Answers to general questions that do not include an explanation of any kind. Example: "I'm opposed, I'm opposed to any Grade 4 toxicities."
- Explanation: Answers that include a reason or justification. Example: "We picked the uh. We picked the P1. I think we picked the P1, and then we, and then we picked the alpha, . . . the desired alpha and beta and that that basically then defines the, which row you're in and it also defines the P0."
- Elaborate explanation: Answers that include a detailed explanation to justify one's beliefs or share one's knowledge. Example: "There's probably a lot of reasons. I mean most of it is probably resistance. I mean single drug regimens usually are never very good because soon you'll have, I mean you're just going after one mechanism of metastatic disease and, usually those cells are, are smart enough to probably get around that mechanism and so usually multiple drug treatments usually attack different points in the cell cycle and make it a more efficient cell."

Facilitator input
- Monitoring: Facilitator asks questions to monitor progress, and encourages collaboration. Example: "But, what were you actually going to do to the patients?"; "O.K. So, so what did you guys learn from doing this?"
- Explaining software: Answers to software-related questions and/or self-initiated orientation to software possibilities. Example: ". . . if you click on 'view this patient' on the. . . let me just give you some context for that. . . so just say, 'Ok'. So it first starts tracking the patient. . . here's the breast primary here. It starts tracking them when there's a hundred cells. And then when there are 10 to the ninth cells. . . it's diagnosed. . . . O.k. just, in fact. . . hit enter. This little blip here. That's where you gave your pittamycin."
- Explaining concepts: Addresses higher-level concepts that might help the students in their task. Example: "Well, meaning it's real and you've got enough people. That it is sensitive enough to catch what's going on. So you wanna make sure that you got enough, so you know what it is you are looking for, how many patients you are going to see. To make sure your design is sensitive enough, cause we are not looking for big effects here."
To understand construction of the JPS, the groups were initially compared on quantitative measures, followed by analyses of the fine-grained qualitative coding of the group discourse to explore how the groups' activity changed as students converged on a solution. Finally, joint construction activity was illustrated with excerpts from the group transcripts, so three types of methods were used in analyzing these data.

1.1. Quantitative results

A trial was defined as the planning and execution of an OncoTCAP simulation run. A trial was considered terminated when the simulation finished running a given design. The LK group ran more trials (14) and took more conversational turns (2973) to converge on a satisfactory trial design than the HK group (six trials and 1773 turns). The fine-grained coding illuminates what the students talked about in their conversational turns. These results are summarized in Table 2.

1.1.1. Knowledge

Neither group explicitly referred to a large amount of knowledge overall. Not surprisingly, the HK group referred to conceptual knowledge more than the LK group (2.64% vs. 1.11% of turns). The HK group made references to prior knowledge in all 6 trials, whereas the LK group made such references in only 4 of 14 trials.

1.1.2. Metacognition

As shown in Table 2, the HK group demonstrated a higher percentage of metacognitive statements than the LK group. The majority of metacognitive statements made by the LK group were monitoring statements (54.71%). The HK group also had the majority of their statements classified as monitoring, but they made more evaluative statements than the LK group. The HK group was better able to evaluate their progress than the LK group because their prior knowledge provided them with a basis for making evaluative judgments.

The nature of planning activity differed between the two groups, despite their making similar numbers of statements in this category. When the LK group planned, it was generally in reaction to the data they were faced with. In contrast, the HK group planned more and divided their planning evenly between reactive (data-driven) and proactive (theory-driven) approaches. The HK group mediated some of their planning with data, but within the group they had sufficient knowledge resources to cycle between theory and data.

1.1.3. Interpretation

After each trial, the learners spent a great deal of effort interpreting the data displays that were available to them, using these as opportunities to test and repair their understandings. The groups engaged in similar amounts of interpretation. Neither group strayed far from the data, making many low-level interpretations. They rarely made high-level interpretations, though these were often important in helping them move forward.

1.1.4. Collaboration

The collaboration coding included three subcategories: conflict, questioning, and facilitator input. Conflicts were rare and showed no difference across groups (1.5% of turns). The majority of these were task-specific. The HK group devoted a greater proportion of its turns to questions than the LK group (12.35% vs. 8.95%). The majority of these (48.85%) were clarification-seeking questions.
Table 2
Category frequencies and subcategory percentages^a

Number of turns: LK 2973; HK 1773
Number of trials: LK 14; HK 6

Knowledge: LK 33 (1.11% of turns); HK 47 (2.64%)
- Conceptual knowledge: LK 11 (33.33%); HK 23 (48.94%)
- Prior experience: LK 1 (3.03%); HK 3 (6.38%)
- Analogies: LK 21 (63.64%); HK 21 (44.68%)

Metacognition: LK 393 (13.21%); HK 342 (19.28%)
- Monitoring: LK 215 (54.71%); HK 179 (52.33%)
- Evaluation: LK 56 (14.24%); HK 79 (23.10%)
- Reflection: LK 53 (13.49%); HK 37 (10.82%)
- Total planning: LK 69 (17.50%); HK 47 (13.74%)
- Planning, theory-driven: LK 16 (4.07%); HK 19 (5.50%)
- Planning, data-driven: LK 48 (12.20%); HK 19 (5.50%)
- Planning, unjustified: LK 5 (1.27%); HK 9 (2.63%)

Interpretation: LK 204 (6.84%); HK 107 (6.02%)
- Low-level: LK 173 (84.80%); HK 91 (85.05%)
- High-level: LK 31 (15.20%); HK 16 (14.95%)

Conflict: LK 44 (1.48%); HK 27 (1.52%)
- Conceptual: LK 6 (13.64%); HK 3 (11.11%)
- Task-specific: LK 38 (86.36%); HK 24 (88.89%)

Questioning: LK 266 (8.95%); HK 219 (12.35%)
- Clarifications: LK 116 (43.60%); HK 107 (48.85%)
- Plan-related: LK 56 (21.05%); HK 51 (23.28%)
- Software-related: LK 32 (12.03%); HK 13 (5.93%)
- Self-answered: LK 5 (1.87%); HK 2 (0.91%)
- General: LK 16 (6.02%); HK 16 (7.30%)
- Facilitator: LK 41 (15.41%); HK 30 (13.69%)

Responses: LK 273 (9.18%); HK 376 (21.21%)
- Agreement with facilitator: LK 50 (18.30%); HK 41 (10.90%)
- Agreement with partner: LK 107 (39.19%); HK 208 (55.32%)
- Brief answers: LK 75 (27.47%); HK 86 (22.87%)
- Simple explanations: LK 35 (12.82%); HK 13 (3.45%)
- Elaborate explanations: LK 6 (2.19%); HK 28 (7.44%)

Facilitator's input: LK 449 (15.05%); HK 352 (19.80%)
- Monitoring: LK 242 (53.89%); HK 233 (66.19%)
- Explaining concepts: LK 53 (11.80%); HK 23 (6.53%)
- Explaining software: LK 154 (34.29%); HK 96 (27.27%)

^a For each major category, the parenthesized value is the percentage of total turns falling in that category. For subcategories, the parenthesized value is the percentage of turns within the major category. LK = low-knowledge group; HK = high-knowledge group.
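The two kinds of percentages described in the table note can be computed directly from the counts. The following worked sketch (in Python) uses the LK group's Knowledge counts from Table 2; it is illustrative only, not the study's analysis code.

# Percentage computations described in the note to Table 2,
# using the low-knowledge group's Knowledge counts.
total_turns = 2973
knowledge = {"Conceptual knowledge": 11, "Prior experience": 1, "Analogies": 21}

category_total = sum(knowledge.values())             # 33 turns in the category
pct_of_turns = 100 * category_total / total_turns    # ~1.11% of all turns
within = {sub: 100 * n / category_total for sub, n in knowledge.items()}
# ~33.33%, ~3.03%, and ~63.64% within the Knowledge category

print(f"Knowledge: {category_total} turns ({pct_of_turns:.2f}% of all turns)")
for sub, pct in within.items():
    print(f"  {sub}: {pct:.2f}%")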
These are important because they reflect negotiation of shared meaning. The HK group displayed slightly more clarification seeking than the LK group. The next most common type of questioning was plan-related questions, which were similar across the groups. The LK group asked more software-related questions than the HK group, as shown in Table 2. There was no difference in facilitator questioning. The LK group was less likely to respond to facilitator questions than the HK group.

The pattern of responses was different for the two groups. The HK group engaged in a great deal of consensus seeking, as shown by the large number of turns spent in agreement with other group members compared with the LK group. Consensus seeking is likely a part of negotiating the JPS and converging on a shared understanding. The LK group was more engaged in constructing simple explanations than the HK group, which was slightly more likely to generate elaborate explanations.

The major action of the facilitator was monitoring. This accounted for 60% of the facilitator's input across both groups. The percentage of facilitator monitoring was lower for the LK group because the facilitator needed to explain more about concepts and software than in the HK group. It is not surprising that the limited knowledge in the LK group required more content-specific facilitator support than in the HK group.

1.2. Qualitative results: construction of the joint problem space

The category frequencies are extremely informative regarding some aspects of the knowledge construction process, but they do not fully address how students constructed a JPS. We looked for examples across multiple turns that illuminate how this space was co-constructed. In particular, instances of negotiating joint understanding of the task, planning, and collaborative explanations were subjected to additional qualitative analysis.

Initially, the students needed to construct a joint understanding of the task, the software, and the relevant variables in the clinical trial design process. The HK group was able to do this more quickly than the LK group, getting the big picture of the clinical trial design process after the second trial. By the third trial, they tended to interpret the results of the previous trial, summarize what had happened, engage in high-level interpretation, and move on to planning their next trial. When they examined the results of a trial, they focused immediately on relevant information. In contrast, the LK students did not get the big picture until after the third trial. Their pattern was to start planning, realize that they needed more information, go to the data, and return to planning. So for any given trial, they often cycled between planning and data interpretation. The transcripts indicated that their search through the data was often exhaustive: these students examined the individual patient histories for most of the patients in the trial. Although qualitatively different, both groups engaged in joint construction of the problem space as they constructed interpretations, explanations, and plans (see Hmelo et al., 2000 for additional details).

One example of collaborative knowledge construction occurred as the groups figured out how to represent the problem (Hmelo et al., 2000). Understanding the problem space was difficult and required marshalling all the resources within the HK group. As they were planning their second trial, they were trying to understand how toxicity is graded in order to set up appropriate contingency rules.
In testing new cancer drugs, investigators must strike a balance between therapeutic and toxic effects. The students needed to plan for consistent changes to a treatment protocol in response to different toxic side effects. These contingency rules are if-then rules that state that
for a given level of a toxic side effect, some change to the patients' treatment will be made. In the example below, the HK students had already worked out an understanding of the need for recovery time between trials, using knowledge about the role of bone marrow in producing blood cells. While examining a handout with the standard toxicity grades for each organ system, they were trying to understand the implications of this and to set up contingency rules to reduce the dosages patients would receive at different toxicity grades. Students did not begin to consider the consequences of imposing these rules until the tool provided a screen with this option, which mediated many discussions of contingency rules. Lou started the dialog with a query to the group about what their plan would be. The computer tool helped guide their thinking about the dose modification toxicity rule. Helen responded by referring to the computer screen that provided an opportunity for them to set up one kind of toxicity rule. Other members of the group jumped in and worked to negotiate the meaning of the toxicity grades. They also tried to understand the difference between off-treatment rules (which removed the patient from the trial) and contingency rules.

Lou: Ok, So now what?
Helen: Should I pick some of these? (referring to options on the computer screen)
Carl: Dose reduction criteria to prevent irreversible (?).
Lou: So if we have a Grade 3 or above and we want to do something, we'll probably, if we have a Grade 3 or above we're gonna have to stop anyway so. . .
Maddy: Stopping the trial except for two's.
Helen: So we only have, this means that we can only pick two?
Facilitator: Right.
Helen: Unless it has toxicity.
Lou: Yeah, unless we move up our, to four on our overall and then you're three here, right?

This example demonstrates how students posed questions and sought clarification from each other as they addressed issues and negotiated their understanding.

The LK group began negotiating the meaning of toxicity when Sean said, "O.K., so this is heme, neuro stuff so we had to in order to. It says neutropenia, thrombocytopenia. So we have to look ah at the first box, white blood cells, for instance and platelets are going to be both." This indicates that they were focusing on effects on blood cell production. This statement was closely connected to the OncoTCAP display that the students were viewing. They went on to discuss the levels of platelets associated with different toxicity grades and arrived at a consensus that grade 4 would be unacceptable (it is, in fact, life threatening). They began talking about the number of platelet cells, which are necessary for blood clotting, as Sean and John tried to reach consensus:

John: I'd be worried like at ah, platelets
Sean: You really don't get, needs to be at less than 20.
John: 25
Chuck: Right
John: 20.
Sean: Right?
John: 17
Sean: So for severe, 4, will Grade 4 toxicities stop the platelets? We're going to press on at the platelets of 30. I don't know, I mean.

In this example, students used their understanding of platelets to negotiate what would be an acceptable level and then realized that going to the lower limit would have "severe" consequences. In the remainder of the discussion, they reconsidered the consequences of different levels of toxicity. John offered a plan to instead take patients off treatment at Grade 3. Sean demonstrated that he accepted this idea by putting that information into the computer, simultaneously reading from the screen as Chuck finished the statement.

The LK group started their discussion very concretely, from the absolute level of one indicator of one toxicity. Moreover, they were treating off-treatment criteria as the only possible way to deal with adverse toxicity. This contrasts with the HK group, which began by trying to distinguish different types of rules and to negotiate, generally, what level of toxicity would be acceptable. For both groups, the tool helped mediate the discussions, as students completed various screen-based forms to set up their trials and interpreted different data displays and graphical representations.

Although this is a brief snapshot of the analytic technique, it demonstrates how the fine-grained coding and coarser analyses complement each other and provide a more complete picture of collaborative knowledge construction than either technique would alone. The fine-grained analysis provides a view of the data that summarizes the cognitive and social processes involved in constructing the JPS, but it does not convey all the richness of the sequence of events or the social interaction. The second analysis complements the summary analysis by demonstrating bigger units of activity and how these are mediated by the tools that are available.
2. Study 2: studying tool-mediated collaboration in a problem-based learning group

In this study, a single student group was analyzed as they spent five hours in a problem-based learning (PBL) tutorial (Hmelo-Silver, 2002a, 2002b). Problem-based learning is a student-centered instructional method in which students work in small, facilitated groups to learn through problem-solving (Barrows & Tamblyn, 1980). One goal of this study was to examine how the students collaboratively constructed knowledge. A second goal was to examine how use of a representational tool helped mediate learning. Thus, this study focused on how content, process, and tools interact during social knowledge construction. Three different analyses were conducted to address these goals. As in Study 1, verbal data analysis (Chi, 1997) was used to conduct a fine-grained analysis of the discourse, and additional qualitative analysis was used to capture collaborative explanations. In addition, the CORDFU technique developed by Luckin and colleagues (2001) was adapted to address the second goal.

The participants in this study were a group of five second-year medical students and an expert facilitator. The discussion was videotaped and transcribed as the students tried to understand a case of pernicious anemia, a blood disorder that causes nervous system problems. The entire transcript was coded for the types of questions and statements in the discourse. All the questions asked were identified and coded on a turn-by-turn basis. The turn was generally the unit of analysis; however, turns were parsed when the topic changed or when additional questions or statements were included in a single turn.
Question-asking, especially by students, can indicate that learners are actively thinking. It helps learners organize and reformulate their ideas and connect new information to their prior knowledge (King, 1999). Three major categories of questions were coded, as shown in Table 3 (Graesser & Person, 1994). Short answer questions required simple answers of five types: verification, disjunction, concept completion, feature specification, and quantification. Long answer questions required more elaborated relational responses of nine types: definitions, examples, comparisons, interpretations, causal antecedent, causal consequence, expectational, judgmental, and enablement. The meta category referred to group dynamics, monitoring, self-directed learning, clarification-seeking questions, and requests for action.

To examine collaborative knowledge building, statements were coded as to whether they were new ideas, modifications of ideas, agreements, disagreements, or metacognitive statements. Each of these statements was also coded for its depth. Statements were coded as simple if they were assertions without any justification or elaboration; these corresponded to responses to the short answer questions and included verifications, concept completions, and quantities. Elaborated statements went beyond simple assertions by including definitions, examples, comparisons, judgments, and predictions. Statements were coded as causal if they described the processes that lead to a particular state or result from a particular event. These last two types of statements are indicative of deep cognitive processing.

In addition to these fine-grained analyses, an additional episode was selected for further examination. This episode occurred late in the second session as the students drew a flowchart and a diagram that helped them integrate their understanding. The representation construction activity lasted for approximately one half hour and was coded at a very coarse level as to whether the drawing actions focused on anatomy and physiology, biochemistry, or clinical signs and symptoms. To examine how the representation mediated the students' collaborative knowledge construction, a chronologically-ordered representation of discourse and tool-related activity (CORDTRA) was constructed in order to gain an integrated understanding of how students used the representation as a tool for collaborative knowledge construction (Luckin et al., 2001).

2.1. Quantitative results: questions and explanations

Students were expected to ask a substantial number of questions and, because these were experienced PBL students, they were also expected to pose many meta questions. Meta questions were expected to be the major category for the facilitator. The distribution of questions is shown in Fig. 1. A total of 809 questions were asked. The students asked 226 short answer questions, 51 long answer questions, and 189 meta questions. Of the short answer questions, the modal question type elicited the features of the patient's illness from the medical record, suggesting that the students were building rich problem representations. The facilitator asked 39 short answer questions, 48 long answer questions, and 256 meta questions. Short answer questions were used to focus students' attention.
Long answer questions often asked the students to define what they had said or to interpret information, as, for example, when the facilitator asked a student, "But I mean what produces the numbness at the bottom of the feet?" Meta questions were the dominant mode for the facilitator, for example, as he asked the students to evaluate one of their hypotheses: "Well yeah, multiple sclerosis. How about that? How do you feel about that?" These questions also included monitoring the group dynamics. The facilitator asked few content-focused questions.
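As a quick check, the reported counts are internally consistent: the students' 466 questions and the facilitator's 343 sum to the 809 total. A short sketch (in Python), purely a worked example using the counts reported above:

# Sanity check: student and facilitator question counts sum to the total.
students = {"short": 226, "long": 51, "meta": 189}    # 466 questions
facilitator = {"short": 39, "long": 48, "meta": 256}  # 343 questions

total = sum(students.values()) + sum(facilitator.values())
assert total == 809  # matches the reported overall total
for role, counts in (("students", students), ("facilitator", facilitator)):
    print(role, counts, "->", sum(counts.values()))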
Table 3
Categories of questions

Short answer
1. Verification: For yes/no responses to factual questions. Example: "Are headaches associated with high blood pressure?"
2. Disjunctive: Questions that require a simple decision between two alternatives. Example: "Is it all the toes? Or just the great toe?"
3. Concept completion: Filling in the blank or the details of a definition. Example: "What supplies the bottom of the feet? Where does that come from?"
4. Feature specification: Determines qualitative attributes of an object or situation. Example: "Could we get a general appearance and vital signs?"
5. Quantification: Determines quantitative attributes of an object or situation. Example: "How many lymphocytes does she have?"

Long answer
6. Definition: Determine meaning of a concept. Example: "What do you guys know about pernicious anemia as a disease?"
7. Example: Request for an instance of a particular concept or event type. Example: "When have we seen this kind of patient before?"
8. Comparison: Identify similarities and differences between two or more objects. Example: "Are there any more proximal lesions that could cause this? I mean I know it's bilateral."
9. Interpretation: A description of what can be inferred from a pattern of data. Example: "You guys want to tell me what you saw in the peripheral smear?"
10. Causal antecedent: Asks for an explanation of what state or event causally led to the current state and why. Example: "What do you guys know about compression leading to numbness and tingling? How that happens?"
11. Causal consequence: Asks for an explanation of the consequences of an event or state. Example: "What happens when it's, when the, when the neuron's demyelinated?"
12. Enablement: Asks for an explanation of the object, agent, or process that allows some action to be performed. Example: "How does uhm involvement of veins produce numbness in the foot?"
13. Expectational: Asks about expectations or predictions (including violation of expectation). Example: "How much, how much better is her, are her neural signs expected to get?"
14. Judgmental: Asks about the value placed on an idea, advice, or plan. Example: "Should we put her to that trouble, do you feel, on the basis of what your thinking is?"

Task oriented and meta
15. Group dynamics: Lead to discussions of consensus or negotiation, or of how the group should proceed. Example: "So Mary, do you know what they are talking about?"
16. Monitoring: Help check on progress; requests for planning. Example: "Um, so what did you want to do next?"
17. Self-directed learning: Relate to defining learning issues and who found what information. Example: "So might that be a learning issue we can, we can take a look at?"
18. Need clarification: The speaker does not understand something and needs further explanation or confirmation of a previous statement. Example: "Are you, are you, Jeff are you talking about micro vascular damage that then, which then causes the neuropathy?"
19. Request/directive: Request for action related to the PBL process. Example: "Why don't you give, why don't you give Jeff a chance to get the board up."
If knowledge were being collaboratively constructed, the students' statements should be in response to previously introduced ideas. This was indeed the case. The facilitator made a total of 243 statements and the students made a total of 3763 statements. Eighty percent of these statements were directly related to concepts that were important for the problem. The distribution of statement types is shown in Fig. 2. This demonstrates that the students were doing most of the talking and that they were engaged with curriculum-relevant content. The facilitator made few statements, rarely offering new ideas or modifying existing ideas. The facilitator was most likely to offer a comment monitoring the group's progress or to encourage students to consider that a poorly elaborated idea might become a learning issue. Both the metacognitive questioning and statements helped support the students' collaborative knowledge construction as they built on the new ideas offered by others, expressing agreement and disagreement and modifying the ideas being discussed. Of the first four categories of statements, the majority were simple statements (1641), but the students also made elaborated statements (464) and causal explanations (211). Although many of the statements taken individually were simple, taken together as collaborative explanations they were elaborated over several speakers and conversational turns. Various excerpts can be used to illustrate these collaborative explanations (see the example below).
Fig. 1. Distribution of question types.
2.2. An integrated view of the "drawing episode": combining qualitative and quantitative results

To examine how students collaboratively constructed knowledge, I zoomed in on an episode near the end of the activity in which students were drawing a representation of their understanding of the case, adapting the CORDFU methodology (Luckin et al., 2001; Luckin et al., 1998) to create a chronologically-ordered representation of discourse and tool-related activity (CORDTRA) diagram, shown in Fig. 3. Late in the second session, the facilitator suggested, "Um, probably the best way to pull this all together I suppose is to uh, uh tell me what you think is involved in her nervous system. Can you uh, can you draw a diagram of where you think the problem is?" This prompt led to a rich 29-min discussion in which the group members worked at pulling together their understanding. This episode had three phases: a brief phase in which the group planned the drawing; the majority of the drawing phase, with an important segment in which the students made the connections between the signs and symptoms and different levels of functioning; and finally, a wrap-up characterized by references to the drawing and tying up loose ends. The group's final drawing is shown in Fig. 4. To understand these episodes in greater detail, the CORDTRA diagrams allowed simultaneous examination of talk and tool-related activity. To illustrate this methodology, this paper discusses the second phase of this activity.
Fig. 2. Distribution of statement types.
2.2.1. Interpreting CORDTRA

In Fig. 3, the numbers along the x-axis refer to the line number of each conversational turn. Along the y-axis are line numbers that represent categories. The entries in the graph refer to speakers, drawing activity, and instances of discourse; each entry is coded in the category indicated by its position on the y-axis, at the turn indicated by its position along the x-axis. Lines 1–6 identify the speakers: Line 1 is the facilitator, and Lines 2–6 are the students in the group (the legend identifies the students in order). Lines 7–9 refer to short, long, and meta questions, respectively. Lines 10–19 refer to statements. Recall that many of the statement types (new idea, New; modification, Mod; conceptual agreement, CA; task-related agreement, TRA; conceptual disagreement, CD; and task-related disagreement, TRD) could also be coded as to whether they were simple assertions, elaborations, or causal statements. Lines 20–22 refer to the actual activity of constructing the representation: the first of these lines refers to representing the phenomenon at the level of structures and functions (anatomy and physiology, D-AP), the next refers to the biochemical level of explanation (D-Bchem), and the final one refers to the level of signs and symptoms (D-SS). The last three lines, 23–25, are references to the drawing: Line 23 refers to gestures directed at the drawing, Line 24 refers to talk related to drawing conventions and planning, and Line 25 is for other spoken references.
Fig. 3. CORDTRA diagram of students mapping between different levels of analysis in middle phase of activity.
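A diagram of this kind is straightforward to generate once each conversational turn carries its speaker, discourse codes, and tool-activity codes. The sketch below (Python with matplotlib) is a minimal illustration, not the plotting code used in the study; the category list follows the line assignments described above, and the coded rows are invented placeholders.

import matplotlib.pyplot as plt

# One horizontal line per category: speakers, discourse codes, drawing activity.
categories = [
    "Facilitator", "Jeff", "Jim", "Mary", "Sheila", "Denise",
    "Meta question", "Causal statement", "D-AP", "D-Bchem", "D-SS",
    "Drawing reference",
]
y_for = {cat: i + 1 for i, cat in enumerate(categories)}

# Invented placeholder codes: (conversational turn, category).
coded = [
    (265, "Jim"), (265, "Drawing reference"),
    (266, "Jeff"), (266, "Drawing reference"),
    (267, "Sheila"), (268, "Denise"),
    (269, "Jeff"), (269, "D-SS"),
    (312, "Mary"), (312, "Causal statement"), (312, "D-Bchem"),
]

# Plot each coded instance as a marker at (turn, category line).
for turn, cat in coded:
    plt.plot(turn, y_for[cat], "ks", markersize=4)

plt.yticks(list(y_for.values()), categories)
plt.xlabel("Conversational turn")
plt.title("CORDTRA-style diagram (illustrative data)")
plt.tight_layout()
plt.show()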
The phase of the activity presented here is when students begin making connections between their hypotheses about the patient's disease and the observed effects (signs and symptoms).

2.2.2. Mapping between causes and effects

After a fairly detailed discussion of the biochemistry, Jeff and Jim have a brief discussion about representational conventions in lines 265–269.

Jim: One of, one of the last things about that besides the, which you're going to write the, you should write that up about the megaloblastic cells, just as another arrow.
Jeff: Yeah we could have like symptoms here.
Sheila: Uh hmm. Yes.
Denise: Yeah. Yeah.
Jeff: I'll draw the symptoms in black.
Fig. 4. Student-constructed representation.
That triggered the next phase, as the students began connecting their hypotheses about causal mechanisms (i.e., anatomy, physiology, biochemistry) to the evidence (signs and symptoms), shown in Fig. 3. This is important because this discussion of how to represent processes and signs and symptoms moved the students' thinking forward. Thus, the representation served as a tool in their collaborative knowledge construction and a focus for negotiation. The CORDTRA diagram shows the relation of the discourse to the drawing activity. This makes salient the nature of student talk as they switch between different levels of representation. At the junctures where student drawing activity switches from representations of basic science processes to signs and symptoms, or between levels of science, the students engage in causal elaborations. In the discussion preceding the next excerpt, the students were focused on basic science mechanisms without connecting their ideas to the patient's signs and symptoms. The facilitator jumped in and asked, "Okay. Now you're going to bring it into the nervous system." Students responded to this by first completing their biochemical explanation, but then connecting it to the clinical signs in lines 312–322.
Jeff: Where exactly is
Jim: You should, you should start off
Jeff: off here
Jim: Methylmalanil to succinyl
Jeff: Right here
Jim: Yeah
Sheila: Yeah, there, yeah
Jim: We, you start with odd number fatty, odd number of carbons for the fatty acids.
Mary: Fatty acids
Sheila: Right
Mary: And then you incorporate it a, a carbon dioxide that it's a carboxylation reaction for the propianol Co-A to the methylmalanil Co-A. So you convert it from an odd chain with three to a four chain and then you do, it's actually a mutase reaction for the methyl.

This discussion continues until they get to how the membranes of the neurons are formed, which is directly relevant to the patient's problem, and they continue in lines 334–344:

Jeff: So these get incorporated into the
Mary: Membranes
Jim: In the handout that I gave you, the last sheet gives the um pathogenesis of this vitamin B12 deficiency.
Jeff: So incorporated into the membranes and then you get. . . neuron loss, demyelination.
Jim: Specifically dorsal column. Yeah. Specifically dorsal column
Mary: Right
Jim: And it, it's called like the, the term, the category is a, is a metabolic demyelinization.
Mary: And you get neuronal also um, various things that happen. I believe you get neuronal cell swelling within the membrane and then you can get neuronal death. And that's when you get the paralysis and once it progresses to that stage, as we know, neurons will regenerate.
Here, the students went through a causal explanation in which they clarified their ideas and integrated different levels of analysis, though they had only just begun to get to the clinical level; in fact, they brought their explanation only to the level of hypothetical symptoms. This explanation is highly collaborative as students monitor each other's statements and complete each other's sentences. The students then got more specific and started to identify structural and functional abnormalities that account for the patient's symptoms in response to the facilitator's question:

Facilitator: Okay now you want to, would you please summarize those structures that are involved in the nervous system. What, where is that happening? This swelling of the neurons and loss of myelin.
Mary: Centrally and peripherally
Facilitator: Nice
Facilitator: Now narrow it down just a little tad
Jim: Dorsal column
Mary: Dorsal column, specifically dorsal column
Sheila: Yeah
Denise: Just
Facilitator: Is that it? Just the dorsal columns
Sheila: That's the main place right? It doesn't happen in..
Jim: That's what causing her symptoms
Jeff: What are her symptoms?
Denise: And then, then Mary eventually do you get um..
Jim: Paresis, paresthesia
Jeff: Paresthesia
Jim: Which is numbness and tingling and hyperexcitability
Jeff: Okay
Sheila: Um. . . and then the loss of. . . yeah
Jim: And then gait
Sheila: Then the loss of, yeah. The proprioception and vibratory loss
Mary: Ataxia, sensory ataxia is what it's called for the gait abnormality
Facilitator: You want to describe what sensory ataxia means?
Mary: Sensory ataxia um, is specific when, is it, it's a problem when you actually lose sensation. For example, if you lose your um, position sense, you then are not able to walk properly or you're not able to do movements that you would normally do because you don't have a sense of where your fingers or toes or your feet are. So, for someone who has a gait disturbance as she has, you'd classify that as a sensory ataxic.
Sheila: Although actually the description of hers doesn't quite fit.

This excerpt corresponds to lines 347–374 on the CORDTRA diagram. Here the students were getting closer to tying the problem of demyelination to specific structures (the dorsal column) and then mapping it onto the signs and symptoms that the patient was actually exhibiting. Moreover, they were monitoring the fit between the symptoms she was exhibiting and their theoretical descriptions. All the students were involved in this collaborative sense-making. The drawing was an important tool in this discussion. It served as a concrete referent that students could point
towards and negotiate over as they elaborated and monitored their joint understanding (which they did in the final phase of this episode).

This analysis provides considerable information about the relationship among the variables and representation construction that the frequency counts do not provide. The fine-grained analysis summarizes the cognitive and social activity, but it does not capture the richness of the collaborative explanations that students construct. The analysis of the larger units of discourse helps shed light on this phenomenon, as well as providing some information about how the representation served as a tool for the students' collaborative thinking. The CORDTRA diagram makes salient the relation of metacognitive talk and causal explanation to the conceptual space covered in the drawing activity, and it supports making complex inferences that might otherwise be difficult. This allows exploration of the relationship between tool use (in this case, a drawing) and collaborative knowledge construction. The different methods provide the opportunity to see more of the elephant than any one method does by itself.
3. Discussion

In these two studies, several analytic techniques were used. In Study 1, the focus was on coding features of collaborative discourse that were related to joint knowledge construction, such as questioning and explaining. A combination of quantitative methods (frequency counts) and illustrative qualitative analyses was used to help answer the research questions. In Study 2, an attempt was made to integrate the different aspects of the analysis to gain a bigger picture of how a representation mediated collaborative knowledge construction. Although this latter study does not look at technology, the technique has a great deal of potential to support analysis of collaborative knowledge construction in computer-based learning environments. The CORDTRA technique would have been extremely helpful in analyzing Study 1, but there were not sufficient data for such an analysis.

Activity theory is a descriptive theory of human thought and behavior in context. This theory suggests that learning needs to be considered as an activity system that involves subjects and mediating artifacts (be they representations, computers, or other tools) that act to transform particular objects of activity to achieve an outcome (Engeström, 1999). The activity system is affected by other social and historical factors as well. Understanding such a complex system is a substantial undertaking, and multiple methods are often required to understand how knowledge is constructed (Salomon, 1991). Rather than rigid methodological orthodoxy, the combination of methods used must be tailored to one's research questions: which aspects of interaction (in these instances, collaborative knowledge construction) one seeks to understand. For example, in Study 2, the fine-grained coding answered questions about the students' cognitive activity by providing information about how they asked questions, monitored, and elaborated their understanding. The excerpts answered questions about the interactive processes involved in generating a collaborative explanation. The CORDTRA analysis was directed at questions about how a representation mediates learning. These methodological techniques have great potential to inform analysis of data in CSCL systems as investigators seek to answer cognitive, social, and tool-related questions in an integrated way.

In these studies, a mixture of quantitative and qualitative methods was used to analyze interaction. Recall the story of the three blind men examining an elephant.
One man said that the animal was like a wall. Another said that it was like a spear. The third described a snake. But in their reductiveness, none of them had the complete picture. The argument that I make here is that to see the whole elephant, we need to mix our methods to get the big picture.

Acknowledgements

This research was partially funded by a National Academy of Education/Spencer Foundation Postdoctoral Fellowship.
Appendix. Screenshots from the Oncology Thinking Cap
Step 1 of the Clinical Trial Design Wizard: Defining the dose and schedule.
Step 2 of the Clinical Trial Design Wizard: Modifying the dose due to the occurrence of toxicity.
Step 3 of the Clinical Trial Design Wizard: Deciding when individual patients will be taken off-treatment.
Step 4 of the Clinical Trial Design Wizard: Setting the statistical parameters.
Multiple patient simulation result screen showing the number of patients in the trial; the number of complete responses (CR), partial responses (PR), and recurrences. In addition, it shows patient deaths due to tumor, deaths due to toxicity (Tox), and those that reached the end of the trial either with no evidence of disease (NED) or with a tumor. The bottom half of the screen allows the user to view the history of an individual patient.
References

Barrows, H., & Tamblyn, R. (1980). Problem-based learning: an approach to medical education. NY: Springer.
Cazden, C. (1986). Classroom discourse. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed.) (pp. 432–463). NY: MacMillan.
Chi, M. T. H. (1997). Quantifying qualitative analyses of verbal data: a practical guide. Journal of the Learning Sciences, 6, 271–315.
Chinn, C. A., & Anderson, R. C. (2000). The structure of discussions that promote reasoning. Teachers College Record, 100, 315–368.
Cobb, P., & Yackel, E. (1996). Constructivist, emergent, and sociocultural perspectives in the context of developmental research. Educational Psychologist, 31(3/4), 175–190.
Cole, M. (1996). Cultural psychology: a once and future discipline. Cambridge, MA: Harvard.
Engeström, Y. (1999). Activity theory and individual and social transformation. In Y. Engeström, R. Miettinen, & R. Punamaki (Eds.), Perspectives on activity theory (pp. 19–38). NY: Cambridge University Press.
Graesser, A., & Person, N. (1994). Question asking during tutoring. American Educational Research Journal, 31, 104–137.
Greeno, J., Collins, A., & Resnick, L. (1996). Cognition and learning. In D. Berliner, & R. Calfee (Eds.), Handbook of educational psychology (pp. 15–46). NY: MacMillan.
Hmelo, C. E., & Guzdial, M. (1996). Of black and glass boxes: scaffolding for learning and doing. In D. C. Edelson, & E. A. Domeshek (Eds.), Proceedings of ICLS 96 (pp. 128–134). Charlottesville, VA: AACE.
Hmelo, C. E., Nagarajan, A., & Day, R. S. (2000). Effects of high and low prior knowledge on construction of a joint problem space. Journal of Experimental Education, 69, 36–56.
Hmelo, C. E., Nagarajan, A., & Day, R. S. (2002). "It's harder than we thought it would be": a comparative case study of expert-novice experimentation. Science Education, 86, 219–243.
Hmelo, C. E., Ramakrishnan, S., Day, R., Shirey, W., Brufsky, A., Johnson, C., Baar, J., & Huang, Q. (2001). The Oncology Thinking Cap: scaffolded use of a simulation to learn about designing clinical trials. Teaching and Learning in Medicine, 13, 183–191.
Hmelo-Silver, C. E. (2002a). Collaborative ways of knowing: issues in facilitation. In G. Stahl (Ed.), Proceedings of CSCL 2002 (pp. 199–208). Hillsdale, NJ: Erlbaum.
Hmelo-Silver, C. E. (2002b). Getting the big picture: discourse, representation and reflection in a tutorial group (in preparation).
King, A. (1999). Discourse patterns for mediating peer learning. In A. M. O'Donnell, & A. King (Eds.), Cognitive perspectives on peer learning (pp. 87–117). Mahwah, NJ: Erlbaum.
Koschmann, T., Glenn, P., & Conlee, M. (2000). When is a problem-based tutorial not tutorial? Analyzing the tutor's role in the emergence of a learning issue. In D. Evensen, & C. E. Hmelo (Eds.), Problem-based learning: a research perspective on learning interactions (pp. 53–74). Mahwah, NJ: Erlbaum.
Kozulin, A. (1998). Psychological tools. Cambridge, MA: Harvard.
Luckin, R., Plowman, L., Gjedde, L., Laurillard, D., Stratfold, M., & Taylor, J. (1998). An evaluator's toolkit for tracking interactivity and learning. In M. Oliver (Ed.), Innovation in the evaluation of learning technology (pp. 42–64). London: University of North London.
Luckin, R., Plowman, L., Laurillard, D., Stratfold, M., Taylor, J., & Corben, S. (2001). Narrative evolution: learning from students' talk about species variation. International Journal of AI in Education, 12, 100–123.
Palincsar, A. S. (1998). Social constructivist perspectives on teaching and learning. Annual Review of Psychology, 49, 345–375.
Pea, R. D. (1993). Practices of distributed intelligence and designs for education. In G. Salomon, & D. Perkins (Eds.), Distributed cognitions: psychological and educational considerations (pp. 47–87). NY: Cambridge.
Roschelle, J. (1996). Learning by collaborating: convergent conceptual change. In T. D. Koschmann (Ed.), CSCL: theory and practice of an emerging paradigm (pp. 209–248). Mahwah, NJ: Erlbaum.
Salomon, G. (1991). Transcending the qualitative-quantitative debate: the analytic and systemic approaches to educational research. Educational Researcher, 20, 10–18.
Saxe, J. G. (n.d.). The blind men and the elephant. Retrieved August 23, 2002, from http://www.noogenesis.com/pineapple/blind_men_elephant.html.
Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: Harvard.
Wenger, E. (1998). Communities of practice: learning, meaning, and identity. NY: Cambridge.