System 37 (2009) 514–525 www.elsevier.com/locate/system
Training effects on computer-mediated peer review

Hsien-Chin Liou *, Zhong-Yan Peng
National Tsing Hua University, 101 Sec. 2 Kuang Fu Rd., Hsinchu 30043, Taiwan, ROC

Received 28 October 2008; received in revised form 21 January 2009; accepted 28 January 2009

* Corresponding author. Tel.: +886 3 5742709; fax: +886 3 5718977. E-mail addresses: [email protected] (H.-C. Liou), [email protected] (Z.-Y. Peng).

doi:10.1016/j.system.2009.01.005
Abstract

The interactive functions of weblogs facilitate computer-mediated peer review for collaborative writing. As limited research has examined the effects of peer review training on students' peer comments, their revision quality, and their perceptions when composing in weblogs, the present case study aims to fill this gap. Thirteen freshman students participated in an EFL writing class in which they wrote four formal assignments in weblogs. Peer review training was conducted during the second and third assignments to facilitate the collaborative process. Comparisons between reviews without and with training (i.e., on the first and the fourth assignments) indicate that the students made more revision-oriented peer comments and had more success in revising their compositions, although they adopted less than 50% of the comments for revision. The students' perception data show that blog-enhanced instruction stimulated their interest in improving their writing; yet not all of the participants felt confident about providing useful peer feedback. Given the empirical evidence presented in the study, blogs can serve as a suitable platform for EFL writing instruction in terms of the opportunities they give for interaction. As training is essential to make computer-mediated peer review effective, the study supports the crucial role played by language teachers when incorporating Internet technologies into writing instruction.

© 2009 Elsevier Ltd. All rights reserved.

Keywords: Weblog; Computer-mediated peer review; Peer review training; Peer comments; Revision quality
1. Introduction

The worldwide growth of Internet technologies has attracted researchers to explore their impact on various aspects of writing instruction (Ciekanski and Chanier, 2008; Ho and Savignon, 2007; Liu and Sadler, 2003; Warschauer and Ware, 2006). To illustrate, Ciekanski and Chanier (2008) designed an experiment based on a synchronous audio-graphic conferencing tool with 16 false beginners in an English for Specific Purposes (ESP) course. They developed a coding scheme to analyze video data representing the user actions and speech acts that occurred in the various modalities of the system (aural, text chat, text editing, websites), and examined the relationship between how the learning tasks for collaborative writing were designed by tutors and how they were implemented by learners. The authors argue that the evaluation framework provided
can increase our understanding of multimodal communication structures through learner participation and learning practices.

With their distinctive features of archiving, hyperlinks, comments, and instant self-publishing, weblogs have been the subject of several projects on second language writing (Bloch, 2007; Wang, 2007). The commenting function of blogs is particularly worth examining, since it makes blogs a promising environment for peer review activities in the L2 writing classroom, helping students interact with each other and refine their writing. A number of studies show the benefits of peer review for classroom-based writing instruction, and such an activity involves complex factors for its success. Among these factors, the existing literature deems peer review training crucial (Berg, 1999; Min, 2005, 2006; Stanley, 1992; Zhu, 1995). Logically, training should be conducted all the more thoroughly in the context of computer-mediated peer review (Ware and O'Dowd, 2008), where familiarity with the technology and its functions adds further dimensions to an already complex peer review process. The limited weblog research conducted in EFL contexts has concerned mainly students' online participation and perceptions (Wang, 2007), leaving revisions and training effects untapped. It is argued in this study that students' revisions, online negotiation, and their perceptions of the review task as influenced by training should be examined together in order to pinpoint precisely how computer-mediated peer review can facilitate EFL writing instruction.

2. Literature review

Peer review for L2 writing has generated increasing research interest (e.g., Liu and Hansen, 2002; Lockhart and Ng, 1995; Mangelsdorf and Schlumberger, 1992; Wang, 2007; Zhu, 2001). With the interaction it sparks among students, peer review is claimed to assist the continued development of communicative competence, inspire more learner participation, create an authentic communicative context, and help writers gain more understanding of reader needs (Hyland, 2003). Traditionally, studies on peer review have fallen into three categories (Ferris, 2003): the nature of interactions taking place during peer review sessions, the impact of peer review on student revisions and overall writing quality, and student attitudes towards peer response. Studies linking peer response characteristics with learning outcomes and students' affective responses address instructors' prime concerns, but they have not always yielded satisfactory results. One finding is that participants may not adopt peer feedback. In Mendonça and Johnson's (1994) study, although 53% of peer comments were incorporated, the ESL students were quite advanced, with their writing and discussions centering on their specialized field, making the conditions of that study unlikely to appear in most general-purpose EFL composition courses. In another study, Connor and Asenavage (1994) found that very few student revisions (only 5%) were the direct result of peer response; peer reviewers tended to focus on surface forms (i.e., grammatical errors), while teacher comments covered idea development and organization. A related study was conducted by Paulus (1999), who examined the impact of peer review on eleven international students' revisions in a pre-freshman university course. Overall, peer feedback influenced 13.9% of all revisions, whereas teacher feedback influenced 34.3%.
Peer feedback's contribution to the improvement of revisions was thus much smaller than that of teacher feedback or other-influenced feedback. Another explanation for the failure of peer review is that some students gave overly critical comments (Leki, 1990) or only complimentary feedback, since they were reluctant to criticize their peers (Carson and Nelson, 1996). To make students more effective reviewers and thus enhance the effectiveness of peer review on student writing, some researchers have provided empirical evidence that training is crucial (Berg, 1999; Min, 2005, 2006; Stanley, 1992; Zhu, 1995). Berg's (1999) participants made more meaning-changing revisions after receiving training. In Min's (2006) study, not only did students' writing comprise a significantly higher percentage of revisions from peer responses (from 68% to 90%), but the number and quality of revisions were also enhanced by her training. This line of inquiry suggests that training is essential to make peer review effective.

2.1. Computer-mediated peer review (CMPR)

A number of researchers have compared face-to-face and computer-mediated peer review (Ho and Savignon, 2007; Liu and Sadler, 2003; Schultz, 1999; Ware and O'Dowd, 2008).
Computer-mediated peer review (CMPR) may alleviate some students' uneasiness during face-to-face peer review, which stems from a reluctance to criticize peers rooted in a cooperation-oriented cultural background (Carson and Nelson, 1996). Still, different types of interaction (i.e., synchronous and asynchronous) may affect the effectiveness and quality of feedback (Ho and Savignon, 2007; Liu and Sadler, 2003), which language teachers should understand and apply judiciously.

Schultz (1999) examined the differences in 54 French students' peer response between a traditional face-to-face format and a network-enhanced one using InterChange, which allows students to communicate in real time. The statistically significant results show that the online group (who used InterChange) made fewer changes than students in the face-to-face group in one class. More global changes were made in the oral mode, as it may allow more rapid interaction and exploration of the writer's goals and intentions.

Liu and Sadler (2003) investigated whether differences in the mode of commenting and interaction (technology-enhanced versus traditional) result in variations in the text area (global versus local), the type (evaluation, clarification, suggestion, and alteration), and the nature (revision-oriented versus non-revision-oriented) of comments given by peer reviewers, and the impact of observed differences on students' revisions. The findings show that their eight ESL students in the technology-enhanced group generated a larger overall number of peer comments and a higher percentage of revision-oriented comments. However, asynchronous interactions (Microsoft Word editing) were more effective than synchronous (MOO[1]) ones. The authors conclude that combining Word editing in an electronic review mode with face-to-face interaction in the traditional peer review mode may serve as a two-step procedure for effective peer review activities in L2 writing classrooms.

Ho and Savignon (2007) investigated the use of face-to-face peer review (FFPR) and computer-mediated peer review. The participants were 37 English majors in two writing classes at a national university of science and technology in Taiwan. Face-to-face peer review sessions and asynchronous computer-mediated peer review, using email and the annotation features of word-processing programs, were held in the writing classes. Questionnaire data revealed that learners had less favorable attitudes towards CMPR than towards FFPR. The researchers conclude that CMPR should not be used alone in EFL writing classes in Taiwan due to the lack of oral communication; in addition, the interval between email exchanges can be so long that it diminishes the effectiveness of peer comments.

Most recently, Ware and O'Dowd (2008) conducted a two-phase, year-long research project that explored the impact of corrective feedback on language development. The subjects were post-secondary learners of English and Spanish who engaged in telecollaboration through weekly asynchronous discussions. The students were asked to comment on their partners' use of the target language under one of two conditions: e-tutoring, in which students were asked to provide feedback on the errors made by their partners, and e-partnering, in which students were not asked to provide corrective feedback.
The findings indicated that students in both conditions preferred the inclusion of corrective feedback, but corrective feedback only occurred in the e-tutoring condition, in which the students were explicitly required to correct their partners' language use. The authors suggest that instructors should urge students to provide more corrective comments in telecollaboration exchanges.

With their easy navigation and What-You-See-Is-What-You-Get (WYSIWYG) platforms, weblogs have potential for language teaching (Bloch, 2007; Campbell, 2003; Huffaker, 2005). Ho and Savignon (2007) argue that the different software used for peer review activities is likely to generate different outcomes. The implication drawn from these comparison studies is that the asynchronous mode of CMC may be more appropriate than the synchronous mode for peer review activities, since the greater response time allowed in the former is conducive to students refining their comments before submission. Comments posted in weblogs, one type of asynchronous interaction, can be seen immediately by their receivers because blogs are enhanced with a self-publication facility. The effectiveness of using weblogs to hold peer review activities is therefore worth investigating. On the other hand, the participants in the previous CMPR studies did not receive explicit peer review instruction.
[1] MOO stands for Multi-user domain Object-Oriented. It is a software mechanism which can create an online text-based virtual reality environment for educational uses, including language teaching. In MOO, students can engage in real-time online chat and build objects in the environment; these two features also facilitate the formation of online communities.
In light of Ware and O'Dowd's (2008) findings, training is advisable in weblog-enhanced peer review if a positive outcome is targeted.

2.2. Research questions

The present study attempts to fill the gaps in the literature concerning the training effects of computer-mediated peer review activities through the following research questions. The first main question concerns participants' online discussion and their follow-up textual changes; the second addresses their perceptions of the blog-enhanced peer review activity:

(1) In what ways does peer review training influence students' peer comments and revisions in terms of feedback adoption, changes in feedback nature, and revision quality?
    (a) Is there any observable difference between students' peer comments pre- and post-training?
    (b) To what extent is peer feedback adopted by the participants in revising their drafts?
    (c) Is there any observable difference in students' revision quality pre- and post-training?
(2) How do the students like computer-mediated peer review as designed in this study?
3. Methodology

A case study approach was adopted to address the effects of training on computer-mediated peer review.

3.1. Participants

The present study involved 13 EFL freshman English majors, five of whom were male and seven female, at a public university in an Asian country. All the students were native speakers of Mandarin Chinese and foreign language learners of English. They had studied English as a school subject for six years, but they did not have much experience writing various text genres before being admitted into this program. The project lasted one semester.

3.2. Training of computer-mediated peer review

A free commercial blog environment, Vox (http://www.vox.com), was adopted for the current study. Four assignments were designed, and the writing cycle for each lasted 3-4 weeks. Students went through idea development, completing the first draft, exchanging peer comments, and revising the first draft to produce the second draft (based on Tsui and Ng, 2000; see Fig. 1).
[Fig. 1. Steps designed in the writing cycle (adapted from Tsui and Ng, 2000): brainstorming (pre-writing tasks on paper or in Word, with drafts uploaded to blogs); drafting outline/ideas/summary as blog entries; writing the first draft either in Word or on the blog; peer comments on the first draft on the blog, informed by peer review training; revision of the first draft; second draft on the blog.]
In the students' first and fourth writing assignment periods, no training was given; peer review training was incorporated into the second and the third assignment periods. To alleviate the potential difficulties of peer review activities, students received instruction on how to provide meaningful and helpful feedback on their peers' essays for revision. A training session was composed of two phases. First, students were given sheets containing the rules for good peer review (adapted from Min, 2005) and were told that they could provide comments on the organization, structure, and language of the writing. Then, they were given two drafts of an article, one original and one revised. The instructor guided them to read the two drafts and find out why the revised version was superior to the original. Examples of good peer comments were also given to help them refine their suggestions for their peers, and specific examples concerning organization, structure, and linguistic aspects were discussed in class to raise students' awareness of areas that needed improvement.

Students did most of their writing activities in a computer laboratory where each of them had access to a computer. They received thorough orientation and practice on how to use the blog functions and how to conduct successful peer reviews, with teacher-designed handouts, illustrations, and text samples. Different rhetorical patterns for the second and the third writing assignments, as well as computer skills, were taught while the students were working.

3.3. Procedure

Four formal writing assignments were designed with the following topics: "First Impressions of ___ (name of this university)", "Cooperation or Competition in Learning", "A Friend or Family Member I Admire", and "Activities That Contribute to Good Health". Students submitted comments on their peers' blogs for each of the four writing cycles, and the blogging environment kept track of the reviews. They had an orientation to the blog system, wrote the first assignment, received peer review training while working on the second and third assignments, and completed their fourth assignment. At the end of the semester, students completed an evaluation questionnaire of ten items on a five-point Likert agreement scale concerning their perceptions of the writing class.

3.4. Data analysis

The analysis of peer comments exchanged in the first and fourth writing cycles was based on a rubric originally developed by Liu and Sadler (2003) and adapted to fit the purpose of the current study. The unit of analysis is one sentence in a given posted comment; this means one comment may contain several units.
To examine the comments, they were first classified by their focus of text area: global issues (feedback on idea development, audience and purpose, and organization of the writing) or local issues (feedback on wording, grammar, or punctuation). The comments were then categorized into four types by discourse function: evaluation (comments on the good or bad features of the writing), clarification (probing for explanations and justifications), suggestion (pointing out the direction for changes), and "chatting" (comments not related to aspects of writing), a category we added; the alteration category in the original rubric was not adopted, as it did not apply to our data (for the rubric and examples taken from our participants' comments, see Appendix A). The third perspective on peer comments is whether a comment could lead to text revision: all comments except chatting and complimentary evaluations were classified as revision-oriented, and the latter two as non-revision-oriented. The peer comments in the first and fourth writing cycles were analyzed and compared in terms of discourse function, focus of text area, and revision nature (revision-oriented or not).

To examine revision quality, the researcher and a rater (a master's-level graduate student in TESOL) read through students' first drafts and the revisions (text segments changed on the basis of revision-oriented comments) to determine whether the revisions were effective in form and meaning within the entire student text. The interrater agreement was 97%. Revision size was classified as punctuation, word, phrase, clause, sentence, or paragraph; this means one revision-oriented sentence in a student's posted online comment may result in several units of revision in his or her second draft.
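To make the coding and tallying procedure concrete, the sketch below shows how such comment units could be represented and how the category percentages reported in the tables that follow could be computed. This is our illustrative example, not the authors' actual analysis tool; the names CommentUnit and tally are hypothetical, though the category labels follow the rubric described above.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class CommentUnit:
        """One sentence of a posted peer comment, coded on three dimensions."""
        function: str            # "evaluation" | "clarification" | "suggestion" | "chatting"
        area: str                # "global" | "local" ("" for chatting, which has no text area)
        revision_oriented: bool  # False only for chatting and complimentary evaluations

    def tally(units):
        """Return percentage breakdowns by discourse function and revision nature."""
        n = len(units)
        functions = Counter(u.function for u in units)
        oriented = sum(1 for u in units if u.revision_oriented)
        return {
            "by_function_pct": {f: round(100 * c / n, 1) for f, c in functions.items()},
            "revision_oriented_pct": round(100 * oriented / n, 1),
        }

    # Hypothetical coded units, purely for illustration:
    units = [
        CommentUnit("suggestion", "local", True),
        CommentUnit("evaluation", "global", False),  # a complimentary evaluation
        CommentUnit("chatting", "", False),
    ]
    print(tally(units))  # e.g., revision_oriented_pct: 33.3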
The purpose of analyzing students' revision quality was to determine whether students made successful revisions of their drafts that directly resulted from the peer comments. One example of an effective revision is shown below:

Original draft: My first impression of Tsing Hua University is that: What such a huge campus it is.
Revised draft: "What a huge campus it is." I shouted with my astonishment when I set foot on Tsing Hua University.

In this example, the meaning is better expressed in the revised draft, and the problem in the draft (marked with underlining) was corrected. Below is one example of an ineffective revision:

Original draft: Nevertheless, to my even bigger surprise, I passed the exam.
Revised draft: To my even bigger surprise, I passed the exam, though, to be frankly, it was not a good first impression.

The sentence added by the student writer failed to enhance the meaning of the original draft; instead, it confused the reader, as it was difficult to understand what the student meant by adding his "impression".

4. Results

Findings about the training effects on computer-mediated peer review as designed in this project and the students' perceptions of CMPR are presented below.

4.1. The quality of peer comments

To answer sub-question (1a) of the first research question, students' peer comments on their first and fourth writing assignments, written before and after they were trained in peer review (while working on their second and third assignments), were analyzed and compared. Peer comments from the thirteen students amounted to 116 units (sentences), leading to 166 units of revision, for the first writing assignment. As shown in Table 1, 33 of the peer comments were categorized as evaluations, 25 as suggestions, 4 as clarifications, and 54 as chatting. Chatting comments made up 46.6% of all peer comments for the first writing assignment, nearly half of the total. The evaluation category comprised both complimentary and critical comments. Complimentary evaluative comments and chatting were classified as non-revision-oriented; critical evaluative comments, suggestions, and clarifications were grouped as revision-oriented (42.2%). Non-revision-oriented comments comprised over half of the peer comments exchanged between students (57.8%).

Table 1
Analysis of peer comments for the first writing assignment.

Function              Suggestion   Clarification   Evaluation   Chatting
  Number              25           4               33           54
  Percentage          21.6%        3.4%            28.4%        46.6%

Focus of text area    Suggestion   Clarification   Evaluation   Percentage
  Global              22           3               19           71%
  Local               3            1               14           29%

Revision nature       Revision-oriented (a)        Non-revision-oriented (b)
  Number              49                           67
  Percentage          42.2%                        57.8%

(a) Suggestions, clarifications, and critical evaluations. (b) Chatting and complimentary evaluations.
Students' peer comment areas were also classified as concerning global or local writing problems, as seen in Table 1. Global comments dominate students' evaluation, suggestion, and clarification comments, comprising 71% of the three types combined. The gap between global and local comments is narrower only in the category of evaluation comments. A possible explanation is that the local aspects of writing are noticeably correctable, so evaluation comments sufficed to point out possible revision directions. In contrast, the global aspects of writing might need clearer explanations and more sophisticated rhetorical knowledge, which were available in suggestion comments. Overall, students' comments demonstrate a good range of language functions, showing that they tried to communicate problems concerning the global and local areas of their peers' texts.

After receiving training on computer-mediated peer review, the students produced 64 units/sentences of peer comments on their fourth assignment. The two major categories were evaluations (34) and suggestions (22). As shown in Table 2, although the students exchanged fewer peer comments on the fourth writing assignment, the proportion of chatting comments dropped markedly, from 46.6% in the first assignment to 9.4%. Excluding chatting comments, the number of comments related to writing per se was 62 for the first writing assignment and 58 for the fourth, so there was little difference in the number of comments students exchanged. Evaluative comments in particular increased from 28.4% to 53.1%, and the percentage of revision-oriented comments increased from 42.2% to 68.7%. As in the first writing assignment, students mostly commented on the global aspects of their peers' writing, with global comments making up 84.5% of evaluations, suggestions, and clarifications combined.

Table 2
Comparisons of peer comments in the first and fourth assignments.

Function                 Suggestion   Clarification   Evaluation   Chatting
  First assignment (%)   21.6         3.4             28.4         46.6
  Fourth assignment (%)  34.4         3.1             53.1         9.4

Revision nature          Revision-oriented (a)        Non-revision-oriented (b)
  First assignment (%)   42.2                         57.8
  Fourth assignment (%)  68.7                         31.3

Focus of text area       Global   Local
  First assignment (%)   71       29
  Fourth assignment (%)  84.5     15.5

(a) Suggestions, clarifications, and critical evaluations. (b) Chatting and complimentary evaluations.

4.2. The adoption rate of peer comments

Sub-question (1b) asks whether the peer comments were indeed adopted by their receivers. The researcher examined all revision-oriented comments carefully to see whether they were adopted in students' revisions (the second versions of their essays). Below is an example of an adopted peer comment:

Original draft: As far as I concerned, this is not a bed way.
Peer comment: First, although you used vocabularies well, you made very little mistakes of spelling. Like "As far as I concerned, this is not a bed way". The word "bad" was wrong.
Revised draft: As far as I'm concerned, this is not a bad way.

In the above example, the commentator pointed out the student writer's spelling mistake, and the student writer corrected it accordingly. Interestingly, although the commentator did not point out the other mistake in the sentence ("As far as I concerned" for "As far as I'm concerned"), with the sentence being highlighted, the student writer noticed it and corrected it on his own. Still, the student writer's revision of "I'm" was not regarded as a product of the peer comment, since their connection is merely speculative.[2]
[2] In some cases, when students take the time to read through their own writing carefully enough, they can spot problems and improve their essays, if they have the knowledge about what constitutes good writing.
Table 3
Adoption of peer comments for the first and fourth writing assignments.

Comments versus revisions                        First assignment   Fourth assignment
Revision-oriented comments                       49 (42.2%)         44 (68.8%)
Adoption rate (comments adopted /
  total revision-oriented comments)              24 (48.9%)         21 (47.7%)
Successful revisions (adopted comments that
  led to successful revisions / total adopted)   20 (83.3%)         18 (85.7%)

Table 4
Revision effectiveness and text areas for the first and fourth writing assignments.

Revision              Effective revision   Ineffective revision
First assignment      112.5 (67.8%)        53.5 (32.2%)
Fourth assignment     61.5 (91.8%)         5.5 (8.2%)

Text area             Global               Local
First assignment      120 (72.3%)          46 (27.7%)
Fourth assignment     44 (65.7%)           23 (34.3%)
As shown in Table 3, the students made 49 revision-oriented comments for their peers in the first writing assignment. Twenty-four of the 49 comments were used by the receivers, making the adoption rate 48.9%. For the fourth writing assignment, the adoption rate was 47.7%. Even though students were able to provide more revision-oriented comments after receiving training, the adoption rate did not rise but declined slightly.

Even though the adoption rate of peer comments was lower than 50%, a high percentage of the adopted comments led to successful revision. As shown in Table 3, 83.3% of the adopted peer comments resulted in successful revisions in the first writing cycle, and 85.7% in the fourth. It might be inferred from these findings that students made conscious choices about which peer comments to adopt: once they used a peer comment, they knew that the revision direction it provided would improve their drafts. They may have gained the ability to choose the kinds of suggestions effective for their revisions through the knowledge acquired from peer review and the writing textbook, and through the maturation of their English proficiency.

4.3. Students' revision quality

Clues to sub-question (1c) of the first research question can be found by comparing the analyses of students' revisions of their first and fourth assignments. Students made a total of 166 revisions for their first writing assignment, as shown in Table 4. Of the 166 revisions, 112.5 were considered effective by the researcher and the rater, making the rate of successful revision 67.8%. Despite the fact that over half of the peer comments were not revision-oriented, students were often successful in revising their drafts. Forty-six of their revisions focused on local areas (27.7%), while 120 focused on global areas (72.3%); the students thus made efforts to revise their drafts globally, even in places their peers did not mention in the feedback.

Students made a total of 67 revisions for their fourth writing assignment, 61.5 of which (91.8%) were considered effective: 23 were directed towards local areas (34.3%) and 44 towards global areas (65.7%). Although students made fewer revisions in their fourth writing assignment, the quality of their revisions was enhanced, with effective revisions moving from 67.8% to 91.8%. Interestingly, even though students commented more on the global features of their peers' writing (from 71% to 84.5%, as in Table 2), as in Min (2005), they seemed to pay slightly more attention to local errors when revising their fourth writing assignments (from 27.7% to 34.3%, as in Table 4). It is likely that the learners shifted toward the areas they knew how to modify; this means revision strategies may need to be explicitly taught with this group of learners.
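For clarity, the percentages in Tables 3 and 4 are simple ratios of the counts reported there. As a worked illustration, restating the first-assignment figures rather than introducing new data:

\[
\text{adoption rate} = \frac{\text{comments adopted}}{\text{revision-oriented comments}} = \frac{24}{49} \approx 48.9\%,
\qquad
\text{effective-revision rate} = \frac{\text{effective revisions}}{\text{total revisions}} = \frac{112.5}{166} \approx 67.8\%.
\]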
Table 5
Rating of questionnaire items on peer review and revision.

Item   Statement                                                             Average rating
8      The methods of reading and evaluating compositions taught in class
       made me more aware of the strengths and weaknesses of my own
       composition                                                           3.92
5      I like my classmates leaving comments on my blog                      3.86
4      I like my classmates providing suggestions for revisions on my blog   3.86
3      I like my classmates reading my compositions                          3.79
7      The methods of providing peer comments taught in class enabled me
       to give my classmates better writing suggestions                      3.64
9      The suggestions given by my classmates are helpful for revising the
       composition                                                           3.36
1      I like providing suggestions on my classmates' compositions           3.29
2      I like leaving comments on classmates' blogs                          3.29
10     I like revising my compositions                                       3.29
6      I like doing peer review with my classmates/partner                   3.28
       Average                                                               3.56
4.4. Students' perceptions

To answer the second research question, responses to the evaluation questionnaire were analyzed. As shown in Table 5, the average rating of items concerning students' perceptions of CMPR is 3.56, slightly above neutral (3.00 means "no opinion", 4.00 "agree", and 5.00 "strongly agree"). The students recognized the value of reading and evaluating peer essays in class (3.92, the highest among the items). Compared with passing on comments to peers (3.29 for items 1 and 2), they liked reading peer comments more (3.86 for items 5 and 4). They were unsure whether the suggestions were helpful (item 9, 3.36), and they did not seem to enjoy revising their essays (item 10, 3.29). The questionnaire results might explain the low adoption rate of the peer comments.

5. Discussion

In this case study, four writing assignments were designed, the first without any intervention of peer review training. After the treatment of peer review training in the second and third writing assignments, students' revised drafts and peer comments on the fourth assignment were compared with those on the first. It is evident that students' peer comments became more revision-oriented (from 42.2% to 68.7%, as in Table 2) and that their revision success also increased (from 67.8% to 91.8%, as in Table 4). Nevertheless, students' adoption rate of peer comments was not high: 48.9% for the first writing assignment and 47.7% for the fourth. It seems that peer review training did not make the students more willing to adopt the peer comments they received when improving their drafts, even with the enhanced quality of the comments. Compared with Min's (2006) study, where an extremely high adoption rate of 90% was found after her EFL participants received peer review training, the adoption rate of close to 50% in this study was not satisfactory. Still, our adoption rate is superior to those of Connor and Asenavage (1994) and Paulus (1999), in which 5% and 13.9% of peer comments, respectively, were found to influence students' subsequent revisions.

One factor which might contribute to the low adoption rate of peer comments is that the students in this case only chose the comments that would be helpful for revision, or the comments they had the revision strategies to work on. This might be true, as over 80% of the peer comments adopted by students led to revision success. Being new to peer review tasks, students' revision-oriented comments might not always have been beneficial for their peers' revisions, or informative enough for the receivers to know how to improve their drafts. If students continue to receive training on both English writing and peer review skills, peer comments may be valued more by the receivers, who may adopt them and know better how to improve their writing. In addition, the age of the participants may contribute to the varying adoption rates of peer comments: Paulus' (1999) participants were pre-freshman students, the present study's were freshmen, and Min's (2005) were sophomores. There seems to be an increase in adoption rate (from 5% to 47% to 90%) as the participating students
grow older. With the possible increase in their English proficiency, they become more familiar with peer review tasks and tend to make better use of peer feedback for improving their writing.

The category of "chatting" occupied 46.6% of the comments in the first assignment but dropped to 9.4% in the fourth. It seems that, initially, students took blogging as a social activity of exchanging comments with their friends and classmates in the virtual space and did not pay full attention to the task demands of the peer review activity as instructed.[3] Being new to peer review tasks, they might have been apprehensive that directly pointing out their classmates' errors would be face-threatening. Students may have viewed peer review activities as a medium to socialize and construct relationships with classmates rather than to evaluate their classmates' essays. But as training was given, this group could focus more on the target task, giving feedback to improve their classmates' essays in the later assignments.

[3] Weblogs became very popular among youngsters and are often recognized mainly for social networking through personal publication. It is likely either that the young participants in the study took blog-enhanced peer review as a similar activity, or that they simply did not like the change of making a social activity into a cognitive task for educational purposes.

Students expressed mixed feelings about the computer-mediated peer review activities through the evaluation questionnaire, hinting at limitations for EFL learners at this learning stage. First, they were inexperienced as commentators and did not believe themselves or their classmates to be qualified to give solid comments on their peers' compositions. The comments given by the instructor and the teaching assistant carried more weight than peer comments did, similar to what Paulus (1999) found in her study. Moreover, even though Liu and Sadler (2003) argued that students would find it less face-threatening to give corrective feedback in the setting of CMPR than in face-to-face interactions, some students in our study still found it embarrassing to point out their peers' mistakes. Carson and Nelson (1996) found that Chinese students did not like correcting their peers and were likely to only compliment them; this, however, was not found in our data, where evaluative comments from the participants' review process increased from 28.4% to 53.1%.

Blogs can be a suitable tool for increasing interaction among classmates and with the instructor. True to the claim that blogs are useful for in-class collaboration (Huffaker, 2005; Patterson, 2006), the students in this study enjoyed chatting with their classmates and the instructor. They also looked forward to the prospect of outside readers, as claimed by Campbell (2003) and Tan et al. (2005). It can be inferred that the "public" quality of online writing (Penrod, 2005) truly motivated them to sharpen their writing skills so as to write essays of better quality; the opportunity to write for an unknown audience sparked students' interest.

6. Conclusion

The current study aimed to find out whether training would have effects on blog-based peer review activities. Comparing before-training and after-training writing, we found that the students made more revision-oriented peer comments and were less likely to drift away from the peer review task with chatting. With the enhanced quality of peer comments, students' revision success also increased. Based on the online discussion data and students' text changes, peer review training as implemented in our blogging environment yielded the expected outcomes for writing instruction. However, less than 50% of the peer comments were adopted by students.
This could be due to the younger age and lower English writing proficiency of the participants, and to the different nature of blog-mediated peer review compared with traditional peer review. It was also found that students took pleasure in composing on blogs and exchanging chatting comments with their peers and the instructor, and that they looked forward to more outside visitors reading their blogs. Some of them did not trust the effectiveness of peer review activities in helping them revise their drafts, but when they did adopt peer feedback, the act mostly led to successful revision and better writing quality. All in all, technology use with peer review training as applied in this study seems to enhance writing instruction to some extent. The quality of training plays a crucial role in making online peer review yield the outcomes writing teachers expect, as is perhaps true of many activities if a certain degree of success is to be guaranteed.

Although the aforementioned evidence supports the efficacy of blog-enhanced instruction, the study has several limitations. First, the small number of participants reduces the possibility of generalizing the findings to
other EFL contexts. Additionally, as the topics of the first and fourth writing assignments differed, the difficulty of writing the two assignments may have varied, which in turn might have intervened in the relationship between peer comments and revisions. With these limitations stated, future studies can improve on or continue the present one. For instance, how online peer review can be implemented outside of class time can be explored as a way to foster learner autonomy and as a more authentic manner of networking among youngsters.

Two pedagogical implications may be drawn. First, the study suggests that more teacher encouragement is needed for students to fulfill the task design in their online environment, as Ware and O'Dowd (2008) conclude. Furthermore, when more training is involved, students can be taught rhetorical strategies and the notion of reviewer stances, as suggested by Min (2005); they may then be able to produce critical, specific, but friendly comments in a more sophisticated way. As with the implementation of any innovation in language classrooms, training as part of teaching is recommended. When technology is incorporated into the peer review process, rhetorical skills and computer literacy are crucial areas for language teachers to invest class time in. It is advisable to bring the students to a computer laboratory for step-by-step demonstration and practice. Given the limited English writing level of the EFL students in our study, different rhetorical patterns for various writing topics and points to remember in examining peer essays are important scaffolds that language teachers may wish to provide to bring about successful computer-mediated peer review.

Acknowledgements

We acknowledge the funding support from Project NSC 96-2411-H-007-033-MY3 and the help of the participants. Thanks also go to the anonymous reviewers for their comments on this paper.

Appendix A. Rubric for classifying peer comments

Examples are taken directly from students' comments in the study; the rubric crosses comment type with text area (global/local) and revision nature.

Evaluation
  Global, revision-oriented: "Well, I think your article is too short to show your first impression in Tsing Hua"
  Global, non-revision-oriented: "Frankly speaking, I feel your essay is easy to understand"
  Local, revision-oriented: "By the way, I think there is a spelling mistake in this composition"
  Local, non-revision-oriented: "First, there are so many unfamiliar words. . . to some extent, it's cool"

Clarification
  Global, revision-oriented: "Are you talking about the future after Tsing Hua?"
  Local, revision-oriented: "I don't understand the first sentence in the second paragraph; what's the meaning of 'It'?"
  (No examples for the non-revision-oriented categories)

Suggestion
  Global, revision-oriented: "Then, you can describe more about Tsing Hua, like how it impressed you, or how the environment in Tsing Hua is? And so on"
  Local, revision-oriented: "Third, the word 'reduce' in your second paragraph should be 'reducing'"
  (No examples for the non-revision-oriented categories)

Chatting
  Global, non-revision-oriented: "I was surprised at its huge campus and comparatively small gates as well"
  (No examples for the other categories)
References

Berg, E., 1999. The effects of trained peer response on ESL students' revision types and writing quality. Journal of Second Language Writing 8 (3), 215-241.
Bloch, J., 2007. Abdullah's blogging: a generation 1.5 student enters the blogosphere. Language Learning and Technology 11 (2), 128-141.
Campbell, A.P., 2003. Weblogs for use with ESL classes. The Internet TESL Journal 9 (2).
Carson, J., Nelson, G., 1996. Chinese students' perceptions of ESL peer response group interaction. Journal of Second Language Writing 5 (1), 1-19.
Ciekanski, M., Chanier, T., 2008. Developing online multimodal verbal communication to enhance the writing process in an audio-graphic conferencing environment. ReCALL 20 (2), 162-182.
Connor, U., Asenavage, K., 1994. Peer response groups in ESL writing classes: how much impact on revision? Journal of Second Language Writing 3 (2), 257-276.
Ferris, D.R., 2003. Response to Student Writing. Lawrence Erlbaum Associates, Mahwah, NJ.
Ho, M.-C., Savignon, S.J., 2007. Face-to-face and computer-mediated peer review in EFL writing. CALICO Journal 24 (2), 269-290.
Huffaker, D., 2005. The educated blogger: using weblogs to promote literacy in the classroom. AACE Journal 13 (2), 91-98.
Hyland, K., 2003. Second Language Writing. Cambridge University Press, Cambridge.
Leki, I., 1990. Coaching from the margins: issues in written response. In: Kroll, B. (Ed.), Second Language Writing: Insights from the Language Classroom. Cambridge University Press, Cambridge, pp. 57-68.
Liu, J., Hansen, J.G., 2002. Peer Response in Second Language Classroom. University of Michigan Press, Ann Arbor, MI.
Liu, J., Sadler, R.W., 2003. The effects and affect of peer review in electronic versus traditional modes on L2 writing. Journal of English for Academic Purposes 2, 193-227.
Lockhart, C., Ng, P., 1995. Analyzing talk in ESL peer response groups: stances, functions, and content. Language Learning 45 (4), 605-655.
Mangelsdorf, K., Schlumberger, A., 1992. ESL student response stances in a peer-review task. Journal of Second Language Writing 1 (3), 235-254.
Mendonça, C.O., Johnson, K.E., 1994. Peer review negotiations: revision activities in ESL writing instruction. TESOL Quarterly 28 (4), 745-769.
Min, H.T., 2005. Training students to become successful peer reviewers. System 33, 293-308.
Min, H.T., 2006. The effects of trained peer review on EFL students' revision types and writing quality. Journal of Second Language Writing 15 (2), 118-141.
Patterson, N., 2006. Computers and writing: the research says yes! Voices from the Middle 13 (4), 64-68.
Paulus, T., 1999. The effect of peer and teacher feedback on student writing. Journal of Second Language Writing 8 (3), 265-289.
Penrod, D., 2005. Composition in Convergence: The Impact of New Media on Writing Assessment (electronic resource). Lawrence Erlbaum, Mahwah, NJ.
Schultz, J., 1999. Computers and collaborative writing in the foreign language curriculum. In: Warschauer, M., Kern, R. (Eds.), Network-based Language Teaching: Concepts and Practice. Cambridge University Press, New York, pp. 121-150.
Stanley, J., 1992. Coaching student writers to be effective peer evaluators. Journal of Second Language Writing 1, 217-233.
Tan, T.Y., Ow, E.G.J., Ho, P.Y.J.M., 2005. Weblogs in education. IT Literature Review.
Tsui, A.B.M., Ng, M., 2000. Do secondary L2 writers benefit from peer comments? Journal of Second Language Writing 9 (2), 147-170.
Wang, H.-C., 2007. Using weblogs as peer review platform in an EFL writing class. In: Proceedings of the 24th Conference on English Teaching and Learning. Taiwan ELT Publishing Co., Ltd., Taipei, pp. 400-413.
Ware, P.D., O'Dowd, R., 2008. Peer feedback on language form in telecollaboration. Language Learning and Technology 12 (1), 43-63.
Warschauer, M., Ware, P., 2006. Automated writing evaluation: defining the classroom research agenda. Language Teaching Research 10 (1), 157-180.
Zhu, W., 1995. Effects of training for peer response on students' comments and interaction. Written Communication 12 (4), 492-528.
Zhu, W., 2001. Interaction and feedback in mixed peer response groups. Journal of Second Language Writing 10, 251-276.