Accepted Manuscript

Teacher assessment literacy: Surveying knowledge, conceptions and practices of classroom-based writing assessment in Hong Kong

Ricky Lam

PII: S0346-251X(18)30374-9
DOI: https://doi.org/10.1016/j.system.2019.01.006
Reference: SYS 2048
To appear in: System

Received Date: 17 May 2018
Revised Date: 9 December 2018
Accepted Date: 21 January 2019
Title Page

Title: Teacher Assessment Literacy: Surveying Knowledge, Conceptions and Practices of Classroom-Based Writing Assessment in Hong Kong

Author's Name: Dr. Ricky LAM
Author's Affiliation: Hong Kong Baptist University
Corresponding Author: Dr. Ricky LAM
Present Address: AAB 811, Department of Education Studies, Hong Kong Baptist University, Kowloon Tong, HKSAR, China
Phone: +852 3411 5788
Email: [email protected]
Highlights:
• Secondary teachers have fundamental knowledge of writing assessment.
• Secondary teachers are generally positive about attempting alternative assessments.
• Secondary teachers have a better understanding of AoL and AfL than AaL.
• Secondary teachers are advised to take up new roles as assessors of writing.
• Secondary teachers need regular updates of writing assessment knowledge and skills.
Declarations of interest: None
Funding: This paper was supported by a grant from SCOLAR’s Research and Development Projects 2018-19 (Ref. no.: 2018-09).
Teacher Assessment Literacy: Surveying Knowledge, Conceptions and Practices of Classroom-Based Writing Assessment in Hong Kong
Abstract

In this decade, assessment literacy has emerged as a significant research agenda. Yet, not much has been done to explore teacher assessment literacy in L2 writing. This study investigates what Hong Kong secondary school teachers know and think about, and how they practice, classroom-based writing assessment through a questionnaire, telephone interviews, and classroom observations. The study qualitatively examines the extent to which sixty-six respondents achieved teacher assessment literacy from their perspectives, and which aspects of teacher assessment literacy needed further consolidation and why. Self-reported findings show that most respondents had pertinent assessment knowledge and positive conceptions about alternative writing assessments. Despite this, observation data indicate that some respondents had a partial understanding of assessment of learning (AoL) and assessment for learning (AfL), but not assessment as learning (AaL). In fact, when attempting AaL, the respondents could merely mimic its 'procedures' rather than internalise its 'essence'. Implications are discussed concerning how to develop teacher assessment literacy in L2 writing classrooms.

Keywords: Teacher assessment literacy; classroom-based assessment; L2 writing; writing teacher education
1. Introduction

Almost thirty years ago, Stiggins (1991, p. 539) claimed that 'the time has come to promote assessment literacy for all'. Since then, there has been a burgeoning trend to investigate what assessment literacy means, why it emerges, and how it is used as a benchmark for teacher education programmes in general education and language education (Hamp-Lyons, 2016). Assessment literacy is defined as stakeholders' abilities to use assessment to fulfill both learning and grading purposes (Taylor, 2009). More specifically, teacher assessment literacy (TAL) is about teachers' mastery of knowledge, skills, and principles in planning and developing well-constructed assessment tasks, from which useful assessment data are interpreted and utilised to inform pedagogy and learning within a larger sociocultural background (cf. Fulcher, 2012). To facilitate our discussion, TAL is used here to refer to teacher knowledge, conceptions and implementation of classroom-based assessment in L2 writing contexts.
In studies of assessment literacy, there is a body of work focusing on how professional training equips language teachers with knowledge and skills to conduct classroom-based assessment. Brown and Bailey (2008) discussed the usefulness of language testing courses. Author surveyed the role of assessment training provided by all teacher education programmes in Hong Kong. Further, there are studies investigating teacher assessment practices in various international contexts. Qian (2014) found that English teachers lacked marking skills when evaluating student speaking in a school-based assessment in Hong Kong. DeLuca and Klinger (2010) reported that Canadian teacher candidates were primarily familiar with conducting summative assessment, but not formative assessment. In a large-scale European study, Vogt and Tsagari (2014) revealed that most teacher respondents received inadequate assessment training and counted on on-the-job experiences. These studies inform us of how training supports the development of assessment literacy and what practices teachers adopted to evaluate students. Yet, we still do not know whether secondary school teachers have the competence necessary to deal with standardised testing and classroom-based assessment; whether they are able to experiment with various assessment practices such as AfL [1] and AaL [2]; or whether they know how to make their teaching benefit from alternative assessments. These issues are addressed in this study.

The paper explores how secondary teachers in Hong Kong understand, conceive and implement classroom-based writing assessment via a questionnaire, telephone interviews, and classroom observations. The rationale behind this study is threefold. First, because of the rise of the standards movement, ministries of education in different contexts are likely to impose external benchmarks via standardised testing to evaluate student achievement and school performance (Klenowski & Wyatt-Smith, 2014). The knowledge to prepare students for high-stakes testing and to interpret the results of standardised testing is pedagogically relevant to TAL. Second, as the AfL movement has become popular (Wiliam, 2011), teacher conceptions about how to try out AfL/AaL practices and then to align them with AoL [3] would be of importance, especially in Hong Kong secondary writing classrooms, where an exam-driven culture is predominant (Lee, 2017). Third, as argued by Davison and Leung (2009), more attention has to be paid to how teachers can utilise classroom-based assessment practices to enhance teaching and learning. In view of these justifications, there is a legitimate need to examine TAL, particularly when the field of assessment literacy in L2 writing is still progressing slowly. In the following, a literature review of TAL and a method section are presented. Then, the findings of the study are reported and discussed. The paper ends with implications and recommendations which suggest the way forward for TAL scholarship.

[1] Assessment for learning (AfL) refers to using assessment information (e.g., self, peer, or teacher feedback) to support learning.
[2] Assessment as learning (AaL) is typically considered a subset of AfL, emphasising learner capacity to review and improve learning metacognitively via reflection and self-assessment.
[3] Assessment of learning (AoL) is about summarising and judging student learning near the end of a teaching unit or a semester.

2. Literature Review

The review of literature consists of four parts: knowledge base, teacher conceptions, assessment practices, and TAL in Hong Kong.
2.1 Knowledge Base

The knowledge base is considered the foundation of TAL. It is broadly defined as the assessment knowledge needed to prepare students for standardised testing or to administer classroom-based assessment. More specifically, it includes assessment knowledge of designing, implementing, grading, and providing feedback for improving student learning. Over the years, there have been numerous studies investigating how to enrich teacher assessment knowledge via coursework, professional development events, on-the-job training and self-study via textbooks (Harding & Kremmel, 2016). Despite a call for more assessment-related training, most teachers remain underprepared to perform classroom-based assessment confidently and professionally (DeLuca & Johnson, 2017). There is research on how assessment textbook trends are catching up with practitioners' needs for assessment knowledge (Brown & Bailey, 2008). There is also a body of scholarship investigating how university-based coursework can equip pre- and in-service teachers with up-to-date assessment knowledge (DeLuca, Chavez, Bellara & Cao, 2013). Regardless of these textbook trends and coursework elements, teachers have found this knowledge base somewhat theoretical and pedagogically non-relatable to everyday classroom assessment practices (Popham, 2009; Yan, Zhang, & Fan, 2018). Additionally, the above knowledge base is mostly decontextualised, indicating that teachers usually learn about pertinent assessment knowledge with a cookie-cutter approach (Leung, 2014).

In L2 writing, the notion of TAL enjoys only a low profile, given that many teachers lack the confidence and assessment theories needed to evaluate student writing proficiently (Crusan, Plakans & Gebril, 2016). Scholars have argued that language teacher education programmes only include a generic assessment course, which is insufficient to make writing teachers assessment-capable, namely knowing how to generate revisable written feedback and how to utilise feedback to improve writing pedagogy (Lee, 2016). To counteract inadequate assessment training, EFL school teachers in Europe have acquired language assessment knowledge on the job and used instructional materials as assessment tools (Vogt & Tsagari, 2014). In Lee, Mak and Burns' (2016) study, despite initial professional development input (one form of knowledge base), the two teachers encountered setbacks when introducing a focused approach to written feedback and attempting peer review in Grade 9 classrooms.
2.2 Teacher Conceptions

Teacher conceptions of assessment are viewed as an internal guiding framework of how teachers perceive the purposes and uses of assessment relating to their beliefs. Xu and Brown (2016) identified that teacher conceptions of assessment had cognitive and affective dimensions. The former denotes teacher belief systems when evaluating student performance, whereas the latter denotes teacher emotion towards assessment. Studies about the cognitive aspect indicate that teachers tend to be more skillful and confident in performing AoL than AfL/AaL, because they are chiefly influenced by the psychometric paradigm of assessment, which emphasises fairness, reliability and standardisation in scoring (Brookhart, 2011; DeLuca & Klinger, 2010). These studies have implied that although there is a shift from the psychometric to the interpretative assessment paradigm, teachers still value standardised testing much more than formative classroom-based assessment. Such teacher mindsets might be due to external accountability reasons and a wider sociocultural norm including an exam-driven culture (Lee & Coniam, 2013). Teacher conceptions are usually shaped by how they were evaluated as learners at school, which refers to 'testing as you were tested' (Vogt & Tsagari, 2014, p. 391). This conception is likely to determine how teachers evaluate student writing.

Affectively, teachers are inclined to consider assessment negative, since it oftentimes links to mastery of technical skills (e.g., scoring methods), involvement in high-stakes decision-making processes, and excessive focus on summative assessment. In Qian's (2014) study, secondary teachers of English found it professionally demanding to assign marks in the School-Based Assessment in Hong Kong. The teacher participants reported that despite training, they lacked appropriate scoring skills to assess speaking competency, so they did not particularly favour the assessment. Because of the hierarchical power relation involved, Xu and Liu (2009) discerned that a teacher participant, Betty, had to lower the participation grades for her students. When making assessment decisions, Betty felt it unethical to lower the grades because of her superior's directive. Similarly, Gu (2014) argued that the only teacher participant, Shelley, was painfully torn between reform policies and actual learning needs. In reality, Shelley's assessment practices were dictated by the public examination content and guidelines, since she dared not sacrifice student exam results at the expense of trying out innovative pedagogy, namely AfL.

2.3 Assessment Practices

The other major component of TAL is assessment practices, which refer to what and how innovative writing assessments are carried out in teachers' work contexts. These applications are interrelated with teachers' knowledge base and their belief systems. In Lee's (2013) study, four teachers underwent an identity transformation, namely from language teachers to change agents. One year after receiving writing teacher education, the teacher informants became empowered to initiate innovation in writing instruction and assessment, including genre-based pedagogy, process writing, blogging, and utilising self-designed rubrics as feedback. The study found that applying innovative assessment practices involved negotiation of identities via a consolidated teachers' knowledge base (training) and change in assessment conceptions (student learning benefits). In Hamp-Lyons's (2006) study, the instructor kept on providing the student participant, Esing, with non-revisable feedback such as indirect questions, and failed to inform Esing of how to further improve the works-in-progress. The finding indicated that, despite the instructor's continued pedagogical support, she should explicitly assume her role as an assessor when giving feedback, emphasising the strengths and weaknesses of Esing's writings. The crux of the matter was that the instructor was supposed to learn how to strategically juggle her role as a teacher, an assessor or a language editor during different stages of the composing process. In New Zealand, Hawe and Dixon (2014) reported that the teacher, Audrey, unsuccessfully developed Grade 5 pupils' evaluative and productive writing skills when she attempted peer review and self-monitoring. Audrey's problem was that she did not change her assessment practices when introducing a student-centred approach to writing assessment. She seldom made tacit assessment requirements explicit to her pupils. Hyland (1998) reported that two university instructors and six students had differing expectations of written feedback on assignments. The findings of her study implied that teachers and students need a dialogue to negotiate which type of feedback is best for improving textual quality. From the above studies, we learn that when introducing innovative assessment approaches, teachers need to make adjustments in their beliefs and practices in order to fulfill student needs and institutional expectations.
2.4 TAL in Hong Kong

This part outlines TAL in Hong Kong by highlighting the writing assessment environment. In Hong Kong, teachers formally evaluate student writing from Grade 1 onwards. In the primary English curriculum, writing assessment includes sentence completion, seen/unseen dictations, and/or paragraph writing. When students study in secondary schools, they compose book reports, journals, full-length essays and project works as part of summative assessment. Among all these, full-length compositions are the most common form of internal and external writing assessment. Further, teacher learning of administering classroom-based writing assessment is somewhat rare as opposed to how teachers learn to prepare students for standardised writing assessments (Author). In secondary writing classrooms, teachers typically equip students with the lexico-grammatical features, schematic structures, and registers of the most-tested genres appearing in standardised writing assessments, namely argumentation. Other than this, wider applications of alternative writing assessments are fairly restricted, although the Education Bureau has promulgated these assessments for over two decades. Against this background, writing assessment in Hong Kong is predominantly summative and exam-focused, whereas teachers and students pay disproportionate attention to writing products rather than composing processes.

Most of the above studies investigated only one aspect of TAL. Thus far, no study except Crusan et al.'s (2016) has looked into all three aspects of TAL, including the knowledge base, conceptions, and assessment practices of secondary school teachers. Besides, a majority of TAL studies were conducted either in higher education or in general education, identifying whether the participants are assessment-capable or not. To fill these gaps, I examine whether Hong Kong secondary school teachers are assessment-literate enough to conduct classroom-based assessments; whether they are able to try out alternative writing assessments; or whether they need further professional training in aligning teaching and assessment of writing. Because of this, I address the following questions in this study.

1. What are Hong Kong secondary teachers' levels of TAL according to their perspectives?
2. Which aspect(s) of TAL do the teacher respondents need further enhancement of and why?

(Insert Figure 1 about here)
3. Method

3.1 Conceptual Framework

The conceptual framework of this paper builds upon a six-tier model, depicting the major components of TAL. The framework was adapted from Xu and Brown's (2016) recent scoping review paper, which takes stock of assessment literacy scholarship to formulate a conceptual model of TAL in practice. In the model, there are six major components: (1) the knowledge base; (2) teacher conceptions of assessment; (3) institutional and sociocultural contexts; (4) teacher assessment literacy in practice (the core component); (5) teacher learning; and (6) teacher role as assessors (2016, p. 155). The reason for adapting Xu and Brown's model in this paper is that their model is tried-and-tested and fully encapsulates the notion of TAL in the literature. Among these components, (1) the knowledge base, (2) teacher conceptions of assessment and (4) teacher assessment literacy in practice are used in our adapted framework for the current paper (see Figure 1). The selection of these tripartite components is aligned with the aim of this paper, which is to investigate secondary school teachers' knowledge, conceptions and practices when carrying out L2 writing assessment. In our framework, the knowledge base refers to L2 writing assessment knowledge and entails other aspects of knowledge, such as knowledge of feedback, grading, alternative assessments, and assessment purposes and ethics. Second, teacher conceptions of assessment are about the cognitive and affective dimensions of belief systems and how these conceptions are formulated. Third, TAL in practice refers to the what and how aspects of classroom-based assessment practices in context. It includes factors that support or inhibit certain writing assessment practices.
3.2 Research Design

The study adopted a mixed-methods design using a questionnaire, interviews and observations. The questionnaire made it possible to collect quantitative data, and the interviews and observations qualitative data. The mixed-methods design was meant to warrant the reliability of the study (i.e., triangulation of multiple data sources); provide in-depth perspectives towards TAL; and promote cohesive interpretations of data sets (Merriam, 1998). Because the study investigates practitioners' experiences in a specific L2 writing setting, the nature of this study is interpretative, naturalistic and exploratory. Besides, to develop a full understanding of teacher knowledge base, pedagogical beliefs and actual assessment practices, the use of a case study approach appears to be methodologically justifiable. One major advantage of using case study is that it enriches theoretical and pedagogical understanding of issues under investigation, namely the levels and aspects of TAL among Hong Kong secondary writing teachers (Tight, 2017). Another advantage is that the case study method can provide rich and insightful perspectives of how TAL is attained and developed in a specific L2 context. Although the case study may not necessarily guarantee broader generalisation of findings, this study still has its unique theoretical contributions by deepening the knowledge base of TAL scholarship with a focus on L2 writing.
3.3 Participants

The participants are English teachers serving in Hong Kong aided [4] and direct subsidy scheme [5] schools. Fifty-nine (89.4%) of the 66 participants speak Cantonese as their L1 and learned English as their L2 mainly for academic purposes. The remaining seven participants are native speakers of English. Over half of the participants (55.4%) obtained a bachelor's degree in various subject disciplines and a master's degree in education, applied linguistics or equivalents. All participants fulfilled the language teacher qualification via their Bachelor of Education (BEd) or Postgraduate Diploma of Education (PgDE). In terms of experience, nearly half of the participants (N = 32) had 11-15 or more than 15 years of service in English language teaching. Twenty-nine participants (43.9%) are teaching senior secondary students, namely Grades 10-12. The individual interviews included twelve teachers who are Chinese and were born and raised in Hong Kong. Their work experience ranged from 6 to 17 years. Of these twelve teachers, three of them, one female and two male local teachers who majored in English language education, agreed to take part in classroom observations. All three participants were informed of the purpose and focus of the observations. Prior to data collection, all teacher participants (involved in interviews and observations) signed informed consent forms. They were free to withdraw from the study if they felt their rights and safety were not protected.

[4] Aided schools refer to secondary schools fully funded by the Hong Kong government and monitored by the Education Bureau in terms of medium of instruction, student admission and resources for staffing.
[5] Direct subsidy scheme schools refer to secondary schools partially funded by the Hong Kong government with autonomy in the areas of curriculum planning, medium of instruction and student admission.
3.4 Data Collection and Analysis

Data collection procedures included three phases, namely a questionnaire, interviews and observations. To administer the questionnaire, the author targeted local secondary schools in Hong Kong. When selecting schools to join the project, the author adopted snowball sampling to recruit prospective respondents. While snowball sampling might generate bias, the author attempted to choose a diverse range of schools from his social connections in order to mitigate the said drawback. To be eligible, respondents needed to fulfill one criterion, which was being a full-time staff member in their English Department. Teaching assistants or part-time teachers were excluded. One hundred and twenty-seven print questionnaires were sent to nine local secondary schools in which the principals agreed to join the study. After two months, 66 questionnaires (52%) were completed and returned. After vetting, all 66 questionnaires were considered valid for data analysis.
The questionnaire had three parts, including (1) demographic information; (2) knowledge base, conceptions and practices of writing assessment; and (3) open-ended responses. The questionnaire was piloted with four serving teachers. Changes made to the questionnaire consisted of: alteration of unclear expressions, rewording of technical jargon, inclusion of more selected-response items, and provision of clear instructional guidelines. The final version of the questionnaire had 18 items, 6 of which were constructed-response items and the remainder selected-response ones. All rating items adopted a five-point Likert scale, with 1 being 'strongly disagree' and 5 being 'strongly agree'. The first part inquired about the respondent's gender, type of serving school, levels of teaching, student English abilities and academic qualifications. The second part comprised three sub-sections, modelled after the tripartite framework of TAL. The knowledge base section included questions on testing/assessment theories, understanding of and rationale behind AoL, AfL and AaL, and challenges when acquiring writing assessment theories. The conceptions section covered the respondents' opinions of the nature, purpose and effectiveness of (alternative) writing assessments. The practices section comprised frequency and types of writing assessment assigned to students, types of feedback provided and their effectiveness, and factors that facilitate or inhibit writing assessment practices. Owing to space limitations, the full questionnaire is not attached here, but readers can contact the author for access to it. All responses were analysed with descriptive statistics, where frequency counts, percentages, and means were reported in the findings. The author analysed all questionnaire items twice, using the SPSS software, and then verified the collated data twice together with a research assistant who had qualitative data analysis training in his master's degree study.
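For readers who want to see the shape of this analysis, a minimal sketch follows of how frequency counts, percentages and item means for five-point Likert items can be computed; the actual analysis was run in SPSS, and the item names and responses below are hypothetical examples rather than data from the study.

# Illustrative only: descriptive statistics for five-point Likert items
# (frequency counts, percentages and means), mirroring the kind of figures
# reported in the findings. Item names and responses are invented.
import pandas as pd

responses = pd.DataFrame({
    "knows_fairness": [4, 3, 5, 2, 4, 3],                  # 1 = strongly disagree ... 5 = strongly agree
    "knows_validity_reliability": [2, 3, 2, 1, 3, 2],
})

for item in responses.columns:
    counts = responses[item].value_counts().sort_index()    # frequency count per scale point
    percentages = (counts / len(responses) * 100).round(2)  # percentage per scale point
    mean_rating = round(responses[item].mean(), 2)           # item mean
    print(item, dict(counts), dict(percentages), "mean =", mean_rating)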
From the questionnaire, twelve respondents volunteered to be interviewed. The interviews were conducted by phone and lasted around 30 minutes. Cantonese was used during the interviews and transcripts were translated into English for analysis. The interview guide included seven questions about teacher perceptions towards the knowledge base, conceptions and assessment practices of their current writing assessments (see Appendix 1). As the interviews were semi-structured, they allowed the interviewees to express themselves openly and the author to ask for clarifications if necessary. Of the 12 interviewees, three accepted the author's invitation to be observed in the classroom. This was arranged twice with a focus on how each participant conducted classroom-based writing assessments. In the observed classes, the three teacher participants were primarily teaching students how to compose argumentation using multiple drafts, self- and peer assessment and the portfolio approach. The class sizes ranged from 15 to 38. The structure of the classes adopted both lecture and workshop modes. All observations were audio-taped. The author and research assistant took field notes to capture special assessment-related episodes following the classroom observation protocol (see Appendix 2).

An inductive approach to analysing both interview and observation transcripts was adopted. The author coded both interview and observation data sets three times according to the adapted framework of TAL: knowledge base, conceptions and assessment practices. Related themes were identified and then categorised to fit into the tripartite framework of TAL. Instances of the knowledge base were classified as previous and current professional development training, understanding of assessment purposes, and theories of AoL, AfL and AaL. Examples of conceptions were classified as both positive and negative conceptions of writing assessment, their usefulness, and the possibility of classroom applications. Episodes of assessment practices were regarded as prior, existing and projected assessment practices that the teacher informants utilised to facilitate the learning of writing.
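As a purely illustrative sketch of this coding step, the snippet below tallies hypothetical coded excerpts under the three framework categories; the category labels echo the framework, but the codes and theme names are invented and do not come from the study's transcripts.

# Illustrative only: tallying coded transcript excerpts by component of the
# tripartite TAL framework (knowledge base, conceptions, practices).
from collections import Counter

coded_excerpts = [
    ("knowledge base", "prior professional development training"),   # hypothetical codes
    ("knowledge base", "understanding of assessment purposes"),
    ("conceptions", "positive view of alternative assessments"),
    ("practices", "existing use of peer assessment"),
    ("practices", "projected use of portfolios"),
]

themes_by_category = {}
for category, theme in coded_excerpts:
    themes_by_category.setdefault(category, Counter())[theme] += 1   # count each theme within its category

for category, themes in themes_by_category.items():
    print(category, dict(themes))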
4. Results

4.1 Research Question 1: Perceived Levels of TAL

To answer the first research question, I describe the teacher respondents' perceived levels of TAL as per the questionnaire and interview data.

4.1.1 Knowledge base

Nearly half of the respondents had considerable knowledge about two aspects of writing assessment theories, namely fairness (44.82%, Mean = 3.26) and classroom-based assessment (43.1%, Mean = 3.31). Nonetheless, about one-third to almost half of the respondents expressed having minimal knowledge about theories of validity and reliability (36.21%, Mean = 2.81), test construction (35%, Mean = 2.87) and washback (47.06%, Mean = 2.45). During the interviews, four teachers maintained that they only had a vague understanding of what reliability and validity meant, and believed that these theories might not be directly related to their teaching lives. Despite this, one interviewee argued that teachers did need research-proven knowledge to perform classroom-based assessment properly; yet they felt intimidated by mastering technical jargon and statistical calculations (Respondent C31).

When asked about their knowledge of the rationale behind AoL, AfL and AaL and their uses, 82.81% of the respondents (N = 64) claimed that they had acquired the theoretical rationale of AoL, AfL and AaL. Of fifty-three respondents, 49 said they had acquired these assessment modes via in-house staff training, university-school partnership projects, Education Bureau workshops, and postgraduate diploma in education programmes. Interestingly, ten out of 12 interviewees replied that they forgot where and when they learnt AoL, AfL and AaL. In one free-response question, forty-six out of 53 respondents described AoL as the end-of-unit or end-of-term summative assessment which aimed to evaluate student writing, and 35 out of 51 respondents considered AfL as attempts to enhance teaching and learning of writing by fine-tuning curriculum or lesson planning. For AaL, only thirteen out of 50 respondents referred to learning opportunities where students reflected upon their writing by setting future learning targets with feedback. Fourteen out of 50 respondents vaguely referred to AaL as part of the learning process. A minority of respondents remained puzzled about AoL, AfL and AaL. For instance, they wrote: 'AoL is formative assessment'; 'AaL is collaborative writing'; and 'AaL enables teachers to give feedback'. Similarly, the interview data showed that six out of twelve interviewees understood the theoretical ideas of AoL and AfL, but not AaL, since they did not know how to coach students to perform self-reflection.

Concerning knowledge about assessing writing, 72.88% (Mean = 3.92) and 52.54% (Mean = 3.64) of the respondents wrote that they used AoL and AfL to a large extent respectively, whereas only 37.93% (Mean = 2.97) stated that they had attempted AaL to some extent. Asked about what challenges teachers might encounter when upgrading their assessment knowledge, 65.08% of the respondents said that professional training was both prescriptive and theoretical, and 50.79% of them reported that they had no time to take courses.
Also, more than a quarter of the respondents (30.16%, N = 19) stated that they lacked adequate knowledge to put theory into practice, while 28.57% (N = 18) of them found there were no related training courses about writing assessment. From the above, it appears that there is a lack of transfer from theory into practice because of the quality of professional training, time issues, and insufficient assessment knowledge. More importantly, teachers need regular updates of assessment knowledge, including the rationale of AaL.

4.1.2 Conceptions

Half of the questionnaire respondents considered classroom-based writing assessment high-stakes, exam-oriented, and standardised, although it is generally regarded as low-stakes and contextualised. Classroom-based writing assessments in Hong Kong are graded and normally simulate the format and content of high-stakes exams. Five respondents claimed that having one-off, timed writing assessment was legitimate, but students usually made many mistakes if they were asked to write under exam conditions. In the teacher interviews, eleven interviewees were actually in favour of alternative approaches to writing assessment, including conferences (one form of AfL in writing), although they were skeptical about student ability and motivation to internalise teacher verbal feedback (61.3%, N = 38). Further, when asked about giving classroom examples of how to enact AoL and AfL, eight interviewees were able to do so, which was indicative of having a clear conception of what AoL and AfL entailed.
Regarding the purposes of writing assessment, in one open-ended question, thirty-one out of 66 respondents considered writing assessment to equip students with techniques to organise their thoughts through grammar and vocabulary to complete assigned writing tasks. Nine respondents answered that writing assessment could serve its formative purpose, informing instructional practices and enabling students to reflect upon their writing performances, alongside its summative purpose. In another question, a majority of respondents (74.07%, N = 40) believed that the main purpose of writing assessment was to report learning, and only a minority of them (14.81%, N = 8) said it served the grading purpose. Interestingly, one teacher stated that 'the purpose of writing assessment is for learning (ideally); in reality, it summatively records the learning progress' (Respondent D42). On the other hand, another teacher emphasised that 'writing assessment serves both learning and grading purposes, even if not concurrently' (Respondent E44). As revealed in the interviews, nearly all of the informants expressed concerns about the impact of one-off, single-draft writing assessment on learning, and favoured alternative assessment such as self- and peer assessment (50%, N = 31, Mean = 3.19), conferences (61.3%, N = 38, Mean = 2.56) and assessment rubrics relating to goal-setting (50%, N = 31, Mean = 3.34). Among these alternative assessment practices, most respondents endorsed the principles of conferences, but considered them not feasible if they were applied in practice.

Additionally, 87.69% (N = 57) of teachers thought that writing assessment could help improve student writing, particularly in terms of accuracy (85.96%, N = 49), vocabulary enhancement (70.18%, N = 40) and text coherence (57.89%, N = 33). Despite this, the respondents disapproved of the current timed, single-draft approach to writing assessment (64.61%, N = 42) as it involved no collaboration opportunities (65.63%, N = 42) and minimal feedback provision (72.3%, N = 47). In fact, they pointed out that teachers should utilise criteria-referenced rubrics to make good qualities of writing explicit (75%, N = 48), and encourage reflection on student writing (85.9%, N = 55). Also, 61.54% (N = 40) of respondents said that they were aware of the limitations of one-off writing assessment, yet this assessment mode was considered administratively efficient for the evaluation of student writing. The only concern raised by the respondents was that exam conditions would bring about tremendous learning anxiety. In sum, a large majority of teacher participants had positive conceptions of classroom-based writing assessment, and some preferred attempting alternative assessments, because students are likely to have learning gains.
4.1.3 Practices

This section describes teacher assessment practices by means of types of writing assessment; ways of grading writing; effectiveness of assessment practices; and factors influencing writing assessment practices. A majority of respondents (90.77%, N = 59) adopted 'in-class essay writing', followed by 'take-home essay writing' (69.23%, N = 45). Some other teachers utilised 'seen/unseen dictations' (46.15%, N = 30) and 'group writing projects' (36.92%, N = 24). The least appealing type of writing assessment was 'portfolios' (12.31%). From the interviews, three teachers reported that portfolio assessment was somewhat idealistic, given that students might not know how to perform self-reflection, and that it took time and skills to set up portfolio systems. The most popular means of grading writing were said to comprise: assigning grades with reference to rating scales (61.54%, N = 40); giving grades or marks with commentary (55.38%, N = 36); and assigning grades with a rubric like content-language-organisation (43.08%, N = 28). One respondent put it this way: 'I give marks (relating to rubrics) plus detailed comments, together with debriefing exercises on common errors' (Respondent D41). Alongside written feedback, the interview data showed that seven teachers preferred using conferences to inform students of their current writing performances. Asked about the usefulness of their assessment practices, 92.06% (N = 58) of the respondents rated themselves 'effective'. The key reasons behind their ratings included: considerable improvement in student writing (N = 10); provision of detailed comments for revisions (N = 9); and use of pre-writing or post-writing activities to consolidate the learning of writing (N = 6). The use of post-writing activities was an instance of utilising assessment results to fine-tune teaching and support learning. Although the majority of respondents had positive responses, they reported that there were other issues which made their assessment practices less effective, including time and class size (N = 7; unavailable for consultations) and student fixation with grades instead of attending to teacher comments (N = 9; conscious of performance rather than learning).

The respondents mentioned both facilitating and inhibiting factors that might influence their classroom assessment practices. The facilitating factors consisted of: 'personal commitment and enthusiasm' (67.69%, N = 44); 'clear understanding of alternative assessment practices' (55.38%, N = 36); 'exchange of comments among colleagues' (53.85%, N = 35); and 'support from school management' (46.15%, N = 30). Another three respondents (B12, B16 and H60) regarded 'sufficient time' as one of the facilitating factors. The inhibiting factors entailed: 'administrative duties' (66.15%, N = 43); 'teaching duties' (61.54%, N = 40); 'marking' (44.62%, N = 29); and 'design of curriculum' (40%, N = 26). In addition, 27.69% of the respondents viewed restricted autonomy to try out alternative assessments as a barrier. From the interviews, more than half of the respondents (N = 8) revealed that they sporadically attempted process writing, conferencing, rubric-referenced marking, peer assessment, or post-writing consolidation activities. Nonetheless, because of limited instructional hours and autonomy, they found it exacting to sustain these practices. The findings thus suggest that teacher assessment practices are contextually mediated by meso-level constraining factors, including a lack of collaborative work culture, school support, autonomy, and space for professional development.
4.1.4 Summary of data

The questionnaire and interview data suggest that the teacher participants were considerably knowledgeable about the ideas of fairness and classroom-based assessment, but not the theories of validity and reliability. It also appears that they had a better conceptual understanding of the applications of AoL and AfL than of AaL. Regarding conceptions, the participants were mostly positive about alternative assessments and confident in using classroom-based assessment to enhance student writing performances. The selected participants said they attempted alternative assessments when evaluating writing; yet some wrote they could not sustain their assessment practices owing to a range of school-related factors. The answer to the first research question is therefore that the teachers perceived having moderately attained TAL and being still far away from attaining a full mastery of TAL in writing. The reason for this finding is that some participants had only a limited understanding of AfL and AaL, although they were positive about alternative approaches to writing assessment. Besides, some teachers' initial attempts to try out alternative assessments were hampered by numerous institutional constraints.

4.2 Research Question 2: Aspects of TAL Needing Enhancement

To address the second research question, I identified which aspects of TAL the teacher participants needed to enhance, using the questionnaire and observation data.
The data presented here were obtained from the open-ended responses of the questionnaire and two specific classroom episodes. The questionnaire data revealed that fourteen respondents said they would like to have more opportunities to try out self- and peer assessment. Nine respondents stated they needed further exposure to helping students compile portfolios for self-reflection. Another seven respondents wrote that they wished to master practical skills in coaching students to set goals and review writing development. Seven respondents expressed interest in acquiring the principles of process writing or peer review (Respondents B14, C25, D41, E45, E46, G50 and G54). Additionally, four respondents stated they wanted to learn more about the theory of AaL in writing (e.g., reflection, self-assessment, portfolios, and e-portfolios; Respondents B17, C29, D33 and H58), since AaL has been high on the educational agenda worldwide.
The classroom data indicated that the teacher participants ought to take up new roles when attempting AfL or AaL in the classroom. The first episode is a composition lesson delivered by Tom (pseudonym), who taught a class of Grade 11 students to write a speech. In the lesson, Tom adopted a presentation-practice-production approach when giving the lecture. He made use of text analysis to deconstruct the lexico-grammatical features and schematic structure of a piece of speech writing. During the practice stage, Tom used a Kahoot game to check student understanding of how to use persuasive language in speech writing. Since the lesson was part of an e-portfolio programme, students were expected to produce their first drafts, perform self-assessment, and submit the second drafts to Tom via the web-based e-portfolio system. Despite underscoring reflexivity, Tom did not encourage his students to self-reflect upon their writing in greater depth. Instead, he used a strong top-down approach to monitor student learning throughout the observed lesson. While Tom kept providing comprehensible input, the students remained passive in analysing the strengths and weaknesses of their writing due to a clear lack of agency. As shown in the following episode, Tom dominated the entire conversation, and even spoke on behalf of his students when a female student attempted to share her self-reflection.

• Tom asked a student to share her self-reflection. The student said she would give herself Level 2 for 'Language' (with Level 1 being lowest and Level 5 being highest). Then, Tom asked her why, but she did not respond immediately. Tom said, 'I think the reason why most of you could not get more than 3 is because you can't write more than - how many words? 400 words. That's why the content would suffer because of the number of word you wrote. So, that's why you can't get more than 3.'

The second episode describes how Willy (pseudonym) taught a class of Grade 11 students to practice self-reflection on a written genre – explanation.
The lesson was structured in this sequence: (1) introduction → (2) instruction on the use of a self-reflection form (content, language & organisation) → (3) peer assessment → (4) self-assessment → (5) student sharing of peer feedback → (6) consolidation. In step (2), Willy explicitly taught students specific metalanguage, so that they could perform self- and peer assessment more successfully. Regardless of the usefulness of Willy's input, it appears that some students remained unable to take peer feedback on board (e.g., Isaac), and provided peer feedback only according to the mark assigned by Willy (e.g., James). The following bullet points referred to step (2) of the observed lesson.

(Mary commenting on Isaac's writing)
• After Mary had given a positive comment ("good match with the title"), Willy told her to suggest advice on other sentences. Mary said Isaac needed to improve the tone. Then, Willy asked her to give evidence. Mary said the expression like some wisdom from some stories was too verbal.
• Willy asked Isaac, '… Why would you like to write it like this – here like some wisdom from some stories? She said it's a little bit too oral. It's like oral language. It's not formal enough. Why do you like to put it like this?'
• Isaac replied, 'I don't know' [Ss' laughter]. Then, Willy asked, 'would you accept the suggestion to make it more formal?' But Isaac seemed to have no idea.

The lesson went on:

(James)
• Willy continued to invite students to share their peer feedback, and finally, he nominated James.
• James said there were a lot of things that his classmate did well because of the marks ('Content': 7; 'Language': 7; 'Organisation': 7; Total: 21) [Ss' laughter]. Willy asked James to explain, but James was unable to give further elaboration.

In the above episode, Willy did not invite students to perform self-assessment before teacher and peer assessment. Had he done so, the students would have been freer from bias when self-assessing their works-in-progress. Also, Willy publicised the students' strengths and weaknesses without their consent. Under such conditions, one can expect that some underperforming students in the class might feel embarrassed by unsolicited criticisms, which might result in tensions and demotivation for learning. During the post-observation interview, Willy admitted that a large number of students felt bored with self- and peer assessment, because they found the activities counterproductive and time-consuming. More able students expressed concerns that peer assessment only made less-able classmates dependent rather than self-reflective.
The classroom data suggest that the two teachers did not fully understand how to conduct AaL owing to their inadequate knowledge base and assessment skills, which corroborated the findings reported in section 4.1.1 – a clear lack of understanding of AaL. This is because AaL, as part of AfL, remains a novelty to most of our study respondents, and putting it into practice calls for a partial revamp of curriculum, pedagogy and assessment methods, which requires additional language assessment training. Also, the data revealed that when attempting alternative assessments, the teacher participants lacked an awareness of their new roles as writing assessors who are expected to observe, evaluate, support, and monitor student learning progress regularly. It seems that neither teacher had an obvious change in mindset or practices when introducing AaL, namely from a teacher-centric to a learner-centric approach to L2 writing instruction.
4.3 Summary of findings

To sum up, the teacher participants perceived that they had fundamental knowledge about classroom-based assessment more than about the theories of reliability and validity. They admitted that they had a partial understanding of AoL and AfL, but not AaL. They were somewhat optimistic about the prospects of alternative assessments in writing, and attempted process writing, conferencing, portfolio assessment and rubric-referenced assessment, only restricted by certain school-related contextual factors, including administrative duties, time spent on marking, and preparation of instructional materials. When implementing AaL, the two teacher participants were not ready to take up their new role as writing assessors. It is also interesting to note that despite attempts to use post-writing activities, there was an obvious lack of utilising assessment results to inform instructional practices in the data sets.

5. Discussion

Given these research outcomes, the present study has revealed a conflicting phenomenon – an apparent discrepancy between the teachers' perceived levels of TAL and their professional training. The findings have shown that the teacher participants only had a basic perceived level of TAL in terms of knowledge base, conceptions and practices, although they already possessed solid work experience and relevant academic profiles.
These self-reported perceptions are in sharp contrast with the profiles of their professional training, given that Hong Kong teachers typically possess higher academic qualifications (e.g., a Master's degree), and have more pedagogical experience and training in language assessment than their counterparts in China, Canada, or even in other parts of the world such as Thailand, Poland, or Mexico (Cheng & Wang, 2007; Ruecker & Crusan, 2018). Since 2004, beginning teachers have been mandated to pass the Language Proficiency Assessment for Teachers and to obtain a qualification equivalent to the fulfillment of English subject teaching and subject knowledge (Coniam & Falvey, 2013). All the teacher participants attained both the language proficiency and subject training requirements. Nonetheless, Qian (2014) reported that after practicing the School-Based Assessment for almost five years, his teacher participants remained insecure about taking up their new roles as independent assessors. The gap between the teachers' perceived levels of TAL and their professional training is a real concern. It implies that pre-service and in-service teacher education programmes may lack a clear focus on language assessment training in general, or simply do not include a robust component of writing assessment in particular, due to the fact that teaching and assessing writing is not prioritised. The gap reported in this study is not found in other TAL studies. For instance, language teachers in Eastern Europe tended to have lower levels of TAL and chiefly counted on colleagues, assessment experience on the job, and testing materials in textbooks, because they had inadequate professional training (e.g., Vogt & Tsagari, 2014).
The findings also showed that when introducing alternative assessments such as self-reflection in writing, the teacher participants still adopted a teacher-centred rather than a student-centred pedagogical approach. This observation is in line with the findings reported in Hawe and Dixon's (2014) study. Our evidence (questionnaire and observation data) further indicates that when it comes to initiating classroom-based alternative assessments, the teacher participants neither undergo a shift in their identity as assessors of writing nor increase their self-awareness to become assessment-capable by acquiring context-specific assessment knowledge (Scarino, 2013). Other than pedagogical content knowledge in writing, carrying out classroom-based assessment efficiently involves understanding the theoretical rationale, interpreting assessment data, utilising these data to inform instruction, engaging in decision-making processes, and developing critical professional judgements (Crusan et al., 2016). Our data apparently lack the aspect of utilising assessment results to inform teaching and learning. The reason for this might be that the teacher participants had not acquired sufficient knowledge to interpret the assessment results for upgrading their pedagogies. Another possible explanation is that the participants tend to segregate the formative and summative purposes of assessment when evaluating writing, which is rather common in exam-driven education contexts. In fact, the inability to utilise assessment results to improve teaching and learning in this study poses a contrast to the findings in Yan et al.'s (2018) study, where one EFL teacher was able to retrieve her teaching experience and analyse the students' errors when setting appropriate test items for her class. To explain the discrepancy in results, the teachers in our study might be unaware or even unreflective of how to integrate teaching and testing for TAL, given that transforming students' test outputs into inputs for teaching and test development knowledge demands reflective practices and constant exposure to pertinent assessment training.
Although the teacher participants perceived that they generally had a satisfactory level of TAL, some of them still lacked an updated knowledge base and practices regarding AaL. This is because having a strong faith in the merits of AaL does not automatically translate into its effective implementation. The findings also show that the study participants might not yet have acquired a deep understanding of AaL, which could facilitate the implementation of writing assessment. The classroom data specifically indicate that the two teachers lacked an awareness of shifting their mindsets, philosophies and instructional approaches when adopting alternative approaches to writing assessment. They did not understand that AaL, like AfL, requires students to play a proactive role in learning and to self-assess their writing critically and independently. Without making students notice their new roles as self-regulated learners, implementing AaL would be a tall order. While the two teachers, Tom and Willy, attempted to help their students engage in self-reflection for improving writing, they simply captured its form, not its essence. In other words, the teacher participants did not fully master the essence of AaL, because they mainly developed an awareness of what they had known and practiced, but not who they were when they evaluated student writing (Looney, Cumming, van Der Kleij, & Harris, 2017).
6. Conclusion and recommendations

The study has explored Hong Kong secondary school teachers' knowledge, conceptions and practices within a context of writing assessment. In general, the teacher participants perceived that they had fundamental knowledge of AoL and AfL, but not AaL. Their conceptions towards writing assessment were mostly positive, assuming that writing assessments could help improve writing. The teacher participants also attempted alternative writing assessments, but encountered some institutional barriers. In sum, the teacher participants in the study were reported to have a basic grip on TAL, but with a need to enhance their knowledge base and practices on AaL, and an awareness of their new roles as assessors of writing. Further, the participants were said to be unreflective of how to utilise assessment results to inform the teaching and learning of writing.
run. This phenomenon suggests that more can be done to empower school teachers to become assessment-capable in L2 writing and to act as assessors of writing more independently. To facilitate the development of TAL, I suggest three actionable recommendations. First, principals can provide teachers with support for attempting alternative assessments, e.g., self-reflection on portfolio work, interactive use of exemplars, and application of feedback dialogues in writing. All these initiatives need extra funding, teaching relief and professional
training. Without such space, support and autonomy, teachers will probably find it exacting to develop their TAL in a fuller sense. Second, as suggested by the findings, sharing assessment strategies within a community of practice would be a way forward. Through sustained professional dialogue, teachers can develop expert assessment judgements when
evaluating writing, and disseminate good assessment practices to colleagues. As such, enhanced teacher professionalism in writing assessment is likely to contribute to the achievement of TAL. Third, the author has proposed that including TAL as a mandated component of teacher training qualifications, e.g., BEd and PgDE, would create a positive washback effect on teaching and learning in teacher preparation programmes. Adding TAL as part of pedagogical content knowledge would make the measurement of TAL legitimate in both coursework and the teaching practicum.
Since the study is inherently exploratory, its findings generate further questions for future investigation. For instance, how do various stakeholders define, and then consolidate, what constitutes the construct of TAL in L2 writing, and why? What and how do programme
administrators evaluate pre-service secondary teachers’ assessment literacy within a larger L2 writing context? To what extent does language assessment training have an impact on the construction of writing assessor identity? The findings of this study are likely to add new knowledge to language assessment literacy scholarship, given that the attainment of TAL is a
necessary but not sufficient condition for promoting effective applications of AfL and AaL in L2 writing. The ideas of AfL and AaL in writing need to be constantly defined, updated or even negotiated by practitioners, administrators, parents, students, language testers and publishers. Despite its theoretical significance, the present study has its limitations: the sample size is small, and the respondents' self-reported data may be subject to bias, although data triangulation was employed.

Appendix 1: Teacher interview guide
1. What do you think about your writing assessment practices?
2. To what extent do you think these alternative writing assessments are feasible in your work context? (e.g., self- and peer assessment, conferences, portfolio assessment, etc.)
3. What have you learnt in your teacher training programme regarding writing assessment?
4. To what extent do you think your teacher training has equipped you with adequate knowledge of writing assessment?
5. Could you tell me how you assess your student writing?
6. What kind(s) of feedback do you find useful when marking your student writing?
7. Do you have any suggestions as to how writing assessment can be done differently with students of diverse abilities?
Appendix 2: Classroom observation protocol
General information: Date __ Time __ School __ Teacher __ Class __ Class size __ Topic __ Observers __ The room __ Environment (layout of desks/tables) __ Ordinary classroom/computer room/multi-media language center __
Learning objectives: Content objectives __ Language objectives __
Lesson flow:
Procedure of use – lead-in __ context-building __ pre-writing __ while-writing __ post-writing __ consolidation __ after-class activities __
Focus of each stage – input __ teacher feedback __ self- and peer feedback __ multiple drafting __ reflection __ re-teaching of weaknesses __
Resources used – PowerPoint __ Worksheets __ Textbook __
Interaction pattern – individual work __ pair work __ group work __ student-student interactions __ teacher-student interactions __ teacher-and-small-group interactions __
Reflection on specific assessment events: Open-ended

References
Author
Brookhart, S.M. (2011). Educational assessment knowledge and skills for teachers. Educational Measurement: Issues and Practice, 30(1), 3-12.
Brown, J.D., & Bailey, K.M. (2008). Language testing courses: What are they in 2007? Language Testing, 25(3), 349-383.
Cheng, L., & Wang, X. (2007). Grading, feedback, and reporting in ESL/EFL classrooms. Language Assessment Quarterly, 4(1), 85-107.
Coniam, D., & Falvey, P. (2013). Ten years on: The Hong Kong Language Proficiency Assessment for Teachers of English (LPATE). Language Testing, 30(1), 147-155.
Crusan, D., Plakans, L., & Gebril, A. (2016). Writing assessment literacy: Surveying second language teachers' knowledge, beliefs, and practices. Assessing Writing, 28, 43-56.
Davison, C., & Leung, C. (2009). Current issues in English language teacher-based assessment. TESOL Quarterly, 43(3), 393-415.
DeLuca, C., & Johnson, S. (2017). Developing assessment capable teachers in this age of accountability. Assessment in Education: Principles, Policy & Practice, 24(2), 121-126.
DeLuca, C., & Klinger, D.A. (2010). Assessment literacy development: Identifying gaps in teacher candidates' learning. Assessment in Education: Principles, Policy & Practice, 17(4), 419-438.
DeLuca, C., Chavez, T., Bellara, A., & Cao, C. (2013). Pedagogies for preservice assessment education: Supporting teacher candidates' assessment literacy development. The Teacher Educator, 48(2), 128-142.
Fulcher, G. (2012). Assessment literacy for the language classroom. Language Assessment Quarterly, 9(2), 113-132.
Gu, P.Y. (2014). The unbearable lightness of the curriculum: What drives the assessment practices of a teacher of English as a foreign language in a Chinese secondary school? Assessment in Education: Principles, Policy & Practice, 21(3), 286-305.
Hamp-Lyons, L. (2006). Feedback in portfolio-based writing courses. In K. Hyland & F. Hyland (Eds.), Feedback in second language writing: Contexts and issues (pp. 140-161). London: Cambridge University Press.
Hamp-Lyons, L. (2016). Purposes of assessment. In D. Tsagari & J. Banerjee (Eds.), Handbook of second language assessment (pp. 13-27). Germany: De Gruyter.
Harding, L., & Kremmel, B. (2016). Teacher assessment literacy and professional development. In D. Tsagari & J. Banerjee (Eds.), Handbook of second language assessment (pp. 413-427). Germany: De Gruyter.
Hawe, E.M., & Dixon, H.R. (2014). Building students' evaluative and productive expertise in the writing classroom. Assessing Writing, 19, 66-79.
Hyland, F. (1998). The impact of teacher written feedback on individual writers. Journal of Second Language Writing, 7(3), 255-286.
Klenowski, V., & Wyatt-Smith, C. (2014). Assessment for education: Standards, judgement and moderation. London: Sage.
Lee, I. (2013). Becoming a writing teacher: Using "identity" as an analytic lens to understand EFL writing teachers' development. Journal of Second Language Writing, 22, 330-345.
Lee, I. (2016). Teacher education on feedback in EFL writing: Issues, challenges, and future directions. TESOL Quarterly, 50(2), 518-527.
Lee, I. (2017). Classroom writing assessment and feedback in L2 school contexts. Singapore: Springer.
Lee, I., & Coniam, D. (2013). Introducing assessment for learning for EFL writing in an assessment of learning examination-driven system in Hong Kong. Journal of Second Language Writing, 22(1), 34-50.
Lee, I., Mak, P., & Burns, A. (2016). EFL teachers' attempts at feedback innovation in the writing classroom. Language Teaching Research, 20(2), 248-269.
Leung, C. (2014). Classroom-based assessment issues for language teacher education. In A.J. Kunnan (Ed.), The companion to language assessment. Vol. 3 (pp. 1510-1519). Chichester, UK: Wiley Blackwell.
Looney, A., Cumming, J., van Der Kleij, F., & Harris, K. (2017). Reconceptualizing the role of teachers as assessors: Teacher assessment identity. Assessment in Education: Principles, Policy & Practice, DOI: 10.1080/0969594X.2016.1268090.
Merriam, S.B. (1998). Qualitative research and case study applications in education: Revised and expanded from case study research in education. San Francisco, CA: Jossey-Bass Publishers.
Popham, W.J. (2009). Assessment literacy for teachers: Faddish or fundamental? Theory into Practice, 48(1), 4-11.
Qian, D.D. (2014). School-based English language assessment as a high-stakes examination component in Hong Kong: Insights of frontline assessors. Assessment in Education: Principles, Policy & Practice, 21(3), 251-270.
Ruecker, T., & Crusan, D. (Eds.). (2018). The politics of English second language writing assessment in global contexts. New York: Routledge.
Scarino, A. (2013). Language assessment literacy as self-awareness: Understanding the role of interpretation in assessment and in teacher learning. Language Testing, 30(3), 309-327.
Stiggins, R.J. (1991). Assessment literacy. The Phi Delta Kappan, 72(7), 534-539.
Taylor, L. (2009). Developing assessment literacy. Annual Review of Applied Linguistics, 29, 21-26.
Tight, M. (2017). Understanding case study research: Small-scale research with meaning. London, UK: Sage.
Vogt, K., & Tsagari, D. (2014). Assessment literacy of foreign language teachers: Findings of a European study. Language Assessment Quarterly, 11(4), 374-402.
Wiliam, D. (2011). What is assessment for learning? Studies in Educational Evaluation, 37(1), 3-14.
Xu, Y.T., & Brown, G.T.L. (2016). Teacher assessment literacy in practice: A reconceptualization. Teaching and Teacher Education, 58, 149-162.
Xu, Y.T., & Liu, Y.C. (2009). Teacher assessment knowledge and practice: A narrative inquiry of a Chinese college EFL teacher's experience. TESOL Quarterly, 43(3), 493-513.
Yan, X., Zhang, C., & Fan, J. (2018). "Assessment knowledge is important, but …": How contextual and experiential factors mediate assessment practice and training needs of language teachers. System, 74, 158-168.
Figure 1: An adapted conceptual framework of teacher assessment literacy
[Figure 1 depicts Teacher Assessment Literacy as comprising three interrelated components, each probably mediated by contextual factors:
Knowledge Base – domain: L2 writing assessment knowledge; knowledge of feedback, grading, alternative assessments, assessment purposes and ethics, e.g., understanding and interpretation of key assessment principles.
Teacher Conceptions – domain: cognitive and affective dimensions of belief systems regarding L2 writing assessment, e.g., teacher prior experiences of assessment as a learner, value judgement of assessment.
Practices – domain: the what and how aspects of classroom-based assessment practices in L2 writing, e.g., attempts at assessment innovations and their implementation strategies.]