Holistic, local, and process-oriented: What makes the University of Utah's Writing Placement Exam work

Assessing Writing 41 (2019) 84–87


Crystal J. Zanders, Emily Wilson

Joint Program in English and Education, 610 S. University Ave., Ann Arbor, MI, 48109, USA

ARTICLE INFO

ABSTRACT

Keywords: Writing placement; Placement exams; Direct assessment; Construct validity; Consequential validity

This review of the University of Utah’s Writing Placement exam evaluates the possibilities of the exam’s construct, addresses the tool's limitations, and analyzes it in light of similar placement tools. The review concludes that although there are challenges specifically related to the scalability, security, and language ideology of the exam, its holistic nature, local assessors, and process-oriented view of writing ensure its effectiveness as a writing placement test.

1. Introduction

Writing course placement at the University of Utah offers students multiple entry points into Writing 1010 and Writing 2010, required general education courses throughout the Utah System of Higher Education (USHE), which must be completed within the first three semesters of each student's enrollment. The university does not have a remedial writing course. However, most students are able to bypass Writing 1010 by having a sufficiently high Admissions Index, a number derived from students' incoming GPAs and SAT or ACT scores. Students who do not have an Admissions Index (including transfer students who have not fulfilled the 1010 requirement elsewhere), students who would like to challenge their placement in Writing 1010, and international students who do not wish to take English for Academic Success courses all take the University of Utah's Writing Placement Exam (WPE).

The University of Utah's system is a prime example of an institution that relies on multiple measures and local assessment to assist in course placement, and research supports the validity of these practices (Reinheimer, 2007). We mention measures like the Admissions Index to contextualize the WPE and account for conditions that sustain entirely human-rated assessment, but our primary focus is the WPE itself. The exam consists of 1) a writing prompt and 2) a list of traits for rating the essays. While many large universities are moving toward more standardized or automated placement measures, Utah's process is more time-consuming but arguably more flexible and individualized, in that it provides students with some agency in the placement process and is assessed by writing faculty.

The Writing Placement Exam is a direct assessment: a constructed response task that includes responding to a persuasive, essayistic text and supports a brief revision process. Students have 72 h to read an article and write a response featuring a critical evaluation of the author's argument. For example, students were asked to read David Glenn's article "Divided Attention," from the Chronicle of Higher Education. Students were instructed to summarize the author's argument and to evaluate the argument's persuasiveness, including integration of "potential counter arguments." The instructions emphasize that the exam should not rely on outside sources, differentiating it from placement assessments that include research or synthesis. Four of Writing 1010's learning outcomes are listed below the prompt, and a list of "criteria for assessment" appears below the outcomes.



Corresponding author. E-mail addresses: [email protected] (C.J. Zanders), [email protected] (E. Wilson).

https://doi.org/10.1016/j.asw.2019.06.003
Received 22 April 2019; Received in revised form 21 May 2019; Accepted 7 June 2019; Available online 18 June 2019


Criteria for assessment are spelled out in greater detail in the "List of Primary Traits for Rating Placement Essays" provided to faculty readers. The document notes that these criteria are not meant to be a checklist, and no numerical weights are given to the categories (content and focus, audience awareness and context, cohesion, and clarity and sentence structure). Instead, the traits are intended to "give the raters a more stable set of criteria by which to make a holistic assessment of the placement essay." The list assumes the raters' shared understanding of the objectives and ethos of Utah's writing courses, situating the exam within the "local contexts of [its] curricula" (Stalions, 2009, p. 121). For example, one criterion is writing in a "respectful, semi-formal tone similar to that encouraged in College Writing classes." Raters meet at the beginning of each semester to review the traits, and all four faculty rate all students' essays. Generally, their placement decisions tend to agree (Chatterley, 2018). In keeping with calls for raters who "make placement decisions based upon what they know about writing and the curriculum of the courses they teach" (Huot, 1996, p. 554), the WPE is built and rated by the university's writing faculty.

2. Possibilities

By listing the four primary learning outcomes of Writing 1010 as the first set of criteria for assessment, the Writing Placement Exam's assignment creates a "sound argument to support the interpretation and use of assessment results" (Moore et al., 2009, p. W114). In other words, the exam writers strengthen their tool's construct validity by making the learning outcomes of Writing 1010 part of the writing construct of the WPE.

The WPE's writing construct also supports a revision process, reflecting the values of courses that teach writing as a process rather than a product. According to the CCCC Committee on Assessment, "Essay tests that ask students to form and articulate opinions about some important issue, for instance, without time to reflect, talk to others, read on the subject, revise, and have a human audience promote distorted notions of what writing is" (CCCC Committee on Assessment, 2014). Revision as part of the writing construct aligns with the expectations of subsequent writing courses, which will encourage students to solicit feedback and revisit their work. Although three days may still yield an early draft, this process avoids some of the compromises to validity that occur in writing-on-demand tasks or indirect assessments.

Additionally, the WPE is "site-based, locally controlled…[and] rhetorically based" (Moore et al., 2009, p. W110). The instructors are the evaluators, and as such, they have a vested interest in ensuring that students are placed appropriately, as well as the pedagogical knowledge and expertise to make that determination. The format of the reading-to-write exam also gives instructors valuable insight into students' dexterity in handling and representing ideas from sources, which is itself a learning outcome of the writing courses and therefore a characteristic that enhances construct validity.
According to the assignment sheet, "the goal is to demonstrate comprehension of the article and then to identify the strengths and weakness of its argument." To complete the assessment, students must determine the main idea and supporting details of the source text, evaluate the evidence presented, make a claim, and then integrate quotes from the article as evidence supporting it. Ultimately, the alignment of the WPE's writing construct with the learning outcomes of the Writing 1010 course, the use of writing faculty as exam raters, and the inclusion of revision in the writing construct mean that the exam's construct validity is supported by both "empirical evidence and theoretical rationales" (Moore et al., 2009, p. W114).

3. Limitations

The first set of constraints for this tool has to do with the conditions of its administration. Essays written in 72 h still constitute early drafts, and students might produce better writing under conditions that more closely mirror those of writing courses (a potential compromise to the exam's construct validity). Conversely, a take-home test makes it difficult to verify that the essay is the student's original work, a condition that exists in writing courses as well. Also, although read-to-write tasks have numerous advantages (Cumming, 2013), including the ways they reflect assignments that students are likely to encounter in a first-year writing course, some students will more readily identify with the topic of the text they are writing about, calling into question the alignment (or possible misalignment) of students' cultural contexts with the context of the university and the exam itself.

Additionally, the University of Utah does not collect disaggregated data regarding the demographics (including race, sexual identity, religion, etc.) of the students who take the exam or how students from those groups are placed. Without these data, the university cannot ensure the consequential validity of the exam.

The fourth set of constraints appears in the "List of Primary Traits." It is clear that those drafting this document have worked carefully to avoid explicitly privileging one dialect over another. For example, they write that sentence structure and verb tense should be "consistent," not "correct." However, the vague wording of some traits might lend itself to privileging standardized English in the uptake of individual raters. For example, the sentence "the student explains ideas and situations clearly" could give raters a reason to discount non-standardized dialects, even if the meaning is evident. In addition, both the writing sample and the prompt are written in standardized edited academic English, a choice that centralizes and privileges this dialect. We do acknowledge that the potential for linguistic discrimination in writing assessment is an ongoing, universal issue, not a limitation particular to this exam.
4. Connections

The University of Utah's Writing Placement Exam shares several characteristics with the Directed Self Placement Exam (DSP) offered at the University of Michigan (UM). The UM DSP also involves a constructed response task that students complete at home. Both assessments encourage revision, treating writing as a process rather than a product by allowing the student to write over multiple days. However, UM students have more than a month to complete the DSP rather than the 72 h allowed for the WPE (Sweetland Center for Writing, 2019). For both tasks, students are asked to read a text, analyze it, and write about that analysis, tasks similar to what they would complete in the first-year writing course. Therefore, both assessments establish strong construct validity by aligning the placement exam construct with best practices for writing as taught in the courses.

One clear contrast between the Writing Placement Exam and Directed Self Placement is that the DSP positions students as the primary evaluators of their writing. During the orientation process, students and advisors discuss students' experiences writing the DSP and then choose a writing course together (Sweetland Center for Writing, 2019). This practice gives students more control over their learning than the Writing Placement Exam. However, educators arguably have significantly more background knowledge and are more qualified to evaluate essays than students. The DSP, by giving students agency over their course placement, may also require students to make a judgment call based on insufficient information (though the presence of advisors should help alleviate some of the pressure). Whether the benefit of agency provided by the self-evaluation outweighs the benefits of experienced evaluators may vary from student to student.

While the DSP is administered to almost all first-year students and many transfer students at the University of Michigan (Sweetland Center for Writing, 2019), the Writing Placement Exam, by contrast, is given as the exception rather than the rule at the University of Utah. Therefore, the exams can be compared only in terms of construct validity rather than consequential validity. Practically, allowing students to evaluate their own essays, with the help of their advisors, allows the DSP to be used on a much larger scale without requiring additional commitment and participation from writing faculty.

5. Futures and conclusion

Utah's placement process, including students' ability to challenge their placement by taking the WPE, has distinct advantages over standardized assessments that have been shown to be "weakly correlated with success in college-level courses" (Hodara et al., 2012, p. 1) and thus have poor consequential validity. Building on these advantages by making the WPE more widely available to students would strengthen Utah's already strong placement process. However, we can also see challenges associated with this trajectory. Currently, the WPE is rated by writing faculty. Having the exam created and assessed by the same people who teach the writing courses is ideal, but we wonder about the scalability of the approach. The current context allows for a more individual approach; students have a placement option that includes more in-depth personal consideration than the rubber-stamping that often occurs with large-scale tests. If many more students were to take the WPE, those advantages would be eliminated, and faculty raters might not be able to handle the increased workload. One future option could be to include some form of automated writing evaluation (AWE), which many writing programs use. This would allow for greater scalability and efficiency, but it would come with its own set of challenges; research suggests that tests relying on AWE "underrepresent the construct of writing as it is understood by the writing community" (Condon, 2013, p. 100). As the writing sections of standardized tests come under increased scrutiny (Jaschik, 2018), it is possible that multiple-measure approaches may offer more flexible, more local, and more individualized validation arguments than standardized measures.

Within the foreseeable future, the University of Utah plans to continue using its current placement process (Chatterley, 2018). There is much to celebrate about the WPE, even though we acknowledge that most U of U students are placed in writing courses using other measures, which we have not had the space to analyze in this review. The WPE itself is flexible, process-oriented, locally developed, and holistically assessed: all characteristics of a valid assessment tool.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Acknowledgements

Our sincere thanks to the generous folks at the University of Utah. In particular, we'd like to thank Zachariah Chatterley for providing us with materials and answering so many questions, Christie Toth for facilitating the connection, and LuMing Mao for giving us permission to review their materials.

References

CCCC Committee on Assessment (2014). Writing assessment: A position statement. National Council of Teachers of English.
Chatterley, Z. (2018). Email correspondence.
Condon, W. (2013). Large-scale assessment, locally-developed measure, and automated scoring of essays: Fishing for red herrings? Assessing Writing, 18(1), 100–108.
Cumming, A. (2013). Assessing integrated writing tasks for academic purposes. Language Assessment Quarterly, 10(1), 1–8.
Hodara, M., Jaggars, S., & Karp, M. (2012). Improving developmental education assessment and placement: Lessons from community colleges across the country. CCRC Working Paper No. 51, 1–44.
Huot, B. (1996). Toward a new theory of writing assessment. College Composition and Communication, 47(4), 549–566.
Jaschik, S. (2018). For fate of SAT Writing Test, watch California. Inside Higher Ed. https://www.insidehighered.com/admissions/article/2018/07/16/more-colleges-drop-sat-writing-test-all-eyes-are-california.
Moore, C., O'Neill, P., & Huot, B. (2009). Creating a culture of assessment in writing programs and beyond. College Composition and Communication, 61(1), W107–W132.
Reinheimer, D. (2007). Validating placement: Local means, multiple measures. Assessing Writing, 12(3), 170–179.
Stalions, E. (2009). Putting placement on the map: Bowling Green State University. Organic writing assessment: Dynamic criteria mapping in action. Logan: Utah State University Press, 119–153.
Sweetland Center for Writing (2019). Directed self-placement for writing for first year students: Step 1 instructions and guidelines. University of Michigan LSA. Retrieved from https://webapps.lsa.umich.edu/SAA/UGStuAdv/App/WritingDSP/WritingDSPFr.aspx.

Crystal J. Zanders is a doctoral student in the Joint Program in English and Education at the University of Michigan. As a Rackham Merit Fellow, she is studying literacy and the expression of educational inequity in K-12 classrooms. Her poetry explores the themes of personal and generational trauma.

Dr. Emily Wilson is a recent graduate of the Joint Program in English and Education at the University of Michigan. In fall 2019, she will be an assistant professor of English at Alfaisal University in Riyadh, Saudi Arabia. She studies the literacies of mobile and displaced populations.
