
Computers and Composition 17, 197–210 (2000) ISSN 8755-4615 © 2000 Elsevier Science Inc. All rights of reproduction in any form reserved.

The Influence of Word Processing on English Placement Test Results

SUSANMARIE HARRINGTON Indiana University Purdue University Indianapolis

MARK D. SHERMIS Indiana University Purdue University Indianapolis

ANGELA L. ROLLINS Indiana University Purdue University Indianapolis

A study was conducted to consider two issues: (a) whether differences might emerge in writing quality when students wrote examinations by hand or on a computer and (b) whether raters differed in their evaluation of essays written by hand, on a computer, or by hand and then transcribed to typed form before scoring. A total of 480 students from a large Midwestern university were randomly assigned to one of three essay groups: (a) those who composed their responses in a traditional bluebook, (b) those who wrote in a bluebook and then had their essays transcribed into a computer, and (c) those who wrote their essays on the computer. A one-way ANOVA revealed no statistically significant differences in ratings among the three groups [F(2,475) = 1.21, ns]. The discussion centers on the need for testing programs to examine the relationship between assessment and prior writing experiences, student preferences for testing medium, and rater training regarding the possible impact of technology on scores.

Keywords: modality effects, placement testing, writing assessment

Direct all correspondence to: Susanmarie Harrington, IUPUI Department of English, 425 University Boulevard, Indianapolis, IN 46202. E-mail: <[email protected]>.

An English placement test—an exam given to determine what form of writing instruction best matches a student's needs upon college entry—is perhaps the most commonly practiced large-scale writing assessment in American colleges and universities. Although the impact of technology on writing and writing assessment in classrooms has often been studied, the impact of technology on placement testing remains underinvestigated. Research on placement testing has concerned itself with the development of assessment instruments (e.g., Ruth & Murphy, 1988), the worth of direct writing assessment (e.g., Huot, 1990), the methods of scoring (Cooper & Odell, 1977; Haswell & Wyche-Smith, 1995; Huot, 1996; Smith, 1993), and rater effects on scores (Black, Daiker, Sommers, & Stygall, 1994; Barritt, Stock, & Clark, 1986). On the whole, writing assessment scholars have been more likely to study classroom-produced texts than placement tests (although Haswell, 1988, is one exception), and computers and composition scholars have paid scant attention to placement tests; one possible reason is that most impromptu placement tests are handwritten texts.

The time is now ripe, however, for the study of technology and placement testing. Computers are becoming well established on many college campuses, and students thus have access to computers both in and out of class. Increasing familiarity with word-processing software leads students to expect to find computers in use for any writing assessment, including placement tests. Test administrators may have to make choices about the use of resources for testing, and about whether to allow students to use available computers with word-processing software during placement tests. Making good choices depends, of course, on whether the use of word-processing software for placement tests affects students' placement test results. Are students' placements affected by their decisions to compose placement essays on a computer or by hand? The potential effects could be manifested in two ways: first, using computers to produce placement tests could affect the quality of students' writing; second, reading tests produced on computers could affect raters' judgments about course placements.

This issue will become increasingly pressing as computers become more prevalent on college campuses. As computers proliferate, it is tempting to use them for testing simply because they can be used for testing. And, as students arrive at college with increasing prior exposure to computers, it seems only sensible to encourage those who can use word-processing software to do so for most writing needs, including placement testing. If students' prior experiences with word-processing software mean that composing on a computer is an integral part of their writing process, asking them to produce a placement essay with pen and paper means asking them to use a less familiar medium of composing for testing. Students writing in an unfamiliar medium may well produce texts that are not representative of their abilities—texts that are inferior in both form and content. But the effect of computers may not be so clear-cut. Writing on paper rather than on a computer may, counterintuitively, actually benefit students; without the capacity to easily and seemingly endlessly revise a sentence, students may be more likely to simply get on with the project at hand, thereby producing a more complete text in the allotted time.

Test administrators must also consider that the difference between handwritten and computer-generated text may affect not only student writing but also rater reaction. Raters react to the appearance of a text as well as to its substance. Computer-generated text may be viewed as more professional and, hence, better. However, it is also possible that computer-generated text may disadvantage students, whether by making grammatical errors more obvious or by distancing the rater from the writer; printed text may be seen as colder and less immediate than its handwritten counterpart, and thus rated less highly. Unfortunately, the extant research on the effect of media on student writing and rater reaction is fragmented and conflicting.
Our study was designed to provide an empirical basis for evaluating the effects of access to word-processing software on students’ placement results.


The Impact of Computers on Writing Quality

Although many studies have been conducted on the effects of medium on the quality of student writing, no consensus has emerged. Several important review articles (Cochran-Smith, 1991; Hawisher, 1989) have noted the overall mixed results of these studies. Gail Hawisher (1989) summed up prior research:

    Students seem to have positive attitudes toward writing and word processing after working with computers; students exhibit finished products that have fewer mechanical errors than those written with traditional tools; and many students write longer pieces with word processing than with traditional methods. We find conflicting results when we examine two variables: revision and quality. Slightly more studies found an increase in revision as found no increase in revision, and fewer studies found improvement in quality as found no improvement. (p. 52)

In the years since Hawisher's (1989) and Marilyn Cochran-Smith's (1991) review articles, no meta-study has come to any satisfactory conclusion regarding the varied results of studies on revision. The factors affecting the overall quality of students' texts are complex, and as Christine Neuwirth, Christina Haas, and John Hayes (1990) noted, the effects of computers on writing are many, ranging from effects on classroom dynamics and relationships to effects on student or teacher affect, writing processes, and content.

The varied effects on the quality of student texts are illuminated to some extent by more focused studies that examine particular dimensions of writing, such as planning or specific choices made while revising. Planning is a key feature of an impromptu essay, because the relatively short time allotted means that students must quickly devise and execute a plan for a successful essay. Colette Daiute (1983), whose research has largely focused on elementary school writers, argued that writing with word processors increases writers' planning, because the word processor eliminates the need to worry about handwriting or spelling, a finding reinforced by David Dickinson (1986). However, other studies of older writers suggest that writing with word processors increases attention to the appearance (rather than the content) of a text, and that word- or sentence-level changes become the object of attention (Bridwell-Bowles, Johnson, & Brehe, 1987; Bridwell, Sirc, & Brooke, 1985; Collier, 1983; Haas, 1989a).

Christina Haas's (1989a) literature review noted that research on word processing's impact on planning had to date failed to produce uniform results—a situation that may be related to the various definitions of planning used by researchers as well as the inherent difficulty of distinguishing phases of writing processes in individual cases. Haas (1989a), using a broad definition of planning, found that writers engaged in less planning when writing with a word processor, both before beginning to write and during the production of text; this finding held for both expert and novice writers. The decrease in planning may be related to the increased difficulty students experience in reading on the computer screen (Haas, 1989b). Alternatively, it could be related to the ways computers have been found to distract students from the rhetorical task at hand; Aviva Freedman and Linda Clarke (1988) found that students' tendency to correct typographical errors as they worked at a computer was a significant impediment to text production. However, in a more recent study, Helen Schwartz, Christine Fitzpatrick, and Brian Huot (1994) found that when students produced discovery writing on a computer, their answers were longer and more richly developed. Schwartz et al. did not define discovery writing as planning, but the production of a rich store of textual responses is one dimension of the planning phase of writing.

Perhaps the single most consistent finding of more than a decade of attention to the impact of computers on writing is that students' texts are longer when composed on a computer (Burley, 1994; Coulter, 1986; Dean, 1986; Peterson, 1993; Haas & Hayes, 1986; Hawisher & Fortune, 1988; Juettner, 1987; Kaplan, 1986; Russell & Haney, 1997). However, as noted above, findings are mixed as to whether greater length bears any relationship to greater quality. Some studies found that computer-composed texts are inferior to hand-composed texts in terms of focus and coherence (e.g., Burley, 1994); other studies found that the revision of computer-composed texts led to higher quality than the revision of hand-composed texts (e.g., Kaplan, 1986); still other studies found no overall differences in quality between computer- and hand-composed texts (e.g., Daiute, 1985; Hawisher, 1987; Rhodes & Ives, 1991). Two more recent studies (Wolfe, Bolton, Feltovich, & Bangert, 1996; Wolfe, Bolton, Feltovich, & Niday, 1996) suggest that the salient factor affecting the quality of work may be students' experience with word-processing software rather than the composition medium. Edward Wolfe, Sandra Bolton, Brian Feltovich, and Art W. Bangert (1996) found little difference in quality between word-processed and handwritten texts for students with some word-processing experience.

The perplexing differences in results regarding revision are discussed thoroughly in Hawisher (1989). An impromptu placement exam, of course, affords limited opportunity for revision, and it is likely that the factors influencing the interaction of revising strategies and technologies are not at work in a timed situation. The mixed results of revision research, in any event, suggest that students with previous experience with word-processing software will have had a range of revising experiences, with some having revised their work more for having written on a computer, some less. For our purposes, it is enough to note that the previous research did not encourage us to hypothesize that essay quality would be affected in any particular way by the use of computers with word-processing software, particularly if students with computer anxiety or a lack of typing skills were excluded from the study. The relationship between medium of composition and overall text quality must be linked to the complex interaction of local factors.

The Impact of Computers on Rater Reaction

Although previous research is decidedly inconclusive as to the net effect of computer use on writing quality, it provides a slightly more coherent body of findings about the ways raters react to writing produced with particular technologies—most notably in studies regarding handwriting and its impact on raters. The appearance of a text is one important factor in raters' judgments of essay quality. The quality of handwriting, in particular, can affect the perception of quality; generally speaking, poor handwriting is associated with lower scores (Chase, 1986; Markham, 1976), although Clinton I. Chase (1986) found that handwriting quality interacts in a complex fashion with other factors such as race, gender, and teacher expectation. However, when the question is whether tests or impromptu essays produced by hand or by computer receive higher scores, findings are not uniform.
Research in this area is no more conclusive than that regarding the relationship of quality and word processing.


Studies of standardized tests have established no consensus on the effect of test medium on scores (see the review by Bunderson, Inouye, & Olsen, 1989). In an unpublished study, researchers at Rio Hondo College found that when handwritten essays were converted to word-processed essays (with all errors in grammar and spelling left intact), they received lower scores, that raters preferred reading handwritten essays, and that raters were more empathetic toward writers of handwritten rather than typed essays (Arnold et al., 1990). In a later study, Donald E. Powers, Mary E. Fowles, Marisa Farnum, and Paul Ramsey (1994) similarly found that handwritten essays converted to a word-processed format received lower scores, and that word-processed essays converted to handwritten format received higher scores. Because these lower scores could have resulted from a number of factors that could be controlled for with training, the researchers repeated their study, emphasizing to test raters that typed essays seem shorter than their handwritten counterparts, that sloppy handwriting can hide errors in language that might seem more striking in typed texts, and that the absence of crossed-out text in a typed essay does not mean that the writer has not done any planning or revising. Even with this training, the results of the repeated experiment were the same: Typed texts received lower scores.

However, a more recent study (Russell & Haney, 1997) compared students' responses on open-ended, multiple-choice, and direct writing assessment items and found that working on a computer had a substantial positive impact on students' writing scores. Students who had been working with computers in school for at least one year before the experiment wrote longer texts with more paragraphs, texts of significantly higher quality than those written by hand. Russell and Haney argued, on the basis of these striking findings, that truly authentic writing assessments must consider not only authentic tasks but also authentic test-taking media. For an increasing number of today's students, the authentic test medium will be a computer.

The Current Study

This study was conducted to provide an empirical basis for administrative decision-making at our institution. In a classroom setting (such as Russell and Haney's, 1997), it is easy to devise assessments that take account of students' prior writing habits. Universities designing placement tests, however, have much less information about the prior writing experiences of their entering student populations. Given the conflicting array of research findings regarding the relationships among computers, writing quality, and rater judgment, it is necessary to identify whether the use of computers affects students' texts or the raters' judgments of those texts.

METHODS

Participants

The target population comprised students who took an English placement test during a one-month period at a large Midwestern university. The final sample consisted of 480 students. Student ages ranged from 14 to 60, with a mean of 20.63 (SD = 5.91). The participant pool had a slightly higher percentage of females (55.9%) than males (44.1%). To derive a rough estimate of high school achievement level, high school percentile rankings were also collected for the sample. The mean for high school percentile rankings was 48.39 (SD = 25.38). Students were predominantly Caucasian (75.6%), with a smaller percentage of African American students (18.5%). These sample characteristics are summarized in Table 1.


Table 1
Frequencies of Sample Characteristics (N = 479)

Characteristic                                n        (%)
Ethnicity
  Caucasian                                 359      (74.9)
  African American                           88      (18.4)
  Hispanic                                    4        (.8)
  Asian/Pacific Islander                      4        (.8)
  Native American                             2        (.4)
  Other American                              8       (1.7)
  Refused to answer or not ascertained       14       (2.9)
Gender
  Male                                      211      (44.1)
  Female                                    268      (55.9)

The English Placement Test and Assessment System

The English placement exam is a one-hour exam that asks students to write an essay explaining and supporting their opinion on a current social issue; students have a choice of two questions, each providing a brief explanation of the issue or the context in which the test question is posed. Students are also asked to evaluate their answer and explain what changes they might make, had they the time to do so.

When raters assess the English placement tests, they rely on their teaching experiences to guide their placement decisions, and their ratings indicate which introductory writing course would be appropriate for each essay's writer. The distinction between basic writing and first-year composition courses is judged according to the presence or absence of organization, support, development, and an articulated position. Students who need extra help focusing their essays around a major theme, or who need extra help understanding the relationship between assertion and support, are placed into a basic writing course. Stronger writers who would benefit from a more challenging course are placed into honors. Although placement rates may vary from year to year, on the whole 60% of the students who take the test are placed into first-year composition, 35% are placed into basic writing, and roughly 5% are placed into honors, ESL, or other special courses. Most placement decisions are made by faculty who teach first-year composition and basic writing; honors placements are made by faculty who teach honors courses.

The scoring system is based on models developed at the University of Pittsburgh (Smith, 1993) and Washington State University (Haswell & Wyche-Smith, 1995). Test raters' scores indicate whether an exam is prototypical for a given course or on the borderline between two courses. If the first rater places the student into first-year composition, the process is ended; if the first rater places a student into another course, a second (and third, if needed) rater scores the essay.


Rater training in this system is based on faculty development opportunities throughout the academic year and on two portfolio readings each semester, which develop shared expectations for course boundaries and student outcomes. In addition, placement test raters meet regularly during the year to discuss groups of placement tests chosen by the Director of Writing. Test scoring is guided by materials that outline expectations for each level of composition course, and sample tests are used in rater meetings to foster awareness of course boundaries and corresponding placement scores (see Harrington, 1998, for a fuller discussion of rater training and scoring).

Final placement decisions are represented by numeric scores indicating whether the test was judged prototypic or not. Final numeric scores of 1 through 4 indicate placement into a prebasic writing course; 5 through 11 indicate placement into basic writing; 12 through 18 indicate placement into first-year composition; and 19 through 22 indicate placement into honors. Research on comparable scoring systems in other programs suggests that training and shared teaching expertise create acceptable levels of reliability (see, for example, Smith, 1993; White, 1995). The predictive validity of the test has been computed with correlations in the low .20s (Mzumara, Shermis, & Wimer, 1996), with grades as an outcome variable.

Procedure

Participants were initially screened on two dimensions. First, they were asked to provide a self-assessment of whether or not they possessed basic typing skills and skills in using word-processing software. Second, they were asked to provide a self-assessment of how anxious they were about working with computers. Only two participants indicated that their typing and word-processing skills were sufficiently poor to prevent them from participating in the study (they wrote their tests by hand and received placements, but their results were not included in the dataset). No participants rated themselves as being sufficiently anxious that they could not take part in the study.

After screening, participants were randomly assigned to one of three groups as they came in for testing. The first group wrote their essays in a bluebook, and these essays were scored from the bluebook. This group represents the current method of English placement testing on our campus. The second group wrote their essays in the bluebook, and staff typed these essays into the database so that they appeared to raters like typewritten essays. Any spelling or grammatical errors were left intact during the transfer from handwritten to typed form. The third group typed their essays into the database directly, and their essays were scored as such. The purpose of the first and third groups was to test for differences between handwritten and typed essays when raters are aware of the test conditions. The second group (which we refer to as the transcribed group below), however, allowed the study of test quality when raters were blind to the actual testing medium. Differences between the first and second groups would indicate the influence of rater reactions to testing medium (i.e., handwritten or typed).1

1 The data were examined for possible initial group differences on two background characteristics: high school percentile rank [F(2,384) = 1.57, ns] and age [F(2,471) = 0.61, ns]. Because the ANOVA revealed no significant initial group differences, the analysis was continued for the main hypothesis.
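
The score bands described above (1–4 prebasic writing, 5–11 basic writing, 12–18 first-year composition, 19–22 honors) reduce to a simple lookup from final numeric score to course placement. The following is a minimal illustrative sketch of that mapping; the function name and code are ours, offered only for clarity, and are not part of the placement program's software.

```python
# A minimal sketch (ours, not the placement system's software) of the
# score-to-course mapping described above.

def course_for_score(score: int) -> str:
    """Map a final numeric placement score (1-22) to a course placement."""
    if not 1 <= score <= 22:
        raise ValueError(f"placement scores range from 1 to 22, got {score}")
    if score <= 4:
        return "prebasic writing"
    if score <= 11:
        return "basic writing"
    if score <= 18:
        return "first-year composition"
    return "honors"


# For example, the group means reported in Table 3 (roughly 12.5 to 13.1)
# all fall within the first-year composition band.
for s in (3, 8, 13, 20):
    print(s, "->", course_for_score(s))
```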


Table 2
Analysis of Variance for Placement Scores

Source               SS       df       MS       F
Between (type)       24.6       2      12.3     1.2
Within             4818.8     475      10.1     —
Total              4843.4     477      —        —

All tests were given in a microcomputer testing lab containing 25 computer stations. The laboratory contains 12 Macintosh and 13 PC-based computers in a sound-proofed area of a little more than 1,000 square feet. Each computer station is housed on a 3′ × 8′ table with privacy partitions; proctors are available to provide basic technical assistance to students taking the test and to ensure an appropriate testing environment. Students scheduled placement tests at their convenience during the testing lab's open hours (8 a.m. to 8 p.m. on weekdays and 9 a.m. to 5 p.m. on Saturdays) and used a World Wide Web-based application (rather than a full-fledged word-processing package) to produce their tests. No spell-check was available in this application (a technological constraint of which the test raters were aware). The lab's Web server uses Microsoft software to retrieve the Web form (along with the prompts and instructions) and display it for the student. Once the student completed the essay, it was submitted to a Microsoft ACCESS database for storage and evaluation. Raters who evaluated the electronic versions of the test did so by retrieving the essay along with an electronic evaluation form in a Web-based environment (accessible from home or campus). Ratings were stored in the ACCESS database and also made available in a university-wide reporting system (see Shermis, Mzumara, Brown, & Lillig, 1997, for a more complete description of the software system).

RESULTS

The numeric placement scores were analyzed using a one-way ANOVA with the type of input medium (bluebook, transcribed, typed) as the independent variable. Significant differences in the ANOVA would mean that the raters perceived the input media differently. Results of the ANOVA are presented in Table 2. Sample means and standard deviations stratified by type of input medium are presented in Table 3. There were no significant mean placement score differences among the three groups [F(2,475) = 1.21, ns].

Table 3
Means and Standard Deviations of Placement Scores by Comparison Groups

Variable          n        M        SD
Test group
  Handwritten    152     13.14     2.93
  Transcribed    148     12.58     3.06
  Typed          163     12.54     3.32
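
For readers who want to see the shape of this analysis, the following is a minimal sketch, not the authors' analysis code, of a one-way ANOVA of placement scores with input medium as the independent variable; the score arrays are invented illustrations, not study data.

```python
# A minimal sketch of the analysis described above: a one-way ANOVA of
# placement scores by input medium. The arrays below are invented,
# illustrative values, NOT the study's data.
from scipy import stats

scores_by_medium = {
    "handwritten": [13, 12, 15, 11, 14, 12, 16, 13],
    "transcribed": [12, 14, 11, 13, 12, 15, 10, 13],
    "typed":       [13, 11, 12, 14, 12, 13, 11, 15],
}

f_stat, p_value = stats.f_oneway(*scores_by_medium.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# A nonsignificant F (as in Table 2) indicates no detectable difference in
# mean placement scores across the three media. The follow-up chi-square on
# the placement-by-medium contingency table reported below could be run
# analogously with scipy.stats.chi2_contingency.
```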


To confirm that the nonsignificant sample mean differences would have no impact on placement decisions, a χ² test was performed on the placement by type of input matrix. The χ² was nonsignificant [χ²(6) = 5.28, ns]. Inter-rater reliability was computed on a subsample of the data, with an average reliability coefficient between two raters of r(204) = .87, p < .001.

DISCUSSION

These analyses indicate that the use of a computer equipped with a word-processing interface did not affect students' English placement test results. The results from the transcribed group, whose handwritten essays were retyped before scoring, permitted the analysis of the effect of medium on rater reaction. No differences were found between any of the experimental conditions. Several issues arise as a result of this experiment:

1. Deciding whether to switch from hand-composed placement tests to computer-composed placement tests. The fact that there were no differences between groups means that arguments can be made both for switching to a computer-based placement test and against such a switch. If students are neither advantaged nor disadvantaged by the test medium, the test medium alone provides no rationale for administrative decisions. However, other factors, including available resources, the quality of existing technology, and students' prior writing experiences, must be weighed when making such decisions.

2. Assessment and student practice. Placement tests must match up not only with the curricula into which they place students, but also with the prior writing practices of the students who will take them. At a time when more and more students are using computers to write, placement testing must adapt. However, it is important to note that student experience with computers is not universal; at our urban campus, with a wide range of incoming students, we see students who have been using computers since kindergarten and others who have never used a computer at all. It is important that test conditions be flexible, so that students who cannot type, or who prefer not to type, can test under conditions they find familiar. Only two students exercised their right to opt out of the experiment and handwrite their tests (because of lack of experience with word-processing software), but these two students raise an important concern. Edward Wolfe, Sandra Bolton, Brian Feltovich, and Donna Niday (1996) compared student performance by handwriting and by computer and found that students with little word-processing experience scored significantly lower when typing than when writing by hand. Prior experience with word processing may affect student performance, and test administrators should consider this factor when designing test procedures. Otherwise, a computerized test may penalize students with weak word-processing abilities.

3. Student preferences. For the past two years, students taking the English placement test have been asked to reread and evaluate their answer, and to explain what sorts of changes they would make in the essay if they had the chance to revise it. Most frequently, students noted that they would type their answers, or rewrite to eliminate problems with messy handwriting; many students also reported that they would spend some time researching information for their answers, but those issues are beyond the scope of our inquiry here. The students' desire to type their essays reflects a concern with the appearance of the final product, as well as an awareness of some of the advantages introduced by spellcheckers. Many students clearly are eager to use a computer to help them demonstrate their best efforts on the placement test.

4. Rater awareness. Some previous research suggested that test raters held typed text to higher standards than handwritten text. Although this finding was not replicated here, it is important to note that raters of typed placement tests must re-evaluate the usual standards they use to gauge certain text features. During the experimental period, test raters were meeting weekly to develop new placement test questions, and the subject of the typed tests occasionally came up in informal conversation. Although we were careful not to bias the experiment by revealing the existence of the transcribed group until after the experimental period had ended, it was difficult not to permit some conversation about the experiment. Raters reported having to rethink inferences made on the basis of length, which became much more difficult to evaluate with typed texts, which seem much shorter than their handwritten counterparts. Similarly, spelling errors seemed more prominent in typed tests than in some handwritten tests. Without instruction from the authors, the placement test raters evaluated the situation and noted that assumptions about traditional rating categories (such as length) needed to be re-evaluated in light of the impromptu testing situation. With the adoption of computerized testing as a widespread format for placement testing, more explicit rater training on this point will be necessary, to ensure continued fairness and to help test raters be more comfortable with their judgments.

5. Technological reliability. The experimental period saw an increase in the number of logistical problems relating to test handling. During the development phase, the Testing Center server crashed periodically—an average of once a day—and some student tests were lost. In some cases, the lost tests were not noticed until the students had trouble scheduling advising and orientation appointments. In other cases, students had to retake the placement tests immediately. Computerized testing introduces some uncertainty into the testing process; computers break down much more often than pens and bluebooks do. Test administrators need to anticipate such problems, and do their best to minimize them, to make computerized testing a smooth, relatively stress-free, and efficient procedure.

CONCLUSIONS

Given the multitude of conflicting research results regarding the impact of computers on students' writing, we are wary of generalizing from the results of this study. Clearly, entering students at our institution were not disadvantaged by the chance to type their English placement tests; given many students' stated preference for typing over writing by hand, we plan to make this option available to students.

Through the 1980s, a steady stream of research explored the impact of computers on students' writing. But as computers became more and more common on college campuses, research questions began to shift. Fifteen years ago, English departments faced the question "Should we use computers?" Now the question has become "How should we use computers?" Research questions have accordingly shifted, to examine not whether computers make a difference, but how computers make a difference. A quick glance through the recent volumes of this journal illustrates that research now examines the demands the electronic age makes on readers and writers.

Yet the question of the impact of computers is not moot. As computers become more and more common on our campuses, we must understand the ways students may be advantaged—or not—by the use of computers, especially in assessment situations that have immediate consequences for students. If asking students to perform tasks by hand is at odds with their preferred style of writing, we must change our assessment practices. If reading typed work creates expectations at odds with previously held assumptions about student work, we must change the way we train and orient test raters. This study suggests that, on our campus at least, computer-composed text has become so much a part of the writing and reading practices of many students and faculty that a shift to computerized placement testing will have no impact on students' placements, and a positive impact on student morale. Placement tests must be connected with students' habits as well as students' potentials, and test administrators should examine the ways the test medium can best serve students.

Susanmarie Harrington is Director of Writing and an associate professor of English at Indiana University Purdue University Indianapolis (IUPUI). Her publications include The Online Writing Classroom (Hampton Press, edited with Rebecca Rickly and Michael Day); her research interests include writing assessment and writing program administration. Her e-mail address is <[email protected]>.

Mark D. Shermis is Director of Testing and an associate professor of Psychology at IUPUI and a licensed psychologist in the State of Indiana. His research interests include computerized adaptive testing, electronic assessment techniques, and the use of microcomputers to gather, analyze, and display educational and psychological data. His e-mail address is <[email protected]> and his Web site address is <http://assessment.iupui.edu/testing/shermis>.

Angela L. Rollins is a doctoral student in Clinical Rehabilitation Psychology at IUPUI. Her interests include assessment, intervention, and program evaluation in psychosocial rehabilitation of persons with severe mental illness, particularly focusing on vocational rehabilitation. Her e-mail address is <[email protected]>.

REFERENCES

Arnold, Voiza; Legas, Julia; Obler, Susan; Pacheco, Mary Ann; Russell, Carolyn; & Umbdenstock, Linda. (1990). Do students get higher scores on their word-processed papers? A study in scoring hand-written vs. word-processed papers. Unpublished paper, Rio Hondo College, Whittier, CA.
Barritt, Loren; Stock, Patricia; & Clark, Francelia. (1986). Researching practice: Evaluating assessment essays. College Composition and Communication, 38, 67–90.
Black, Laurel; Daiker, Donald; Sommers, Jeff; & Stygall, Gail. (1994). Writing like a woman and being rewarded for it: Gender, assessment, and reflective letters from Miami University's student portfolios. In Laurel Black, Donald Daiker, Jeff Sommers, & Gail Stygall (Eds.), New directions in portfolio assessment: Reflective practice, critical theory, and large scale scoring (pp. 235–247). Portsmouth, NH: Heinemann/Boynton-Cook.
Bridwell, Lillian; Sirc, Geoffrey; & Brooke, Robert. (1985). Revising and computing: Case studies of student writers. In Sarah Freedman (Ed.), The acquisition of written language: Revision and response (pp. 172–194). Norwood, NJ: Ablex.
Bridwell-Bowles, Lillian; Johnson, Parker; & Brehe, Steven. (1987). Composing and computers: Case studies of experienced writers. In A. Matsuhashi (Ed.), Writing in real time: Modeling composing processes (pp. 81–107). Norwood, NJ: Ablex.
Bunderson, C. Victor; Inouye, Dillon K.; & Olsen, James B. (1989). The four generations of computerized educational measurement. In Robert L. Linn (Ed.), Educational measurement (3rd ed., pp. 367–407). Washington, DC: American Council on Education.
Burley, Hansel. (1994). Postsecondary novice and better than novice writers: The effects of word processing and a very special computer assisted writing lab. Paper presented at the Southwestern Educational Research Association Meeting, San Antonio, TX.
Chase, Clinton I. (1986). Essay test scoring: Interaction of relevant variables. Journal of Educational Measurement, 23, 33–41.
Cochran-Smith, Marilyn. (1991). Word processing and writing in elementary classrooms: A critical review of related literature. Review of Educational Research, 61, 107–155.
Collier, Richard M. (1983). The word processor and revision strategies. College Composition and Communication, 34, 149–155.
Cooper, Charles, & Odell, Lee. (Eds.). (1977). Evaluating writing: Describing, measuring, judging. Urbana, IL: National Council of Teachers of English.
Coulter, Catherine A. (1986). Writing with word processors: Effects on cognitive development, revision, and writing quality. (Doctoral dissertation, University of Oklahoma). Dissertation Abstracts International, 47, 2551A.
Daiute, Colette. (1983). The computer as stylus and audience. College Composition and Communication, 34, 134–145.
Daiute, Colette. (1985). Do writers talk to themselves? In Sarah Freedman (Ed.), The acquisition of written language: Revision and response (pp. 133–159). Norwood, NJ: Ablex.
Dean, Robert L. (1986). Cognitive, pedagogic, and financial implications of word processing in a freshman English program: A report on two years of a longitudinal study. Paper presented at the annual meetings of the Association for Institutional Research, Orlando, FL. (ED 280 384).
Dickinson, David K. (1986). Cooperation, collaboration, and a computer: Integrating a computer into a first-grade writing program. Research in the Teaching of English, 20, 357–378.
Freedman, Aviva, & Clarke, Linda. (1988). The effect of computer technology on composing processes and written products of grade 8 and grade 12 students. (Education and Technology Series). Toronto: Ontario Department of Education.
Haas, Christina. (1989a). How the writing medium shapes the writing process: Effects of word processing on planning. Research in the Teaching of English, 23, 181–207.
Haas, Christina. (1989b). "Seeing it on the screen isn't really seeing it": Computer writers' reading problems. In Gail E. Hawisher & Cynthia L. Selfe (Eds.), Critical perspectives on computers and composition instruction (pp. 16–29). New York: Teachers College Press.
Haas, Christina, & Hayes, John. (1986). Pen and paper v. the machine: Writers composing in hard copy and computer conditions. Pittsburgh: Carnegie-Mellon Technical Report No. 16. (Available ED 265 563).
Harrington, Susanmarie. (1998). New visions of authority in placement test rating. WPA: Journal of the Council of Writing Program Administrators, 22, 53–84.
Haswell, Richard H. (1988). Dark shadows: The fate of writers at the bottom. College Composition and Communication, 39, 303–315.
Haswell, Richard H., & Wyche-Smith, Susan. (1995). A two-tiered rating procedure for placement essays. In Trudy Banta (Ed.), Assessment in practice: Putting principles to work on college campuses (pp. 204–207). San Francisco: Jossey-Bass.
Hawisher, Gail. (1987). The effects of word processing on the revision strategies of college freshmen. Research in the Teaching of English, 21, 145–159.
Hawisher, Gail. (1989). Research and recommendations for computers and composition. In Gail E. Hawisher & Cynthia L. Selfe (Eds.), Critical perspectives on computers and composition instruction (pp. 44–69). New York: Teachers College Press.
Hawisher, Gail, & Fortune, Ron. (1988, April). Research into word processing and the basic writer. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA. (Available ED 298 943).
Huot, Brian. (1990). The literature of direct writing assessment: Major concerns and prevailing trends. Review of Educational Research, 60, 237–263.
Huot, Brian. (1996). Toward a new theory of writing assessment. College Composition and Communication, 47, 549–566.
Kaplan, Howard. (1986). Computers and composition: Improving students' written performance. (Doctoral dissertation, University of Massachusetts). Dissertation Abstracts International, 47, 776A.
Markham, Lynda. (1976). Influences of handwriting quality on teacher evaluation of written work. American Educational Research Journal, 13, 277–283.
Mzumara, Howard R.; Shermis, Mark D.; & Wimer, Deborah. (1996). Validity of the IUPUI placement test scores for course placement: 1995–1996. Indianapolis, IN: IUPUI Testing Center.
Neuwirth, Christine; Haas, Christina; & Hayes, John. (1990). Does word processing improve students' writing? A critical appraisal and assessment. Final report to FIPSE. (Available ERIC: ED 329 994).
Peterson, Sarah. (1993). A comparison of student revisions when composing with pen and paper versus word-processing. Computers in the Schools, 9, 55–69.
Powers, Donald E.; Fowles, Mary E.; Farnum, Marisa; & Ramsey, Paul. (1994). Will they think less of my handwritten essay if others word-process theirs? Effects on essay scores of intermingling handwritten and word-processed essays. Journal of Educational Measurement, 31, 220–233.
Rhodes, Barbara K., & Ives, Nancy. (1991). Computers and revisions—Wishful thinking or reality? (Available ERIC: ED 331 045).
Russell, Michael, & Haney, Walt. (1997). Testing writing on computers: An experiment comparing student performance on tests conducted via computer and via paper-and-pencil. Education Policy Analysis Archives, 5. Available: <http://olam.ed.asu.edu/epaa/v5n3.html> [Accessed February 17, 1997].
Ruth, Leo, & Murphy, Sandra. (1988). Designing writing tasks for the assessment of writing. Norwood, NJ: Ablex.
Schwartz, Helen; Fitzpatrick, Christine; & Huot, Brian. (1994). The computer medium in writing for discovery. Computers and Composition, 11, 137–149.
Shermis, Mark D.; Mzumara, Howard; Brown, Michael; & Lillig, Clotilde. (1997). Computerized adaptive testing through the World Wide Web. Paper presented at the annual meeting of the American Psychological Association, Chicago, IL.
Smith, William. (1993). Assessing the reliability and adequacy of using holistic scoring of essays as a college composition placement technique. In Michael M. Williamson & Brian Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (pp. 142–205). Cresskill, NJ: Hampton Press.
White, Edward. (1995). Teaching and assessing writing (2nd ed.). San Francisco: Jossey-Bass.
Wolfe, Edward W.; Bolton, Sandra; Feltovich, Brian; & Bangert, Art W. (1996). A study of word processing and its effects on student essay writing. Journal of Educational Computing Research, 14, 269–283.
Wolfe, Edward W.; Bolton, Sandra; Feltovich, Brian; & Niday, Donna. (1996). The influence of student experience with word processors on the quality of essays written for a direct writing assessment. Assessing Writing, 3, 123–147.