
English for Specific Purposes, Vol. 9, pp. 109-121, 1990. Printed in the USA. Pergamon Press plc. 0889-4906/90 $3.00 + .00. Copyright © 1990 The American University.


Providing Relevant Content in an EAP Writing Test

John Read

Abstract - This article considers the question of how best to elicit samples of writing for assessment in an EAP proficiency test and, more specifically, how to ensure that every test-taker has something to write about. Three types of writing tasks - independent, guided, and experience - are defined in terms of the type of preparation or guidance that they provide for the writer. Examples of each type of task are cited from contemporary EAP tests, and it is argued that guided and experience tasks are more satisfactory than independent ones. The article then analyses the three guided and experience tasks used in a writing test that is administered at the end of a three-month EAP course. The reliability and validity of the test are discussed, and this leads to a more general consideration of the issue of how to validly assess the proficiency of university students in academic writing.

Introduction

Foreign students preparing to undertake studies at English-medium universities must develop a range of academic writing skills, as recent needs analyses (e.g. Bridgeman and Carlson, 1983; Horowitz, 1986; Weir, 1988b) have clearly demonstrated. Thus, in the field of EAP proficiency testing, there is increasing recognition of the importance of direct measures of writing ability, in spite of the problems involved in achieving a high level of reliability in the marking of the scripts. The introduction of the Test of Written English into the TOEFL test battery is an obvious indication of this trend, while other major EAP tests like ELTS and the Michigan English Language Assessment Battery (MELAB) have included a writing component for some time. In assessing writing skills in a proficiency test, there are essentially two stages: first, eliciting a sample of writing; second, evaluating it. When we look at current testing practices, we find that more attention is devoted to the second stage than to the first. Although the need for reliable marking procedures is generally recognised, relatively less consideration seems to be given to the question of whether the test will elicit a valid sample of the students' writing. This question arises specifically in relation to the timed impromptu test, which is still the predominant form of writing assessment, especially in the United States. In such tests, the task specification is typically referred to as a "topic," a "stimulus," or a "prompt," all terms which suggest a minimal statement of the task and an assumption that the testees will basically supply the content of the writing themselves.

Address correspondence to: John Read, English Language Institute, Victoria University of Wellington, PO Box 600, Wellington, New Zealand.


The purpose of this article is first to explore the issue of how to elicit samples of writing for the purposes of proficiency assessment in EAP, and secondly, to analyse the design and development of a particular EAP proficiency test, one that incorporates a variety of writing tasks in an effort to sample the students' writing ability in a more valid way.

The Background Knowledge Factor

The starting point for this work was a consideration of the role that background knowledge of the subject matter might play in writing performance. In the case of reading, it is now widely accepted, on the basis of the research by Carrell (1984, 1987), Johnson (1981), and others, that background knowledge is a very significant factor in the ability of second language readers to comprehend a written text. Furthermore, testing researchers, such as Alderson and Urquhart (1985) and Hale (1988), have produced some evidence that a lack of relevant background knowledge can affect performance on a test of reading comprehension (although admittedly the evidence so far is not as strong as one might have expected). Until very recently, there appear to have been no comparable studies of the role of background knowledge in second language writing, but it seems reasonable to expect that it does have a similar effect: someone is likely to write better about a familiar topic than an unfamiliar one. Of course, in general terms, this factor has long been recognised as a significant one in the testing of writing, and there are various ways in which testers have sought to minimise its effect. One approach is to give careful consideration to the choice of topic. Jacobs, Hartfiel, Hughey, Wormuth, and Zingraf (1981, pp. 12-15) suggest, among other things, that the topic should be appropriate to the educational level and interests of the students, should motivate them to communicate with the reader, and should not be biased in favour of any particular subgroup among them. In other words, the topic should be about a subject that all potential test-takers have enough relevant information on, or opinions about, to be able to write to the best of their ability. On the other hand, the topic should not be too simple or predictable. The dilemma is clearly presented by Herzog in a discussion of a new writing test for applicants for language teaching positions at the Defense Language Institute (DLI):

DLI sought to create topics that no category of applicants would be likely to have prepared for in advance. At the same time, the topics had to be of interest to the average applicant. The task of narrowing the test to measure writing proficiency according to the level descriptions and nothing more - not general knowledge, professional preparation, intelligence, or retention of published articles - was not an easy one. (1988, p. 154)

Another solution is to give a choice of topic. In this case, it is assumed that there is a range of interests and backgrounds represented among the test-takers, and so it is hoped that all of them will find at least one topic that motivates them to write as best they can. The problem with this strategy, as Heaton (1988, p. 138) points out, is that it reduces the reliability of the assessment, in that essays written on different topics are less easy to compare than ones that are all on the same topic. A choice of topic may also cause some students to lose valuable writing time trying to decide which of the topics they are going to write on. What is proposed here, then, is an alternative approach to the problem of the effect of background knowledge, whereby the test-takers are given a writing task based on content material that is provided to them as part of the testing procedure, rather than simply being given a topic to write on.

Classifying Writing Tasks

In order to clarify how this approach differs from the conventional impromptu writing test, it is useful to refer to Nation's (1989) classification of language learning tasks. In this system, tasks are categorised according to the amount of preparation or guidance that the learners are given. There are three task types that are relevant to the present discussion. They may be defined for our purposes as follows:

1. Independent tasks: The learners are set a topic and expected to write on it without any guidance. This is the conventional approach to the assessment of writing, as discussed above. It assumes that all of the students have background knowledge in the form of information, ideas, and opinions that are relevant to the topic set. Another term which describes this type of task is "free composition."

2. Guided tasks: The learners are provided with guidance while they are writing, in the form of a table, a graph, a picture, or relevant language material. This category includes "guided composition," in which lower proficiency learners are given language support: key vocabulary items, incomplete sentences, paragraph frames, and so on. However, more importantly for our purposes, it also covers tasks in which the learners are provided with content support, especially in the form of nonlinear text material.

3. Experience tasks: The students are given the opportunity to acquire relevant content and skills through prior experience before they undertake the writing task.

Let us now look at the use of the three types of writing task in current EAP proficiency tests.

Independent Tasks

As previously indicated, the timed impromptu test is a prime case of the independent type of task. The influential text, Testing ESL Composition (Jacobs et al., 1981), includes sample scripts written for the composition section of the MELAB, on topics such as:

1. The role of agriculture in my country today.


2. Why young people in my country need a college education.

3. Meeting the energy needs of a modern world - problems and prospects.

Similar topics are used in the current version of the MELAB (English Language Institute, 1984). An issue that has received considerable attention in recent years is how to phrase the topic in the most effective way. There have been a number of studies of this kind in the field of L1 composition. For example, Brossell and Hoetker Ash (1984) investigated the use of a question format rather than the imperative, and the personal "you" instead of more impersonal structures. Others (e.g., Ruth and Murphy, 1984; Hoetker and Brossell, 1986) have looked more generally at the problems of composing prompts that students can readily interpret and respond to. In the case of ESL writing assessment, Hirokawa and Swales (1986) investigated whether the use of formally stated, academic-style stimuli was a better means of eliciting academic prose from MELAB examinees than the simple informal topics normally used in the test. Although there were some significant differences in structural features between the two types of composition, overall it was not possible to conclude that the academic phrasing had elicited more sophisticated or academically appropriate writing than the simply worded topics. That notwithstanding, one current trend in writing tests is to specify more carefully the purpose of the writing and the readership. Here is an example from the proposed Defense Language Institute test for language teaching applicants:

1. Assume that you have just returned from a trip and are writing a letter to a close friend. Describe a particularly memorable experience that occurred while you were traveling.

2. This will be one paragraph in a longer letter to your friend. The paragraph should be about 100 words in length.

3. You will be judged on the style and organization of this paragraph as well as vocabulary and grammar. Remember, the intended reader is a close friend. (Herzog, 1988, p. 155)

From the standpoint of the present discussion, it should be pointed out that this represents an expansion and elaboration of the test rubric, specifying what form the composition should take. In Nation's terms it is still an independent task, with no guidance being offered with respect to the content of the topic.

Guided Tasks

Guided writing tasks, which provide content for the students, are a more recent innovation in EAP proficiency tests. In a preparatory study for the development of the Test of Written English (TWE), Bridgeman and Carlson (1983) included several examples of guided tasks among the set of sample writing topics that they prepared for a survey of faculty in 190 North American university departments with high foreign student enrollments. The guided tasks consisted of nonlinear texts (a bar graph and a pie chart), to be described and interpreted, and linear texts to be summarised and analysed. The tasks with the graph and chart were those that were most favoured by the respondents to the survey, so these constitute one of the two categories of tasks that are included in operational versions of the TWE (Educational Testing Service, 1989, p. 9). One British example of the use of guided tasks is found in the Test in English (Overseas), an EAP proficiency test produced by the Joint Matriculation Board, which has pictures and diagrams as stimuli in the writing section (Porter, 1987, p. 76; Weir, 1988a, pp. 220-221).

Experience Tasks

For the purposes of EAP proficiency testing, an experience task is one in which the testees have the opportunity to study relevant content material before they produce their writing. Tasks of this kind are found in two of the main British EAP tests: the English Language Testing Service (ELTS) and the Test in English for Educational Purposes (TEEP). In both of these tests, writing tasks are linked to some extent with tasks involving other skills, in order to simulate the process of academic study. For example, in ELTS the two writing tasks draw on texts that have already been read by the candidates for the preceding reading comprehension subtest (British Council, 1987). Similarly, in the first paper of TEEP, the candidates work with two types of input on a single topic: a lengthy written academic text and a 10-minute lecture. In addition to answering comprehension questions about each of these sources, testees are required to write summaries of the information presented in each one (Associated Examining Board, 1984). The same kind of test design, where a writing task requires the synthesizing of information from readings and a lecture presented previously on the same topic, is found in the Ontario Test of ESL (OTESL) in Canada (Wesche, 1987).

The preceding review of current practices in the testing of writing indicates that there are two distinct approaches to the elicitation of writing samples. The first approach, which predominates in the United States, involves the setting of what we have called independent tasks, consisting of a succinctly expressed topic or prompt that is designed to draw on ideas or subject matter which the students are (assumed to be) already familiar with. The second approach, exemplified in recent British tests, employs guided and experience tasks that provide the test-takers with content material to work with. This may help to reduce the effects of differences in background knowledge among test-takers and, when the writing tasks are linked to earlier reading and listening tasks, may represent a better simulation of the process of academic study than simply giving a stand-alone writing test.

The Design of the ELI Writing Test

In order to explore further the use of guided and experience tasks in the assessment of writing for academic purposes, let us look at a test developed at the English Language Institute of Victoria University.


The test is administered at the end of a three-month EAP course for foreign students preparing for study at New Zealand universities, and forms part of a larger proficiency test battery, which also includes functional tests of academic reading and listening skills. The test results provide a basis for reporting on the students' proficiency to sponsoring agencies (where applicable) and to the students themselves. They are not normally used for university admission decisions; other measures and criteria are employed for that purpose.

The Three Writing Tasks

The writing test consists of three tasks. The first one is administered at the end of the tenth week of the 12-week course, while the other two tasks are given at the end of the following week. This means that the three samples of the students' writing are collected on two different occasions, which provides a sounder basis for reliability in the assessment.

Task 1

The first task, which is modelled on one by Jordan (1980, p. 49), is an example of the guided type. The test-takers are given a table of information about three grammar books. For each book, the table presents the title, author name(s), price, number of pages, the level of learner for whom the book is intended (basic, intermediate or advanced), and some other features, such as the availability of an accompanying workbook and the basis on which the content of the book is organised. The task is presented to the learners like this: "You go to the university bookshop and find that there are three grammar books available. Explain which one is likely to be the most suitable one for you by comparing it with the other two." This task is perhaps more of a personal writing task than a strictly academic one, although it does require the use of comparison and contrast. Initially it was found that a number of the students tended to convert all the information into linear text in a rather mechanical way, with or without the use of comparative and contrastive devices. Although the wording of the instructions has been changed and a parallel task is given as a practice test a few days earlier, it is still a strategy which some weaker students follow.

Task 2

For the second task, the test-takers are given a written text of about 600 words, which describes the process of steel-making. Together with the text, they receive a worksheet, which gives them some minimal guidance on how to take notes on the text. After a period of 25 minutes for taking notes, the texts are collected and lined writing paper is distributed. Then, the students have 30 minutes to write their own account of the process, making use of the notes that they have made on the worksheet but not being able to refer to the original text.


Since the students are given the opportunity to learn about the subject matter before they undertake the writing, this can be classified as an experience task, in our terms. As might be expected, some of the students spend the note-taking time copying whole chunks of the reading text on to the worksheet and then recopy those chunks on their writing paper during the writing time. However, the majority of them do not; they make a real effort to represent the content of the reading text in note form, although it is true that whole phrases and sentences do commonly find their way from the reading text into the students' writing. This may partly reflect a problem with the particular text used for the task, since some of the information is presented in a concentrated form which makes it difficult to rewrite without following the original text fairly closely. Therefore, the text for this type of writing task needs to be carefully chosen. It is this kind of problem that is often used as an argument against using a linear text - as distinct from a diagram or a graph - as stimulus material in a writing test. From our experience, the problem is not simply the fact of the copying, but also the relatively longer time that is required for marking the scripts, because the markers need to refer back regularly to the original text in order to check the extent to which the student has copied from it. On the other hand, in defence of the task in its present form, one can argue that it does simulate to some degree an activity that the students will be engaging in in their academic studies, namely, making notes on material in their textbooks and other written sources and then incorporating that same information in their own written essays, assignments, theses, and so on. Many of the students, especially those who are weak at writing, have a tendency to plagiarise from their sources, and this writing task gives an indication of whether they have the ability to express the ideas from the text in their own words. The task also requires some skill at summarizing, because very few of the students can produce a text of equivalent length to the original within the half hour that they are given for the writing.

Task 3

The third task, like the second one, is intended to simulate part of the process of academic study. In this case, the preparation for the test begins five days beforehand, because in the week leading up to the test, all of the eight groups that make up the course study a topic for up to five hours in class. The topic used so far has been the subject of food additives. The class teachers are provided in advance with a set of three key articles on the topic and they prepare materials and activities that are appropriate for the proficiency level of their group. The class activities typically include reading extracts from the key articles, listening to a talk by the teacher, taking notes, having a class debate, and discussing how to organise an answer to a specific question related to the topic. This approach means that each group is prepared for the test in a somewhat different way, although in practice the teachers work cooperatively and share materials.


The only restrictions placed on the class preparation for the test are, first, that no more than five class hours should be spent on it and, second, that the students should not do in class a practice version of the test on which they receive feedback from the teacher. The week's activities are intended to represent a kind of mini-course on the topic of food additives, leading up to the test on the Friday, when the students are given an examination-type question related to the topic and they have 40 minutes to write an answer to it. The question is not disclosed in advance either to the students or the teachers. An example of a recent question was: "Processed foods contain additives. How safe is it to eat such foods?"

This third part of the test is a clear example of an experience task. It provides the students with multiple opportunities during the week to learn about the topic (to acquire relevant background knowledge), both through the class work and any individual studying they may do. Of course this does not eliminate differences in background knowledge among the students on the course. However, we consider it sufficient if all of the students learn enough about the topic of food additives to be able to write about it knowledgeably in the test on the Friday. In this form, with its three separate tasks, the test has proved to be workable in our situation. Obviously, with guided tasks and more particularly with experience ones, test-takers need a significant additional amount of preparation time before they undertake the actual writing task. In our case, we have been able to integrate much of the preparation into the instructional time on the course, but in other testing contexts this may not be practical, for example, in the assessment of large numbers of students for admissions or placement purposes. On the other hand, as previously noted, the developers of tests such as ELTS and TEEP have found ways to incorporate experience tasks into proficiency tests that are administered on a large scale internationally.

Evaluating the Test

Rating Procedure

The scripts are rated by the teachers on the course, who are rostered in teams of seven or eight to work on each of the three writing tasks. The rating is carried out using the double impression method. In other words, each of the scripts is rated on a six-point scale by two teachers working independently, on the basis of their overall impression of the quality of the writing in relation to the task that was set. This process results in six ratings of a student's writing proficiency: two for each of the three scripts. An overall rating is obtained by adding the six ratings to get a total out of 36. Various steps have been taken to standardise the judgements of the raters. At the beginning of each rating session, the raters are briefed on the writing task and given a copy of the rating scale, on which different levels of performance are described in terms of typical characteristics.
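To make the scoring arithmetic concrete, a minimal sketch is given below. It is purely illustrative, not an official implementation of the ELI procedure; the function and variable names are invented for the example, and the divergence check simply reflects the review rule described later in the Reliability section.

```python
# Illustrative sketch of the scoring scheme described above: two independent
# ratings per task on a 1-6 scale, three tasks, summed to a total out of 36.
# Pairs diverging by more than one point are flagged for review.

def overall_rating(ratings_by_task):
    """ratings_by_task: dict mapping task name -> (rater_1, rater_2)."""
    total = 0
    flagged = []
    for task, (r1, r2) in ratings_by_task.items():
        assert 1 <= r1 <= 6 and 1 <= r2 <= 6, "ratings use a six-point scale"
        total += r1 + r2
        if abs(r1 - r2) > 1:
            flagged.append(task)      # send this script to the review procedure
    return total, flagged             # total is out of 36

# Example: a mid-range candidate
score, review = overall_rating({"Task 1": (4, 4), "Task 2": (3, 5), "Task 3": (4, 3)})
print(score, review)  # 23 ['Task 2']  (Task 2 ratings are two points apart)
```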


TABLE 1
Inter-Rater Reliability

Raters        Correlation    N

Task 1
A and B          0.82        44
C and D          0.66        46
E and F          0.87        21
E and G          0.80        22
F and G          0.48        22

Task 2
A and H          0.69        43
I and J          0.82        36
K and L          0.85        43
M and N          0.69        44

Task 3
E and F          0.81        45
K and L          0.66        44
K and O          0.71        22
N and P          0.15        43

For each of the three tasks, the rating scale is defined slightly differently, in order to reflect the different types of text produced. The team of raters sit together in one room to do the assessment so that they are able to seek clarification of the rating criteria and discuss problems of interpretation as they arise.

Reliability

In order to check the inter-rater reliability of the test, the correlations between pairs of raters were calculated for each of the three tasks in the most recent administration of the test. The results are shown in Table 1. Clearly, from the point of view of reliability, it is unsatisfactory to have so many different raters involved, but this has been difficult to avoid, given the limited availability of the teachers for the task and the prevailing feeling that the "chore" of rating the scripts should be shared as widely as possible. There is a standard procedure for reviewing scripts in cases where the ratings diverge by more than one point on the scale. To obtain a general measure of the reliability of each task, the individual correlation coefficients were averaged using Fisher z transformations. The averages were as follows: Task 1: 0.75; Task 2: 0.77; Task 3: 0.73. Since different teams of raters worked on each task, it is not appropriate to draw any conclusions from these figures about the relative reliability of the three tasks.
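The averaging step can be spelled out as follows. This is a minimal sketch of the standard Fisher z procedure (transform each r, average the z values, back-transform), not code taken from the study; it assumes the simple unweighted form of the average, which reproduces the reported Task 1 figure from Table 1.

```python
# Fisher z averaging of Pearson correlations: z = arctanh(r), average the
# z values, then back-transform the mean with tanh.

import math

def fisher_average(correlations):
    """Average a list of Pearson correlations via the Fisher z transformation."""
    z_values = [math.atanh(r) for r in correlations]
    return math.tanh(sum(z_values) / len(z_values))

task1 = [0.82, 0.66, 0.87, 0.80, 0.48]   # Task 1 coefficients from Table 1
print(round(fisher_average(task1), 2))   # 0.75
```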


However, it is obviously desirable to improve the overall reliability of the ratings, and we are currently exploring ways of achieving this by using a smaller number of better trained raters.

Validity

As indicated in the discussion of the three test tasks above, the test design was based on a prior analysis of the typical writing needs of students in New Zealand universities. However, it was not possible at that time to undertake the kind of extensive empirical research into writing needs that was done by, for example, Bridgeman and Carlson (1983) or Weir (1988b), and therefore, in this sense, the content validity of the test has not been formally established. It should be noted, though, that even if such empirical data were available, the determination of the test content would not be a straightforward matter, because of the range of disciplinary areas and academic levels (undergraduate vs. graduate) represented among the 150-170 students who take the test each year.

A measure of the concurrent validity of the test was obtained by correlating the overall rating for writing with teacher ratings. The latter were obtained two weeks before the test was administered by simply asking teachers to rate the students in their group on the same six-point scale that is used for the final overall rating of the students' proficiency. The correlation coefficient was 0.673. Although the correlation was not particularly high, it is probably a satisfactory one, given the limitations of the teacher ratings as a criterion measure. For one thing, the teachers were asked to rate the students' overall proficiency, and to give a specific rating for writing only if the student's writing ability seemed to be significantly different from his or her general proficiency in the language. Also, teachers often take a broad view of their students' writing ability in making their ratings, whereas the writing test focuses more specifically on academic writing. Thus, students who produce good personal writing in class may be rated down in the test if they employ the same informal style in tasks for which a more academic style is appropriate.

On the other hand, the lack of a strong relationship between the test ratings and the teacher ratings probably also reflects the limitations of the test situation as a way of assessing writing proficiency. There were a number of students who received lower ratings in the writing test than their teachers expected, based on the students' performance in class. These students were capable of producing good (clearly expressed and well-organised) academic papers, which might have required sentence-level editing, but were otherwise of high quality. They were usually postgraduate students, often with professional work experience in their own countries; however, they wrote slowly and needed more time than the writing test allowed them to demonstrate their academic writing ability. Here we see one of the limits to the validity of this or any writing assessment which involves strict timing and other test conditions.


Students who do not perform well in this situation may nevertheless be quite able to produce good research papers and theses, which, for postgraduate students, is one of the major requirements of their studies. Even a substantially increased time allocation would not alter the fact that the students are being required to write under constraints that do not normally apply to the writing process. One way of broadening the measure of writing ability would be to incorporate the class teacher ratings into the final assessment. This is already done informally to some extent through the teacher comments that are included on each student's proficiency report form. However, there would be obvious problems with reliability if the teacher ratings were to be used in a more formal way. Another possibility that we are interested in is portfolio assessment (e.g., Katz, 1988, pp. 196-198), which would involve collecting a standard set of different types of writing completed by each student during the course and then having them assessed according to agreed criteria by one or two teachers other than the class teacher. Once again, there are practical difficulties in implementing this approach, but it may be valuable as a supplementary method of assessment, especially in the case of postgraduate students whose primary academic writing activities are the preparation of theses and research papers.

Conclusion

It is clear from the preceding discussion that the validity of the ELI writing test is an open-ended issue, which requires further investigation. Apart from the factors already mentioned, there is the question of the predictive validity of the ratings. Are we accurately predicting the extent to which individual students encounter difficulties in coping with the writing requirements of their academic courses? These various matters will be considered as part of a revision of the whole proficiency test battery, which is to be undertaken soon. At the same time, there is a need for more research on the effects of the type of task on performance in EAP writing tests. In the test reported here, and in other tests referred to in the literature review, tasks of the guided and experience type have been preferred on theoretical grounds to the conventional independent task, but as yet we lack empirical evidence of their superiority. We need to know in particular whether guided and experience tasks address to any significant extent the problem of differences in background knowledge among those taking a writing test. Indeed, at a more basic level, the effects of background knowledge on writing performance have yet to be adequately investigated.

REFERENCES

Alderson, J.C., & Urquhart, A.H. (1985). The effect of students’ academic discipline on their performance on ESP reading tests. Language Testing, 2, 192-204.

Associated Examining Board. (1984). The test in English for educational purposes. Aldershot, England: Author.


Bridgeman, B., & Carlson, S. (1983). Survey of academic writing tasks required of graduate and undergraduate foreign students (TOEFL Research Report No. 15). Princeton, NJ: Educational Testing Service.

British Council. (1987). An introduction to the English Language Testing Service. London: Author.

Brossell, G., & Hoetker Ash, B. (1984). An experiment with the wording of essay topics. College Composition and Communication, 35, 423-425.

Carrell, P.L. (1984). The effects of rhetorical organization on ESL readers. TESOL Quarterly, 18, 441-469.

Carrell, P.L. (1987). Content and formal schemata in ESL reading. TESOL Quarterly, 21, 461-481.

Educational Testing Service. (1989). Test of written English guide. Princeton, NJ: Author.

English Language Institute, Testing and Certification Division. (1984). Michigan English Language Assessment Battery. Ann Arbor: University of Michigan.

Hale, G.A. (1988). Student major field and text content: Interactive effects on reading comprehension in the Test of English as a Foreign Language. Language Testing, 5, 49-61.

Heaton, J.B. (1988). Writing English language tests (New edition). London: Longman.

Herzog, M. (1988). Issues in writing proficiency assessment. Section 1: The government scale. In P. Lowe, Jr., & C.W. Stansfield (Eds.), Second language proficiency assessment: Current issues (pp. 38-67). Englewood Cliffs, NJ: Prentice Hall.

Hirokawa, K., & Swales, J. (1986). The effects of modifying the formality level of ESL composition questions. TESOL Quarterly, 20, 343-345.

Hoetker, J., & Brossell, G. (1986). A procedure for writing content-fair essay examination topics for large-scale writing assessments. College Composition and Communication, 37, 328-335.

Horowitz, D.M. (1986). What professors actually require: Academic tasks for the ESL classroom. TESOL Quarterly, 20, 445-462.

Jacobs, H.L., Zingraf, S.A., Wormuth, D.R., Hartfiel, V.F., & Hughey, J.B. (1981). Testing ESL composition: A practical approach. Rowley, MA: Newbury House.

Johnson, P. (1981). Effects on reading comprehension of language complexity and cultural background of a text. TESOL Quarterly, 15, 169-181.

Jordan, R.R. (1980). Academic writing course. London: Collins.

Katz, A. (1988). Issues in writing proficiency assessment. Section 2: The academic context. In P. Lowe, Jr., & C.W. Stansfield (Eds.), Second language proficiency assessment: Current issues (pp. 145-171). Englewood Cliffs, NJ: Prentice Hall.

Nation, P. (1989). A system of tasks for language learning. Paper presented at the RELC Regional Seminar on Language Teaching Methodology for the Nineties, Singapore.


Porter, D. (1987). Review of Test in English (Overseas). In J.C. Alderson, K.J. Krahnke, & C.W. Stansfield (Eds.), Reviews of English language proficiency tests (pp. 76-77). Washington, DC: TESOL.

Ruth, L., & Murphy, S. (1984). Designing topics for writing assessment: Problems of meaning. College Composition and Communication, 35, 410-422.

Weir, C.J. (1988a). Communicative language testing, with special reference to English as a foreign language (Exeter Linguistic Studies, Vol. 15). Exeter: University of Exeter.

Weir, C.J. (1988b). The specification, realization and validation of an English language proficiency test. In A. Hughes (Ed.), Testing English for university study (ELT Documents 127). London: Modern English Publications.

Wesche, M.B. (1987). Second language performance testing: The Ontario Test for ESL as an example. Language Testing, 4, 28-47.

John Read teaches EAP and applied linguistics at Victoria University of Wellington, New Zealand, having previously taught at universities in the United States and at the Regional Language Centre (RELC), Singapore. He is the editor of the RELC anthologies Directions in Language Testing (1981) and Trends in Language Syllabus Design (1984).