Grading student clinical practice performance: the Australian perspective

Kate Andre
Educators have long considered assigning a grade in the assessment of student clinical practice performance as too variable, too subjective, or educationally inappropriate. Consequently, most undergraduate nurse education programs have maintained non-graded pass/fail criteria assessment for evaluating and reporting student clinical performance. This paper argues that, while the varied clinical environments do make assessing students’ clinical performance difficult, the reliability and validity of assessment practices should be maintained no matter what the grading system. Furthermore, many assumptions about criterion-referenced assessment, and the associated preclusion of graded assessment, are seen to be baseless when dealing with competencies rather than with traditional behavioural objectives. By combining both criterion-referenced and norm-referenced assessment, graded assessment can clarify and impress on students the minimal competency requirements, while at the same time describing and rewarding meritorious practice. It is argued that to do so would provide a more appropriate way of communicating the assessment of theory application in praxis, a highly relevant issue in a practice-based discipline such as nursing. It is now appropriate to reconsider the debate of graded student clinical performance assessments, and to investigate both the practicalities and benefits of this approach.
Kate Andre RN, BN, MN, Lecturer in Nursing, School of Nursing and Midwifery, Division of Health Sciences, University of South Australia, City East Campus, GPO Box 2471, Adelaide, SA 5001, Australia. Tel.: +08 8302 1442; E-mail: kate.andre@unisa.edu.au

Manuscript accepted: 7 June 2000

Nurse Education Today (2000) 20, 672–679
doi:10.1054/nedt.2000.0493, available online at http://www.idealibrary.com
© 2000 Harcourt Publishers Ltd

Introduction
It is not the objective of this article to prescribe how graded assessment should be implemented in the evaluation of student nurses’ clinical performance. Rather, it is intended to highlight the need for grading to be given consideration in designing clinical performance assessment approaches. To encourage debate on the issue of graded performance assessment, this article draws upon both general and nursing literature. In doing so, the issues and the traditional preference for criterion-referenced assessment are explained. The use of graded competency-based performance measures, both generally and as applied to nursing, provides the basis for further discussion about assessing workplace performance. Finally, the requirements of graduate nurse employers, and the impact these are having on student satisfaction with current assessment practices, are discussed.

With the current international trend for nurse education to be situated in the higher education/university sector, clinical assessment according to merit rather than pass/fail criteria is becoming increasingly relevant. A practice-based discipline such as nursing, which espouses the value of applying skills to practice, needs to consider how such value can be communicated in academic form. Evidence and experience suggest that varied levels of performance exist in clinical practice, including performance beyond a merely ‘acceptable’ standard (Benner 1984).
The use of pass/fail grades for clinical practice assessment limits the communication of performance standards to a distinction between acceptable and unacceptable nursing practice. A meritorious grading system, however, communicates standards of performance beyond a mere pass, including recognition of exemplary standards. High-achieving students are disadvantaged by pass/fail grading systems, as their accomplishments are not readily communicated to employing bodies, selection committees for post-graduate programs, scholarships and the like (Biggs 1992). Considering that the objective of an undergraduate nursing program is to assist in the development of clinically applied skills, a failure to give appropriate recognition to the application of these skills biases the recognition of student merit towards the theoretical component of nursing rather than its praxis.

In preparing for this debate it is necessary to clarify what is meant by graded versus non-graded assessment. For the purpose of this paper, graded assessment is defined as the practice of assessing and reporting levels of performance that recognize merit or excellence beyond the issuing of a pass grade. As will be explained later, graded assessment may contain components of both criterion-referenced assessment (pass grade criteria) and norm-referenced assessment (merit beyond proficiency requirements). It is envisaged that the graded categories would be consistent with standard university/institutional graded assessment policy: for example, 85–100% would be classified as a high distinction, 75–84% as a distinction, and so forth. It is also envisaged that clinical grades would be reported alongside academically based grades, either in subjects combining scores for theoretical and clinically applied assessments or in stand-alone clinical application subjects. Non-graded assessment, on the other hand, issues either a pass or a fail grade, with no distinction of merit beyond satisfactory achievement.
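To make the combined criterion- and norm-referenced scheme described above concrete, the following minimal sketch (in Python, purely for illustration) maps a percentage score to a grade category. Only the high distinction and distinction boundaries are taken from the policy illustration above; the 50% pass criterion and the credit band are assumptions added for the example, and none of this is intended as a prescribed instrument.

def clinical_grade(score: float, pass_mark: float = 50.0) -> str:
    """Map a percentage score to a grade category.

    Illustrative sketch only: the pass mark and the credit band are
    assumptions; in practice, band boundaries would be set by
    institutional assessment policy.
    """
    if not 0.0 <= score <= 100.0:
        raise ValueError("score must be a percentage between 0 and 100")
    if score < pass_mark:
        return "Fail"              # criterion-referenced: minimum competency not met
    if score >= 85.0:
        return "High Distinction"  # merit bands beyond the pass criterion
    if score >= 75.0:
        return "Distinction"
    if score >= 65.0:
        return "Credit"            # assumed intermediate band
    return "Pass"

print(clinical_grade(78.0))  # prints: Distinction

Separating the pass criterion from the merit bands in this way keeps the safety-critical pass/fail judgment criterion-referenced, while the bands above it carry the normative, merit-communicating information.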
The graded assessment debate as it stands in the literature

The debate over graded, competency-based assessment, though well established in the general literature (Wolfe 1993, Thompson et al. 1996), is limited in recent nursing literature to brief justifications for assessment instrument development (Donoghue & Pelletier 1991, Glover et al. 1997, Hill 1998). The following section overviews the contribution of this literature, and aims to extend the debate with particular application to undergraduate nurse education.

The various uses of the term ‘competency’, and its associations with competency-based training and assessment, complicate the debate about appropriate assessment of vocational performance. Competency statements have been broadly defined as statements pertaining to the levels of performance outcome required for professional entry (Hager & Gonzi 1993). Much of the Australian literature on competencies has been produced from the perspective of vocational training institutions such as the Technical and Further Education (TAFE) institutions. The educational requirements of these institutions are generally focused toward trade-based vocations and the provision of prescriptive skill outcomes regulated by industrial requirements (Thompson et al. 1996). For this reason, the competency statements and assessment requirements associated with trade-based vocations, though less prescriptive than behavioural objectives, are more narrowly defined than many professional competency statements, such as that currently utilized in Australian nursing practice (Australian Nursing Council Inc. 1998).

The use of competency statements for the regulation of nursing practice was initially established in Australia in the 1980s (Australasian Nurse Registering Authorities Conference 1982, Australian Nursing Council Inc. 1993a). In their current form, competencies for Registered Nurse practice are based on four major domains: professional/ethical practice, reflection, problem solving, and enabling (Australian Nursing Council Inc. 1998). Within each domain is a series of competency units, competency elements and, finally, suggested cues that may assist in identifying the specific competency (Australian Nursing Council Inc. 1998).

Competency statements differ significantly from behavioural objectives in that the former are sufficiently broad to allow for interpretation encompassing the application of a diversity of skills within various complex environments. Behavioural objectives, such as those describing the standards of practice for specific nursing acts (for example, the administration of insulin), are highly prescriptive and detailed. While behavioural objectives are potentially useful in classroom education, such detail makes their use unwieldy when describing the entirety of clinical practice (Australian Nursing Council Inc. 1993b).
Criterion-referenced assessment

Historically, the preference for ‘single cut-off’ pass/fail grades in competency-based assessment is due to its association with criterion-referenced assessment principles (Wolfe 1993). This approach requires that the criteria to be met for a pass grade be set at a level considered appropriate for proficiency, or ‘mastery’, as it is otherwise termed (McMillan 1997). The method has been criticized for limiting teaching and learning approaches, as it provides no incentive to extend beyond this basic, though proficient, level (Wolfe 1993). It is clear that assessment is a significant stimulus for student learning, influencing not only the volume of content learned but also the learning approach undertaken (Ramsden 1991, Orell 1996, Gibbs 1999). By providing specific but limited criteria for student assessment, criterion-referenced assessment has been criticized for creating a dependent and non-critical student learning environment (Wolfe 1993). In opposition to this, however, concerns have been expressed that norm-referenced assessment (assessment based on a normative distribution of grades) stimulates a competitive educational environment, inappropriate to the needs of the adult learner (Thompson et al. 1996). This latter concern is perhaps most applicable in teaching environments such as programs for the long-term unemployed, where students may react against the control and authority of mainstream education. Current norm-referenced practice need not require that the number of grades in each category be limited, thus reducing concerns about undue competition (McMillan 1997).

Much of the argument against grading competency-based assessment stems from a dichotomous view of criterion- and norm-referenced assessment. It is perhaps more appropriate for norm- and criterion-referenced assessment to be considered as a continuum; most assessment practices, of both theoretical knowledge and clinical application, share aspects of each (Linn & Gronlund 1995). In reality, academics and other assessors set assessment grades on the basis of what is considered a minimum level of practice for a pass grade (criterion-based judgment), in combination with notions of what the student group is capable of (normative judgment). As Thompson et al. (1996) have pointed out, ‘a given test might be considered to be more towards the criterion-referenced end of the continuum, but still capable of yielding norm-referenced information’. This combined approach has considerable potential when applied to the assessment of nursing practice, enabling assessors and institutions to ensure that safe standards of practice are promoted as the essential criteria for pass grades (criterion-referenced). It is also clear, however, that many students and registered nurses perform at levels in excess of this basic requirement (Benner 1984). It is in recognizing and rewarding meritorious behaviour that components of norm-referenced assessment are utilized.

Strictly speaking, norm-referenced and criterion-referenced assessment refer to the method of interpreting results, rather than to instrument design and implementation (Linn & Gronlund 1995). The two approaches require the same attention to issues of validity, reliability and rigour. To assume otherwise places in jeopardy the quality of assessment and, perhaps most importantly, the predictive validity of the assessment process; that is, its accurate prediction of students’ performance in their subsequent professional careers (Bennet 1993, Linn & Gronlund 1995). However, in order to communicate to students what is meant by given levels of achievement or merit, normative assessment must contain discriminating criteria within the assessment instrument (Linn & Gronlund 1995).
Graded competency-based assessment in Australia

The most comprehensive investigation into graded competency-based assessment in Australia was undertaken by Thompson et al. (1996). Their study investigated the practices and policies of ‘grading’ levels of performance in competency-based vocational education and training programs taught within Technical and Further Education (TAFE) institutions. In Australia, TAFE institutions are responsible for trade-based programs including hospitality, basic accountancy and trade apprenticeships. As a consequence of the restriction of the study to TAFE institutions, Registered Nurse programs were excluded, as these are taught only in universities within Australia. Many of the results of the investigation are nevertheless relevant to this debate.

Thompson et al. (1996) found that policies on, and the use of, graded assessment varied between institutions and Australian states. The major proponents of grading were private educational providers, especially those with fee-paying students, such as tourism and hospitality programs. This support was reportedly based on a belief that grades provided a competitive edge in the program market. Students and employers apparently responded favourably to the type of information communicated through graded assessment criteria (Thompson et al. 1996). Employer demand and satisfaction were cited as major reasons why institutions implemented graded assessment. Other reasons included improved student motivation, the rewarding of student excellence, and the provision of information compatible with promotion and recognition within other educational programs. In addition, grading was thought to inform the assessment process and provide information about the quality of learning achieved. Opponents of graded assessment stated that the practice of grading was inconsistent with the principles of competency-based assessment (Thompson et al. 1996). More specifically, some respondents felt that grading created a competitive environment between students, in which attention is diverted from the need to meet an identified standard to comparisons between individuals (Thompson et al. 1996).

In their report, Thompson et al. (1996) recommended that policy regarding the graded assessment of competency-based training be clarified. Grading in competency-based assessment was considered to have both educational and employment benefits, though implementation should be based on the specific requirements of the relevant professional group and the associated objectives of the assessment. A single, binding policy on the use of graded competency-based assessment for all organizations and assessment situations was considered inappropriate.

Thompson et al. (1996) also expressed concern at the quality of current assessment instruments used for grading performance. In particular, few assessment tools had been tested for validity and reliability, thus potentially contributing to inaccurate reporting of student achievement. Further research was recommended in order to develop quality instruments and to investigate the influence graded assessment has on student learning. Such research should address the need to develop flexible assessment approaches that communicate a variety of quality information regarding student performance (Thompson et al. 1996). Thompson et al.’s (1996) research highlights a major dilemma in the grading of clinical performance: while a high-quality approach will benefit both student and employer, such an approach will require considerable development in order to achieve the desired quality.

The National Training Board (Australia) has endorsed the possible use of grades other than pass/fail in competency-based assessment (Rutherford 1994). In doing so it clearly indicated that performance could be assessed beyond the level of mastery. However, as identified by Thompson et al. (1996), the assessment requirements of vocational programs differ, and grading policy should primarily reflect the needs of the professional group concerned.
Nursing’s contribution to the debate

As previously mentioned, the contribution of recent nursing literature to this debate is limited to brief justifications for the development of individual assessment tools. Though brief, these do provide a relevant insight into the concerns of the authors and of the wider profession. Glover et al. (1997, p. 111) stated that ‘not awarding a grade to clinical was to somehow devalue it (clinical)’. The need to recognize the value of workplace learning and performance has long been argued (Schön 1983, Benner 1984, Krichbaum et al. 1994). However, the implication that awarding an assessment grade to clinical performance somehow attributes value to the concept of clinical practice has not yet been fully debated in the literature. It is clear, however, that both employing organizations and students highly value practical experience; both have expressed the wish that achievements in clinically-based performance be recognized and merit readily communicated (Krichbaum et al. 1994, Yong 1996).

The justification used by Donoghue and Pelletier (1991) in producing an assessment instrument for grading students’ clinical practice was based on the need to enhance both the assessment process and the quality of feedback to students. They stated that ‘quantifying behaviour requires the development of very specific criteria which enhance the validity, objectivity and specificity of student feedback’ (Donoghue & Pelletier 1991, p. 355). Certainly, students frequently express confusion, and wish for additional information both about what is expected of them in clinical practice and about how judgments of their performance are arrived at (Yong 1996). Furthermore, assessment of the clinical performance of student nurses has been widely criticized as being unreliable, subjective and at times invalid (Girot 1993, Clifford 1994, Chambers 1998). The development of criteria to assess merit beyond mere competency would certainly assist in describing the qualities of nursing practice beyond the concept of proficiency. However, while recognizing the potential in Donoghue and Pelletier’s (1991) argument for graded assessment, it must be noted that the assessment instrument they developed did not demonstrate sufficient reliability.

In summary, justification of non-graded pass/fail criteria for clinical performance assessment cannot be maintained solely on the premise that the principles of criterion-referenced assessment do not support grading. Rather, the assessment needs of the student, the educational institution and employing agents should be considered in developing associated policy. Graded, competency-based assessment is generally achievable, though the educational advantages and the quality of instruments require further research. The nursing literature in this field is limited, though sound argument has been provided for graded assessment to be considered as an option.
Assessment practices in nursing

The movement of nurse education into higher educational institutions has meant changes in the locus of responsibility for assessing undergraduate clinical performance. In the past, hospital training programs relied in part on employment structures and ward social pressures to assess and maintain standards in clinical practice (Holloway & Penson 1987). Such an approach to clinical assessment (in effect, the social control of student nurses) now lies within the domain of university schools of nursing. The following overview of current assessment practices reported in the literature will further inform the debate regarding their implications for the student.

‘Continuous assessment’ is the predominant approach currently used, both in Australia and internationally, for the evaluation of undergraduate nurses’ performance in the clinical environment (Clifford 1994, Hill 1998). This approach requires that students be monitored throughout the period of clinical practice experience, with results based on general performance (Clifford 1994). Continuous assessment was implemented to replace the earlier practice-based, behaviourist approach to evaluating clinical skills, in which students had been required to pass a series of discrete practical tests, the prescriptive nature of which made criterion-based assessment most appropriate. That approach had been criticized as being biased toward task-orientated, poorly integrated physical acts of care that failed to represent the complex, interactive human nature of nursing (Hepworth 1991, Girot 1993). Continuous assessment allowed the student to undertake all aspects of patient care, uninterrupted by individual assessment requirements (Clifford 1994). The objective was to have the assessment reflect the complete role of the nurse, and to allow for shared input from clinical and academic staff. However, assessors reported difficulties in dealing with the subjective nature of continuous assessment and, it seems, regularly reverted to using focused objectives to assess students (Hill 1998). Nurse educators were reluctant to consider alternatives to criterion-referenced assessment, possibly as a consequence of the difficulties of managing the complexity, subjectivity and holistic nature of continuous assessment. For whatever reason, criterion-referenced evaluation was maintained as the dominant approach within the continuous assessment framework (Hill 1998).
Continuous assessment of clinical performance has not been unproblematic. The broad nature and variability of the judgments involved have resulted in concerns, expressed within the literature, about the potential for bias, subjectivity and lack of reliability (Girot 1993, Clifford 1994, Chambers 1998). Many institutions have attempted to address these concerns with staff training programs and reliability trials of assessment tools (Donoghue & Pelletier 1991, Krichbaum et al. 1994). The majority of institutions, however, have mistakenly, perhaps unwittingly, attempted to avoid the problem by providing criteria only for non-graded pass/fail clinical performance assessment. It is certainly a mistake to believe that, by only having to justify why a student should or should not pass, the requirements of reliability and validity are somehow reduced (Linn & Gronlund 1995). Clearly, maintaining consistency between assessments, and meeting the need to truly assess the construct of clinical performance (as opposed to issues of personality and the like), is difficult in all forms of assessment. Furthermore, most criterion-referenced clinical assessment tools include a process of performance measurement, often in the form of a Likert scale, Bondy scale or similar (Donoghue & Pelletier 1991, Hawly & Lee 1991, Krichbaum et al. 1994, Hill 1998, Woolley et al. 1998). Such scaling procedures must be established with equal rigour for a criterion-referenced tool as for a norm-referenced instrument.

While information is not available on the worldwide extent of grading in clinical performance assessment, the implication from the international literature is that non-grading approaches dominate (Donoghue & Pelletier 1991, Glover et al. 1997, Hill 1998). Justification for this appears to be based on concerns about assessing the subjective nature of nursing practice (Hill 1998). There can be little dispute that many of the major components of nursing are subjective and contextually based (Benner 1984). In addition, the process of assessing students in clinical practice requires assessors to deal with their own subjective interpretations and values (Hepworth 1989). However, as Hepworth (1991) states, subjectivity does not necessarily render assessment invalid or unreliable; rather, it should be seen as contributing to the value of the assessment of student nurse practice.
Issues for the graduate

The major influence in deciding the format used in assessing and reporting student clinical performance must, of course, be the student. Students are clearly concerned about how their assessments are undertaken, particularly in light of the influence that the clinical report will have on their future employment. The following section provides an overview of how, in Australia, clinical assessment reports are used by employers in selecting recruits, and how this impacts on students’ assessment concerns.

Outside of educational institutions, the grading of workplace performance is uncommon (Thompson et al. 1996). It is therefore interesting to note that amongst the major, though possibly inadvertent, proponents of the graded assessment of clinical performance are the employing institutions. In preparation for this article, the author undertook a series of interviews with staff involved in the recruitment of graduate nurses in South Australian metropolitan hospitals. These revealed that, owing to the large numbers of applicants, employee selection decisions were predominantly based on a combination of the written application, referee reports, the academic record and clinical performance reports. In some instances this information provided the basis for interview shortlists, while other institutions chose not to interview but relied entirely on the written documentation as the selection criteria. Several of the employing institutions weighted each item of documentation, with clinical performance reports generally receiving a 35% weighting. In an attempt to objectify the contents of the reports, it was the policy of some organizations to ‘grade’ the reports according to the ‘quality’ of comments, or by the rating scales inherent in the assessment tools. The use of clinical performance reports in this way arose from employers’ concern that clinical practice standards be recognized in the employee selection process. Academic records were considered inadequate for this purpose, as clinical performance assessments in Australian schools of nursing are predominantly non-graded pass/fail.
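As an arithmetic illustration of the weighted selection process described above, the following sketch (in Python, purely for illustration) combines component marks into a single selection score. Only the 35% weighting for clinical performance reports reflects the interviews reported here; the remaining component names and weights are assumptions invented for the example.

# Illustrative sketch: weighted combination of selection components.
# Only the 35% clinical-report weighting is drawn from the interviews
# described above; the other components and weights are assumptions.
WEIGHTS = {
    "clinical_report": 0.35,      # reported weighting
    "academic_record": 0.25,      # assumed
    "referee_reports": 0.20,      # assumed
    "written_application": 0.20,  # assumed
}

def selection_score(components: dict) -> float:
    """Return a weighted score from component marks on a 0-100 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weight * components[name] for name, weight in WEIGHTS.items())

applicant = {
    "clinical_report": 70.0,
    "academic_record": 82.0,
    "referee_reports": 75.0,
    "written_application": 68.0,
}
print(round(selection_score(applicant), 1))  # prints: 73.6

Note that when the clinical report itself is non-graded pass/fail, the 35% component carries almost no discriminating information, which is precisely the difficulty for recruiters that this section describes.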
The reliability of ‘grading’ the potentially subjective content of clinical performance reports for the purpose of differentiating between applicants must, however, be questioned. Employing agencies have expressed a wish to have the quality of a student’s clinical performance communicated in line with the other forms of their academic results. This would provide a more complete picture of the student’s performance, rather than the current bias toward academic performance, in which merit-based grades are issued for theoretical subjects but non-graded pass/fail results for clinically-applied performance.

With educational institutions increasingly required to justify graduate performance, assessment practices are experiencing greater scrutiny (Nurses Registration Board of New South Wales 1998). Students, knowing that their clinical assessment reports contribute significantly to their employment opportunities, are increasingly requesting justification of comments and scale ratings. Employing agencies are similarly concerned with the validity and reliability of clinically-based assessment practices, and are increasingly expressing concerns about discrepancies between clinical reports and the actual quality of graduate behaviours (personal communication, A. Smith, Assistant Director of Nursing (Education), Flinders Medical Centre, July 1999). Educational institutions are therefore under pressure to provide valid and reliable assessment reports, no matter what assessment approach has been implemented. In this context, graded assessment would advantage the student by clarifying the merit-based criteria used to assess performance. Furthermore, the student’s academic transcript would be more representative of the student’s overall performance, including the application of theory, rather than simply registering academic ability.

As previously stated, it is not the objective of this paper to identify how graded clinical assessment should be instigated, but rather to stimulate debate and encourage further research in this area. Clearly, research would need to be undertaken to identify agreed meritorious criteria and associated assessment tools. Furthermore, quality assessment practices, including staff development and evaluation programs, would need to be instituted to ensure reliability and validity within the assessment process. Those who evaluate the effectiveness of graded clinical assessment would need to consider the influence that such an approach brings to bear on student learning and clinical performance.
The potential for students to be distracted by assessment requirements, to the detriment of their overall performance, must be taken into account. Though the depth of consideration required for this task is beyond the scope of this paper, it is recommended that support be given to projects that assist in the further development of these ideas.

In Australia, at least, where competencies for Registered Nurses are a well-established requirement for practice, the potential of grading clinical performance should be considered. It is clear that research is required, both in the development of quality assessment instruments and to investigate the impact on student learning. However, there is no clear argument that precludes the use of graded assessment in the evaluation of clinical practice. Potential advantages include a more reliable and valid assessment of the degree of theory application in praxis, and an explicit understanding of the grounds on which excellence in performance is communicated and rewarded. Such communication has the potential to inform and motivate students to achieve high standards of practice (Gibbs 1999), to improve the quality of staff selection procedures, and thus to advantage the wider nursing profession and the recipients of care.

References

Australasian Nurse Registering Authorities Conference 1982 Report of the 1982 Australasian Nurse Registering Authorities Conference. Paper presented at the Australasian Nurse Registering Authorities Conference, Canberra, April 28–30
Australian Nursing Council 1993a National competencies for the registered and enrolled nurse in recommended domains. Australian Nursing Council, Canberra
Australian Nursing Council 1993b A self-directed learning guide for assessment of the national competencies (ANRAC), 2nd edn. Australian Nursing Council, Canberra
Australian Nursing Council 1998 ANCI national competency standards for the registered nurse, 2nd edn. Australian Nursing Council, Canberra
Benner P 1984 From novice to expert: excellence and power in clinical nursing practice. Addison-Wesley, Menlo Park, California
Bennet Y 1993 The validity and reliability of assessments and self assessments of work-based learning. Assessment and Evaluation in Higher Education 18(2): 83–94
Biggs J 1992 A qualitative approach to grading students. HERDSA News 14(3): 3–6
Chambers M A 1998 Some issues in the assessment of clinical practice: a review of the literature. Journal of Clinical Nursing 7(3): 201–208
Clifford C 1994 Assessment of clinical practice and the role of the nurse teacher. Nurse Education Today 14(4): 272–279
Donoghue J, Pelletier S D 1991 An empirical analysis of a clinical assessment tool. Nurse Education Today 11(5): 354–362
Gibbs G 1999 Using assessment strategically to change the way students learn. In: Brown S, Glasner A (eds) Assessment matters in higher education: choosing and using diverse approaches. SRHE and Open University Press, Buckingham, p 41–53
Girot E A 1993 Assessment of competence in clinical practice – a review of the literature. Nurse Education Today 13(3): 83–90
Glover P, Ingham E, Gassner L 1997 The development of an evaluation tool for grading clinical competence. Contemporary Nurse 6(3/4): 110–116
Hager P, Gonzi A 1993 Attributes and competence. Australian & New Zealand Journal of Vocational Education Research 1(1): 36–45
Hawly R, Lee J 1991 Standardised clinical evaluation using the Bondy rating scale. The Australian Journal of Advanced Nursing 8(3): 6–10
Hepworth S 1989 Professional judgement and nurse education. Nurse Education Today 9(6): 408–412
Hepworth S 1991 The assessment of student nurses. Nurse Education Today 11(1): 46–52
Hill P F 1998 Assessing the competence of student nurses. Journal of Child Health Care 2(1): 25–30
Holloway I, Penson J 1987 Nurse education and social control. Nurse Education Today 7(5): 235–241
Krichbaum K, Rowan M, Duckett L, Ryden M, Savik K 1994 The clinical evaluation tool: a measure of the quality of clinical performance of baccalaureate nursing students. Journal of Nursing Education 33(9): 395–404
Linn R L, Gronlund N E 1995 Measurement and assessment in teaching, 7th edn. Prentice-Hall, Upper Saddle River, New Jersey
McMillan J H 1997 Classroom assessment: principles and practice for effective instruction. Allyn and Bacon, Boston
Nurses Registration Board of New South Wales 1998 Guidelines for development of programs leading to registration. Nurses Registration Board of NSW, Sydney
Orell J 1996 Assessment in higher education: an examination of academics’ thinking-in-assessment, beliefs-about-assessment and comparison of assessment behaviours and beliefs. Flinders University, Adelaide
Ramsden P 1991 Evaluating teaching: supporting learning. In: Ross B (ed) Teaching for effective learning. HERDSA, Sydney
Rutherford P 1994 Competency-based assessment – answering some questions. National Training Board Network (15)
Schön D A 1983 The reflective practitioner: how professionals think in action. Basic Books, New York
Thompson P, Mathers R, Quirk R 1996 The grade debate: should we grade competency-based assessment? National Centre for Vocational Education Research, Adelaide
Wolfe A 1993 Assessment issues and problems in a criterion-based system. Further Education Unit, London
Woolley G R, Bryan M S, Davies J W 1998 A comprehensive approach to clinical evaluation. Journal of Nursing Education 37(8): 361–366
Yong V 1996 ‘Doing clinical’: the lived experience of nursing students. Contemporary Nurse 5(2): 73–79