
Article

Misplaced confidence in a profession’s ability to safeguard the public?

Eamon Shanley

Nurse Education Today (2001) 21, 136–142. doi:10.1054/nedt.2000.0527

A major requirement of a profession is to safeguard the public by providing a valid and reliable means of admitting to practice only those who meet minimum requirements. Nursing uses two different forms of assessment (clinical and academic) in attempting to discriminate between those who are likely to be safe and competent to practise and those who are not. This paper reviews the literature on systems of assessment, with a specific focus on Behaviourally Anchored Rating Scales (BARS) and the Objective Structured Clinical Examination (OSCE). The conclusion drawn is that the methods described are inadequate for predicting and assessing the clinical performance of nurses. An implication for the nursing profession is the possible loss of confidence in its ability to safeguard the public.

Eamon Shanley, Professor of Mental Health Nursing, Gascoyne House, Edith Cowan University, Graylands Hospital, Brockway Road, Mt Claremont 6010, Western Australia. Tel.: +61 8 9347 9416; Fax: +61 8 9383 1233; E-mail: [email protected]

Manuscript accepted: 2 October 2000
Published online: 23 January 2001

Introduction

In the UK, the issue of self-regulation of the nursing profession has recently been brought to the fore by the publication of the review of the Nurses, Midwives and Health Visitors Act 1997 (JM Consulting 1998) and the Making a Difference report (Department of Health 1999). While attention is being focused on the role of the United Kingdom Central Council for Nursing, Midwifery and Health Visiting (UKCC) in safeguarding the public from misconduct by qualified nurses, relatively little attention is being paid to the governing bodies’ responsibility for ensuring that valid and reliable criteria are established for admission into nursing. In nursing, as in other professions, this responsibility is discharged through the assessment of academic performance, which is used as an indirect measure of students’ ability to provide good nursing care, and through the observation of their practice skills, a more direct measure of clinical performance. Academic assessment attempts to identify students’ level of understanding of issues considered relevant to the practice and management of nursing care. Implicit in academic assessment is the assumption that a correlation exists between the level of understanding, as measured by examination performance, and the standard of care that the nurse provides. Similarly, the evaluation of students’ clinical performance is aimed at measuring the skills that they demonstrate in providing nursing care. It is acknowledged that assessments have other functions, such as providing feedback on students’ performance to the educational institutions, colleagues, teachers and the students themselves. However, this paper is primarily concerned with examining the discriminative function of the assessment system; that is, distinguishing between those who are safe and competent and those who are not. The presence of such an effective mechanism is central to the profession’s integrity in fulfilling its role of safeguarding the public.



Indirect assessment – academic

Studies of the reliability of academic assessments have shown variations in the correlation between the scores obtained in a test at one sitting and a similar test completed at a different time. In 1936, a seminal study of essay-based assessment by Hartog and Rhodes showed the existence of poor inter-marker reliability. More than 60 years later, despite changes in educational practices, the situation remains unchanged (Newstead & Dennis 1994). In a study of nursing examination results, Altschul and Sinclair (1989) showed a non-significant correlation among essay-based internal examination results within Colleges of Nursing and Midwifery in Scotland, and also between internal results and the state examinations. Essay-type examinations have also been criticized for their poor predictive validity. As early as 1972, Bendall showed that examination results were a poor predictor of students’ performance in the clinical area. Different findings, however, have been obtained with multiple-choice examinations. Studies in the USA comparing scores on multiple-choice examinations with success in the (also multiple-choice) licensing examinations have shown significant correlations (Outtz 1979, Payne & Duffy 1986, Quick et al. 1985). However, in other countries such as the UK, educational establishments are reluctant to adopt multiple-choice questionnaires where higher cognitive processes are being assessed, because of problems of validity. Multiple-choice examinations are considered best suited to the recall of factual information and show little predictive ability in determining success in later nursing practice (Burgess et al. 1972, Dubs 1975, Seither 1980, Soffer & Soffer 1972). In short, the methods of academic assessment used in nursing are neither demonstrably reliable nor valid, and the assumption that there is a correlation between the level of understanding as measured by examination performance and the standard of care that the nurse provides is questionable.
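To make the statistical reasoning concrete, the sketch below shows how a predictive-validity coefficient of the kind reported in these studies is computed: a Pearson correlation between internal examination scores and later licensing examination scores. All figures are invented for illustration; none come from the studies cited.

```python
import numpy as np

# Hypothetical cohort: internal examination scores and later licensing
# examination scores for the same ten students (invented figures only).
internal = np.array([62, 71, 55, 80, 68, 74, 59, 66, 77, 63])
licensing = np.array([58, 75, 50, 82, 70, 69, 61, 60, 79, 65])

# Pearson correlation: the predictive-validity coefficient.
r = np.corrcoef(internal, licensing)[0, 1]
print(f"predictive validity r = {r:.2f}")
```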

Direct assessment – clinical

Open and closed systems of assessment

Methods of assessing nurses’ practice range from the use of an open format (the unstructured report) to a closed, forced-choice system. The open format includes the use of unstructured report forms on which the assessor writes about a student’s characteristics and performance with minimal guidance as to what is required. The advantage of this system is that it allows the assessor to record aspects of the student’s individual attributes and behaviours without the constraints of a fixed system of classification into which the student’s behaviour must be categorized. The disadvantage is that the validity of the open format is questionable, owing to the arbitrary selection of material for inclusion and the undefined standards used in judging competence. Its reliability is also questionable, in that there is no system to ensure the use of consistent assessment criteria, and assessors are likely to vary in their literary abilities. A second open method, the critical incident technique, is attractive in that it identifies actual behaviours rather than the hypothetical characteristics or skills used in more structured assessment systems (described below). It involves the assessor keeping a record of positive and negative behaviours of the student in the clinical area. According to Wiatrowski and Palkon (1987), this method has serious limitations that call its validity into question, such as negative incidents being more noticeable than positive ones: a delay in administering medicine may be recorded, while prompt action in an emergency may be taken for granted. Its reliability is also questioned, in that the assessor may defer or forget to record incidents, and variations in attentiveness and record keeping may affect the conclusions drawn.

Attempts to increase reliability and validity have been made by developing more structured methods of assessment. These approaches involve selecting a representative sample of desirable behaviours or characteristics and comparing the student’s behaviour against this sample. One closed, forced-choice method is the simple checklist, which involves the assessor ticking a series of statements as present or absent in the character or behaviour of the student. Another is the presentation of a series of statements with a range of responses on an ordinal scale, from 1 ‘unsatisfactory’ to 7 ‘outstanding’. An advantage of these methods is increased inter-rater reliability, since the same statements are used by all assessors. A major shortcoming is that their validity is questionable. Criticisms include the danger of omitting behaviours or states that are difficult to define and measure, e.g. relationships; the inclusion of easily measurable but unimportant behaviours; and the restriction of the range of responses open to the assessor, who may wish to express something different. In addition, the treatment of each statement as being of equal importance is problematic: items may have different levels of worth for which there is no weighting. The item ‘The student has a tidy appearance’ may be considered important enough to be included, yet without a system of weighting it is given the same value as items such as ‘The student shows sensitivity to the patient’s needs’.
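The weighting problem can be stated precisely: an unweighted checklist is simply the sum of its ticks or ratings, so a tidy appearance and sensitivity to patient needs move the total equally. A minimal sketch follows, contrasting unweighted and weighted scoring; the items and weights are invented for illustration and are not taken from any published instrument.

```python
# Hypothetical checklist ratings (1 = unsatisfactory ... 7 = outstanding).
ratings = {
    "tidy appearance": 7,
    "sensitivity to patient's needs": 2,
    "safe administration of medicines": 3,
}

# Unweighted scoring treats every item as equally important, so the
# trivial item contributes more than half of this student's total.
unweighted = sum(ratings.values())

# A weighted scheme (weights invented) lets important items dominate.
weights = {
    "tidy appearance": 0.5,
    "sensitivity to patient's needs": 2.0,
    "safe administration of medicines": 2.0,
}
weighted = sum(weights[item] * score for item, score in ratings.items())

print(f"unweighted: {unweighted}, weighted: {weighted:.1f}")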

Combination of open and closed forms of assessment

Behaviourally Anchored Rating Scales (BARS)

A solution to the difficulties of establishing reliability and validity, offered by Smith and Kendall (1963), is the Behaviourally Anchored Rating Scale (BARS), aimed at combining many of the assets of both open and closed formats of assessment while avoiding the shortcomings of a single approach. The basic procedure employed in BARS, as described by Smith and Kendall (1963), entails the following five steps:

● Critical incidents. A group of clinical nurses is asked to describe specific examples of effective and ineffective performance behaviours.
● Performance dimensions. The incidents are clustered into dimensions.
● Retranslation. Another group of clinical nurses is asked to assign each incident to the dimension that it best describes. The level of agreement between the assignments of this group and those of the first group determines whether an incident is retained.
● Scaling incidents. The second group is also asked to rate (from 1 to 7) the degree to which each incident represents the dimension to which it has been allocated. Incidents with a standard deviation of 1.5 or less are retained.
● Final instrument. A subset of incidents that meets both the retranslation and the standard deviation criteria is used in the scale. The final BARS instrument consists of a series of scales, one for each dimension; the incidents that make up each dimension are located along the scale according to the ratings established in the preceding step.
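As a rough illustration of the retranslation and scaling steps, the sketch below filters hypothetical incidents by agreement between the two groups and by the spread of the 1–7 ratings. The incidents, ratings and the 80% agreement level are assumptions made for illustration; the procedure described above specifies only the standard deviation cut-off of 1.5.

```python
import statistics

# Hypothetical BARS construction data: for each candidate incident, the
# dimension assigned by group 1, group 2's assignments, and group 2's
# 1-7 representativeness ratings. All values are invented.
incidents = {
    "reassures anxious patient before procedure": {
        "group1_dimension": "interpersonal care",
        "group2_assignments": ["interpersonal care"] * 9 + ["communication"],
        "ratings": [6, 7, 6, 6, 7, 6, 5, 6],
    },
    "leaves drug trolley unattended": {
        "group1_dimension": "safe practice",
        "group2_assignments": ["safe practice"] * 5 + ["organization"] * 5,
        "ratings": [1, 2, 1, 5, 6, 2, 1, 3],
    },
}

AGREEMENT_LEVEL = 0.8   # assumed; the paper does not state a figure
MAX_SD = 1.5            # the standard deviation criterion from the paper

for name, data in incidents.items():
    # Retranslation: proportion of group 2 assigning the incident to the
    # same dimension as group 1.
    agreement = data["group2_assignments"].count(
        data["group1_dimension"]) / len(data["group2_assignments"])
    sd = statistics.stdev(data["ratings"])
    retained = agreement >= AGREEMENT_LEVEL and sd <= MAX_SD
    print(f"{name}: agreement={agreement:.0%}, sd={sd:.2f}, "
          f"{'retained' if retained else 'discarded'}")
```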

Since the development of the technique in nursing, this approach has been used by many other occupational groups in the USA. However, from the initial enthusiasm expressed in such optimistic phrases as BARS possessing ‘great promise for overcoming many of the problems and potential errors believed so long to be inherent in most systems of behavioural observations’ (Dunnette 1966, p. 100), views of the value of BARS changed to the cynical question ‘how long are we going to dance on the head of the BARS pin?’ (cited in Bernardin & Smith 1981, p. 564). Despite the recent lack of enthusiasm in the USA for the method, interest in applying this approach to nurses has been demonstrated in the UK by researchers including Aggleton et al. (1987) and Dunn (1986).

The criticisms directed at the BARS system concern its reliability and validity. Overall, tests of the instrument’s reliability have shown mixed and questionable results. On the positive side, its test-retest reliability is considered to be at an acceptable level. According to Guttman (1944), a test-retest coefficient should be approximately 0.85 to be considered acceptable, and the test-retest coefficients for BARS tended to reach this minimum level. However, less supportive results are forthcoming in other aspects of its reliability. In the area of internal consistency, for example, Smith and Kendall claimed a level of 0.97. This figure was considered spurious by Jacobs et al. (1980), who argued that internal reliability was calculated during the scale construction and not during the use of the completed instrument. Measures of BARS equivalence, i.e. the consistency of an instrument in measuring the same characteristics in the same subjects, were also unconvincing. In three studies comparing BARS with other rating methods, BARS showed slightly higher inter-rater reliability in two (Borman & Vallon 1974, Williams & Seiler 1973) and slightly lower reliability in the other (Campbell et al. 1973).


Very few reliability coefficients exceed 0.60, suggesting only moderate reliability. Jacobs et al. (1980) also found mixed results. While Finley et al. (1977) and Saal and Landy (1977) found that BARS demonstrated greater inter-rater reliability than alternative rating formats, Bernardin et al. (1976) came to the opposite conclusion, namely that other methods of assessment yielded higher inter-rater reliability than BARS. Other studies reviewed by Jacobs et al. (1980) showed no significant difference in inter-rater reliability scores. Overall, there is insufficient evidence of an acceptable level of reliability in the use of the BARS instrument.

Attempts to establish an acceptable level of validity have experienced a similar fate. The use of experts to identify desirable degrees of knowledge and skill as a means of establishing content validity has been criticized by Runciman (1996). She claimed that there is no evidence to show that the knowledge and skills identified by panels of experts are related to what happens in nurses’ actual performance of the role, and maintained that panels tend to identify traditional competencies and attributes, e.g. saint-like qualities not unique to nurses. In addition, the data generated by panels are considered too imprecise to be of use in devising behavioural criteria against which student performance could be measured. Another criticism is that the responses available to the assessor are restricted to the written statements; he or she has little opportunity to indicate other areas of behaviour that are considered important. In attempts to identify a level of construct validity, i.e. the relationship between the instrument and a theory or conceptual framework, a multitrait-multimethod procedure has been used, with results that were considered disappointing. Jacobs et al. (1980) concluded that BARS showed at most a moderate level of convergent validity (different methods of measuring the same construct did not give similar results) and a low level of discriminant validity (there was little difference among the dimensions within the scale). Despite the effort that went into the development of BARS, the majority of review articles state that it has not been shown to have a greater degree of reliability and validity than other methods of assessment.
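The reliability coefficients discussed above are correlation coefficients. A minimal sketch, using invented ratings rather than data from any of the studies cited, of how test-retest and inter-rater coefficients are obtained and judged against Guttman’s 0.85 benchmark:

```python
import numpy as np

# Invented 1-7 BARS ratings for eight students; not data from the
# studies cited in the text.
time1 = np.array([5, 3, 6, 4, 7, 2, 5, 6])    # same rater, first occasion
time2 = np.array([5, 4, 6, 4, 6, 2, 5, 7])    # same rater, second occasion
rater_b = np.array([4, 3, 5, 5, 7, 3, 4, 6])  # second rater, first occasion

test_retest = np.corrcoef(time1, time2)[0, 1]
inter_rater = np.corrcoef(time1, rater_b)[0, 1]

# Guttman's (1944) benchmark for an acceptable test-retest coefficient.
print(f"test-retest r = {test_retest:.2f} "
      f"({'acceptable' if test_retest >= 0.85 else 'below 0.85'})")
print(f"inter-rater r = {inter_rater:.2f}")
```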


Objective Structured Clinical Examination (OSCE)

The OSCE has its origins in medical education and was developed by Harden in Scotland in 1975 (Ross et al. 1988). Its format involves a series of testing stations, which may range in number from five to 20. At each station, various components of clinical competence are assessed, including history taking, physical assessment, problem identification, nursing diagnosis and decision making. The examinees move from station to station within a specified time limit. There are two types of station: observer and marker. The observer station requires the examinee to complete a task while the examiner looks on; the examiner has a predetermined checklist of criteria to be met by the examinee. The marker station presents the examinee with data requiring analysis and interpretation, from which the examinee provides a plan of appropriate action. Simulated patients are used at the observer stations to ‘act out’ patient scenarios or situations. The OSCE was seen as a useful assessment tool and was modified and implemented by several different nursing schools (O’Neill & McCall 1996, Bujack et al. 1991, Ross et al. 1988, Bramble 1994, Harden & Gleeson 1979, Borbasi & Koop 1993).

Few studies exist that examine the use of the OSCE as a method for evaluating the clinical skills of nurses, and those that do offer conflicting views of its reliability and validity as an assessment tool (Ross et al. 1988, Borbasi & Koop 1993, Bramble 1994). According to Bramble (1994), Cudmore (1996) and Reed (1992), controlling variables such as the patient and the examiner can maximize the potential for improved reliability and validity. However, other authors have raised issues that contradict these findings (Bujack et al. 1991, Borbasi & Koop 1993, O’Neill & McCall 1996, Harden & Gleeson 1979). Factors identified as decreasing reliability include: ineffective training of simulated patients and examiners; situations where patients are known to the students, increasing the chance of influencing examinees; and the risk of both examiner and patient becoming fatigued due to the demanding nature of their role (Harden & Gleeson 1979, O’Neill & McCall 1996).
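As a sketch of the two station types described above (the station content, checklist criteria and scoring are invented; the OSCE literature cited does not prescribe any particular data format):

```python
from dataclasses import dataclass

@dataclass
class ObserverStation:
    """Examiner watches a task and ticks a predetermined checklist."""
    task: str
    checklist: list[str]

    def score(self, criteria_met: set[str]) -> float:
        # Proportion of checklist criteria the examinee satisfied.
        return sum(c in criteria_met for c in self.checklist) / len(self.checklist)

@dataclass
class MarkerStation:
    """Examinee interprets presented data and proposes a plan of action."""
    data_presented: str
    expected_actions: set[str]

    def score(self, proposed_plan: set[str]) -> float:
        # Proportion of expected actions included in the examinee's plan.
        return len(proposed_plan & self.expected_actions) / len(self.expected_actions)

station = ObserverStation(
    task="take a patient history from a simulated patient",
    checklist=["introduces self", "asks about allergies", "summarises back"],
)
print(station.score({"introduces self", "asks about allergies"}))  # ~0.67
```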



Reliability is also compromised when examiners lack experience, in particular of the skills necessary for each station, and when there is limited time to brief them. In addition, Borbasi and Koop (1993) pointed out that examiners need greater detail on forms and checklists to guide them. Ross et al. (1988) also looked at the issue of checklists and noted in their study the need for closer scrutiny of items to eliminate ambiguous wording. These factors can increase stress levels for examiners, further reducing reliability (Borbasi & Koop 1993). Indeed, stress affects not only reliability but also validity. In a study conducted by Fahy and Lumby (1988), stress appeared counterproductive: students reported that they were too nervous to perform well and too rushed to think. When Bujack et al. (1991) conducted a follow-up questionnaire 1 month after the OSCE, students continued to describe the assessment as a stressful experience. In an attempt to alleviate fear and anxiety, Borbasi and Koop (1993) advocated the use of an introductory video for examinees completing the OSCE.

An investigation conducted by Reed (1992) found that the OSCE had only reasonable content validity, and that both concurrent and predictive validity correlations were low. She found that, compared with other forms of traditional assessment, the OSCE was a less appropriate technique for testing domains of knowledge and problem solving, and concluded that the OSCE is not a remedy for validity or reliability shortcomings in the evaluation of clinical competence. One of the main criticisms made by many authors of attempts to employ a nursing version of the OSCE is its inability to replicate the ward environment (Van Der Vleuten et al. 1989, Borbasi & Koop 1993, Cudmore 1996, Fahy & Lumby 1988). In creating an artificial environment, variables such as shift work and the usual day-to-day pressures are not taken into consideration, hindering the ability to obtain a clear picture of how nurses perform. Bujack et al. (1991) support this argument, stating that the OSCE may not reflect the nature of nursing practice, in which there is an ongoing relationship with the patient. According to Borbasi and Koop (1993), the interpersonal skills of students were not well represented in the OSCE and the focus was more on psychomotor skills.



They went further, stating that some skills could not be assessed with the OSCE at all. It has also been suggested that simulated patients frequently did not reflect the problems likely to be encountered in everyday practice, i.e. emergency and acute situations (Harden & Gleeson 1979). In addition, several authors felt that with the OSCE format patients were not treated holistically (Ross et al. 1992, Borbasi & Koop 1993, Papworth 1992, Harden & Gleeson 1979): the knowledge and skills tested were compartmentalized, fragmenting skills and resulting in the patient not being viewed as a whole. Nursing is known to require a holistic approach in order to make sound clinical judgements. Moreover, performance on one problem or case does not necessarily predict performance on other problems or cases, especially in terms of problem solving and clinical reasoning (Dauphinee 1995). This challenges the validity of the OSCE as a reality-based assessment and as one that measures the attribute of interest, namely clinical performance. The reliability and validity of the OSCE remain questionable and, in fact, review articles suggest that the OSCE is useful only as an additional technique rather than as the sole method of assessment (Reed 1992, Borbasi & Koop 1993, O’Neill & McCall 1996).

Evidence to support the claim that direct methods of assessing clinical performance are valid and reliable has been even more difficult to obtain, despite the plethora of techniques used. A major shortcoming encountered in reviewing this topic is the dearth of published evidence of how the instruments used by Schools of Nursing and Midwifery have been developed and tested for reliability and validity (Coates & Chambers 1992).

Discussion

The review of the literature concerning the methods of assessment used by universities, and approved by nurses’ governing bodies, in deciding entry into the nursing profession has failed to show those methods to be either valid or reliable. While the process and outcome of assessments have other useful educational functions (Mahara 1998), both direct methods, e.g. observation of clinical performance, and indirect methods, e.g. written examinations, have failed to discriminate between individuals who are safe and competent to practise and those who are not.



In responding to the use of untrustworthy methods of regulating entry, the nursing profession may consider one of three approaches. The first is to retain the existing system of assessment, but with lowered expectations of the value of the measures used; this would involve acknowledging that the system of assessment is of questionable worth. Giving less weight to the results of academic assessments may lower the standard of academic requirements for entry to nursing, with a likely adverse effect on the status of nursing as an academic discipline.

A second approach is for the universities, the nurses’ governing bodies and the wider nursing profession to reject the principle of assessing student nurses as a means of determining entry to the profession. Educational systems without objective assessment procedures have been in operation in various educational establishments for many years (Pallie & Carr 1987), although students are still required to meet the profession’s entry requirements. By dropping the entry requirements to the profession, this approach might at least be seen as honest, in that it abandons any pretension that the assessment systems are effective in regulating entry. This radical departure from the traditional system of entry is unlikely to be acceptable, because of the profession’s need to show that it safeguards the public against incompetent and unsafe practitioners. In addition, the need for nurse education to be accepted by academic colleagues as a credible discipline may prevent such a development. It could be argued, however, that should the status quo continue, the realization that nursing does not possess an assessment system shown to be reliable and valid could be an experience similar to that of the Emperor whose court discovered he was in his ‘altogether’.

The third approach is for nursing to acknowledge the current deficits, to accept the present system of assessing nurses as ineffective, and to commit funding in a concerted effort to produce a valid and reliable entry system to nursing.

The arguments presented above demonstrate clear inadequacies in the methods of assessing the clinical performance of nurses and offer some options that may warrant consideration.

The issue of entry criteria should be placed high on the agenda in the UK following the review of the Nurses, Midwives and Health Visitors Act 1997 (JM Consulting 1998) and the Making a Difference report (Department of Health 1999), and before changes are made to the self-regulatory function of the UKCC. Failure to act may result in the questioning of the nursing profession’s ability to perform its role of regulating entry to the profession.

References

Aggleton, Allen, Montgomery 1987 Developing a system for the continuous assessment of practical nursing skills. Nurse Education Today 7: 158–164
Altschul AT, Sinclair HC 1989 Student assessment in basic nursing education in Scotland. Nurse Education Today 9: 3–12
Bernardin HJ, Smith PC 1981 A clarification of some issues regarding the development and use of behaviourally anchored rating scales (BARS). Journal of Applied Psychology 66: 458–463
Bernardin HJ, Alvares KM, Cranny CJ 1976 A recomparison of behavioural expectation scales to summated scales. Journal of Applied Psychology 61: 564–570
Borbasi SA, Koop A 1993 The objective structured clinical examination: its application in nursing education. Advanced Nursing 11(2): 33–39
Borman WC, Vallon WR 1974 A view of what can happen when behavioural expectation scales are developed in one setting and used in another. Journal of Applied Psychology 59: 197–201
Bramble K 1994 Nurse practitioner education: enhancing performance through the use of the objective structured clinical assessment. Journal of Nursing Education 33(2): 59–65
Bujack L, McMillan M, Dwyer J, Hazelton M 1991 Assessing comprehensive nursing performance: the objective structured clinical assessment (OSCA). Part 1 – development of the assessment strategy. Nurse Education Today 11: 179–184
Bujack L, McMillan M, Dwyer J, Hazelton M 1991 Assessing comprehensive nursing performance: the objective structured clinical assessment (OSCA). Part 2 – report of the evaluation project. Nurse Education Today 11: 248–255
Burgess M, Duffy M, Temple F 1972 Two studies of prediction of success in a collegiate program of nursing. Nursing Research 21: 357–366
Campbell JP, Dunnette MD, Arvey RD, Hellervik LW 1973 The development and evaluation of behaviourally based rating scales. Journal of Applied Psychology 57: 15–22
Coates VE, Chambers M 1992 Evaluation of tools to assess clinical competence. Nurse Education Today 12: 122–129
Cudmore J 1996 Trial by assessment. Nursing Times 92(44): 61



Dauphinee WD 1995 Assessing clinical performance: where do we stand and what might we expect? Journal of the American Medical Association 274: 741–743
Department of Health 1999 Making a difference. Strengthening the nursing, midwifery and health visiting contribution to health and healthcare. HMSO, London
Dubs R 1975 Comparison of student achievement with performance ratings of graduates and state board examination scores. Nursing Research 24: 59–62
Dunn MD 1986 Assessing the development of clinical nursing skills. Nurse Education Today 6: 28–35
Dunnette MD 1966 Personnel selection and placement. Wadsworth, Belmont CA
Fahy K, Lumby J 1988 Clinical assessment in a college program. Australian Journal of Advanced Nursing 5(4): 5–9
Finley DM, Osburn HG, Dubin JA, Jeanneret PR 1977 Behaviourally based rating scales: effects of specific anchors and disguised scale continua. Personnel Psychology 30: 658–669
Guttman L 1944 A basis for scaling qualitative data. American Sociological Review 9: 139–150
Harden RM, Gleeson FA 1979 Assessment of clinical competence using an objective structured clinical examination (OSCE). Medical Education 13: 41–54
Hartog P, Rhodes EC 1936 The marks of examiners. Macmillan, London
Jacobs R, Kafry D, Zedeck S 1980 Expectations of behaviourally anchored rating scales. Personnel Psychology 33: 595–638
JM Consulting 1998 The regulation of nurses, midwives and health visitors: report on a review of the Nurses, Midwives and Health Visitors Act 1997. JM Consulting, Bristol
Mahara MS 1998 A perspective on clinical evaluation in nursing education. Journal of Advanced Nursing 28(6): 1339–1346
Newstead S, Dennis I 1994 Examiners examined. The Psychologist 216–219
O’Neill A, McCall JM 1996 Objectively assessing nursing practices: a curricular development. Nurse Education Today 16: 121–126
Outtz J 1979 Predicting the success of state board exams for blacks. Journal of Nursing Education 18: 35–40



Pallie W, Carr DH 1987 The McMaster medical education: philosophy in theory, practice and historical perspectives. Medical Teacher 9: 59–71
Papworth G 1992 OSCE and traditional examinations – a comparison between the two techniques. International Conference Proceedings, Scotland, 108–112
Payne M, Duffy M 1986 An investigation of the predictability of NCLEX scores of BSN graduates using academic predictors. Journal of Professional Nursing 2: 326–333
Quick M, Krupa K, Whitley T 1985 Using admission data to predict success on the NCLEX-RN in a baccalaureate program. Journal of Professional Nursing 1: 98–103
Reed S 1992 Canadian competence. Nursing Times 88(3): 57–59
Ross M, Carroll G, Knight J, Chamberlain M, Fothergill-Bourbonnais F, Linton J 1988 Using the OSCE to measure clinical skills performance in nursing. Journal of Advanced Nursing 13: 45–56
Runciman P 1996 Competence to practice. National Board for Nursing, Midwifery & Health Visiting for Scotland, Edinburgh
Saal FE, Landy FJ 1977 The mixed standard rating scale: an evaluation. Organizational Behavior and Human Performance 18: 19–35
Seither F 1980 Prediction of achievement in baccalaureate nursing education. Journal of Nursing Education 19(9): 28–36
Smith PC, Kendall LM 1963 Retranslation of expectations: an approach to the construction of unambiguous anchors for rating scales. Journal of Applied Psychology 47: 149–155
Soffer J, Soffer L 1972 Academic record as a predictor of future job performance of nurses. Nursing Research 21: 28–36
Van Der Vleuten CPM, Van Luyk SJ, Beckers HJM 1989 A written test as an alternative to performance testing. Medical Education 23: 97–107
Wiatrowski MD, Palkon DS 1987 Performance appraisal systems in health care administration. Health Care Management Review 12(1): 71–80
Williams WE, Seiler DA 1973 Relationship between measures of effort and job performance. Journal of Applied Psychology 57: 49–54

© 2001 Harcourt Publishers Ltd