ARTICLE IN PRESS
International Journal of Nursing Studies 44 (2007) 143–151 www.elsevier.com/locate/ijnurstu
Portfolios and the assessment of competence in nursing: A literature review

Tracey McCready

Faculty of Health and Social Care, University of Hull, Cottingham Road, Hull HU6 7RX, UK

Received 21 October 2005; received in revised form 25 January 2006; accepted 28 January 2006
Abstract

Objectives: The purpose of this paper is to explore the literature on the portfolio as a tool for the assessment of competence in nurse education.

Design: Literature reviews are a valuable source of information; by locating, appraising and synthesising evidence from primary studies they can provide reliable answers to focused questions. They can also help to plan new research by identifying both what is known and not known in a given area. Literature reviews adhere to a scientific methodology which seeks to minimise bias and errors, generating inferences based on the synthesis of best available evidence.

Data sources: The literature review was conducted utilising several databases, selected because of their relevance to the subject under review and including CINAHL and MEDLINE, as well as a hand search of relevant journals and documents. The search terms included: nurses in education, portfolios and assessment and competence. Articles were included in the review if they focused on portfolios as a method of assessment in nurse education and if they were published after 1993, when portfolios first appeared in the nursing literature.

Review methods: The review divides the literature into content themes, allowing synthesis of the subject by looking at consistencies and differences, followed by a summary and the key arguments relating to the next theme.

Results: Results highlight the importance of clear guidelines for portfolio construction and assessment, the importance of tri-partite support during portfolio development and guidelines for qualitative assessment. Where the portfolio process is well developed there are clear links to competence in practice.

Conclusions: The evidence on portfolios as a means of assessment continues to expand. If educators take on board the lessons learned from previous research and apply them to their assessment process, the difficulties currently found in defining and measuring competence may be reduced.

© 2006 Elsevier Ltd. All rights reserved.

Keywords: Assessment; Competence; Literature review; Nurse education; Portfolios; Reflection
What is already known about the topic?

- Current competence assessment methods measure only a quarter of nurses' competence levels when they focus on skills and not knowledge.
- Portfolio assessment allows nurses to reflect on academic and clinical experiences.

What this paper adds

- The importance of matching theoretical learning outcomes and competence statements to create meaning for the student.

Tel.: +44 1482 464604. E-mail address: [email protected].

0020-7489/$ - see front matter © 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.ijnurstu.2006.01.013
- The importance of taking into account the professional and educational level of the nurse when designing support.
- The importance of assessing what is a qualitative and holistic form of assessment in a qualitative way.
1. Background

Nursing is a complex combination of theory and practice and requires careful facilitation to encourage integration (Corlett, 2000; Santy and Mackintosh, 2000; Koh, 2002). One of the responsibilities of nurse educators is to facilitate the integration of theory and practice and to refute claims that nurse education is made up of the separate entities of academia and clinical practice. Current nursing programmes in England place an emphasis on self-directed learning, encouraging the skills needed to seek, analyse and utilise information. The vast majority of summative assessments, however, remain essay based, which does not always address the issue of the theory–practice gap, assessing levels of knowledge but not the ability to apply that knowledge to practice (Jarvis, 1995; Brown et al., 1996; Bradshaw, 1998).

The definition of competence in nursing is open to debate: for some it is an objective concept that can be measured; for others it is more than the performance of skills, it is the intuitive grasp of care situations underpinned by deep understanding and experience (Benner, 1984; Redfern et al., 2002). Current competence assessment methods measure only a quarter of the nurse's competence levels when they focus on skills and not knowledge (Sharp et al., 1995; Tingle, 1998; Santy and Mackintosh, 2000). Those advocating alternative assessment strategies suggest that using an assessment with a reflective component can provide a tangible bridge between theory and practice by linking the knowledge students gain through clinical experience with the knowledge they gain in the classroom (Klenowski, 2002; Pearce, 2003).
Some authorities suggest that if the content of the assessment demonstrates personal development it can be claimed that the assessment is sufficient evidence of competence; others suggest that where professional practice and public accountability are concerned, such a collection of material on its own is not evidence enough (Brown et al., 1996; Wenzel et al., 1998). The portfolio is advocated as an assessment tool capable of demonstrating high-quality care and professional competence by offering evidence from a variety of sources: practice, the literature, study and research (Klenowski, 2002; Pearce, 2003). An effective portfolio is a visual representation of the individual: their experience, strengths, abilities and skills. The portfolio can provide a practitioner with evidence of: reflection on academic and clinical experiences, continuing professional development and lifelong learning, decisions about the quality of work, effective critical thinking skills, reflection on professional and personal growth, responsibility for learning, and development of the skills necessary for a critical reflective practitioner (Klenowski, 2002; Pearce, 2003). The portfolio, as suggested by some authors, is valuable as a means of assessment, enabling students to provide evidence of achievement of competencies (Pearce, 2003; Brown et al., 1996). Currently in England qualified nurses can expect to demonstrate competencies, key skills and personal development in order to progress in a career framework tightly linked to pay and progression (DoH, 1999a; UKCC, 1999; DoH, 2003).

Objectives: The purpose of this paper is to explore the literature on the portfolio as a tool for the assessment of competence in nurse education.

Review methods: The validity of the findings in a literature review is directly related to the comprehensiveness of the literature search employed, the aim being to locate as many studies as possible which are suitable to answer the question posed (Hart, 1998; Khan et al., 2001). For the purpose of this literature review the question was broken down into manageable chunks or 'facets' (Khan et al., 2001). The facets used were: nurses in education (population), portfolio assessment (intervention) and competence (outcome).

Search terms included: portfolios and competence, portfolios and nurse education, portfolios and assessment in nursing. Various databases were selected because of their relevance to the subject under review and included: Cumulative Index to Nursing and Allied Health Literature (CINAHL), Pre-CINAHL (a companion to CINAHL), Elsevier Science, Medline, The Campbell Collaboration (a database similar to Cochrane, sourcing evidence related to the effects of interventions in social, behavioural and educational settings), The Department of Health, The Royal College of Nursing, Education-line and Dissertations and Theses.com (a searchable database of research papers). The search terms were piloted on CINAHL in order to assess their sensitivity and specificity (Honest et al., 2001; Khan et al., 2001). The search term 'nurse education' was determined to be too broad but was still useful when combined with nursing. The term 'portfolios' also yielded a large quantity of material and was much more sensitive when used with nursing or assessment. Relevant studies were retrieved for more detailed evaluation and rigorous application of the inclusion/exclusion criteria and quality assessment instruments until the final studies were identified.
Inclusion criteria:

- Articles from January 1993 to December 2004; portfolios were first introduced into the nursing literature in 1993 (Hull and Redfern, 1996).
- The main focus of the paper is portfolio use as a method of assessment.
- The general area of study is nurse education.
- Methodology includes meta-analyses, case control studies, survey, ethnography, phenomenology and opinions of respected authorities based on experience.
- A minimum quality threshold based on quality assessment instruments formulated by Streubert (2002) and Santy and Kneale (2000), assessing qualitative and quantitative papers, respectively. The instruments classify studies according to their level of methodological rigour and internal validity (design, conduct and analyses) as well as external validity (populations, interventions and outcome measures).

Exclusion criteria:

- Articles prior to 1993.
- The main focus of the paper is portfolio use as part of professional development.
- The area of study is general education.

Following application of the inclusion and exclusion criteria (including quality assessment instruments), 14 studies were selected for review. Of the 14 studies, five were quantitative, five were qualitative, one offered a triangulated methodology incorporating both quantitative and qualitative design, one was in the form of a literature review and two offered expert opinion.

2. Results

2.1. Evaluation of the portfolio as an alternative form of assessment

Work conducted by Gerrish in 1993 evaluated the implementation of portfolio assessment on a post-registration nurse education programme. The portfolio involved tri-partite assessment between the academic supervisor, practice mentor and the student, with the student having autonomy over the items to be included in the portfolio. In a postal questionnaire to 20 students, with a response rate of 75%, results highlighted the value of self-assessment and tri-partite assessment as well as autonomy over portfolio development and self-directed learning. The participants suggested that this form of assessment helped to bridge the theory–practice gap, although they struggled with the reflective component of the portfolio and the time-consuming nature of this form of assessment. Although
the work of Gerrish (1993) was innovative at the time, results should be read with caution as there is little methodological detail.

Murrell et al.'s (1998) experience of implementing portfolios on a post-registration course mirrored Gerrish's (1993) work and led to positive course evaluations regarding improvements in practice, self-directed learning and bridging of the theory–practice gap. The course identified six practice learning outcomes related to the course subject, and students were given the opportunity of identifying learning outcomes of their own. The students were given autonomy regarding the evidence they provided to demonstrate meeting of the outcomes. Assessment of the portfolio again took a tri-partite approach. According to Murrell et al. (1998) the course evaluation was favourable: the external examiner commented positively that practice development could be seen within the portfolio, and students commented positively on the process.

A later study conducted by Tiwari and Tang (2003) had similar findings when looking at the effectiveness of portfolio assessment in improving student learning, using qualitative measures to attempt to understand students' perceptions and experience of undertaking portfolio assessment. Subjects were drawn from a group of post-registration students who had recently undertaken portfolio assessment. A semi-structured interview asked students to comment on their experience of portfolio assessment; the authors did not divulge any in-depth information on the interview detail. All 12 commented positively despite initial anxiety regarding the open-ended nature of the assessment and the need to self-select assessment tasks. Those who did well took active steps to seek help, those who did less well took a more passive approach, and all 12 said they would choose portfolio assessment again. Ten of the students highlighted gaining an increased theoretical understanding, applying theory to practice, learning deeply and meaningfully and conceptualising at a high cognitive level. Students also valued the freedom that the portfolio allowed in being able to choose what could be studied as well as what could be included, supporting the earlier experiences of Gerrish (1993) and Murrell et al. (1998).

A recent study by Spence and El-Ansari (2004) focused on the use of portfolios in post-registration education from the practice teachers' perspective, using an action research approach. The aims of the study were to determine practice teachers' early experience of the portfolio approach to practice assessment, including its use in: student self-evaluation, guiding the compilation of evidence and the development of the student's professional practice. Two postal questionnaires focused on issues previously highlighted, including: students' anxiety regarding the compilation of the portfolio and its word limit, inter-rater reliability, and students' self-directed learning and reflection. The questionnaires had
response rates of 32.3% and 50%, respectively, following the trend expected in a postal questionnaire (Parahoo, 1997). The quantitative results highlighted that practice teachers felt that students had optimum levels of guidance for portfolio development, and that students developed self-awareness and awareness of personal competence as well as the ability for self-directed learning. The qualitative results highlight that practice teachers would advocate a word limit on portfolios. Three quarters of the practice teachers who responded believed that the evidence compiled by students accurately reflected their level of practice competency. Spence and El-Ansari (2004) highlight the limitations in their work, the small convenience sample and low response rate, but still highlight the importance of self-evaluation within the active learning of portfolio assessment, the need to link the learning outcomes clearly to the assessment and the importance of inter-rater reliability within the assessment.

In contrast, Gallagher's (2001) survey set out to gain insight into undergraduate student perceptions of the fairness and appropriateness of a standards-based portfolio (SBP) as an assessment tool and to describe the strength of the relationship between the assessment tool and practice-based learning. In contrast to the practice learning outcomes in Murrell et al.'s (1998) work, Gallagher's (2001) SBP outcomes were theory-based outcomes related to specific themes and how those themes relate to practice. The quantitative study utilised a 27-item questionnaire specifically designed for the study. The results from the questionnaire indicate that students knew what steps had to be taken to complete the portfolio but found the specific criteria for success difficult to interpret. Students were able to relate the assessment to their practice and personal experience. They found the portfolio to be a fair form of assessment, a positive contribution to learning and an accurate assessment of knowledge.

The work reported by Dolan et al. (2004) supports Gallagher's findings in that the link between theory and practice extolled in the work of Gerrish (1993), Murrell et al. (1998) and Tiwari and Tang (2003) is not found in their own work. Dolan et al. (2004) set out to investigate whether the portfolio for pre-registration nursing students was helpful in bridging the theory–practice gap. The study utilised focus groups, and the findings were used to generate a questionnaire based on the students' experiences of using a portfolio and views about its usefulness. Of 326 eligible students conveniently sampled, 247 attended lectures on the day the questionnaire was administered and 219 agreed to complete the questionnaire, possibly attributable to the presence of the researcher. The focus of the portfolio in this work was not assessment; however, some of the findings clearly link into earlier work as well as the work by Gallagher (2001). Dolan et al. (2004) found evidence to support the work by Gerrish (1993) and Gallagher (2001) that the portfolio is time consuming. A further link to Gallagher's (2001) work lies in the fact that only 27% of students agreed that the portfolio had helped them to understand clinical practice. Use of the portfolio to link theory and practice received the second lowest rating as one of the perceived purposes of the portfolio. The students did not appear to link the reflective component of the portfolio with the theory–practice gap, a finding similar to that of Gerrish (1993) 10 years earlier.

The key issues from the studies analysed include: the importance of self-directed learning and reflection in the bridging of the theory–practice gap, aided by tri-partite assessment. Questions left unanswered related to the use of portfolios and the assessment of competence.
2.2. Portfolios and the assessment of competence

In 1993 the ACE project, Assessment of Competencies in Nursing and Midwifery Education (Phillips et al., 1993), investigated the assessment of competencies and focused upon the experiences of staff and students in the classroom and in clinical areas. Data collection utilised an ethnographic approach, with fieldwork completed in nine colleges of nursing and in associated placement areas in three geographical locations. The research study has several key recommendations in relation to the assessment of competence, including the importance of a range of evidence to support knowledge, skills, attitudes and understanding, incorporating reflection and critical analysis of both theory and practice.

In 1998, Milligan was asked to offer some guidance related to assessment as part of the re-validation of nursing courses within The University of Luton; ultimately his focus was on the concept of competence. Milligan (1998) argues that the behaviourist approach to outcome competencies, which ignores the educational process involved in achieving competence, is reductionist and unhelpful in nurse education. Milligan (1998) highlights Patricia Benner's work, From Novice to Expert (1984), in relation to post-registration nurses, looking at the attributes of effective nurses and defining competence as the third of five stages of practice, culminating in expert practice. Regarding pre-registration nursing, Milligan (1998) points out that student nurses are expected to qualify with the full range of skills necessary for effective practice, constructed in terms of outcome competencies (DoH, 1999a, b; UKCC, 1999). Competence according to Benner (1984) infers that there is something more for qualified practitioners to achieve, with competency at the proficient and expert stages only likely to be witnessed with further career development. Milligan (1998) argues that in assessing competence, it needs to be defined in relation to the context within which it is to be used.
Ball et al. (2000), in relation to the measurement of competence, suggest a thorough evaluation of whether or not portfolios can provide reliable and valid measurement in this area, highlighting issues related to the possible ad hoc nature of portfolio content, the objectivity and reliability of portfolio construction, and the quality of self-directed learning and reflection. They suggest the portfolio can enable the student to construct clinical experience and understand the complexity of their work within different contexts, but can also reflect the social and experiential reality of the individual and become more subjective than objective in nature, akin to Benner's (1984) competent nurse.

A literature review on portfolios and the assessment of competence conducted by McMullan et al. (2003) as part of an ENB-commissioned study is just one of a series of publications related to their work. McMullan et al. (2003) briefly discuss their methodology, citing the use of CINAHL and MEDLINE and the search terms competence and portfolios for the years 1989–2001. They found the literature relating to competence to be ambiguous, raising questions about methods of assessment, the role of the assessor and the validity and reliability of assessment, with each assessor having their own interpretation of competence. The authors rely heavily on the work of Gonczi (1994) to demonstrate what they perceive to be the way forward in this area. Gonczi (1994) suggests that assessment methods should be used in an integrated manner to combine knowledge, understanding, problem solving, technical skills, attitudes and ethics. Gonczi (1994) goes on to suggest that a holistic approach to assessing competence is likely to be more valid than, and at least as reliable as, other methods. McMullan et al. (2003) recommend clear guidelines on the purpose, content and structure of the portfolio for the student as well as the assessor (Gallagher, 2001; Tiwari and Tang, 2003; Dolan et al., 2004). The question of how effective the portfolio is at assessing learning and competence remains theoretical rather than empirical, with much emphasis on inter-rater reliability.

Scholes et al. (2004) reported further findings of their study based on qualitative results from in-depth interviews with both students and assessors. The aim was to determine how students and assessors work through all of these complex elements in order to determine competence in practice. The authors do not make data collection transparent, although they do highlight the case study approach within four higher education institutions and also that some interviewing took place in practice placements during the assessment process, i.e. tri-partite meetings or work-based learning seminars. Findings from the study indicate that students do not feel confident in portfolio preparation even with dedicated teaching sessions around portfolio preparation. Assessor preparation was also a key issue, some
institutions favouring distance learning packages for their assessors, but in the main course leaders preferred to work on a one-to-one basis with their assessors in order to build good relationships between theory and practice. Some assessors felt quite anxious about the process, related to their own credibility as practitioners as well as their equity and consistency as assessors.

A further key issue was related to the way in which learning outcomes or competencies were presented. If learning outcomes were written in an abstract way to accommodate a variety of clinical situations, students and assessors then had to deconstruct them to make them fit specific practice. If outcomes or competencies did not relate to the practice situation, the student reconstructed practice to fit the outcome, raising the question of what is being assessed: competence in practice or competence in portfolio construction? One of the key findings of Scholes et al. (2004), also highlighted in the work of Murrell et al. (1998), Gallagher (2001) and Tiwari and Tang (2003), was the ability of students with academic and professional maturity to develop through reflection and portfolio writing, as opposed to students with less academic and professional maturity needing clearer instruction.

The key issues from the studies analysed include: the importance of holistic assessment utilising a wide range of evidence, as well as clear guidelines for portfolio construction. The issue of clear guidelines was raised not just for students compiling a portfolio but for the mentors and lecturers assessing the portfolio. The studies raised issues around reliability and validity in portfolio assessment.

2.3. Reliability and validity in portfolio assessment

Pitts et al.
(1999), in their work looking at the reliability and validity of portfolio assessment, conducted work around the training and development of general practice teachers, utilising the portfolio for performance-based assessment. They looked at the reliability of judgements made by a panel of assessors about individual components of the portfolio, together with an overall judgement about performance. Eight experienced general practice trainers recruited from a large geographical region assessed portfolios from twelve participants. The assessors utilised an assessment guidance framework which had been previously developed by Pitts (1996) and Coles (1994) and included points relating to: the reflective learning process, identification of personal learning needs, consideration of past learning experiences, recognition of effective teaching behaviours, the ability to identify with being a learner, the awareness of educational resources and, finally, drawing conclusions with overall reflections on the course and future career development. A global assessment of the whole portfolio
was also made, as well as assessments of the individual points highlighted. The assessors examined all portfolios on two occasions, 1 month apart. The overall level of agreement (above that expected by chance) between the eight assessors in rating the portfolios was estimated using the kappa (κ) statistic. Kappa is a test applied to assess rater dependence and to quantify the level of agreement, with values of κ over 0.8 indicating excellent agreement, 0.61–0.8 substantial agreement, 0.41–0.6 moderate agreement, 0.21–0.4 fair agreement and 0–0.2 slight agreement, negative values indicating poor agreement (Shrout, 1998). Inter-rater reliability was determined using the first assessment made by each assessor. Ten portfolios were passed on the global assessment by more than half the assessors, and three were judged to have passed by all assessors; of the two portfolios that were not passed by five or more assessors, one was passed by one assessor and the other by two. Agreement between assessors ranged from slight to fair and was significantly above the level expected by chance (P < 0.05). On reassessment, agreement between assessors was calculated using an overall κ, that is, an average of the eight κ values calculated from the individual assessors. The greatest consistency was seen in the global assessment, as well as the reflectiveness judgement and the identification-with-being-a-learner criterion. The percentage of global passes given by an assessor ranged from 50% to 92%; this difference was not statistically significant (P = 0.17), suggesting that the data were compatible with the pass rates being equal for different assessors. The consistency of individual assessors' judgements was moderate, but inter-rater reliability did not reach a level that would support safe summative judgement; the convenience sample of portfolios should also be noted. Within their discussion, Pitts et al.
(1999) ask whether portfolios should remain as effective formative educational tools rather than summative assessments, because of the difficulties encountered, or whether the benefit to the student in terms of reflection and learning should outweigh the difficulties for the assessors.

Pitts et al. (2001) repeated their study, with results indicating that inter-rater reliability was significantly above the level expected by chance, with κ values between 0.2 and 0.4. Intra-rater reliability was defined as moderate (κ 0.41–0.6). Pitts et al. (2001) suggest that only a fair degree of agreement between assessors can be achieved, even with experienced, trained assessors and after clear guidance has been offered to learners. Pitts et al. (2001) argue against the reductionist nature of standardised assessment in that it can limit more meaningful approaches to learning. They suggest that applying measures such as validity and reliability is not appropriate for portfolio-based learning and that qualitative approaches are more appropriate.

A final study by Pitts et al. (2002) focused on the inter-rater reliability of assessors' initial independent judgements and on whether, and how, open discussion between random pairs of assessors influences reliability. Pitts et al. (2002) recruited eight experienced assessors from different geographical locations, all utilising the baseline assessor guide developed by the authors in their previous work. All assessors examined all 12 portfolios on two occasions. The first was an independent assessment; the assessors then met two months later and, in random pairs, discussed and reassessed the portfolios to give a composite score. Of the 12 portfolios, 7 were passed by all assessors and 9 were passed by more than half of the assessors; 2 were referred by more than half the assessors. Agreement between assessors ranged from slight to fair, significantly above the level expected by chance (P < 0.05). In the paired assessment, four pairs of assessors gave 48 paired global assessments on the 12 portfolios. In 25 pairings both assessors passed the portfolio on the paired assessment and had passed it on the individual assessments. In six pairings the pair failed the portfolio and both assessors had also failed it individually, pairing again making no difference. In six pairings the pair failed the portfolio although both assessors had passed it individually, both changing their initial pass to fail after discussion. In seven pairings the pair passed the portfolio after the assessors had disagreed individually. In four pairings the pair failed the portfolio after the assessors had disagreed individually, one assessor changing after discussion. Overall, 17 of the 48 pairings (35%) resulted in at least one assessor changing the score. Pitts et al. (2002) acknowledge that the results of the study are not generalisable due to self-selection of the portfolios used for the study.
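To make the figures in the Pitts et al. studies concrete, the sketch below implements Cohen's kappa for two raters, together with the verbal agreement bands cited from Shrout (1998), and then reproduces the paired-assessment tally reported above. The pass/fail ratings in the example are hypothetical, not data from the studies; the tally counts are taken from the text, but the grouping labels are mine.

```python
# Illustrative sketch only: the ratings below are invented, not data from
# Pitts et al.; the agreement bands follow Shrout (1998) as cited above.

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters beyond that expected by chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal proportions,
    # summed over the rating categories.
    categories = set(rater_a) | set(rater_b)
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

def agreement_band(k):
    """Verbal interpretation of kappa (Shrout, 1998)."""
    if k < 0:
        return "poor"
    for upper, band in [(0.2, "slight"), (0.4, "fair"),
                        (0.6, "moderate"), (0.8, "substantial")]:
        if k <= upper:
            return band
    return "excellent"

# Hypothetical pass/fail judgements by two assessors on eight portfolios.
a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
b = ["pass", "fail", "fail", "pass", "fail", "pass", "fail", "fail"]
k = cohens_kappa(a, b)
print(f"kappa = {k:.2f} ({agreement_band(k)})")  # kappa = 0.53 (moderate)

# Tally of the 48 paired global assessments reported by Pitts et al. (2002).
pairings = {
    "agreed pass, pair passed": 25,
    "agreed fail, pair failed": 6,
    "agreed pass, pair failed": 6,  # both assessors changed after discussion
    "disagreed, pair passed": 7,    # one assessor changed
    "disagreed, pair failed": 4,    # one assessor changed
}
changed = 6 + 7 + 4  # pairings where at least one assessor changed
print(f"{changed}/{sum(pairings.values())} pairings changed "
      f"({100 * changed // sum(pairings.values())}%)")
```

With more than two assessors, the Pitts papers report an overall κ averaged across assessors; under this sketch that corresponds to applying `cohens_kappa` to each pair of raters and averaging the results.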
The results demonstrate that discussion between assessors increases reliability above the levels achieved in individual assessments of professional competence. Pitts et al. (2002) acknowledge that portfolios are subjective documents, including descriptive and reflective components, and that assessment of learning should take this into account. Pitts et al. (2002) also point out the subjective nature of the assessment of portfolios, with assessors bringing something of themselves into the assessment process. Pitts et al. (2002) argue for interpretivism rather than reductionism in the assessment process. They go on to advocate discussion between assessors, focusing on the holistic approach advocated by Gonczi (1994) and Day (1998). They also highlight the case for the use of discussant pairs as a way of increasing otherwise fair inter-rater reliability (Pitts et al., 2002).

Webb et al. (2003) provide a conceptual discussion of their evaluation of the portfolio assessment process, agreeing with Pitts et al. (2002) that qualitative rather than quantitative criteria may be more appropriate because of the nature of the evidence in portfolios.
Webb et al. (2003) suggest that it is difficult to envisage what conventional tests of reliability and validity could be brought to bear on the holistic data presented in portfolios. In their own work, Webb et al. (2003) mapped data against ideas about evaluating rigour in qualitative research. The criteria mapped against included: credibility, transferability, dependability, confirmability, adequacy and appropriateness of data, verification with secondary informants, multiple raters and an audit trail. Webb et al. (2003) suggest the tri-partite meeting is crucial, with the student being able to demonstrate their communication, reflective and analytical skills and the mentor and teacher offering feedback on performance and guidance for future learning. They conclude that it is not possible to apply the concepts of validity (measurement of what is claimed to be measured) and reliability (constancy of measurement) without detailed and objective criteria for grading the evidence. Portfolios contain qualitative rather than quantitative evidence, and assessors make qualitative judgements about this evidence, taking into account what they have learned about the student. Webb et al. (2003) suggest that this qualitative assessment of portfolios needs to be systematic and rigorous.

The key issue raised from the work analysed is the importance of qualitative assessment of a process which is in itself fundamentally holistic.
3. Discussion
In relation to the evaluation of the portfolio as an alternative form of assessment, the key issues elicited from the research were based around the ability to engage in self-directed learning and reflection (Gerrish, 1993; Murrell et al., 1998; Tiwari and Tang, 2003). According to Harris et al. (2001), in their experience, portfolios that require students to reflect on the relationship between their practical experience and theoretical learning can help bridge the theory–practice gap. They suggest that this is achieved not just by applying concepts and principles from learning into practice but also by tri-partite assessment. It is interesting to note that, among the studies analysed, those conducted with pre-registration students highlighted the greatest difficulty in bridging the theory–practice gap, as did the earlier work by Gerrish (1993), conducted at a time when reflective practice was in its infancy. Harris et al. (2001) go on to suggest that structured reflection is particularly useful for the novice reflector, in line with other theorists in this area (Boyd and Fales, 1983; Marland and McSherry, 1997; Johns, 2000). Regarding portfolios and the assessment of competence, the studies analysed suggest that competence statements can facilitate self-directed, individualised
learning by reflection on current practice (Milligan, 1998; McMullan et al., 2003; Scholes et al., 2004). Specific outcomes or competencies which match the professional practice to be assessed, supported by types of evidence that match the student's academic progression, are essential to maximise the portfolio's use as an effective tool to link theory and practice (Gonczi, 1994; Day, 1998). According to Pitts et al. (1999), whilst a substantial amount of literature relating to portfolios exists in many fields, primarily teaching and nursing, psychometric data to support the use of portfolios as a summative assessment tool are sparse and lacking in the majority of published papers. The extensive work carried out by Pitts et al. (1999, 2001, 2002) in this area suggests that inter-rater reliability can be achieved when assessment is closely linked to experienced assessors and when students have explicit guidelines for portfolio construction. A holistic approach can be taken: rather than focusing on reliability and validity in a quantitative way, credibility, transferability, dependability and confirmability can be examined, thereby assessing what is a very qualitative form of assessment in a holistic and qualitative way, while also taking into account the subjectivity of the assessor.
4. Conclusion
Professional practice assessment issues and the ambiguity around how to measure competence have preoccupied nurse educators for years and continue to pose problems (Spence and El-Ansari, 2004). A major challenge in the assessment process is objective measurement, and this is difficult in the assessment of clinical competency. The strategy for assessing competency is the responsibility of those providing nurse education and, according to Dolan (2003), although there are numerous assessment tools available, a comprehensive and effective measure has not been established. Perhaps it is time for those involved in nurse education to embrace the portfolio as a means of assessment, affording it the time required not just for tri-partite assessment but for matching learning outcomes and competence statements that are meaningful and achievable. From the evidence it is clear that portfolio assessment can enhance learning; whether portfolios can measure competence remains inconclusive. What is clear from this review is that the evidence on portfolios as a means of assessment continues to expand. If educators take on board the lessons learned from previous research and apply them to their assessment processes, the difficulties found at present in defining and measuring competence may be reduced. Future research in this area should focus on qualitative methodologies, in keeping with the holistic nature of the portfolio itself.
References
Ball, E., Daly, W.M., Carnwell, R., 2000. The use of portfolios in the assessment of learning and competence. Nursing Standard 14 (43), 35–37.
Benner, P., 1984. From Novice to Expert: Excellence and Power in Clinical Nursing Practice. Addison-Wesley, Menlo Park, California.
Boyd, E., Fales, A., 1983. Reflective learning: key to learning from experience. Journal of Humanistic Psychology 23, 99–117.
Bradshaw, A., 1998. Defining 'competency' in nursing (part II): an analytical review. Journal of Clinical Nursing 7, 103–111.
Brown, S., Race, P., Smith, B., 1996. 500 Tips on Assessment. Kogan Page Ltd., London.
Coles, C., 1994. A review of learner centred education and its applications in primary care. Education for General Practice 5, 19–25.
Corlett, J., 2000. The perceptions of nurse teachers, student nurses and preceptors of the theory–practice gap in nurse education. Nurse Education Today 20 (6), 499–505.
Day, M., 1998. Community education: a portfolio approach. Nursing Standard 13 (10), 40–44.
Department of Health, 1999a. Making a Difference: Strengthening the Nursing, Midwifery and Health Visiting Contribution to Health and Healthcare. HMSO, London.
Department of Health, 1999b. Agenda for Change: Modernising the NHS Pay System. HMSO, London.
Department of Health, 2003. The NHS Knowledge and Skills Framework (NHS KSF) and Development Review. Working Draft. HMSO, London.
Dolan, G., 2003. Assessing student nurse clinical competency: will we ever get it right? Journal of Clinical Nursing 12, 132–141.
Dolan, G., Fairbairn, G., Harris, S., 2004. Is our student portfolio valued? Nurse Education Today 24, 4–13.
Gallagher, P., 2001. An evaluation of a standards based portfolio. Nurse Education Today 21, 409–416.
Gerrish, K., 1993. An evaluation of a portfolio as an assessment tool for teaching practice placements. Nurse Education Today 13, 172–179.
Gonczi, A., 1994. Competency based assessment in the professions in Australia. Assessment in Education 1, 27–44.
Harris, S., Dolan, G., Fairbairn, G., 2001. Reflecting on the use of student portfolios. Nurse Education Today 21, 278–286.
Hart, C., 1998. Doing a Literature Review. SAGE, London.
Honest, H., Bachmann, L.M., Khan, K., 2001. Electronic searching of the literature for systematic reviews of screening and diagnostic tests for preterm birth. European Journal of Obstetrics & Gynaecology and Reproductive Biology 107, 19–23.
Hull, C., Redfern, L., 1996. Profiles and Portfolios: A Guide for Nurses and Midwives. Macmillan Press Ltd., London.
Jarvis, P., 1995. Adult and Continuing Education, 2nd ed. Routledge, London. In: Bradshaw, A., 1998. Defining 'competency' in nursing (part II): an analytical review. Journal of Clinical Nursing 7, 103–111.
Johns, C., 2000. Becoming a Reflective Practitioner. Blackwell Science, Oxford.
Khan, K.S., ter Riet, G., Glanville, J., Sowden, A.J., Kleijnen, J. (Eds.), 2001. Undertaking Systematic Reviews
of Research on Effectiveness: CRD's Guidance for those Carrying Out or Commissioning Reviews. CRD Report 4, 2nd ed. NHS Centre for Reviews and Dissemination, University of York.
Klenowski, V., 2002. Developing Portfolios for Learning and Assessment. RoutledgeFalmer, London.
Koh, L., 2002. Practice based teaching and nurse education. Nursing Standard 23 (16), 38–42.
Marland, G.M., McSherry, W., 1997. The reflective diary: an aid to practice based learning. Nursing Standard 12, 49–52.
McMullan, M., Endacott, R., Gray, M.A., Jasper, M., Miller, C.M.L., Scholes, J., Webb, C., 2003. Portfolios and assessment of competence: a review of the literature. Journal of Advanced Nursing 41 (3), 283–294.
Milligan, F., 1998. Defining and assessing competence: the distraction of outcomes and the importance of educational process. Nurse Education Today 18, 273–280.
Murrell, K., Harris, L., Tomsett, G., 1998. Using a portfolio to assess clinical practice. Professional Nurse 13 (4), 220–223.
Parahoo, K., 1997. Nursing Research: Principles, Process and Issues. Macmillan, London.
Pearce, R., 2003. Profiles and Portfolios of Evidence. Nelson Thornes Ltd., Cheltenham.
Phillips, T., Schostak, J., Bedford, H., Robinson, J., 1993. Assessment of Competencies in Nursing and Midwifery Education and Training (The ACE Project). ENB, London.
Pitts, J., 1996. Pathologies of one to one teaching. Education for General Practice 7, 118–122.
Pitts, J., Coles, C., Thomas, P., 1999. Educational portfolios in the assessment of general practice trainers: reliability of assessors. Medical Education 33, 515–520.
Pitts, J., Coles, C., Thomas, P., 2001. Enhancing reliability in portfolio assessment: 'shaping' the portfolio. Medical Teacher 23 (4), 351–356.
Pitts, J., Coles, C., Thomas, P., Smith, F., 2002. Enhancing reliability in portfolio assessment: discussions between assessors. Medical Teacher 24 (2), 197–201.
Redfern, S., Norman, I., Calman, L., Watson, R., Murrells, T., 2002.
Assessing competence to practice in nursing: a review of the literature. Research Papers in Education 17 (1), 51–77.
Santy, J., Kneale, J., 2000. Critiquing quantitative research. Journal of Orthopaedic Nursing 2 (2), 77–83.
Santy, J., Mackintosh, C., 2000. Assessment and learning in post registration nurse education. Nursing Standard 14 (18), 38–41.
Scholes, J., Webb, C., Gray, M., Endacott, R., Miller, C., Jasper, M., McMullan, M., 2004. Making portfolios work in practice. Journal of Advanced Nursing 46 (6), 595–603.
Sharp, K., Wilcock, S., Sharp, D., MacDonald, H., 1995. A Literature Review on Competence to Practise. School of Nursing, The Robert Gordon University. Edinburgh: National Board for Nursing, Midwifery & Health Visiting for Scotland. In: Redfern, S., Norman, I., Calman, L., Watson, R., Murrells, T., 2002. Assessing competence to practice in nursing: a review of the literature. Research Papers in Education 17 (1), 51–77.
Shrout, P.E., 1998. Measurement reliability and agreement in psychiatry. Statistical Methods in Medical Research 7 (3), 301–317.
Spence, W., El-Ansari, W., 2004. Portfolio assessment: practice teachers' early experience. Nurse Education Today 24, 388–401.
Streubert, H., 2002. Evaluating qualitative research. In: Haber, J., LoBiondo-Wood, G. (Eds.), Nursing Research: Methods, Critical Appraisal and Utilisation, 5th ed. Mosby, St Louis, pp. 165–182.
Tingle, J., 1998. Nurses must improve their record keeping skills. British Journal of Nursing 7 (5), 245.
Tiwari, A., Tang, C., 2003. From process to outcome: the effect of portfolio assessment on student learning. Nurse Education Today 23, 269–277.
United Kingdom Central Council for Nursing, Midwifery and Health Visiting, 1999. Fitness for Practice: The UKCC Commission for Nursing and Midwifery Education. UKCC, London.
Webb, C., Endacott, R., Gray, M.A., Jasper, M.A., McMullan, M., Scholes, J., 2003. Evaluating portfolio assessment systems: what are the appropriate criteria? Nurse Education Today 23, 600–609.
Wenzel, L.S., Briggs, K.L., Puryear, B.L., 1998. Portfolio: authentic assessment in the age of the curriculum revolution. Journal of Nursing Education 37 (5), 208–212.