What influences assessors' internalised standards?


Radiography 22 (2016) e99–e105


C. Poole a, b, *, J. Boland b, 1

a Applied Radiation Therapy Trinity Research Group, School of Medicine, Discipline of Radiation Therapy, Trinity College Dublin, The University of Dublin, Ireland
b School of Medicine, National University of Ireland Galway, Galway, Ireland

Article info

Article history: Received 9 July 2015; Received in revised form 13 November 2015; Accepted 21 November 2015; Available online 19 December 2015.

Keywords: Clinical assessment; Clinical education; Reliability of assessment; Competence; Work-based assessment.

Abstract

Purpose: The meaning assessors attach to assessment criteria during clinical placement is under-researched. While personal beliefs, values or expectations may influence judgements, there is scant evidence of how this manifests in a clinical attachment setting. This research explored the concept and source of internalised standards and how these may influence judgements.

Methods: This study, within the constructivist paradigm, was informed by the principles of grounded theory. Seven radiation therapists, purposefully selected, were interviewed face-to-face using semi-structured interviews. The sample size allowed for the gathering of sufficient data for in-depth thematic analysis, using the functionality of CAQDAS (NVivo 9).

Results: Radiation therapists' judgements when assessing students were influenced by their previous experience. They had different expectations of the appropriate standard for each criterion on students' assessment forms, relating to technical ability, clinical knowledge and attitude. They had their own set of values or expectations which informed 'internalised standards' that influenced their judgements about student performance. Prior experience, both as students and as qualified professionals, influenced these decisions.

Conclusion: Assessment of students' performance may differ depending on the clinician conducting the assessment. Even where assessors are given the same criteria and training, this does not ensure reliability, as judgements are influenced by their internalised standards. This has implications for the design of more appropriate assessor training which recognises and addresses this phenomenon. These results will be of interest to radiation therapists, radiographers, medical educators, allied health professionals and any academic or professional body with responsibility for ensuring that we qualify competent practitioners.

© 2015 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.

Introduction

Clinical placement is a core component of undergraduate education in all medical/health science programmes. Clinical teachers have a pivotal role in facilitating learning. Effective clinical teaching and assessment, however, is a complex role that combines clinical obligations with teaching and management. Some clinical practitioners teach, assess and supervise students in addition to their clinical duties1 and thereby become clinical teachers. Driving learning, assessment influences what students actually learn.2

* Corresponding author. Applied Radiation Therapy Trinity Research Group, School of Medicine, Discipline of Radiation Therapy, Trinity College Dublin, The University of Dublin, Centre for Health Sciences, St. James's Hospital, Dublin 8, Ireland. Tel.: +353 (0) 1 8962973; fax: +353 (0) 1 8963246. E-mail addresses: [email protected] (C. Poole), [email protected] (J. Boland). 1 Tel.: +353 (0) 91493857; fax: +353 (0) 91494519.

A well-designed assessment can be a robust educational tool through which the student can develop and learn by being aware of, and reflecting on, their strengths and weaknesses.3 Observation of a professional's habitual behaviour at work is regarded as one of the few effective methods of accurately assessing the core traits of performance required to perform competently in the working environment.4,5 Performance assessment relates to the uppermost domain of Miller's Pyramid (or Prism) of clinical competence.3,4,6–8 The lower two levels relate to knowledge and application of knowledge. Level 3, coined 'shows how', can be assessed in vitro, whereas level 4, or 'does', involves assessment of performance in the workplace.6,7,9 In recent years, performance or work-based assessment (WBA) has become increasingly important for summative purposes within medical and health science education and training.7,10,11 Educational institutions must therefore implement robust and reliable performance assessment training programmes for assessors.




Background/literature review

Context

Competency based education or training prevails in the design of programmes in healthcare, at undergraduate and postgraduate level and in continuing professional development. When implementing this model, learners are expected to achieve defined competences to stated standards in order to successfully complete the programme.12 Learning outcomes are then developed as part of the curriculum design process, based on the competency framework.13 A 'holistic' or integrated approach to embedding competency is used by a growing number of institutions responsible for the design of professional and educational programmes.14–20 Much of the critique of competency based education relates to its putative tendency towards behaviourism and a potential for instrumentalism. With a holistic approach, competency is no longer a single outcome but rather the result of a developmental process.14,21 This movement from a behaviourist to a holistic approach is evident in many professional curricula where competencies of 'skills, knowledge and attitude' are integrated into the overall learning outcomes.13 In the implementation of the curriculum in practice, learners are required to develop and demonstrate interpersonal skills, professional practice inclusive of ethics, team skills and an ability to adapt to a changing environment, as well as clinical competency.12 This approach reflects a conception of clinical competence as the combination of 'theoretical knowledge, practical skill and humanistic endeavour'.17 Thus clinical competence is the ability to perform in practice, to integrate knowledge and to apply it.22 Assessment of clinical competence should therefore include the assessment of technical skill, problem solving abilities, decision making, clinical knowledge and attitudes.

According to Miller's Pyramid,3,4,6–8 WBA represents the most valid means to assess such competencies. Work-based or performance-based assessment refers to assessment of learners while working in their clinical placement with real patients.23,24 This contrasts with the broader category of competency based assessment, which is generally carried out with learners in more 'controlled' environments with standardised patients and practical examinations (e.g. the Objective Structured Clinical Examination).23 It is with the implementation of the former, WBA, that this study is concerned.

In assessment, five elements are critical to a quality system: validity, reliability, educational impact, acceptability and feasibility.25 These elements are incorporated in the utility model designed by van der Vleuten.26 The concept can be represented as a theoretical equation to determine the utility of any assessment method.26 Components of the equation can be weighted differently, depending on the purpose of the assessment. There is no perfect assessment; instead, assessment designs are a compromise between the five concepts and dependent on the purpose of the assessment.27 Validity of assessment is of paramount importance.
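As an illustration, van der Vleuten's utility model26 is commonly rendered as a multiplicative relation between the five elements named above; the sketch below uses illustrative symbols rather than a quotation from the cited work:

\[
U = R \times V \times E \times A \times F
\]

where U is the utility of the assessment, R reliability, V validity, E educational impact, A acceptability and F feasibility. In practice each term carries a weight reflecting the purpose of the assessment, so that different assessment designs trade these elements off against one another.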
WBA, given its real-life context, has the potential to optimise validity.11,26 When assessing performance, however, concern about reliability, especially inter-rater reliability, is widely reported.28,29 An investigation of physiotherapy clinical educators highlighted variation in their expectations of students, resulting in subjective judgements and compromised reliability.30 Studies conducted in physiotherapy and teacher education suggest the existence of both 'subjective' and 'objective' criteria.31,32 Assessors are influenced by their own values, beliefs and expectations of how a student should perform, leading to an internalisation of criteria before a student's assessment. A 'real' set of criteria and standards can be identified which, while related to the formal criterion-referenced assessments, is based on assessors' own internal standards, of which the student is not aware.31,32
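To make the inter-rater reliability concern concrete, agreement between two assessors grading the same students is often quantified with a statistic such as Cohen's kappa. The short Python sketch below is purely illustrative and is not part of the study reported in this paper; the grades, function and data are hypothetical.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two assessors grading the same students.

    rater_a, rater_b: equal-length sequences of categorical grades.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of students given the same grade by both.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each assessor's marginal grade distribution.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[g] * freq_b[g] for g in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical grades awarded by two assessors to the same ten students.
assessor_1 = ["A", "B", "B", "C", "A", "B", "C", "A", "B", "B"]
assessor_2 = ["B", "B", "C", "C", "A", "B", "C", "B", "B", "A"]
print(f"kappa = {cohens_kappa(assessor_1, assessor_2):.2f}")  # ~0.37 here
```

A kappa close to 1 indicates near-perfect agreement; values well below 1, as in this hypothetical example, reflect the kind of inter-rater variability that the cited studies report.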

The radiation therapy course in Ireland is a four-year honours degree programme. Final year Radiation Therapy (RT) students are currently assessed in final year work placements by clinical RTs, who work in a range of hospitals and conduct assessments for the undergraduate programme in Ireland. Assessor training consists of workshops on the standards expected for each student group, access to a pre-determined list of criteria and associated grading standard expectations, and open discussions of marking-related issues. The criteria relate to students' performance in clinical placement, professionalism, patient management skills and technical ability. The meaning, or relative importance, assessors give to these criteria, however, has not yet been the subject of systematic scrutiny.

The aim of this research study was to explore:
(i) RTs' understanding of the core competencies and minimum standards necessary for final year students;
(ii) the factors that may influence their expectations or internalised standards;
(iii) how judgements are made by the RTs during the assessment process.

Methods

This research can be positioned within a qualitative constructivist paradigm and a social survey was chosen for the research design. This study received ethical approval from the relevant university research committee. In-depth interviews were carried out with RTs responsible for the conduct of assessments for one undergraduate programme. RTs from both the public and private sector included in this study had a range of assessor experience, educational and training backgrounds, and clinical experience, but all met the University's minimum experience requirement of 3 years.

A purposive sampling approach was used to select trained assessors who were 'information rich' and could provide insight into the diversity of perspectives and attitudes towards student training and assessment.33,34 Managers were given details of the proposed research and asked to circulate an invitation, explaining the focus of the research, to RTs who met the inclusion criteria. Interested parties were asked to contact the researcher by e-mail, work phone or personal mobile, after which a suitable time and date for interviews was arranged. Sufficient numbers from each category of assessor RTs volunteered, allowing for a 'maximum variation sample'; adopting this approach, all volunteering participants were included in the study.33,35 Participants were assured of confidentiality and anonymity and could withdraw at any time from the study without penalty. Seven RTs participated in the study, two male and five female; three trained in Ireland and the remainder abroad. Five worked in the public sector and two in the private sector, and their clinical experience ranged from 4 to 20 years.

Two pilot interviews facilitated review and modification of the topic guide and interview style. The pilot interviews were conducted with one experienced assessor (25 years) and one newly trained assessor (3 years). Each pilot interview was scheduled for 30 min; however, both ran over by 10–15 min. Each pilot followed a topic guide and was recorded. This allowed adjustment of the sequence of questions and practice of interview skills. Participants were then interviewed face-to-face using semi-structured interviews lasting 45 min. These were conducted at their place of work, using a topic guide which was adapted where necessary. Written consent was obtained for audio recording. Interviews were recorded and transcribed either by the researcher or by an external transcription service.
Every participant was given a code (e.g. CRT1) and all data were assigned a unique number (e.g. Q1: CRT1.01, Q2: CRT1.02) for the purposes of analysis, quotation and citation.


Each transcript was returned to the participant to verify content. There were no alterations or withdrawals from the study.

An interpretive approach was adopted36 to investigate each participant's personal viewpoint and explore the factors influencing their conception of core competencies and minimum standards.33 Informed by grounded theory, data analysis used an interpretive process to discover how participants made sense of their experiences and how these experiences shaped their assessment practice.37 Thematic analysis of the data was carried out using CAQDAS (NVivo 9), informed by the principles of grounded theory.37 Using the functionality of NVivo, a systematic account of the interview data was built and an analytical strategy was created (Fig. 1). From this, themes were created from the common meaning in the data37 (Fig. 2). The framework of analysis was grounded in the data and analytical memos were created (Fig. 3). Independent checking at the data analysis stage was achieved by involving a colleague, to enhance the validity of the analysis process.35 The risk of personal bias in the interpretation and presentation of results and themes was documented during the research process.33,35

The following results and discussion are offered as a contribution to the literature. We are mindful that relevance and transferability will be determined by the reader, in keeping with the principle of 'naturalistic generalisation', a characteristic of case study research that is also pertinent to this study.38

Results

Factors influencing expectations

The main factors influencing RTs' expectations of the assessment criteria for a student seemed to be their own experience of assessment as students or their experience of clinical placement. Participants often cited their own clinical experience when explaining or justifying their views on the standard they expected a student to attain. These tacit expectations had been internalised, and only became apparent when the participants were asked to reflect on their expectations when assessing. One participant's positive experience of anxiety when she was a student, for example, influenced her perception of more 'relaxed' students.


"I would be quite nervous whenever they said this is your assessment … I think nerves are good as a student, if you're too relaxed I don't think, the most relaxed students are always the best students, I think a little bit of nervous energy can be good to keep you on your toes" (CRT7.01.3).

Another RT believed that you should not have a single final assessment and that it is better to be continuously assessed, just as she was. "I always preferred continuous assessment. I like when … somebody (says) … you've been doing great all week … you don't need to do that one assessment, we've seen you do a hundred prostates" (CRT4.12.4).

Participants trained outside Ireland, i.e. in Northern Ireland (NI), the United Kingdom (UK) and New Zealand (NZ), were asked to reflect on how their personal training differed from the current training programme for Irish students. Those trained in the UK or NZ had personal experience of final year clinical assessments which they referred to as 'competency assessments'. They described having a competency based assessment which involved written assessments, assessment with standardised patients and performance based assessment. For each assessment they either met the standard or not, which resulted in them passing or failing the assessment. These normally took place on one day and the student was judged on their performance on that day. Irish RTs, on the other hand, described being assessed on a continuous basis as part of their WBA. They were directly observed by different assessors across multiple patients and in multiple contexts and were assessed against a list of expected criteria. They were given a grade instead of a pass/fail assessment, based on their entire rotation and not on one day of assessment. They had no experience or awareness of 'competency assessments' that involved standardised patients in controlled environments and referred to their own WBAs as 'continuous placement assessment'. RTs from other training contexts, who had been assessed on the basis of achieving defined competences in a one-day assessment, had a better understanding of the concept of setting standards and of assessing competence.

Participants also had different views on the benefits of students working in various radiotherapy departments. Most felt that exposing students to different working environments was beneficial to their learning and did not perceive a three- to four-week placement as short.

Figure 1. Analytical strategy.



Figure 2. Categories identified and rules of inclusion.

One participant, however, felt students were at a disadvantage. Her own clinical training had required that she work in one department for longer, and she felt that shorter student placements did not allow her the opportunity to assess effectively. "Knowing the student for longer… is a more effective assessment basically… I find it difficult when they are doing a short rotation" (CRT1.10).

One main change reported in the context of current Irish training was the reduction in students' time in radiotherapy departments, compared with participants' own training. RTs felt that students' training in core practical skills, such as patient manipulation and tattooing, was lacking in comparison to their own training. "I don't think the students coming through now are as skillful … we were trained (for) practical training" (CRT2.02). Another believed that a productive six-week placement was as good as an unproductive three-month placement. "(In) my day, literally half the course was spent in a clinical environment … you can still do just as much learning in sort of six good weeks … if all you're doing in three months is being a general dog's body as well" (CRT5.03).

Factors influencing judgement

Participants had different expectations of the appropriate standard for each criterion on the students' assessment forms: technical ability, clinical knowledge and attitude. Participants were asked to describe a 'good' student and to consider what they regarded as the minimum standard for a student to pass their clinical placement. Responses illustrated the differences between assessors regarding what they believed were the important competencies, and at what level these should be demonstrated. While each participant received training on the expected level for each criterion, many assessors applied their own tacit standards. This was more obvious when participants were trying to describe their expectations of minimum standards. Participants found it relatively easy, on the other hand, to list what a good student should be able to do: "I would like a relatively high technical ability" (CRT2.03). "Thinking a little more outside the box … express an opinion or ask a question" (CRT4.03).

While most participants recognised that standards need to be set, they found it difficult to articulate the level at which these should be set.

There was no consensus, moreover, on the essential competencies a student requires for graduation. From the list of competencies cited, only two were agreed on as essential by all participants: (i) management of the patient and (ii) critical thinking ability.

These differences in expectations are further illustrated by participants' descriptions of 'effective communication'. One described good communication as working effectively in a team, whereas another believed it concerned communicating effectively when setting up the patient. Working with others also featured: "Working with the other RT's, that they're communicating like, the position of the patient … they are getting that across" (CRT6.03.1). Some considered that effective communication was demonstrated when students were able to communicate comfortably with patients. "Communication … [that] is definitely a core competency, being able to communicate effectively with the patient" (CRT1.08). One stated that effective communication was a core competency whereas another felt that "Communication skills … (I) would allow that to dip a bit" (CRT2.09). This was because he believed that, at undergraduate level, the ability of the student to set up the patient precisely and accurately was more important than communication. "Once they're qualified and actually in the environment … [they can] … pull their communication skills up" (CRT2.09). "Setting hard and fast rules on whether somebody could be passed or failed for communication I think is dangerous" (CRT3.05.02). There was a sense that some participants wished to exercise discretion, as expert adjudicators in the workplace.

Different expectations were also evident when RTs were asked to explain what they expected a student to do when dealing with an 'angry patient'. Some explained that a 'good' student would be able to handle the situation. Others did not expect them to handle a difficult patient, believing instead that a qualified member of staff should intervene and manage the situation; they felt that it was important to know when to call for assistance. "I would expect them to hand that over to a staff member" (CRT4.09).

Some participants stated that they did not attempt to assess the knowledge level of the student. They never asked questions and presumed that if the student did not ask questions, they knew the answers.


Figure 3. Final themes and categories.




"In reality we do work under the assumption that people understand the methods you are doing" (CRT4.11). RTs did not feel the need to check students' prior learning even though, paradoxically, most stated it was their role to teach students. Others, however, felt it was essential to question students in order to check their knowledge and to assess their critical thinking. They felt it was important to assess technical skills, knowledge and attitude, whereas some felt it was necessary to assess technical skills only. "We question them a lot I suppose …, so we kind of second guess them all the time and that kind of throws them a bit and then you kind of know whether, … [they] are sure of what they, … [have] done or not" (CRT7.03). Some participants considered it the responsibility of the university to ensure effective knowledge acquisition.

These differences further suggest that different standards are being applied in the assessment process and reveal that RTs hold different understandings of, and beliefs about, their role in assessment. Some appreciated the need to assess theoretical knowledge, application of skills and attitude, whereas others felt it was their role to assess technical skills only, thereby potentially compromising the reliability of the assessment.

Discussion

With the expanding adoption of competency based education and training, practising professionals are routinely asked to assess students' or trainees' competence in the workplace. It is important that relevant stakeholders (e.g. academic institutions, professional bodies and regulatory bodies) appreciate how experienced professionals come to judgement. From this study, it is clear that RTs' understanding of core competencies and standards impacts on these judgements and that tacit, internalised standards exist. While a given list of criteria outlines the necessary requirements during an assessment, such lists are inadequate for clarifying standards. Assessors often apply their own version of the criteria, depending on their own influences. Criteria are open to individual interpretation and differing standards are applied in practice.11,39 Assessors' own standards, beliefs and values influence their judgements during the assessment process.32,40 Effectively, when holistic assessment is used, assessors apply their own standards for each criterion, depending on which aspects they consider to be the most important.

Factors influencing expectations include assessors' own training and experience of assessment as students. Some participants in this study were trained and educated outside Ireland and this influenced how they made judgements. The significance of this factor is also suggested by Johnson,39 who reports that when judgements are being made, everyone draws on their previous experience. Participants referred to their own assessment experience when articulating and justifying their views on what a 'good' student is expected to do, or on how they practised as assessors. This confirms the theory that prior experiences influence assessors' approach to clinical assessments.41 Belinsky's41 study of radiographers suggests that, as assessors, they might have a more positive attitude toward clinical assessments if their own experiences as a student or trainee had been positive. Tan et al.22 also discuss internal bias. Each individual has a diversity of clinical experience and educational background which may conflict with institutional norms, or they may have personal philosophies that influence their ideas regarding clinical competence.22 Participants also exhibited an internal bias towards what they perceived as the appropriate competency level.

While inter-rater reliability may be easier to achieve with pass/fail decisions on individual competences, achieving reliability is more challenging in a more holistic approach to the assessment of performance. This has implications because, internationally, medical and healthcare education is paying more attention to the achievement of generic competences, domains and capabilities, including the growing adoption of 'entrustment' as a basis for assessment.42–44 Internalised standards impact on an assessment process and this key factor needs to inform the design of assessor training programmes and improved assessment strategies. It is the shared responsibility of academic institutions and the professions to ensure that we qualify competent practitioners through robust performance assessment design. These stakeholders have ultimate responsibility for training assessors to ensure that this goal is achieved, in the interest of patient safety.

This study supports the extensive literature on the difficulties of ensuring valid and reliable assessment. Threats to inter-rater reliability in high-stakes assessments, which can be due to the subjectivity and bias of assessors, represent a significant source of error.6,17,31,32,39–41,45–47 Reliability is one important aspect of validity. Assessors can adopt their own standards, based on beliefs and values, that influence the judgements they make during an assessment.32,40 The quality of an assessment depends, inter alia, on an assessor's ability to use the assessment instruments provided to make a judgement on performance.48 Their judgement may be compromised by considering evidence that is not relevant to the competency they are assessing, or they may neglect aspects of the performance which are important.48 While it is challenging to determine how assessors make complex judgements, it is clear that their personal beliefs and expectations can affect the quality of a judgement.

There is a paucity of research on this particular dynamic and its influence on internalised standards. Most of the literature in this area recognises and discusses the subjectivity and bias of assessors and tries to offer solutions.41 More research is needed into the factors influencing these attitudes and beliefs, and into how an assessor's past experience shapes them. We also argue that much more attention needs to be devoted to the design of performance assessment instruments and to assessor training which offers opportunities to recognise and address the issue of internalised standards.

Conclusion

RTs are applying their own internalised standards when assessing students in clinical placement, and these standards seem to be influenced by their own personal experience, especially as former students or trainees. It is possible, if not probable, in light of these findings, that students are being assessed differently depending on the internalised standards of the assessor. This study provides further evidence of how internalised standards influence assessors' judgements. Moreover, it is likely that these influences are present among other medical and healthcare professionals, wherever practitioners are required to judge a student's performance during a clinical placement.

There are many factors that can affect the reliability of assessment, and an assessor's own standards will have a bearing. It is vital, therefore, that exploration of assessor beliefs, expectations and the rationale behind their judgements be an integral element of training. Govaerts and van der Vleuten's10 proposal for an 'interpretivist' approach to WBA is perhaps one fruitful avenue to achieve this. This will prompt a refocus of current assessor training to include workshops using evidence-based methodologies that engage the learner and make them more self-aware.


Further research is warranted for a greater appreciation of the internalised standards held by practising professionals, in order to inform the clarification of standards, the development of assessment processes and instruments, and the design of assessor training for WBA.

Conflict of interest statement

The authors declare that they have no conflict of interest.

Funding

There was no funding or financial grant for this study.

References

1. Ramani S, Leinster S. AMEE guide no. 34: teaching in the clinical environment. Med Teacher 2008;30(4):347–64.
2. Howley LD. Performance assessment in medical education: where we've been and where we're going. Eval Health Prof 2004;27:285–303.
3. Crossley J, Humphris, Jolly B. Assessing health professionals. Med Educ 2002;26:800–4.
4. Epstein RM, Hundert EM. Defining and assessing professional competence. J Am Med Assoc 2002;287:226–35.
5. Cox K. Examining and recording clinical performance: a critique and some recommendations. Educ Health 2000;13:45–52.
6. Wass V, Van der Vleuten C, John S, Jones R. Assessment of clinical competence. Lancet 2001;357:945–9.
7. Brown N, Doshi M. Assessing professional and clinical competence: the way forward. Adv Psychiatric Treat 2006;12:81–91.
8. Yielder J, Thompson A, De Bueger T. Re-thinking clinical assessment: what can we learn from the medical literature? Radiography 2012;18(4):296–300.
9. Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Med Teacher 2010;32(8):676–82.
10. Govaerts M, van der Vleuten CPM. Validity in work-based assessment: expanding our horizons. Med Educ 2013;47(12):1164–74.
11. Crossley J, Johnson G, Booth J, Wade W. Good questions, good answers: construct alignment improves the performance of workplace-based assessment scales. Med Educ 2011;45(6):560–9.
12. Murray E, Gruppen L, Catton P, Hays R, Woolliscroft JO. The accountability of clinical education: its definition and assessment. Med Educ 2000;34(10):871–9.
13. Harden RM, Crosby JR, Davis MH. AMEE guide no. 14: outcome-based education: part 1 – an introduction to outcome-based education. Med Teacher 1999;21(1):7–14.
14. Kerka S. Competency-based education and training: myths and realities. 1998. http://www.eric.ed.gov/ERICWebPortal/custom/portlets/recordDetails/detailmini.jsp? [accessed 24.03.10].
15. Gonczi A. Competency based assessment in the professions in Australia. Assess Educ Princ Policy Pract 1994;1(1):27–42.
16. Hager P. Competency standards – a help or hindrance? An Australian perspective. J Vocat Educ Train 1995;47(2):141–51.
17. Cassidy S. Interpretation of competence in student assessment. Nurs Stand 2009;23(18):39–46.
18. O'Connor T, Fealy GM, Kelly M, McGuinness MA, Timmins F. An evaluation of a collaborative approach to the assessment of competence among nursing students of three universities in Ireland. Nurse Educ Today 2009;29:493–9.
19. Thorkildsen K, Råholm M-B. The essence of professional competence experienced by Norwegian nurse students: a phenomenological study. Nurse Educ Pract 2010;10(4):183–8.
20. Beckett D. Embodied competence and generic skill: the emergence of inferential understanding. Educ Philosophy Theory 2004;36(5):497–508.
21. Ng C, White P, McKay JC. Establishing a method to support academic and professional competence throughout an undergraduate radiography programme. Radiography 2008;14(3):255–64.


22. Tan K, Dawdy K, Di Prospero L. Understanding radiation therapists' perceptions and approach to clinical competence assessment of medical radiation sciences students. J Med Imaging Radiat Sci 2013;44(2):100–5.
23. Rethans J-J, Norcini JJ, Barón-Maldonado M, et al. The relationship between competence and performance: implications for assessing practice performance. Med Educ 2002;36(10):901–9.
24. Baartman LKJ, Bastiaens TJ, Kirschner PA, van der Vleuten CPM. Evaluating assessment quality in competence-based education: a qualitative comparison of two frameworks. Educ Res Rev 2007;2(2):114–29.
25. Day A. The practical examination in chemical pathology: current role and future prospects. J Clin Pathol 2008;61:545–7.
26. van der Vleuten CPM, Schuwirth LWT. Assessing professional competence: from methods to programmes. Med Educ 2005;39(3):309–17.
27. McKinely RK, Fraser CR, Baker R. Model for directly assessing and improving clinical competence and performance in revalidation of clinicians. Br Med J 2001;322:712–5.
28. Kogan JR, Conforti L, Bernabeo E, Iobst W, Holmboe E. Opening the black box of clinical skills assessment via observation: a conceptual model. Med Educ 2011;45(10):1048–60.
29. Govaerts MB, van der Vleuten CM, Schuwirth LT, Muijtjens AM. Broadening perspectives on clinical performance assessment: rethinking the nature of in-training assessment. Adv Health Sci Educ 2007;12(2):239–60.
30. Cross V, Hicks C. What do clinical educators look for in physiotherapy students? Physiotherapy 1997;83(5):249–60.
31. Alexander HA. Physiotherapy student clinical education: the influence of subjective judgements on observational assessment. Assess Eval High Educ 1996;21(4):357–66.
32. Hay PJ, Macdonald D. (Mis)appropriations of criteria and standards-referenced assessment in a performance-based subject. Assess Educ Princ Policy Pract 2008;15(2):153–68.
33. Fraenkel JR, editor. How to design and evaluate research in education. 7th ed. New York: McGraw-Hill; 2008.
34. Advice P. Study design in qualitative research – 2: sampling and data collection strategies. Educ Health 2000;13(2):263–71.
35. DePoy EA, Gitlin LN. Introduction to research: understanding and applying multiple strategies. USA: Mosby; 1998.
36. Carter Y, Shaw S, Thomas C. An introduction to qualitative methods for health professionals. London: Royal College of General Practitioners; 1999.
37. Ezzy D. Qualitative analysis: practice and innovation. London: Routledge, Taylor and Francis Group; 2002.
38. Johansson R. Case study methodology. In: The international conference on methodologies in housing research, Stockholm; 2003. http://www.psyking.net/htmlobj-3839/case_study_methodology-_rolf_johansson_ver_2.pdf [accessed 03.10.15].
39. Johnson M. Exploring assessor consistency in a health and social care qualification using a sociocultural perspective. J Vocat Educ Train 2008;60(2):173–87.
40. Chambers MA. Some issues in the assessment of clinical practice: a review of the literature. J Clin Nurs 1998;7(3):201–8.
41. Belinsky SB, Tataronis GR. Past experiences of the clinical instructor and current attitudes toward evaluation of students. J Allied Health 2007;36(1):6.
42. ten Cate O. Nuts and bolts of entrustable professional activities. J Grad Med Educ 2013;5(1):157–8.
43. ten Cate O, Chen HC, Hoff RG, Peters H, Bok H, van der Schaaf M. Curriculum development for the workplace using entrustable professional activities (EPAs): AMEE guide no. 99. Med Teach 2015:1–20.
44. General Medical Council and Academy of Medical Royal Colleges. Developing a framework for generic professional capabilities – a public consultation. 2015. http://www.gmc-uk.org/Developing_a_framework_for_generic_professional_capabilities_form_English_writeable_final_distributed.pdf_61568131.pdf [accessed 04.10.15].
45. Dudek NL, Marks MB. Failure to fail: the perspectives of clinical supervisors. Acad Med 2005;80(10):84–7.
46. Duffy K. Failing students: a qualitative study of factors that influence decisions regarding assessment of students' competence in practice. 2003. http://www.nmc-uk.org/Documents/Archived%20Publications/1Research%20papers/Kathleen_Duffy_Failing_Students2003.pdf [accessed 06.08.11].
47. Shapton M. Failing to fail students in the caring profession: is assessment process failing the professions? J Pract Teach Learn 2007;7(2):39–53.
48. Nijveldt M, Beijaard D, Brekelmans M, Wubbels T, Verloop N. Assessors' perceptions of their judgement processes: successful strategies and threats underlying valid assessment of student teachers. Stud Educ Eval 2009;35(1):29–36.