Systematic review
Clinical performance assessment tools in physiotherapy practice education: a systematic review

A. O'Connor a,∗, O. McGarr b, P. Cantillon c, A. McCurtin a, A. Clifford a

a Department of Clinical Therapies, Health Sciences Building, University of Limerick, Limerick, Ireland
b School of Education, University of Limerick, Limerick, Ireland
c Discipline of General Practice, Clinical Science Institute, National University of Ireland, Galway, Ireland

∗ Corresponding author. Fax: +353 61 234251. E-mail address: [email protected] (A. O'Connor).
Abstract

Background: Clinical performance assessment tools (CPATs) used in physiotherapy practice education need to be psychometrically sound and appropriate for use in all clinical settings in order to provide an accurate reflection of a student's readiness for clinical practice. Current evidence to support the use of existing assessment tools is inconsistent.
Objectives: To conduct a systematic review synthesising evidence relating to the psychometric and edumetric properties of CPATs used in physiotherapy practice education.
Data sources: An electronic search of Web of Science, SCOPUS, Academic Search Complete, AMED, Biomedical Reference Collection, British Education Index, CINAHL plus, Education Full Text, ERIC, General Science Full Text, Google Scholar, MEDLINE, and UK and Ireland Reference Centre databases was conducted, identifying English language papers published in this subject area from 1985 to 2015.
Study selection: 20 papers were identified, representing 14 assessment tools.
Data extraction and synthesis: Two reviewers evaluated selected papers using a validated framework (Swing et al., 2009).
Results: Evidence of psychometric testing was inconsistent and varied in quality. Reporting of edumetric properties was unpredictable in spite of their importance in busy clinical environments. No Class 1 recommendation was made for any of the CPATs, and no CPAT scored higher than Level C evidence.
Conclusions: Findings demonstrate poor reporting of the psychometric and edumetric properties of the CPATs reviewed. A more robust approach is required when designing CPATs. Collaborative endeavour within the physiotherapy profession and interprofessionally may be key to further developments in this area and may help strengthen the rigour of such assessment processes.
© 2017 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
Keywords: Clinical performance assessment; Physical therapy; Physiotherapy; Student; Assessment tool
Introduction

The World Confederation for Physical Therapy (WCPT) stipulates that practice education must account for approximately one third of the overall content of physiotherapy academic programmes [1,2], emphasising its importance in physiotherapy education. Physiotherapy students must "meet the competencies established by the physical therapist professional entry level education programme" [1] and must be
provided with formative and summative feedback during each practice education module [2]. This is achieved through an assessment process in which clinical performance is assessed, on the basis of observation, by a supervising clinician known as a practice educator. Clinical performance assessment has long challenged education providers, both because of the evidence supporting assessment methods and because of the subjective nature of observation-based assessment [3–9]. No literature review to date has synthesised the evidence related to the psychometric testing (validity and reliability) and edumetric properties (feasibility, usefulness and educational impact) of
CPATs used in physiotherapy practice education. Current evidence suggests that psychometric evidence for many of these tools is inconsistent [10–12], with little or no attention paid to their edumetric properties. A recent systematic review in medical education also acknowledged poor reporting of edumetric properties in CPATs [13]. This is despite their importance in determining a tool's practicality and feasibility in the workplace. Lengthy or ambiguously worded assessment tools can frustrate busy clinicians, which in turn can impact on the rigorous completion of student assessments [14]. Psychometric properties are more commonly reported, although not always comprehensively [12,13]. Such properties include content validity, which captures how accurately the learning outcomes described in a CPAT measure various aspects of clinical performance; this is determined by matching selected assessment criteria against published guidelines for physiotherapy entry-level practice [15,16]. Criterion validity assesses the extent to which the measure relates to relevant outcomes. Evidence of construct validity demonstrates that a tool is sensitive enough to detect changes in student performance over time, confirming progression of student learning [16–18]. Additional psychometric properties include inter-rater and intra-rater reliability, which, when present, ensure consistency of grading across a variety of practice educators and practice education sites. Test–retest reliability is also necessary in CPATs to ensure consistent ranking of students on repeated assessment; this is particularly relevant in practice education, where regular observation forms the basis for awarding final grades. Therefore, doubt may be cast on the ability of CPATs with less than acceptable psychometric and edumetric testing to identify both the excelling student and the incompetent or unsafe student. This can result in an assessment process that is potentially unreliable and precarious, with implications for educational programmes, client safety and professional bodies [3,6].

Physiotherapy undergraduate students, like nursing students, are expected to work unsupervised from the time of graduation, unlike medical students, who must complete further postgraduate study, known as an internship, in order to practice independently.

Only one qualitative evaluation framework has been developed to evaluate and synthesise evidence pertaining to the psychometric and edumetric properties of clinical performance assessment tools used in the clinical learning environment. This was developed by the Accreditation Council for Graduate Medical Education (ACGME) in the United States of America [19]. The framework defined guidelines for grading the psychometric and edumetric properties of assessment instruments, as well as outlining a system for assigning an overall evidence grade for each tool. While developed for graduate medical students, it was considered appropriate for use in this study as it lends itself easily to use by other health professions [13], especially given the similarities in expectations of the physiotherapy graduate and the medical intern.
In the absence of other reviews of this kind, the need to evaluate and synthesise the evidence related to the edumetric and psychometric properties of CPATs used in physiotherapy education was deemed essential. The specific aims of this systematic review were to:
1. Identify and synthesise available evidence pertaining to the psychometric and edumetric properties of clinical performance assessment tools used in physiotherapy practice education using the ACGME framework.
2. Discuss the findings within the broader context of health professional clinical performance assessment.
Methods

Search strategy

A systematic review of the literature was undertaken using combinations of search terms (Table S1) to identify English language peer-reviewed papers published from January 1985 to December 2015 relating to CPATs used in physiotherapy. Prior to 1985, few clinical performance assessment tools had been reported in the physiotherapy literature [20,21]. Databases included Web of Science, Academic Search Complete, AMED, Biomedical Reference Collection, British Education Index, CINAHL plus, Education Full Text, ERIC, General Science Full Text, MEDLINE, UK and Ireland Reference Centre, Google Scholar and SCOPUS. Reference lists were examined by hand for further citations and checked for eligibility using the same inclusion and exclusion criteria.

Inclusion criteria
• Any peer-reviewed paper describing a CPAT used in physiotherapy practice education employing observation-based assessment methods, which included reference to psychometric or edumetric testing.
• Experimental and observational studies, randomised and non-randomised designs, and prospective or retrospective cohort studies.
• Full text papers written in the English language.
• Publication date from January 1985 to December 2015.

Exclusion criteria
• Any peer-reviewed paper describing assessment tools where standardised patients, simulated settings, Objective Structured Clinical Examinations or learning portfolios were used to assess clinical performance.
• Assessment tools exclusively from disciplines other than physiotherapy.
• Research studies where the full-text paper was not available.
• Assessment tools involving student self-assessment only.
• Audits based on the use of an assessment tool but without reference to psychometric testing or edumetric properties.
• The development of a tool but no evidence of testing or validation.
• Assessment tools used solely for physiotherapy postgraduate education where specialisation in an area of physiotherapy practice was the subject of assessment.
Table 1. Details of assessment tools.
Name of tool | Associated citations | Region of origin | No. of items | Response type
Examination of Clinical Competence (ECC) | Loomis 1985 [21] | Canada | 40 | Rating scale (3 point)
Blue MACS | Hrachovy et al. 2000 [30] | USA | 50 | Rating scale of achievement (8 point)
Clinical Internship Evaluation Tool (CIET) | Fitzgerald et al. 2007 [17] | USA | 42 | Rating scale
PT CPI (1997 version) | Roach et al. 2002 [15]; Adams et al. 2008 [18]; Proctor et al. 2010 [20] | USA | 24 | 100 mm visual analogue scale
Royal College of Surgeons Ireland tool (RCSI tool) | Meldrum et al. 2008 [29] | Ireland | 36 | % grading
Common Clinical Assessment Form (CAF) | Coote et al. 2007 [28] | Ireland | 40 | % grading
PT MACS | Stickley et al. 2005 [11]; Luedtke-Hoffmann et al. 2012 [22] | USA | 53 | Rating scale and 10 cm visual analogue scale
University of Birmingham tool (UoB) | Cross et al. 2001 [10] | United Kingdom | 10 | Rating scale
Clinical Competence Scale (CCS) | Yoshino et al. 2013 [31] | Japan | 53 items in 7 domains | Rating scale (4 point)
Clinical Performance Assessment Form (CPAF) | Joseph et al. 2011 [25]; Joseph et al. 2012 [26] | South Africa | 8 | Weighted scoring; max score achievable 100
PT CPI (2006 version) | Roach et al. 2012 [16] | USA | 18 | Ordered categorical scale (6 points)
Assessment of Physiotherapy Practice (APP) | Dalton et al. 2011 [23]; Dalton et al. 2012 [24]; Murphy et al. 2014 [14] | Australia | 20 | Ordered categorical scale (5 points)
Canadian Physical Therapy Assessment of Clinical Performance (ACP) | Mori et al. 2015 [27] | Canada | 16 | Rating scale
Clinical Competence Evaluation Instrument (CCEVI) | Muhamad et al. 2015 [12] | Malaysia | 35 | 5 point Likert scale
Study selection and data extraction

An electronic search was completed by one reviewer (AOC). Titles and abstracts of the retrieved studies were screened independently to identify those meeting the inclusion criteria, and a labelling system was applied. Studies labelled 'yes' and 'unsure' were checked independently by a second reviewer (AC) for a decision regarding inclusion. If a decision was unclear from the abstract, the full-text article was obtained for assessment. Any disagreements were resolved through discussion between the reviewers (AOC and AC). A further reviewer (PC) was available for disagreements but was not required. Data extraction involved reviewer (AOC) identifying all empirical evidence related to psychometric and edumetric properties in line with the ACGME framework. The grading for overall recommendation and the criteria for determining the level of evidence are outlined in Table S2.
Reviewer (AOC) independently evaluated all included studies, and reviewer (AC) blind reviewed 25% of these papers. Minor disagreements between the two reviewers were due to ambiguity in the wording of the standards for validity and reliability. The descriptors for each standard were discussed by the two reviewers and consensus was reached. Once agreed, the two reviewers independently rechecked all papers and reached final consensus. After this, reviewer (AOC) reviewed the remaining papers, holding regular meetings with reviewer (AC) to discuss any queries. Reviewer (PC) was not required during the process.
Results

Search results and article overview

The initial search identified 4436 articles. PRISMA guidelines were followed in relation to protocol design and data extraction (Fig. S1). Sixty-two articles met the initial study criteria after title and abstract review. Following full-text review, 20 papers remained, describing 14 CPATs. The characteristics of these tools are summarised in Table 1. Fourteen of the studies were prospective cohort studies and six were retrospective in design. Seven CPATs, the Physical Therapist Clinical Performance Instrument (PT CPI, 1997 and
2006 versions) [15,16,18,20], Physical Therapist Manual for the Assessment of Clinical Skills (PT MACS) [11,22], APP [14,23,24], Clinical Performance Assessment Form (CPAF) [25,26], Assessment of Clinical Performance (ACP) [27] and Common Clinical Assessment Form (CAF) [28], had multiple-institution involvement during testing. Four CPATs had single-institution involvement [17,21,29,30]. Three CPATs, the University of Birmingham tool (UoB) [10], the Clinical Competency Evaluation Instrument (CCEVI) [12] and the Clinical Competence Evaluation Scale (CCS) [31], did not provide enough information to determine institution involvement.

All CPATs identified were similar in layout. Performance criteria (ranging from 8 to 53 items) were outlined for all tools. All but two used visual analogue scales, categorical rating scales or Likert scales for grading students. Two CPATs were graded by assigning a percentage for each performance criterion within the form [28] or per section of the form [29].

Validity evidence

Content validity was described for 12 CPATs and construct validity was outlined for seven (see Table 2). Criterion validity was demonstrated for one assessment tool (ECC). Two CPATs, the APP and the UoB tool, scored highest for validity evidence, both achieving Level A evidence. Both versions of the PT CPI were awarded an overall B grade for validity evidence. The remaining ten CPATs provided insufficient information relating to validity evidence for an overall grade to be assigned.

Reliability evidence

Evidence of internal consistency was available for eight tools (Table 2). One paper described test–retest reliability [12]. Eight tools demonstrated evidence of inter-rater reliability (Table 2). The highest overall grading for evidence of reliability was awarded to the APP and the UoB tool (Level B), while the CAF, CPAF and CCEVI were awarded a Level C grade. There was insufficient information to grade any of the other tools.

Edumetric evidence

All CPATs scored a Level B grade for ease of use, apart from one tool (ACP) which provided insufficient information to judge. The time taken to complete a student assessment using any of the reviewed CPATs was either not reported or exceeded 20 minutes; hence no CPAT was awarded an A grade for this edumetric property. Information regarding the resources required to implement CPATs was inconsistent, with no tool scoring higher than a B grade (see Table 2). Nine of the 14 CPATs were awarded a C grade for evidence of ease of interpretation. No tool provided sufficient information to warrant a grade for educational impact.
Overall ACGME grade and summary of evidence

No CPAT was awarded an overall Class 1 recommendation or Level A evidence. The highest grading for overall recommendation (Table 2) was Class 2, awarded to the PT CPI (1997 version), the APP and the UoB tool. No tool was awarded higher than Level C evidence. The award of a B grade or above required a CPAT to demonstrate published evidence, in a minimum of two settings, of all components required by the framework (validity, reliability, ease of use, resources required, ease of interpretation, educational impact).
Discussion

This systematic review identified and synthesised the available evidence related to psychometric and edumetric testing of 14 CPATs used in physiotherapy practice education, drawn from 20 peer-reviewed papers. Five CPATs were developed in the USA, two in Canada, two in Ireland, and one each in the United Kingdom, Australia, Japan, Malaysia and South Africa. With hundreds of recognised entry-level physical therapy education programmes in existence [32], this represents a low level of publications meeting the inclusion criteria, highlighting the paucity of research in this area. No Class 1 recommendation was made, and no CPAT scored higher than Level C evidence based on the framework criteria. These findings suggest limitations in the robustness of these tools. Evidence from medical and nursing education has emphasised the need for rigorous testing of CPATs, highlighting that inconsistencies in these areas may affect judgments made by assessors and have implications for graduates' readiness for practice and for patient safety [3,6].

Psychometric evidence

Eight CPATs demonstrated internal consistency and inter-rater reliability. No reasons were provided to explain why these tests had not been carried out on the other CPATs, apart from one [16], which reported that, because inter-rater reliability had been established for an earlier version of the tool, it was not necessary to repeat it, although the tool had been significantly modified. Evidence of test–retest reliability was provided for one CPAT [12]. Test–retest reliability is an essential property of any CPAT to ensure robust grading and ranking of students when regular observation of clinical performance is the cornerstone of deciding the final grade awarded. Evidence of validity testing was also inconsistent; some CPATs demonstrated substantial testing (APP and UoB), while others provided little or no evidence (Blue MACS and RCSI tool). In several papers, information regarding the development and validity testing of the tool was only briefly described. This, together with some ambiguity in the wording of the framework standards, led to early discussion between the two reviewers to agree on what was acceptable.
Table 2. ACGME grading for the overall recommendation & level of evidence.
Columns: validity evidence (content validity, construct validity, overall validity grade); reliability evidence (internal consistency, inter-rater reliability, test–retest reliability, overall reliability grade); edumetric properties (ease of use, ease of interpretation, resources required, educational impact); overall evidence grade; overall recommendation class.
Tools graded (rows 1–14): Evaluation of Clinical Competence (ECC); Blue MACS; Physical Therapist Manual for the Assessment of Clinical Skills (PT MACS); Physical Therapist Clinical Performance Instrument (PT CPI, 1997 version); Physical Therapist Clinical Performance Instrument (PT CPI, 2006 version); Royal College of Surgeons Ireland (RCSI) tool; Common Clinical Assessment Form (CAF); University of Birmingham tool (UoB); Assessment of Physiotherapy Practice (APP); Clinical Performance Assessment Form (CPAF); Clinical Competence Scale (CCS); Clinical Internship Evaluation Tool (CIET); Canadian PT Assessment of Clinical Performance (ACP); Clinical Competence Evaluation Instrument (CCEVI).
NI: Not enough information in paper to judge.
Eleven CPATs used Likert scales, rating scales or visual analogue scales to assess student performance, despite recent research identifying difficulties with these scales [33]. Such scales produce ordinal data, which implies that, although the response categories have a ranking, the intervals between the values offered cannot be assumed to be equal. The problem is exacerbated when attempts are made to convert these scores to numerical grades, as means and standard deviations are inappropriate for such data; modes or medians should be employed instead, although this is rarely acknowledged. Experts have described this practice as providing meaningless information and encouraging competition, stress and anxiety among health professional students, rather than promoting continuous progression of learning in the clinical learning environment [33,34].

A further growing body of evidence suggests that psychometric tests alone are insufficient to assess the complexity of clinical performance and can lead to an over-emphasis on standardisation of tests, providing meaningless conversion of behaviours such as communication and team-working skills into numbers [33–35]. It has been suggested that a multifaceted approach to clinical performance assessment may be more robust and may help strengthen the validity and reliability of the assessment process [5,36]. The Mini-CEX and Global Rating Scales are examples of tools used in medicine and nursing which have demonstrated high levels of validity and reliability [13,37,38]. Other clinical performance assessment adjuncts used in physiotherapy education include professional portfolios, student self-assessment and reflective essays. These have been criticised for a lack of psychometric evidence to support their sustained use in a singular context. However, medical education research has recently begun re-examining the benefits of subjective data gathered in the clinical learning environment using sensible multiple sampling. This, employed in conjunction with psychometric tests, may provide a more holistic picture of a student's readiness to practice [9,33,34,37]. The physiotherapy profession could benefit from examining the clinical performance assessment methods employed by other health professions. Existing assessment methods in physiotherapy may also be worth re-exploring as potential components of a clinical performance assessment toolbox rather than as sole assessment methods.

Edumetric evidence

Reporting of edumetric properties was inconsistent across all CPATs. While improvements were often demonstrated in student grades from midway to the end of placement, it was impossible to attribute this to the educational impact of the CPAT used. Evidence for ease of interpretation and ease of use was variable, perhaps explained by the fact that neither was a specific aim of any of the studies. A discrepancy is apparent between the priority placed on edumetric properties by Higher Education Institutions (HEIs) during CPAT development and the importance of these properties to
practice educators in the workplace. HEIs rely heavily on their clinical colleagues to facilitate and assess student learning, and the practicality and feasibility of an assessment tool used in a time-constrained environment must therefore be considered paramount during its development. Findings from this review are similar to those of a recent review in medical education [13], which also criticised the lack of evidence for educational impact and the evaluation of educational outcomes. These findings reiterate the need for greater emphasis on edumetric testing in the early stages of CPAT development.

Collaboration

The APP and the PT CPI (1997 version) each had three associated research studies involving large numbers of participants. Recent work [38] has highlighted the significance of multiple-institution involvement in psychometric testing, demonstrating how validity evidence can accrue for individual assessment tools when multiple studies are involved, e.g. the Mini Clinical Evaluation Exercise [38] and the Global Rating Scale [13]. Rigorous testing is necessary, and would be achievable in physiotherapy if greater emphasis were placed on developing and validating existing tools rather than creating new ones, and on focussing efforts on larger studies. The surfeit of assessment tools that has developed across physiotherapy education has compounded this problem, arising largely from the development of individual tools unique to single institutions. Ongoing development and testing of existing assessment instruments has been advocated [39,40] and should be considered by the physiotherapy profession in order to ensure more rigorous testing of assessment tools. This is especially important in the context of this study's findings and the poor levels of evidence demonstrated for most CPATs.

Nationally agreed assessment tools are relatively new in physiotherapy education. The feasibility and acceptability of the APP (originating from Australia) were explored by a physiotherapy programme in Canada [14], but the tool was not implemented there, as the ACP [27] was developed and introduced around the same time. This suggests that assessment tools may have an inherently localised context, which may make internationally acceptable and agreed assessment tools problematic. Factors complicating this may be the variation across countries in entry requirements to physiotherapy programmes, programme delivery and the final qualification obtained [32]. Whether there is potential for international collaboration in this regard is unclear, but the WCPT may provide the impetus for initiating discussion in this area.

Limitations of the study

The ACGME framework had been employed previously in medical education [13]; those authors also reported ambiguity in the wording of standards within the framework. It is possible that the current
reviewers' interpretation may have varied slightly from that of the original framework, but it was applied consistently throughout the review process for this paper. The reviewers were both physiotherapists who had been working in the area of physiotherapy education for over 10 years; this may have facilitated consensus during the review process.
Conclusion

Findings from this review highlight that further discussion regarding the development of CPATs within physiotherapy practice education is necessary to ensure the rigour of this assessment process. Objective, accurate and consistent student grading by practice educators, who must also prioritise core clinical duties on a day-to-day basis, is critical to assuring readiness for practice and patient safety. Further discussion should occur in the context of evaluating other adjuncts to the clinical performance assessment process. Engagement with medicine and other allied health education providers may offer the physiotherapy profession new perspectives on clinical performance assessment and assist in reinforcing current processes. The WCPT may be key to enabling this discussion, in particular with regard to international collaboration during CPAT development.
Key messages
• This systematic review identifies and synthesises the evidence relating to the psychometric and edumetric properties of clinical performance assessment tools used in physiotherapy practice education.
• Findings highlight that psychometric and edumetric evidence for these assessment tools is reported inconsistently, and these properties require more systematic and rigorous testing procedures in the early stages of assessment tool development.
• Collaborative research effort inclusive of physiotherapy and other health professions may help provide a greater bank of knowledge in clinical performance assessment.
Conflict of interest: None.
Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.physio.2017.01.005.
References
[1] World Confederation for Physical Therapy. WCPT guideline for a standard evaluation process for accreditation/recognition of physical therapist professional entry level education programmes. London, UK: WCPT; 2011. www.wcpt.org/guidelines/accreditation (Access date 12th February, 2017).
[2] World Confederation for Physical Therapy. WCPT guideline for physical therapist professional entry level education. London, UK: WCPT; 2011. www.wcpt.org/guidelines/entry-level-education (Access date 13th February, 2017).
[3] Fahy A, Tuohy D, McNamara M, Butler M, Cassidy I, Bradshaw C. Evaluating clinical competence assessment. Nurs Stand 2011;25(50):42–8.
[4] Govaerts MJB, Schuwirth LWT, Van der Vleuten CPM, Muijtjens AMM. Workplace-based assessment: effects of rater expertise. Adv Health Sci Educ 2011;16(2):151–65.
[5] Kogan JR, Conforti L, Bernabeo E, Iobst W, Holmboe E. Opening the black box of clinical skills assessment via observation: a conceptual model. Med Educ 2011;45(10):1048–60.
[6] Govaerts M, van der Vleuten CP. Validity in work-based assessment: expanding our horizons. Med Educ 2013;47(12):1164–74.
[7] Wu XV, Enskar K, Lee CCS, Wang WR. A systematic review of clinical assessment for undergraduate nursing students. Nurse Educ Today 2015;35(2):347–59.
[8] Franklin N, Melville P. An exploration of the pedagogical issues facing competency issues for nurses in the clinical environment. Collegian 2015;22(1):25–31.
[9] Whitehead C, Kuper A, Hodges B, Ellaway R. Conceptual and practical challenges in the assessment of physician competencies. Med Teach 2015;37(3):245–51.
[10] Cross V, Hicks C, Barwell F. Exploring the gap between evidence and judgement: using video vignettes for practice-based assessment of physiotherapy undergraduates. Assess Eval High Educ 2001;26(3):189–212.
[11] Stickley LA. Content validity of a clinical education performance tool: the Physical Therapist Manual for the assessment of clinical skills. J Allied Health 2005;34(1):24–30.
[12] Muhamad Z, Ramli A, Amat S. Validity and reliability of the clinical competency evaluation instrument for use among physiotherapy students. Pilot study. Sultan Qaboos Univ Med J 2015;15(2):e266–74.
[13] Jelovsek JE, Kow N, Diwadkar GB. Tools for the direct observation and assessment of psychomotor skills in medical trainees: a systematic review. Med Educ 2013;47(7):650–73.
[14] Murphy S, Dalton M, Dawes D. Assessing physical therapy students performance during clinical practice. Physiother Can 2014;66(2):169–76.
[15] Roach K, Gandy J, Deusinger S, Clark S, Gramet P. Task force for the development of student clinical performance instruments: the development and testing of APTA clinical performance instruments. Phys Ther 2002;82(4):329–53.
[16] Roach KE, Frost JS, Francis NJ, Giles S, Nordrum JT, Delitto A. Validation of the Revised Physical Therapist Clinical Performance Instrument (PT CPI): version 2006. Phys Ther 2012;92(3):416–28.
[17] Fitzgerald LM, Delitto A, Irrgang JJ. Validation of the clinical internship evaluation tool. Phys Ther 2007;87(7):844–60.
[18] Adams CL, Glavin K, Hutchins K, Lee T, Zimmerman C. An evaluation of the internal reliability, construct validity, and predictive validity of the Physical Therapist Clinical Performance Instrument (PT CPI). J Phys Ther Educ 2008;22(2):42–50.
[19] Swing S, Clyman S, Holmboe E, Williams R. Advancing resident assessment in graduate medical education. J Grad Med Educ 2009;1:278–86.
[20] Proctor PL, Dal Bello-Haas VP, McQuarrie AM, Sheppard MS, Scudds RJ. Scoring of the Physical Therapist Clinical Performance Instrument (PT-CPI): analysis of 7 years of use. Physiother Can 2010;62(2):147–54.
[21] Loomis J. Evaluating clinical competence of physical therapy students. Part 2: assessing the reliability, validity, and usability of a new instrument. Physiother Can 1985;37(2):91–8.
[22] Luedtke-Hoffmann K, Dillon L, Utsey C. Is there a relationship between performance during physical therapist clinical education and scores on the National Physical Therapy Examination. J Phys Ther Educ 2012;26(2):41–9.
[23] Dalton M, Davidson M, Keating J. The Assessment of Physiotherapy Practice (APP) is a valid measure of professional competence of physiotherapy students: a cross-sectional study with Rasch analysis. J Physiother 2011;57(4):239–46.
[24] Dalton M, Davidson M, Keating JL. The Assessment of Physiotherapy Practice (APP) is a reliable measure of professional competence of physiotherapy students: a reliability study. J Physiother 2012;58(1):49–56.
[25] Joseph C, Hendricks C, Frantz J. Exploring the key performance areas and assessment criteria for the evaluation of students' clinical performance: a Delphi study. S Afr J Physiother 2011;67(2):9–15.
[26] Joseph C, Frantz J, Hendricks C, Smith M. Evaluation of a new clinical performance assessment tool: a reliability study. S Afr J Physiother 2012;68(3):15–9.
[27] Mori B, Brooks D, Norman KE, Herold J, Beaton DE. Development of the Canadian physiotherapy assessment of clinical performance: a new tool to assess physiotherapy students' performance in clinical education. Physiother Can 2015;67(3):281–9.
[28] Coote S, Alpine L, Cassidy C, Loughnane M, McMahon S, Meldrum D, et al. The development and evaluation of a common assessment form for physiotherapy practice education in Ireland. Physiother Irel 2007;28(2):6–10.
[29] Meldrum D, Lydon A, Loughnane M, Geary F, Shanley L, Sayers K, et al. Assessment of undergraduate physiotherapist clinical performance: investigation of educator inter-rater reliability. Physiotherapy 2008;94:212–9.
[30] Hrachovy J, Clopton N, Baggett K, Garber T, Cantwell J, Schreiber J. Use of the Blue MACS: acceptance by clinical instructors and self-reports of adherence. Phys Ther 2000;80(7):652–61.
[31] Yoshino J, Usuda S. The reliability and validity of the Clinical Competence Evaluation Scale in physical therapy. J Phys Ther Sci 2013;25(12):1621.
[32] World Confederation for Physical Therapy. Entry level physical therapy qualification programmes. London, UK: WCPT; 2015. Available from: http://www.wcpt.org/node/33154. [Updated 22 December 2015; Cited 20 March 2016].
[33] Hanson J, Rosenberg A, Lindsey Lane J. Narrative descriptions should replace grades and numerical ratings for clinical performance in medical education in the United States. Front Psychol 2013;4(668):1–10.
[34] Hodges B. Assessment in the post-psychometric era: learning to love the subjective and collective. Med Teach 2013;35:564–8.
[35] Regehr G, Ginsburg S, Herold J, Hatala R, Eva K, Oulanova O. Using standardized narratives to explore new ways to represent faculty opinions of resident performance. Acad Med 2012;87(4):419–27.
[36] Hauer KE, Holmboe ES, Kogan JR. Twelve tips for implementing tools for direct observation of medical trainees' clinical skills during patient encounters. Med Teach 2011;33(1):27–33.
[37] Eva K, Hodges B. Scylla or Charybdis? Can we navigate between objectification and judgement in assessment? Med Educ 2012;46:914–9.
[38] Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. J Am Med Assoc 2009;302(12):1316–26.
[39] Yanhua C, Watson R. A review of clinical competence assessment in nursing. Nurse Educ Today 2011;31(8):832–6.
[40] Lofmark A, Thorell-Ekstrand I. Nursing students' and preceptors' perceptions of using a revised assessment form in clinical nursing education. Nurse Educ Pract 2014;14:275–80.