Journal of Interprofessional Education & Practice 8 (2017) 23–27
Assessing perceptions of interprofessional education and collaboration among graduate health professions students using the Interprofessional Collaborative Competency Attainment Survey (ICCAS)

Rhonda Schwindt, DNP, RN, PMHNP-BC a,*; Jon Agley, PhD, MPH b,c; Angela M. McNelis, PhD, RN, FAAN, ANEF, CNE a; Karen Suchanek Hudmon, DrPH, MS, RPh d; Kathy Lay, PhD e; Maureen Bentley, MSN, PMHNP-BC f
a George Washington University School of Nursing, USA
b Indiana University School of Public Health, USA
c Indiana Prevention Resource Center, USA
d Purdue University College of Pharmacy, USA
e Indiana University School of Social Work, USA
f Eskenazi Health, USA
Article history: Received 15 August 2016; received in revised form 17 March 2017; accepted 25 May 2017.

Abstract
Background: Advancing the science of interprofessional education (IPE) and its impact on collaborative practice requires measurement instruments with robust psychometric properties.
Purpose: Investigators assessed the extent to which the Interprofessional Collaborative Competency Attainment Survey (ICCAS) is valid and reliable when used to assess self-reported attitudes about IPE and perceived ability to engage in collaborative practice among graduate health professions students.
Method: A purposive sample of 40 graduate students enrolled in nurse practitioner (n = 16), social work (n = 15), and pharmacy (n = 9) programs was recruited within two large public Midwestern universities in the United States. All participants completed the ICCAS following interprofessional training designed to encourage a collaborative approach to tobacco dependence treatment for individuals with mental illness.
Discussion: The ICCAS was an appropriate instrument for our population of students.
Conclusions: We recommend further research with a larger sample size to further examine the properties of the ICCAS.
© 2017 Elsevier Inc. All rights reserved.
1. Introduction

Interprofessional education (IPE) improves the knowledge, attitudes, and teamwork skills of health professions students and licensed clinicians, and collaborative practice has been shown to reduce errors, improve the quality of care, enhance patient satisfaction, and decrease health care costs.1 Insufficient evidence
* Corresponding author. George Washington University School of Nursing, 1919 Pennsylvania Ave. NW, Washington, DC 20006, USA. E-mail address: [email protected] (R. Schwindt).
http://dx.doi.org/10.1016/j.xjep.2017.05.005
2405-4526/© 2017 Elsevier Inc. All rights reserved.
exists, however, to link IPE with changes in interprofessional (IP) collaborative practice behaviors, system processes, or patient outcomes.1–3 Despite a conceptual history originating as early as World War II, several barriers remain that impede the successful operationalization of IPE in the academic setting, IP collaborative practice in the clinical arena, and the establishment of a connection between the two.1,3–5 Measurement instruments with robust psychometric properties that capture the construct(s) of interest are essential to the systematic evaluation of IPE and its impact on IP collaborative practice. To assist in the advancement of measurement metrics, we assessed the construct validity and reliability of the Interprofessional Collaborative
Competency Attainment Survey (ICCAS) for assessing self-reported attitudes about IPE and perceived ability to engage in IP collaborative practice among nurse practitioner (NP), social work (SW), and pharmacy (PharmD) students enrolled at two large Midwestern universities in the United States.

2. Background

Currently, no measurement instrument is widely recognized as the gold standard for assessing IPE and its impact on IP collaborative practice and health outcomes.2 Reeves and colleagues3 reported insufficient evidence to support the effectiveness of IPE, in part because of the lack of homogeneity among outcome measures. Their findings were consistent with a previously published analysis of 33 survey instruments in which the majority (78%) had been used only once, rendering it difficult to draw generalizable inferences. In addition, no study had assessed the full complement of IPE competencies.6 An expanded systematic review of the literature revealed similar concerns. Brashers and colleagues1 recommended the careful selection of objective and relevant outcome measures for future studies, and other researchers have concluded that the psychometric integrity of IPE instruments is inadequate.7,8 Gillan et al.'s6 systematic review included studies of six IP measurement tools developed after 1990 that examined both internal reliability and construct validity to a meaningful degree. Several of those tools yielded good metrics but were tested with populations outside the scope of American health professions students.
For example, the Team Climate Inventory (TCI) was found to be internally reliable with hospital management teams consisting of medical and non-medical personnel.9 The Nurse-Physician Collaboration Scale (NPCS) demonstrated good test-retest reliability and excellent internal consistency with practicing Japanese healthcare professionals,10 and the Demand-Driven Learning Model (DDLM) evaluation tool for web-based learning had excellent internal reliability with adult learners across all fields of study.11 The reliability of tools that are more pertinent to NP, SW, and PharmD students in the United States has been evaluated by several investigators. The Attitude Toward Health Care Teams Scale (ATHCTS), part of which underpins the ICCAS, was tested in a three-phase study and found to be face-valid and internally reliable (depending on phase) with professionals including physicians, advanced practice nurses, and social workers.12 Likewise, the Readiness for Interprofessional Learning Survey (RIPLS), which also was used in the construction of the ICCAS, was an internally reliable measure of three IPE factors when pilot-tested with 120 students from 8 different health professions.13 Lê et al.14 tested an IPE measurement tool based on the RIPLS with 29 nursing, medicine, and pharmacy students, but recommended caution in interpreting the factor analysis because the KMO value was below 0.50 for the post-questionnaire. The ICCAS was developed to assess self-reported competencies of IP collaborative practice in IPE programs.
The survey consists of 20 items which were selected and modified from existing instruments (i.e., ATHCTS, Interdisciplinary Education Perception Scale [IEPS], and RIPLS) to evaluate team roles and team-based approaches to patient care.15 Because many of the health-profession-student-specific IPE assessment instruments were developed prior to the year 2000, the ICCAS reflects a modern set of IPE competencies developed by the Canadian Interprofessional Health Collaborative.15 While instruments such as the RIPLS and ATHCTS have good psychometric properties, the ICCAS is grounded in a theoretical framework and contains items from multiple validated scales. The original validation study for the ICCAS used a multidisciplinary sample of health professionals that included nurses, social workers, and pharmacists.15 Investigators concluded
that it was a valid and reliable tool for use with health professionals in aggregate. The applicability of the instrument for NP, SW, and PharmD students, including identification of underlying factors, is less clear. Fewer than 10% of participants in the 2014 validation study were SWs or pharmacists, and of those, not all were students. Although NPs comprised a quarter of the original study sample, all the respondents either worked or were enrolled in education programs in Canada or New Zealand. Researchers have identified more job dissatisfaction and burnout risk among nurses employed in the United States than in Canada,16 as well as differences in the operationalization of the advanced practice role in New Zealand.17 These differences may have impacted how respondents completed measures of collaborative efforts within healthcare and, as a result, make generalization to students in the United States more difficult.
3. Methods

We conducted a pilot study with a purposive sample of 40 graduate students enrolled in NP (n = 16), SW (n = 15), and PharmD (n = 9) programs within two large, public Midwestern universities in the United States. Data were collected after students completed five hours of training consisting of a 1-hour online module about IP collaborative practice and the treatment of tobacco use and dependence for the targeted population, a 3-hour live session with all disciplines together, a simulation experience with a standardized patient, and a faculty-facilitated group debriefing session. Within one week of the training, all participants completed the ICCAS via an online data management system. Study methods were approved by appropriate institutional review boards.
4. Data analyses

For each of the 20 ICCAS items,15 we computed central tendency data (Table 1). Response options ranged from 1 = Strongly Disagree to 7 = Strongly Agree. Before beginning the first set of analyses, we excluded five cases with one or more missing responses for any of the ICCAS items. To characterize overall self-rated competency with IPE, we assessed the underlying structure of the scale using exploratory factor analysis (EFA) with principal axis factoring.18 We applied an oblique rotation in accordance with our hypothesis that any underlying factors would be related.19 The sample size was insufficient for a highly generalizable execution of EFA with 20 items; however, small samples are feasible for EFA when sampling adequacy is good and the sample exhibits high communality values.19 We assessed the Kaiser-Meyer-Olkin (KMO) value for the overall analysis and removed items iteratively, beginning with those with the lowest individual KMO value, until the overall KMO value exceeded 0.80, indicating good sampling adequacy.20 As a result, we removed items 4, 5, 9, 2, and 3 sequentially from the analyses; each of these exhibited initial individual KMO values of less than 0.50 (Table 1). Reliability of the scales represented by the extracted factors was assessed with Cronbach's alpha; an item would have been dropped where removing it substantially improved a scale's reliability, but post hoc we noted that this step was not required. Scores for both identified factors were calculated for the overall sample and for each profession. Differences between professions were examined using a one-way ANOVA after checking for extreme non-normality, homogeneity of variance, and outliers. We also applied a Kruskal–Wallis test post hoc21 to verify our findings. All analyses were completed in SPSS version 22.
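The item-screening pipeline described above (iterative KMO-based pruning, followed by a Cronbach's alpha reliability check) can be sketched in a few lines. The analyses in this study were run in SPSS; the code below is an illustrative re-implementation on synthetic single-factor data, and the function names and data are ours, not the study's.

```python
import numpy as np

def kmo(x):
    """Overall and per-item Kaiser-Meyer-Olkin sampling adequacy."""
    r = np.corrcoef(x, rowvar=False)              # item correlation matrix
    inv = np.linalg.inv(r)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                            # anti-image (partial) correlations
    np.fill_diagonal(r, 0.0)
    np.fill_diagonal(partial, 0.0)
    r2, p2 = r ** 2, partial ** 2
    per_item = r2.sum(axis=0) / (r2.sum(axis=0) + p2.sum(axis=0))
    overall = r2.sum() / (r2.sum() + p2.sum())
    return overall, per_item

def prune_items(x, threshold=0.80):
    """Iteratively drop the item with the lowest individual KMO
    until the overall KMO exceeds the threshold."""
    keep = list(range(x.shape[1]))
    overall, per_item = kmo(x[:, keep])
    while overall < threshold and len(keep) > 3:
        keep.pop(int(np.argmin(per_item)))
        overall, per_item = kmo(x[:, keep])
    return keep, overall

def cronbach_alpha(x):
    """Cronbach's alpha for the items (columns) of one scale."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Synthetic data standing in for Likert-type responses: six items
# driven by one latent trait plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))
items = latent + 0.4 * rng.normal(size=(300, 6))

keep, overall = prune_items(items)
print(f"items kept: {keep}, overall KMO = {overall:.2f}, "
      f"alpha = {cronbach_alpha(items[:, keep]):.2f}")
```

With strongly one-dimensional data like this, no items should need pruning and alpha will be high; with weakly correlated items, the loop mirrors the sequential removals reported in this section.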
Table 1
Central tendency data: IPE questionnaire (n = 40). After participating in the learning activities, I am able to… Mean (SD)

1) Promote effective communication among members of an interprofessional (IP) team.  6.60 (0.871)
2) Actively listen to IP team members' ideas and concerns.  6.85 (0.362)
3) Express my ideas and concerns without being judgmental.  6.80 (0.405)
4) Provide constructive feedback to IP team members.  6.50 (1.240)
5) Express my ideas and concerns in a clear, concise manner.  6.74 (0.498)
6) Seek out IP team members to address issues.  6.45 (1.395)
7) Work effectively with IP team members to enhance care.  6.48 (1.377)
8) Learn with, from and about IP team members to enhance care.  6.55 (1.300)
9) Identify and describe my abilities and contributions to the IP team.  6.58 (1.196)
10) Be accountable for my contributions to the IP team.  6.50 (1.281)
11) Understand the abilities and contributions of IP team members.  6.50 (1.261)
12) Recognize how others' skills and knowledge complement and overlap with my own.  6.70 (0.608)
13) Use an IP team approach with the patient to assess the health situation.  6.58 (0.931)
14) Use an IP team approach with the patient to provide whole person care.  6.40 (1.429)
15) Include the patient/family in decision-making.  6.48 (1.219)
16) Actively listen to the perspectives of IP team members.  6.49 (1.275)
17) Take into account the ideas of IP team members.  6.41 (1.601)
18) Address team conflict in a respectful manner.  6.23 (1.677)
19) Develop an effective care plan with IP team members.  6.50 (1.414)
20) Negotiate responsibilities within overlapping scopes of practice.  6.35 (1.189)

Note: Response options range from 1 (Strongly Disagree) to 7 (Strongly Agree). Questions 5, 16, 17, and 18 were each missing one respondent.
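The between-group comparison strategy laid out in the Data analyses section (a one-way ANOVA after assumption checks, with a Kruskal–Wallis cross-check) can be sketched as follows. The scores here are synthetic placeholders that only loosely echo the per-profession means reported later; they are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-student factor scores for three professions (synthetic).
rng = np.random.default_rng(1)
np_scores = np.clip(rng.normal(6.9, 0.3, 13), 1, 7)      # nurse practitioner
pharm_scores = np.clip(rng.normal(6.5, 0.5, 9), 1, 7)    # pharmacy
sw_scores = np.clip(rng.normal(6.4, 0.9, 15), 1, 7)      # social work
groups = [np_scores, pharm_scores, sw_scores]

# Homogeneity of variance (Levene's test) before trusting the ANOVA.
_, levene_p = stats.levene(*groups)

# Parametric comparison: one-way ANOVA across professions.
f_stat, anova_p = stats.f_oneway(*groups)

# Non-parametric cross-check that drops the normality and
# equal-variance assumptions.
h_stat, kw_p = stats.kruskal(*groups)

print(f"Levene p = {levene_p:.3f}; ANOVA F = {f_stat:.2f}, p = {anova_p:.3f}; "
      f"Kruskal-Wallis H = {h_stat:.2f}, p = {kw_p:.3f}")
```

Running the non-parametric test alongside the ANOVA, as the authors did, guards against false conclusions when group sizes are small and Levene's test is violated.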
4.1. Exploratory factor analysis

Before finalizing the results of the analysis, we reassessed the adequacy of the data for EFA. The correlation matrix revealed a large number of correlated variables, the Kaiser-Meyer-Olkin (KMO) value was good (0.82), and Bartlett's Test of Sphericity was significant (χ² = 877.9, df = 105, p < 0.001). Although the sample size was smaller than suggested for EFA (n < 50), there were at least 3 factor loadings > 0.45 on each factor, suggesting factor reliability.22 Principal axis factoring revealed two factors with eigenvalues greater than 1 (11.4 and 1.2, respectively). The appropriateness of selecting two factors was cross-checked using a scree plot and a factor plot in rotated factor space; both images also suggested two underlying factors. These factors accounted for 76.2% and 7.8% of total variance, respectively, although cumulative variance explained cannot be estimated because the factors are correlated and the percentages therefore are not additive. The loadings for each factor are displayed in Table 2. Both factors had excellent internal reliability statistics (Factor 1: α = 0.97; Factor 2: α = 0.96). Interpretability of factor meanings is largely subjective, and labeling of factors is based on face validity (e.g., what "makes sense"). In this case, we have labeled Factor 1 as "Perceived Ability to Provide
Interprofessional Care" and Factor 2 as "Perceived Ability to Work as Part of an Interprofessional Team." The two factors were correlated at 0.77.

4.2. Mean scores and between-group comparisons

For each student who responded to the 15 items that were retained (n = 37), mean scores were calculated for each factor (Table 3). In general, all students reported high perceived ability to provide IP collaborative care (NP M = 6.86; SW M = 6.32; and PharmD M = 6.54). After removing a single univariate outlier (z > 3.30) and checking for non-normality using histograms, a one-way ANOVA was conducted. The assumption of homogeneity of variance was met, and the between-subjects score indicated no significant differences between student groups in terms of perceived ability to provide IP collaborative care (F(2,33) = 1.38, p = 0.27). All students perceived themselves as capable of working as part of an IP collaborative team, with NP students reporting the highest score (M = 6.90), followed by PharmD (M = 6.56) and SW (M = 6.39) (F(2,34) = 2.05, p = 0.15). Finally, because sample sizes in each group were small and normality was not optimal, and because Levene's test was violated for Factor 2, we verified the findings
Table 2
Factor loading scores (Factor 1 / Factor 2).

1) Promote effective communication among members of an interprofessional (IP) team.  0.965 / 0.088
6) Seek out IP team members to address issues.  0.967 / 0.031
7) Work effectively with IP team members to enhance care.  1.047 / 0.136
8) Learn with, from and about IP team members to enhance care.  0.696 / 0.324
10) Be accountable for my contributions to the IP team.  0.181 / 0.798
11) Understand the abilities and contributions of IP team members.  0.058 / 0.952
12) Recognize how others' skills and knowledge complement and overlap with my own.  0.282 / 0.641
13) Use an IP team approach with the patient to assess the health situation.  0.830 / 0.150
14) Use an IP team approach with the patient to provide whole person care.  0.783 / 0.175
15) Include the patient/family in decision-making.  0.016 / 0.781
16) Actively listen to the perspectives of IP team members.  0.078 / 0.903
17) Take into account the ideas of IP team members.  0.225 / 0.785
18) Address team conflict in a respectful manner.  0.371 / 0.279
19) Develop an effective care plan with IP team members.  0.952 / 0.040
20) Negotiate responsibilities within overlapping scopes of practice.  0.765 / 0.203

Note: Each item loads on the factor with the larger loading (indicated by a gray background in the original table).
Table 3
Overall ICCAS factor scores, Mean (SD).

All students (n = 37): Factor 1 (Perceived Ability to Provide Interprofessional Care) 6.56 (0.86); Factor 2 (Perceived Ability to Work as Part of an Interprofessional Team) 6.61 (0.69)
NP students only (n = 13): Factor 1: 6.86 (0.28); Factor 2: 6.90 (0.28)
Pharmacy students only (n = 9): Factor 1: 6.54 (0.45); Factor 2: 6.56 (0.69)
SW students only (n = 15): Factor 1: 6.32 (1.25); Factor 2: 6.39 (0.87)

Note: Response options range from 1 (Strongly Disagree) to 7 (Strongly Agree).
using the non-parametric Kruskal–Wallis test. This test verified both findings as non-significant without assuming sample normality or homogeneity of variance (Factor 1: χ² = 4.08, p = 0.13; Factor 2: χ² = 4.92, p = 0.09).

5. Discussion

In prior research, the ICCAS was a reliable and valid means of comparing self-rated IPE and IP collaborative practice competencies before (retrospectively) and after an IPE-infused training with a broad spectrum of health professions outside of the United States.15 Following rigorous data analytic procedures, and after removing five items to meet underlying statistical assumptions, we found the ICCAS to be an internally reliable instrument for our population of students. Our analyses identified two correlated factors corresponding to perceived ability to provide IP care and perceived ability to work as part of an IP collaborative team. The original report identified only a single latent factor post-training, but identified two such factors when retrospectively measuring pre-test IPE and IPC (perception of one's own skill and perception of involvement with the rest of the team/family). It is possible, given the correlation between the factors in this study, that a larger sample size would have yielded a model for which a single-factor solution was appropriate. The internal reliability statistics for the two factors identified in this study were excellent (both α > 0.95). This suggests that the items effectively measured underlying concepts related to IPE for our sample of health professions students. The IPE subscales that conceptually contributed to the ICCAS's development, and that were tested with similar students and professionals, also exhibited good internal reliability.
Investigators reported alpha levels between 0.72 and 0.87 for the three subscales on Phase 2 of the ATHCTS validation study.12 The RIPLS had excellent overall internal reliability (α = 0.90), though subscales 2 and 3 exhibited somewhat lower internal consistency (α = 0.63 and α = 0.32, respectively). However, a more recent validation study of a slightly modified tool among health professionals found higher internal consistency values for those scales (α = 0.86 and α = 0.69).13,24 When selecting a measurement instrument, investigators should consider the congruence between the underlying constructs being assessed and the study's purpose and theoretical framework. We valued the theoretical linkages between the ICCAS and established sets of IPE competencies. We also examined potential differences by profession and determined that field of study did not have a reliable, large effect on either perceived ability to provide IP care or to work as part of an IP collaborative team. This is analogous to a finding reported by Heinemann et al.12 when testing the ATHCTS. We noted post hoc
that the sample size, divided across three groups, afforded a power of 0.80 at a critical alpha of 0.05 only for identifying large effects via the ANOVA analysis. As a result, non-significant findings may not indicate that no differences would be observed in larger groups of similar students.23 The preliminary conclusion we derived, given this finding and the work by Archibald and colleagues,15 was that the ICCAS may be an appropriate instrument to measure self-perceptions related to IPE and IPC among NP, SW, and PharmD students in the United States. To substantiate this statement, we recommend reexamining the properties of the scale in a larger sample. To further support the instrument's utility, investigators might also consider controlled studies that test for associations between the ICCAS and measurable behaviors in clinical practice. In this context, high ICCAS scores may be a good indicator of students' future IP clinical behaviors.

6. Limitations

We cannot determine the extent to which item deletion from the ICCAS for EFA was driven by the population's unique characteristics compared to the original validation study, or was the result of the small sample size relative to the number of questionnaire items. The factors underlying the remaining 15 items were found, however, to produce highly reliable scales. Additionally, the sample size may have masked any small or moderate effects that the students' field of study had on the ICCAS factor scores. Finally, the potential for response bias and social desirability associated with self-reports must be considered, along with constraints on the generalizability of results generated from small samples.

7. Conclusions

Despite the challenges associated with evaluating the impact of IPE interventions on desired outcomes, this evaluation remains a sustained topic of interest for key stakeholders, including researchers, educators, regulatory agencies, policy makers, and healthcare organizations.
To enable judgments of effectiveness, and to enhance our understanding of the value of IPE, it is necessary to identify reliable and valid instruments to assess perceived and actual improvement in collaborative practice competencies and team-based approaches to care. This pilot study provides preliminary evidence that the ICCAS is an appropriate instrument for assessing nursing, social work, and pharmacy students' perceptions related to IPE interventions. The critical appraisal of instruments used to assess IPE and IP collaborative practice is an important undertaking for the advancement of the science. The ICCAS appears to be promising and warrants further exploration.
Funding

This work was supported by the Center for Teaching and Learning at Indiana University–Purdue University Indianapolis.

References

1. Brashers V, Phillips E, Malpass J, Owen J. Review: Measuring the Impact of Interprofessional Education (IPE) on Collaborative Practice and Patient Outcomes. Washington, DC: National Academy Press (US); 2015.
2. Cox M, Cuff P, Brandt B, Reeves S, Zierler B. Measuring the impact of interprofessional education on collaborative practice and patient outcomes. J Interprofessional Care. 2016;30(1):1–3. http://dx.doi.org/10.3109/13561820.2015.1111052.
3. Reeves S, Perrier L, Goldman L, Freeth D, Zwarenstein M. Interprofessional education: effects on professional practice and healthcare outcomes (update). Cochrane Database Syst Rev. 2013. Art. No. CD002213.
4. Brock T, Boone T, Anderson C. Health care education must be a team sport. Am J Pharm Educ. 2016;80(1). Article 1.
5. Buring SM, Bhushan A, Broeseker A, et al. Interprofessional education: definitions, student competencies, and guidelines for implementation. Am J Pharm Educ. 2009;73(4). Article 59.
6. Gillan C, Lovrics E, Halpern E, Wilger D, Harnett N. The evaluation of learner outcomes in interprofessional continuing education: a literature review and an analysis of survey instruments. Med Teach. 2011;33(9):461–470. http://dx.doi.org/10.3109/0142159X.2011.58791.
7. Oats M, Davidson M. A critical appraisal of instruments to measure outcomes of interprofessional education. Med Educ. 2015;49:386–398.
8. Thannhauser J, Russell-Mayhew S, Scott C. Measures of interprofessional education and collaboration. J Interprofessional Care. 2010;24(4):336–349.
9. Anderson NR, West MA. Measuring climate for work group innovation: development and validation of the Team Climate Inventory. J Organ Behav. 1998;19(3):235–258.
10. Ushiro R. Nurse-Physician Collaboration Scale: development and psychometric testing. J Adv Nurs. 2009;65(7):1497–1508.
11. MacDonald CJ, Breithaupt K, Stodel EJ, Farres LG, Gabriel MA. Evaluation of web-based educational programs via the Demand-Driven Learning Model: a measure of web-based learning. Int J Test. 2002;2(1):35–61.
12. Heinemann GD, Schmitt MH, Farrell MP, Brallier SA. Development of an Attitudes Toward Health Care Teams Scale. Eval Health Prof. 1999;22(1):123–142.
13. Parsell G, Bligh J. The development of a questionnaire to assess the readiness of health care students for interprofessional learning (RIPLS). Med Educ. 1999;33(2):95–100.
14. Lê Q, Spencer J, Whelan J. Development of a tool to evaluate health science students' experiences of an interprofessional education (IPE) programme. Ann Acad Med Singap. 2008;37(12):1027–1033.
15. Archibald D, Trumpower D, MacDonald CJ. Validation of the interprofessional collaborative competency attainment survey. J Interprofessional Care. 2014;28(6):553–558.
16. Aiken LH, Clarke SP, Sloane DM, et al. Nurses' reports on hospital care in five countries. Health Aff. 2001;20(3):43–53.
17. Sheer B, Wong FKY. The development of advanced nursing practice globally. J Nurs Scholarsh. 2008;40(3):204–211.
18. De Winter JCF, Dodou D. Factor recovery by principal axis factoring and maximum likelihood factor analysis as a function of factor pattern and sample size. J Appl Stat. 2012;39(4):695–710.
19. Tabachnick BG, Fidell LS. Using Multivariate Statistics. 6th ed. Boston, MA: Pearson Education, Inc; 2013.
20. Cerny CA, Kaiser HF. A study of a measure of sampling adequacy for factor-analytic correlation matrices. Multivar Behav Res. 1977;12(1):43–47.
21. National Institute of Standards and Technology. Kruskal Wallis. 2015. Accessed from: http://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/kruskwal.htm.
22. Zaiontz C. Validity of correlation matrix and sample size. 2016. Accessed from: http://www.real-statistics.com/multivariate-statistics/factor-analysis/validity-ofcorrelation-matrix-and-sample-size/.
23. Cohen J. Statistical Power Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
24. Reid R, Bruce D, Allstaff K, McLenon D. Validating the Readiness for Interprofessional Learning Scale (RIPLS) in the postgraduate context: are health care professionals ready for IPL? Med Educ. 2006;40:415–422.