Journal of Interprofessional Education & Practice 4 (2016) 33–39

The Nebraska interprofessional education attitudes scale: A new instrument for assessing the attitudes of health professions students

Gary L. Beck Dallaghan, PhD a,*, Elizabeth Lyden, MS b, Jane Meza, PhD b, Hugh Stoddard, MEd, PhD c, Catherine Bevil, EdD, RN d, Dean Collier, PharmD e, Margaret Winnicki, MPH f, Devin Nickol, MD g

a University of Nebraska College of Medicine, 985525 Nebraska Medical Center, Omaha, NE 68198-5525, USA
b University of Nebraska College of Public Health, 984375 Nebraska Medical Center, Omaha, NE 68198, USA
c Emory University School of Medicine, 100 Woodruff Circle, P-378, Atlanta, GA 30322, USA
d University of Nebraska College of Nursing, 985330 Nebraska Medical Center, Omaha, NE 68198, USA
e University of Nebraska College of Pharmacy, 986045 Nebraska Medical Center, Omaha, NE 68198, USA
f University of Nebraska College of Allied Health Professions, 984035 Nebraska Medical Center, Omaha, NE 68198, USA
g University of Nebraska College of Medicine, 986430 Nebraska Medical Center, Omaha, NE 68198, USA

Article history: Received 16 December 2015; received in revised form 3 May 2016; accepted 19 May 2016.

Keywords: Interprofessional education; Attitudes

Abstract

Background: Interprofessional education (IPE) for health professions students can facilitate collaboration. Previous instruments for measuring student attitudes about IPE were reported to lack psychometric and validity evidence.
Methods: The Nebraska Interprofessional Education Attitudes Scale (NIPEAS) was developed to measure attitudes among learners ranging from pre-clinical students to practicing health professionals. The 19-item questionnaire was administered in 2012 to 370 students, in 2013 to 280 faculty, and in 2015 to 353 students.
Results: An exploratory factor analysis found that NIPEAS items loaded onto four factors: "Team approach to health care," "Receptivity to teammates," "Self-efficacy as a team member," and "Ethics in health care." Confirmatory factor analysis suggested that the four-factor model for the NIPEAS provides an adequate fit to the data.
Conclusions: The attitude scales in the NIPEAS were derived from the IPEC competencies. The NIPEAS appears to be useful for measuring attitudes about interprofessional collaboration among pre-service health professions students and health care providers.

Published by Elsevier Inc.

* Corresponding author. E-mail address: [email protected] (G.L. Beck Dallaghan). http://dx.doi.org/10.1016/j.xjep.2016.05.001. 2405-4526/Published by Elsevier Inc.

Background/rationale

The U.S. health care system ranks near the bottom among developed countries on nearly every health care parameter.1,2 Interprofessional collaboration and teamwork have been advocated as an essential remedy to improve the quality of patient care, lower costs, decrease length of stay, and reduce medical errors.3 However, preparing future health care professionals with new ways of relating to one another and new skills requires interprofessional learning approaches.2 Interprofessional education (IPE) is defined as those occasions when students of two or more health care professions learn with, from, and about one another to improve collaboration and the quality of care.4

Despite the growing body of descriptive literature on IPE programs for health professions students,5,6 a review of this literature highlighted the need to evaluate the outcomes of IPE in ways that elucidate the relationship between the interventions and their outcomes.6 Prior efforts to evaluate attitudes toward IPE have been numerous;7 however, few of the instruments used were designed for pre-service students, and those that were designed for this population lacked compelling psychometric support.7,8 In addition, these instruments were not designed to measure how attitudes may change over time. A literature review by Gillan et al9 located 33 complete and relevant instruments that measured aspects of IPE. Most of these lacked established reliability and validity, and fewer than one-third of them had been used more than once in the literature that was reviewed. The authors concluded that no instrument has yet been identified as the "gold standard" for evaluating IPE and suggested that more work on instrument development was needed.9 The diversity of instruments reported in the literature for assessing IPE suggests that there has been no general agreement on the attitudes, values, or competencies that comprise the essential core of interprofessional practice and education.9,10 Two of the most popular instruments, the RIPLS and the IEPS,9 have documented shortcomings: two RIPLS sub-scales failed to achieve acceptable internal consistency,11,12 and internal consistency discrepancies have been noted in the IEPS.13

Recognizing the difficulty of measuring IPE, the Interprofessional Education Collaborative Expert Panel (IPEC), which was composed of representatives from six national health professions education associations,14 achieved consensus using D'Amour and Oandasan's concept of "interprofessionality"15 to define and categorize characteristics of interprofessional practice. The IPEC proposed a set of core competencies14 organized into four domains that were intended as statements of the knowledge, values, and skills needed for interprofessional teamwork and patient-centered collaborative care. These consensus competencies provided a strong framework for contemporary IPE curriculum development and assessment strategies.

Prior to 2009, IPE events at the University of Nebraska Medical Center (UNMC) were conducted, but they lacked curricular structure.
In 2009, the UNMC colleges of dentistry, medicine, nursing, pharmacy, and public health and the school of allied health professions convened an Interprofessional Education Curriculum Committee (IPE-CC) to develop and implement a comprehensive program of educational activities that would prepare UNMC students for collaboration and teamwork in the provision of health care.16 With additional programs being planned, the IPE-CC identified a need for more robust evaluations than the aforementioned instruments in common use could provide. In 2010, an evaluation subcommittee, composed of faculty members from each college with expertise in evaluation and assessment, was charged with developing protocols to measure the impact of IPE on UNMC students and faculty and to collect data related to student achievement of the IPE learning outcomes. During 2011, the IPE-CC adopted IPE competencies to guide the development of the new interprofessional curriculum, using the Core Competencies for Interprofessional Collaborative Practice developed by the Interprofessional Education Collaborative (IPEC) in 2011. An extensive literature review was conducted to determine whether existing instruments used these new, or comparable, competencies to measure interprofessional attitudes among health professions students; none was found. The subcommittee also wanted an assessment tool that could be used throughout the continuum of its educational programs, which the RIPLS is not designed to do.17 Given this gap, the Evaluation Subcommittee created a new instrument, the Nebraska Interprofessional Education Attitudes Scale (NIPEAS). The purpose of this manuscript is to describe the development of the instrument and the steps taken to collect validity evidence.
Methods

Instrument development

During the spring and summer of 2011, a literature review of existing instruments for measuring interprofessional attitudes among health professions students was conducted. Concurrently, the UNMC IPE-CC adopted IPE competencies to refine the interprofessional curriculum objectives. The adopted competencies were based on the Core Competencies for Interprofessional Collaborative Practice.14 An interprofessional group of health care educators at UNMC selected the sub-set of IPEC Competencies14 that were developmentally appropriate for pre-service health professions students. As reported in reviews of the existing literature,7,9 we did not find robust IPE attitude measures for pre-service students, and no IPE attitude measurement instruments related to the IPEC Competencies were found. Because the RIPLS has been so widely used, the Evaluation Subcommittee attempted to map the IPEC Competencies to the RIPLS items to determine whether there was agreement on what competencies the RIPLS might measure; there was a general lack of agreement on which competency related to which RIPLS item.

IPE attitudes acquired at early stages of training have more far-reaching effects than either IPE knowledge or learners' reactions to the educational interventions.18 A report about evaluating IPE18 classified 'learner reactions' as a Level 1 outcome; 'knowledge, skills, or attitudes' as Level 2; 'behavior change' as Level 3; and 'benefit to patients' as Level 4. We developed the NIPEAS to measure attitudes,19 since the target population of pre-service students would not yet have had opportunities to demonstrate behavioral changes or patient benefits. Our initial intention was for the NIPEAS instrument to have utility for measuring attitudes toward IPE goals among pre-service students at any institution and in conjunction with any type of educational intervention. Therefore, the Evaluation Subcommittee created a new instrument, the NIPEAS, which used the IPEC Competencies as its theoretical framework. Concurrent validity of the NIPEAS was achieved by translating IPEC Competencies into statements that could be rated on a Likert scale (strongly disagree to strongly agree).
The Evaluation Subcommittee held several meetings to develop items for the NIPEAS, using the wording of the IPEC Competencies as a starting point (Table 1). An initial list of 24 items was generated, and revisions were made after review by stakeholder groups including faculty, health care professionals, and health professions students. The subsequent 18-item version of the NIPEAS was then administered as a pilot study: it was piloted with students from six health professions programs at UNMC who participated in an IPE event in August 2011.

Table 1. Selected NIPEAS items with accompanying IPEC competency statements.

IPEC competency: VE4. Respect the unique cultures, values, roles/responsibilities, and expertise of other health professions.
NIPEAS item: 6. Appreciation of the expertise of other health care professionals leads to a better work environment.

IPEC competency: TT6. Engage self and others to constructively manage disagreements about values, roles, goals, and actions that arise among health care professionals and with patients and families.
NIPEAS item: 8. In order to be an effective team member, I may have to compromise with others who hold different values.

IPEC competency: CC1. Choose effective communication tools and techniques, including information systems and communication technologies, to facilitate discussions and interactions that enhance team function.
NIPEAS item: 14. Effective communication is an essential component of all treatment plans.

IPEC competency: RR7. Forge interdependent relationships with other professions to improve care and advance learning.
NIPEAS item: 15. I need to establish good relationships with professionals outside of my own profession in order to practice effectively.

Descriptive statistics were collected and an exploratory factor analysis (EFA) was conducted. Based on the extracted factors and factor loadings, the Evaluation Subcommittee deemed that revisions were necessary. For example, items that had been written so that a response of "disagree" indicated a favorable attitude toward IPE were rewritten as affirmative statements. Additional clarifications in language and terminology were made based on questionnaire data and feedback from student focus groups. At this point, one item was split into two separate items, which yielded the final set of 19 items (Table 2). This revised version of the NIPEAS was subsequently tested with the same cohort of students when they participated in an IPE event in February 2012. Descriptive and EFA results of the revised NIPEAS were analyzed, and we approved the instrument after making a few minor language refinements.

In August 2012 the NIPEAS was administered to all matriculating students in UNMC's health professions programs. Matriculating students completed the questionnaire online prior to their first IPE educational activity, so that attitudes were elicited before any formal training about interprofessional activities; that first activity focused on exploring professional identities and roles. A new EFA was conducted to identify the traits underlying students' attitudes toward IPE. We reviewed the scree plot, factors, factor loadings, and the original IPEC Competencies in order to load items onto factors and assign a title to each factor. Cronbach's alpha was computed as a measure of the internal reliability of the NIPEAS and each of its sub-scales. In addition to the 19 NIPEAS items, respondents were asked to identify which professional training program they were entering; this classification variable was collected for quality control of response rates and for use in future research.
The collection of all data associated with the NIPEAS in each year was approved by the UNMC IRB as exempt research (IRB #444-11-EX).

Table 2. NIPEAS items.

1. I am able to communicate effectively about patient care with persons from health care professions different from my own.
2. I am able to use terminology that is unique to other health care professions.
3. I understand my own role within the health care team.
4. I understand the roles of other health care professionals.
5. I should learn about the values and expertise required for health care professions other than my own.
6. Appreciation of the expertise of other health care professionals leads to a better work environment.
7. I can provide a higher standard of care if I consider input from other professionals than if I work independently.
8. In order to be an effective team member, I may have to compromise with others who hold different values.
9. To be competent, a person in my profession must work cooperatively with other health care providers.
10. Ethical principles that are foundational to health care are the same for all health care professions.
11. I consider ethical practice and high quality of patient care to be more important than demonstrations of my own knowledge and skills.
12. I would be receptive to critique of my performance from another person in my own profession.
13. I can learn about my own profession from health care professionals outside of my own profession.
14. Effective communication is an essential component of all treatment plans.
15. I need to establish good relationships with professionals outside of my own profession in order to practice effectively.
16. It is more important to listen to the opinions of other health care team members than to state my own viewpoint.
17. Forming relationships with members of other professions can improve patient care and advance learning.
18. I would be receptive to critique of my performance from a person who is in a different profession than my own.
19. The health care team's approach should be determined by the team as a whole rather than by the team leader.

Table 3. Participants by academic unit.a

Academic unit    Students 2012   Facultyb       Students 2015
Allied health    145 (39.2%)     31 (9.2%)      136 (39.1%)
Medicine         105 (28.4%)     178 (52.8%)    103 (29.6%)
Nursing          53 (14.3%)      35 (10.4%)     38 (10.9%)
Pharmacy         44 (11.9%)      13 (3.9%)      44 (12.6%)
Public health    23 (6.2%)       28 (8.3%)      27 (7.8%)

a Numbers represent the number of participants by academic unit and the percent of total responses.
b The faculty survey also included responses from dentistry (n = 17), the Munroe-Meyer Institute, an institute focused on care and support for individuals with intellectual, developmental, and/or genetic disorders (n = 24), and no report (n = 11).

The NIPEAS was designed to be the cornerstone of IPE assessment at UNMC. Because the instrument measures attitudes based on the IPEC Competencies, it can be used longitudinally to measure attitudes at the beginning of a program, during training, at the culmination of training, and into practice. For this reason, UNMC faculty were asked to complete the NIPEAS in the spring of 2013. To gauge internal structure validity, a factor analysis of the faculty responses was conducted to determine whether the NIPEAS results for practicing health care professionals reflected those of trainees. Additionally, the NIPEAS was administered to matriculating UNMC students at an orientation IPE event in the fall of 2015 in order to conduct a confirmatory factor analysis.

Statistical analysis

Responses were summarized descriptively. Exploratory factor analysis (EFA), eigenvalues, and scree plots were used to identify underlying factors among the 19 items in the 2012 cohort. Scree plots, factors, and factor loadings were reviewed to load items onto factors; the minimum communality for an item to load onto a factor was 0.32.20 The variables loading onto each factor were reviewed and a title was assigned to each factor. Cronbach's alpha was computed as a measure of the internal reliability of the NIPEAS and each of its sub-scales. EFA was also used to identify factors in the NIPEAS completed by the faculty, in order to compare the responses of practicing health care professionals with those of matriculating students.

Table 4. Descriptive statistics for NIPEAS items by student and faculty.

        2012 students               2013 faculty
Item    Mean  SD   Min  Max        Mean  SD   Min  Max
1       2.0   0.7  1.0  4.0        1.7   0.8  1.0  5.0
2       2.6   0.8  1.0  5.0        2.1   0.9  1.0  5.0
3       1.9   0.7  1.0  4.0        1.5   0.8  1.0  5.0
4       2.0   0.7  1.0  4.0        1.7   0.7  1.0  4.0
5       1.5   0.6  1.0  4.0        1.8   0.8  1.0  5.0
6       1.3   0.5  1.0  3.0        1.4   0.6  1.0  4.0
7       1.3   0.5  1.0  3.0        1.5   0.7  1.0  4.0
8       1.7   0.7  1.0  4.0        1.9   0.8  1.0  5.0
9       1.3   0.5  1.0  4.0        1.5   0.8  1.0  5.0
10      1.8   0.8  1.0  5.0        1.7   0.8  1.0  5.0
11      1.7   0.7  1.0  4.0        1.6   0.8  1.0  5.0
12      1.7   0.6  1.0  5.0        1.7   0.7  1.0  5.0
13      2.0   0.8  1.0  5.0        1.9   0.8  1.0  5.0
14      1.2   0.4  1.0  3.0        1.3   0.5  1.0  4.0
15      1.5   0.6  1.0  4.0        1.5   0.7  1.0  4.0
16      2.8   0.8  1.0  5.0        2.3   0.9  1.0  5.0
17      1.4   0.5  1.0  4.0        1.4   0.5  1.0  3.0
18      2.3   0.8  1.0  5.0        2.0   0.9  1.0  5.0
19      1.8   0.7  1.0  4.0        1.9   0.9  1.0  4.0


Fig. 1. Scree plot of student responses to the NIPEAS.

Confirmatory factor analysis (CFA) was used with the completed NIPEAS from the students matriculating in the fall of 2015. CFA relies on several statistical tests to determine the adequacy of model fit to the data. The chi-square test indicates the amount of difference between the expected and observed covariance matrices.21 The Comparative Fit Index (CFI), which ranges from 0 to 1 with larger values indicating better model fit, was calculated to assess model fit; acceptable fit is indicated by a CFI value of 0.90 or greater.22 Model fit was also examined using the Root Mean Square Error of Approximation (RMSEA), which ranges from 0 to 1 with smaller values indicating better model fit; acceptable fit is indicated by an RMSEA value of 0.06 or less.22
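Both indices can be computed directly from the model and null-model chi-square statistics using their standard definitions. The sketch below is generic; the degrees-of-freedom and sample-size values in the usage lines are illustrative, not figures from this study.

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation.

    chi2: model chi-square, df: model degrees of freedom, n: sample size.
    Values of 0.06 or less are conventionally read as acceptable fit.
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_model, df_model, chi2_null, df_null):
    """Comparative Fit Index relative to the independence (null) model.

    Values of 0.90 or greater are conventionally read as acceptable fit.
    """
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    denom = max(d_model, d_null)
    return 1.0 if denom == 0.0 else 1.0 - d_model / denom

# Illustrative values only (not taken from the study):
print(round(rmsea(200.0, 100, 401), 4))        # 0.05
print(round(cfi(200.0, 100, 600.0, 171), 4))   # 0.7669
```

Note that the chi-square statistic itself grows with sample size, which is why these sample-size-adjusted indices are consulted alongside it.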

Table 5. NIPEAS factor analysis rotated component matrix.

          2012 student EFA componenta             2013 faculty component
Item      1        2        3        4            1        2        3        4
1         0.121    0.167    0.666*   0.221        0.238    0.207    0.735*   0.019
2         0.054    0.077    0.737*   0.091        0.175    0.146    0.723*   0.023
3         0.183    0.002    0.779*   0.058        0.222    0.114    0.713*   0.279
4         0.084    0.044    0.784*   0.095        0.214    0.048    0.808*   0.126
5         0.637*   0.182    0.026    0.111        0.696*   0.259    0.131    0.027
6         0.780*   0.064    0.082    0.111        0.779*   0.059    0.209    0.122
7         0.727*   0.127    0.089    0.186        0.720*   0.135    0.329    0.169
8         0.514*   0.172    0.154    0.024        0.668*   0.260    0.146    0.085
9         0.697*   0.130    0.029    0.057        0.704*   0.097    0.205    0.374
10        0.153    0.032    0.154    0.819*       0.162    0.091    0.071    0.870*
11        0.261    0.328    0.083    0.664*       0.266    0.358    0.279    0.427*
12        0.300    0.571*   0.140    0.094        0.197    0.523*   0.294    0.332
13        0.301    0.649*   0.061    0.150        0.474    0.608*   0.055    0.016
14        0.610*   0.005    0.044    0.095        0.578*   0.188    0.205    0.321
15        0.586*   0.329    0.082    0.102        0.777*   0.239    0.171    0.149
16        0.036    0.630*   0.130    0.110        0.080    0.713*   0.102    0.139
17        0.725*   0.329    0.032    0.041        0.693*   0.311    0.242    0.254
18        0.173    0.804*   0.042    0.007        0.212    0.833*   0.084    0.005
19        0.379*   0.329*   0.113    0.034        0.203    0.645*   0.141    0.082
% total
variance  21.25    12.43    12.33    6.89         24.24    14.91    14.71    8.16

Extraction method: principal component analysis. Rotation method: varimax with Kaiser normalization. Asterisks indicate the factor to which the item loaded.
a Rotation converged in 5 iterations.


Table 6. Student eigenvalues and extraction loadings (total variance explained).

            Initial eigenvalues           Extraction sums of           Rotation sums of
                                          squared loadings             squared loadings
Component   Total    % var    Cum. %      Total    % var    Cum. %     Total    % var    Cum. %
1           7.710    40.577   40.577      7.710    40.577   40.577     4.606    24.242   24.242
2           1.617    8.509    49.087      1.617    8.509    49.087     2.833    14.912   39.154
3           1.428    7.518    56.605      1.428    7.518    56.605     2.795    14.710   53.864
4           1.030    5.423    62.027      1.030    5.423    62.027     1.551    8.164    62.027
5           0.816    4.295    66.323
6           0.773    4.071    70.393
7           0.680    3.581    73.974
8           0.661    3.477    77.451
9           0.586    3.085    80.536
10          0.564    2.971    83.507
11          0.508    2.674    86.181
12          0.467    2.459    88.641
13          0.432    2.274    90.915
14          0.375    1.975    92.891
15          0.316    1.661    94.551
16          0.306    1.612    96.163
17          0.261    1.373    97.536
18          0.238    1.255    98.791
19          0.230    1.209    100.000

Extraction method: principal component analysis.

Analyses were done with IBM SPSS Statistics Version 23 and SAS Version 9.4.

Results

In 2012, 370 of the 472 students who participated in the IPE event completed the NIPEAS (79% response rate). In the spring of 2013, 280 faculty members representing all colleges on campus completed the survey (52.8% from the College of Medicine). In 2015, 353 students completed the NIPEAS (69% response rate) (Table 3). NIPEAS items were completed using a 5-point Likert scale from Strongly Disagree to Strongly Agree. Descriptive statistics for the NIPEAS items are displayed in Table 4.

Principal components factoring with varimax rotation was used for extraction in the initial exploratory factor analysis. Four factors with eigenvalues greater than one were extracted, and the four-factor solution was verified by analyzing a scree plot (Fig. 1). The four extracted factors cumulatively accounted for 53% of the total variance. Each of the 19 items loaded onto one factor, with the exception of item 19, which loaded onto two factors. The items had factor loadings ranging from 0.37 to 0.79; all factor loadings are shown in Table 5. We titled the sub-scales: Factor 1 = "Team approach to health care" (8 items); Factor 2 = "Receptivity to teammates" (5 items); Factor 3 = "Self-efficacy as a team member" (4 items); Factor 4 = "Ethics in health care" (2 items). Table 5 presents the 19 items arranged by the four factors onto which each loaded, and Tables 6 and 7 present the eigenvalues and extraction loadings that explain the variance in factor loading. Using Cronbach's alpha, the NIPEAS was measured to have an overall internal reliability of 0.84; the alphas of the sub-scales were 0.84, 0.69, 0.74, and 0.46, respectively.
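The extraction rule used above (retain factors whose correlation-matrix eigenvalues exceed one, checked against the scree plot) can be sketched in a few lines. The two-factor synthetic data below are illustrative only, not the study's responses.

```python
import numpy as np

def kaiser_retained(responses):
    """Eigenvalues of the item correlation matrix, and how many exceed 1
    (the Kaiser criterion, used here alongside the scree plot)."""
    corr = np.corrcoef(np.asarray(responses, dtype=float), rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(corr))[::-1]   # descending order
    return eig, int((eig > 1.0).sum())

# Synthetic respondents: items 0-1 share one latent trait, items 2-3 another
rng = np.random.default_rng(0)
a = rng.normal(size=(500, 1))
b = rng.normal(size=(500, 1))
noise = rng.normal(size=(500, 4)) * 0.4
data = np.hstack([a, a, b, b]) + noise
eigenvalues, n_factors = kaiser_retained(data)
print(n_factors)   # 2 factors retained
```

In practice the eigenvalue spectrum returned here is exactly what a scree plot displays, so the same function supports both retention checks.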

Table 7. Faculty eigenvalues and extraction loadings (total variance explained).

            Initial eigenvalues           Extraction sums of           Rotation sums of
                                          squared loadings             squared loadings
Component   Total    % var    Cum. %      Total    % var    Cum. %     Total    % var    Cum. %
1           7.710    40.577   40.577      7.710    40.577   40.577     4.606    24.242   24.242
2           1.617    8.509    49.087      1.617    8.509    49.087     2.833    14.912   39.154
3           1.428    7.518    56.605      1.428    7.518    56.605     2.795    14.710   53.864
4           1.030    5.423    62.027      1.030    5.423    62.027     1.551    8.164    62.027
5           0.816    4.295    66.323
6           0.773    4.071    70.393
7           0.680    3.581    73.974
8           0.661    3.477    77.451
9           0.586    3.085    80.536
10          0.564    2.971    83.507
11          0.508    2.674    86.181
12          0.467    2.459    88.641
13          0.432    2.274    90.915
14          0.375    1.975    92.891
15          0.316    1.661    94.551
16          0.306    1.612    96.163
17          0.261    1.373    97.536
18          0.238    1.255    98.791
19          0.230    1.209    100.000

Extraction method: principal component analysis.


Fig. 2. Scree plot of faculty responses to NIPEAS.

Results of the CFA fit summary statistics for the 2015 student cohort indicated large differences between the observed and expected covariance matrices (χ2 = 292.67, p < 0.0001). Since the chi-square test statistic is highly dependent on sample size, it is important to consider other measures of model fit.21 The RMSEA indicates the amount of unexplained, or residual, variance; the RMSEA of 0.0553 met the criterion of 0.06 or less, indicating acceptable model fit.22 Finally, the CFI of 0.9108 met the criterion of 0.90 or larger for acceptable model fit.23 The results of the CFA therefore suggest that the four-factor model for the NIPEAS provides an adequate fit to the data. The CFA supports the conclusion that the general construct of the NIPEAS can be described by four specific facets: "working as a team" (9 items), "learning with others" (4 items), "self-efficacy in interprofessional care" (4 items), and "ethics and interprofessional practice" (2 items).

Factor analysis and scree plots were also used to identify underlying factors for the faculty who completed the NIPEAS in 2013. The purpose was to examine descriptively whether faculty and student attitudes toward IPE were congruent. Four factors with eigenvalues greater than 1 were extracted, and the four-factor solution was verified by analyzing a scree plot (Fig. 2). These factors fell under the same four facets as the students' (Table 5).

Discussion

The publication of the IPEC Competencies in 201114 made it possible for the first time to create an attitude measurement instrument for IPE that derived construct validity from a comprehensive, widely accepted conceptual framework. This, in addition to reported deficiencies in previous IPE measurement instruments, led us to develop the NIPEAS as an instrument appropriate for pre-service learners that would measure attitudes toward interprofessional collaboration. Our objective was to create a valid and reliable instrument that could be used to evaluate the effect of IPE curricula or educational activities on the attitudes of these learners. The results of the analyses reported here, along with the rigorous development process we employed, lead us to believe that the NIPEAS can be useful for this purpose.

Psychometrically, the factor analyses for the NIPEAS were consistent across the samples used in this study. It should be noted that this is an early development effort for the NIPEAS, and there is a weakness in Factor 4: its calculated alpha reliability was low, a result that is expected when a factor has few items loaded on it.24 Based on Costello and Osborn,20 the scree plot is the best criterion for determining the number of factors to retain, and this was the rationale for maintaining the factor. The UNMC IPE Curriculum Committee kept these items in the instrument because it believed them to be important attitudes to measure. Dow et al25 have also developed an instrument with items derived from the IPEC Competencies and found them to be more reliable. Following their example, future studies should incorporate related items to determine how the NIPEAS items relate to other questionnaire items.

Recent literature describes different IPE scales. One article is a systematic review of such scales, promoting the use of various assessment tools.26 However, the instruments reviewed in that analysis focused on the Association of American Medical Colleges competencies, which are limited to teamwork, a single domain of the IPEC Competencies. Additionally, Dominguez and colleagues (2015) compared two instruments measuring student perceptions of interprofessional education and practice, yet neither instrument has been explicitly mapped to the IPEC Competencies.
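The point about few-item factors can be made concrete with the standard formula for Cronbach's alpha on parallel items, alpha = k·r / (1 + (k − 1)·r) for k items with average inter-item correlation r (the Spearman-Brown relationship). The correlation value below is illustrative, not estimated from the study's data.

```python
def alpha_parallel(k, r):
    """Cronbach's alpha for k parallel items with average inter-item
    correlation r (Spearman-Brown relationship)."""
    return k * r / (1 + (k - 1) * r)

# At the same inter-item correlation, a 2-item factor is far less
# reliable than an 8-item factor (r = 0.3 is illustrative):
print(round(alpha_parallel(2, 0.3), 2))  # 0.46
print(round(alpha_parallel(8, 0.3), 2))  # 0.77
```

This is why a low alpha on a two-item sub-scale does not by itself indicate poor items: the statistic mechanically shrinks as the item count drops.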
Dow et al27 as well as Thistlethwaite et al28 identify the need for further work on applying organizational frameworks to encourage interprofessional practice and on identifying a common nomenclature to describe it. Ideas presented in these publications may provide a better explication of the IPEC competencies that could be used to enhance the NIPEAS, particularly in addressing ethical principles in interprofessional behavior, which is a weakness of the NIPEAS.

The factor analysis process extracted factors so as to be as non-overlapping as possible. These factors mapped NIPEAS items to specific competencies in multiple domains; in collecting validity evidence for their instrument, Dow et al25 report a similar finding. Based on our labels for the four factors of the NIPEAS, plans are underway to develop entrustable professional activities using the IPEC competencies and corresponding evaluation tools. This fits with recent literature on the complex nature of competency-based education:29 measuring competencies is not merely a matter of identifying a particular competency and measuring it, but of combining competencies into an observable skill. A recent study by Hallam et al30 points to the need to address these factors, given that health sciences students arrive with widely varying skills and knowledge upon matriculation. The four-factor structure illuminated by the NIPEAS combines multiple IPEC competencies, which lays the groundwork for identifying observable interprofessional activities that can be measured against the competencies. Work is currently underway to develop these tools.

A limitation of this study is that it was completed at a single institution whose student body predominantly hails from the State of Nebraska, which limits the diversity of perceptions. We also recognize that the response rate for faculty was not as robust as we would like. Nonetheless, our analysis gives us confidence that this instrument merits expanded use and further analysis for measuring attitudes about IPE among pre-clinical learners in other contexts and institutions.
We are also interested in pursuing studies using NIPEAS longitudinally with students to determine if attitudes change over time as well as with other institutions to compare results. Dow et al25 analyzed their results based on level of learner and differences between colleges; these categorizations will be incorporated into our longitudinal study. Extending the use of the NIPEAS will allow health professions educators to evaluate IPE activities, assisting schools to identify best practices for cultivating these important skills. Conclusion Validity evidence collected for the NIPEAS indicates this instrument is useful for measuring attitudes of matriculating students as well as practicing health care professionals. The construct validity for NIPEAS items was derived from IPEC competencies and the internal reliability was adequate. Additionally, internal structure validity evidence also support use of NIPEAS to measure attitudes of health sciences professionals. References 1. Headrick L. Learning to Improve Complex Systems of Care. Collaborative Education to Ensure Patient Safety. Washington, DC: HRSA/Bureau of Health Professions; 2000:75e88. 2. Institute of Medicine (IOM). Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001. 3. Zwarenstein M, Goldman J, Reeves S. Interprofessional collaboration: effects of practice-based interventions on professional practice and healthcare outcomes (review). Cochrane Collab. 2009;(4). 4. World Health Organization WHO. Framework for Action on Interprofessional Education & Collaborative Practice. Geneva: World Health Organization; 2010.
Retrieved November 18, 2015 from http://whqlibdoc.who.int/hq/2010/WHO_HRH_HPN_10.3_eng.pdf.
5. MacKenzie D, Merritt BK. Making space: integrating meaningful interprofessional experiences into an existing curriculum. J Interprofessional Care. 2013;27(3):274–276.
6. Remington TL, Foulk MA, Williams BC. Evaluation of evidence for interprofessional education. Am J Pharm Educ. 2006;70(3):66.
7. Law R, MacDonald L, Weaver L, Lait J, Pauze E. Program Evaluation for Interprofessional Initiatives: Evaluation Instruments/Methods of the 20 IECPCP Projects. Vancouver, Canada: Canadian Interprofessional Health Collaborative; 2009.
8. Thannhauser J, Russell-Mayhew S, Scott C. Measures of interprofessional education and collaboration. J Interprofessional Care. 2010;24(4):336–349.
9. Gillan C, Lovrics E, Halpern E, Wiljer D, Harnett N. The evaluation of learner outcomes in interprofessional continuing education: a literature review and an analysis of survey instruments. Med Teach. 2011;33(9):e461–e470.
10. Lie DA, Fung CC, Trial J, Lohenry K. A comparison of two scales for assessing health professional students' attitude toward interprofessional learning. Med Educ Online. 2013;18.
11. McFadyen AK, Webster V, Strachan K, Figgins E, Brown H, McKechnie J. The readiness for interprofessional learning scale: a possible more stable sub-scale model for the original version of RIPLS. J Interprofessional Care. 2005;19(6):595–603.
12. McFadyen AK, Webster VS, Maclaren WM. The test-retest reliability of a revised version of the readiness for interprofessional learning scale (RIPLS). J Interprofessional Care. 2006;20(6):633–639.
13. McFadyen AK, Maclaren WM, Webster VS. The interdisciplinary education perception scale (IEPS): an alternative remodelled sub-scale structure and its reliability. J Interprofessional Care. 2007;21(4):433–443.
14. Interprofessional Education Collaborative Expert Panel. Core Competencies for Interprofessional Collaborative Practice: Report of an Expert Panel. Washington, DC: Interprofessional Education Collaborative; 2011.
15. D'Amour D, Oandasan I. Interprofessionality as the field of interprofessional practice and interprofessional education: an emerging concept. J Interprof Care. 2005;19(S1):8–20.
16. Margalit R, Thompson S, Visovsky C, et al. From professional silos to interprofessional education: campuswide focus on quality of care. Qual Manag Health Care. 2009;18(3):165–173.
17. Parsell G, Bligh J. The development of a questionnaire to assess the readiness of health care students for interprofessional learning (RIPLS). Med Educ. 1999;33(2):95–100.
18. Barr H, Freeth D, Hammick M, Koppel I, Reeves S. Evaluations of Interprofessional Education: A United Kingdom Review for Health and Social Care. London, UK: United Kingdom Centre for the Advancement of Interprofessional Education with the British Educational Research Association; 2000.
19. Hammick M, Freeth D, Koppel I, Reeves S, Barr H. A best evidence systematic review of interprofessional education: BEME Guide No. 9. Med Teach. 2007;29(8):735–751.
20. Costello AB, Osborne JW. Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Pract Assess Res Eval. 2005;10(7). Available online: http://pareonline.net/getvn.asp?v=10&n=7.
21. Bentler PM, Bonett DG. Significance tests and goodness-of-fit in the analysis of covariance structures. Psychol Bull. 1980;88:588–600.
22. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model. 1999;6(1):1–55.
23. Hu L, Bentler PM. Fit indices in covariance structure modeling: sensitivity to underparameterized model misspecification. Psychol Methods. 1998;3:424–453.
24. Cortina JM. What is coefficient alpha? An examination of theory and applications. J Appl Psychol. 1993;78(1):98–104.
25. Dow AW, Diaz-Granados D, Mazmanian PE, Retchin SM. An exploratory study of an assessment tool derived from the competencies of the interprofessional education collaborative. J Interprofessional Care. 2014;28(4):299–304.
26. Havyer RD, Nelson DR, Wingo MT, et al. Addressing the interprofessional collaboration competencies of the Association of American Medical Colleges: a systematic review of assessment instruments in undergraduate medical education. Acad Med. 2016;91(6):865–888.
27. Dow AW, Diaz-Granados D, Mazmanian PE, Retchin SM. Applying organizational science to health care: a framework for collaborative practice. Acad Med. 2013;88:952–957.
28. Thistlethwaite JE, Forman D, Matthews LR, Rogers GD, Steketee C, Yassine T. Competencies and frameworks in interprofessional education: a comparative analysis. Acad Med. 2014;89:869–875.
29. Ten Cate O, Chen HC, Hoff RG, Peters H, Bok H, van der Schaaf M. Curriculum development for the workplace using entrustable professional activities (EPAs): AMEE guide no. 99. Med Teach. 2015;37:982–1002.
30. Hallam KT, Livesay K, Morda R, Sharples J, Jones A, de Courten M. Do commencing nursing and paramedicine students differ in interprofessional learning and practice attitudes: evaluating course, socio-demographic and individual personality effects. BMC Med Educ. 2016;16:80.