Journal of Substance Abuse Treatment 45 (2013) 457–465
Organizational readiness for change in community-based addiction treatment programs and adherence in implementing evidence-based practices: a national study

Lena Lundgren, Ph.D. a,⁎, Maryann Amodeo, Ph.D., MSW a, Deborah Chassler, MSW a, Ivy Krull, Ph.D. a, Lisa Sullivan, Ph.D. b
a Center for Addictions Research and Services, Boston University School of Social Work, Boston, MA 02215, USA
b Boston University School of Public Health, Boston, MA 02118, USA

⁎ Corresponding author. Center for Addictions Research and Services, Associate Dean for Research, Boston University School of Social Work, 264 Bay State Road, Boston, MA 02215, USA. Tel.: +1 617 353 1634; fax: +1 617 353 5612. E-mail address: [email protected] (L. Lundgren).

http://dx.doi.org/10.1016/j.jsat.2013.06.007
Article history: Received 24 August 2012; received in revised form 11 June 2013; accepted 25 June 2013.

Keywords: Evidence based; Addiction treatment; Barriers; Adherence; Fidelity; National study
Abstract

Prior studies by the authors identified that clinical staff who reported that their treatment unit had lower levels of organizational readiness to change experienced higher levels of barriers in implementing an evidence-based practice (EBP). The current study examined whether clinical staff perceptions of their treatment unit's organizational readiness to change were also associated with their adherence to EBP protocols during EBP implementation. Adherence was examined through a variable measuring the extent to which staff modified EBP standards and manuals when implementing a new EBP. Multivariate regression analyses identified that clinical staff who had five or more years of addiction counseling experience, who rated staff in their organization as having higher levels of influence, who less frequently implemented new counseling interventions, and who reported higher levels of barriers when implementing a newly funded EBP also reported that their program made more modifications to the EBP in the implementation process. Finally, staff who implemented MI, compared to any other EBP, reported lower levels of EBP modifications.

Implications: Continued federal funding is needed to enhance treatment unit organizational resources in order to reduce barriers and promote adherence to EBPs. Also, funders of treatment need to continue to provide ongoing technical assistance and training opportunities to promote implementation of EBPs with fidelity.

© 2013 Elsevier Inc. All rights reserved.
1. Introduction—evidence-based practices (EBPs)

Moving EBPs from research (clinical trials) to implementation in community treatment programs is highly complex, and addiction researchers and policy makers have expressed concern that the process is often not accomplished effectively. As a result, a new body of studies, referred to as "implementation research," has emerged to examine this translational process. Such studies focus on barriers and facilitating factors encountered in the EBP implementation process, with particular attention to factors in the organization that could support or impede progress.

In a recent national implementation research effort, clinical staff (N = 510) in community-based addiction programs reported facing varying levels of barriers when implementing a new EBP (Lundgren, Chassler, Amodeo, D'Ippolito, & Sullivan, 2012). Further, their perception of their treatment program's organizational readiness for change (ORC) was significantly associated with the level of barriers they experienced when implementing the new EBPs. Specifically, when staff perceived that
their organization had fewer programmatic needs and less organizational stress, they perceived a lower level of barriers related to their EBP implementation efforts. Our study expands on this prior research by examining whether staff perceptions of both their unit's organizational readiness for change (ORC) and the level of barriers experienced when implementing a newly funded EBP are associated with the level of modifications made to the EBP in the implementation process.

1.1. Why is it important to examine modifications made to EBPs during implementation?

Does it really matter if clinical staff modify an EBP in the process of implementing it? Is it not acceptable to use an EBP according to general guidelines, or in keeping with the spirit in which it was designed, without totally adhering to implementation criteria such as number of sessions, length of sessions, delivery format, session content, and target population? The results from studies on these questions are mixed. On one hand, some research studies suggest that the implementation quality of EBPs strongly influences outcomes, and that higher adherence to the defined practice produces superior clinical results (Anthony, Rogers, & Farkas, 2003; Drake et al., 2001; McHugo, Drake, Whitley, Bond, & Finnerty, 2007; Mueser, Torrey, Lynde, Singer, & Drake, 2003; Torrey, Lynde, & Gorman, 2005; Torrey et al., 2001).
Also, Carroll and Rounsaville (2007) caution that when an EBP is used without precision, or with clients who differ from those in the original efficacy studies, the EBP may no longer be effective. Having specific adherence criteria for the EBP helps determine whether unsuccessful outcomes are due to a failure of the intervention or a failure to implement the intervention as intended. Often, it is the latter that accounts for negative outcomes; when key elements are omitted during replication, the outcomes tend to be less positive or even contradictory (Blakely et al., 1987; Calsyn, Tornatzky, & Dittman, 1977; McHugo et al., 2007; Moncher & Prinz, 1991; Mowbray, Holter, Teague, & Bybee, 2003).

On the other hand, some EBP experts (Bauman, Stein, & Ireys, 1991; Beidas, Koerner, Weingardt, & Kendall, 2011; Corrigan, Steiner, McCracken, Blaser, & Barr, 2001; Kendall, Gosch, Furr, & Sood, 2008; Wilson, 1996) suggest that the use of manuals in clinical practice does not and should not preclude flexibility, in contrast to the use of manuals for research purposes (Wilson, 1996). Even in clinical situations where the manual cannot be followed precisely due to client crises and other unexpected client behavior, dramatic changes need not be made; the therapist can make adjustments such as providing shorter sessions, more structure, and more hands-on activities (Kendall et al., 2008). However, considerable skill is often needed to know when and how to modify an EBP without loss of fidelity (Addis, 2002; Addis, Wade, & Hatgis, 1999), and many clinicians may not have the background necessary for adherent-yet-flexible EBP implementation. In the addiction treatment field, most clinical staff have not received education beyond the master's level and may have received minimal training on particular EBPs. Thus, the appropriateness of their modifications may depend on the specific EBP to be implemented and the extent of training received on that EBP.

As the discussion above indicates, expert opinion on modifications to EBPs generally recommends that staff who implement EBPs: (a) follow treatment manuals when possible in order to improve client outcomes, and (b) limit EBP modifications when it is not possible to follow the manual completely (Anthony et al., 2003; Blakely et al., 1987; Calsyn et al., 1977; Drake et al., 2001; McHugo et al., 2007; Moncher & Prinz, 1991; Mowbray et al., 2003; Mueser et al., 2003; Torrey et al., 2001; Torrey et al., 2005).
1.2. How frequently do staff modify EBPs and what reasons are given for such modifications?

In a qualitative study of 100 addiction program directors, about half stated that they had modified the EBP delivered by their organization (Lundgren, Amodeo, Cohen, Horowitz, & Chassler, 2011a). Program directors perceived that their modifications strengthened the selected EBP and helped the organization provide better client services. In many instances, program directors' modifications seemed to originate from a concern to provide culturally competent and responsive services.

A 2007 study by McHugo and colleagues found significant differences in EBP adherence based on the type of mental health-related EBP that was implemented. In their study, fidelity scales for each EBP were applied by two assessors; assessment included chart reviews, observations, and meetings with clients and key personnel. Longitudinal analyses at five time points revealed high-fidelity scores for 55% of the sites but lower scores for the others. Even for high-fidelity sites, scores leveled off between 12 and 24 months. Further analysis of this population by Swain, Whitley, McHugo, and Drake (2010) indicated that some EBPs were discontinued in 20% of the agencies due to inadequate funding or training support, lack of leadership support, non-supportive staff or culture, staff turnover, and competing EBPs.
1.3. Organizational readiness for change and EBP implementation

The underlying framework for this study is the Lehman, Greener, and Simpson (2002) study, which developed the organizational readiness for change model. This framework was later revised by Simpson and Flynn (2007) and then updated by Lehman, Simpson, Knight, and Flynn (2011). Specifically, these authors offer a four-stage model of organizational change related to the adoption of EBPs in addiction treatment organizations: exposure to the new practice; adoption of the practice (a decision to try it out); implementation of the practice; and standardization and routine use of the practice. Implementation, the third stage of the model and the focus of our study, requires allocation of sufficient resources and institutional support for the change effort. "Implementation serves as the crucial stage that connects adoption decisions with routine practice" (Simpson & Flynn, 2007, p. 112). In our study, the clinical staff in our samples all worked in organizations that received multi-year, multi-million-dollar grants from the Center for Substance Abuse Treatment, Substance Abuse and Mental Health Services Administration (CSAT/SAMHSA) to implement their EBPs, so funding per se was not a constraint in the implementation process (see the Materials and methods section for a further description of sample selection). Hence, the study could focus on the organizational aspects of EBP implementation.

Our study used the Texas Christian University organizational readiness for change measures for staff (TCU ORC-S) (Lehman et al., 2002) to examine whether clinical staff perceptions of the organizational readiness of their treatment unit were associated with their reports of the level of modification made to the EBP they were implementing. The framework posits that five organizational areas (motivation for change, adequacy of resources, staff attributes, organizational climate, and training exposure and utilization) all influence EBP implementation, and the current study used the TCU-ORC measures for all these areas. As described in the introduction, the TCU-ORC measures were used in a prior study by the authors (Lundgren et al., 2012), which identified that, for a sample of clinical staff (N = 510), two aspects of the TCU-ORC measures, a higher level of program needs (adequacy of resources) and higher levels of organizational stress (organizational climate), were associated with reporting higher levels of barriers when implementing EBPs.

The current study builds on this prior study. Specifically, through multivariable regression modeling, it examines whether there is a significant association between addiction treatment staff ratings of their organization's readiness to change, their perception of the level of barriers they encountered when implementing a new EBP, and the level of modifications they made to the EBP implemented.

1.4. Control factors

Staff education level and professional experience. Addiction treatment staff with higher levels of education and with more years of professional experience tend to have more positive attitudes about science-based staff training and the usefulness of EBPs compared to their counterparts (Lundgren et al., 2011a; Lundgren et al., 2011b; McCarty et al., 2007; Rieckmann, Daley, Fuller, Thomas, & McCarty, 2007).
The current study explored whether staff education levels and years of addiction counseling experience were also associated with staff reports of the extent to which modifications were made to the EBP in the implementation process. The study also controlled for the age and gender of clinical staff.

Type of treatment unit. Findings are mixed on whether EBP implementation varies by the specific type of treatment unit in which the staff works. McCarty et al. (2007) and Fuller et al. (2007)
did not find treatment unit type to be associated with differences in staff attitudes regarding EBPs. However, in a study of 376 counselors and 1083 clients in methadone, residential and outpatient addiction treatment programs in Oregon and Massachusetts, Rieckmann et al. (2007) reported that the most consistent support for pharmacological therapies came from staff in outpatient settings. The current study controls for whether a program is an inpatient, outpatient or other type of treatment unit.

Treatment program affiliation with a research institution. Staff in organizations affiliated with research institutions, such as universities and hospitals, have more positive attitudes regarding science-based addiction treatment (Lundgren et al., 2011b; McCarty et al., 2007; Pinto, Yu, Spector, Gorroochurn, & McCarty, 2010). Our study controlled for whether the principal investigator (PI) or the evaluator for the EBP implementation project was affiliated with a university or hospital.

Program duration. The study controlled for how long the program in which the EBP was implemented had been in existence. Lundgren et al. (2012) identified that programs that had been in place for longer periods of time reported higher levels of barriers to EBP implementation.

Type of EBP. The specific EBP implemented in the treatment unit has to be controlled for when exploring whether modifications were made to the EBP. The study controlled for the three most commonly implemented EBPs: motivational interviewing (MI), Adolescent Community Reinforcement Approach (A-CRA), and cognitive-behavioral therapy (CBT). Finally, the study controlled for whether the organization was located in an urban, rural or suburban area.

2. Materials and methods

2.1. Data collection and sample

Telephone interviews and Web surveys were conducted with a sample of 524 clinical staff working in treatment programs funded by CSAT/SAMHSA to implement EBPs (see Lundgren et al., 2011b and Lundgren et al., 2012 for further details on data collection methods). In summary, potential participants were sampled from a publicly available listing of organizations receiving awards from CSAT/SAMHSA with an EBP grant start date between 2003 and 2008. This list included 495 grantee organizations, from which 330 were selected. Program directors in these 330 organizations were asked to identify staff who were directly involved in the implementation of EBPs for the new grant. Of the program directors, 10 declined to participate. Of the 524 staff who were contacted, 5 declined to participate. Further, interview data from nine staff were eliminated from data analysis due to missing data, resulting in an analysis sample of 510 cases (see the Missing data section below for details).

The study sample was selected because (a) the treatment programs had specified in their CSAT/SAMHSA proposals the specific EBPs they would implement; (b) funding was not a key barrier to EBP implementation because the CSAT/SAMHSA financial awards were substantial; and (c) the programs represented a range of geographic areas and program types from around the country. Study protocols were approved by the Boston University Institutional Review Board.

2.2. Measures
2.2.1. Dependent variable: level of modifications made to the EBP in the implementation process

At the beginning of each telephone interview, interviewers said, "We will be talking about [name of specific EBP]." (Project directors in preceding interviews had described key EBPs that staff had implemented, and it is these key EBPs that interviewers named when asking staff to rate modifications of EBPs.) They then stated: "As you may know, there are written manuals and standards for implementing [the EBP]. Did your project use these manuals or standards to implement [the EBP]?" Next, the interviewer asked the following open-ended question: "Did you modify the components of [the EBP]? If so, could you tell me how and why?" The interviewers also provided the following probe: "Were there any other ways you modified the EBP (i.e., how and why)?" After respondents answered this first set of open-ended questions, interviewers then asked: "Based on the modifications you just described, think about a 10-point scale where the number 1 stands for no modification at all, and the number 10 stands for total modification. On this 10-point scale, how much did your project modify the components of [the EBP]?" [PROBE: "What number best represents the degree of modification made to the components of this EBP as written in the manuals or standards?"] Respondents who indicated that no modifications were made were given a modification rating score of 1. Responses to this ordinal scale were used to measure the extent of modifications when implementing EBPs. The responses to the modification rating scale were unimodal and covered the full range.

It should be noted that 6% of staff reported that a proposed EBP for which they received funding had not been implemented at all. These staff were not included in the analyses, yet they should be kept in mind, as they were the least successful with EBP implementation.

2.2.2. Key independent variables

Level of barriers experienced when implementing the EBP was measured by a 10-point ordinal scale. During the telephone interview, the interviewer first asked each staff person to "describe barriers that your project encountered in providing this EBP." Then the respondent was asked: "Using a scale from 1 to 10, where the number 1 means that barriers did not interfere at all with providing [the EBP], and the number 10 means that barriers totally interfered with providing [the EBP], what number best represents how much these barriers interfered with your project's ability to provide [the EBP]?" The responses to this ordinal scale are used to measure the level of barriers staff experienced when implementing an EBP. Results from the use of this variable as the dependent variable in multiple regression models have been published (Lundgren et al., 2012).

Organizational readiness for change. To assess organizational readiness for change, the study instrumentation included scales from the Texas Christian University organizational readiness for change for staff (TCU ORC-S) assessment (Lehman et al., 2002). Staff completed 115 items (five-point agree-disagree Likert scales) hypothesized to form 18 scales reflecting attributes of organizations in the following areas: motivation for change, adequacy of resources, staff attributes, and organizational climate. In addition, Lehman et al. (2002) developed 14 "Training Exposure and Utilization" questions for the TCU ORC-S (not included in the 18 subscale scores). Two questions concerned training satisfaction, five asked about training exposure, four were about individual-level training utilization, and another three were about program-level training utilization. These questions are also included as independent variables in the analyses. For a complete description of the variables included in the TCU-ORC-S, see Table 1. Standard subscale scoring was used, as described in the TCU-ORC-S (Lehman et al., 2002).
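To make the scoring concrete, the sketch below illustrates subscale scoring and a Cronbach's alpha computation in Python. It is a minimal sketch on simulated data: the data frame, the column names, and the convention of multiplying the item mean by 10 are our illustrative assumptions, not the authors' code (the TCU-ORC-S scoring guide should be consulted for the exact procedure).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical stand-in for the survey data: 510 respondents answering the
# six 5-point Likert items of the Influence subscale (column names invented).
staff_df = pd.DataFrame(
    rng.integers(1, 6, size=(510, 6)),
    columns=[f"influence_{i}" for i in range(1, 7)],
)

def subscale_score(items: pd.DataFrame) -> pd.Series:
    # Assumed TCU-style scoring: average the respondent's 5-point item
    # responses and multiply by 10, giving a score in the 10-50 range.
    return items.mean(axis=1) * 10

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of summed score)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

items = staff_df[[f"influence_{i}" for i in range(1, 7)]]
staff_df["influence"] = subscale_score(items)
# On random data alpha will be near zero; real Likert items are correlated.
print(f"Cronbach's alpha (Influence): {cronbach_alpha(items):.2f}")
```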
In addition, Cronbach's alpha reliability scores for the current study's TCU-ORC subscale scores are presented in Table 1.

Table 1
TCU-ORC-S categories and sub-scales, variable descriptions, and Cronbach's alpha.

Motivation for change
- Program needs (Additional guidance needed in…): Eight variables measure staff perceptions about how well the program measures and supports client services and needs. (alpha = .87)
- Training needs (Need more training in…): Eight variables measure staff perception about their own training needs to better establish rapport and improve client behavioral outcomes. (alpha = .72)
- Pressures for change (Pressure comes from…): Seven variables measure staff perception about which stakeholders (e.g., clients, staff, managers, community members) pressure the organization for programmatic change. (alpha = .66)

Resources
- Offices: Four variables measure staff evaluation of the physical adequacy of office facilities. (alpha = .86)
- Staffing: Six variables measure staff evaluation of programmatic resources in providing adequate clinical and support staff, including skill levels and turnover rates. (alpha = .77)
- Training: Four variables measure staff opinion of training and continuing education programming (e.g., at professional conferences, in-service training). (alpha = .57)
- Equipment: Seven variables measure staff opinion on availability of computer resources and ease of electronic record-keeping. (alpha = .60)
- Internet: Four variables assess staff perception of accessibility, availability and use of the Internet, including email, within the organization. (alpha = .44)

Staff attributes
- Growth: Five variables measure staff perception of their own professional growth through reading, skill-building, and professional educational opportunities. (alpha = .61)
- Efficacy: Five variables measure staff assessment of their own professional efficacy, self-confidence, and skills. (alpha = .59)
- Influence: Six variables measure staff assessment of their professional, leadership, and mentoring role in the program, including how much they influence decisions of other staff. (alpha = .78)
- Adaptability: Four variables measure staff perception of their own flexibility in terms of willingness to learn and implement new ideas and procedures. (alpha = .61)

Organizational climate
- Mission: Five variables measure staff perception of how clearly the program mission is articulated and how well staff understand the mission. (alpha = .71)
- Cohesion: Six variables measure staff assessment of how well the staff as a group gets along, helps each other, and trusts and cooperates as a team. (alpha = .85)
- Autonomy: Five variables measure staff perception of their professional autonomy within the program, freedom to try new techniques with clients, and a sense of feeling trusted by management. (alpha = .46)
- Communication: Five variables measure staff assessment of the ease, frequency, and fairness of communication within the program. (alpha = .77)
- Stress: Four variables measure staff perception of the level of stress, including pressures and workload, among staff members. (alpha = .78)
- Change: Five variables measure staff perception of the ease of making change in the program, and the general openness to new ideas among staff and management. (alpha = .61)

Training exposure and utilization
- Training satisfaction: Two variables measure satisfaction with training opportunities offered in the last year. (alpha: NA)
- Training exposure: Five variables measure how often the staff member attended various types of training workshops during the past 12 months, including those offered by the agency where they work. (alpha: NA)
- Training utilization (individual level): Four variables measure how often the staff member tried out new ideas and techniques learned in a workshop or in other ways, and how often the staff member encouraged other staff to try that new technique or idea. (alpha: NA)
- Training utilization (program level): Three variables measure staff perception of how often new ideas and techniques are discussed, adopted, and supported by staff and program managers. (alpha: NA)

2.2.3. Control factors

Age was calculated by subtracting the respondent's date of birth from the date of completion of the online survey. Gender was initially measured as a three-category variable; however, two individuals who selected transgender were not included in this analysis due to the small sample size. Education was measured in two ways: by identifying highest degree status (no high school, high school or equivalent, some college, associate's degree, bachelor's degree, master's degree, doctoral degree or other professional degree), and through a two-category recoded variable (below a master's degree, and master's degree or above). Years of experience in drug abuse counseling was measured by utilizing both an ordinal scale variable (0–5 years) and a recoded two-category variable (four or fewer years of experience, and five or more years of experience).

Treatment program affiliated with a research institution is a composite of four original variables: whether the PI for the EBP project was affiliated with a university, whether the PI was affiliated with a hospital, whether the evaluator was affiliated with a university, and whether the evaluator was affiliated with a hospital. Type of treatment unit initially included several categories, which were combined to produce a three-category variable: (1) outpatient unit, (2) inpatient/therapeutic community unit, and (3) other. Primary service area was measured through a three-category variable that identified whether the organization was situated in a rural, suburban or urban location. Program duration was measured by subtracting the year the program was first funded by CSAT/SAMHSA from the year of the study interview.

Type of EBP implemented: when asked about the key EBPs that were implemented in the funded program, staff named between 40 and 60 different EBPs implemented during the project period, depending on how broadly evidence-based practices were grouped. We selected the three most common: motivational interviewing (MI), Adolescent Community Reinforcement Approach (A-CRA), and cognitive-behavioral therapy (CBT). For each of these, at least 42 staff had been involved in the implementation; hence, the sample sizes were large enough to permit multivariable regression analysis. Three dummy variables were created: (1) MI versus other EBPs, (2) A-CRA versus other EBPs, and (3) CBT versus other EBPs. In addition, for the 10 most commonly named EBPs (MI, CBT, A-CRA, Assertive Community Treatment (ACT), case management (CM), the Matrix Model, Seeking Safety, Integrated Dual Diagnosis Treatment (IDDT), peer-to-peer recovery support, and Motivational Enhancement Therapy (MET)), the percent of total cases each represents and their mean modification scores are shown in Table 4 in the Results section below.

2.3. Missing data

A detailed analysis of missing data was conducted. For each variable that was missing data, modification scale scores (the dependent variable) were compared for the missing data cases and the complete data cases. To assess for patterns of missing data, comparisons between missing data cases and complete data cases were also made on gender, age, level of education, and program duration. Men were more likely than women to be missing data on the number of years of experience in the drug abuse counseling field; all other comparisons were non-significant. To evaluate the impact of missing data on the final results, mean imputation was used, and all bivariate and multivariable analyses were repeated and compared to the results based on complete cases. The analyses with imputed data were highly comparable to the analyses based on complete cases. Results using complete cases are presented in both the narrative and the tables.
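The imputation sensitivity check can be sketched as follows. The data frame, column names, and values are hypothetical stand-ins for illustration, not the study data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Simulated stand-in: two predictors, one with missing values, plus the 1-10
# modification rating (all names and values hypothetical).
df = pd.DataFrame({
    "barriers": rng.integers(1, 11, 510).astype(float),
    "influence": rng.normal(30, 5, 510),
    "modification": rng.integers(1, 11, 510).astype(float),
})
df.loc[rng.choice(510, 40, replace=False), "influence"] = np.nan
predictors = ["barriers", "influence"]

def mean_impute(data: pd.DataFrame, columns: list) -> pd.DataFrame:
    """Replace missing values in the listed columns with each column's mean."""
    out = data.copy()
    for col in columns:
        out[col] = out[col].fillna(out[col].mean())
    return out

# Sensitivity check: fit the same model to complete cases and to imputed data,
# then compare coefficients (the paper reports the two were highly comparable).
complete_cases = df.dropna(subset=predictors + ["modification"])
imputed = mean_impute(df, predictors)
print(len(complete_cases), len(imputed))  # 470 vs 510 rows
```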
Table 2
Bivariate analysis of independent variables in relationship to ratings on the modification scale (1–10). Correlation coefficients are given for continuous independent variables; mean modification scores are given for categorical variables.

Barriers scale: 0.250⁎⁎⁎

Motivation for change
- Program needs: 0.066
- Immediate training needs: 0.064
- Pressures for change: 0.066

Adequacy of resources
- Offices: −0.051
- Staffing: −0.070
- Training: −0.078
- Equipment (computer access; 2002): 0.002
- Internet (e-communications; 2002): −0.036

Staff attributes
- Growth: −0.022
- Efficacy: −0.074
- Influence: 0.111⁎
- Adaptability: −0.015

Organizational climate
- Mission: −0.096⁎
- Cohesion: −0.142⁎
- Autonomy: −0.054
- Communication: −0.081
- Stress: 0.150⁎⁎
- Change: −0.039

Training exposure and utilization
- Satisfaction with training offered at workshops available to staff in last year: −0.116⁎
- You were satisfied with the training opportunities available to you last year: −0.084
- In the last year, how often did you attend training workshops held within 50 miles of your agency? 0.025
- In the last year, how often did you attend training workshops held more than 50 miles from your agency? −0.030
- How many workshops do you expect to attend in the next 12 months? 0.011
- In the last year, how many times did outside trainers come to your agency to give workshops? −0.019
- In the last year, how many times did your agency offer special, in-house training? −0.088⁎
- When you attend workshops, how often do you try out the new intervention or techniques learned? −0.070
- Are your clients interested or responsive to new ideas or counseling materials when you try them? 0.036
- In recent years, how often have you adopted (for regular use) new counseling interventions or techniques from a workshop? −0.104⁎
- When you have adopted new ideas into your counseling, how often have you encouraged other staff to try using them? −0.023
- How often do new interventions or techniques that the staff from your program learn at workshops get adopted for general use? −0.079
- How often do new ideas learned from workshops get discussed or presented at your staff meetings? −0.047
- How often does the management at your program recommend or support new ideas or techniques for use by all counselors? −0.047

Age of respondent: −0.014

Gender
- Male: 3.0
- Female: 2.9

Education
- Below master's level of education: 2.8
- Master's level of education or higher: 3.0

Number of years of experience in the drug counseling field
- Fewer than 5 years: 2.7⁎
- Five or more years: 3.1

Research affiliation (PI or evaluator affiliated with hospital or university)
- Yes: 2.9
- No: 3.0

Type of treatment unit
- Inpatient (includes halfway house and work release programs): 3.0
- Outpatient (including methadone maintenance): 2.9
- Other: 3.1

Primary service area of treatment unit
- Rural: 2.7
- Suburban: 2.4
- Urban: 3.1

Program duration: .111⁎

Motivational interviewing
- Yes: 2.3⁎⁎
- No: 3.1

Adolescent Community Reinforcement Approach
- Yes: 2.7
- No: 3.0

Cognitive behavioral therapy
- Yes: 2.6
- No: 3.0

⁎ p < .05. ⁎⁎ p < .01. ⁎⁎⁎ p < .001.
2.4. Data analysis

As a first step, bivariate analyses (chi-square, one-way ANOVA, and correlation analysis, as appropriate for categorical and continuous variables) examined the statistical relationship between all independent variables (age, gender, number of years of education, years of experience in drug abuse counseling, program duration, primary service area, type of treatment unit, type of EBP implemented, organizational affiliation with a research institution, rating of barriers to implementation, the 18 TCU-ORC sub-scales, and the TCU-ORC training exposure and utilization measures) and the dependent variable. A linear regression model was then developed using all variables significant at the bivariate level (p < .05). Because of the ordinal nature of the scale, we also explored a dichotomous outcome using multivariable logistic regression analysis. In the logistic regression model (not shown), all independent variables were associated with the dependent variable in the same direction as in the linear model below; however, two of the relationships were no longer significant.
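The analytic sequence just described (bivariate screening at p < .05, a multivariable linear model, and a logistic sensitivity check on a dichotomized outcome) can be sketched as follows. The simulated data, variable names, and the median-split cut point are our illustrative assumptions; the paper does not report how the outcome was dichotomized.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
# Simulated stand-in for the staff data set (all names and values hypothetical);
# "modification" is built to correlate with "barriers" so step 1 selects it.
df = pd.DataFrame({
    "barriers": rng.integers(1, 11, 510).astype(float),
    "stress": rng.normal(30, 5, 510),
    "unit_type": rng.choice(["outpatient", "inpatient", "other"], 510),
})
df["modification"] = (df["barriers"] + rng.normal(0, 3, 510)).clip(1, 10)

def bivariate_p(data: pd.DataFrame, var: str, outcome: str = "modification") -> float:
    """p-value from a Pearson correlation (continuous predictor) or a
    one-way ANOVA across categories (categorical predictor)."""
    sub = data[[var, outcome]].dropna()
    if sub[var].dtype.kind in "if":
        return stats.pearsonr(sub[var], sub[outcome])[1]
    return stats.f_oneway(*[g[outcome].to_numpy() for _, g in sub.groupby(var)])[1]

# Step 1: screen predictors, keeping those significant at p < .05.
candidates = [v for v in ["barriers", "stress", "unit_type"] if bivariate_p(df, v) < 0.05]

# Step 2: multivariable linear regression on the screened predictors.
X = sm.add_constant(pd.get_dummies(df[candidates], drop_first=True).astype(float))
ols = sm.OLS(df["modification"], X).fit()
print(ols.summary())

# Step 3: sensitivity check with a dichotomized outcome; a median split is
# used here purely for illustration.
y = (df["modification"] > df["modification"].median()).astype(int)
print(sm.Logit(y, X).fit(disp=0).summary())
```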
3. Results

Five hundred ten clinical staff members were identified by program directors as directly involved with the implementation of EBPs in their organization. Respondents were primarily women (72.2%), with a mean age of 42 years. Approximately half (52.0%) held a graduate degree, and 51.9% had 5 or more years of experience as drug abuse counselors. Overall, 27.9% of staff worked in inpatient units (including therapeutic communities), and 71.1% worked in outpatient units (including intensive outpatient treatment, methadone outpatient units, and regular outpatient treatment). On average, these programs had been in place for 3.3 years (SD = 1.5 years). Of these treatment units, 77.5% were in urban settings, 13.2% in suburban areas, and 9.3% in rural areas.

3.1. Bivariate results

Bivariate analyses (see Table 2) identified that staff who reported a higher level of barriers when implementing a new EBP, who perceived that they had more influence in the organization, who perceived a lower sense of organizational mission, who perceived a lower sense of organizational cohesion, who experienced more stress in their organization, who had five or more years of experience in the drug counseling field, who worked in programs funded by CSAT/SAMHSA for a longer period of time, and who reported less frequently adopting new counseling interventions and techniques reported higher levels of modifications of the EBP implemented, compared to their counterparts. On the other hand, staff who had implemented MI compared to any other EBP, staff who reported higher satisfaction with training, and staff who reported having more in-house training reported lower levels of modifications. There were no significant relationships between the level of modifications made to the EBP and staff age, gender, education, working in a research-affiliated organization, type of treatment unit in which staff were employed, or the treatment unit's geographic location.

3.2. Regression results

The linear regression model presented in Table 3 includes all variables significant at the bivariate level. In this model, the key independent variable, level of barriers experienced when implementing an EBP, remained significant when other variables were controlled for. Specifically, staff who experienced a higher level of barriers to EBP implementation made more modifications to the EBP implemented. This was also the factor most highly correlated with the level of modifications to EBPs.
In addition, four control factors remained significantly associated with the level of modifications to EBPs in the implementation process. Staff who had five or more years of addiction counseling experience, staff who less frequently implemented new counseling interventions and techniques, and staff who reported a higher level of influence in their organization reported a significantly higher level of modifications made to the specified EBP, compared to their counterparts. In contrast, staff who had implemented MI reported a significantly lower level of modifications in the implementation process compared to staff who had implemented other EBPs.

As described above, the type of EBP implemented was correlated with having made modifications to the specific EBP. Therefore, an additional table (Table 4 below) was developed, which describes the 10 most commonly named EBPs implemented and the modification scores associated with each. As can be seen in Table 4, staff reported the lowest modification scores for MI (2.3) and case management (2.3). The highest modification scores were reported by those who implemented peer-to-peer recovery support (4.2) and Motivational Enhancement Therapy (MET) (4.4).
4. Summary/discussion

To summarize, the regression model identified five key results. First, addiction treatment staff who experienced more barriers when implementing an EBP reported making more modifications to that EBP. Second, staff who implemented different types of EBPs reported making different levels of modifications, with staff who implemented MI reporting fewer modifications than staff who implemented other types of EBPs. Third, staff who less frequently adopted new counseling interventions and techniques reported higher levels of EBP modifications compared to their counterparts. Fourth, staff who reported having worked in the drug counseling field for longer periods of time reported making more EBP modifications. Finally, staff who reported higher levels of staff influence in their organization reported making more EBP modifications.

Hence, our findings suggest that the primary factors associated with the level of modifications to an EBP in the implementation process were staff-related factors, not organizational or treatment unit factors. The factor that mattered most was a staff attitudinal factor: staff perceiving a greater level of barriers when implementing an EBP. Only one organizational-level factor mattered, and it was also staff-related: staff who reported that staff had a greater level of organizational influence also reported making more modifications to the EBP they implemented.
Table 3
Multivariate linear regression (n = 431).

Variable | β (standardized) | t | Partial correlation | p | 95% CI for B
Barrier rating | .229 | 4.870 | .231 | .000 | .294 to .693
Influence | .125 | 2.442 | .118 | .015 | .005 to .508
Mission | .008 | 0.126 | .006 | .900 | −.240 to .273
Cohesion | −.053 | −0.832 | −.041 | .406 | −.380 to .154
Stress | .074 | 1.266 | .062 | .206 | −.240 to .273
Satisfaction with training offered at workshops available to staff in last year | −.048 | −0.978 | −.048 | .329 | −.300 to .101
In the last year, how many times did your agency offer special, in-house training? | .038 | −0.796 | −.039 | .426 | −.275 to .116
In recent years, how often have you adopted (for regular use) new counseling interventions or techniques from a workshop? | −.130 | −2.752 | −.133 | .006 | −.471 to .079
Five or more years of experience in the drug abuse counseling field | .104 | 2.261 | .110 | .024 | .029 to .409
Program duration | .089 | 1.933 | .094 | .054 | −.003 to .381
Motivational interviewing | −.132 | −2.881 | −.139 | .004 | −.442 to −.083

R² = .155; adjusted R² = .132; p < .001.
Table 4
Evidence-based practice, percent of total cases, and mean modification score.

Evidence-based practice | Percentage of total cases | Mean modification score
Motivational interviewing (MI) | 15 | 2.3
Adolescent Community Reinforcement Approach (A-CRA) | 8 | 2.7
Cognitive-behavioral therapy (CBT) | 8 | 2.6
Assertive community treatment (ACT) | 6 | 3.5
Case management (CM) | 6 | 2.3
Matrix Model | 5 | 3.0
Seeking Safety | 4 | 3.6
Integrated dual diagnosis treatment (IDDT) | 3 | 2.9
Peer-to-peer recovery support | 3 | 4.2
Motivational Enhancement Therapy (MET) | 3 | 4.4

Note: Case management includes intensive, strength-based, "clinical", and comprehensive case management types. Motivational Enhancement Therapy (MET) includes MET only when mentioned without CBT; if the EBP was MET-CBT5, for example, it was counted as CBT.
4.1. Implications

The results suggest a range of implications for providers, funders and researchers.

Implication 1 is related to addiction treatment programs. Administrators and managers must acknowledge and respond to staff perceptions of barriers in order to decrease the range and degree of modifications made to EBPs. Understandably, if staff perceive greater barriers in EBP implementation, they are more likely to modify the EBP to ensure it is delivered in some form or another, thereby complying with administrative directives and funding guidelines. In previous qualitative findings from the same study (Amodeo et al., 2011; Lundgren et al., 2011a), respondents described barriers such as insufficient client referrals, clients who did not fit the required demographics, inability to hire staff with particular educational credentials, and lack of availability of staff training, and said that they primarily made modifications to EBPs in response to client needs. They reported modifications such as offering group treatment in an individual format, changing criteria to increase client participation, and changing educational or training requirements for staff. While these are creative responses, such modifications could result in losing the "active ingredients" that make an EBP therapeutically effective. Hence, it seems preferable that addiction treatment providers work toward removing or addressing the barriers faced by staff rather than allowing staff to modify EBPs in ways that could negatively affect outcomes. Providing training on the minimal necessary elements for delivery of the EBP would be a starting point. Developing policies that require staff review and agreement on modifications beforehand, rather than allowing staff to make ad hoc changes to the EBP, would also offer some safeguards.

Implication 2 is related to the use of the TCU-ORC theoretical framework. Only one of the 18 TCU-ORC factors ("staff level of influence"; see the discussion of this finding under the final implication below) was found to influence the level of modifications in EBP implementation. Even though our own variable measuring perceptions of the level of barriers experienced in EBP implementation was highly correlated with the level of modifications made, TCU-ORC factors such as organizational stress and organizational cohesion, which one would have assumed would be barriers to EBP implementation and would therefore influence fidelity, were not significant. How important, then, are organizational readiness for change and organizational capacity to fidelity in implementing EBPs? Although only one of the 18 TCU-ORC organizational readiness for change factors was found to influence the level of modifications in EBP implementation, organizational-level factors may have an indirect effect on EBP implementation fidelity. Specifically, a key finding of the prior Lundgren et al. (2012) study is that the TCU-ORC scores on program needs were strongly associated with staff perceptions of
barriers in EBP implementation, and the current study identifies that those who report a higher level of barriers also report making more EBP modifications. (Of note, the TCU-ORC "program needs" area provides ratings on organizational assessment resources, capacity for matching client needs to services, client participation and measurement of client performance, effectiveness of group sessions, quality of counseling, and use of assessments to guide clinical and program decisions and to document program effectiveness; TCU-ORC-S-SG, Lehman et al., 2002.) Hence, the findings from our two studies suggest a "two-step" relationship between program capacity and fidelity: (1) staff who perceive they work in an organization with higher program needs report a higher level of barriers to EBP implementation; and (2) staff who experience a higher level of barriers to EBP implementation report a higher level of EBP modifications. These findings reinforce the important role of federal funders in enhancing the programmatic capacity of community-based organizations. Their role is especially relevant given anticipated changes in financing resulting from the Affordable Care Act, which is likely to move a significant amount of funding from the specialty addiction treatment network into primary care settings. Treatment providers must enhance their programmatic resources in order to reduce barriers to EBP implementation and improve fidelity, and they will be unable to do this without federal and other funders on board with this plan.

Implication 3 is related to the need for future research on the "two-step" finding described above. Future research studies need to be conducted with larger national samples of addiction treatment providers, which will permit multi-level modeling.

Implication 4 is related to future research to identify the particular barriers faced by staff implementing specific EBPs and the actual modifications made to these EBPs. Staff who reported implementing motivational interviewing (MI) rather than other EBPs reported making lower levels of modifications. Modifications may be more likely when an EBP is less flexible (i.e., components, format, and timing are dictated by the EBP designers). MI could be seen as a more flexible EBP when compared to A-CRA, for example, where clinician certification is required and the types of sessions are dictated by the protocol (e.g., sessions with the adolescent, sessions with the parents, joint sessions with the adolescent and parents). On the other hand, the quality and availability of training on MI may have been better than for other EBPs; given the extensive MI network of trainers (MINT: motivationalinterviewing.org) and the longstanding commitment of CSAT/SAMHSA to MI implementation, this might be an explanation.

A final implication is related to the findings regarding staff influence, staff experience and modifications. A higher level of staff influence and more years of experience are often associated with more decision-making capacity in an organization, and our study suggests that this decision-making capacity may result in less fidelity to EBP protocols. As a prior study by the authors has shown, staff often make modifications to respond to client needs (Lundgren et al., 2011a), so the lack of fidelity may be in the service of making modifications that are thought to respond more appropriately to client needs (or that actually do so).
Front-line staff are the ones in an organization with the day-to-day client contact, and it may well be that staff with more organizational influence and experience are better able to respond to what they see as client needs. In addition to pointing to the need for ongoing staff training, this finding suggests that funders should strongly encourage or require that community addiction programs select EBPs that are a good fit with their client population, staff capacity and organizational capacity in order to reduce EBP modifications.

4.2. Limitations

Some of the limitations of this study are similar to those described in Lundgren et al. (2012). First, findings from this study of EBP
modifications by addiction program staff do not tell us whether the modifications were made in such a way as to preserve the key elements of the practice, or whether they were beneficial or non-beneficial to clients. Second, the study only sampled community-based, CSAT/SAMHSA-funded addiction treatment programs; it did not include treatment organizations solely funded by states or by private insurance. Third, the study relied on program directors to identify clinical staff directly involved with implementing EBPs. Fourth, given that this is an exploratory cross-sectional study, it is only able to identify possible associations rather than causal connections between selected study factors. Fifth, a possible concern is sample bias: since organizations that are the least successful with implementation of EBPs probably never apply for government funding such as these CSAT/SAMHSA funds, our study does not include the perspectives of those organizations; hence, we may not see the full range of variation in treatment unit organizational readiness for change. On a methodological note, a number of the Cronbach's alpha scores for the TCU-ORC-S scales were slightly below the conventional threshold of .70. Finally, HLM or similar statistical modeling could not be used to explore whether different groups of staff were "nested" in different types of organizations. We decided against examining nesting because of the small group size at each treatment organization, as well as the advice of our biostatistician and articles such as Maas and Hox (2005) and Bell, Ferron, and Kromrey (2008), which note that the use of HLM with sparse data could result in imprecision and potential bias in estimating regression coefficients at the various levels.

4.3. Conclusions

A key contribution of this study to the addiction treatment literature is its focus on fidelity in EBP implementation. The study identifies how key addiction treatment staff factors (attitudes about the level of difficulty experienced implementing a specific EBP, professional experience and influence, and amount of experience with implementing EBPs) influence fidelity in implementation. Implications of these findings include the essential role of staff training in ensuring fidelity in EBP implementation, and the need to respond to staff perceptions of the barriers they encounter, which staff often describe as resulting from discrepancies they perceive between client, organizational and funding needs and realities. To address this, it is critical that treatment unit directors and grant-writing staff, in the proposal development stage, (a) choose EBPs that are congruent with their organization's and staff's capacity, (b) do so in consultation with the clinical staff assigned to undertake the implementation, and (c) do so with an awareness of staff perceptions of barriers to their capacity to implement the specific EBPs proposed.

On a policy level, new federal funding responding to the Affordable Care Act must be directed in such a way that it supports the future of EBP implementation and fidelity and the care of the most vulnerable clients. Funding streams must continue to be directed to community addiction programs, which are often the only providers that treat the most vulnerable addicted individuals (e.g., those who are homeless, racially and ethnically diverse, or dually or multiply diagnosed).
Funding must also be allocated with an awareness that staff will continue to make more modifications to EBPs (possibly rendering the EBPs ineffective) when they perceive that their organization has a higher level of program needs, which is in turn related to a higher level of perceived barriers.

Acknowledgments

Funding was provided by the Robert Wood Johnson Foundation Substance Abuse Policy Research Project, grant #65029. The authors would like to thank the many program directors and staff who participated in our interviews and completed online surveys.
We also thank our interviewers, our research assistant R. Mitchell Thomas, and the data support staff from the Center for Addictions Research and Services for the many hours spent assisting in data collection and analysis.
References

Addis, M. E. (2002). Methods for disseminating research products and increasing evidence-based practice: Promises, obstacles and future directions. Clinical Psychology: Science and Practice, 9, 367–378.
Addis, M. E., Wade, W. A., & Hatgis, C. (1999). Barriers to dissemination of evidence-based practices: Addressing practitioners' concerns about manual-based psychotherapies. Clinical Psychology: Science and Practice, 6, 430–441.
Amodeo, M., Lundgren, L., Cohen, A., Rose, D., Chassler, D., Beltrame, C., et al. (2011). Barriers to implementing evidence-based practices in addiction treatment: Comparing staff reports on motivational interviewing, Adolescent Community Reinforcement Approach, assertive community treatment, and cognitive–behavioral therapy. Evaluation and Program Planning, 34, 382–389.
Anthony, W. A., Rogers, E. S., & Farkas, M. (2003). Research on evidence-based practices: Future directions in an era of recovery. Community Mental Health Journal, 39, 101–114.
Bauman, L. J., Stein, R. E. K., & Ireys, H. T. (1991). Reinventing fidelity: The transfer of social technology among settings. American Journal of Community Psychology, 19, 619–639.
Beidas, R., Koerner, K., Weingardt, K., & Kendall, P. (2011). Training research: Practical recommendations for maximum impact. Administration and Policy in Mental Health and Mental Health Services Research, 38, 217–222.
Bell, B., Ferron, J., & Kromrey, J. (2008). Cluster size in multilevel models: The impact of sparse data structures on point and interval estimates in two-level models. Paper presented at the American Statistical Association's 2008 Joint Statistical Meetings (pp. 1122–1129).
Blakely, C. H., Mayer, J. P., Gottschalk, R. G., Schmitt, N., Davidson, W. S., Roitman, D. B., et al. (1987). The fidelity-adaptation debate: Implications for the implementation of public sector social programs. American Journal of Community Psychology, 15, 253–268.
Calsyn, R., Tornatzky, L., & Dittman, S. (1977). Incomplete adoption of an innovation: The case of goal attainment scaling. Evaluation, 4, 127–130.
Carroll, K. M., & Rounsaville, B. J. (2007). A vision of the next generation of behavioral therapies research in the addictions. Addiction, 102, 850–869.
Corrigan, P. W., Steiner, L., McCracken, S. G., Blaser, B., & Barr, M. (2001). Strategies for disseminating evidence-based practices to staff who treat people with serious mental illness. Psychiatric Services, 52, 1598–1606.
Drake, R. E., Goldman, H. H., Leff, H. S., Lehman, A. F., Dixon, L., Mueser, K. T., et al. (2001). Implementing evidence-based practices in routine mental health service settings. Psychiatric Services, 52, 179–182.
Fuller, B. E., Rieckmann, T., Nunes, E. V., Miller, M., Arfken, C., Edmundson, E., et al. (2007). Organizational readiness for change and opinions toward treatment innovations. Journal of Substance Abuse Treatment, 33, 183–192.
Kendall, P. C., Gosch, E., Furr, J. M., & Sood, E. (2008). Flexibility within fidelity. Journal of the American Academy of Child and Adolescent Psychiatry, 47, 987–993.
Lehman, W. E. K., Greener, J. M., & Simpson, D. D. (2002). Assessing organizational readiness for change. Journal of Substance Abuse Treatment, 22, 197–209.
Lehman, W. E. K., Simpson, D. D., Knight, D. K., & Flynn, P. M. (2011). Integration of treatment innovation planning and implementation: Strategic process models and organizational challenges. Psychology of Addictive Behaviors, 25, 252–261.
Lundgren, L., Amodeo, M., Cohen, A., Horowitz, A., & Chassler, D. (2011a). Modifications of evidence-based practices in community-based addiction treatment organizations: A qualitative research study. Addictive Behaviors, 36, 630–635.
Lundgren, L., Amodeo, M., Krull, I., Chassler, D., Weidenfeld, R., Zerden, L. D., et al. (2011b). Addiction treatment provider attitudes on staff capacity and evidence-based clinical training: Results from a national study. The American Journal on Addictions, 20, 271–284.
Lundgren, L., Chassler, D., Amodeo, M., D'Ippolito, M., & Sullivan, L. (2012). Barriers to implementation of evidence-based addiction treatment: A national study. Journal of Substance Abuse Treatment, 42, 231–238.
Maas, C., & Hox, J. (2005). Sufficient sample sizes for multilevel modeling. Methodology, 1, 86–92.
McCarty, D., Fuller, B. E., Arfken, C., Miller, M., Nunes, E. V., & Edmundson, E. (2007). Direct care workers in the National Drug Abuse Treatment Clinical Trials Network: Characteristics, opinions, and beliefs. Psychiatric Services, 58, 181–190.
McHugo, G. J., Drake, R. E., Whitley, R., Bond, G. R., & Finnerty, M. T. (2007). Fidelity outcomes in the national implementation of evidence-based practices project. Psychiatric Services, 58, 1279–1284.
Moncher, F. J., & Prinz, R. J. (1991). Treatment fidelity in outcome studies. Clinical Psychology Review, 11, 247–266.
Mowbray, C. T., Holter, M. C., Teague, G. B., & Bybee, D. (2003). Fidelity criteria: Development, measurement, and validation. American Journal of Evaluation, 24, 315–340.
Mueser, K. T., Torrey, W. C., Lynde, D., Singer, P., & Drake, R. E. (2003). Implementing evidence-based practices for people with severe mental illness. Behavior Modification, 27, 387–411.
Pinto, R. M., Yu, G., Spector, A. Y., Gorroochurn, P., & McCarty, D. (2010). Substance abuse treatment providers' involvement in research is associated with willingness to use findings in practice. Journal of Substance Abuse Treatment, 39, 188–194.
Rieckmann, T., Daley, M., Fuller, B. E., Thomas, C. P., & McCarty, D. (2007). Client and counselor attitudes toward the use of medications for treatment of opioid dependence. Journal of Substance Abuse Treatment, 32, 207–215.
Simpson, D. D., & Flynn, P. M. (2007). Moving innovations into treatment: A stage-based approach to program change. Journal of Substance Abuse Treatment, 33, 111–120.
Swain, K., Whitley, R., McHugo, G. J., & Drake, R. E. (2010). The sustainability of evidence-based practices in routine mental health agencies. Community Mental Health Journal, 46, 119–129.
Torrey, W. C., Drake, R. E., Dixon, L., Burns, B. J., Flynn, L., Rush, A. J., et al. (2001). Implementing evidence-based practices for persons with severe mental illness. Psychiatric Services, 52, 45–50.
Torrey, W. C., Lynde, D. W., & Gorman, P. (2005). Promoting the implementation of practices that are supported by research: The National Implementing Evidence-Based Practice Project. Child and Adolescent Psychiatric Clinics of North America, 14, 297–306.
Wilson, G. T. (1996). Manual-based treatments: The clinical application of research findings. Behaviour Research and Therapy, 34, 295–314.