Using formative evaluation methods to improve clinical implementation efforts: Description and an example

A. Rani Elwy a,b,⁎, Ajay D. Wasan c, Andrea G. Gillman d, Kelly L. Johnston e, Nathan Dodds e, Christine McFarland f, Carol M. Greco f

a Center for Healthcare Organization and Implementation Research, Edith Nourse Rogers Memorial Veterans Hospital, 200 Springs Road (Mailstop 152), Bedford, MA 01730, USA
b Department of Psychiatry and Human Behavior, Alpert Medical School, Brown University, Box G-BH, Providence, RI, USA
c Departments of Anesthesiology and Psychiatry, University of Pittsburgh School of Medicine, UPMC Pain Medicine, 5750 Centre Avenue, Suite 400, Pittsburgh, PA 15206, USA
d Department of Anesthesiology and Perioperative Medicine, University of Pittsburgh School of Medicine, 200 Lothrop St, Pittsburgh, PA 15213, USA
e Department of Psychiatry, University of Pittsburgh Medical Center, 100 North Bellefield, Rm 770, Pittsburgh, PA 15213, USA
f Department of Psychiatry, University of Pittsburgh School of Medicine, UPMC Center for Integrative Medicine, 580 S. Aiken Avenue, Suite 310, Pittsburgh, PA 15232, USA

ARTICLE INFO

Keywords: Formative evaluation; Qualitative methods; Theory of diffusion of innovation; Implementation strategies

ABSTRACT

Formative evaluation, a rigorous assessment process to identify potential and actual influences on the implementation process, is a necessary first step prior to launching any implementation effort. Without formative evaluation, intervention studies may fail to translate into meaningful patient care or public health outcomes or across different contexts. Formative evaluation usually consists of qualitative methods, but may involve quantitative or mixed methods. A unique aspect of formative evaluation is that data are shared with the implementation team during the study in order to adapt and improve the process of implementation during the course of the study or improvement activity. In implementation science, and specifically within formative evaluation, it is imperative that a theory or conceptual model or framework guide the selection of the various individual, organizational or contextual factors to be assessed. Data from these theory-based constructs can translate into the development and specification of implementation strategies to support the uptake of the intervention. In this article, we describe different types of formative evaluations (developmental, implementation-focused, progress-focused, and interpretive), and then present a formative evaluation case study from a real-world implementation study within several academic pain clinics, guided by the Theory of Diffusion of Innovation.

1. Introduction

1.1. Formative evaluation

Formative evaluation is “a rigorous assessment process designed to identify potential and actual influences on the progress and effectiveness of implementation efforts” (Stetler et al., 2006), enabling researchers to explicitly study the complexity of implementation projects and suggesting ways to answer questions about context, adaptations, and response to change. Formative evaluation can be used within a traditional experimental study such as a randomized controlled trial; the experimental design is thus combined with descriptive or observational formative evaluation approaches consisting of qualitative and quantitative techniques, creating a richer dataset for interpreting study results (Stetler et al., 2006; Curran et al., 2012; also see paper by Landes and colleagues in this issue). Although randomized controlled trials are regarded as the gold standard for establishing the effectiveness of interventions, effect sizes do not provide policy makers with information on how an intervention might be replicated in their specific context, or whether trial outcomes will be reproduced (Moore et al., 2015). Formative evaluation utilizes the same methods as process or summative evaluations (see articles by Hamilton and Finley, and Smith and colleagues, in this special issue), but differs in that data are shared with the implementation team during the study in order to adapt and improve the process of implementation during the course of the study or improvement activity (Bauer et al., 2015).

Corresponding author. E-mail address: [email protected] (A.R. Elwy).

https://doi.org/10.1016/j.psychres.2019.112532
Received 9 April 2019; Received in revised form 25 August 2019; Accepted 25 August 2019
0165-1781/ Published by Elsevier B.V.


To this extent, formative evaluations are iterative and may result in more successful implementation of healthcare innovations. For researchers new to implementation science, formative evaluation can be conceptualized as being similar to the fidelity monitoring that goes on as part of any traditional clinical trial, but differs in that it is specified a priori in a study research question (Bauer et al., 2015).

Without formative evaluation, intervention studies may fail to translate into meaningful patient care or public health outcomes or across different contexts (Glasgow, Lichtenstein and Marcus, 2008). When intervention efforts fail, it is important to know whether the failure occurred because the intervention was ineffective in a new setting (intervention failure) or because a good intervention was deployed incorrectly (implementation failure; Proctor et al., 2011). Analyses of barriers to changing practice, including a seminal review of 76 studies of physicians' adherence to clinical practice guidelines (Cabana et al., 1999), have shown that obstacles to changing clinical practice can arise at multiple levels of the healthcare system: the patient, the provider, the healthcare team, the healthcare organization, or the wider environment (Grol and Grimshaw, 2003; Grol, 1997). Formative evaluation allows for the distinction between intervention and implementation failure by identifying at an early stage whether desired outcomes are being achieved and, if they are not, using the formative evaluation data to develop or refine implementation strategies to improve implementation success, as needed. Formative evaluation can make the realities and unknown nature of implementation at these multiple levels more transparent to healthcare decision-makers (Stetler et al., 2006).

Data for formative evaluations can come from various sources and can include quantitative data, qualitative data, or both, gathered from patients, family members, providers, and healthcare system leaders, as well as publicly available indices from community, policy and regulatory outer contexts (Palinkas and Cooper, 2018; Bauer et al., 2015); such designs require careful attention to the various ways of combining these data (Fetters et al., 2013). Most qualitative formative evaluation data collection methods include semi-structured interviews or focus groups with patients, providers, or other stakeholders; direct observation of clinical processes to understand work flow; and reviews of documents such as meeting minutes, agendas, memorandums, and more (Palinkas and Cooper, 2018; Bauer et al., 2015). Formative evaluation at any stage requires distinct plans for adequate measurement and analysis (Stetler et al., 2006). Qualitative data analysis can either be structured as a hypothesis-free exploration using grounded theory or related approaches (Charmaz, 2014; Miles, Huberman and Saldana, 2013) or can utilize directed content analysis (Hsieh and Shannon, 2005) to address pre-specified issues such as hypothesis-testing or measurement of intervention fidelity (see article by Hamilton and Finley in this special issue for more discussion about qualitative methods). New guidance on using qualitative methods in implementation science calls for methods such as directed content analysis for conducting rapid implementation evaluations (Palinkas et al., 2019).
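As a rough illustration of how a pre-specified, theory-based codebook can drive directed content analysis, the sketch below screens interview excerpts against codebook terms. The codes, terms, and quotes are invented for illustration only; in practice, trained analysts (not keyword matching) apply and reconcile codes, and the screening step merely flags text for their review.

```python
# Illustrative sketch only: a first screening pass for directed content analysis,
# in which interview excerpts are matched against a pre-specified, theory-based
# codebook. Codebook terms and excerpts below are hypothetical, not study data.

from collections import defaultdict

codebook = {
    "relative_advantage": ["better than", "advantage", "improves on"],
    "compatibility": ["fits our workflow", "consistent with", "matches our values"],
    "complexity": ["too complicated", "hard to use", "confusing"],
}

excerpts = [
    "Honestly, the new form feels too complicated to fit into a 20-minute visit.",
    "It fits our workflow because we already review scores before the visit.",
]

def flag_excerpts(excerpts, codebook):
    """Return code -> list of excerpts that a human coder should review."""
    hits = defaultdict(list)
    for text in excerpts:
        lowered = text.lower()
        for code, terms in codebook.items():
            if any(term in lowered for term in terms):
                hits[code].append(text)
    return dict(hits)

if __name__ == "__main__":
    for code, quotes in flag_excerpts(excerpts, codebook).items():
        print(f"{code}: {len(quotes)} excerpt(s) flagged for analyst review")
```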

1.2. Four stages of formative evaluation (FE)

Stetler et al. (2006) describe four stages of formative evaluation (FE): (1) developmental FE (pre-implementation interviews with identified implementation champions, providers, patients and family members), (2) implementation-focused FE (tracking attendance at specific implementation steps, such as group meetings; use of predetermined implementation strategies by local champions; dashboards of patients' utilization of the evidence-based practice or innovation, or providers' delivery of it), (3) progress-focused FE (tracking rates of patients' health and process outcomes), and (4) interpretive FE (post-implementation surveys and interviews with implementation champions, providers, patients and family members). Studies may involve one or more of these stages (Stetler et al., 2006; Bokhour et al., 2015; Yakovchenko et al., 2019) or all four stages (Hagedorn et al., 2016).

1.2.1. Developmental FE

Developmental FE, which occurs during the first stage of a project, is focused on enhancing the likelihood of success in the particular setting of a project, and involves collection of data on four potential influences: (a) the actual degree of less-than-best practice; (b) determinants of current practice; (c) potential barriers and facilitators to practice change and to adoption of the evidence-based practice (EBP) or innovation; and (d) the feasibility of using a pre-conceived implementation strategy to increase uptake of the EBP or innovation (Stetler et al., 2006). Activity at this stage may involve assessment of known factors related to the targeted uptake of evidence, such as perceptions regarding the evidence, attributes of the proposed innovation, or leadership and administrative commitment (Palinkas et al., 2016; Aarons and Sommerfeld, 2012). Examples of quantitative formative diagnostic tools include organizational readiness and attitude and belief surveys (Shea et al., 2014; Aarons et al., 2014; Aarons, 2004). Such developmental FE data enable researchers to understand potential problems and, where possible, overcome them prior to initiation of interventions in study sites through identification and use of robust implementation strategies. In a developmental FE aimed at identifying and specifying implementation strategies to address primary care providers' reluctance regarding hepatitis C screening and testing, data from interviews with PCPs suggested that a multi-component intervention involving awareness and education, feedback of performance data, clinical reminder updates, and leadership support would both address a significant clinical need and be acceptable and feasible to primary care providers (Yakovchenko et al., 2019).

1.2.2. Implementation-focused FE

Implementation-focused FE occurs throughout implementation of the project plan. It focuses on discrepancies between the implementation plan and its operationalization, in order to document actual implementation processes, evaluate and measure actual exposure to the intervention, and identify influences that may not have been anticipated during the developmental FE (Stetler et al., 2006). Implementation-focused FE is sometimes referred to as process evaluation, as both of these evaluation approaches are meant to assess fidelity and quality of implementation, clarify causal mechanisms and identify contextual factors associated with variation in outcomes (Craig et al., 2008; Waters et al., 2011). Implementation-focused formative data enable researchers to document, evaluate and/or track more fully the major barriers to goal achievement and what it actually takes to achieve change, including the timing of project activities (Stetler et al., 2006). Bokhour and colleagues used implementation-focused FE, namely qualitative interviews with providers, nurse care managers and social workers, to examine improvements in HIV testing across 15 primary care clinics. The project was guided by the Promoting Action on Research Implementation in Health Services (PARIHS) framework (Kitson et al., 2008), which states that successful implementation depends on (1) evidence, defined here as providers' perceptions of the strength of the evidence for the evidence-based practice (in this case, HIV testing), (2) context, which involves aspects of culture, leadership and evaluation in the local setting, and (3) facilitation, which describes how the evidence-based practice is introduced in a setting. Bokhour and colleagues rated each of the 15 sites as high, medium or low on dimensions of perceived evidence and the HIV testing context. Sites where HIV testing improvements were most pronounced were also those where providers gave the highest ratings to the role of evidence in facilitating these testing improvements. Contextual barriers, such as insufficient resources and provider discomfort with HIV testing, were associated with lower rates of improvement in HIV testing (Bokhour et al., 2015).
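To make this kind of site-level comparison concrete, the sketch below cross-tabulates high/medium/low ratings on two PARIHS dimensions against an implementation outcome. The sites, ratings, and figures are invented for illustration and are not data from Bokhour et al. (2015).

```python
# Illustrative sketch only: cross-tabulating qualitative site ratings from an
# implementation-focused FE against an implementation outcome. All values are
# hypothetical examples, not data from the Bokhour et al. (2015) study.

sites = [
    {"site": "Clinic A", "evidence": "high", "context": "high", "testing_change_pct": 18.0},
    {"site": "Clinic B", "evidence": "medium", "context": "low", "testing_change_pct": 6.5},
    {"site": "Clinic C", "evidence": "low", "context": "medium", "testing_change_pct": 2.1},
]

def mean_change_by_rating(sites, dimension):
    """Average outcome by rating level for one dimension (evidence or context)."""
    grouped = {}
    for s in sites:
        grouped.setdefault(s[dimension], []).append(s["testing_change_pct"])
    return {level: sum(vals) / len(vals) for level, vals in grouped.items()}

print(mean_change_by_rating(sites, "evidence"))
print(mean_change_by_rating(sites, "context"))
```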

1.2.3. Progress-focused FE

The purpose of progress-focused formative evaluation is to monitor progress towards implementation goals and performance targets by identifying new challenges that may have emerged in the implementation process, and which can then be addressed through feedback to the implementation team. Steps can then be taken to optimize the intervention or reinforce progress via positive feedback to key stakeholders (Stetler et al., 2006). These steps may include refining implementation strategies or introducing new ones to ensure that implementation goals are met. In an intervention designed to decrease alcohol use through increasing access to pharmacotherapy (Hagedorn et al., 2016), primary care providers were trained to serve as local implementation/clinical champions and to receive external facilitation, two specific implementation strategies (Powell et al., 2015). Primary care providers received access to consultation from local and national clinical champions, educational materials, and a dashboard of patients with alcohol use disorder (AUD) on their caseloads for case identification (Hagedorn et al., 2016). The implementation team collected progress-focused FE data from a random sample of patients' electronic health records across three sites (n = 455) to examine the progress being made towards pharmacotherapy implementation. Chart reviews indicated that although two-thirds of patients (n = 299) had a documented discussion of their alcohol use with their provider, only 7.5% (n = 22) had a documented discussion about pharmacotherapy specifically, and only 5 of these patients received the pharmacotherapy treatment (Haley et al., 2019). Chart review data were combined with prior, developmental FE interview data from patients to provide a more robust examination of the role of patient preferences in accessing AUD pharmacotherapy. Feeding back this information through such channels as the dashboard developed for the implementation study is critical for helping providers recognize the disconnect between their implementation efforts and the reality of patients' access to treatment.

1.2.4. Interpretive FE

Interpretive formative evaluation uses the information collected from all of the other FE stages, as well as information collected at the end of the project (summative evaluation) regarding the experiences of participants, to clarify the meaning of successful or failed implementation and to enhance understanding of an implementation strategy's impact (Stetler et al., 2006). Additionally, information can be obtained on stakeholder views regarding (a) the usefulness or value of each intervention, (b) satisfaction or dissatisfaction with various aspects of the process, (c) reasons for their own program-related action or inaction, (d) additional barriers and facilitators, and (e) recommendations for further refinements (Stetler et al., 2006). Following recruitment to an RCT testing two interventions for the treatment of posttraumatic stress and patient follow-up within the trial, Elwy et al. (2019) conducted online surveys with 69 mental health providers about their advice-seeking networks and examined whether or not these networks were associated with providers' referral behavior to the trial, a key factor for the success of the trial. Qualitative interviews were conducted with a subset of these providers to further explore the ways in which providers seek evidence about new mental health treatments.
Social network analyses indicated that providers deemed highly central in their network (e.g., other providers often seek their advice and input about clinical situations) were more likely to buy in to the intervention trial and refer patients to the study testing these two treatments. Qualitative interviews further indicated that providers want to seek out opinions on clinical treatments from those whom they trust, but finding the time for these advice-seeking activities was difficult. Additionally, evidence alone was not necessarily enough for providers' decision-making about a treatment; many indicated that conversations with trusted others were more helpful in determining which evidence-based treatments to use with their patients.

Fig. 1. Healing Encounters and Attitudes Lists (HEAL) short forms.

Thus, through this mixed methods design, the research team was able to determine that the success of the trial was due to a small but significant number of mental health providers referring patients to the treatment study, and that further efforts to implement these PTSD treatments in usual care would need to involve these highly trusted, credible, and central providers as part of the implementation efforts moving forward (Elwy et al., 2019). Interpretive FE data could become part of a case study, in which rival explanations for implementation success could be compared across different sites in a multiple case study format (Yin, 2017; also see article by Kim and colleagues in this special issue), and which could contribute to further theory development about the role of evidence and trust in implementation success.
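As a minimal illustration of the centrality idea underlying this interpretive FE, the sketch below computes in-degree centrality on a small, invented advice-seeking network using the networkx library. The providers and ties are hypothetical and are not the Elwy et al. (2019) network.

```python
# Illustrative sketch only: in-degree centrality in a directed "advice-seeking"
# network, where an edge A -> B means provider A seeks advice from provider B.
# Providers and edges are toy data, not the study's social network data.

import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("prov_01", "prov_07"),  # provider 01 seeks advice from provider 07
    ("prov_02", "prov_07"),
    ("prov_03", "prov_07"),
    ("prov_04", "prov_05"),
    ("prov_06", "prov_05"),
])

# Providers with high in-degree centrality are frequently sought out for advice;
# the interpretive FE linked this kind of centrality to trial buy-in and referrals.
centrality = nx.in_degree_centrality(G)
most_central = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:3]
print(most_central)
```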


2. Developmental FE: a case example of implementing a clinical innovation

There is expanding evidence that contextual, patient-centered factors are important for any treatment regimen (Greco et al., 2013). Our research team is conducting a project on measuring and using context factors across a network of seven academic and community pain specialty practices at an academic medical institution, through implementation of the Healing Encounters and Attitudes Lists (HEAL) scales (Fig. 1) within the clinics' patient-reported outcomes platform in the electronic medical record, and by encouraging providers to use responses to the scales during conversations about pain management with their patients. These clinics offer outpatient evaluation and treatment of the most complex pain problems, including headaches, migraines, fibromyalgia, lower back pain, neck pain, osteoarthritis and post-amputation pain. Attending physicians, advanced practice providers, psychologists, psychiatrists, neurologists, an addiction social worker, physical and occupational therapists, and fellows and residents provide care in these clinics. In previous research, these HEAL scales predicted pain interference and pain intensity scores over time (Greco et al., 2016, 2017; Slutsky et al., 2017). The HEAL scales consist of six separate questionnaires: Treatment Expectancy (TEX), Patient-Provider Connection (PPC), Health Care Environment perceptions (HCE), Positive Outlook (POS), Spirituality (SPT), and Attitudes toward Complementary and Alternative Medicine (CAM).

After a first visit for chronic pain treatment, patients face the common dilemma of accepting a multimodal treatment plan that may not include opioids versus not returning to the provider and either receiving no further care or seeking other pain treatments, such as opioids. As such, the practice of comprehensive, multimodal chronic pain treatment (Heres et al., 2018; IOM, 2011) is often thwarted by a lack of patient engagement and a lack of acceptance of the treatment plan, which is intended to be mutually agreed upon between patient and provider. The implementation of HEAL assessments addresses this problem by bringing the patient and provider together to discuss issues relevant to the patient-provider relationship and course of treatment, such as expectations for treatment success. HEAL scale scores and discussions could help providers and patients identify approaches that may best meet patients' needs and preferences, and may provide a starting point for conversations that can help to overcome cultural and educational barriers to positive treatment outcomes. HEAL scores and discussions can also identify the need for clarity regarding various treatment options: how they work, their side effects, and the commitment required to fully engage and benefit.

2.1. Using theory to guide formative evaluation methods

In this special issue, Damschroder discusses the role of theories, frameworks, and models in implementation science, which are critical in identifying which healthcare factors (patient, family, provider, organizational) are relevant to specific implementation efforts (Moore et al., 2015). One theory, the Theory of Diffusion of Innovations (Rogers, 1995), stresses that it is often the interpersonal persuasion of trusted others which finally convinces individuals to adopt a behavior. This suggests that providers will learn from trusted others the importance of measuring and incorporating patients' perspectives into clinical conversations. Thus, our examination of the potential implementation barriers of the HEAL scales is guided by the Theory of Diffusion of Innovations' principles of the perceived qualities of the intervention. Widespread uptake of the HEAL measures in clinical settings depends upon the interaction between features of the intervention (electronic data collection through the patient-reported outcome system in the EMR), the adopters (pain clinic providers, staff and patients), and the context (pain clinics). Potential adopters of the HEAL scales will adopt them if they perceive these qualities of the intervention: (1) they perceive a relative advantage of using HEAL scales over their usual practice; (2) HEAL scales are compatible with their and their practice's perceived needs, values and norms; (3) they perceive the complexity of using HEAL scales to be low; (4) HEAL scales are amenable to being tried out on a limited basis; (5) HEAL scale results are observable; and (6) HEAL scales can be adapted for local circumstances (e.g., choosing to use only one or two HEAL scales). We assessed these six aspects of using HEAL measures in clinical encounters during our developmental FE, consisting of qualitative interviews with 25 clinical providers and staff and 13 patients, to understand how different participants and practices perceive HEAL qualities, as well as specific information about the users (patients, PCPs, nurses, etc.) and the context (type of clinic, such as multidisciplinary or physician-based). We found mixed evidence of the importance of five perceived qualities of the intervention: relative advantage, compatibility with perceived needs, values and norms, complexity of use, amenability to being tried out (trialability), and the potential for adaptation. No patients or providers discussed the importance of observing HEAL measure use in the clinic as a perceived quality of the innovation, and not all of the five perceived qualities of the intervention were applicable to all HEAL scales.

2.2. Developmental FE study example: using results to shape implementation strategies

Once we identified patients' and providers' concerns about using the various HEAL measures in pain clinic settings, as guided by the Theory of Diffusion of Innovation, we mapped these concerns to the list of 73 evidence-based implementation strategies from the Expert Recommendations for Implementing Change (ERIC) group (Powell et al., 2015) to determine which of these may be most appropriate to use in the pain clinics to increase uptake of and buy-in for using the HEAL measures during usual clinic practice. Implementation strategies are defined as “methods or techniques used to enhance the adoption, implementation, and sustainability of a clinical program or practice” (Proctor et al., 2013). Building on patients' and providers' concerns about the perceived qualities of using HEAL measures in the pain clinics, the team identified specific implementation strategies to enhance the adoption of HEAL at the time of rollout within the pain clinic electronic patient-reported outcome system. This mapping of concerns to strategies occurred through team meetings and a discussion and consensus-building approach similar to that which occurs in qualitative data analysis (Miles, Huberman and Saldana, 2013). In Table 1, we present example patient and provider concerns about using the HEAL measures in the pain clinics that were collected from the developmental FE, the specific construct from the Theory of Diffusion of Innovation that each concern represents, the ERIC implementation strategy our team identified to address these concerns, and the specific action or tool developed by the research team for implementation prior to HEAL measure rollout in the patient-reported outcome system across the seven pain clinics. Some of the implementation strategies we have identified, such as changing the record system, are a direct result of patients and providers telling our team that they do not like certain HEAL measures, or that they felt the timing of HEAL measure use was either critical prior to the first pain clinic treatment visit or only applicable on return visits. As a result, the implementation team was able to adapt which HEAL scales are administered at each patient visit. Other implementation strategies, such as holding educational meetings, came about once we learned of the disconnect among providers about whether or not some of the HEAL measures would be useful to their clinic work. Holding a pain clinic-wide retreat allowed de-identified data from the formative evaluation interviews with both patients and providers to be fed back to providers, helping providers understand that even when they might think a HEAL measure is not useful to them, their patients think it would be helpful to talk about during treatment discussions.

Developing and distributing educational materials, two additional implementation strategies, led our team to create specific Frequently Asked Question (FAQ) sheets to address specific patient concerns, which are now being distributed in person by the front desk staff. Short videos (2–3 min) featuring the pain clinic director explaining why HEAL scale scores are important for providers to use during their conversations with patients about pain treatment have been distributed via email and placed on YouTube for continued use.

3. Discussion

Formative evaluation is a key component of implementation success. Studies may use one form of FE, or all four evaluation methods, as described in this paper. Using developmental FE methods allowed our team to identify patients' and providers' concerns about using HEAL scales prior to and during pain clinic treatment, and to identify specific implementation strategies our team could use to address these concerns prior to clinic-wide implementation of the HEAL measures. None of this decision-making would have been possible without the results of the developmental FE. However, as the developmental FE presented here as a case example is currently ongoing, the impact of the FE results on implementation outcomes is not yet known. Our future formative evaluation efforts will go beyond this developmental stage and will involve an implementation-focused FE involving interviews with patients, staff and providers at the seven clinics, to assess their perspectives on completing and using the HEAL measures during pain clinic appointments. From this implementation-focused FE, the team may identify additional barriers to HEAL use, which may require the implementation of additional strategies to encourage uptake. The usefulness of our implementation strategies will be assessed through a progress-focused FE in which the team will hold group meetings with providers and staff to better understand which patients are completing HEAL and which providers have used HEAL in conversation with patients. An interpretive FE will take place at the conclusion of the study, consisting of final qualitative interviews with providers, staff and patients, to determine the success of the HEAL implementation and to understand the barriers that need further attention before widespread scale-up. While these activities required additional time and effort prior to the actual implementation study, we expect them to pay off in the long term by ensuring that our team has uncovered potential roadblocks and barriers to successful implementation ahead of time and developed strategies that can eliminate these before they occur.

The developmental FE methods and the mapping of FE results to published implementation strategies may appear specific to the project discussed here, but in reality these methods and this mapping can be applied to any implementation study. A central goal of FE is to develop a blueprint for implementation strategies to be used in an evolving study. An implementation blueprint is an overarching ERIC-identified implementation strategy, defined as a document that includes all goals and strategies of the implementation effort (Powell et al., 2015). This blueprint should include (1) the aim and purpose of the implementation; (2) the scope of the change (for example, what organizational units are affected); (3) the timeframe and milestones; and (4) appropriate performance and progress measures (Powell et al., 2015).
The implementation blueprint will be updated over time, as additional implementation efforts are undertaken and subsequent data on their effectiveness become known. Within this implementation blueprint, it is also important to specify the actors of each of our implementation efforts (clinic providers, staff, leadership), the actions they undertake (educational meetings, outreach, distribution of materials), the targets of their actions (such as patients, family members, clinic staff), the timing of each of these actions (prior to implementation rollout, during an implementation effort), the dose of the actions (once a week, once a month), and the justification for each of these (the evidence from this study, as well as evidence previously published) (Proctor et al., 2013).
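One way to keep such specifications consistent is to record each strategy as a structured entry with the elements listed above (actor, action, target, timing, dose, justification). The sketch below is an illustrative, assumed data structure, not a published schema; the field names and example values are ours and only paraphrase strategies described in this project.

```python
# Illustrative sketch only: recording implementation strategies with the
# reporting elements named above, in the spirit of Proctor et al. (2013).
# Field names, the blueprint fields, and the example entry are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class StrategyEntry:
    name: str           # ERIC strategy label
    actor: str          # who delivers the strategy
    action: str         # what they do
    target: str         # who or what the action is aimed at
    timing: str         # when in the implementation effort it occurs
    dose: str           # how often / how much
    justification: str  # evidence supporting the choice

@dataclass
class ImplementationBlueprint:
    aim: str
    scope: str
    timeframe: str
    progress_measures: List[str]
    strategies: List[StrategyEntry] = field(default_factory=list)

blueprint = ImplementationBlueprint(
    aim="Increase provider use of HEAL scores in pain treatment conversations",
    scope="Seven academic and community pain specialty clinics",
    timeframe="Rollout plus follow-up period (hypothetical)",
    progress_measures=["% of visits with HEAL completed", "% of providers discussing HEAL"],
)
blueprint.strategies.append(StrategyEntry(
    name="Distribute educational materials",
    actor="Research team and front desk staff",
    action="Hand out FAQ sheets addressing patient concerns",
    target="Patients attending pain clinic visits",
    timing="Prior to implementation rollout and at visits",
    dose="Every new patient visit (hypothetical)",
    justification="Developmental FE interviews identified patient concerns about HEAL items",
))
print(len(blueprint.strategies), "strategy specified")
```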

Table 1
Implementation strategies identified to address potential implementation barriers: examples.

Example 1
- Patient/Provider Concern from FE Interviews: HEAL questions are repetitive with other questions already asked by providers.
- Theory of Diffusion of Innovation Construct: Relative advantage (low).
- ERIC Implementation Strategy: Change record systems.
- Definition of Implementation Strategy: Change records systems to allow better assessment of implementation or clinical outcomes.
- Specific Pain Clinic Tool or Action Implemented: Removed some HEAL scale questions prior to being programmed into the clinic's electronic medical record.

Example 2
- Patient/Provider Concern from FE Interviews: HEAL questions will not be discussed at the visit or change what providers do.
- Theory of Diffusion of Innovation Construct: Compatibility with perceived needs, values and norms (low).
- ERIC Implementation Strategies: Conduct educational meetings; Conduct ongoing training; Develop educational materials; Distribute educational materials.
- Definitions of Implementation Strategies: Hold meetings targeted toward different stakeholder groups (e.g., providers, administrators, other organizational stakeholders, and community, patient/consumer, and family stakeholders) to teach them about the clinical innovation; Plan for and conduct training in the clinical innovation in an ongoing way; Develop and format manuals, toolkits, and other supporting materials in ways that make it easier for stakeholders to learn about the innovation and for clinicians to learn how to deliver the clinical innovation; Distribute educational materials (including guidelines, manuals, and toolkits) in person, by mail, and/or electronically.
- Specific Pain Clinic Tools or Actions Implemented: Retreat held with providers to discuss importance of HEAL; FAQ sheet created to address patients' concerns, to be distributed by clinic staff; Short videos created by clinic leadership describing evidence and need for HEAL scales in conversations with patients, distributed to clinical providers and staff.
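The concern-to-strategy mapping in Table 1 can also be kept in a simple structured form so that new FE findings can be appended and later folded into the implementation blueprint. The sketch below mirrors the two example rows above and is illustrative only; the record format is an assumption, not part of the ERIC compilation.

```python
# Illustrative sketch only: Table 1's concern -> construct -> strategy -> action
# mapping kept as simple records. Content paraphrases the example rows above.

table1_mapping = [
    {
        "concern": "HEAL questions are repetitive with questions already asked by providers",
        "doi_construct": "Relative advantage (low)",
        "eric_strategy": "Change record systems",
        "action": "Removed some HEAL items before programming into the clinic EMR",
    },
    {
        "concern": "HEAL questions will not be discussed at the visit or change what providers do",
        "doi_construct": "Compatibility with perceived needs, values and norms (low)",
        "eric_strategy": "Conduct educational meetings; distribute educational materials",
        "action": "Clinic-wide retreat, FAQ sheets, and short videos from clinic leadership",
    },
]

for row in table1_mapping:
    print(f"{row['doi_construct']:55s} -> {row['eric_strategy']}")
```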


The goal of specifying these important aspects of any implementation is to allow for replication of our efforts in other contexts and ecological systems, and to ensure that implementation science is driven by evidence. Formative evaluation is the first step in driving evidence-based implementation, and as illustrated here, is critical for identifying implementation barriers. Successful implementation is dependent on seeking stakeholder input, and formative evaluation is the method for doing just that.

Acknowledgements

Research reported in this publication was partially funded through a Patient-Centered Outcomes Research Institute (PCORI) Award (DI-2017C2-7558) to Drs. Carol Greco and Ajay Wasan, at the University of Pittsburgh School of Medicine. The views in this article are solely the responsibility of the authors and do not necessarily represent the views of the Patient-Centered Outcomes Research Institute (PCORI), its Board of Governors or Methodology Committee. Dr. Elwy is an investigator with the Department of Veterans Affairs, Health Services Research and Development Service. The views expressed in this article are those of the authors and do not necessarily represent those of the Department of Veterans Affairs or the United States Government.

Supplementary materials

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.psychres.2019.112532.

References

Aarons, G.A., 2004. Mental health provider attitudes toward adoption of evidence-based practice: the Evidence-Based Practice Attitude Scale. Ment. Health Serv. Res. 6, 61–74.
Aarons, G.A., Ehrhart, M.G., Farahnak, L.R., 2014. The Implementation Leadership Scale (ILS): development of a brief measure of unit level implementation leadership. Implement. Sci. 9, 45. https://doi.org/10.1186/1748-5908-9-45.
Aarons, G.A., Sommerfeld, D.H., 2012. Leadership, innovation climate, and attitudes towards evidence-based practice during a statewide implementation. J. Am. Acad. Child Adolesc. Psychiatry 51, 423–431. https://doi.org/10.1016/j.jaac.2012.01.018.
Bauer, M.S., Damschroder, L., Hagedorn, H., Smith, J., Kilbourne, A.M., 2015. An introduction to implementation science for the non-specialist. BMC Psychol. 3, 32. https://doi.org/10.1186/s40359-015-0089-9.
Bokhour, B.G., Saifu, H., Goetz, M.B., Fix, G.M., Burgess, J., Fletcher, M.D., Knapp, H., Asch, S.M., 2015. The role of evidence and context for implementing a multimodal intervention to increase HIV testing. Implement. Sci. 10, 22. https://doi.org/10.1186/s13012-015-0214-4.
Cabana, M., Rand, C., Powe, N.R., Wu, A.W., Wilson, M.H., Abboud, P.-A.C., Rubin, H.R., 1999. Why don't physicians follow clinical practice guidelines? JAMA 282, 1458–1465.
Charmaz, K., 2014. Constructing Grounded Theory, 2nd Ed. Sage.
Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I., Petticrew, M., 2008. Developing and evaluating complex interventions: new guidance. BMJ 337, a1655. https://doi.org/10.1136/bmj.a1655.
Curran, G.M., Bauer, M., Mittman, B., Pyne, J.M., Stetler, C., 2012. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med. Care 50, 217–226. https://doi.org/10.1097/MLR.0b013e3182408812.
Elwy, A.R., Kim, B., Plumb, D.N., Wang, S., Gifford, A.L., Asch, S.M., Bormann, J.E., Mittman, B.S., Valente, T.W., Palinkas, L.A., 2019. The connectedness of mental health providers referring patients to a treatment study for posttraumatic stress: a social network study. Adm. Policy Ment. Health. https://doi.org/10.1007/s10488-019-00945-y. Published online June 24.
Fetters, M.D., Curry, L.A., Creswell, J.W., 2013. Achieving integration in mixed methods designs—principles and practices. Health Serv. Res. 48, 2134–2156.
Greco, C.M., Glick, R.M., Morone, N.E., Schneider, M.J., 2013. Addressing the ‘it is just placebo’ pitfall in CAM: methodology of a project to develop patient-reported measures of nonspecific factors in healing. Evid. Based Complement. Altern. Med., 613797. https://doi.org/10.1155/2013/613797.
Greco, C.M., Yu, L., Johnston, K.L., Dodds, N.E., Morone, N.E., Glick, R.M., Schneider, M.J., Klem, M.L., McFarland, C.E., Lawrence, S., Colditz, J., Maihoefer, C.C., Jonas, W.B., Ryan, N.D., Pilkonis, P.A., 2016. Measuring nonspecific factors in treatment: item banks that assess the healthcare experience and attitudes from the patient's perspective. Qual. Life Res. 25, 1625–1634. https://doi.org/10.1007/s11136-015-1178-1.
Greco, C.M., Yu, L., Dodds, N.E., Johnston, K.L., Slutsky, J., McFarland, C.E., Lawrence, S., Morone, N., Schneider, M., Glick, R., Ryan, N., Pilkonis, P.A., 2017. Nonspecific factors in complementary/alternative medicine (CAM) and conventional treatments: predictive validity of the Healing Encounters and Attitudes Lists (HEAL) in persons with ongoing pain. J. Pain 18, S76. https://doi.org/10.1016/j.jpain.2017.02.256.
Grol, R., 1997. Beliefs and evidence in changing clinical practice. BMJ 315, 418–421.
Grol, R., Grimshaw, J., 2003. From best evidence to best practice: effective implementation of change in patients’ care. Lancet 362, 1225–1230.
Hagedorn, H.J., Brown, R., Dawes, M., Dieperink, E., Myrick, D.H., Oliva, E.M., Wagner, T.H., Wisdom, J.P., Harris, A.H.S., 2016. Enhancing access to alcohol use disorder pharmacotherapy and treatment in primary care settings: ADaPT-PC. Implement. Sci. 11, 64.
Haley, S.J., Pinsker, E.A., Gerould, H., Wisdom, J.P., Hagedorn, H.J., 2019. Patient perspectives on alcohol use disorder pharmacotherapy and integration of treatment into primary care settings. Substance Abuse. https://doi.org/10.1080/08897077.2019.1576089. Published online March 4, 2019.
Heres, E.K., Itskevich, D., Wasan, A.D., 2018. Operationalizing multidisciplinary assessment and treatment as a quality metric for interventional pain practices. Pain Med. 19, 910–913. https://doi.org/10.1093/pm/pnx079.
Hsieh, H.-F., Shannon, S.E., 2005. Three approaches to qualitative content analysis. Qual. Health Res. 15, 1277–1288.
Institute of Medicine, 2011. Relieving Pain in America: A Blueprint for Transforming Prevention, Care, Education, and Research. Committee on Advancing Pain Research, Care, and Education. National Academies Press.
Kitson, A.L., Rycroft-Malone, J., Harvey, G., McCormack, B., Seers, K., Titchen, A., 2008. Evaluating the successful implementation of evidence into practice using the PARIHS framework: theoretical and practical challenges. Implement. Sci. 3, 1–12.
Miles, M.B., Huberman, A.M., Saldana, J., 2013. Qualitative Data Analysis: A Methods Sourcebook, 3rd Ed. Sage Publications, Inc.
Moore, G.F., Audrey, S., Barker, M., Bond, L., Bonell, C., Hardeman, W., Moore, L., O'Cathain, A., Tinati, T., Wight, D., Baird, J., 2015. Process evaluation of complex interventions: Medical Research Council guidance. BMJ 350, h1258. https://doi.org/10.1136/bmj.h1258.
Palinkas, L.A., Cooper, B.R., 2018. Mixed methods evaluation in dissemination and implementation science. In: Brownson, R.C., Colditz, G.A., Proctor, E.K. (Eds.), Dissemination and Implementation Research in Health: Translating Science to Practice, 2nd edition. Oxford University Press.
Palinkas, L.A., Garcia, A.R., Aarons, G.A., Finno-Velasquez, M., Holloway, I.W., Mackie, T.I., Leslie, L.K., Chamberlain, P., 2016. Measuring use of research evidence: the Structured Interview for Evidence Use. Res. Soc. Work Pract. 26, 550–564. https://doi.org/10.1177/1049731514560413.
Palinkas, L.A., Mendon, S.J., Hamilton, A.B., 2019. Innovations in mixed methods evaluations. Annu. Rev. Public Health 40, 423–442. https://doi.org/10.1146/annurev-publhealth-040218-044215.
Powell, B.J., Waltz, T.J., Chinman, M.J., Damschroder, L.J., Smith, J.L., Matthieu, M.M., Proctor, E.K., Kirchner, J.E., 2015. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement. Sci. 10, 21. https://doi.org/10.1186/s13012-015-0209-1.
Proctor, E.K., Powell, B.J., McMillen, J.C., 2013. Implementation strategies: recommendations for specifying and reporting. Implement. Sci. 8, 139.
Proctor, E., Silmere, H., Raghavan, R., Hovmand, P., Aarons, G., Bunger, A., Griffey, R., Hensley, M., 2011. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm. Policy Ment. Health 38, 65–76. https://doi.org/10.1007/s10488-010-0319-7.
Rogers, E.M., 1995. Diffusion of Innovations, 4th Edition. The Free Press, New York, NY.
Shea, C.M., Jacobs, S.R., Esserman, D.A., Bruce, K., Weiner, B.J., 2014. Organizational readiness for implementing change: a psychometric assessment of a new measure. Implement. Sci. 9, 7.
Slutsky, J., Greco, C., McFarland, C., Dodds, N., Johnston, K., Glick, R., Schneider, M., Janjic, J., Kelly, N., Morone, N., Adams, C., Lawrence, S., Pilkonis, P., 2017. Measuring clarity, relevance, and usefulness of HEAL and PROMIS measures in pain treatment through interviews with patients and their healthcare providers. J. Pain 18, S63. https://doi.org/10.1016/j.jpain.2017.02.329.
Stetler, C.B., Legro, M.W., Wallace, C.M., Bowman, C., Guihan, M., Hagedorn, H., Kimmel, B., Sharp, N.D., Smith, J.L., 2006. The role of formative evaluation in implementation research and the QUERI experience. J. Gen. Intern. Med. 21, 1–8.
Waters, E., Hall, B.J., Armstrong, R., 2011. Essential components of public health evidence reviews: capturing intervention complexity, implementation, economics and equity. J. Public Health 33, 462–465.
Yakovchenko, V., Bolton, R., Drainoni, M.L., Gifford, A.L., 2019. Primary care provider perceptions and experiences of implementing hepatitis C virus birth cohort testing: a qualitative formative evaluation. BMC Health Serv. Res. 19, 236. https://doi.org/10.1186/s12913-019-4043-z.
Yin, R.K., 2017. Case Study Research and Applications: Design and Methods, 6th Ed. Sage.
