Commentary
Minimal, negligible and negligent interventions
Penelope Hawe a, b, *
a Menzies Centre for Health Policy, University of Sydney, NSW, Australia
b The Australian Prevention Partnership Centre, Australia
Article history: Available online 16 May 2015
Keywords: Complex interventions

Abstract
Many interventions are not disruptive enough of the patterns that entrench poor health and health inequities. Ways forward may require a break with tradition to embrace system-focussed theory, complex logic modelling, and ways of funding and responding to problems that address the competition of ideas and needs.
By 2003, enough evidence had accumulated about how to prevent chronic disease that Yarnall and her colleagues deduced that it would take a primary care doctor working with an average patient population 7.4 hours every day if every single opportunity was taken to counsel, advise or guide patients according to practice recommendations (Yarnall et al., 2003). This, of course, left no time for treatment. It was no surprise, therefore, that the idea of "minimal interventions" took hold: the idea that some small conversation or series of strategies could have a desirable impact on patient well-being, even if the effect was not as large or as long lasting as a more intensive intervention (Russell et al., 1979). This idea developed in fields such as tobacco cessation, physical activity and alcohol, and it remains of high interest. It represents a case of interventions being made deliberately modest in order to match the realities of time-pressured contexts. The chance that they would be executed frequently and reliably is considered higher if the interventions are minimally disruptive.

Now fast forward to present-day conversations about "complex interventions" to tackle wicked or intractable problems (Boardman and Sauser, 2008). Here, by contrast, it could be argued that the interventions are too conservative and not disruptive enough. Take, for example, the recent account given by Okwaro and his colleagues of a health care improvement intervention in rural Uganda designed to impact on malaria-related health indicators (Okwaro et al., 2015). Although the formative research that preceded the intervention identified many ways to make local health care improvements, the authors state that they chose four components because they met the project focus, could be clearly
defined and acted upon, and had corresponding outcomes that could be pre-specified and measured through a cluster randomised controlled trial. These decisions, they state, were informed by the Medical Research Council's guidance on complex interventions (Craig et al., 2008) and other advice consistent with maintaining definable, replicable components with pre-specified outcomes (Sridharan and Nakaima, 2011; Campbell et al., 2000). The authors proceeded with the trial.

When the subsequent qualitative interviews found that the intervention was insufficient to bring about the desired changes in health care quality, the authors, awkwardly, found themselves to be part of an approach to global health research that they ostensibly denounce (based on the commentary in their introduction). They refer here to "factorial" approaches to disease, where social and cultural aspects of health are reduced to discrete and quantifiable factors. This type of thinking, they say, only aggravates the "projectification" of solutions, i.e., myriads of individual projects focussing on just a small piece of a larger picture.

The health outcomes of the trial have not yet been reported. But based on the evidence so far, the authors appear to be on track for a case study of a negligible intervention; that is, one unlikely to make a difference to the larger, system-level factors driving poor health that they identified before they started. But could this have been predicted at the outset? Is the intervention not just likely to be negligible but also negligent? That is, was there a duty of care to design an intervention of likely effectiveness? Was there a failure to exercise that care? And was there damage as a result?

Full credit to the authors for their transparency, because some insights into the logic of their choice of intervention strategy are available to date, along with the results of prior community consultations (Chandler et al., 2013a, 2013b; Staedke et al., 2013). This possibly places the reader in the position to judge their
decisions. And perhaps some of us would be more inclined to do so, were we not so cognisant of having been in similar vexed situations ourselves (Riley et al., 2005).

But what guidance is there for any investigator in a similar situation who is keen to avoid a negligible intervention? The UK's Medical Research Council (MRC) guidance on complex interventions is clear that an early task must be "to develop a theoretical understanding of the likely process of change by drawing on existing evidence and theory, supplemented if necessary by new primary research" (Craig et al., 2008, p. 3). The MRC's more recent guidance on process evaluation reinforces this point (Moore et al., 2015). But the choice of theory is rightly left to the expertise of the investigators, their situation, their research question, and their disciplinary perspective. This is an important point. The MRC guidance on complex interventions is not a substitute for skills and experience in intervention design and delivery, any more than a clinical practice guideline is a substitute for clinical training.

Intervention designers can draw ideas from a range of places, however. Dominant in public health circles and health services research is behavioural science/health psychology, where there is increasing consensus about the key functions and likely components of interventions designed to change specified behaviour patterns (Michie et al., 2011). For example, a recently published taxonomy involved 54 experts reviewing 93 behaviour change techniques, allocating them into 16 clusters that characterise the active content of complex interventions as an aid to design, development, replication and implementation (Michie et al., 2013). The precision of the intervention logic comes at a cost, however. Specific interventions are needed for each specific problem, potentially making these approaches less useful for crowded contexts.

Alternatively, researchers can look elsewhere for theorising and intervening in the larger structures in which behaviour is placed. Community psychology, for example, not only represents a distinctively different disciplinary perspective on behaviour-in-context to behavioural health psychology, it represents a set of values and a way of working with communities that commits to developing ongoing resources for sustainability and future problem solving (Trickett, 2009). The approach offers a better ecological fit for interventions, that is, interventions more suited to context because they are driven by local actors (Miller and Shinn, 2005). Multiple and multiplied effects ensue (Foster-Fishman and Behrens, 2007). However, the specific effects are less predictable (Hawe et al., 2015), and hence less traditional funding and project management models may need to be devised (Kania and Kramer, 2013).

Context-level intervention design moves beyond more customary methods (e.g., educational workshops) by intervening in the structure of systems, for example, by identifying and changing the dynamics of the activity settings that structure the routine patterns within organisations and communities (O'Donnell and Tharp, 2012; Tseng and Seidman, 2007; Seidman, 2012; Davison and Hawe, 2012). A collection of commentaries and case examples of multilevel, community-based, culturally situated interventions, curated and edited by two leaders in the field (an anthropologist and a community psychologist), was published in 2009 (Schensul and Trickett, 2009).
The interdisciplinary field of systems-level intervention (Meadows, 2008) is rapidly developing, with advances in theory, methods for understanding context, and software to support the descriptive mapping of systems and the quantification of unintended effects (Williams and Hummelbrunner, 2009). Being newer in public health than behavioural science and health psychology, systems-intervention design and evaluation is not yet as well synthesised or organised, and it requires the navigation and reconciliation of disparate views. But a lack of packaging for public health consumption should not be mistaken for a lack of sophistication.
From where else can insights for intervention design, improvement and evaluation be gained? A recent reflection on nine cluster randomised trials of complex interventions in developing countries identified six primary lessons to improve evaluation coordination and sense-making of both intervention and evaluation activity rollout across different teams and subgroups (Reynolds et al., 2014). The authors concluded with a plea for greater reflexivity on the process of intervention delivery and evaluation conduct (Reynolds et al., 2014). Earlier authors went further, suggesting that the insights gained through the process information generated in intervention trials not be left to within-team discussions alone. Rather, it was recommended that a formal process evaluation oversight committee (with external appointees) become part of routine practice in intervention trials so that the myriad of interpretations, contests and choice points that ultimately affect intervention success are open to scrutiny (Riley et al., 2005). Such a committee would have equivalent status to a trial's health outcomes data monitoring committee and closer access to the day-to-day understanding of the trial than existing community or scientific advisory committees.

Complex intervention researchers have also turned to the technique of evaluability assessment to identify the type of knowledge to be generated from different points in the evolution of a complex intervention; the best use of evaluation resources for decision-maker needs; plausible sizes and distributions of effect; the advancement of evidence overall; and the practicalities of evaluation within policy time frames (Ogilvie et al., 2011). A year-long process of practice-based workshops across Australia produced a guide to program planning and evaluation that has two recommendations also relevant to this discussion (Hawe et al., 1990). First, that intervention evaluation should only proceed to a randomised trial once preliminary process and impact evaluations have been conducted. In other words, not only does an intervention have to be feasible, but teething troubles in delivery have to be ironed out, and consultations and qualitative research have to illuminate the range of possible intended and unintended side effects. Second, that the evaluability assessment should include the vital step of interrogating the logic of the intervention so that relatively puny programs with ambitious goals are identified and rectified. In other words, goals can be made more modest to match the program, or alternatively, the program can be redesigned to make it more likely that the goals can be met. Either way, it is considered wasteful to proceed to the evaluation of an intervention that seems at the outset too modest to make a difference (Hawe et al., 1990). The point to note is that the "real world contexts" which produce challenges for trialists are often just "business-as-usual" contexts for health promotion practitioners, and the wisdom that has accumulated there is worth finding and listening to in that field's specialist journals, conferences and texts (e.g., Rootman et al., 2001).

One of the most important developments in recent times within the evaluation research literature is the sophistication that has come to logic modelling, the pictorial representation of the change theory underpinning interventions, as applied to complex interventions.
If complexity is understood to be a property of the system into which an intervention is placed and not just a property of the intervention itself (Shiell et al., 2008) then new approaches to logic modelling are vital. Some of the lead scholars here are Patricia Rogers and Sue Funnell (Rogers, 2008; Funnell and Rogers, 2011). A number of features distinguish logic modelling for simple linear interventions from models which attempt to incorporate complexity (Rogers, 2008). Simple models applied to complex situations risk overstating the causal contribution of the intervention (Rogers, 2008). Most helpfully, Rogers and Funnell provide examples of poor logic models so that interventions likely to make minimal or negligible difference to the problem are easier to
identify. They also illustrate how emergent effects and side effects can be tracked and considered within ongoing implementation (Funnell and Rogers, 2011). A logic model of the dynamic system within which we may wish to place a particular intervention should draw attention to the likelihood of that intervention being not disruptive enough to bring about desired change in deep-seated patterns. Pictorial models of this type also draw attention to the multiple competing priorities and interests within the local context, where, if each particular priority developed its own program or intervention, one could soon imagine communities being pulled in multiple different directions, much as Yarnall and colleagues showed a general practitioner is tempted every day (Yarnall et al., 2003). Indeed, this was the cacophony that Okwaro and his colleagues elaborate upon in their discussion, the scenario being made worse, they explain, by the fact that the research data gathering activities and technologies around these interventions seem more prominent in the eyes of the community than the health care services they are seeking to improve (Okwaro et al., 2015). The decision to move forward with intervention trials in such circumstances is the investigators' ethical choice.

Writing about global health (but with arguments applicable in many settings), Panter-Brick and colleagues suggest that we have "broken faith" with the "core ethical mandate to address the root causes of poor health outcomes" (Panter-Brick et al., 2014, p. 1). They argue that the field has "fallen prey" to "silo gains" (individual risk factors and problems) linked to funding availability and to technological rather than structural solutions. The broad promises made at the time of engagement with communities have been left largely unfulfilled as a consequence (Panter-Brick et al., 2014). Panter-Brick and colleagues scorch us with the observation that public health tends to be "incommensurably proud of short-term successes in narrowly defined outcomes" (Panter-Brick et al., 2014, p. 3). Other researchers also point out that the ethical review of their intervention research traditionally focuses on issues of confidentiality and data rigour, leaving many aspects of potential for harm unvisited (Riley et al., 2005).

The ethical issues raised deserve more attention than can be given justice here. However, better ethical choices may be enabled in future with revised grant-giving processes. For example, funding agencies with a research focus typically require the majority of their applications to focus on research methods, with scant attention/room on the page for intervention design and justification. Conversely, agencies with a health promotion focus err in the opposite direction, with insufficient space available to describe and justify evaluation methods. Currently we mostly ask investigators to build a case only for (their specific recommended new) action in a particular area. We don't follow a business case model that requires consideration of other options of competing merit. We don't ask for an assessment of "what currently is" and what activities/resources the proposed intervention will displace (and the consequences of this). We don't ask for independent community impact statements or sustainability assessments, but instead queue up to burden community representatives to provide letters of support for what are too often weak courses of action.
We don't find out where the proposed idea fits within the hierarchy of needs in any given community, even though there is a track record of researchers ignoring the priority needs in a community in order to select only those that they themselves can respond to (Hawe, 1996). We rarely ask investigators to work in loose, broadly defined, cross-system coalitions where, without the threat of loss of funding, the desired action in one problem area (say, infectious disease) could be delayed, coordinated in sequence, revised, or aligned with activity in another problem area (say, transport or animal health) in
order to maximise effect. We speak the language of complexity (e.g., unpredictability, innovation) but impose a research merit and project management structure from a bygone era.

It does not have to be that way. Not when so much experience and understanding has accumulated to recalibrate the way we act. It is time to be maximally disruptive of the patterns that currently entrench poor health and health inequities. Ironically, we have never been so well equipped to do so.

Acknowledgements

I thank Alan Shiell for critical feedback on an early draft.

References

Boardman, J., Sauser, B., 2008. Systems Thinking: Coping with 21st Century Problems. CRC Press.
Campbell, M., Fitzpatrick, R., Haines, A., Kinmonth, A.L., Sandercock, P., Spiegelhalter, D., 2000. Framework for the design and evaluation of complex interventions to improve health. Br. Med. J. 321, 694–696.
Chandler, C.I., Kizito, J., Taaka, L., Nabirye, C., Kayendeke, M., DiLiberto, D., Staedke, S.G., 2013a. Aspirations for quality health care in Uganda: how do we get there? Hum. Resour. Health 11, 13.
Chandler, C.I.R., DiLiberto, D., Nayiga, S., Nabirye, C., Kayendeke, M., Hutchinson, E., Kizito, J., Maiteki-Sebuguzi, C., Kamya, M.R., Staedke, S.G., 2013b. The PROCESS study: a protocol to evaluate the implementation, mechanisms of effect and context of an intervention to enhance public health centres in Tororo, Uganda. Implement. Sci. 8, 113.
Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I., Petticrew, M., 2008. Developing and evaluating complex interventions: the new Medical Research Council guidance. Br. Med. J. 337, a1655.
Davison, C.M., Hawe, P., 2012. New perspectives on school engagement among Aboriginal children in northern Canada: insights from activity settings theory. J. Sch. Health 82, 65–74.
Foster-Fishman, P.G., Behrens, T.R., 2007. Systems change reborn: rethinking our theories, methods and efforts in human services reform and community-based change. Am. J. Community Psychol. 39, 191–196.
Funnell, S.C., Rogers, P.J., 2011. Purposeful Programme Theory: Effective Use of Theories of Change and Logic Models. John Wiley.
Hawe, P., Degeling, D., Hall, J., 1990. Evaluating Health Promotion: a Health Worker's Guide. MacLennan and Petty, Sydney.
Hawe, P., 1996. Needs assessment must become more change-focused. Aust. N. Z. J. Public Health 20 (5), 473–478.
Hawe, P., Bond, L., Ghali, L.M., Perry, R., Blackstaffe, A., Davison, C.M., Casey, D.M., Butler, H., Webster, C.M., Scholz, B., 2015. Replication of a whole school ethos changing intervention: different context, similar effects, additional insights. BMC Public Health 15, 265.
Kania, J., Kramer, M., 2013. Embracing Emergence: How Collective Impact Addresses Complexity. Stanford Social Innovation Review, blog, 21 January 2013.
Meadows, D., 2008. Thinking in Systems: A Primer. Chelsea Green Publishing.
Michie, S., van Stralen, M.M., West, R., 2011. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement. Sci. 6, 42.
Michie, S., Richardson, M., Johnston, M., Abraham, C., Francis, J., Hardeman, W., Eccles, M., Cane, J., Wood, C.E., 2013. The behaviour change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behaviour change interventions. Ann. Behav. Med. 46, 81–95.
Miller, R.L., Shinn, M., 2005. Learning from communities: overcoming difficulties in dissemination of prevention and promotion efforts. Am. J. Community Psychol. 35 (3/4), 169–183.
Moore, G.F., Audrey, S., Barker, M., Bond, L., Bonell, C., Hardeman, W., Moore, L., O'Cathain, A., Tinati, T., Wight, D., Baird, J., 2015. Process evaluation of complex interventions: Medical Research Council guidance. Br. Med. J. 350, h1258.
O'Donnell, C.R., Tharp, R.G., 2012. Integrating cultural community psychology: activity settings and the shared meanings of intersubjectivity. Am. J. Community Psychol. 49, 22–30.
Ogilvie, D., Cummins, S., Petticrew, M., White, M., Jones, A., Wheeler, K., 2011. Assessing the evaluability of complex public health interventions: five questions for researchers, funders and policy makers. Milbank Q. 89, 206–222.
Okwaro, F.M., Chandler, C.I.R., Hutchinson, E., Nabirye, C., Taaka, L., Kayendeke, M., Nayiga, S., Staedke, S.G., 2015. Challenging logics of complex intervention trials: community perspectives of a health care improvement intervention in rural Uganda. Soc. Sci. Med. 131, 10–17.
Panter-Brick, C., Eggerman, M., Tomlinson, M., 2014. How might global health master deadly sins and strive for greater virtues? Glob. Health Action 7, 23411.
Reynolds, J., DiLiberto, D., Mangham-Jeffries, L., Ansah, E.K., Mbakilwa, H., Bruxvoort, K., Webster, J., Vestergaard, L.S., Yeung, S., Leslie, T., Hutchinson, E., Reyburn, H., Lalloo, D.G., Schellenberg, D., Cundill, B., Staedke, S.G., Wiseman, V., Goodman, C., Chandler, C.I.R., 2014. The practice of doing evaluation: lessons learned from nine complex intervention trials in action. Implement. Sci. 9, 75.
Riley, T., Hawe, P., Shiell, A., 2005. Contested ground: how should qualitative evidence inform the conduct of a community intervention trial? J. Health Serv. Res. Policy 10, 103–110.
Rogers, P.J., 2008. Using programme theory to evaluate complicated and complex aspects of interventions. Evaluation 14, 29–48.
Rootman, I., Goodstadt, M., Hyndman, B., McQueen, D.V., Potvin, L., Springett, J., Ziglio, E., 2001. Evaluation in Health Promotion: Principles and Perspectives. World Health Organisation Regional Publications, No. 92.
Russell, M.A.H., Wilson, C., Taylor, C., Baker, C.D., 1979. Effect of general practitioners' advice against smoking. Br. Med. J. 2, 231–235.
Schensul, J.J., Trickett, E.J., 2009. Introduction to multilevel community-based culturally situated interventions. Am. J. Community Psychol. 43, 232–240.
Seidman, E., 2012. An emerging action science of social settings. Am. J. Community Psychol. 50, 1–16.
Shiell, A., Hawe, P., Gold, L., 2008. Complex interventions or complex systems? Implications for health economic evaluation. Br. Med. J. 336, 1281–1283.
Sridharan, S., Nakaima, A., 2011. Ten steps to making evaluation matter. Eval. Program Plan. 34, 135–146.
Staedke, S.G., Chandler, C.I.R., DiLiberto, D., Maiteki-Sebuguzi, C., Nankya, F., Webb, E., Dorsey, G., Kamya, M.R., 2013. The PRIME protocol: evaluating the impact of an intervention implemented in public health centres on management of malaria and health outcomes of children using a cluster randomised design in Tororo, Uganda. Implement. Sci. 8, 114.
Trickett, E.J., 2009. Multilevel community-based culturally situated interventions and community impact: an ecological perspective. Am. J. Community Psychol. 43, 257–266.
Tseng, V., Seidman, E., 2007. A systems framework for understanding social settings. Am. J. Community Psychol. 39, 217–228.
Williams, B., Hummelbrunner, R., 2009. Systems Concepts in Action: A Practitioner's Toolkit. Stanford University Press, Stanford, CA.
Yarnall, K.S.H., Pollak, K.I., Ostbye, T., Krause, K.M., Michener, J.L., 2003. Primary care: is there enough time for prevention? Am. J. Public Health 93, 635–641.