Rethinking Evaluations of Health Equity Initiatives: An introduction to the special issue




Evaluation and Program Planning 36 (2013) 153–156



Sanjeev Sridharan a,b,*, Carol Tannahill c

a The Evaluation Centre for Complex Health Interventions, Keenan Research Centre, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Canada
b Department of Health Policy, Management and Evaluation, University of Toronto, Canada
c Glasgow Centre for Population Health, United Kingdom
* Corresponding author at: Evaluation Centre for Complex Health Interventions, Keenan Research Centre, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Canada. E-mail address: [email protected]

Article history: Available online 8 March 2012

Keywords: Health equities; Evaluation; Social determinants of health; Learning frameworks

Abstract

This paper is an introduction to a special issue on "Re-thinking Evaluations of Health Equity Initiatives." The papers in this volume aim to build understanding of how evaluations can contribute to addressing inequities, and of how evaluation design can develop a better understanding of, and better respond to: (i) policy maker and practitioner needs; (ii) the systemic and complex nature of the interventions necessary to impact inequities; and (iii) the processes that generate inequities.

© 2012 Elsevier Ltd. All rights reserved.

As highlighted by the recently enacted U.S. Health Care and Education Reconciliation Act of 2010 and by the publication of the World Health Organization's report 'Closing the Gap in a Generation' (2008), interest in methods and approaches for evaluating the impacts of health system reforms on health inequities continues to grow. This volume aims to build understanding of how evaluations can contribute to addressing inequities, and of how evaluation design can develop a better understanding of, and better respond to: (i) policy maker and practitioner needs; (ii) the systemic and complex nature of the interventions necessary to impact inequities; and (iii) the processes that generate inequities. This forum includes contributions from leading evaluators, health equity theorists and policy scholars working in applied settings.

The papers in this volume raise questions about how evaluations can help understand the processes that generate health inequities. They also ask: what is it about a program or policy that disrupts the processes that lead to health inequities? The papers in this volume are informed by a generative view of social causation. It is perhaps useful to compare such a view of causation with a successionist understanding of causality:

"The quest to understanding 'what works?' in social interventions is, at root, a matter of trying to establish causal relationships, and the hallmark of realist inquiry is its distinctive 'generative' understanding of causality. This is most easily explained by drawing a contrast with the 'successionist' model, which underpins clinical trials. On the latter account what is needed to infer causation is the 'constant conjunction' of events: when the cause X is switched on (experiment) effect Y follows, and when the cause is absent (control) no effect is observed. The generative model calls for a more complex and systemic understanding of connectivity. It says that to infer a causal outcome (O) between two events (X and Y) one needs to understand the underlying generative mechanism (M) that connects them and the context (C) in which the relationship occurs." (Pawson, Greenhalgh, Harvey, & Walshe, 2005, p. 2)

Much of what we see in the evaluation literature on health inequities is informed by a successionist view of causality. One illustrative example comes from "sub-group" analysis: the idea that the same program can have different effects for different groups of individuals. In our experience, such under-theorized sub-group analysis quite often does not shed light on the generative mechanisms driving inequities and may not inform the development of "solutions" for health inequities. The questions we hope this set of papers raises include: given the complexities involved in reducing inequities, how can evaluations help policy makers and practitioners navigate and align complex activities to achieve long-term goals of greater equity? How do evaluations of initiatives focused on equities differ from other evaluations? A focus on generative mechanisms precipitates attention to the contexts and mechanisms (Pawson, 2006) associated with health inequity outcomes. This in turn has implications for an evaluation methodology that can help us to learn whether programs work, how programs work and what can be done to make programs work.
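To make the contrast concrete, the following minimal sketch is our own illustration, not drawn from any paper in this volume; the data, variable names and the CMOConfiguration type are all hypothetical. It contrasts the successionist move of comparing outcomes with the cause present and absent against a generative account that records the mechanism and context assumed to connect cause and outcome.

```python
# Illustrative sketch only: hypothetical data and names, not from the papers
# discussed in this volume.
from dataclasses import dataclass
from statistics import mean

# --- Successionist view: causation as "constant conjunction" ---
# Each record: (exposed to the program?, outcome score)
records = [
    (True, 0.72), (True, 0.68), (True, 0.75),
    (False, 0.55), (False, 0.60), (False, 0.52),
]
treated = [y for exposed, y in records if exposed]
control = [y for exposed, y in records if not exposed]
# When cause X is "switched on", effect Y follows: a single effect estimate,
# silent on why or for whom the effect arises.
print(f"Estimated effect: {mean(treated) - mean(control):.2f}")

# --- Generative view: a context-mechanism-outcome (CMO) configuration ---
@dataclass
class CMOConfiguration:
    context: str    # C: the setting in which the relationship holds
    mechanism: str  # M: the process thought to generate the outcome
    outcome: str    # O: the change produced when M fires in C

cmo = CMOConfiguration(
    context="low-income neighbourhood with a trusted community centre",
    mechanism="local staff reduce the social distance to health services",
    outcome="higher uptake of preventive screening",
)
print(cmo)
```

The point of the sketch is not the arithmetic but the shape of the two inferences: the first reduces causation to an on/off comparison, while the second obliges the analyst to name the mechanism and the context, which is precisely the information an under-theorized sub-group analysis leaves out.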


A key theme of the papers in this volume is the recognition of the difference between the "problem space" (knowing what variables, systems of relationships or patterns are associated with the problem) and the "solution space" (the strategic drivers with scope to address and reverse the problem). The deeper question this collection of papers raises is whether understanding of the causes of health problems is a sufficient basis for policy to address problems of inequities more effectively. In other words, would knowledge of the drivers of health inequities suffice in planning actions to reduce them? Does knowing the pattern of the problem suffice to move towards action in the solution space?

1. Possible implications for the field of evaluation

Two aspects of the volume will have very broad appeal for evaluators. First, health inequity interventions are conceptualized as examples of complex interventions. Interventions focussed on health inequities are complex in multiple ways, yet surprisingly little research on evaluations of health inequity initiatives focuses on the sources of complexity (Sridharan & Nakaima, 2010). In our experience, interventions focussed on long-term outcomes such as reductions in health inequities need to address at least three different kinds of complexity. First, there is complexity due to the multiple components involved in complex interventions: a single program might not by itself suffice to address health inequities, and multiple interventions might be needed. These many components might interact, and the evaluation challenge is to assess the impacts of multiple interacting interventions. A second source of complexity is the dynamic nature of programs: interventions often change over time in response to a number of factors (Patton, 2010). Dynamic changes in programmes have implications for both program theory and evaluation design. A third source of complexity is due to contextualization (Pawson et al., 2005): complex programmes are located in specific settings, and therefore the act of translating an initiative requires adaptation to local settings. Additionally, complex health equity interventions often lack a clear a priori theory, and so rarely have a clear blueprint at the outset. Each of these sources of complexity – multiple interacting components, dynamic complexity and contextualized complexity – has implications for both evaluation theory and design. Given the growing interest in evaluations of complex interventions (Patton, 2010; Pawson, 2006), we think these ideas will be relevant to fields outside health equities; the discussion of complexity will appeal to evaluators tackling similar issues across a wide range of sectors.

Second, the papers in this volume also reflect on what is "useful information" from policy and practice perspectives. The implication here is that not all evidence is equally useful, and future evaluation methods must be informed by an understanding of the needs of policy makers as well as the decisions they are required to make in finding "solutions" to health inequities. What types of information do policy makers and practitioners find useful from evaluations in making decisions on health equities? The answer to this question requires a shift in evaluative focus from "does a program work?" to "what is it about a program that makes it work?".
Understanding the mechanisms and contexts that influence a program's success can help with the planning and implementation of interventions. Health equity interventions are dynamic (they change over time) and depend critically on the context in which they are implemented as well as on stakeholder reasoning (Pawson, 2006).

Key programming challenges include: How can evaluations help program planners and implementers align the complex programming needed with the long-term goal of impacting on health inequities? How does knowledge of what makes a program work help in the planning and implementation of programs?

2. The papers in this volume

The first paper, by Tannahill and Sridharan, asks the important question: how can evaluation help understand and build the bridge between the problem space and the solution space? In addition to unpacking some features of the two spaces, Tannahill and Sridharan discuss how evaluations can help understand a bridge between them. They pose a number of questions for reflection in thinking evaluatively about health inequities, including: Is the program (or policy) the right unit of analysis? How can an evaluation shed light on the mechanisms that generate inequities? How does the intervention disrupt such mechanisms? The uniting theme of the subsequent papers is the role of evaluation in bridging the problem and solution spaces.

Two of the papers explore the roles of social distance and spatial proximity in evaluating health inequity initiatives. Parsons et al. describe an evaluation of a multimedia (photo and sound) art installation intended to promote awareness of health disparities as experienced by homeless persons living in Toronto, Canada. The objective of the evaluation was to determine whether the installation had an impact on audience members, and if so, to understand the mechanisms and pathways of influence on viewers' perspectives on homelessness. This evaluation helps demonstrate the complexity of the pathways by which arts-informed 'interventions' work, including allowing viewers to re-imagine the lives of others and identify points of common interest. Parsons et al. demonstrate that one important mechanism by which arts programs can work is to reduce social distance. They implicitly challenge the narrow way in which impacts of health equity interventions are normally evaluated.

The idea of social distance also provides a good segue to Koschinsky's focus on spatial proximity. Koschinsky argues for the need to incorporate spatial thinking in evaluations of health equity initiatives. A spatial perspective on health inequities is important because it can bring attention to the context of a program and the interdependence between the program and its context, along with a more concrete understanding of the mechanisms by which interventions work. In our experience, program planning and implementation often do not take a nuanced approach towards incorporating context. For example, very few evaluations challenge the boundaries between programs and contexts; fewer still investigate the role of the location of the program in the theory of change of the intervention. Koschinsky provides strong evidence that the spatial revolution has mostly left evaluation untouched, and reflects on the consequences of this. She relates the spatial perspective to both realist evaluation and randomized controlled trial perspectives to demonstrate some of the benefits of spatial thinking for evaluation.

Two of the other papers in this series unpack details of a theory of a solution space. Montague and Porteous discuss the importance of an explicit reach component in a program theory of health inequities.
They argue persuasively for the need to build reach into the logic models and results frameworks of public health initiatives. They also highlight problems that can stem from a lack of explicit description of reach, including the potential of interventions to exacerbate health inequities. While much of their focus is on representing (and visualizing) health equity interventions, they also discuss the knowledge translation implications of incorporating reach into planning solutions for health inequities. Implicit in their paper is an emphasis on thinking dynamically about interventions, which includes considering the effects of different stakeholders' participation in and engagement with the intervention. The paper presents a number of concrete ideas about reach that can lead to better representation and translation of interventions to impact health equity.

Dunn et al. describe what it means to think of interventions as program theories. They reflect on the multiple dimensions that need to be considered when thinking theoretically about health inequities. They examine the theories of three interventions implemented in very different settings to highlight the heterogeneity in views about theory, and provide insight on how evaluators need to respond to the underlying theory of inequities. In addition, they emphasize that theory must be realistic, incorporating knowledge of implementation while allowing space for implementers to scrutinize theory assumptions before implementation.

There are three papers with a strong policy and programmatic focus: papers from research conducted by staff of the World Health Organization and the Canadian Health Services Research Foundation, and a recent study on developing a performance measurement system for the Toronto Central Local Health Integration Network. What these papers have in common is the idea that an "off the shelf" evaluation and performance measurement system might not work. Further, they all seek to learn from a synthesis of data. Simpson et al.'s focus is on what makes a "good enough" case study to inform implementation; Phillips et al.'s focus is on learning about context across final reports submitted to a research foundation; and Nakaima et al.'s focus is on utilizing feedback from many different stakeholders in Toronto hospitals on how to build a performance system for measuring health equities.

Phillips et al. examine contextual factors that impact knowledge uptake in a leading Canadian health services research foundation. This paper is important because funders rarely take stock of the factors associated with the uptake of knowledge. The authors reach a somewhat surprising conclusion: "Furthermore, contrary to the literature, which suggests intervention assessment and reporting practices rarely explain the outcomes of an intervention in terms of context, there was a sufficient number of final reports in this study that appeared to attend to context within the project design and measurement." The important contribution of their paper lies in bringing greater clarity on how research and evaluation on health equity can be commissioned to anticipate and incorporate context in the translation process.

Simpson et al. explore case studies as a source of data for taking action on the social determinants of health and health inequities. They utilize case studies as a means of understanding what works, in what context and why. The question they raise is: what is a "good enough" case study that can aid implementation of health equity initiatives? An important contribution of their paper is a checklist that will enable policy makers and evaluators to quickly assess whether a case study contains enough information to assist in the development of policy options for reducing health inequities.
While the authors' focus is on case studies, the implication of their work goes well beyond this study design: what is a "good enough" evaluation or synthesis that can aid future implementations of programs or policies?

Nakaima et al. discuss how a coordinating local organization (like the Toronto Central Local Health Integration Network) can develop a performance measurement system for health equities. Their focus is on hospitals, but there are important lessons for other organizations interested in impacting health equity. Their argument is that performance measurement itself can help make a difference to health inequities. They integrate feedback from 18 Toronto hospitals to synthesize learning about developing a performance measurement system. Some key questions raised in the paper include: Why would the impact trajectory be similar for different hospitals? Can there be a standardized measure of success, or should each hospital define success individually? Would different hospitals (with different specializations) need very different performance measures? A key focus of future research on the evaluation of health equity initiatives is to integrate evaluation with systems of performance monitoring.

Lastly, in the final paper Carden looks ahead: he provides a view of evaluation as a resource to rethink health equity interventions, reshape interventions based on lessons learned across a number of implementations, and build a knowledge base of how best to reform systems based on what has and has not worked in other reform efforts. Central to his view is an integrated conception of evaluation in which evaluators work closely with policy makers and practitioners at the beginning, middle and end of implementing programs and projects. The purpose of evaluation, clarity of values and the role of leadership all contribute to his view of rethinking, reshaping and reforming evaluations.

3. Towards a framework of learning

The papers demonstrate that very different kinds of learning are possible from an evaluation, depending on its purpose. There is a need to match the model of learning to that of the initiative, and to identify whether the goal of the evaluation is to demonstrate effectiveness or to understand the complexity of planning and implementation and the pathways by which health equity interventions work. Other functions of evaluations can include "influencing the conceptualisation of issues, the range of options considered and challenging taken-for-granted assumptions about appropriate goals and activities" (Sanderson, 2003, p. 333). Additionally, evaluations of health equity initiatives can create a roadmap that is often missing at the start of an intervention. Consider, for example, Pawson's description (2003, p. 488) of the function of evaluation: "I think the aim should be to produce a sort of 'highway code' to programme building, alerting policy makers to the problems that they might expect to confront and some of the safest measures to deal with them. What the theory-driven approach initiates is a process of thinking through the tortuous pathways along which a successful programme has to travel."

The need to generate output numbers can, at times, run counter to the need to innovate and experiment in order to understand the roadmap by which complex programmes can work. Policy makers, staff and managers working on initiatives that address health inequities need help with aligning their programme activities with the overall long-term goal of reducing health inequities. The pathways by which singular programs impact health inequities are complex and, in our experience, even detailed logic models may not help in aligning a program with that long-term goal. To use Pawson's metaphor above, there is a need for evaluation that can contribute towards a "highway code" for understanding how singular or systemic interventions can impact health equities. The challenge we see is not just one of developing the best tools or methods.
Instead, it is the question of how evaluations themselves can help build coordination between policies, programs and multiple other interventions (Patton, 2010). How can evaluation help develop a more strategic approach to inter-sectoral interventions? An important argument that Smith and Spenlehauer (1994, p. 277) present is that evaluations can serve either to fragment or to integrate policies: "the question researchers in public policy need to pose is whether policy evaluation is being used as an instrument for the improved integration of policies, or rather as an agent useful to administrative systems for preserving the disintegration of policies, which has allowed these systems to reproduce themselves in the past?" We hope that the papers in this volume help promote an approach that supports an integration of responses to equities.

References

Patton, M. (2010). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York: Guilford Press.

Pawson, R. (2003). Nothing as practical as a good theory. Evaluation, 9(4), 471–490.

Pawson, R. (2006). Evidence-based policy: A realist perspective. London: Sage.

Pawson, R., Greenhalgh, T., Harvey, G., & Walshe, K. (2005). Realist review: A new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy, 10(Suppl. 1), 21–34.

Sanderson, I. (2003). Is it 'what works' that matters? Evaluation and evidence-based policy making. Research Papers in Education, 18(4), 329–343.

Smith, A., & Spenlehauer, V. (1994). Policy evaluation meets harsh reality: Instrument of integration or preserver of disintegration? Evaluation and Program Planning, 17(3), 277–287.

Sridharan, S., & Nakaima, A. (2010). Evaluation and health equities: How can evaluations help in responding to health inequities? Invited paper for the WHO Kobe Centre for Health Development, World Health Organization.

World Health Organization. (2008). Closing the gap in a generation: Health equity through action on the social determinants of health. Commission on Social Determinants of Health Final Report. http://www.who.int/social_determinants/final_report/en/ Accessed 28.10.2011.

Sanjeev Sridharan is Director of the Evaluation Centre for Complex Health Interventions at the Li Ka Shing Knowledge Institute at St. Michael's Hospital and Associate Professor of Health Policy, Management and Evaluation at the University of Toronto. Prior to his position at Toronto, he was Head of the Evaluation Program and Senior Research Fellow at the Research Unit in Health, Behaviour and Change at the University of Edinburgh. His work over the last decade has been funded by a variety of sources, including the Scottish Executive, NHS Health Scotland, the U.S. Department of Health and Human Services, UNICEF South Asia and the U.S. Department of Justice. He is presently working closely with the China National Health Development Research Centre to build evaluation capacity in the health sector in China. He is also working on an initiative to develop a post-graduate program in evaluation in five South Asian countries, and is advising the Ministry of Health in Chile on utilizing evaluation approaches to redesign health policies in Chile. He is on the boards of the Canadian Journal of Program Evaluation and Evaluation and Program Planning.

Carol Tannahill is Director of the Glasgow Centre for Population Health (GCPH), a research and development centre generating insights and evidence for action to improve health and tackle inequality. In her role as Director of GCPH, she has been instrumental in establishing a number of multi-disciplinary public health research and evaluation projects which are helping to elucidate the pathways linking deprivation and ill-health, and to evaluate the effects on health of various social policy interventions. Over a period of almost 20 years, Carol has contributed to a wide range of international, national and local public health developments. She is Honorary Professor at the University of Glasgow, and Honorary Visiting Professor at Glasgow Caledonian University. Previous posts include Senior Adviser in Health Development at the Public Health Institute for Scotland, and Director of Health Promotion and Executive Board Member for Greater Glasgow Health Board.