Correspondence
Retrospective evaluation of UNICEF's ACSD programme

Jennifer Bryce and colleagues (Feb 13, p 572)1 present a retrospective evaluation of the Accelerated Child Survival and Development (ACSD) programme in west Africa. One could debate a number of points raised in the study. For example, what is the implication of the finding that the programme operated in the most difficult areas of the countries? Consideration of this contextual factor might have suggested different conclusions about the level of success or failure of the initiative.

As an evaluator, I noted that Bryce and colleagues observed that: “these evaluations draw attention to the need for a new approach in assessment of scale-up of large programmes under real-life situations, in which the distinction between intervention and comparison areas is not clear cut”. This was the second reference in the article to methodological inadequacy. The associated Editorial2 called for raising the profile and priority of evaluation in global health and for ensuring higher-quality evaluation that contributes to positive change. This is useful.

I hope The Lancet will take a strong position on seeking out and publicising the new approaches called for by Bryce and colleagues—more of the same will not help to improve global health. These approaches will need to explore new perspectives on causality. I hope this rolling Lancet series of evaluations of large-scale global health programmes will insist that most of its articles focus on the application of alternative methods (inter alia, mixed methods, comparative case methods, and realist evaluation) that do grapple with who in society is affected, in what ways, and in what contexts, bearing in mind that the aim is not methods for their own sake but improved health outcomes.

I declare that I have no conflicts of interest.

Fred Carden
[email protected]
International Development Research Centre, Ottawa, ON K1R 7Y6, Canada
1 Bryce J, Gilroy K, Jones G, Hazel E, Black RE, Victora CG. The Accelerated Child Survival and Development programme in west Africa: a retrospective evaluation. Lancet 2010; 375: 572–82.
2 The Lancet. Evaluation: the top priority for global health. Lancet 2010; 375: 526.
As the deadline for the Millennium Development Goals approaches, an understanding of which interventions improve outcomes for pregnant women and children is crucial. Additional lessons can be learned from the report by Jennifer Bryce and colleagues,1 which detected no overall effect of UNICEF's Accelerated Child Survival and Development (ACSD) programme on maternal and child health care and outcomes in west Africa.

Traditional evaluation of interventions such as ACSD relies on national surveys that are expensive and may be unsuited to environments experiencing rapid change coupled with poor data systems. We strongly support The Lancet's call2 for better ways of doing evaluative research. Traditional before-and-after comparisons cannot account for local variation in the timing, intensity, and effectiveness of implementation of an intervention. In these settings, we may learn more from a more dynamic approach that promotes real-time feedback3 and uses time-series designs. External evaluations should build on the data, measurements, and assessments that are a part of intervention programmes, since traditional approaches will not detect potentially misleading changes in data quality during the intervention.

Although whether ACSD was effective in this case is still uncertain, we argue that interventions to improve health outcomes will require novel approaches to redesigning the health delivery system for those interventions.4,5 Evaluation methods that both assist and more accurately measure the effect of complex interventions in challenging settings will deliver useful support to the intervention programme while providing a more robust assessment of outcomes and impact.

We declare that we have no conflicts of interest.
*Pierre M Barker, Nana A Y Twum-Danso, Lloyd Provost
[email protected]
University of North Carolina at Chapel Hill, Chapel Hill, NC 27516, USA (PMB); Institute for Healthcare Improvement, Cambridge, MA, USA (NAYTD); and Associates in Process Improvement, Austin, TX, USA (LP)

1 Bryce J, Gilroy K, Jones G, Hazel E, Black RE, Victora CG. The Accelerated Child Survival and Development programme in west Africa: a retrospective evaluation. Lancet 2010; 375: 572–82.
2 The Lancet. Evaluation: the top priority for global health. Lancet 2010; 375: 526.
3 Scriven M. Beyond formative and summative evaluation. In: McLaughlin MW, Phillips EDC, eds. Evaluation and education: a quarter century. Chicago: University of Chicago Press, 1991: 169.
4 McCannon CJ, Schall MW, Perla RJ. Planning for scale: a guide for designing large-scale improvement initiatives. Cambridge, MA: Institute for Healthcare Improvement, 2008. http://www.ihi.org/IHI/Results/WhitePapers/PlanningforScaleWhitePaper.htm (accessed April 6, 2010).
5 Berwick DM. Lessons from developing nations on improving health care. BMJ 2004; 328: 1124–29.
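[Editor's note: the time-series design that Barker and colleagues advocate can be made concrete with a segmented-regression (interrupted time-series) sketch. The fragment below is illustrative only: the data are simulated and the variable names hypothetical; it is not drawn from the ACSD evaluation or the cited white papers.]

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
months = np.arange(24)                          # 24 months of routine programme data
post = (months >= 12).astype(int)               # indicator: intervention begins at month 12
t_since = np.where(post == 1, months - 12, 0)   # months elapsed since the intervention began

# Simulated coverage series: a baseline trend plus a level shift and
# a slope change at the intervention point (all values are invented)
coverage = (40 + 0.3 * months + 4 * post + 0.5 * t_since
            + rng.normal(0, 1.5, 24))

df = pd.DataFrame({"coverage": coverage, "t": months,
                   "post": post, "t_since": t_since})

# 'post' estimates the immediate level change at implementation;
# 't_since' estimates the change in trend thereafter
fit = smf.ols("coverage ~ t + post + t_since", data=df).fit()
print(fit.params)
```

Unlike a single before-and-after comparison, such a design uses the full monthly series, so local variation in the timing and intensity of implementation shows up in the fitted level and trend terms rather than being averaged away.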
We applaud the important evaluation by Jennifer Bryce and colleagues.1 With millions of dollars given by international agencies to individual countries for health and development programmes, there is an ethical imperative that the effectiveness of such funds is evaluated both rigorously and regularly.

We offer a few comments on the work by Bryce and colleagues in terms of the design and analysis. First, as Bryce and colleagues rightly point out, because of the limited sample size, especially at baseline, one can detect an effect only if it is very large. Second, one key assumption of difference-in-differences analysis is that the comparison or “control” areas, and their health trends, are similar to those of the focus or “treatment” areas. However, Bryce and colleagues make it clear that the focus-area selection was not random and that other organisations were working in these areas. Thus, the assumptions for using difference-in-differences are
not clearly met. Additionally, although they report p values for their difference-in-differences estimates, Bryce and colleagues do not report average treatment effects, a measure normally reported for such analyses (eg, see work by King and colleagues2), nor do they indicate which contextual variables were controlled for in obtaining the estimates in table 3. Because the study design and analysis cannot separate effects of the programme in question from those of other intervening factors, we question the validity of the study's policy conclusions. These concerns about internal validity point to the importance of rigorous, preferably prospective, evaluation design.

We declare that we have no conflicts of interest.
Bradley Chen, *Victoria Y Fan
[email protected]
Department of Global Health and Population, Harvard School of Public Health, Boston, MA 02115, USA

1 Bryce J, Gilroy K, Jones G, Hazel E, Black RE, Victora CG. The Accelerated Child Survival and Development programme in west Africa: a retrospective evaluation. Lancet 2010; 375: 572–82.
2 King G, Gakidou E, Imai K, et al. Public policy for the poor? A randomised assessment of the Mexican universal health insurance programme. Lancet 2009; 373: 1447–54.
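[Editor's note: Chen and Fan's points about the parallel-trends assumption and average treatment effects can be illustrated with a minimal difference-in-differences regression. The sketch below uses invented cluster-level coverage figures and hypothetical variable names; it stands in for the general method, not the analysis in the paper.]

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented cluster-level coverage (%) at baseline and endline
df = pd.DataFrame({
    "coverage": [42, 45, 40, 44, 58, 61, 50, 53],
    "treated":  [1, 1, 0, 0, 1, 1, 0, 0],   # 1 = hypothetical focus district
    "post":     [0, 0, 0, 0, 1, 1, 1, 1],   # 1 = endline survey round
})

# The coefficient on treated:post is the difference-in-differences
# estimate; it is interpretable as an average treatment effect only
# if treated and comparison areas share the same underlying trend
fit = smf.ols("coverage ~ treated * post", data=df).fit()
print(fit.params["treated:post"])
```

Under parallel trends, the treated:post coefficient recovers the average treatment effect on the treated; where that assumption is doubtful, as Chen and Fan argue here, the coefficient still summarises the double difference but loses its causal interpretation.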
Authors’ reply
The dialogue stimulated by this first paper in The Lancet's rolling series on evaluation contributes to its aim of building “an inclusive global network to support evaluation, one that tries to propose new and better ways of doing evaluative research”.1 In addition to their overall support for this aim, the authors of these letters pose several questions, which we address briefly below.

Fred Carden points to the importance of contextual factors in determining programme success, and especially the fact that the Accelerated Child Survival and Development (ACSD) programme was intended to focus on the most difficult geographic areas in each country. Our results show that baseline mortality and undernutrition levels were very similar in intervention and comparison areas in the two countries where mortality could be evaluated (table 5). Baseline differences are therefore unlikely to have affected the results.

We agree with Pierre Barker and colleagues' call for evaluations built on the data, measurements, and assessments that are part of intervention programmes. Ideally this should be the case, with data quality maintained prospectively through close coordination between programme implementers and independent evaluation teams. We also believe, however, that the evaluation team must be independent of programme implementation. In the case of the ACSD evaluation, internal documentation and monitoring of programme processes were weak and resulted in early conclusions that could not be replicated retrospectively by the independent evaluation team.

Bradley Chen and Victoria Fan reiterate our statement that sample sizes at baseline were an important limitation of the study, which was powered to detect the 20% reduction in mortality in children younger than 5 years (the main objective of the ACSD strategy) but had small sample sizes for coverage indicators with denominators addressing subsets of the population, such as antibiotics for childhood pneumonia and exclusive breastfeeding. We do not understand their statement that the “difference-in-differences” method requires highly similar baseline levels in intervention and comparison areas; in fact, this method attempts to take baseline imbalances into account by looking at differences over time in each of the two areas. Regarding their critique that we did not report average treatment effects directly, this is not true. Tables 4 (nutrition) and 5 (mortality) include a column with changes over time, which are akin to treatment effects, and in table 3 (coverage) these may be obtained by subtraction. We do agree with Chen and Fan that the presence of other programmes in the comparison areas has affected our results, as mentioned in our conclusions. However, given that the ACSD programme was aimed at accelerating progress, we stand by our main policy conclusion that there was no evidence of such acceleration.

We declare that we have no conflicts of interest.
*Jennifer Bryce, Kate Gilroy, Gareth Jones, Elizabeth Hazel, Robert E Black, Cesar G Victora
[email protected]
Institute for International Programs, Johns Hopkins Bloomberg School of Public Health, 615 North Wolfe Street, Baltimore, MD 21205, USA (JB, KG, GJ, EH, REB); and Universidade Federal de Pelotas, Pelotas, Brazil (CGV)

1 The Lancet. Evaluation: the top priority for global health. Lancet 2010; 375: 526.
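[Editor's note: the subtraction the authors describe for table 3 follows the standard difference-in-differences identity, given here in a textbook formulation rather than reproduced from the paper:

\[ \widehat{\mathrm{DiD}} = \left(\bar{Y}^{I}_{\text{post}} - \bar{Y}^{I}_{\text{pre}}\right) - \left(\bar{Y}^{C}_{\text{post}} - \bar{Y}^{C}_{\text{pre}}\right) \]

where \(\bar{Y}^{I}\) and \(\bar{Y}^{C}\) denote mean outcomes in intervention and comparison areas. Any time-invariant baseline difference between areas cancels inside each bracket, which is why the method does not require similar baseline levels, only parallel underlying trends.]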
Enzyme replacement therapy for Fabry’s disease

I would like to draw your attention to a fundamental question regarding the design of the study by A Mehta and colleagues (Dec 12, p 1986).1 I am an investigator on the Fabry Outcome Survey (FOS) and have contributed data to this analysis. I believe that my experience in relation to this study might be instructive. To be recruited into this study, a patient with Fabry’s disease would have had to have started treatment with agalsidase alfa and remained on that treatment for 5 years. Six of the male patients that I see began enzyme replacement therapy with agalsidase alfa in a timeframe that should have allowed an analysis of their 5-year outcome. Of the six patients, only one remains alive and on agalsidase alfa treatment. He is doing reasonably well and is probably