Evaluation and Program Planning 46 (2014) 25–37
Ethical implications of resource-limited evaluations: Lessons from an INGO in the eastern Democratic Republic of Congo

Ramya Ramanath*
School of Public Service, DePaul University, 14 E Jackson Blvd., Suite 1606, Chicago, IL 60604, USA
ARTICLE INFO

Article history: Received 29 August 2013; received in revised form 11 March 2014; accepted 23 April 2014; available online 9 May 2014.

Keywords: International nongovernmental organizations; evaluation; ethics; implications; resources; INGO; accountability.

ABSTRACT

The emphasis on demonstrable program results in international development work has produced countless evaluation guidelines, and numerous scholars have championed specific, ethics-based evaluation approaches to guide international nongovernmental organizations (INGOs). Yet few studies have examined the ethical implications of current evaluation practices among INGOs or the resulting effects on INGO-funded programs. This article focuses on one among a growing population of young, U.S.-based INGOs whose evaluation practices reflect limitations of time, methodological expertise and funding. Drawing on existing principles of ethical evaluations, the author explores the circumstances and potential implications of one evaluation performed by an INGO in the eastern Democratic Republic of Congo and concludes that an ethically defensible evaluation exceeds the capacity of this young INGO. Four propositions are forwarded to highlight the tensions between currently accepted evaluation guidelines and INGO realities. Finally, to help under-resourced INGOs minimize the potential ethical implications for their programs, the article recommends that they prioritize their limited resources to: (1) build local capacity and decentralize evaluation tasks and responsibilities; (2) share program agendas and solicit feedback on implementation from evaluands; (3) share field impressions with local and expert stakeholders; and (4) translate communications into local dialects to facilitate discussion about structuring future programs and their evaluation.

© 2014 Elsevier Ltd. All rights reserved.
1. Introduction

It is relatively straightforward to argue in favor of cross-border assistance provided by Northern international nongovernmental organizations (INGOs) (Riddell, 2007; Singer, 2010). While there is little doubt about the merits of a few providing assistance to the multitude of the world's poor, the dilemma around how best to deliver and then evaluate the effectiveness of such assistance is far more nuanced. Should a Northern INGO's program to provide economic opportunities to needy women in the global South be evaluated solely on its ability to disburse funds, or must it also be assessed on its efforts to undertake a systematic analysis of how beneficiaries may maintain the economic gains they realize? How open must INGOs be about their own uncertainties and failures? Similarly vexing questions should concern INGOs and all those with a stake in their performance.
Abbreviations: INGO, international nongovernmental organization; NGO, nongovernmental organization.

http://dx.doi.org/10.1016/j.evalprogplan.2014.04.006
INGOs explicitly acknowledge that values such as empowerment (e.g., Action Aid International; Kiva; One Acre Fund; Pact), individual advancement (e.g., Africare; Family Health International; World Resources Institute), community well-being (e.g., Christian Relief Services; Haitian Health Foundation, Inc.; Eurasia Foundation Inc.) and moral, spiritual, even religious sensibility (e.g., Convoy of Hope; World Vision) underpin their missions. Evaluation is recognized as a critical means for testing these claims and thus informing organizational learning, management and decision-making (Benjamin, 2008; Ebrahim, 2003; Grasso, 2010). A focus on ethics, the article argues, can render evaluation practice more realistic and accessible, especially for young, resource-challenged INGOs that are under new pressures to demonstrate accountability to a wide variety of stakeholders, including donors, beneficiaries, staff and partners. What remains unclear is what principles or criteria an INGO should use to order these multiple claims. As Brown and Moore (2010) recount, INGOs can utilize many different types of judgments in settling these claims, namely, legal and regulatory (enforced largely by donors and courts), prudential (based on consequences), ethical or moral, or strategic (based on balancing legal, prudential and ethical claims).
This article makes a case for the use of ethical guidelines as one means to help young, resource-limited INGOs organize and evaluate their interventions amidst multiple, often competing, sets of accountability demands. What type of evaluation is ethical (if any) may be hard to determine. What is ethical, according to Rachels (2003, 19), may be either a matter of opinion or defined by context. For Rachels (2003), the most crucial consideration in a discussion of ethics is not the decision itself but the reasons and principles forwarded to support it. Hendricks and Bamberger (2010) bring a similar understanding to the notion of ethics in development evaluation when they observe that there are no absolutes, i.e., no good or bad, when applying ethical measures to development evaluation. Yet, drawing on a host of different guidelines, they conclude that ethical evaluations must realize four values:

- provide information about the effects of programming;
- identify potential harms;
- promote equity; and,
- provide a voice to all stakeholders.
Building on these values, this article attempts to answer two questions: (1) what ethical challenges do evaluators encounter in evaluating programs funded by young U.S.-based INGOs? and (2) what ethical implications do such evaluations have for the programs these INGOs fund?

International charities based in the United States have grown in both number and importance in the 21st century (Reid & Kerlin, 2003). According to the National Center for Charitable Statistics (2014), the number of 501(c)(3) public charities focusing on international, foreign affairs and national security work has ballooned from 7,508 in May 2000 to 16,818 in December 2013. Within this set of public charities are several subsets of organizations, including those dedicated to advocacy, cultural exchange, emergency relief and development. The two largest subsets pursue international relief and development. This article pertains to a young U.S.-based public charity that identifies itself as an international development organization and is primarily financed by unrestricted funds.1 As a young organization with fewer established routines or structures (Crossan, Lane, & White, 1999), its lack of institutionalization offers an opportunity to examine how it may learn from errors and interpret, integrate and organize future evaluation practices to reflect ethical principles.

Footnote 1: Young public charities (0-5 years old) are likely to be small-staffed and financed by minimal unrestricted contributions from members and other individuals (Avina, 1993). The author recognizes that evaluation challenges for INGOs receiving funds from such sources as foundations and governments may be different.

Moral theorizing that is attuned to the actual constraints of INGO evaluations can, the author argues, provide a sounder basis for decision-making than generally articulated rules for ethical conduct. Broad-based guidelines for conducting evaluation, such as those supplied by the International Development Evaluation Association (2012), the American Evaluation Association (2004), the United States Agency for International Development (2011), the United Nations Evaluation Group (2008), the African Evaluation Association (2006/2007) and others, are useful but need to be considered in relation to exactly how young INGOs go about evaluating the effectiveness of their work.

2. Review of literature

2.1. Evaluation in international development

INGOs are often required to complete an evaluation of their work for donors, regulators, the general public and media. The
results of these evaluations can influence program design, determine future funding and ultimately affect the communities where the INGOs work. Recent initiatives like the International Initiative for Impact Evaluation (3ie)2 and the Paris Declaration on Aid Effectiveness3 reflect the heightened attention to demonstrable results.

Footnote 2: http://www.3ieimpact.org/en/.

Footnote 3: http://www.mfdr.org/Sourcebook/1stEdition/2-1Paris.pdf.

Scholars of civil society have devised numerous frameworks for how to approach and conduct evaluations. Existing scholarly literature on the use of evaluation in international development focuses on five dominant frameworks. Each framework generates a somewhat different picture of the ethics driving the evaluation (i.e., how evaluators make decisions about good and bad and what falls within the moral duties and obligations of an evaluation). These frameworks, their associated contributors and the key values underlying each framework are summarized in Table 1.

The most prevalent and widely documented is the logical framework analysis (LFA). It and its many derivatives recommend prioritizing and outlining program objectives, intended outcomes and the causal relationships between them (Bennett, 1979; Funnell & Rogers, 2011; Jackson, 1997; McLaughlin & Jordan, 1999; Rockwell & Bennett, 2004; USAID, 1979). If rigidly focused on quantification (i.e., product rather than process data), LFA can be an inadequate tool for monitoring complex development interventions (Fine, Thayer, & Coghlan, 2000). Ebrahim (2005, 86) analyzes the use of LFA among Southern NGO grantees and finds that the resulting efforts better meet the account-seeking needs of international donors than the learning and decision-making needs of Southern NGOs. The ethics of such an approach are shaped by the upward accountability demands of donors which, if left unchallenged, could lead Southern clients (and INGOs) to select the most "appropriate" action based on what fits donors' expectations.

In contrast to LFA's emphasis on observable and measurable performance data for funders, the expressive accountability framework4 and its many derivatives incorporate data not easily articulated but rather perceived and expressed through NGO and community members (Chambers, 1994, 1997; Cousins & Earl, 1995; Cousins & Whitmore, 1998; Fetterman, 1994; Mertens, 2007; Uphoff, 1991). This framework privileges learning and growth, particularly for Southern clients, by relying on process data collected through field visits, participatory rural appraisals, discussions with field workers, community needs assessments, and training and evaluation (Christensen & Ebrahim, 2006; Cousins & Earl, 1992; Cousins & Earl, 1995; O'Dwyer & Unerman, 2008; Smillie & Hailey, 2001). This framework troubles those who see participatory methods as imposing, rather than overcoming, power relations and as romanticizing local knowledge while ignoring divisions within a community (Cooke & Kothari, 2001; Leurs, 1996; Wallace & Chapman, 2004).

Footnote 4: This is a term borrowed from Knutsen and Brower (2010).

A third set of evaluation methods is labeled the calculated impact framework, comprising cost-benefit analysis and a host of other techniques developed by grantmaking foundations and venture philanthropists (see Table 1). Collectively, these methods borrow concepts from the business world—such as costs and benefits, internal rate of return, expected return—to monitor social and financial returns for funders.
They focus on what to measure, how to monetize costs and benefits and whether resulting outcomes may be attributed to the amount of funds that support the program (Acumen Fund, 2014; Weinstein, 2009; William & Flora Hewlett Foundation, 2008). "A key challenge in such quantification and attribution," according to Ebrahim and Rangan (2010, 8), "lies in addressing the thorny issue of causality." Since program outcomes can be caused by factors unrelated to the program, attribution is easier in interventions like the provisioning of food, vocational training and housing than in areas like human rights or gender empowerment. Advocates of these methods hold that what gets measured gets done.
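As an illustration only (the figures and function names below are hypothetical, and the expected-return calculation is a simplified rendering of the benefit-times-likelihood-over-cost logic these tools share, not the exact formula of any source cited above), the arithmetic behind such metrics can be sketched in a few lines of Python:

    def cost_per_beneficiary(total_cost, beneficiaries):
        # Cost-effectiveness at its simplest: dollars spent per person reached.
        return total_cost / beneficiaries

    def expected_return(projected_benefit, probability_of_success, cost):
        # Risk-adjusted benefit per dollar: discount the projected social
        # benefit by the estimated chance of success, then divide by cost.
        return (projected_benefit * probability_of_success) / cost

    # Hypothetical grant: $2,000 spent, 450 women targeted, $6,000 in
    # projected income gains with an estimated 50% chance of success.
    print(cost_per_beneficiary(2000, 450))    # ~4.44 dollars per woman
    print(expected_return(6000, 0.5, 2000))   # 1.5 in benefit per dollar spent

The attribution problem noted by Ebrahim and Rangan (2010) enters through the numerator: none of these quantities is hard to compute, but each must first be defended as genuinely caused by the program.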
Table 1. Key frameworks in international development evaluation.

Framework: Logical Framework Analysis (and other results-based management tools)
Description of intent: Starts with identifying problems and analyzing them to produce and uphold objectives; it is a structured, results-focused approach to prioritizing objectives.
Contributors: USAID (1979), Bennett (1979), Jackson (1997), McLaughlin and Jordan (1999), Rockwell and Bennett (2004), Funnell and Rogers (2011).
Key values: Scientific management; structure; cause and effect; outcomes.

Framework: Expressive accountability (and other relationship-based methods)
Description of intent: Focus is on the process used to achieve the objectives; participation and empowerment of NGO field staff and program participants.
Contributors: Uphoff (1991), Fetterman (1994), Cousins and Earl (1995), Chambers (1994, 1997), Cousins and Whitmore (1998), Mertens (2007).
Key values: Reflection; participation; self-determination.

Framework: Calculated impact (cost-benefit analysis and its variants)
Description of intent: Focus is on quantifying impact and cost-effectiveness. Entails some comparison—against expectations, over time, and across types of interventions, organizations, populations, or regions.
Contributors: Expected Return (Weinstein, 2009); Expected Return (William and Flora Hewlett Foundation, 2008); The Best Available Charitable Option (Acumen Fund, 2014).
Key values: Monetization; financial viability; social impact.

Framework: Randomized control trials (and quasi-experimental methods)
Description of intent: Measures the difference in outcome between targets that receive an intervention and equivalent units that do not. Any difference measures program "effects."
Contributors: Campbell and Stanley (1966), Boruch and Wothke (1985), Boruch (1997), Abdul Latif Jameel Poverty Action Lab (n.d.).
Key values: Quantification of impact; reduction in (selection and allocation) bias; internal validity; replication; dissemination.

Framework: Broadened accountability
Description of intent: Encompasses objective and subjective measures; recognizes multiple stakeholders.
Contributors: Action Aid International (2011); Keystone Impact Planning, Assessment and Learning (n.d.); iScale's Impact Planning, Assessment, Reporting and Learning (IPARL) (n.d.). Also: Patton (1997), Brown and Moore (2010), Unerman and O'Dwyer (2006), Morrison and Salipante (2007), Ebrahim (2010).
Key values: Intended use by intended users; prioritization; negotiation; learning and decision-making.

Note: Adapted, in part, from Ebrahim and Rangan (2010).
A fourth framework for assessing program effects is randomized control trials (RCTs), which involve randomly assigning participants to a treatment group or to a control group that does not participate in the program but is otherwise equivalent to those who do (Campbell & Stanley, 1966; Boruch & Wothke, 1985; Boruch, 1997). Any differences in outcomes between the two groups should represent the effects of the program. Randomization utilizes chance to determine assignments in order to reduce bias. RCTs are currently used in evaluating international development efforts by research organizations such as the Abdul Latif Jameel Poverty Action Lab (n.d.). They are feasible and appropriate for certain types of impact evaluation—such as clinical trials of vaccines or other tangible health care interventions (e.g., Hallfors et al., 2011) or teacher incentive programs (e.g., Muralidharan & Sundararaman, 2011)—where groups can be isolated for comparison. Critics say randomized evaluations are expensive, ambiguous and unfairly deprive control groups of the positive benefits of interventions. Some assert that quasi-experimental methods (like pretest-posttest design and interrupted time-series design, among others), where groups are assigned by means other than true randomization, are a good substitute. According to Valadez and Bamberger (1994), international development agencies like the World Bank successfully apply quasi-experimental designs. Scholars caution that to minimize the potential for bias in group selection, even quasi-experimental methods require a thorough review of literature prior to collecting variables to serve as statistical controls and as the basis for statistical modeling. In reporting findings, evaluators are urged to "advise stakeholders in advance that the resulting estimates of program effect cannot be regarded as definitive" (Rossi, Lipsey, & Freeman, 2004, 297).
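The contrast between the two designs can be illustrated with a short, self-contained sketch (synthetic data and invented variable names; it is not drawn from any study cited above). Under random assignment, a simple difference in group means recovers the built-in program effect; under non-random assignment, the analyst must add baseline covariates as statistical controls, and even then, as Rossi, Lipsey and Freeman (2004) caution, the estimate is not definitive:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    # Synthetic data: half of n households are randomly assigned to a program
    # whose true effect on the outcome is fixed at 5 units.
    treated = rng.permutation(np.repeat([0, 1], n // 2))
    baseline = rng.normal(50.0, 10.0, n)                        # pre-program income
    outcome = baseline + 5.0 * treated + rng.normal(0, 5.0, n)  # post-program income

    # Randomized design: difference in mean outcomes estimates the effect.
    diff_in_means = outcome[treated == 1].mean() - outcome[treated == 0].mean()

    # Quasi-experimental analog: regress the outcome on treatment plus the
    # baseline covariate (a statistical control) by ordinary least squares.
    X = np.column_stack([np.ones(n), treated, baseline])
    coefs = np.linalg.lstsq(X, outcome, rcond=None)[0]
    adjusted_effect = coefs[1]  # coefficient on the treatment indicator

    print(f"difference in means: {diff_in_means:.2f}")
    print(f"covariate-adjusted:  {adjusted_effect:.2f}")

Both estimates land near the true value of 5 here only because assignment really was random; with non-random assignment, the adjusted estimate is only as good as the covariates chosen, which is why the literature review prior to model specification matters.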
The fifth dominant framework is the broadened, integrated or holistic accountability framework,5 which embraces the upward, internal, lateral and downward accountability requirements facing INGOs. International development funding agencies (e.g., Action Aid International, 2011) and other nonprofits and global networks (e.g., iScale, n.d.; Keystone Accountability, n.d.) have recognized the multiple and often competing dimensions of accountability that INGOs must address. Scholars, notably Patton (1997), Brown and Moore (2010), Morrison and Salipante (2007), Unerman and O'Dwyer (2006) and Ebrahim (2010), recommend that NGO leaders deliberate and prioritize their accountability to key stakeholders. Brown and Moore (2010) suggest that accountability be aligned with the INGO's guiding strategy. Thus, service delivery INGOs may owe greatest accountability to donors and service regulators; capacity-building INGOs may be most answerable to the clients whose capacities they are developing; and policy INGOs may be chiefly accountable to the targets of their advocacy.

Footnote 5: This is a term I borrow from Morrison and Salipante (2007).

Patton (1997) similarly calls for a utilization-focused evaluation and asks evaluators to identify, organize and engage primary intended users, i.e., those who have a direct, identifiable stake in the evaluation. To deepen an evaluation's utility, evaluators are encouraged to also identify the secondary stakeholders and audiences of the evaluation (Patton, 1997). Since the needs of each stakeholder group most likely cannot be fully addressed in a single evaluation, it is often useful to identify and focus the evaluation on the needs of one or two primary stakeholders. The ethical basis of this framework is particularistic, with each INGO encouraged to realize its highest values by negotiating performance measures with its own set of key stakeholders, thus enhancing the utility and actual use of evaluations.

Each of these frameworks provides a reasonable approach to evaluation. But the author argues that their usefulness in
demonstrating accountability—to upward, internal, lateral, downward audiences and/or to wider groups of constituents—is limited in the pursuit of an ethical INGO evaluation. In the last two decades, data obtained on INGO performance show the "dark side" (Slim, 1997, 244) of INGOs struggling to meet the pressures and expectations of multiple, often competing accountabilities. INGOs are accused of failing to make a difference; of being too corporate and professionalized; of being departmentally fragmented and of misrepresenting community needs; of yielding to the demands of their donors; of lacking critical internal reflection; and of having weak downward accountability procedures (Chambers & Pettit, 2004, 137; Edwards & Hulme, 1995; Lewis and Opuku-Mensah, 2006, 668; Power, Maury, & Maury, 2002, 100; Zaidi, 1999).

The standards set by evaluation associations such as those listed earlier are one response to pressures for tighter INGO regulation exerted by government, donors, media and the general public. However, the author finds it critical to distinguish between institutional mechanisms designed to serve a regulatory function over evaluations and efforts by membership associations and networks to establish ethical guidelines. The latter require self-regulation and "felt-accountability" (Fry, 1995) of a program evaluator whose duties and obligations extend to other program evaluators, to those being evaluated and to the general public.

Despite the prevailing understanding that "ethics matter" to evaluation (Morris, 2003), there is little research on the nature and effects of ethical issues in the evaluation of programs funded by a growing number of young, resource-limited INGOs. Resource limitations are known to have ethical implications for programs by inhibiting an evaluator's ability to fulfill his or her moral duties and obligations as an evaluator (Hendricks & Bamberger, 2010). These limitations can hinder an evaluation's ability to justify the decisions that follow from it; decisions that Rachels (2003) argues carry the potential to be unprincipled. This article presents a case study to illustrate the circumstances and potential ethical implications surrounding the evaluation of one such resource-limited INGO.

3. Methods

The article draws on existing research and uses a multi-method ethnography (participant observation, interviews, focus groups, secondary source analysis, and audio and video recordings) to study the evaluation practices of a young U.S.-based INGO that the author calls Women's Resource Fund (WRF). For purposes of confidentiality, all names used in the study are pseudonyms and specific locations are also disguised.

Women's Resource Fund is an INGO headquartered in a small city in the Midwestern United States. Anna, the founder, was a graduate student of the author (from 2007 to 2009). The author took a keen interest in the student's efforts to start her INGO and was later invited to create a research agenda to inform the INGO's evaluation practices. To start, the author observed WRF's governance for three months. This involved participant observation of one board meeting, one board retreat, two fundraisers and two Program Committee meetings. The author also conducted archival research into WRF's grant applications, Anna's field reports, the INGO's quarterly newsletters and its audited financial statements. Semi-structured interviews were held with Anna, three members of WRF's Program Committee and five members of the board.
These conversations were structured to gather data on the subjects' respective understanding of WRF's mission, strategy and the key successes and challenges it faced as it planned, monitored and evaluated its programs.

The author traveled with the INGO on its evaluation trip (March 3-20, 2011) to East Africa, where WRF conducted evaluations of ongoing projects in Uganda, Kenya and eastern DR Congo. This was
the INGO’s second trip to the DR Congo; the first was made in 2009 to solicit grant proposals. On the 2011 trip, Anna and a founding WRF board member evaluated two projects in Uganda, one in Kenya and six in eastern DR Congo. The author was a participant observer of the evaluations and other planning activities at all nine sites and had unique access to observe the INGO’s challenges as it implemented evaluation and grappled with issues over organizing its future interventions in DR Congo. The choice to concentrate on this particular case is driven by its revelatory nature (Yin, 2003). The subject project is located in a district in the eastern DR Congo (Fig. 1), a vast, resource-rich country in the heart of Africa. At the time of this visit, the district in eastern DR Congo was emerging from a longstanding and violent conflict between two main ethnicities, the Hemu and the Lendu. Given this context and the INGO’s own time and budget constraints, the evaluation trip to the district was planned to be short, from March 8–13, 2011. The author audio recorded and occasionally video recorded all key interactions observed between WRF and the community members visited at the project sites, the Committee for WRF in the Congo, and amongst the INGO leaders. These ‘‘key interactions’’ are the embedded units of analyses of this study. A Congolese hired by the INGO verbally translated these interactions into English for the visiting INGO team and translated the visitors’ responses into French or Kiswahili. Limited time and access to a translator prevented the author from independently interviewing the Congolese stakeholders, conversations that would have provided a deeper, more complete understanding of the program’s implementation and the implications of the evaluation on the program’s future. Upon returning to the U.S., the author continued participant observation of WRF’s follow-up activities for the next 18 months, consisting of archival research,6 Program Committee and board meetings, and fundraisers. All the audio recordings collected by the author were translated into English by a U.S.-based professional translation firm. The case study presented here was developed into an audio-visual teaching case later (Ramanath, 2014) and was shown to Anna, the Program Committee and the board. The data collected was manually coded to identify the key ethical challenges facing INGOs as they evaluate their projects. These key challenges were then examined in the context of the four principles of ethical evaluations enumerated earlier. 4. The INGO and the eastern DR Congo 4.1. Women’s resource fund and entry into the DR Congo Women’s Resource Fund (WRF) registered as a tax-exempt 501(c)(3) public charity in 2006 and began work with a threepronged mission to advance health, education and economic development opportunities among the world’s 1.5 billion women who earn less than $2 a day. Anna was the INGO’s executive director and sole full-time paid staff.7 A portfolio of developing country NGOs and other grantees was built almost entirely on the basis of referrals and friendships and through networks Anna had developed through her former work as a church volunteer and a missions director. Grant applications were solicited online (via WRF’s website), but mostly in person when Anna visited the host countries. WRF’s Program Committee vetted the applications on 6 Anna wrote a field report of her March, 2011 trip to Africa which was published in an English-language quarterly newsletter and uploaded to WRF’s website. 
This was also mailed to several U.S.-based stakeholders. Anna also verbally reported to her board on the number of women that WRF had ‘benefited’. A French-speaking WRF volunteer translated all documents collected in Africa (evaluation reports and new grant applications) to English. 7 In January 2012, WRF hired a part-time administrative assistant on an hourly basis and in 2013 a part-time, hourly bookkeeper.
Fig. 1. Map of DR Congo: the elliptical shape represents WRF's area of work in the eastern district of DR Congo. Source: http://www.nationsonline.org/oneworld/map/dr_congo_map2.htm (accessed February 2014).
WRF's Program Committee vetted the applications on the basis of participant demographics, project sustainability, budget, other funding sources, accountability systems, the applicant's prior experience with WRF, and other project details. However, the committee only seriously considered applications from people Anna personally knew and trusted or who came recommended by others that she knew.

WRF's entry into eastern DR Congo followed a similar pattern. In her brother's church in early 2009, Anna met Edmond, an aspiring lawyer from DR Congo who was fluent in English. His father was a former Parliamentarian who had been President of Civil Society of a northeastern district of the DR Congo from 2001 to 2007 (see map, Fig. 1). Anna and her INGO knew little about the DR Congo beyond some newspaper accounts about the country's legacy of brutal rape and abuse of women. Edmond offered to escort her there.

A few months later, in mid-August 2009, Anna headed to Africa to visit the DR Congo, to make a follow-up visit to a WRF-funded project in Kenya and to meet a potential partner in Uganda. She arrived in eastern DR Congo accompanied by a founding board member, Alicia, and her new contact, Edmond. Their entry via Rwanda was bumpy. They were told that Americans needed visas to enter the country, and they went over-budget while waiting three days for emergency visas in Rwanda near the border at Bukavu, DR Congo. Three days, a boat ride (to Goma, DR Congo) and another flight later, they reached Edmond's family home in a city in an eastern district of the DRC. Edmond's father, the former Parliamentarian and civil society leader, hosted them. Anna was struck by the obvious respect he commanded within the district and by his fluency in local dialects.

Edmond and his father escorted Anna and Alicia through exceedingly rough terrain to visit several villages in the district. At their first stop, about a 30-mile drive from Edmond's home, a large group of villagers greeted them. Anna's team pulled aside six of the village women for a group conversation that Edmond translated.
They shared their experiences of the 2006 war, in which some had lost their husbands and children. They also talked about the current lack of food, work, money and education. Before leaving, the visitors were handed three proposals from a community-based organization (CBO) of village women, headed by someone named Cecile, which was working to improve the conditions of local women. The proposed projects included the purchase of a grain-grinding mill and materials to construct a shed to house it, and also a vocational school for women.

4.2. Project selected and funds wired: September 2009-May 2010

Back in the U.S., WRF's Program Committee8 translated the documents from the CBO from French to English, ranked the application on a rubric of 15 different criteria9 and then met with Anna. In the grant application to WRF for the mill,10 Cecile described how the CBO had created several activities designed to improve the socio-economic situation for area women and foster economic independence. It did not specify the nature or outcomes of its past activities. The application ignored requirements specifying the number of intended beneficiaries of the project and did not state who would operate the mill or how beneficiaries would be selected.

Footnote 8: Board members and volunteers were placed on six committees—Program, Finance, Administration, Development, Marketing and Personnel.

Footnote 9: Each of the 15 criteria is awarded a score from 1 through 5. The three-member Program Committee's scores for the mill project ranged from a high of 71 to a low of 54 out of a possible 75 points. The member awarding the lowest score had concerns about accountability mechanisms, demographics, sustainability and budget.

Footnote 10: WRF shared only the mill project proposal with the author. This proposal was handwritten in French and included five sections: context and justification for the project; project objectives; proposed activities and inputs required; planned activities; and budget.

In the application form, Cecile noted that the
CBO’s members would contribute funds to the project but did not specify an amount. The Program Committee approved the mill grant application as an economic development project, awarding it the full $2000 requested. It also approved the CBO’s request to fund a vocational school for women to the extent of $2000 but made that funding contingent on a positive appraisal of the CBO’s performance in the mill project. In March 2010, WRF signed a Memorandum of Understanding (MOU) with three Congolese volunteers designated as the Committee for WRF in the Congo. WRF and the committee mutually agreed that the latter would receive no remuneration from the INGO other than payments for expenses incurred in administering and monitoring the program.11 These three, who were put in charge of the project and its accountability, were Edmond’s father, selected as the committee representative; a male medical doctor that Edmond’s father recommended as WRF’s health coordinator in DR Congo; and a female NGO professional also well-known to Edmond’s father who was assigned as project coordinator for this and five other WRF-funded projects in villages throughout the district. Upon signing the MOU with WRF, the Congolese team also signed an MOU with the CBO’s president and the deputy leader in July 2010. This second MOU was in French and titled the mill project somewhat differently: Installation of a Mill and Agriculture and Livestock.12 The MOU with the CBO did not mention WRF’s major accountability demand, namely, the number of women to be impacted by the project. Anna handed responsibility to what she believed was an able team of experienced professionals who were trusted by the locals. They received a $280 stipend for expenses to be incurred in monitoring and administering the mill project. Funds for the mill project, totaling $2280, were wired to a bank in eastern DR Congo in May 2010 and the project began as scheduled in July 2010. 4.3. Evaluating the mill project: March 10, 2011 Less than a year after wiring the money to the Committee for WRF in the Congo, Anna prepared to visit the mill project site to evaluate progress and, if satisfied, disburse the promised funds for the vocational school project and even consider potential new proposals from the CBO (and other groups). This occurred early in the grant cycle but Anna was eager to get data that she could use as evidence to buttress further fundraising appeals for other projects in this region. Although Anna’s team had a smoother arrival into eastern DR Congo this second trip, other tensions soon surfaced. On this visit Anna did not have the advantage of a skilled translator like Edmond. She had hired Edmond’s friend, Chris, and was disappointed to find that he had no experience translating, was more fluent in French and Kiswahili than English and did not speak the language used in Cecile’s village. In addition, Edmond’s father, Anna’s trusted, volunteer Congolese representative, had developed throat cancer and could only whisper. Neither of the other two members of WRF’s DR Congo team could attend the site visit. Before going there to meet the CBO, Edmond’s father showed Anna a written report of all the expenses incurred in purchasing the mill and constructing the 11 The MOU specified that the committee was to attend the first visit of each project when funds would be distributed, to accompany the WRF team on monitoring visits, and to visit each site near the end of the project to verify completion and to fulfill their administrative responsibilities. 
12 Agriculture and livestock were not mentioned in WRF’s records nor was the creation of a fund for mothers. These items were mentioned in the original French version of the application submitted by the CBO, but apparently did not make it into the English translation in which the project is entitled simply, ‘‘the mill project to benefit area women.’’
surrounding shed. Though detailed, it mentioned neither the number of women who benefited from the project nor the extent and nature of their involvement. Anna asked Chris to tell Edmond's father and the rest of the Congolese team:

We have to know the number of people helped... Also, how they are going to count who is helped. They have to start predicting, have to set goals. Makes a big difference. Five people with $5000 or 5000 people with $5, must know. The reason we ask this is because of the cost-benefit factor: how much did the project cost and how many people were helped? That they need to project.

Complications over the costs of renting a jeep, a driver, a translator and accommodations made Anna worry that her trip would go over-budget. The site visit thus began with concerns about trip expenses, the translator and the absence of information on the extent and nature of the CBO women's involvement in the mill project.

The team allocated three to four hours for their visit to the village. There they were warmly greeted by local women who had assembled in a church along with a sizeable number of men, including Edmond's father, Chris, WRF's designated driver (Edmond's brother), the village chief, and other men and boys who trickled in throughout the meeting. Cecile began by leading a prayer song and remained standing to read a lengthy account of all the expenses incurred. She spoke in French and Chris verbally translated into English (Image 1). Cecile stated that expenses had consumed almost all of the $2000 from WRF. Transporting materials was very expensive, she reported, and seasonal fluctuations had caused other problems. She said the grain seeds had been attacked by insects. This confused Anna, since WRF had not approved the purchase of seeds, only of a mill and a shed. She was told that besides purchasing a grinding mill, the money had been used for an agricultural project.

Cecile's report raised additional questions and revealed that the grinder was being operated by a man who was given a small, unspecified sum of money. Anna grew visibly upset as she requested clarifications and got no answers to her questions about who was benefiting from the mill, whether the women were using it for free or how they were using it. At this point, she loudly remarked, "So basically no one benefits from the mill?" (Image 1)

Even without a full translation of all of Anna's comments, the tension in the church was palpable. Cecile was quivering and mostly mumbled her answers to the translator, who prodded her to speak up. The secretary and the treasurer of the CBO did not speak. Amidst all this, Anna accused the translator and Edmond's father of coaching the women and of interrupting the flow of her conversations with them.
Image 1. Chris translates Anna's question for Cecile: "So, basically no one benefits from the mill?" Photo: Primary author, 2011.

Image 2. Anna on her walk to the mill with select members of the CBO and Chris (her translator). Photo: Primary author, 2011.

4.4. Maybe we should go see the mill!
Before things degenerated further, Anna suggested taking a walk to the mill with some "beneficiaries," including the secretary and the treasurer of the CBO, five other women and three men. During the 30-min walk (Image 2), the secretary told Anna that about 450 women across the area were targeted to benefit from the mill, but lack of rain had kept crop yields and mill usage low. Closer scrutiny of this tape-recorded exchange between Anna, the secretary and the treasurer revealed that many of Anna's questions about the CBO (how it functioned, what it did and how it distributed benefits) remained poorly answered. When the group reached the mill, repeated attempts to start the grinder failed, raising further suspicion. To Anna's visible relief, the driver who had accompanied the group took over and managed to jump-start the grinder (Image 3).

Returning to the village church to deliver concluding remarks, Anna again asked how many had benefited from the mill. Five women in a church packed with at least 60 women, men and children raised their hands. She then asked how many of them were likely to use the mill once there was grain to grind. A substantial number of hands went up (roughly 20 women, four men and three young boys). Anna declared herself satisfied with the project's outcome and announced that WRF would fund the CBO's second proposal to start a vocational tailoring project for women. She noted that she would return in 2012 to assess how the tailoring project had influenced area women.

After returning to the U.S., Anna reflected further on her decision to fund the second project proposed by the CBO. Anna told the author that on her walk to the mill, she asked the women how many children they had. The CBO secretary told her she had 15 but 5 had died in the war; another said she had 10 but 4 had died. Anna concluded that life for them was difficult. She felt that Cecile lacked leadership abilities but had accomplished the stated goal of setting up a mill. Anna therefore saw no reason to back out of the tailoring project13 and she handed $2000 to the CBO for that project immediately after her speech at the church (Image 4).

Anna returned to the village in March 2012, as part of a routine yearly trip that she makes to visit WRF's project sites around the world, and found that Cecile was gone.14 She had been replaced as president of the CBO by the former secretary who had accompanied Anna to the mill the year before. Since that visit, Anna has not found time to revisit the grinding mill or to inquire about its use. Anna got no answers from CBO members to her inquiries about Cecile's whereabouts or the reason for the leadership change.
Footnote 13: WRF committed to disburse $2000 to the CBO for a vocational training project by November 2010. Since WRF did not visit DR Congo in 2010, there was a four-month delay in delivering on this promise. Soon after Anna's March 10, 2011 speech in the church, MOUs were signed (between her Congolese team and WRF and between the Congolese team and the CBO) and $2000 in cash was handed to Cecile.

Footnote 14: Between March 2011 and 2012, WRF's chief liaison in the Congo (Edmond's father) had fallen seriously ill. WRF, as a result, had no advance information on the progress of the mill project and the status of the recently funded vocational training project.
Image 3. Anna expresses delight at seeing the grain grinder start. Photo: Primary author, 2011.
Image 4. Anna hands $2,000 to Edmond’s father who, in turn, hands the money to Cecile. Photo: Primary author, 2011.
5. Evaluation circumstances, ethical implications and future actions for resource-limited evaluations

Resource constraints plague all types of evaluations, particularly those of international development projects, whose evaluators may not fully comprehend or be aware of the complex contexts surrounding the work and may lack time and funds to visit or communicate with the project site. That an evaluation in a remote and troubled area such as eastern DR Congo would be difficult for a young INGO such as Women's Resource Fund is not surprising. Instructive, however, is an examination of how decisions are made in light of these challenges, what ethical implications they may carry for the program and what measures might help minimize the ethical challenges evaluators face under resource constraints.15

Drawing on Hendricks and Bamberger's (2010) framework, the author summarizes the challenges of conducting evaluations, their ethical implications and some future steps for organizing and implementing INGO evaluation in Table 2. Column 1 contains the key challenges encountered in the course of planning for and evaluating the program. Column 2 describes how the INGO struggles to realize the four ethical values in the context of each stage of evaluation. Challenges in meeting these values, and the resulting ethical implications, can arise during any stage of the evaluation. Each value is then articulated in a proposition,
15 The case study INGO’s evaluation struggles reflect its project planning and design challenges. The author also recognizes that the case can be analyzed through multiple lenses, including the tensions between upward (donor-driven, quantified) and downward (context-based, culturally sensitive) accountability, or between connoisseurship evaluation (personal judgment) and criticism (verifiable information) (Eisner, 1976) or the dilemmas and consequences of using internal versus external evaluators (Mathison, 1999; Torres et al., 1997). The author acknowledges the value of other prisms but argues that an ethical framework helps identify the potential implications of the tensions observed and improvements for conducting future INGO evaluations.
Table 2. Applying a framework to identify and address ethical implications of INGO-led evaluations.
concluding with a summary of suggested strategies (Column 3) to better align a young INGO's capacity with the demands to produce an ethically sound evaluation.

5.1. Proposition 1

If the evaluation has limited resources to research all relevant data and documents in advance, then the program may struggle to gather valid and useful information on its effects.

Evaluators are often called upon to make explicit the underlying assumptions about how programs work—the program theory—and then use this theory as the foundation of a systematic evaluation (Rogers et al., 2000). Knowledge of this theory helps identify and prioritize key evaluation questions and guides the selection of data collection and analysis techniques (Birckmayer & Weiss, 2000; Donaldson, 2007; Wholey, 1987). However, if the evaluator does not have the time, the methodological expertise or the financial wherewithal to design an appropriate evaluation, then the evaluation may never answer such critical questions as whether the program works or if the pilot should be extended. Some scholars believe there is nothing inherently wrong with treating one's program as a "black box" (Astbury & Leeuw, 2010), but the author finds that this carries potential ethical implications for the program (see Column 2, Table 2).

While WRF's Program Committee favorably evaluated the grant application for the grinding mill, one reviewer had expressed concerns about the project's accountability, demographics, budget and sustainability. Typical of WRF's culture, Anna prevailed upon the committee and exercised what Eisner (1976) terms "connoisseurship" to judge the proposal's merit.16 The INGO wired funds for the mill project to DR Congo and Anna made an evaluation visit to the village in 2011 to test her judgment. She expected to find a mill both operated by and benefiting women. The day before her site visit Anna received a report from her local representative accounting for expenses but including no data on how many women had "benefited" from the mill. The site visit also did not yield all the needed information along the logic chain: from outputs (a purchased mill) to desired impact (economic empowerment of women).

Footnote 16: Connoisseurs, according to Eisner (1976, 140), "appreciate what they encounter." Such appreciation could be tantamount to liking what one encounters but is defined by Eisner as an awareness and understanding of what one has experienced. This awareness then forms the basis of the judgment. Theoretically, connoisseurship is distinguished from criticism, which is more technical and scientific. Eisner (1976, 141) writes: "Effective criticism requires the use of connoisseurship, but connoisseurship does not require the use of criticism."

Lacking information about how a program should work obviously hampers Anna's ability to design a full and realistic appraisal. It raises questions such as: Is it fair to subject the community to a summative evaluation of the program's ability to impact women when there is no relevant data on the level and nature of their involvement? Are women of the CBO, in fact, operating the mill? Did the MOU clearly articulate the nature of women's involvement in the mill? Is the program, therefore, ready to be evaluated for impact on women? What types of evaluation might be most suitable at this early stage of the program (such as participatory reflection or an institutional theory approach)?

In order to design an ethically justifiable evaluation for WRF, it may be critical to develop the design in dialog with the INGO's Congo Committee, with the CBO participating in the process. As evaluator, Anna cannot know what data to collect without systematically analyzing what was agreed upon in the first place, i.e., without a careful scrutiny of what the grantee asked for, why the INGO approved the grant and the subsequent terms of the award. Failure to examine these fundamentals has ethical implications for WRF's program because, as the evaluator, Anna struggles to find what she wants to observe, subjecting the local
stakeholders to questions that run the risk of being answered incompletely or even inaccurately, simply to please the evaluator.

At the church, Anna runs into two pitfalls that highlight the INGO's research limitations: (1) Anna mistakenly believes that the INGO only supported a mill program and is surprised about the purchase of seeds with the awarded funds; and (2) she has no details about how the CBO operates, let alone how it utilizes the mill, and unsuccessfully presses for a response about the nature and number of CBO beneficiaries. When Anna makes her spontaneous walk to the mill with other CBO leaders, they inform her that 450 women are targeted to benefit from the mill. Anna accepts the number prima facie and, upon returning to the U.S., includes the number in her estimate of nearly 1000 women that she declares were "positively impacted" by WRF's projects in Uganda, Kenya and the DRC.17

Footnote 17: The sum is highlighted in WRF's summer newsletter.

To avoid these pitfalls, WRF's Program Committee and the local Committee for WRF in the Congo might corroborate such details prior to a site visit. This would require building knowledge of WRF's mission and imparting evaluation capacity to Anna, the Program Committee and other local stakeholders, including the translator and CBO leaders. The connoisseur (Anna) needs to engage the critic (the Program Committee) to examine the antecedents of grantmaking and design a system to monitor and evaluate the grant. This system must also incorporate input from the CBO and the Committee for WRF in the Congo. Such a collaboration has the potential to provide a stronger foundation for organizational learning and continuous improvement (Fetterman, 1996), which is discussed in more detail later.

However, cultivating capacity among lay members of the community could be costly, requiring the Committee for WRF in the Congo to travel periodically to monitor developments at the program site (a responsibility WRF's stipend was intended to cover). Holding local partners accountable for every detail also runs the risk that future programs may deliberately or unconsciously be orchestrated to show results that please the INGO (Fowler, 2000, 23-24). At a minimum, the INGO must build the evaluation capacity of its local partners so that they can be held accountable for monitoring progress (and feasibility) toward the INGO's intended outcome of benefiting local women. WRF's Congolese team (and the CBO) could be trained to present and document their own account of the successes, challenges and support received. Photographs and storytelling might be used to gather context-rich details not captured in dry technical assessment renditions (Davies & Dart, 2005; Denning, 2005). Although such an institutional theory approach (Herman & Renz, 1999; Murray & Tassie, 1994) might require a longer time frame, it would allow host country stakeholders to drive the evaluation process in a manner that better suits their context and minimizes ethical implications.

5.2. Proposition 2

If the evaluation has limited resources to undertake a systematic field visit, then the program may not acquire a full understanding of the harms and benefits it is distributing.

Evaluators are called upon to conduct systematic, data-based inquiries (American Evaluation Association, 2004). However, if the program's context includes complicated religious and ethnic issues, and if the evaluation is incapable of detecting these distinctions, then the program will not learn about the complexities or be able to act on them.
The meeting in the village began with CBO President Cecile's formal account of expenditures. As both evaluator and executive director of the INGO, Anna was eager to find out how many women had benefited from the mill. Her
repeated efforts to get an answer did not elicit a response from any of those present. Anna had only a few hours to spend in the village. She yelled in frustration, "So, basically no one benefits from the mill?" to a room full of intimidated evaluands and a translator not proficient in their local dialect. Cecile stayed largely quiet despite repeated queries from the funder, the translator and Edmond's father.

This encounter left the INGO with a series of unanswered questions with serious ethical implications for the program. Are the women confused over what Anna means by the term "benefits"? Is some meaning getting lost in translation? Are men co-opting the mill? Is Cecile a legitimate leader or simply a cipher? Is there a village hierarchy that is inhibiting some women from voicing their opinion about the mill? (As noted earlier, the village chief and other men and boys were present throughout the meeting at the church.) Why does a man operate the mill? Who is he, what has he done and how much is he being paid? How much do they charge for grinding? Are the profits distributed to the local women's CBO, and how? In a militaristic culture of tremendous male exclusivity and a deep-rooted sense of patriarchal entitlement like that of the DRC (Farwell, 2004; Yuval-Davis, 2004), questions like these beg for answers.

Rigorous data collection is expensive to undertake. Yet a combination of approaches can help control costs. Where harms and benefits are not immediately observable, data can be obtained through a more representative sample of primary and secondary stakeholders (Patton, 1997), for instance from female members of the CBO other than its leaders and from female non-members; information can be triangulated from sources such as the INGO's Committee in the Congo, who might inform judgments about the program's progress; or the site might be independently evaluated by a skilled board member or a volunteer instead of a founder-leader. Broadly stated, INGOs and their evaluators must be aware of the "dark side" of INGO work, explained in a previous section, and must reveal how their evaluation methods influence what is or is not shared by those they seek to evaluate.
5.3. Proposition 3

Superficial data analysis results in an incomplete understanding of the elements influencing performance and thus runs the risk of perpetuating the very inequities the program sets out to address.

In this case, Anna, as executive director and evaluator, takes sole responsibility for undertaking a task that tests her capacity to carry out a careful inquiry. Alicia, the accompanying board member, is a silent witness and plays no direct role in the course of the site evaluation. Anna grows irritable over not fully comprehending the cause of Cecile's fear and silence and, above all, is vexed that no one will articulate how the program is tangibly benefiting village women.

Faced with a complex situation, decision-makers simplify the task. Wright (1974) points to several simplifying strategies that decision-makers use. In the months following the trip, Anna justified her decision on the grounds that, "There was, after all, a fully functional mill, wasn't there? I was making inroads into a new community." Once she witnessed the grinder working, Anna excluded from consideration certain data about mission-related work (namely, details about its benefits to women), even though she considered this important input under less trying conditions. She intentionally limited her data intake by repositioning the focus of her evaluation to a more positive aspect of the mission, namely, that the community had purchased a working grinding mill. The money, she told herself and her board, was used for the designated purpose, and $2000 was such a small sum.

Besides deciphering the task environment that compelled Anna's adaptation, it is necessary to make explicit how her adaptive strategies influence the distribution of equity in the INGO's program area. Deeper analysis of the socio-political and economic context within which the women are called upon to operate (and benefit from) this mill is critical. The site of this program was the epicenter of a bloody, eight-year conflict (1999-2007) between two tribes, the Hemu (powerful livestock keepers) and the Lendu (majority impoverished agriculturists), over land disputes. An uneasy calm has prevailed since 2011 amidst sporadic militia activity, high crime levels and deep inter-ethnic distrust. Militia leaders continue to recruit combatants, supply weapons and commit human rights abuses (Office for the Coordination of Humanitarian Affairs, 2007, as cited in Pottier, 2010, 30).

The INGO was aware of the continued presence and effects of a severe ethnic conflict. The death toll and the loss of children and livelihoods consumed Anna's conversation with the CBO secretary and treasurer en route to the mill. But how these deep-seated conflicts may influence performance, and vice versa, remains poorly examined. Addressing this ethical implication requires careful and potentially time-consuming and expensive data analysis, which could stall or even end the program in this district altogether.

To minimize the ethical implications, the slightest misgivings about implementation and consequences must be shared and deserve to be brainstormed within the INGO, between the INGO and its Committee in the Congo and also between the Committee and the CBO. They are the primary stakeholders of the evaluation and may contribute to a deeper understanding of the context shaping the INGO's work in this region (Patton, 1997). Yet even those measures cannot guarantee that resource-challenged INGOs like WRF will produce effective evaluations. Local elites may have their own vested interests and resist discussion of deep-seated divides, particularly with fly-by evaluators. Here INGOs may benefit from the judgments of "experts" who "are knowledgeable about NGOs and sympathetic to their aims" (Horton, 2011, 41): displaying characteristics such as relevant training and experience, familiarity with relevant literature and data and more general traits of intelligence and insight.

5.4. Proposition 4

If the evaluation is limited in its capacity to report its findings to all stakeholders, then the program is less likely to have learned about itself and more likely to be driven by the agenda of a few.

Evaluators are expected to know exactly what data to collect and how to analyze them. They also must have a plan for the presentation of those data and the results that simply and accurately answers the evaluation questions. Ideally, the report would be customized for various audiences to read, provide feedback and learn from the findings. Without such a report, including a description of the scope, focus and methods of the evaluation, it is difficult to understand the basis for decisions that follow from the evaluation, potentially undermining the project itself. Even elaborate reports cannot guarantee that those producing them will know how to use their findings to change behavior (Ebrahim, 2005; Tassie, Murray, & Cutt, 1998).

Limited resources make it tempting for NGOs such as Women's Resource Fund to look no further than easily measurable components of their work, such as the tangible output of a functioning mill, limiting the value of the evaluation and the project. A joint assessment, involving WRF, its Congolese Committee and the CBO (the primary stakeholders), of how a mission to advance the economic empowerment of women can be achieved is critical for learning and capacity building. This is particularly crucial considering two developments following the March 2011 site visit: (1) WRF has not reexamined the performance of the mill and (2) the CBO leader was replaced without explanation. To minimize potential ethical implications, the INGO needs to address
questions such as: How far do INGO commitments extend: to the successful completion of the program? Should more money be allocated to get the mill off the ground, since Cecile reported that the initial $2000 was nearly all used up? What happened to CBO President Cecile? Is the new leader still responsible for the mill project? What happens if the mill goes unused or if the weather again does not cooperate? Does Anna's involvement in the project end completely, with the CBO assuming sole responsibility? Could the mill become a white elephant, a symbol of abandonment or of unfulfilled potential and promises? Does the INGO have a responsibility to know more about the CBO and its competence, particularly in light of the puzzling leadership change? If the purpose of evaluation is to build capacity for organizational and sustained community change, then how should the decisions stemming from such evaluations be reported and acted upon, particularly within the communities and groups they seek to support?

A report does not have to be elaborate or expensive, but it must clearly communicate why the INGO is doing its current activities (Torres, Preskill, & Piontek, 2005). Reports can take numerous forms (including videos, photographs, role-plays and stories) that can be shared with the program participants, in this case the Committee for WRF in the Congo, the CBO, and WRF's staff, volunteers and board. Evaluation thus becomes a means to more broadly inform the decision-making and learning processes that should follow.

6. Lessons learned

This study displays some well-known intricacies of development evaluation. The case, in which a young Northern INGO is both the funder and the evaluator, provides a rich point of departure for further examination of the ethical implications generated by resource-challenged evaluations. For young Northern INGOs such as Women's Resource Fund, evaluation is complicated both by resource deficits and by how INGOs fulfill their missions. Most Northern INGOs do not implement projects by themselves but work, largely or wholly, through local "partners" such as Southern NGOs and CBOs. Women's Resource Fund, the subject of the case, works with CBOs and other trusted hosts comprised of local elites and translators to implement its projects. To comprehensively and ethically evaluate its activities, it must therefore evaluate these local partners as well, while remaining mindful that excessive INGO demands on host country participants risk complicating local agendas and explicitly or implicitly causing them to engineer reports merely to please the INGO.

The article argues that evaluations undertaken through an ethical lens can more adequately capture and reconcile INGO demands for accountability with the needs of host country evaluands. Ethical frameworks offer INGOs and their evaluators the structure they need to better understand themselves and the context of their host country, justify the decisions that follow from evaluation, and move forward. Evaluation frameworks (Table 1) and national and international ethical codes offer INGOs a starting point. Moving beyond the standard tenets of ethical conduct (that an evaluation must be respectful of others, serve others, be just and honest, and build community), the author explores how challenges in evaluation affect the actual conduct of the program evaluation and raise concerns about potential effects and sustainability.
Evaluation challenges of time, methodological expertise and funding hamper organizations’ ability to obtain valid and useful information, identify harm, promote equity and give voice to all those affected (Table 2). Applying these four values of ethical evaluation, the author suggests the following strategies to help minimize the ethical implications that resource-challenged evaluations can have on programs:
Proposition 1. Build capacity in and devolve responsibilities to local stakeholders (Southern NGOs and CBOs) and develop evaluation designs in dialog with them;

Proposition 2. Share the program's agenda with those being evaluated and allow them to trustingly provide information on the program's successes and challenges;

Proposition 3. Document field impressions and seek feedback from local and expert stakeholders on the validity and reliability of these impressions and their effect on program performance;

Proposition 4. Translate program-related communications into local languages, disseminate reports of evaluation findings (perhaps utilizing videos, photographs or other nontraditional formats) and conduct focus group discussions with stakeholders that encourage feedback on how to structure future evaluations and reporting.

The action steps the author suggests are strategies intended to reduce the ethical implications for programs subject to under-resourced evaluations. They require evaluators to gather highly contextualized knowledge about who the local agents of change might be, how effective they are and how they might best be supported. Resources, including time and money, typically spent on travel could be redirected toward building an infrastructure to support methodological expertise, program design and the development of ethically sound evaluation practices. Fundamental to an ethical evaluation is the creation of an individual, organizational and sector-wide culture that values evaluation as a necessary means to realize organizational growth and development.

An INGO's efforts to ease tensions between what it should do (based on avowed principles) and what it can do (based on resources) are likely to fail if the consequences of such tensions are not recognized and acted on. And while young INGOs, with their scarce resources, may not explicitly articulate their challenges in evaluation or the resulting ethical implications for their programs, the article proposes that an ethical evaluation adapted to their own and their evaluands' limitations offers them the best means to satisfy accountability demands and mitigate or avoid potentially serious, even fatal, errors. With the persistence of severe poverty and other conditions that will continue to motivate financial aid throughout the developing world, it behooves both Northern INGOs and their attendant stakeholders to continually examine how their well-intentioned programs conform to the moral imperatives of development work.

Acknowledgements

The author extends her earnest thanks to Linda Levendusky for her able help with editing and for the meticulous insights that strengthened the case analysis. Sincere thanks are also extended to Sara Lepro for her able research assistance. Four anonymous reviewers offered detailed comments that informed many portions of this work. The author, above all, is obliged to Women's Resource Fund for opening its doors and letting her be a part of its learning and growth.
References

Abdul Latif Jameel Poverty Action Lab (n.d.). Methodology overview. http://www.povertyactionlab.org/methodology Accessed February 2014.
Action Aid International (2010). The empowerment guide. http://www.actionaidusa.org/publications/empowerment-guide
Action Aid International (2011). Alps: Accountability, learning and planning system. http://www.actionaid.org/sites/files/actionaid/alps2011_revised06july2011.pdf Accessed February 2014.
Acumen Fund (2014). The best available charitable option. http://acumen.org/idea/the-best-available-charitable-option/ Accessed February 2014.
African Evaluation Association (2006/2007). African evaluation guidelines – standards and norms. http://www.afrea.org/?q=about_us Accessed February 2014.
Africare (n.d.). Mission and vision. http://www.africare.org/about-us/mission/mission.php Accessed February 2014.
American Evaluation Association (2004). Guiding principles for evaluators. http://www.eval.org/Publications/GuidingPrinciplesPrintable.asp Accessed September 2012.
Astbury, B., & Leeuw, F. L. (2010). Unpacking black boxes: Mechanisms and theory building in evaluation. American Journal of Evaluation, 31(3), 363–381. http://dx.doi.org/10.1177/1098214010371972
Avina, J. (1993). The evolutionary life-cycles of non-governmental development organizations. Public Administration and Development, 13(5), 453–474.
Benjamin, L. M. (2008). Evaluator's role in accountability relationships: Measurement technician, capacity builder or risk manager? Evaluation, 14(3), 323–343. http://dx.doi.org/10.1177/1356389008090858
Bennett, C. (1979). Analyzing impacts of extension programs (ESC-575). Washington, DC: USDA Science and Education Administration.
Birckmayer, J. D., & Weiss, C. H. (2000). Theory-based evaluation in practice: What do we learn? Evaluation Review, 24, 407–431. http://dx.doi.org/10.1177/0193841X0002400404
Boruch, R. F. (Ed.). (1997). Randomized experiments for planning and evaluation: A practical guide (Vol. 44). Thousand Oaks, CA: Sage Publications.
Boruch, R. F., & Wothke, W. (Eds.). (1985). Randomization and field experimentation (New Directions in Evaluation, Vol. 28). San Francisco: Jossey-Bass.
Brown, D., & Moore, M. H. (2010). Accountability, strategy and international nongovernmental organizations. Nonprofit and Voluntary Sector Quarterly, 30(3), 569–587. http://dx.doi.org/10.1177/0899764001303012
Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
Chambers, R. (1994). The origins and practice of participatory rural appraisal. World Development, 22(7), 953–969.
Chambers, R. (1997). Whose reality counts? Putting the first last. London: Intermediate Technology Publications.
Chambers, R., & Pettit, J. (2004). Shifting power to make a difference. In L. C. Groves & R. B. Hinton (Eds.), Inclusive aid: Changing power and relationships in international development (pp. 137–162). Sterling, VA: Earthscan.
Christensen, R. A., & Ebrahim, A. (2006). How does accountability affect mission? The case of a nonprofit serving immigrants and refugees. Nonprofit Management and Leadership, 17(2), 195–209. http://dx.doi.org/10.1002/nml.143
Christian Relief Services (n.d.). Our history. http://wp.christianrelief.org/about-us/history/ Accessed February 2014.
Convoy of Hope (n.d.). Convoy of hope. http://www.convoyofhope.org/about/ Accessed February 2014.
Cooke, B., & Kothari, U. (Eds.). (2001). Participation: The new tyranny? London: Zed Books.
Cousins, J. B., & Earl, L. M. (1992). The case for participatory evaluation. Educational Evaluation and Policy Analysis, 14(4), 397–418.
Cousins, J. B., & Earl, L. M. (Eds.). (1995). Participatory evaluation in education: Studies in evaluation use and organizational learning. London: Falmer Press.
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. In E. Whitmore (Ed.), Understanding and practicing participatory evaluation (New Directions for Evaluation, No. 80, pp. 5–23). San Francisco: Jossey-Bass.
Crossan, M. M., Lane, H. W., & White, R. E. (1999). An organizational learning framework: From intuition to institution. The Academy of Management Review, 24(3), 522–537.
Davies, R., & Dart, J. (2005). The most significant change (MSC) technique: A guide to its use. Cambridge: M and E News. http://www.mande.co.uk/docs/MSCGuide.pdf Accessed 4 March 2013.
Denning, S. (2005). The leader's guide to storytelling: Mastering the art and discipline of business narrative. San Francisco: Jossey-Bass.
Donaldson, S. I. (2007). Program theory-driven evaluation science: Strategies and applications. Mahwah, NJ: Erlbaum.
Ebrahim, A. (2003). Making sense of accountability: Conceptual perspectives from Northern and Southern nonprofits. Nonprofit Management and Leadership, 14, 191–212. http://dx.doi.org/10.1002/nml.29
Ebrahim, A. (2005). Accountability myopia: Losing sight of organizational learning. Nonprofit and Voluntary Sector Quarterly, 34(1), 56–87. http://dx.doi.org/10.1177/0899764004269430
Ebrahim, A. (2010). The many faces of nonprofit accountability. In D. O. Renz (Ed.), The Jossey-Bass handbook of nonprofit leadership and management (3rd ed., pp. 101–123). San Francisco: Jossey-Bass.
Ebrahim, A. S., & Rangan, V. K. (2010). The limits of nonprofit impact: A contingency framework for measuring social performance (Harvard Business School General Management Unit Working Paper No. 10-099).
Edwards, M., & Hulme, D. (Eds.). (1995). Non-governmental organisations: Performance and accountability beyond the magic bullet. London: Earthscan Publications.
Eisner, E. W. (1976). Educational connoisseurship and criticism: Their form and functions in educational evaluation. Journal of Aesthetic Education, 10(3/4), 135–150.
Eurasia Foundation Inc. (n.d.). Eurasia foundation. http://www.eurasia.org/about-us Accessed February 2014.
Family Health International (n.d.). Vision and mission. http://www.fhi360.org/about-us/vision-and-mission Accessed February 2014.
Farwell, N. (2004). War rape: New conceptualizations and responses. Affilia, 19, 389–403. http://dx.doi.org/10.1177/0886109904268868
Fetterman, D. M. (1994). Empowerment evaluation. American Journal of Evaluation, 15(1), 1–15. http://dx.doi.org/10.1177/109821409401500101
Fetterman, D. (1996). Empowerment evaluation: An introduction to theory and practice. In D. M. Fetterman, S. J. Kaftarian, & A. Wandersman (Eds.), Empowerment evaluation: Knowledge and tools for self-assessment and accountability (pp. 3–48). Newbury Park, CA: Sage Publications.
Fine, A. H., Thayer, C. E., & Coghlan, A. T. (2000). Program evaluation practice in the nonprofit sector. Nonprofit Management and Leadership, 10(3), 331–339. http://dx.doi.org/10.1002/nml.10309
Fowler, A. (2000). Civil society, NGDOs and social development: Changing the rules of the game. Geneva: United Nations Research Institute for Social Development. http://www.observacoop.org.mx/docs/Dec2009/Dec2009-0008.pdf Accessed February 2014.
Fry, R. E. (1995). Accountability in organizational life: Problem or opportunity for nonprofits? Nonprofit Management and Leadership, 6(2), 181–195. http://dx.doi.org/10.1002/nml.4130060207
Funnell, S. C., & Rogers, P. J. (2011). Purposeful program theory: Effective use of theories of change and logic models. San Francisco, CA: Jossey-Bass.
Grasso, P. (2010). Ethics and development evaluation: Introduction. American Journal of Evaluation, 31(4), 533–539. http://dx.doi.org/10.1177/1098214010373647
Haitian Health Foundation Inc. (n.d.). About. http://www.haitianhealthfoundation.org/index.php/about/ Accessed February 2014.
Hallfors, D., Cho, H., Rusakaniko, S., Iritani, B., Mapfumo, J., & Halpern, C. (2011). Supporting adolescent orphan girls to stay in school as HIV risk prevention: Evidence from a randomized controlled trial in Zimbabwe. American Journal of Public Health, 101(6), 1082–1088. http://dx.doi.org/10.2105/AJPH.2010.300042
Hendricks, M., & Bamberger, M. (2010). The ethical implications of underfunding development evaluations. American Journal of Evaluation, 31(4), 549–556. http://dx.doi.org/10.1177/1098214010373376
Herman, R. D., & Renz, D. O. (1999). Theses on nonprofit organizational effectiveness. Nonprofit and Voluntary Sector Quarterly, 28(2), 107–126. http://dx.doi.org/10.1177/0899764099282001
Horton, K. (2011). Aid agencies: The epistemic question. Journal of Applied Philosophy, 28(1), 29–43.
International Development Evaluation Association (2012). Competencies for development evaluation evaluators, managers, and commissioners. http://www.ideas-int.org/content/index.cfm?navID=19&itemID=396&CFID=635980&CFTOKEN=34039701 Accessed February 2014.
iScale (n.d.). Innovations for scaling impact. http://www.scalingimpact.net/innovations/monitoring-evaluation-learning-impact Accessed February 2014.
Jackson, B. (1997). Designing projects and project evaluations using the logical framework approach. http://cmsdata.iucn.org/downloads/logframepaper3.pdf Accessed February 2014.
Keystone (n.d.). Impact planning, assessment and learning. http://www.keystoneaccountability.org/ Accessed February 2014.
Kiva (n.d.). KIVA. http://www.kiva.org/ Accessed February 2014.
Knutsen, W. L., & Brower, R. S. (2010). Managing expressive and instrumental accountabilities in nonprofit and voluntary organizations: A qualitative investigation. Nonprofit and Voluntary Sector Quarterly, 39(4), 588–610. http://dx.doi.org/10.1177/0899764009359943
Leurs, R. (1996). Current challenges facing participatory rural appraisal. Public Administration and Development, 16(1), 57–72. http://dx.doi.org/10.1002/(SICI)1099-162X(199602)16:1<57::AID-PAD853>3.0.CO;2-Z
Lewis, D., & Opoku-Mensah, P. (2006). Moving forward research agendas on international NGOs: Theory, agency and context. Journal of International Development, 18(5), 665–675. http://dx.doi.org/10.1002/jid.1306
Mathison, S. (1999). Rights, responsibilities, and duties: A comparison of ethics for internal and external evaluators. In J. L. Fitzpatrick & M. Morris (Eds.), Current and emerging ethical challenges in evaluation (New Directions for Evaluation, No. 82, pp. 25–34). San Francisco, CA: Jossey-Bass.
McLaughlin, J., & Jordan, G. (1999). Logic models: A tool for telling your program's performance story. Evaluation and Program Planning, 22, 65–72.
Mertens, D. M. (2007). Transformative paradigm: Mixed methods and social justice. Journal of Mixed Methods Research, 1(3), 212–225.
Morris, M. (2003). Ethical considerations in evaluation. In T. Kellaghan & D. L. Stufflebeam (Eds.), International handbook of educational evaluation: Part one perspectives (pp. 303–328). Dordrecht, Netherlands: Kluwer Academic.
Morrison, J. B., & Salipante, P. (2007). Governance for broadened accountability: Blending deliberate and emergent strategizing. Nonprofit and Voluntary Sector Quarterly, 36(2), 195–217.
Muralidharan, K., & Sundararaman, V. (2011). Teacher opinions on performance pay: Evidence from India. Economics of Education Review, 30(3), 394–403.
Murray, V., & Tassie, B. (1994). Evaluating the effectiveness of nonprofit organizations. In R. D. Herman (Ed.), Handbook of nonprofit management and leadership (pp. 303–324). San Francisco: Jossey-Bass.
National Center for Charitable Statistics (2014). NCCS all registered nonprofits table wizard. http://nccs.urban.org/ Accessed February 2014.
O'Dwyer, B., & Unerman, J. (2008). The paradox of greater NGO accountability: A case study of Amnesty Ireland. Accounting, Organizations and Society, 33(7–8), 801–824.
One Acre Fund (n.d.). One Acre Fund: Empowering farmers to grow themselves out of poverty. http://www.farmingfirst.org/2012/08/one-acre-fund-empowering-farmers-to-grow-themselves-out-of-poverty/ Accessed February 2014.
Pact (n.d.). Our mission. http://www.pactworld.org/our-mission Accessed February 2014.
Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage.
Pottier, J. (2010). Representations of ethnicity in the search for peace: Ituri, Democratic Republic of Congo. African Affairs, 109(434), 23–50. http://dx.doi.org/10.1093/afraf/adp071
Power, G., Maury, M., & Maury, S. (2002). Operationalising bottom-up learning in international NGOs: Barriers and alternatives. Development in Practice, 12(3–4), 272–284. http://dx.doi.org/10.1080/0961450220149663
Rachels, J. (2003). The right thing to do: Basic readings in moral philosophy (3rd ed.). Boston: McGraw-Hill.
Ramanath, R. (2014). The ethical implications of fly-by evaluations: Lessons from a leader's heartfelt experiences in the eastern Democratic Republic of Congo. Cases and Simulations Portal for Public and Nonprofit Sectors, Rutgers University. http://spaa.newark.rutgers.edu/casesimportal/ Accessed May 2014.
Reid, E. J., & Kerlin, J. A. (2003). The international charitable nonprofit subsector in the United States: International understanding, international development and assistance, and international affairs. http://www.urban.org/UploadedPDF/411276_nonprofit_subsector.pdf
Riddell, R. C. (2007). Does foreign aid really work? Oxford: Oxford University Press.
Rockwell, K., & Bennett, C. (2004). Targeting outcomes of programs: A hierarchy for targeting outcomes and evaluating their achievement. http://digitalcommons.unl.edu/aglecfacpub/48/ Accessed February 2014.
Rogers, P. J., Petrosino, A., Huebner, T. A., & Hacsi, T. A. (2000). Program theory evaluation: Practice, promise, and problems. New Directions for Evaluation, 87, 5–13.
Rossi, P. H., Lipsey, M. W., & Freeman, H. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage Publications.
Singer, P. (2010). Foreword. In K. Horton & C. Roche (Eds.), Ethical questions and international NGOs: An exchange between philosophers and NGOs (pp. v–vi). Dordrecht: Springer.
Slim, H. (1997). Doing the right thing: Relief agencies, moral dilemmas and moral responsibility in political emergencies and war. Disasters, 21(3), 244–257. http://dx.doi.org/10.1111/1467-7717.00059
Smillie, I., & Hailey, J. (2001). Managing for change: Leadership, strategy and management in Asian NGOs. London: Earthscan.
Tassie, B., Murray, V., & Cutt, J. (1998). Evaluating social service agencies: Fuzzy pictures of organizational effectiveness. Voluntas, 9(1), 59–79. http://dx.doi.org/10.1023/A:1021455029092
Torres, R. T., Preskill, H. S., & Piontek, M. E. (1997). Communicating and reporting: Practices and concerns of internal and external evaluators. American Journal of Evaluation, 18(2), 105–125. http://dx.doi.org/10.1177/109821409701800203
Torres, R. T., Preskill, H. S., & Piontek, M. E. (2005). Evaluation strategies for communicating and reporting: Enhancing learning in organizations (2nd ed.). Thousand Oaks, CA: Sage Publications.
Unerman, J., & O'Dwyer, B. (2006). Theorising accountability for NGO advocacy. Accounting, Auditing & Accountability Journal, 19(3), 349–376.
United Nations Evaluation Group (2008). UNEG code of conduct for evaluation in the UN system. http://www.unevaluation.org/unegcodeofconduct Accessed January 2013.
United States Agency for International Development (1979). The logical framework: A manager's guide to a scientific approach to design and evaluation. http://pdf.usaid.gov/pdf_docs/PNABN963.pdf Accessed February 2014.
United States Agency for International Development (2011). Evaluation learning from experience: USAID evaluation policy. http://transition.usaid.gov/evaluation/USAIDEvaluationPolicy.pdf Accessed January 2013.
Uphoff, N. (1991). A field methodology for participatory self-evaluation. Community Development Journal, 26(4), 271–285.
Valadez, J., & Bamberger, M. (1994). Monitoring and evaluating social programs in developing countries: A handbook for policymakers, managers and researchers. Washington, DC: The World Bank, Economic Development Institute.
Wallace, T., & Chapman, J. (2004). An investigation into the reality behind NGO rhetoric of downward accountability. In L. Earle (Ed.), Creativity and constraint: Grassroots monitoring and evaluation and the international aid arena. Oxford: INTRAC.
Weinstein, M. (2009). Measuring success: How Robin Hood estimates the impact of grants. https://www.robinhood.org/sites/default/files/2009_Metrics_Book.pdf Accessed February 2014.
Wholey, J. S. (1987). Evaluability assessment: Developing program theory. In L. Bickman (Ed.), New directions for evaluation (Vol. 33, pp. 77–92). San Francisco: Jossey-Bass. http://dx.doi.org/10.1002/ev.1447
William and Flora Hewlett Foundation (2008). Making every dollar count: How expected return can transform philanthropy. http://www.hewlett.org/uploads/files/Making_Every_Dollar_Count.pdf Accessed February 2014.
World Resources Institute (n.d.). World Resources Institute. http://www.wri.org/about/values Accessed February 2014.
World Vision (n.d.). Who we are. http://www.worldvision.org/about-us/who-we-are Accessed February 2014.
Wright, P. (1974). The harassed decision maker: Time pressures, distractions, and the use of evidence. Journal of Applied Psychology, 59(5), 555–561.
Yin, R. K. (2003). Case study research: Design and methods. London: Sage Publications.
Yuval-Davis, N. (2004). Gender, the nationalist imagination, war, and peace. In W. Giles & J. Hyndman (Eds.), Sites of violence: Gender and conflict zones (pp. 170–190). Berkeley, CA: University of California Press.
Zaidi, S. A. (1999). NGO failure and the need to bring back the state. Journal of International Development, 11(2), 259–271. http://dx.doi.org/10.1002/(SICI)1099-1328(199903/04)11:2<259::AID-JID573>3.0.CO;2-N
Ramya Ramanath is Assistant Professor with the School of Public Service at DePaul University. Her teaching and research interests focus on nonprofit-government relations, nongovernmental organizations, urban poverty and housing in developing countries. She holds a Ph.D. from Virginia Tech's School of Public and International Affairs and a master's in social work from the Tata Institute of Social Sciences in Mumbai, India. Her recent works are published in the Voluntary Sector Review, Nonprofit and Voluntary Sector Quarterly, Nonprofit Management and Leadership, and in Public Voices.