Evaluation and Program Planning 29 (2006) 149–152 www.elsevier.com/locate/evalprogplan
Guest Editorial
The relationships of program and organizational capacity to program sustainability: What helps programs survive?

Keywords: Sustainability; Program capacity; Organizational capacity; Process evaluation; Nonprofit organization
1. Program capacity and sustainability: what helps programs survive?

Even the most effective program faces two challenges: maintaining or expanding its capacity and sustaining its effectiveness over time. An obvious reason for these challenges involves funding. Government funding for human services is tight, and it is only getting tighter, due to, among other things, current funding priorities, large budget deficits, and a focus on aiding an aging population. Although the challenge of sustaining or growing program capacity might not be a primary focus for most program funders and evaluators, it is of great importance to the practitioners who run programs.

This special issue of Evaluation and Program Planning is devoted to better understanding the factors that help organizations sustain themselves while they maintain or expand the size of their program(s). It poses the question: How might program planning and evaluations help organizations build on their ideas, develop their capacities, improve their operations, and sustain their functioning? We believe that evaluators have focused insufficiently on organizational and program capacity and sustainability, and such a line of inquiry is long overdue.

Program effectiveness, as well as organizational and program capacity and sustainability, are fundamental issues that must be understood for an organization to build and maintain programs. If evaluation demonstrates a program to be effective, then surely other important questions will follow, such as how to expand service slots for those who need them and how to sustain the organization and its related programs into the future. While practitioners who work in areas such as internal evaluation and participatory evaluation touch on these issues, they are rarely explicit about them. At the same time, federal policy analysts often look to expand and replicate empirically validated program models. Although they are ever-alert to the failures of policy implementation, they often operate without appreciation of the local forces that make sustainability and capacity possible.
2. Definitions

In this special issue, we look at capacity at both the organizational and program levels. Therefore, we consider a two-fold definition of capacity that takes both levels into account. Capacity for both programs and organizations can be defined as the adequacy of inputs (knowledge, financial resources, trained personnel, well-managed strategic partnerships, etc.) necessary to carry out a program and achieve desired outcomes. It can also mean service capacity, as in the number of clients that can be served and the dose of treatment (i.e. intensity, duration, and relevance) that can be given (Hunter & Koopmans, 2006). In both uses of the term, capacity is grounded in a program's logic model and the organizational theory of change of which it is a part. Such a theory of change must answer fundamental questions such as: What levels and kinds of resources are required to achieve immediate, intermediate, and ultimate objectives, both at the organizational (e.g. long-term sustainability) and program (e.g. strong participant outcomes) levels? By developing a strong theory of change, an organization identifies a clear plan for best utilizing its resources as inputs to create the outputs that (it hopes) will result in the achievement of its desired objectives (including, but not limited to, program participant outcomes).

If programs and organizations have capacity when they possess the parts they need to perform well, then capacity building focuses on the process by which those programs and organizations use those parts in optimal ways. Simply having the resources is not enough. Programs and organizations must develop core skills and capabilities, such as leadership, management, and fundraising abilities, and they must utilize the insight and knowledge they gain in ways that address problems and implement change effectively (Center for Philanthropy & Nonprofit Leadership, 2006).

Program sustainability has been operationalized in a variety of ways. In a recent review of the literature on program sustainability, Scheirer (2005) critiques the available operational definitions and notes that sustainability is defined differently across existing studies.
Recently, Pluye and colleagues (2004) asserted that programs are sustainable when their essential activities are routinized in the organizations that house them. It seems logical that a program, and the organization that runs it, should have capacity in order to be sustainable. In line with this reasoning, we offer the following provisional view of program sustainability: Program sustainability exists when elements essential to a program's effectiveness continue to operate over time, within a stable organization, at stable or increased organizational and service capacity.

Upstream, midstream, and downstream factors affect organizational and program sustainability and capacity. By downstream factors, we mean those issues that directly impinge upon implementation, such as local demand, resources, and staffing. Upstream factors include funding: the priorities and resources of funders such as government, private foundations, and the United Way, as well as the ability and willingness of clients to pay for the program or service. Midstream factors include the characteristics of the organizations that house and run programs. The diversity of these organizations is enormous in terms of scale, complexity, and reach. Think of neighborhood-based single-service providers (e.g. faith-based after-school programs); community mental health centers; community development corporations and settlement houses; local school districts; Federally Qualified Health Centers; or national organizations sponsoring local affiliates or chapters (e.g. the American Heart Association, Boys and Girls Clubs of America, Big Brothers Big Sisters of America). Organizations may house more than one program, as when a school district operates both an adult education program and a special education program that is split off from its 'mainstream' schools. Or, programs may be implemented across many organizations, as when the federal Compensatory Education program spans state educational agencies and local school districts, or when the Heart at Work program is implemented by local Heart Association chapters. While many of this country's largest and most important social or human service programs are run by the public sector, some are offered by the corporate sector. This special issue of Evaluation and Program Planning focuses particularly on nonprofit organizations and the programs they run.

It should be obvious that organizational capacity comprises an important set of midstream factors that influence program capacity and sustainability. We define organizational capacity as an organization's ability to (1) manage its operations successfully over time, (2) run programs in conformity with the performance criteria spelled out in their logic models, and (3) implement and complete new projects or expand existing ones. Evaluators sometimes make the valid point that organizational survival and capacity are not the same things as program sustainability and capacity (Scheirer, 2005). However, programs are inherently context-constrained: they are implemented locally, by organizations. Therefore, we believe that local program sustainability and capacity are intimately linked with local organizational survival and capacity. And, across settings, program sustainability and capacity are linked to prevailing organizational capacity and survival. The point is simple: Just as it is difficult to build a solid house on a cracked foundation, it is virtually impossible to build a high-quality, effective, and sustainable program in an organization that lacks the resources, staffing, and leadership to stand on its own.

This perspective raises a fundamental question that more often than not goes unasked: If organizational capacity affects program capacity and sustainability, which organizations can implement which programs? We believe that implementation or process evaluations look at this matter only obliquely, and through too narrow a lens. It is essential that such evaluations assess the organization's capacity as a whole when examining program implementation. Only if this happens can formative evaluations, and the summative (outcome and impact) evaluations that follow, provide helpful guidance for assessing the validity (effectiveness, efficacy, and sustainability) of programs.

It is well recognized that upstream and midstream organizational characteristics have powerful effects on program implementation (e.g. Brock, 2003; Davis & Salasin, 1975; Pressman & Wildavsky, 1984; Scheirer, 1981). At the same time, organizations that house programs are strongly affected by program decisions. It must be noted, however, that capacity building is rooted in collaboration between organizations and their programs. In order to expand capacity, organizations and programs must share ownership and power in advancing their successes and solving their problems (Gray, 1989). This is not an area that has received much attention from evaluators. As seen in several articles of this special issue, programs and the organizations that house and deliver them are engaged in a dynamic relationship and exercise reciprocal influences on each other.

3. Article highlights

This special issue comprises seven articles. They derive from the efforts of two major philanthropies, the Edna McConnell Clark Foundation (EMCF) and the Robert Wood Johnson Foundation (RWJF), to understand both organizational and program capacity and sustainability. The approaches of EMCF and RWJF offer an interesting contrast. RWJF takes a fairly conventional approach to program development and evaluation. By contrast, EMCF has departed dramatically from foundations' usual approaches to program development and evaluation. EMCF takes a novel approach to building and sustaining program capacity: It uses an extremely rigorous due diligence process to identify youth-serving organizations that house programs with empirical evidence of effectiveness and a commitment to tracking participant outcomes (as well as strong leadership, organizational depth, and financial sustainability), and then invests in organizational capacity development, including (but not limited to) the building of internal evaluation capacity. In this way, internal evaluation capacity is developed and used for ongoing program performance and quality management. At some point in the process, an external evaluation of program impacts may be appropriate, but EMCF's grantees are encouraged not to undertake this prematurely.
At RWJF, it is generally assumed that due diligence in the selection of grantees will assure adequate program implementation and survival. In the first article of this issue, Stevens and Peikes of Mathematica Policy Research present evidence that, for at least one RWJF initiative, this assumption was justified. The authors studied the survival of 120 Local Initiatives Funding Partners programs after RWJF funding ended. Their findings suggest that programs are more likely to be sustained if they deliver a service that people regard as needed and effective, if they know how to operate efficiently, and if they know how to raise funds on their own.

Nonprofit organizations provide an increasing number of human services, and their capacity clearly affects program implementation, reach, effectiveness, and survival. In the second article, Patrizi and colleagues describe the EMCF experience with a cluster of six juvenile justice agencies that received multi-year capacity-building grants. The process described in this article offers a view of what can happen if good leaders of stable programs are given opportunities to improve their organizations with limited direction and reporting requirements from a funder. This flexible approach to grantmaking allowed the agencies in this cluster to flourish and strengthen their organizational capacity.

In spite of due diligence in selecting program sites, funders and program planners can be disappointed by the capacity of nonprofits to do the work (Miller, Bedney, & Guenther-Grey, 2003). In the third article, Schuh and Leviton describe a framework that helps to predict and explain the effects of organizational capacity on program implementation. The framework is based on a diverse research literature and on empirical observation of 56 nonprofit organizations to date. A rating tool derived from the framework assesses key features of nonprofit capacity and development. The rating tool relies on the 'maturity model' approach and shows good reliability and validity. The framework can assist in designing developmentally appropriate capacity-building interventions. It also identifies differences among nonprofit organizations and their unique assets, which can be utilized to further strengthen their abilities to implement change.

The Edna McConnell Clark Foundation's distinctive approach to funding organizations, in which evaluation is used to promote organizational and program capacity as well as to maintain or improve program effectiveness, dictates a special approach to evaluation as well. The fourth, fifth, and sixth articles underscore how evaluation must change its focus when it is used to increase program capacity. The fourth article, by Hunter, is entitled 'Daniel and the Rhinoceros' and sets the stage for this needed change. Hunter uses the biblical story of Daniel to demonstrate how simple evaluations that operate within realistic parameters can provide helpful, user-friendly information for organizations. Hunter compares these evaluations with what many program staff regard as 'the rhinoceros in the room': accountability-driven evaluations that emphasize randomized experiments to assess outcomes and impacts.
He highlights how, in many situations, simple, reality-driven evaluations can, in fact, be quite useful and even more cost-effective than these 'gold standard' evaluations. Although Hunter's argument in this article is familiar within the field of evaluation, its specific focus on program capacity building is new. In fact, Hunter asserts that funders do nonprofit organizations a disservice when they focus rigidly on accountability and scientifically valid outcomes. In doing so, they often overlook meaningful process-related factors that are tied to an organization's ability to build capacity. In addition, Hunter argues that simpler evaluations, although less scientifically rigorous than randomized experiments, can be built more easily into organizations' activities, rather than operating as external forces imposed upon organizations' performance. Such evaluations are less disruptive to organizations' operations and can result in more meaningful evaluative feedback that organizations can use to build their own capacities.

The fifth article, also by Hunter, emphasizes the importance of using a rigorous theory to map out the process of building organizational capacity and program sustainability. Organizations can create backdrops for innovation and strong programs when they develop and use strong theories of change as roadmaps to manage their operations, including their programs, at high levels of quality and effectiveness. Strengthening these organizations does not consist merely of providing more program money. In order for a good, effective program to be sustainable, or even to expand, more often than not the organization has to grow as well. In this article, Hunter describes how the Edna McConnell Clark Foundation designed workshops to help grantee organizations develop strong theories of change that are meaningful to them, plausible with regard to the objectives they posit, doable (possible to implement), and testable (possible to assess empirically). During these workshops, facilitators guide grantees in identifying and clarifying their target populations, choosing outcomes and indicators that are measurable and realistic, and designing programs that meet their objectives and fit the organization's needs and goals. The foundation believes that through participation in such workshops, and through the careful design and implementation of strong theories of change, these grantees become better able to develop strong foundations for subsequent business planning so that they can, in fact, have the capacity to implement their programs long after their EMCF funding has stopped.

How might funders and program planners understand whether their investments are meeting their objectives with regard to sustained program effectiveness and growth? In the sixth article, Hunter and Koopmans explore this question by presenting a newly designed measure of effective program capacity called 'active service slots.' While turnstile counts of participant attendance are the most widely used measure of program capacity, they are not meaningful measures of such capacity because they often fail to convey the quality and level of service delivered and, therefore, the likelihood that participants will benefit as intended. Their paper distinguishes between the raw turnstile count and a more meaningful count that takes the quality and level of service delivery into account. It argues that turnstile numbers alone do not tell the whole story. Rather, in assessing program capacity, it is critical to track not only the gross number of participants in the program, but also the net number of participants who actually utilize services as they were designed to be used.
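To make this distinction concrete, the minimal Python sketch below contrasts the two counts. It is purely illustrative: the participant records, the 20-session dose threshold, and all identifiers are assumptions made for the example, not the actual active-service-slot calculation presented by Hunter and Koopmans (2006).

```python
# Illustrative sketch only (hypothetical data and threshold, not Hunter and Koopmans'
# published formula): contrast a raw "turnstile" count of everyone who ever attended
# with a net count of participants who received the program at its intended dose.

from dataclasses import dataclass


@dataclass
class Participant:
    participant_id: str
    sessions_attended: int


# Assumption: the program's logic model calls for at least 20 sessions per participant.
INTENDED_DOSE_SESSIONS = 20

participants = [
    Participant("P01", 32),
    Participant("P02", 5),
    Participant("P03", 24),
    Participant("P04", 12),
]

# Gross ("turnstile") count: everyone who showed up at least once.
turnstile_count = len(participants)

# Net count: participants who used the service as it was designed to be used.
served_as_designed = sum(
    p.sessions_attended >= INTENDED_DOSE_SESSIONS for p in participants
)

print(f"Turnstile count: {turnstile_count}")             # 4
print(f"Served at intended dose: {served_as_designed}")  # 2
```

In this toy example, four people passed through the turnstile but only two received the intended dose; that gap is exactly what the authors argue raw attendance counts conceal.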
In the seventh article, we return to the issue of sustainability, but this time for a large number of local programs. Herrera and her colleagues at Public/Private Ventures conducted two surveys of the Faith in Action Program, which provides volunteer services to homebound people with chronic diseases. In this initiative, RWJF provided seed grants to interfaith coalitions in literally thousands of communities. With such large sample sizes, the authors were able to examine important factors associated with program survival.

In reviewing this special issue, we gain a sense of programs' vulnerability, but also a heartening sense of their resiliency. By attending to matters of capacity and sustainability, as well as issues of effectiveness, evaluators and program planners can increase the resilience of programs and the organizations that run them.
References

Brock, T. (2003). A framework for implementation research: Toward better program description and explanation of effects. Unpublished manuscript. New York, NY: MDRC.

Center for Philanthropy and Nonprofit Leadership. (2006). Nonprofit good practice guide. Retrieved January 6, 2006, from http://www.npgoodpractice.org/.

Davis, H. R., & Salasin, S. E. (1975). The utilization of evaluation. In E. Struening & M. Guttentag (Eds.), Handbook of evaluation research (pp. 621–665). Beverly Hills, CA: Sage.

Gray, B. (1989). Collaborating: Finding common ground for multiparty problems. San Francisco, CA: Jossey-Bass.

Hunter, D. E. K., & Koopmans, M. (2006). Calculating program capacity using the concept of active service slot. Evaluation and Program Planning, 29(2), doi:10.1016/j.evalprogplan.2005.10.002.

Miller, R. L., Bedney, B. J., & Guenther-Grey, C. (2003). Assessing community capacity to provide HIV prevention services: The feasibility, evaluability, sustainability assessment protocol. Health Education and Behavior, 30(5), 582–600.

Pluye, P., Potvin, L., & Denis, J. L. (2004). Making public health programs last: Conceptualizing sustainability. Evaluation and Program Planning, 27, 121–133.

Pressman, J., & Wildavsky, A. (1984). Implementation (expanded ed.). Berkeley, CA: University of California Press.

Scheirer, M. A. (1981). Program implementation: The organizational context. Beverly Hills, CA: Sage.

Scheirer, M. A. (2005). Is sustainability possible? Sustainability is possible! A review and commentary on empirical studies of program sustainability. American Journal of Evaluation, 26(3), 320–347.
Elaine F. Cassidy*
Laura C. Leviton
Research and Evaluation, The Robert Wood Johnson Foundation,
Route One and College Road East, Princeton, NJ 08543, USA
E-mail address: [email protected]
David E.K. Hunter
The Edna McConnell Clark Foundation, 415 Madison Avenue, 10th Floor,
New York, NY 10017, USA

Received 12 December 2005; received in revised form 21 December 2005; accepted 30 December 2005
* Corresponding author. Tel.: +1 609 627 7611; fax: +1 609 514 5531.