Evaluation and Program Planning 80 (2020) 101799
Value for money: A utilization-focused approach to extending the foundation and contribution of economic evaluation
Christina Peterson*, Gary Skolits
University of Tennessee, College of Education, Health & Human Sciences, United States
A R T I C L E  I N F O

Keywords: Utilization-focused evaluation; Value for money; Evaluation use; Economic evaluation; Rubric

A B S T R A C T
Value for Money (VfM) is an evaluative question about the merit, worth, and significance of resource use in social programs. Although VfM is a critical component of evidence-based programming, it is often overlooked or avoided by evaluators and decision-makers. A framework for evaluating VfM across the dimensions of economy, effectiveness, efficiency, and equity has emerged in response to limitations of traditional economic evaluation. This framework for assessing VfM integrates methods for engaging stakeholders in evaluative thinking to increase acceptance and utilization of evaluations that address questions of resource use. In this review, we synthesize literature on the VfM framework and position it within a broader theory of Utilization-Focused Evaluation (UFE). We then examine mechanisms through which the VfM framework may contribute to increased evaluation use. Finally, we outline avenues for future research on VfM evaluation.
1. Introduction

A world with scarce resources requires strategic decisions regarding their allocation toward programs that address public need. The implication is that choosing to allocate resources to one program involves an opportunity cost: the prospect of allocating these same resources to alternative uses is forgone (Drummond, Schulpher, Claxton, Stoddart, & Torrance, 2015). From this perspective, economic evaluation is as much an applied moral philosophy for comparing the value of alternative courses of social action in terms of resource use and outcomes as it is a decision-making strategy (Adler & Posner, 2006; Rudmik & Drummond, 2013). Through a systematic assessment of both costs and consequences, economic evaluation helps compare the feasibility, scalability, sustainability, and equity of program alternatives to help optimize the impact of invested resources (Adler & Posner, 2006; Svistak & Pritchard, 2014). For this reason, the Evidence-Based Policymaking Collaborative emphasizes the increasing importance of economic evaluation to governments and philanthropic organizations that seek to support programs providing the greatest value for money (White & Silloway, 2016). Proponents also argue that accounting for the costs, uncertainty, and present value of a program contributes to evidence-based practice by providing a more complete basis for decision-making about the best possible use of resources (Helfand, 2005). Although economic factors are a critical link between program evaluation and evidence-based practice, they are often
overlooked or avoided by evaluators and decision-makers (Herman, Avery, Schemp, & Walsh, 2009; Persaud, 2007; Yates, 2012). Utilization issues have been important areas of inquiry in the evaluation field for decades (Alkin & King, 2016; Kirkhart, 2000; Patton, 2008). Studies regarding the prevalence of economic evaluation use suggest that although the majority of decision-makers believe cost-effectiveness should be considered, it seldom impacts their decisions (Williams, Bryan, & McIver, 2006; Zwart-Van Rijkom, Leufkens, Busschbach, Broekmans, & Rutten, 2000). Obstacles to using economic evaluation are institutional, political, cultural, and methodological in nature (Eddama & Coast, 2008; Williams, Bryan, & McIver, 2006; Zwart-Van Rijkom et al., 2000). In part, decision-makers avoid making choices about interventions and resource allocation based on cost-effectiveness alone, preferring to use their own judgement of an intervention's advantages and disadvantages across multiple criteria (Zwart-Van Rijkom et al., 2000). Economic evaluation has traditionally been perceived as isolated from moral considerations like equity and environmental sustainability. This perceived gap has contributed to strong resistance against using economic evaluations among program staff and decision-makers (Brazier, Ratcliffe, Salomon, & Tsuchiya, 2017). Furthermore, organizational and program budgets are frequently inflexible, and evaluation findings are often not available in a timely manner. Evidence also suggests many decision-makers do not find the information relevant to real-world program implementation, in part because they perceive the
⁎ Corresponding author at: University of Tennessee, 503 Bailey Education Complex, 1122 Volunteer Blvd., Knoxville, TN 37996-3452, United States. E-mail address: [email protected] (C. Peterson).
https://doi.org/10.1016/j.evalprogplan.2020.101799
Received 22 July 2018; Received in revised form 14 December 2019; Accepted 11 February 2020
0149-7189/ © 2020 Elsevier Ltd. All rights reserved.
Fig. 1. VfM conceptual framework (adapted from DFID, 2011).
assumptions and methodology of economic analyses to be unrealistic or esoteric (Eddama & Coast, 2008; Williams, Bryan, & McIver, 2006; Zwart-Van Rijkom et al., 2000). Overlooking the contribution economic evaluation makes toward understanding the full value of a social investment represents a missed opportunity and may lead to overestimating program benefits relative to their costs (Persaud, 2007; Yates, 2012). However, even if an economic evaluation is conducted, it may not be a cost-effective endeavor itself if the evaluation results are not subsequently used (Svistak & Pritchard, 2014). This observation has resulted in growing recognition that economic evaluation approaches must become more responsive to the priorities and perspectives of program stakeholders (King, 2016; Persaud, 2007; Sefton, 2000). To advance thinking about the merit, worth, and significance of resource use in social programs, this article synthesizes the literature on a rubric-based framework for assessing VfM. We then position this framework for evaluating VfM within a broader theory of Utilization-Focused Evaluation (UFE) to examine mechanisms through which it may contribute to increased evaluation use. Finally, this article outlines avenues for future research to strengthen the evidence base for improving utilization of VfM evaluation.

2. Economic evaluation methods

Economic evaluation is a set of methods for systematically identifying, measuring, valuing, and comparing the costs and consequences of alternative courses of action (Drummond et al., 2015). There are multiple methods available for evaluators to consider when conducting an economic evaluation. The three most common methods are Cost-Effectiveness Analysis (CEA), Cost-Utility Analysis (CUA), and Cost-Benefit Analysis (CBA) or Social Return on Investment (SROI). Each approach produces a metric reflecting a comparison of program benefits to costs to signal the value of a program investment. Cost-effectiveness analysis estimates the ratio of dollars spent to achieve a unit of outcome. In this method, the outcome measure is not converted to a monetary value, so the resultant ratio is expressed as units of outcome, such as improvements in reported fruit and vegetable intake, BMI, or food-insecure days, per dollar invested (Svistak & Pritchard, 2014). When comparing two alternatives on the same outcome, the incremental cost-effectiveness ratio (ICER) can be calculated by dividing the difference in total costs by the difference in total effectiveness. This provides a metric for the extra cost per additional unit of outcome and can reveal a substantially different comparison than the average cost-effectiveness ratios (Drummond et al., 2015). Cost-utility analysis is similar to cost-effectiveness analysis but reflects standardized outcome units in terms of Disability-Adjusted Life Years (DALY) or Quality-Adjusted Life Years (QALY). In this way, interventions with different outcomes can be compared based upon their DALY or QALY per dollar ratio. Alternatively, a given intervention may be considered cost-effective if its ICER (cost per DALY or QALY) falls below a pre-determined threshold. This approach is most common in the healthcare field (Svistak & Pritchard, 2014). Finally, cost-benefit analysis and SROI estimate the value of an intervention by monetizing all expected outcomes and summing them to provide a ratio of overall monetary return per dollar invested (Cordes, 2017; Svistak & Pritchard, 2014). CBA and SROI can also be expressed in terms of net present value, where the present value of costs is subtracted from the present value of benefits. Common steps to economic evaluation across all three methods include a cost analysis, benefit or effectiveness estimation, present value discounting, and sensitivity analysis (Levin, McEwan, Belfield, Bowden, & Shand, 2018). The SROI approach includes an additional step for stakeholder engagement in mapping intended and unintended outcomes (SROI Network, 2012). With the exception of SROI, economic evaluation rarely includes stakeholders in key evaluation decisions, elicits stakeholder values and assumptions, examines how these values and assumptions impact subsequent evaluative judgements, makes criteria for evaluative judgements explicit, or emphasizes evaluation use and organizational learning. While economic evaluation continues to provide useful information about the value of resource use, these methods traditionally privilege output efficiency over other important dimensions, such as equity or environmental sustainability. Moreover, over-reliance on quantitative measurement in these methods makes it difficult to account for unintended outcomes or contextual aspects that may influence program value and give meaning to evaluation findings.
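To make the arithmetic behind these metrics concrete, the following sketch works through an average cost-effectiveness ratio, an ICER, and a discounted net present value for two hypothetical program alternatives. All figures, the 3 % discount rate, and the function names are illustrative assumptions, not values or tools drawn from the sources cited above.

```python
# Illustrative calculations for the economic evaluation metrics described above.
# All program figures and the discount rate are hypothetical.

def cost_effectiveness_ratio(total_cost, total_effect):
    """Average cost per unit of outcome (e.g., dollars per food-insecure day averted)."""
    return total_cost / total_effect

def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio: extra cost per additional unit of outcome
    when choosing alternative B over alternative A."""
    return (cost_b - cost_a) / (effect_b - effect_a)

def net_present_value(benefits_by_year, costs_by_year, discount_rate=0.03):
    """Present value of benefits minus present value of costs (CBA / SROI framing)."""
    pv = lambda stream: sum(x / (1 + discount_rate) ** t for t, x in enumerate(stream))
    return pv(benefits_by_year) - pv(costs_by_year)

# Two hypothetical alternatives measured on the same outcome.
cost_a, effect_a = 100_000, 400   # program A: $100k, 400 outcome units
cost_b, effect_b = 150_000, 520   # program B: $150k, 520 outcome units

print(cost_effectiveness_ratio(cost_a, effect_a))   # 250.0 dollars per unit (A)
print(cost_effectiveness_ratio(cost_b, effect_b))   # ~288.5 dollars per unit (B)
print(icer(cost_a, effect_a, cost_b, effect_b))     # ~416.7 dollars per additional unit

# A simple CBA/SROI-style net present value over three years, plus a crude
# one-way sensitivity check on the discount rate.
print(net_present_value(benefits_by_year=[0, 80_000, 90_000],
                        costs_by_year=[120_000, 10_000, 10_000]))
print(net_present_value(benefits_by_year=[0, 80_000, 90_000],
                        costs_by_year=[120_000, 10_000, 10_000],
                        discount_rate=0.07))
```

Note how alternative B appears only modestly less efficient on its average ratio, yet each additional unit of outcome gained by choosing B over A costs considerably more; this is the kind of distinction the incremental comparison is meant to surface.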
2.1. Value for money

Value for Money refers to the merit, worth, and significance of resource use in public programs and policy (King, 2017). In VfM (Fig. 1), economy describes the cost of program inputs. Efficiency is the level of output or outcome achieved relative to the level of inputs. Effectiveness is defined as achieving positive program outcomes, and equity refers to ensuring that program resources and outcomes are distributed fairly (Fleming, 2013; King & Guimaraes, 2016). In response to the limitations of traditional economic evaluation, a flexible framework for evaluating Value for Money (VfM) that emphasizes the importance of evaluative reasoning across the dimensions of efficiency, equity, effectiveness, and economy using rubric methods has emerged (King & OPM, 2018). The goal of this framework is to be responsive to donor requirements for accountability and good resource allocation, while supporting reflection, learning, and adaptive management (King & OPM, 2018). Placing evaluative reasoning at the core of the framework is a strategy for enhancing utilization of evaluation for accountability, learning, and adaptation (King, McKegg, Oakden, & Wehipeihana, 2013; King & OPM, 2018).
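As a purely illustrative sketch of what rubric-based synthesis across the four dimensions might look like, the snippet below encodes invented performance levels and descriptors and maps an agreed rating for each dimension to its descriptor. The dimension labels follow the framework; the level names, descriptors, ratings, and function names are hypothetical assumptions rather than material from King & OPM (2018), and in practice the rubric would be co-constructed with primary users rather than fixed in code.

```python
# A deliberately simplified, hypothetical VfM rubric; levels and descriptors are invented.

LEVELS = ["excellent", "good", "adequate", "poor"]

RUBRIC = {
    "economy": {
        "excellent": "Inputs procured at or below benchmark prices without compromising quality.",
        "good": "Inputs procured near benchmark prices; minor avoidable costs.",
        "adequate": "Input costs above benchmarks but justified by context.",
        "poor": "Input costs well above benchmarks without justification.",
    },
    "efficiency": {
        "excellent": "Outputs per dollar exceed comparable programs.",
        "good": "Outputs per dollar broadly in line with comparable programs.",
        "adequate": "Outputs per dollar below comparators, with a credible plan to improve.",
        "poor": "Outputs per dollar well below comparators, with no improvement plan.",
    },
    "effectiveness": {
        "excellent": "Strong evidence of intended outcomes across sites.",
        "good": "Evidence of intended outcomes in most sites.",
        "adequate": "Mixed evidence of intended outcomes.",
        "poor": "Little or no evidence of intended outcomes.",
    },
    "equity": {
        "excellent": "Marginalized groups reached at least proportionally, with resources to match.",
        "good": "Marginalized groups largely reached; some gaps remain.",
        "adequate": "Uneven reach to marginalized groups.",
        "poor": "Marginalized groups largely excluded.",
    },
}

def synthesize(ratings: dict) -> str:
    """Summarize agreed dimension ratings against the rubric descriptors."""
    lines = []
    for dimension, level in ratings.items():
        assert level in LEVELS, f"unknown level: {level}"
        lines.append(f"{dimension}: {level} -- {RUBRIC[dimension][level]}")
    return "\n".join(lines)

# Hypothetical ratings agreed by an evaluation working group after reviewing
# quantitative (e.g., cost analysis) and qualitative evidence together.
print(synthesize({"economy": "good", "efficiency": "adequate",
                  "effectiveness": "good", "equity": "excellent"}))
```

The design point is simply that the criteria for each judgement are explicit and agreed in advance, so evidence of different types can be weighed against the same descriptors.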
3. Utilization-focused evaluation
Utilization-focused evaluation (UFE) is a pragmatic model of evaluation based on the principle that evaluations should be done "for and with specific intended primary users for specific, intended uses" (Patton, 2008, p. 37). This model proposes that an evaluation should be judged not only on its methodological quality, but also on its utility and use to primary users (Patton, 2008). A key element of utilization-focused evaluation is stakeholder involvement in the evaluation process, specifically primary decision-makers like program directors, staff, or funders (Patton, 2008).

3.1. Evaluation use

The concept of evaluation use is composed of two broad categories: process-based use and results-based use (Alkin & King, 2016; Preskill & Torres, 2000). From an economic evaluation perspective, results-based uses encompass using findings to explain how a program creates value, change budgetary allocations, secure new funding, justify the continuation of a program, or compare program alternatives. UFE's emphasis on stakeholder involvement in the evaluation process also contributes to the process-based use of individual and organizational learning (Patton, 2008). For example, participation in evaluation decision-making offers a mechanism for primary users to learn about the program, build evaluative thinking skills, and adapt their actions through collaborative dialog and reflection (Patton, 2008; Preskill & Torres, 2000). However, stakeholders who feel that their inclusion in the evaluation was insincere or unfair are less likely to use the evaluation findings to make key decisions (Froncek & Rohmann, 2019). Evaluation use is contingent upon selecting evaluation methods that are context-responsive (Patton, 2008). UFE encourages methodological flexibility, including quantitative or qualitative data obtained via naturalistic or experimental designs (Patton, 2008, 2013). Inclusion of multiple data types or research methodologies, when appropriate, is believed to improve the utility and accessibility of findings among primary users.

3.2. Primary user involvement

Stakeholder involvement in evaluation processes is associated with various types of evaluation use (Cousins, 2003; Roseland, Lawrenz, & Thao, 2015). The UFE approach to evaluation is characterized by primary users serving as partners in decision-making about some or all of the evaluation activities, including interpretation of findings and judgements of merit and worth (Patton, 2008). The evaluator's role in UFE is that of a facilitator and partner who builds evaluation capacity by helping primary users focus on decisions that need to be made, select appropriate models and methods, organize the data to facilitate interpretation, and draw evaluative conclusions (Patton, 2008). Feeling a sense of ownership in the evaluation process and a greater understanding of evaluation results increase the likelihood that intended users find utility in the findings because "individuals will not act on what they are told is real, but on what they themselves believe to have personal or social meaning" (Lincoln & Guba, 2011, p. 229; Patton, 2012). However, the technical and complex nature of economic evaluation traditionally excludes primary users from participating in the evaluation process, which can inhibit their ability to utilize results (Coast, 2004; Fleming, 2013).

3.3. Evaluative thinking

Evaluative thinking is a key component of UFE associated primarily with process use that enhances shared understandings and organizational learning capacity (Patton, 2008). Defined as critical thinking in the context of evaluation, evaluative thinking "involves identifying assumptions, posing thoughtful questions, pursuing deeper understanding through reflection and perspective taking, and informing decisions in preparation for action" (Buckley, Archibald, Hargraves, & Trochim, 2015, p. 378). Co-constructing theories of change (ToC) and rubrics are strategies often used for increasing primary user involvement in evaluation and facilitating evaluative thinking. A ToC is a depiction of the program results chain that helps guide rubric construction. Rubrics are an evaluation and analytical tool that helps synthesize evidence and values for the purpose of making evaluative judgements regarding the merit and worth of outcomes (Davidson, 2005). They enable quantitative economic analyses to be interpreted alongside qualitative data, which can encourage methodological responsiveness to the program context and stakeholder values (Davidson, 2005; King, 2016).

3.4. Utilization-focused evaluation & VfM

The VfM framework offers a UFE approach to extending the contributions of economic evaluation beyond answering questions of efficiency. First, the VfM framework invites primary user participation in evaluation decision-making. One of the key considerations about who should be involved in the process is utility (King & OPM, 2018). In other words, individuals who are most likely to use the findings should be included in key decisions and processes. Although application of the VfM framework is limited, existing case studies report including monitoring and evaluation advisors, technical advisors, program management, foundation staff, and trustees (King & Allan, 2018; Kinnect Group & Foundation North, 2016). This suggests that in practice the VfM framework has been oriented toward primary user inclusion rather than a more broadly defined stakeholder group of "people who matter" (King et al., 2013) that includes those who are impacted by the intervention. Second, this VfM framework involves several strategies that engage primary users in evaluative thinking. Co-constructing the program ToC and rubrics helps foster communication about values, how the program or policy is believed to lead to social changes, and intended evaluation uses, creating shared understanding before data collection takes place (King & OPM, 2018; Martens, 2018; Patton, 2008). The rubric construction process also makes equity concerns more explicit by defining meaningful measures of equity in the program context and systematically addressing the possibility that reaching marginalized groups may require more resources (Independent Commission for Aid Impact (ICAI), 2018; King & Allan, 2018). Third, rubrics establish an explicit framework for drawing evaluative conclusions by negotiating what constitutes "good" or "quality" program performance in each VfM dimension. In this way, primary users can propose meaningful criteria and descriptors that are reflective of contextual aspects, language preferences, and value perspectives (Davidson, Wehipeihana, & McKegg, 2011). By thinking through what each performance level means prior to data collection, participation in rubric design may also stimulate conversations about how key stakeholders would use findings under various result scenarios (Davidson, 2005). King et al. (2013) argue that rubrics also make evaluative judgements more transparent, thereby increasing the likelihood that findings will be used. Finally, the VfM framework allows for greater methodological responsiveness to decision-maker needs than traditional economic evaluation because rubrics offer a way to synthesize multiple sources of evidence about value. For instance, experimental and quasi-experimental methods continue to dominate the practice of economic evaluation, despite concerns by some decision-makers about their credibility and practical relevance (Zwart-Van Rijkom et al., 2000). Since qualitative and mixed-method research designs are widely used in program evaluation and have been legitimately employed to explore questions of economic value, rubrics provide a way to integrate them into VfM evaluation (King & OPM, 2018; Stevens, Rogers, Boymal, & Humble, 2008). Use is enhanced when decision-makers are involved in making methods decisions (Patton, 2008).
Proponents of the VfM framework, and rubrics more generally, claim it is effective at strengthening stakeholder engagement in evaluation processes, enhancing transparency of judgements, building trust among stakeholders, and improving use of evaluation findings (King & Allan, 2018; King & Guimaraes, 2016; King et al., 2013; Oakden, 2013). King et al. (2013) reflect that, compared to their previous approaches, rubrics enabled them to identify and deal with values more transparently. In general, rubric use is positively associated with motivation to learn (Brookhart & Fei Chen, 2015; Jonsson, 2014). Co-creating rubrics is associated with positive, significant effects on the activation of learning strategies, professional judgement, and student achievement (Fraile, Panadero, & Pardo, 2017; Kocakülah, 2010; Menéndez-Varela & Gregori-Giralt, 2018). Although rubrics can help focus attention and build shared understanding, in some contexts they can be difficult to use and restrict thought (Tremblay, Bertrand, & Fraser, 2017).

4. Implications for practice and future research

The VfM framework is a promising strategy for drawing conclusions about the value of resource use in social programs, yet more research is needed to fully understand both its potential benefits and limitations. Despite expectations and generally positive narratives, systematic research on the VfM framework, and on rubric methods in program evaluation more generally, is scarce. Much of the supporting evidence is based upon reflective practice and expert opinion rather than systematic investigation (Dickinson & Adams, 2017; King et al., 2013; King & Guimaraes, 2016; Oakden, 2013). Most studies on rubrics are confined to student assessment and may not translate to the program evaluation context. The following section outlines four possible avenues for future research on the VfM framework: investigation of how rubrics influence group information sharing, the impact of this framework on the credibility and transparency of evaluative judgements, exploration of the types of use VfM improves and the mechanisms through which this use occurs, and explicit consideration of how VfM deals with unintended outcomes.

4.1. VfM & group information sharing

An important area for evaluation research relates to how groups collectively make judgements in the evaluation context. For instance, UFE theory might suggest that stakeholder involvement in rubric construction increases the inter-rater reliability and validity of the instrument by creating shared understanding of the program. However, studies have demonstrated that groups are sub-optimal users of information for decision-making, and the degree of information sharing is motivated, in part, by strategic considerations of how to advance personal goals (Lu, Yuan, & McLeod, 2012; Wittenbaum, Hollingshead, & Botero, 2004). Conscious evaluative thinking requires substantial cognitive energy, so most judgements remain automatic, unconscious, and intractable (Bargh & Chartrand, 1999). The VfM framework assumes stakeholders are willing to work cooperatively and expend substantial cognitive energy to consciously re-evaluate initial judgements in order to interpret evidence against quality criteria. Whether co-construction of ToC or rubrics influences tendencies to remain attached to initial judgements or protects against shared bias is not clear in the existing literature. It is also unclear to what extent the observed benefits of this framework can be attributed to the participatory methodology more generally or to rubrics as a shared analytical tool for fostering evaluative reasoning. Understanding the cognitive-relational mechanisms of rubric use among groups to make evaluative judgements provides fruitful avenues for research on VfM and evaluation use.

4.2. VfM, credibility & transparency

The VfM framework is appealing as an approach for increasing the credibility and transparency of judgements about resource use. UFE provides a conceptual link between the strategies used in the VfM framework and improved credibility and transparency. However, in an experimental study of this relationship, Jacobson and Azzam (2018) found that unless the stakeholders involved in evaluation decisions were perceived as highly credible individuals, their involvement reduced evaluation credibility. Moreover, transparent decision-making procedures have been shown to decrease general trust in the decision outcome (de Fine Licht, 2011). Group information sharing research also suggests stakeholders may strategically withhold critical information to influence decisions (Wittenbaum et al., 2004), limiting the degree of actual transparency. Since evidence for the beneficial relationship between stakeholder participation in evaluation, transparency, and credibility is mixed, future research should identify moderators or mediators through which the VfM framework achieves these outcomes. Validating measurement scales to study transparency and credibility in the evaluation context is an essential step in advancing research in this area.

4.3. VfM & evaluation use

The intended consequence of UFE is increased use of evaluation processes and findings among primary stakeholders. Existing literature often asserts that the VfM framework increases use without defining the nature of this use. The focus on evaluative thinking suggests it would lead to increased process use. However, the types of use the VfM framework increases, and how substantially it increases them, remain unclear. In practice, the VfM framework may result in both appropriate and inappropriate uses. These include instrumental use of findings to make decisions, process use of evaluation activities to improve organizational learning, symbolic use of evaluation rubrics to appear transparent while justifying pre-determined decisions, or conceptual use of findings to establish or alter attitudes about the program. Given that the VfM framework requires considerable time, resources, and opportunity cost for stakeholders, it is critical to consider the marginal benefit from different types of use, or even misuse, of evaluation findings over that of traditional economic evaluation approaches.

4.4. VfM & unintended outcomes

The question of how to address unintended program outcomes is not unique to the VfM framework and continues to present challenges for traditional economic evaluation. Patton (2008) discusses the need for evaluation frameworks that are attentive to issues like unintended consequences, irreproducible effects, lack of program implementation fidelity, and multiple paths to the same outcomes. King and OPM (2018) acknowledge that the pragmatic breakdown of VfM criteria necessitates simplified depictions of the results chain that may not capture the realities of aid programs. Since criteria descriptions are based upon intended inputs, activities, deliverables, outputs, and outcomes, this insight raises the question of how the VfM framework illuminates and deals with unintended outcomes. Given Tremblay et al.'s (2017) observation that using rubrics can restrict thought, it is possible that rubrics may actually send unconscious cues to stakeholders that evidence outside of the given criteria descriptions is less relevant or important, making it more difficult to integrate unexpected findings. While including qualitative methods and diverse user input in the evaluation design are mechanisms through which the VfM framework may identify unintended outcomes, attention to this issue is warranted.

5. Conclusion

Overall, evidence suggests there are many strengths to the VfM framework in appropriate contexts. This article synthesized the literature on evaluating VfM and positioned it within the theory of UFE. A central principle of UFE is that evaluations should be judged by their
utility and use (King & Allan, 2018; Patton, 2012). From a VfM perspective, if an evaluation is not useful to stakeholders, the cost of conducting the evaluation may not be worth the resources invested and overall welfare may not be improved. The limited use of economic evaluation in social services program evaluation suggests there is much to gain from economists and evaluators shifting their approach from standard economic analyses to the VfM framework. The VfM framework offers several methods for fostering meaningful user engagement in determining program value. Furthermore, by incorporating evidence from both qualitative and quantitative data collection methods, the VfM framework may address a limitation of traditional economic evaluation by making information more relevant to primary users. Given the potential for this approach to improve the use of economic evaluation in public decision-making, additional research along the lines discussed in this review should be prioritized.
Author statement

Christina Peterson was responsible for formulating the purpose of the manuscript, conducting the literature review, critiquing existing literature, and writing. Gary Skolits was responsible for supervision of the project, providing feedback on the purpose and direction, reviewing and editing drafts, and assisting with responses to reviewer comments.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Declaration of Competing Interest

None.

Acknowledgements

We would like to acknowledge Rachel Ladd for her detailed feedback on the manuscript and support through the writing process. We would also like to thank the anonymous reviewers for their thoughtful comments, which greatly contributed to the improvement of this paper.

Appendix A. Supplementary data

Supplementary material related to this article can be found, in the online version, at https://doi.org/10.1016/j.evalprogplan.2020.101799.

References

Adler, M. D., & Posner, E. A. (2006). New foundations of cost-benefit analysis. Harvard University Press.
Alkin, M. C., & King, J. A. (2016). The historical development of evaluation use. The American Journal of Evaluation, 37(4), 568–579. https://doi.org/10.1177/1098214016665164
Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. The American Psychologist, 54(7), 462.
Brazier, J., Ratcliffe, J., Salomon, J., & Tsuchiya, A. (2017). Valuing health. Measuring and valuing health benefits for economic evaluation, 83–117.
Buckley, J., Archibald, T., Hargraves, M., & Trochim, W. M. (2015). Defining and teaching evaluative thinking: Insights from research on critical thinking. The American Journal of Evaluation, 36(3), 375–388.
Coast, J. (2004). Is economic evaluation in touch with society's health values? BMJ, 329(7476), 1233–1236.
Cordes, J. J. (2017). Using cost-benefit analysis and social return on investment to evaluate the impact of social enterprise: Promises, implementation, and limitations. Evaluation and Program Planning, 64, 98–104. https://doi.org/10.1016/j.evalprogplan.2016.11.008
Cousins, J. B. (2003). Utilization effects of participatory evaluation. In T. Kelligan & D. L. Stufflebeam (Eds.), International handbook of educational evaluation (pp. 245–265). Dordrecht: Kluwer Academic Publishers.
Davidson, E. J. (2005). Evaluation methodology basics: The nuts and bolts of sound evaluation. Thousand Oaks: Sage Publications.
Davidson, J., Wehipeihana, N., & McKegg, K. (2011). The rubric revolution. Paper presented at the Australasian Evaluation Society Conference, September. Retrieved from https://www.betterevaluation.org/sites/default/files/AES-2011-Rubric-RevolutionDavidson-Wehipeihana-McKegg-xx.pdf (Accessed July 2018).
de Fine Licht, J. (2011). Do we really want to know? The potentially negative effect of transparency in decision making on perceived legitimacy. Scandinavian Political Studies, 34(3), 183–201.
DFID (2011). DFID's approach to Value for Money (VfM). United Kingdom: Department for International Development. Retrieved from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/49551/DFID-approachvalue-money.pdf (Accessed July 2018).
Dickinson, P., & Adams, J. (2017). Values in evaluation – The use of rubrics. Evaluation and Program Planning, 65, 113–116.
Drummond, M. F., Schulpher, M. J., Claxton, K., Stoddart, G. L., & Torrance, G. W. (2015). Methods for the economic evaluation of health care programmes (4th ed.). Oxford: Oxford University Press. https://doi.org/10.1016/s0277-9536(96)00398-x
Eddama, O., & Coast, J. (2008). A systematic review of the use of economic evaluation in local decision-making. Health Policy, 86(2-3), 129–141.
Fleming, F. (2013). Evaluation methods for assessing Value for Money. Australasian Evaluation Society. http://betterevaluation.org/sites/default/files/Evaluating%20methods%20for%20assessing%20VfM
Fraile, J., Panadero, E., & Pardo, R. (2017). Co-creating rubrics: The effects on self-regulated learning, self-efficacy and performance of establishing assessment criteria with students. Studies in Educational Evaluation, 53, 69–76.
Froncek, B., & Rohmann, A. (2019). "You get the great feeling that you're being heard but in the end you realize that things will be done differently and in others' favor": An experimental investigation of negative effects of participation in evaluation. American Journal of Evaluation, 40(1), 19–34.
Helfand, M. (2005). Incorporating information about cost-effectiveness into evidence-based decision-making: The evidence-based practice center (EPC) model. Medical Care, II33–II43. https://doi.org/10.1377/hlthaff.24.1.123
Herman, P. M., Avery, D. J., Schemp, C. S., & Walsh, M. E. (2009). Are cost-inclusive evaluations worth the effort? Evaluation and Program Planning, 32(1), 55–61. https://doi.org/10.1016/j.evalprogplan.2008.08.008
Independent Commission for Aid Impact (ICAI) (2018). DFID's approach to value for money in programme and portfolio management: A performance review. February. https://icai.independent.gov.uk/wp-content/uploads/ICAI-VFM-report.pdf
Jacobson, M. R., & Azzam, T. (2018). The effects of stakeholder involvement on perceptions of an evaluation's credibility. Evaluation and Program Planning, 68, 64–73. https://doi.org/10.1016/j.evalprogplan.2018.02.006
Jonsson, A. (2014). Rubrics as a way of providing transparency in assessment. Assessment and Evaluation in Higher Education, 39(7), 840–852.
King, J. (2016). Value for investment: A practical evaluation theory. Auckland: Julian King & Associates Ltd – a member of the Kinnect Group.
King, J. (2017). Using economic methods evaluatively. The American Journal of Evaluation, 38(1), 101–113. https://doi.org/10.1177/1098214016641211
King, J., & Allan, S. (2018). Applying evaluative thinking to value for money: The Pakistan sub-national governance programme. Retrieved from https://www.nzcer.org.nz/system/files/journals/evaluation-maters/downloads/EM2018_207.pdf
King, J., & Guimaraes, L. (2016). Evaluating value for money in international development: The Ligada female economic empowerment programme. eVALUation Matters, Third Quarter 2016. Africa Development Bank. http://idev.afdb.org/sites/default/files/documents/files/Evaluating%20value%20for%20money%20in%20international%20development-.pdf
King, J., McKegg, K., Oakden, J., & Wehipeihana, N. (2013). Evaluative rubrics: A method for surfacing values and improving the credibility of evaluation. Journal of MultiDisciplinary Evaluation, 9(21), 11–20.
King, J., & OPM (2018). The OPM approach to assessing value for money: A guide. Oxford: Oxford Policy Management Ltd. Retrieved from https://www.opml.co.uk/files/Publications/opm-approach-assessing-value-for-money.pdf?noredirect=1 (Accessed July 2018).
Kinnect Group & Foundation North (2016). Kua Ea Te Whakangao Māori & Pacific Education Initiative: Value for investment evaluation report. Auckland: Foundation North. https://www.julianking.co.nz/wp-content/uploads/2019/03/fn-mpei-evaluationreport-f-spreads-ilovepdf-compressedcompressed-minmin-min-min.pdf (Accessed 13 December 2020).
Kirkhart, K. E. (2000). Reconceptualizing evaluation use: An integrated theory of influence. New Directions for Evaluation, 2000(88), 5–23. https://doi.org/10.1002/ev.1188
Kocakülah, M. S. (2010). Development and application of a rubric for evaluating students' performance on Newton's laws of motion. Journal of Science Education and Technology, 19, 146–164. https://doi.org/10.1007/s10956-009-9188-9
Levin, H. M., McEwan, P. J., Belfield, C., Bowden, A. B., & Shand, R. (2018). Economic evaluation in education: Cost-effectiveness and benefit-cost analysis (3rd ed.). Thousand Oaks: Sage Publications.
Lincoln, Y. S., & Guba, E. G. (2011). The roots of fourth generation evaluation: Theoretical and methodological origins. In M. Alkin (Ed.), Evaluation roots: A wider perspective of theorists (pp. 226–241). Thousand Oaks: Sage Publications.
Lu, L., Yuan, Y. C., & McLeod, P. L. (2012). Twenty-five years of hidden profiles in group decision making: A meta-analysis. Personality and Social Psychology Review, 16(1), 54–75.
Martens, K. S. (2018). How program evaluators use and learn to use rubrics to make evaluative reasoning explicit. Evaluation and Program Planning, 69, 25–32. https://doi.org/10.1016/j.evalprogplan.2018.03.006
Menéndez-Varela, J. L., & Gregori-Giralt, E. (2018). Rubrics for developing students' professional judgement: A study of sustainable assessment in arts education. Studies in Educational Evaluation, 58, 70–79.
Oakden, J. (2013). Evaluation rubrics: How to ensure transparent and clear assessment that respects diverse lines of evidence. Melbourne: BetterEvaluation. www.betterevaluation.org
Patton, M. Q. (2008). Utilization-focused evaluation. Thousand Oaks: Sage Publications.
Patton, M. Q. (2012). A utilization-focused approach to contribution analysis. Evaluation, 18(3), 364–377. https://doi.org/10.1177/1356389012449523
Patton, M. Q. (2013). The roots of utilization-focused evaluation. In M. Alkin (Ed.), Evaluation roots: A wider perspective of theorists (pp. 293–297). Thousand Oaks: Sage Publications.
Persaud, N. (2007). Is cost analysis underutilized in decision making? Journal of MultiDisciplinary Evaluation (JMDE:2).
Preskill, H., & Torres, R. T. (2000). The learning dimension of evaluation use. New Directions for Evaluation, 2000(88), 25–37. https://doi.org/10.1002/ev.1189
Roseland, D., Lawrenz, F., & Thao, M. (2015). The relationship between involvement in and use of evaluation in multi-site evaluations. Evaluation and Program Planning, 48, 75–82. https://doi.org/10.1016/j.evalprogplan.2014.10.003
Rudmik, L., & Drummond, M. (2013). Health economic evaluation: Important principles and methodology. The Laryngoscope, 123(6), 1341–1347. https://doi.org/10.1002/lary.23943
Sefton, T. (2000). Getting less for more: Economic evaluation in the social welfare field. CASE Paper 44, Centre for Analysis of Social Exclusion, London School of Economics. Retrieved from http://eprints.lse.ac.uk/6444/1/Getting_Less_for_More_Economic_Evaluation_in_the_Social_Welfare_Field.pdf (Accessed June 2018).
SROI Network (2012). A guide to social return on investment. Written by Jeremy Nicholls, Eilis Lawlor, Eva Neitzert and Tim Goodspeed, and edited by Sally Cupitt. Liverpool, UK: SROI Network. Retrieved from http://www.socialvalueuk.org/app/uploads/2016/03/The%20Guide%20to%20Social%20Return%20on%20Investment%202015.pdf (Accessed July 2018).
Stevens, K., Rogers, P., Boymal, J., & Humble, R. (2008). Evaluation of the stronger families and communities strategy: Qualitative cost benefit analysis. RMIT University CIRCLE. Retrieved from http://mams.rmit.edu.au/phhpu3ty2nm5.pdf (Accessed July 2018).
Svistak, M., & Pritchard, D. (2014). Economic evaluation: What is it good for? A guide for deciding whether to conduct an economic evaluation. London, England: New Philanthropy Capital. Retrieved from https://www.thinknpc.org/publications/economic-analysis/ (Accessed July 2018).
Tremblay, G. H., Bertrand, F., & Fraser, M. (2017). Using rubrics for an evaluation: A national research council pilot. Canadian Journal of Program Evaluation, 32(2).
White, D., & Silloway, T. (2016). Cost-benefit analysis. Evidence-Based Policymaking Collaborative, Sept 2. Retrieved from https://www.evidencecollaborative.org/toolkits/costbenefit-analysis (Accessed July 2018).
Wittenbaum, G. M., Hollingshead, A. B., & Botero, I. C. (2004). From cooperative to motivated information sharing in groups: Moving beyond the hidden profile paradigm. Communication Monographs, 71(3), 286–310.
Yates, B. T. (2012). Step arounds for common pitfalls when valuing resources used versus resources produced. In G. Julnes (Ed.), Promoting valuation in the public interest: Informing policies for judging value in evaluation. New Directions for Evaluation, 133, 43–52.
Zwart-van Rijkom, J. E., Leufkens, H. G., Busschbach, J. J., Broekmans, A. W., & Rutten, F. F. (2000). Differences in attitudes, knowledge and use of economic evaluations in decision-making in The Netherlands. Pharmacoeconomics, 18(2), 149–160.
Christina Peterson (MS) is a doctoral researcher in the Evaluation, Statistics, and Measurement program at the University of Tennessee. Her research focuses on the social psychology of evaluation, evaluation culture, and evaluation methods. She is also a research consultant for the Research Computing Support office.

Gary Skolits (EdD) is a tenured associate professor of Evaluation, Statistics, and Measurement at the University of Tennessee. He recently completed 10 years of service as the Executive Director of the University of Tennessee's Institute for Assessment and Evaluation. His research interests include evaluation methods, higher education leadership, and intervention field studies.