Benefit-cost analysis for program evaluation


196

BOOK REVIEWS

studies. Given that our emphasis is likely to be on residual examination and model reformulation, there is little reason to have started the analysis by estimating and testing the simple regression model. We could have begun with a regression-type method that does not provide optimal estimates or hypothesis tests but which is designed specifically to facilitate residual examination and model reformulation. Exploratory data analysis includes many such methods. The weaknesses of this book are minor but worthy of note. First, it fails (as do its competitors) to explore the logical foundations of data analysis and thus fails to dislodge the notion that exploratory methods are atheoretic. Exploratory methods indeed look best where the utopian assumptions required for classical statistical analysis have little theoretic or empirical foundation. Classical statistical analysis tests theory; exploratory analysis suggests theory to be tested. Second, the book does not emphasize why a particular method is being used on a particular data set. This omission may lead the reader to conclude that the success of the methods is due only to the fertility of Tukey's mind and that the exploratory approach has little to offer the neophyte data analyst. This conclusion is false but is fueled by the author's style. If a social scientist wants to taste the methods of exploratory data analysis at minimum cost, the book by Hartwig and Dearing (1979) or the article by Leinhardt and Wasserman (1979) is strongly recommended for its simplicity and brevity. If that taste is pleasant, and if the reader decides to add the exploratory approach to his or her analytic quiver, then Tukey's book is a must. Those who like what they read and aspire to become proficient exploratory analysts would be well-advised

to consult the companion volume on regression by Mosteller and Tukey (1977); new books on data analysis by Gnanadesikan (1977) and Belsley et al. (1980); and articles showing the use of exploratory methods by Mayer (1978) and McNeil and Tukey (1975). Computer methods for exploratory data analysis are described in a book by McNeil (1975) and in an announcement to appear by Stine (1980) in The American Statistician.

References

BELSLEY, D. A., KUH, E., & WELSCH, R. E. Regression diagnostics: Identifying influential data and sources of collinearity. New York: John Wiley, 1980.
GNANADESIKAN, R. Methods for statistical data analysis of multivariate observations. New York: John Wiley, 1977.
HARTWIG, F., & DEARING, B. E. Exploratory data analysis. Beverly Hills, Calif.: Sage Publications, 1979.
LEINHARDT, S., & WASSERMAN, S. S. Exploratory data analysis: An introduction to selected methods. Sociological Methodology, 1979, 10, 311-365.
MAYER, L. S. Estimating the effects of the onset of the energy crisis on residential energy demand. Resources and Energy, 1978, 1, 57-92.
MCNEIL, D. R. Interactive data analysis: A practical primer. New York: John Wiley, 1975.
MCNEIL, D. R., & TUKEY, J. W. Higher-order diagnosis of two-way tables, illustrated on two sets of demographic empirical distributions. Biometrics, 1975, 31, 487-510.
MOSTELLER, F., & TUKEY, J. W. Data analysis and regression: A second course in statistics. Reading, Mass.: Addison-Wesley, 1977.
STINE, R. A. An exploratory data analysis package. The American Statistician, in press.

Benefit-Cost Analysis for Program Evaluation, by Mark S. Thompson. Beverly Hills, Calif.: Sage Publications, Inc., 1980, 310pp., $20.00 (hardcover), $9.95 (softcover). Reviewer: Daniel B. Fishman

Benefit-cost (B/C) and cost-effectiveness (C/E) analyses are very important topics in program evaluation, especially in light of the new fiscal conservatism that has recently been sweeping the country. In Benefit-Cost Analysis for Program Evaluation, Mark S. Thompson presents a detailed discussion of these analytic techniques for aiding decision-makers with such questions as: (1) Should a regional government body build a proposed toll road, in light of the economic gains that travelers, consumers, producers, and some landowners will accrue; the economic losses suffered by some transporters; and the ecological losses suffered by landowners through increased pollution and traffic? (2) How much should a governmental body with an interest in saving lives spend on such diverse programs as

neonatal screening, atmospheric pollution control, mobile coronary care units, and road safety? (3) Which of a number of alternative programs for supervising adult probation should a correctional department choose, based upon differential costs, recidivism rates, the probationers' employment success, and the probationers' social adjustment? Thompson defines benefit-cost analysis as "assessing the good and bad aspects of a decision alternative by valuing them in terms of money. Benefit-cost analysis . . . uses monetary valuation to achieve commensurability of all decision attributes" (pp. 15-16). For example, in the road-building illustration given above, the major benefits, such as increased land values along the road and increased business produced by the road, can


be transformed into dollar terms. Likewise, the costs of the road, such as its construction expense and the potential freight revenue lost by present airline companies to new trucking businesses that would use the road, can be transformed into dollar terms. The final phase of B/C analysis, therefore, is to assess the net dollar gain of the project after the monetary losses have been subtracted from the monetary gains. Much of the book consists of an elaboration of the basic B/C process. For example, there are different ways for monetary values to be assigned. The most straightforward method is to value a product or service based upon its open market price. When there is no such market, other techniques can be used, such as "compensating variation," which is "for a program beneficiary, the amount of money he could pay so that, with the program but having [been] paid this money, he would be just as well off as without the program and without the payment" (p. 41). Once monetary values have been assigned to both the most direct costs and benefits, the analyst must take into consideration the concept of "discounting," which recognizes the fact that if a decision-maker has money in hand for a particular project, this money could alternatively be loaned to yield a benefit at present interest rates; and likewise, if on the other hand the decision-maker has to borrow funds to implement the project, there is an additional cost of interest on the borrowed funds. Another variable to be considered is "distributional effects," which refer to the extent to which the proposed program redistributes resources among the rich and the poor. Still other relevant variables are "external effects," which refer to the "incidental impact of an action in the private sector on persons with no decisional control over it" (p. 70).
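The basic B/C arithmetic described above, assigning dollar values to each benefit and cost, discounting future amounts to present value, and reporting the net dollar gain, can be sketched in a few lines. This is a minimal illustration only: the toll-road figures, the 5% discount rate, and the function names are invented for this sketch, not taken from Thompson's book.

```python
def present_value(amount: float, rate: float, year: int) -> float:
    """Discount a future dollar amount back to year 0 at the given rate."""
    return amount / (1 + rate) ** year

def net_benefit(benefits, costs, rate):
    """Net present dollar gain: discounted benefits minus discounted costs.

    `benefits` and `costs` are lists of (amount, year) pairs, where `year`
    is the number of years from now at which the amount is received or paid.
    """
    pv_benefits = sum(present_value(a, rate, y) for a, y in benefits)
    pv_costs = sum(present_value(a, rate, y) for a, y in costs)
    return pv_benefits - pv_costs

# Hypothetical toll-road illustration: construction is paid up front,
# while benefits (increased land values, new business) arrive later.
benefits = [(500_000, 1), (500_000, 2)]   # (dollars, year received)
costs = [(800_000, 0)]                     # construction cost, paid now
print(round(net_benefit(benefits, costs, rate=0.05), 2))
```

A positive result indicates that, at the assumed discount rate, the project's monetary gains outweigh its losses; the same benefit stream evaluated at a higher rate would look less attractive, which is why the choice of discount rate matters.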
While the number of variables and subsequent complexities increase as Thompson delves deeper into the B/C process, when all is said and done, the total costs and benefits can be summarized in one number indicating the net dollar gain or loss of implementing the proposed project. Thus, B/C analysis is very elegant in principle. However, the reader who is primarily identified with the planning and evaluation of human services will find that for all its elegance and sophistication, benefit-cost analysis is not appropriate for evaluating social programs. Thompson himself points out: "in most social program areas . . . valuation of all benefits and costs is so difficult and controversial that methods avoiding these valuations are prized" (p. 224). This becomes apparent in the latter two examples given above: how can we dollarize the value of saving a human life in preventive health programs, and how can we dollarize variables such as "employment success" and "social adjustment" outside incarceration in evaluating probation programs? As another instance of the inappropriateness of trying to dollarize the effects of social programs, Rivlin (1971)

discusses the choice of funding one of two different programs - one to teach poor children to read, and the other to find a cancer cure. Rivlin notes that while it might be possible to hypothesize the market value of education and health in terms of investment to improve the nation's productivity and the individual's earnings, "analysis based on future incomes ignores what most people would regard as the most important benefits of health and education. Cancer is a painful and frightening disease. People would want to be free of it even if there were no effect on future income. Reading is essential to culture and communication; it opens the doors of the mind" (p. 57). Still another reason for not employing benefit-cost analysis in social programs is the problem that better cost-benefit ratios are usually obtained for educational programs involved with brighter students, job-training programs involved with more skilled trainees, and health programs involved with less sick individuals. However, it would not seem reasonable then to conclude that health, education, and welfare programs should be funded only for brighter, more skilled, and less sick individuals (Fishman, 1975). Thompson has a well-organized discussion of an alternative approach for analyzing the types of human service program effects we have been discussing, namely, "cost-effectiveness analysis (C/E)." In this approach, a decision option is evaluated "(1) by making all effects commensurable in terms either of money or of one unvalued [non-dollarized] output unit and (2) by comparing these dimensions of impact" (p. 226). Such an analysis can be made when two programs are being compared which deal with the same type of beneficiaries who have the same kinds of target problems. Since the target problems are the same, whatever measurement of benefit is employed for one group is applicable to the other group, and there is no need to dollarize the benefits.
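The C/E comparison just described can be sketched as a cost-per-outcome-unit calculation: the outcome stays in its own natural units and is never converted to dollars. The program names, costs, and outcome figures below are entirely hypothetical, invented for illustration rather than drawn from Thompson's text.

```python
def cost_effectiveness(total_cost: float, outcome_units: float) -> float:
    """Cost per unit of outcome, e.g. dollars per point of improvement
    on some shared (non-dollarized) outcome measure."""
    return total_cost / outcome_units

# Two hypothetical programs serving the same beneficiaries with the same
# target problem, so the same outcome measure applies to both.
programs = {
    "program A": cost_effectiveness(total_cost=60_000, outcome_units=150),
    "program B": cost_effectiveness(total_cost=45_000, outcome_units=100),
}

# On this single criterion, the preferred program is the one achieving
# an outcome unit at the lowest cost.
best = min(programs, key=programs.get)
print(best, programs[best])
```

Because both programs are scored on the same outcome scale, the ratios are directly comparable, which is exactly what allows the analyst to avoid the controversial dollar valuations that full B/C analysis requires.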
For example, if one group of neurotic depressives receives psychotherapy alone and another group, drug therapy alone, the measures of treatment effectiveness relevant for one group, such as decreased depression and improved family relationships, are as relevant for the other group. Thus, there is no need to transform the outcome variables into monetary terms. While Thompson does recognize the limitations of benefit-cost analysis for social programs and does have many interesting things to say about cost-effectiveness analysis, he generally presents the former as the ideal for which program evaluators should strive. For instance, the book begins with a presentation of the cost-benefit paradigm using a hypothetical business application of it, which Thompson describes as "a context that avoids many of the murky conceptual problems faced in public benefit-cost analysis" (p. 6). In light of the limitations of B/C analysis in non-business contexts, this reviewer believes that the organization and theme of Thompson's book are misleading, at least to human


service evaluators. Rather than devoting the majority of the book to trying to adapt the B/C model from business to public program applications, I believe the book would have been more helpful to human service evaluators if the first few chapters had discussed the intrinsic difficulties of applying B/C analysis to most social programs, and then the major part of the book had presented a detailed discussion of cost-effectiveness techniques, which are appropriate for evaluating human service programs. Even so, this is a very worthwhile book, chock full of useful and stimulating ideas about B/C and C/E analysis. Moreover, the book is very clearly outlined by numbered sections, so that it is quite easy to follow the flow of the author's thought. However, to the typical reader of this journal who is a human services program planner or evaluator, two caveats are necessary. First, while Thompson claims (on page 4) that the book is written for "the intelligent lay reader," there is a conceptual density in the writing that generally assumes two or three middle-level undergraduate courses in economics. Related to this point, a glossary of the more technical economic terms employed (such as "potential Pareto improvements" and "consumer surplus") would have been very helpful for the less economically sophisticated reader. Secondly, as discussed above, the majority of the book, which is devoted to the subtleties of benefit-cost analysis, is less relevant to the human services evaluator than the single chapter on cost-effectiveness analysis. On the other hand, when David Stockman, President Reagan's choice to head up the Office of Management and Budget, and Jack Kemp, new chairman of the House Republican Conference, criticize government-sponsored programs as being "devoid of policy standards and criteria for cost-benefit, cost-effectiveness, and comparative risk analysis" (Stockman & Kemp, 1980), it is an important sign to the human services program evaluator that he or she should be knowledgeable about these different techniques and their relative applicability to the programs with which the evaluator is associated.

References

FISHMAN, D. B. Development of a generic cost-effectiveness methodology for evaluating the patient services of a community mental health center. In J. Zusman & C. R. Wurster (Eds.), Program evaluation: Alcohol, drug abuse, and mental health services. Lexington, Mass.: Lexington Books, 1975.
RIVLIN, A. M. Systematic thinking for social action. Washington, D.C.: The Brookings Institution, 1971.
STOCKMAN, D. A., & KEMP, J. F. Memo to Reagan: "Avoiding an economic Dunkirk." New York Times, December 14, 1980, Section F, p. 19.

Program Evaluation: Methods and Case Studies, by Emil Posavac and Raymond G. Carey. Englewood Cliffs, N.J.: Prentice-Hall, 1980, 350pp., $15.95 (hardcover). Reviewer: Leonard Saxe

A former mentor once advised me not to select a textbook for a course that was "too good" or "too compatible" with my own view of a field. He reasoned that a superb text might overshadow my own contributions and that students would not, as a consequence, appreciate me as much. Posavac and Carey's recent text, Program Evaluation: Methods and Case Studies, brings to mind my mentor's counsel. Although the text competently deals with most of the issues one would want to cover within an introductory evaluation course, it treats many issues superficially and, at least for a teacher who is committed to a scientific view of evaluation, the orientation is problematic. To be fair, the book is written for an audience of both students and non-evaluator practitioners. As such, the book's eclectic and simplified approach to evaluation was deliberate and may be favorably received by many teachers and evaluators. The book is clearly written, although not excitingly so, and is designed as a practical introduction to program evaluation for those involved in human service programs. In contrast to some of the academic treatises

on evaluation, Posavac and Carey organize their book according to the steps that one would take in developing an evaluation study, from planning to getting the results of an evaluation used. They include, as case study material, a number of reprinted evaluation studies and an appendix which provides a guide to basic statistics. Each chapter is replete with suggestions about how to approach evaluation problems and how to tackle the actual conduct of a program evaluation study. The chapters also include study questions and suggestions for further reading. From my perspective as a scientific evaluator, the reader seems to be eased into the “hard” science of evaluation. The initial chapters discuss planning and monitoring and present many evaluation tasks as simple extensions of good management practice. Outcome evaluation is introduced later in the book and, here too, the reader is slowly introduced to concepts such as validity and the rationale for true experimentation. The designs for outcome evaluations are, similarly, ordered from least to most rigorous. An initial chapter discusses individual assessments, the next discusses non-experimental evaluation designs and experi-