Advancing public policy evaluation: Learning from international experiences




Book Reviews

Advancing Public Policy Evaluation: Learning from International Experiences, edited by J. Mayne, M. L. Bemelmans-Videc, J. Hudson, and R. Conner. Amsterdam, The Netherlands: Elsevier Science Publishers B.V., 1992, 327 pp. Reviewed by: MARK OROMANER

Mark Oromaner, Dean of Planning and Institutional Research, Hudson County Community College, 901 Bergen Avenue, Jersey City, NJ 07306.

Evaluation Practice, Vol. 16, No. 1, 1995, pp. 95-97. ISSN: 0886-1633. Copyright © 1995 by JAI Press, Inc. All rights of reproduction in any form reserved.

This volume is a contribution to what I hope is an emerging literature on the comparative and international analysis of evaluation research and policy making. Another recent, more modest, contribution to this literature is the work of Cave, Hanney, Kogan, and Trevett (1989) on performance indicators in higher education. Those authors were particularly interested in the relationship between the development of such indicators and political systems. Unfortunately for those interested in transnational regularities, Cave et al. report that “...the introduction of performance indicators may be adventitious, or may be used in different places to advance quite different ends” (p. 54). In contrast to the focused concern of the Cave et al. publication, that is, higher education, Advancing Public Policy Evaluation is not limited in its policy area. Examples include evaluation of the FBI in the USA, research and development and employment and immigration policy in Canada, approaches to homelessness and the elderly in Spain, and national audits in a number of countries. Indeed, this book emerged from what was perhaps the first international conference on policy and program evaluation. The conference was attended by 35 participants from 13 nations and was held in December 1990 in The Hague.

The volume reflects the international composition of the conference. The four editors represent three countries (Canada, The Netherlands, USA), and the authors of the 27 papers represent 12 countries. I was surprised that only two of the 27 papers were coauthored, and that both of these were written by a team (two authors) from the UK. There appears to be little international collaboration in this area. The authors represent the USA, Canada, Australia, and a number of Western European countries. Michael Hendricks is identified as an American who, since 1984, has been an independent consultant in India. However, his contribution is a discussion of the role of the evaluator and does not reflect experiences in India. Although non-Western and Eastern European countries are not represented, the volume contains an excellent sample of Western countries that participated in the emergence of policy evaluation in the 1960s (e.g., USA, Canada, and Sweden) or in the 1980s (e.g., UK, Norway, and The Netherlands).

One interesting hypothesis concerning the emergence of first-wave (1960s) vs. second-wave (1980s) countries is that the former benefited from the boom years of the 1960s, while the latter came to evaluation during a period of budgetary constraint. Therefore, first-wave countries were more likely to view evaluation as a means of identifying choices among alternative policies, while second-wave countries were more likely to view evaluation in terms of cost factor assessments. It does not take much political insight to suggest that at least one first-wave nation, the USA, now resembles the second-wave nations in terms of emphasis on cost effectiveness. Who now is willing to identify himself or herself with the Great Society?

The 27 papers are distributed among the three parts of the book. In addition, one or more of the editors contributes a brief overview to each of the parts. Part 1, “Institutionalization of Program Evaluation: A Comparative Perspective,” comprises an overview by Mayne and eight papers. In a summary of reports on attempts to institutionalize evaluation at the national level in France, Canada, the UK, and Australia, Mayne concludes that, “Comparative research would be quite interesting, but has yet to be done.
Indeed, it is not even easy to conclude on the success in any one country, nor even to know how to measure success” (pp. 4-5). In his contribution on evaluation in Sweden, Evert Vedung points out that although it is “almost dogma” that modern evaluation research originated in the USA during the 1960s, evaluation research existed 15 years earlier in the context of high school reforms in Sweden. The obvious question to be raised is: Why is it that evaluation started so early in Sweden? Vedung does not provide an answer; however, he does point out that the “Principle of Publicity,” dating to the 1766 Freedom of the Press Act, gave every citizen the right of access to official documents. For those more interested in comparative analysis, Vedung raises the following questions: Is it really the case in other countries that evaluation started in the school sector? What are the characteristics of the school sector, or of other triggering sectors in other countries, that promote early adoption of evaluation?

Part 2, “The Conduct of Evaluation in National Contexts,” comprises an overview by Hudson and twelve papers. The first five papers are devoted to evaluation in national audit offices. Although the most common kind of information that auditors deal with is contained in financial reports, the function of audit offices varies from country to country. Today many national audit offices engage in performance audits and are concerned with the three E's: Economy, Efficiency, and Effectiveness. The remaining seven papers in this part are case studies of approaches to evaluation in different policy sectors in a number of countries.

Part 3, “The Outcomes of Evaluation,” comprises an overview by Bemelmans-Videc and Conner, and seven papers. The first four papers point to the active role of the evaluator in enhancing the probability of the utilization of the evaluation. Olaf Rieper stresses the importance of an understanding of the organizational context, and Mette Qvortrup stresses



the importance of the initial choice of evaluation model. Göran Arvidsson suggests that at the beginning of the process, a clear evaluation contract should be agreed upon. The contract should include time schedules, financial matters, and the intentions of the parties. Michael Hendricks adds that evaluators should come to view themselves as good personal (e.g., athletic) coaches. This model will enable us to view the full range of our contributions. Once these are recognized, “the more we can document our full worth to ourselves and to others” (p. 261). The remaining three papers shift attention to the role of the client, customer, stakeholder, citizen, etc. The fundamental issues here are definitional (“Who are they?”) and methodological (“How are they to be represented in the evaluation process?”).

My description of the contents indicates that the editors were successful in including a broad range of papers, and that these pose questions that form an agenda for the next stage in the development of an international and comparative study of policy and program evaluation. This volume makes a greater contribution to the comparative study than it does to the international study. A more accurate subtitle would have been Learning from National Experiences. The editors could have made major contributions if they had written a number of state-of-the-knowledge papers, rather than three brief overviews.

Policy-makers, administrators, and evaluators should read this book. Although few readers are likely to find each of the papers to be of interest, through a reading of selected papers readers will have an opportunity to transcend their professional ethnocentrism, to become more politically sophisticated about their work, to recognize their potential for influence in the evaluation process, and to think about those basic issues they gloss over in their day-to-day work experiences.

REFERENCES

Cave, M., Hanney, S., Kogan, M., & Trevett, G. (1989). The use of performance indicators in higher education: A critical analysis of developing practice. London: Jessica Kingsley Publishers.