Evaluation and Program Planning 28 (2005) 121–122 www.elsevier.com/locate/evalprogplan
Book review

Evaluation as moral enquiry

T.A. Schwandt, Evaluation Practice Reconsidered (2002, Peter Lang Publishing, New York, NY)

Thomas Schwandt provides a consolidated overview of his position on evaluation in this book, drawing on a set of previously published essays in research and evaluation journals. Having heard him speak on several occasions, I welcomed the opportunity to read Schwandt's arguments outlined in detail, for the ideas he expounds are not simple. I found myself re-reading the book chapters to come to grips with the complex ideas he advances.

Schwandt challenges the notion that the application of social science knowledge has led to national progress (in this case in the United States of America). He asserts that normative social science has not been the expected engine for the improvement of social policy and its implementation. So, by implication, most evaluation work has been ineffective. This is an attack on scientific rationality as the dominant form of knowledge for decision-making about social issues in a civil society. However, he also believes that not all evaluation approaches have been equally ineffective. Schwandt acknowledges the attempts of those evaluators who encourage the involvement of stakeholders in evaluation through negotiation, deliberative dialogue, and attention to conflict resolution. In his view, however, this does not go far enough: evaluation remains a technology, still dependent on traditional methods of social enquiry. The ability to fully evaluate social practices is restricted because moral enquiry is not included in evaluation practice.

Schwandt thinks evaluation should be more like a moral science than an applied science. This involves attention to issues consistent with 'critical reflection on the kind of life worth living that comes with human experience'. There is a role for evaluators to question the values that shape institutions and programs, which he labels moral probing.
This goes beyond the exhortations of theorists who believe in democratic dialogue to encourage use of evaluation findings based on empirical information, to an investigation of the reasons for the value stances held by stakeholders. Dialogue is seen as a means through which stakeholders open up to conversations that can lead to self-transformation and an understanding of other parties' points of view. An evaluator working under these assumptions must bring a special sensitivity to organisational settings, to assist stakeholders to identify the problem that must be investigated, and to encourage a form of grounded social criticism which identifies the 'goods' of practice and the structures that support them, using socio-anthropological techniques. The evaluator is no longer an expert but a social commentator or critic exploring the meanings of programs with stakeholders, by encouraging reflective practice. The evaluator working in a moral enquiry mode is continuously engaged, with the evaluation being undertaken contiguously with program delivery. This is not new to proponents of participatory approaches to evaluation. However, what is novel is that Schwandt sees moral discourse as the new science of evaluation, which 'would allow us to rehabilitate and cultivate several natural human capacities and thereby allow evaluation to be more continuous with the way we live in the world' (p. 47).

Evaluation of this kind is linked to praxis, similar to craft knowledge, practical wisdom or ethical know-how (phronesis); a type of human engagement that is embedded within a tradition of communally shared understandings and values that are connected to professional experience. Essential to praxis is the use of principles and expertise to deliberate; this is known as a dialectical approach. In organizations, dialecticism involves rhetoric, the art of persuasion, and the ability to move an audience to action. So, for Schwandt, evaluation practice is based at the local level, involving and serving the needs of practitioners. Evaluation should be seen as an 'activity undertaken by practitioners of all kinds in which they seek to judge again and again, from one occasion to the next, whether they are doing the right thing and doing it well'.
Within organizations, practitioners need to make sound judgements about social practice on an ongoing basis. What to do is often a point of debate, and so there is ambiguity, ambivalence and disruption. Schwandt sees the benefit of an evaluator in the roles of teacher, interpreter and assistant, helping practitioners better understand one another's value positions and the practice based on those positions. This role sits alongside, rather than replaces, the evaluator's use of rationality and reason. This is clearly a rationale for more interactive and participatory evaluation, and a diminution of
the special place of traditional scientific expertise within these processes.

I considered these ideas in terms of my own practices, many of which have involved working inside organizations, using participatory and interactive approaches to evaluation. Recently, I was involved in an ongoing evaluation of training programs run by a government agency. An objective was to build up the evaluation capacity of the agency over time. As part of this process, we (an internal staff member and myself) routinely reported our findings to managers and members of the training team. The objective was to encourage the adoption of the findings of each specific evaluation into the future work of the team. While the team were interested in the findings, I found that a good deal of time in these reporting sessions was taken up by the trainers discussing their own views about what was effective and what they could do to make their work more effective. I found myself, as chair, probing staff about the assumptions they were making when they proffered their viewpoints. Evidently I was unconsciously operating in ways that were consistent with what Schwandt is suggesting in this book.

In the case I have just outlined, I saw decision-making being dependent on a combination of knowledge of two kinds: findings from systematic enquiry and the craft knowledge of practitioners. I must confess to a preference
for the combination of these kinds of knowledge in agency-level decision-making. This may not be too far from what Schwandt is suggesting. In fact, he says that evaluators should not cast aside scientific knowledge, but recognise that knowledge based on research can be fallible and may not be relevant in some contexts. It seems to me that there is a distinction between traditional 'research' in this context and the production of empirical findings that are collected at agency level in response to the information needs of stakeholders in that agency.

This is a book that should be read by all evaluators. It contributes to the ongoing debate about the nature of evaluation, and about where evaluation practice begins and ends. It should be particularly useful for those engaged in organisational-level evaluation, particularly those interested in transformative principles. It is not an easy read, and the use of unfamiliar terminology may rankle with some. We need philosophers to continue to challenge what we do, and Tom Schwandt does this better than most.

John M. Owen*
Centre for Program Evaluation, The University of Melbourne, Parkville, Vic., Australia
E-mail address:
[email protected]
*Tel.: +61 3 8344 8371; fax: +61 3 8344 8490.