Book Reviews
tions are repeated in chapters of the book. For example, the McLaughlin and Patton discussion about the role of technology in evaluation appears in the "Framing Questions" section of the "Role of the Evaluator" chapter and again in the "Data for Multilevel Use" section of the "Other Methodological Issues" chapter. I was having déjà vu all over again!

T. Schwandt: Alkin apparently thought that because these conversations applied to several topics discussed in the book, it was necessary to repeat them. But he could just as easily have cross-referenced them in a footnote. Speaking of footnotes . . . What did you think about Hendricks and Ellett's responses to the conversations in their footnotes?

P. Magolda: The inclusion of discussants' reactions is an excellent idea. Unfortunately, using footnotes as the medium didn't allow respondents ample space for their critiques. For example, Hendricks, reacting to a participant's reason for evaluation, stated "I find that we usually do evaluations for lots of different reasons, not just one or two" [p. 48]. The reader never learns Hendricks's reasons. Not only was the amount of response space insufficient, but the location of the comments was distracting. I found myself volleying between the dialogue and the notes. This format undermines the author's intent of presenting a free-flowing dialogue.

T. Schwandt: I'd characterize the Ellett and Hendricks notes as running, random kibitzing. I generally found the kibitzing more annoying than illuminating . . . Much of what we've been talking about, it seems, deals with the format and presentation of the book. For me, anyway, the form often inhibited my appreciation of the content. What do you think?
P. Magolda: Maybe I appreciated the content more than you. The misuse discussion, including Alkin's misuse categorization model [p. 293], broadened my understanding of the ways in which evaluation can be misused. I liked the emphasis placed on the evaluator-client relationships and on evaluation planning. I also liked the idea that the text itself modeled the fluid nature of evaluation thinking and practice.

T. Schwandt: Interesting observations. I'm not sure for whom the book is targeted.

P. Magolda: As a graduate student, I would prefer reading many of the original publications referenced in the discussions, rather than this more fragmented dialogue. In defense of the book, it does provide glimpses into the research agendas and current thinking of an impressive collection of evaluation theoreticians and practitioners. It highlights the diversity of evaluation methodologies and methods.

T. Schwandt: I think that graduate students interested in a loosely organized survey of issues in evaluation could benefit from this book, provided it was supplemented by an examination and discussion of references of the kind you refer to. I think readers attracted to the title's promise of "debates" on evaluation will be somewhat disappointed, and they may not be pleased to find that a portion of the book is devoted to the Weiss and Patton papers. To be sure, important, interesting, and provocative issues that contribute to our understanding of evaluation practice are to be found in the dialogue . . . I need some more coffee; whose idea was it to meet this early? Surely not mine.

P. Magolda: I suppose you're going to stick me with the bill . . .
Back to Work: Testing Reemployment Services for Displaced Workers by Howard S. Bloom. Kalamazoo, MI: W.E. Upjohn Institute for Employment Research, 1990, 180 pp.

Reviewer: Richard H. Price

This book is a detailed review of what the author describes as "a rigorously designed and carefully implemented randomized field experiment to study the implementation, impacts, and costs of job-search assistance and retraining services for displaced workers" (p. v). The social importance of such evaluations is substantial. The number of displaced workers, that is, those
who have lost stable, well-paying jobs because of economic changes or changes in technology, is estimated to be about 1 million a year, or about 10% of the unemployed. National efforts in the 1980s to respond to this problem were undertaken primarily under the Job Training Partnership Act (JTPA), enacted in 1982 and implemented in 1983. The author of this monograph
has participated in a number of evaluation studies of local programs implemented under this act, which were commissioned and funded by the Texas Department of Community Affairs. The Department had the willingness, insight, and persistence to support randomized evaluations of programs designed to help displaced workers reenter the work force. The current volume is a detailed description of the evaluation of the Texas Worker Adjustment Demonstration, which involved a randomized experimental evaluation of 2,192 displaced workers in three sites during 1984 and 1985. While the monograph omits detailed descriptions of the programs themselves, reasonably detailed descriptions of the program context, the evaluation design, the nature of the data sample, the nature of services received, and evaluation of program impacts are all provided.

As an evaluation monograph, this book has several merits. Findings reported are fully documented in most cases, and the author carefully qualifies interpretations of findings whenever it seems appropriate to do so. These are virtues that we would expect of any competent evaluation of an important social program. However, probably the strongest point of this book is that the author "teaches as he goes" throughout the monograph. By this I mean that Bloom is an experienced evaluator who is sensitive to the many things that can compromise even a rigorously designed randomized field experiment. As he identifies the various pitfalls that all such designs are heir to, he not only identifies the problems, but in many cases describes in detail the way in which the current evaluation attempts to deal with the problems that inevitably will confront those attempting to evaluate a randomized field experiment. For example, Bloom provides a thoughtful commentary on the practical problems associated with maintaining random assignment in an evaluation design, particularly as one deals with the humanitarian impulses of local program administrators who wish to provide different mixes of services that compromise random assignment and degrade the design.

Perhaps the best example of Bloom's attempt to "teach as he goes" is his treatment of attrition: failure to show up to obtain services in the intervention on the one hand, and nonresponse in subsequent survey waves on the other. In this case, Bloom discusses program efforts to minimize such problems as "no-shows," but also provides fairly sophisticated appendices describing analytic strategies to improve estimates of program impact when no-shows are an inevitable problem in randomized field experiments. The analyses reported here are based on strategies developed by Bloom (1984) and more recently used successfully in other randomized field experiments evaluating the impact of job-search programs on displaced workers (Vinokur, Price, & Caplan, 1991).
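For readers who have not encountered it, the core idea of the Bloom (1984) no-show adjustment can be sketched in minimal form. Under the assumption that no-shows experience no program effect, the impact on those who actually received services is recovered by rescaling the experimental (intent-to-treat) contrast by the treatment-group participation rate:

$$\hat{\Delta}_{\text{treated}} = \frac{\bar{Y}_T - \bar{Y}_C}{p}$$

where $\bar{Y}_T$ and $\bar{Y}_C$ are mean outcomes for those assigned to treatment and control, and $p$ is the proportion of the treatment group that actually showed up for services. The notation here is illustrative rather than Bloom's own; his appendices develop the estimator, its assumptions, and its standard errors in detail.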
Another nice feature of this monograph is the use of subgroup analyses, particularly comparisons of female and male participants, which suggested that female participants enjoyed substantially larger program-induced earnings gains than did men, and that the gains for men were actually more short-lived. In addition, subgroup analyses suggested that the relationship between education, age, prior occupation, and other participant characteristics and program impact is extremely complex, and that their roles in program impact probably vary substantially across different programs and different target groups.

If there is a concern I have about this book, it has to do largely with its atheoretical approach to understanding reemployment programs and evaluating their impact. There is little attempt to interpret or to provide a larger theoretical framework to understand the dynamics of program impact. Without some appropriate theoretical frame or some understanding of the original guiding ideas underlying the JTPA demonstration projects, it is hard to learn many larger lessons from this monograph.

Nevertheless, this is a workmanlike volume that will have something to teach both experienced social scientists interested in applied science and program evaluators who have a sound background in research design and multivariate methods. In addition, experienced evaluators should be reminded of a number of important evaluation issues by reading this book. On the other hand, it is not clear that this volume will help program developers much, and its results are too limited to inform broad policy concerns. In sum, this is an evaluation report by an experienced evaluator who has managed to accomplish some teaching by example while providing a careful program evaluation of governmental efforts important to both personal and social well-being.
REFERENCES

BLOOM, H.S. (1984). Accounting for no-shows in experimental evaluation designs. Evaluation Review, 8(2), 225-246.
VINOKUR, A.D., PRICE, R.H., & CAPLAN, R.D. (1991). From field experiments to program implementation: Assessing the potential outcomes of an experimental intervention program for unemployed persons. American Journal of Community Psychology, 19(4), 543-562.