From the Editor
A Stewardship Report to AEA Members
This is my last issue as editor of AJE. Incoming editor Mel Mark, currently Associate Editor, will take over the helm with issue 20(3), and I've passed on to him all of the AJE (and earlier EP) files and manuscripts. To ensure a smooth transition, we recently spent two days together examining all those manuscripts, and Mel has asked me to inform their authors that (1) he is in general agreement with the feedback and judgments I have already shared with them, and (2) he expects all authors who are revising or rewriting manuscripts to respond fully to the feedback they have received before resubmitting their manuscripts to him.

Readers should not draw the wrong inference from learning that Mel and I carefully discussed all the cargo the good ship AJE is carrying at the time he takes over command. Those who know Mel at all well already know that he will chart his own course, choose his own crew, and re-outfit this ship any way he deems best. Still, because he has now served as associate editor for nearly a year and has heavily influenced the course AJE now steers, don't be shocked if he decides that the best course for AJE lies, at least for now, in the direction we have already steered together with our editorial colleagues. And although Mel has honored me by inviting me to serve hereafter on AJE's Editorial Board, that is very much (and most appropriately) a shore role, so no one need fear that even my ghost will be found stalking the decks of AJE in the future. As for the journal's future, it is an understatement to say that, with Mel Mark, it is left in very good hands.

Accounting for AJE's Status and Progress

As the journal now passes to another editor, it is time for me to (1) account to AEA members for what has transpired during my watch, and (2) report on my stewardship to the AEA Board of Directors, who charged me with making a good journal even better. Knowing that self-report data are always a bit suspect, perhaps I can boost the credibility of this self-accounting by leading off with one indisputable fact: any improvements in this journal in the past few years are largely, if not entirely, due to the tremendous editorial crew with which I've been blessed during my editorship. Indeed, my greatest accomplishment as AJE editor has probably been choosing and obtaining the commitment of great editorial advisory board members and associate and assistant editors, who have rendered such outstanding service that it would have been hard to fail in our efforts. I will list shortly those to whom credit is due for AJE's progress and present status, but first let me list what I view as some of the most significant accomplishments of our editorial crew during the past four years.
1. The opinions of AJE readers were sought and heard. When first appointed editor, I sent a mailed questionnaire to our readership to solicit suggestions about what they wanted from their journal (then entitled Evaluation Practice) and what they would most like to see changed. The opinions of the 236 respondents were summarized nearly four years ago, in issue 17(1), and have guided many of the subsequent changes made in the journal.

2. Six new features and sections have been added. Based on both our personal judgment and feedback from our readers and our editorial board members, we kept the best features Midge Smith had developed as EP editor, jettisoned the rest, and then added the following new features:

● Exemplary Evaluations. A section in which evaluation studies judged to be outstanding on one or more dimensions could be published, along with an analysis of their exemplary features and how they were attained. (Under Jody Fitzpatrick's guidance, this section has evolved into a description of the evaluation, coupled with an interview with the evaluator about some of the decisions and challenges involved in conducting the study.)

● Ethical Challenges. A section aimed at examining ethical dilemmas often encountered by evaluators, with suggestions for how those dilemmas might be resolved so as to conform to AEA's "Guiding Principles" and the Joint Committee's Evaluation Standards relating to ethics. (Mike Morris has provided intriguing scenarios and selected commentators to apply the ethical guidelines and standards in describing how an evaluator should proceed in each situation.)

● Evaluating Evaluations. A new section in which two or more meta-evaluators examine prior evaluation studies (whether good, bad, or indifferent) to see how well they measure up against (1) the Principles and Standards cited above, and (2) other criteria held by the meta-evaluator. The intent is to use meta-evaluations to teach ways to better our individual and collective evaluation practice.

● Revisiting Fundamentals. A new section designed to help all evaluators keep in mind important but sometimes neglected cautions about possible errors and oversights that can greatly limit the usefulness of an evaluation study. For newcomers to evaluation, it is hoped this section might also introduce them to useful methods or techniques that go beyond their training.

● Tips on Teaching Evaluation. A section developed to provide a forum in which teachers of evaluation can exchange suggestions about how to improve instruction in evaluation courses and workshops.

● Defining Moments. A section used only occasionally, when an event occurs that clearly has a broad impact on the field of evaluation and those who practice in it.

3. An ecumenical and eclectic editorial stance has been maintained toward the disparate philosophical and methodological orientations that co-exist in the field of evaluation. Our editorial team committed at the outset that we would not allow this journal to be "captured" by any particular philosophical or methodological persuasion, even those we favor personally. We believe we have kept faith with that pledge and have favored no paradigm, method, or technique over another. We have eschewed senseless debates (e.g., pointless and divisive wrangling between proponents of quantitative methods and advocates of qualitative approaches) and encouraged instead productive discussions of how evaluation can fruitfully draw on the strengths inherent in the various evaluation approaches that have been proposed. We have made decisions about each manuscript based on how well it justified itself, not on whether we warmed personally to that justification. Perhaps the best test of how well we've succeeded in this regard would be to ask readers who do not know our editorial crew well to identify, based on the contents we have elected to publish in this journal, what particular philosophical bent and methodological preferences we have. We think—and hope—readers would find that very difficult to do.

4. Our editorial stance has insisted that disagreements and critiques be presented professionally, not in personally derogatory language. We feel strongly that professionals should express criticisms and differences of opinion without engaging in unkind attributions and cruel caricatures. That stance has required us to restate or edit the comments of some reviewers and ask them thereafter to express their views constructively, not destructively.1 Those reviewers who continued to insist on phrasing their critiques offensively or lacing them with pointless personal denunciations were simply dropped from our reviewer list. This editorial philosophy has also resulted in our asking some of those who have squared off in debates published in these pages to blunt their verbal darts a bit, even asking some of our field's finest wordsmiths to sacrifice or soften some of their more devastating "zingers." They have responded magnificently, all finding a less caustic but no less plain way to make their point. Some have even thanked us for calling attention to their unintentionally bitter tone, thus allowing them to redraft their criticisms into points that, while no less sharp, impaled their opponents' ideas rather than their personas. We are grateful that these commentators have shared our view that criticism, so essential to the continued growth of any field, can be expressed clearly without being demeaning, if one simply resists the temptation to "win points" with heavy-handed "humor" more worthy of Don Rickles' nightclub act than of principled professional prose.

5. We initiated the name change from Evaluation Practice to American Journal of Evaluation. In an effort to make this journal more inclusive of those who contribute to the field of evaluation, we suggested that the journal be re-titled, working with our Editorial Board and the AEA Board of Directors and Publication Committee to obtain their support for such a change. We trust that the contents of the first five issues of AJE have set a direction that allays the concerns of any evaluation practitioners who may have feared that the change in title merely presaged the journal taking on a pompous and irrelevant academic orientation. In our view, the journal was made far stronger by extending its boundaries to include the best examples of both theory and practice, and we believe the name change will prove important in fulfilling the AEA Board's desire to have the association publish the very best journal(s) in the field.
6. The flow of submitted manuscripts has increased, as has their quality. Perhaps one of the best ways to chart the progress of AJE is to note that several well-known evaluators who could barely be persuaded (or, in some cases, could not be
persuaded) to submit manuscripts to the journal four years ago are now submitting unsolicited manuscripts to it. We have celebrated that change.

7. Photos of authors, editorial staff, and AEA award winners have become part of this journal's standard format. While a few authors have proven photo-phobic, and only sent in photos when threatened that caricatures of them would appear instead, most cheerfully complied with this requirement. Readers have reacted to this addition even more positively than to any other change made during this editor's term.

8. The newsletter previously published and distributed with each issue of EP was separated from this journal. This was done both to convey more clearly that this journal is not AEA's news organ, but rather a scholarly, refereed journal, and to avoid diverting the time of the journal staff to a task not seen by the AEA Board as fulfilling the journal's primary purposes.2

9. Citation status has been attained for AJE in the Social Sciences Citation Index, and has been continued in the several other citation services in which EP was previously cited.

Not everything our editorial crew has hoped for has come to pass, however. Though we are probably slower to notice the barnacles on AJE's hull than to call attention to its stronger sails or tighter rigging, there is one blemish large enough to demand our notice: our total inability to get AJE reviewers to complete their reviews in anything like a reasonable turnaround period. Our request for a four-week turnaround is routinely ignored, and the period often stretches into months, leaving me and my associates to apologize to authors who either presume their manuscript has been forever lost or conclude we are running a pretty loose ship. Gentler "nagging" does not help, and blunter urgings merely result in manuscripts being returned, unreviewed. Perhaps it is the price we pay for seeking as our reviewers talented persons who are in high demand. Yet we suspect that somehow we should have done this better, for few other journals we know take as long with their review process as we have. (Conversely, we have been told by some impatient authors that, once received, the reviews have been worth the wait; one recently said, "We are grateful for the thoughtful, thorough, fair, and constructive nature of your overall review process and your editorial judgment; speaking personally, I haven't seen anything in that league in over 25 years of dealing with journals," so we take some small comfort from that.)

There are probably many other editorial failures on my part to which I am nearsightedly oblivious. If so, do not tell me—since I am on the last leg of my editorial voyage, even summative evaluation will not help me correct such failures now. Tell our new editor,3 and he can use my mistakes to improve the journal further in the years ahead. Meanwhile, let me introduce the contents of this issue, and then end by acknowledging the many persons who have contributed significantly to whatever improvements have been made in this journal during my term as editor.

IN THIS ISSUE

Articles
● This issue begins with an article submitted by Mel Mark, AJE's new editor-designate, along with his colleagues Gary Henry and George Julnes. In it they lament the divisiveness that paradigm schisms have introduced into the field of evaluation, noting its fragmentation into dozens of supposedly distinct evaluation approaches that are apparently incompatible. They offer an integrative framework based on realist philosophy, evaluation purposes, and a categorization of evaluation methods into four inquiry modes. Although this article is co-authored, it provides a timely glimpse of at least some of the orientation and priorities of AJE's soon-to-be editor.

● In a provocative article, Linda Mabry argues that current codes of ethical conduct in evaluation are necessarily stated in general, broadly applicable terms, but that such generality robs them of usefulness, leaving evaluation practitioners to depend on their own subjective judgments about what is and is not ethical in any particular situation. Practitioners, she concludes, must adjust and prioritize the general standards, creating in effect a personalized, situation-specific hierarchy of value standards, and thus introducing an "inevitable but troubling subjectivity" into a field where professional disinterest and detachment have traditionally been assumed. She offers case examples to support her stance. Reading this intriguing piece is not likely to leave you unmoved.

● Robert McCall, Carey Ryan, and Beth Green propose an approach they have found useful when forced to use non-randomized, constructed comparison groups because using randomized comparison groups proved impossible due to (1) ethical or legal constraints associated with denying treatment, and/or (2) the existence of too few non-treated individuals because earlier experimental studies drew on the same limited population(s). They describe a method they have found useful in dealing with this problem in evaluating intervention programs for young children. Those who find themselves in situations with constraints similar to those described by the authors should find this article worth pursuing.

● Noting that a better tool is needed to help program developers and evaluators develop and put program theory into operation, Souraya Sidani and Lee Sechrest offer a conceptual framework for use in defining a program's present problem and its target population, specifying the causal linkages underlying program effects, and identifying both the factors that affect treatment processes and the expected outcomes. The categories of variables in the authors' framework (input, process, and output, reminiscent of the categories in Stufflebeam's CIPP model) are examined in terms of their implications for program evaluation. This piece should be of interest to evaluators who use a theory-driven evaluation approach.

● Many evaluators who use survey research involving minors are finding that local school districts and some federal and state agencies require written or "active" consent from parents/guardians. Knowlton Johnson, Denise Bryant, Edward Rockwell, Mary Moore, Betty Waters Straub, Patricia Cummings, and Carole Wilson describe a strategy they used for obtaining active written parental consent in an outcome evaluation of an ATOD abuse prevention program in 10 school districts. They report on the consent rate attained with their strategy and assert that it is a more cost-effective method for obtaining parental consent than those used in previous studies. Readers who find they must seek parental/guardian consent may find this article very useful.

● The next article deals with electronic surveys—collecting data via the Internet. In an article not intended for "techies," Jon Supovitz tells the story of a series of design and technical challenges he and his colleagues encountered in their effort to construct a Website to collect evaluation data, offering the lessons they learned. Although Supovitz's conclusion that "Web-based data collection has several possible advantages" is drawn from his personal experiences rather than from tightly designed research, the experiences he recounts should prove useful to others who have not yet learned that taking full advantage of electronic surveys may not be quite as easy as it appears.

● The final article in this issue, by George Balch and Donna Mertens, also recounts lessons learned from the authors' practical experience, but in a different arena: conducting focus groups with individuals who are deaf or hard of hearing. They offer eight "lessons learned" that should help evaluators attempting to use focus groups with this population, as well as evaluators who find that such persons are unintentionally included in focus groups selected on other grounds. Although the reviewers—and this editor—feel that the value of these lessons stems more from the authors' personal insights than from their research design, we believe the practical wisdom offered here is not only pertinent to conducting focus groups with the hearing disabled, but may also be useful in probing the limits of focus group methodology and its applicability with other unique populations.
Forum and In Response

Readers who attended the plenary sessions at AEA's last annual meeting will already have heard Carol Gill assert that persons with disabilities are frequently present, though unrecognized, in programs whose evaluators seem unaware of these persons' presence and of the need to tailor their evaluation approach accordingly. Citing the incidence of "invisible disabilities," Gill argues that all evaluators should be more attentive to disability as a dimension of humankind, possibly considering it along with other demographic variables in selecting and stratifying the populations and samples used in evaluations. In her brief commentary, also offered at AEA, Barbara Lee echoes, underscores, and reinforces most of Gill's points. Placing Gill's plenary address and Lee's reaction in the Forum section invites commentary from any who have relevant views they may wish to express.

Midge Smith recently submitted a review of Cousins and Earl's volume on participatory evaluation for AJE's book review section,4 but it was far too long to be published in that section, mandating that it be treated as a traditional, reviewed manuscript. Smith's critique also raised questions that made it desirable to place it in this section so that the book's editors could be invited to respond. Thus, the Forum section contains Smith's review, while Brad Cousins and Lorna Earl's response and a final rejoinder by Smith appear in the following In Response section, thereby closing an engaging and hopefully enlightening dialogue about both the book and the approach to evaluation it proposes. Readers may enjoy not only the interchange, but also the sailing ship metaphor it generated (assuming that readers are not uncomfortable with rolling waters and tilting decks).5 In any event, I trust readers will find this exchange of views informative.

Evaluating Evaluations

This is the second issue in which this section has appeared, and here we use an alternate format to the one used in issue 20(1).
An evaluation submitted as a meta-evaluand by Robert Stake and Rita Davis was assigned to two hand-picked meta-evaluators from AJE's editorial board, Lois-ellin Datta and Patrick Grasso. They were asked to critique how well Stake and Davis' evaluation fares against the existing Joint Committee Standards, the AEA Guiding Principles, and other criteria they believe appropriate to use in judging the study. Mel Mark, AJE's Associate Editor, has taken these evaluators' and meta-evaluators' manuscripts and shaped them into a section we believe most readers will find helpful in improving their own evaluation studies. And that's what this section is all about.

Ethical Challenges

Michael Morris, editor of this section, has asked John Owen and Gail Barrington, colleagues from Australia and Canada, respectively, to comment on a scenario entitled "The Pilot's Demise," in which an internal evaluator confronts a dilemma in evaluating a community outreach program for low-income, elderly citizens. The evaluator frets that "The only thing you know for sure is that you're being invited to take an uncharted journey through a political minefield, where the chances of disaster are nerve-wrackingly uncertain." Readers will enjoy seeing how Owen and Barrington propose that the evaluator tiptoe out of that minefield, or avoid entering it in the first place.

Book Reviews

This issue includes two reviews of Pawson and Tilley's book on realistic evaluation, by Patricia Rogers and Michael Patton; Arthur Horton's review of the latest edition of Aiken's text on psychological testing; a review by Monika Schaffner of Cunningham's book on classroom tests; Jason Palmer's review of Vedung's volume on public policy and program evaluation; and a review by Douglas Horton of Patton's most recent treatment of utilization-focused evaluation.

Comments about two of the reviews may be helpful. First, I had not yet read Pawson and Tilley and, therefore, had not realized that it contained some rather strong criticism of Patton's work, which he (understandably) rebutted in his review. Had his been the only review of Pawson and Tilley, I might have been uneasy about publishing a critique that could well be viewed as rather partisan, and therefore unsuitable as the sole voice to be heard. Coupled as it is with Rogers's more disinterested (not uninterested, mind you) view, however, any objection I might have had disappears, for Patton offers the very useful perspective of one whose thinking has been challenged by these authors, making the discussion of Pawson and Tilley's volume all the richer. Besides, Michael's pieces are always such good fun!

Second, in the prior issue of this journal, in describing why the third edition of Patton's Utilization-Focused Evaluation received only one, rather than the two reviews we attempt to give major evaluation books, I said: " . . . we have decided to apply that policy only through the first two editions of such books, under the assumption that third and subsequent editions are likely to have less new content to review." Not so. Apparently, I had made that policy clear only to myself, which I realized when Perry Sailor, AJE's Book Review Editor, sent me for this issue Horton's review of this same third edition of Patton. I would feel worse about my misstatement, except for the fact that it may underscore why a change of editors is timely.

So, having cleared the decks for our new editor, let me end by thanking all who have helped to keep AJE afloat and moving forward during my editorship.
Bon Voyage, with Thanks to Many

When I accepted this editorship, I expected it to be mostly tedious editing and proofreading, along with having authors peeved at me for not accepting their manuscripts, and sacrificing my free time to all the necessary but unfulfilling managerial aspects of "running a journal." It seemed a good way to serve and help AEA, but I didn't expect it to be much fun. It would have been hard to be more wrong. It was not long before I discovered that I enjoyed this editorial role enormously, thanks to:
● Great colleagues willing to sign on as associate editors and serve splendidly in that role, nearly becoming co-editors – Connie Schmitz and Jody Fitzpatrick served longest in this capacity, while Leslie Cooksy, Mel Mark, and Lyn Shulha have stepped into that role more recently, but performed it no less well.

● Bright and committed assistant editors and administrative assistants – Sheila Jessie, Steve Jones, Nancy Puhlmann, Michael Chen, Audrey Matsumoto, and Joyce Brinck handled all the mechanics of the review process and the many details involved in getting each issue off to the press.

● Highly qualified fellow evaluators who have given sage advice and strong support as members of the Editorial Advisory Board – Michael Bamberger, Robert Boruch, Valerie Caracelli, Brad Cousins, Lois-ellin Datta, Wei-Li Fang, Deborah Fournier, Patrick Grasso, Jennifer Greene, George Grob, Ernie House, Paul Johnson, Con Katzenmeyer, Anna-Marie Madison, Michael Mangano, Mel Mark, Ricardo Millett, Dianna Newman, Michael Patton, Hallie Preskill, Chip Reichardt, Lee Sechrest, Stacey Stockdill, Charles Thomas, Rosalie Torres, Carol Weiss, and Paul Wortman.

● Interested and supportive colleagues who have served as members of AEA's Board of Directors and/or Publication Committee during the period of my editorship – though too numerous to list here, their approval of the changes we recommended has been crucial in moving the journal forward. Midge Smith deserves special mention for the way in which she generously shared her accumulated editorial wisdom.

● The committed colleagues who have taken on responsibility for specific sections of the journal and handled them so well – Perry Sailor, Jody Fitzpatrick, and Michael Morris have been wonderful in the way they have managed their respective "Book Reviews," "Exemplary Evaluations," and "Ethical Challenges" sections of AJE.

● The 260 unselfish evaluator-colleagues willing to take on the thankless, anonymous role of AJE reviewer, the critical scholarly function that lies at the heart of every refereed journal; some reviews have been of such quality and depth as nearly to warrant publication in their own right.

● The many authors who have submitted manuscripts and published their work in AJE (and EP) during this editor's term – appreciation is due to all the authors who were patient with the review process and responsive to the requested changes in revising and resubmitting their work.
To all these wonderful folks who have made my editorship rewarding and the load seem nearly light, my heartfelt thanks. Together, I believe we’ve come far toward meeting AEA’s
charge to make this journal tops in our field. Together. Which is why I am excited about passing the command on to Mel Mark. It is hard for me to imagine a better editor—or a finer professional—to take AJE to its next level. And even though I am pleased with the progress made while I have enjoyed being at AJE's helm, I look forward, at the end of Mel's editorship (which I hope does not come anytime soon), to AJE being a much better journal than the one he inherited. So, let me offer a hearty Bon Voyage! I look forward to watching this journal go far under its new captaincy. Meanwhile, I will send the new crew my missives from shore. Having now learned—from many authors—all the niceties of exceeding space limitations by fudging on margins, packing pages densely with reduced font size, and so on, I aspire to submit a manuscript with margins so narrow that the words extend 1/8″ off the page on all sides! As an editor, I expect Mel to be less enthused about such creativity, even though I learned the techniques from a manuscript he sent me recently. But role reversals can be fun.

B.R.W.
Blaine R. Worthen

NOTES
1. As an example of the scathing and personally derogatory language supplied to us by some reviewers, let me quote just one snippet from one review: "No person in their right mind could have written this tripe. The author thinks no better than the level of the brain-damaged students in the treatment group. Really, the writing is so confusing that it looks like the product of a diseased mind."

2. The fact that the newsletter has not appeared as regularly since this separation does not stem, I believe, from a flaw in the logic that led to the separation, but rather from the inability of the management group selected to run AEA to put out the newsletter in the way they had assured our Board they could. Hopefully, the newsletter will soon be reinstated as an ongoing service sent regularly to all AEA members, either in hard copy or via the Internet.

3. His e-mail address is <[email protected]>; his mailing address appears earlier in this issue.

4. Smith and another reviewer had agreed to review this book, in keeping with our effort to obtain two reviews of all major evaluation books, but the second reviewer never delivered on his commitment.

5. Whether the marine metaphor used earlier in this editorial was prompted by these colleagues' use of seagoing similes, or by my recent enjoyment of painting tall ships at sea, is less important than whether it offends my more serious scholar-colleagues; if so, I'd merely urge them to remember that this is my last issue, so any seasickness they've suffered should soon subside.