An extinction-retardation strategy for educational evaluators


Forum articles are intended to speak to and about the philosophical, ethical, and practical dilemmas of our profession (see the "Contribution categories" on p. 312 for a full description of the focus of this section). N.B. Obviously, many of the views expressed in this section will be different from those of EP's editors. A case in point is Popham's opinion in the following comment about the decline of educational evaluation. Even a casual reading of my article that Midge Smith published in this journal earlier this year (EP, Vol. 16, No. 1, p. 29) shows that Popham and I see the status of educational evaluation somewhat differently. This section is intended to be hospitable to diverse views, in the hope that such diversity will prompt productive professional dialogue.

An Extinction-retardation Strategy for Educational Evaluators W. JAMES POPHAM

ABSTRACT In recognition of marked reductions in the number of educational evaluators now functioning in our schools, the position is taken that educational evaluation must be promoted because of its direct and personal benefit to the individuals who authorize such evaluations. Specifics are provided of a two-pronged persuasion ploy to use with authorizers of evaluation studies.

First love, they say, is almost impossible to forget. And, if someone ever undertakes a retrospection-based survey to probe that proposition, I can supply self-report data that I still vividly recall the day I fell in love with Ruth Brenner, my second-grade classmate, as she played the piano during an assembly in a Portland, Oregon grade school. Although my affection for Ruth had seriously waned by the time I reached third grade, eight-year-old boys having notoriously short attention spans, I still retain a vision of my very first beloved in the Sunnyside School Auditorium banging out an ethereal rendition of Chopsticks.

Director, IOX Assessment Associates, 5301 Beethoven St., Ste. 109, Los Angeles, CA.

Evaluation Practice, Vol. 16, No. 3, 1995, pp. 267-273.

Copyright @ 1995 by JAI Press, Inc.

ISSN: 0886-1633

All rights of reproduction in any form reserved.


My original affection for educational evaluation was every bit as intense, for it marked the first time in my career I ever became truly enamored of the potential contribution to be made by an educational sub-specialty. If I recall correctly, it was in 1966 that I first read pre-publication copies of the classic evaluation essays by Michael Scriven (1967) and Bob Stake (1967). Several of my colleagues, over in Arizona, had been touting the virtues of both essays and had supplied me with almost illegible copies of each. (At that time, of course, high-class copying machines did not abound.) I was so entranced with Scriven's thought-provoking notions and Stake's incisively logical view of the evaluative process that I even trotted to the library and looked up Cronbach's (1963) Teachers College Record essay about educational evaluation. (I was, at that time, not a regular reader of Teachers College Record. I'm still not.) Spurred by (a) the evaluation requirements embedded in the Elementary and Secondary Education Act of 1965 (ESEA), (b) the frequently expressed need for educational evaluation voiced by public school practitioners who wanted ESEA dollars, and (c) the persuasive analyses of Scriven, Stake, and Cronbach, I found myself hopelessly entranced with educational evaluation. And I did so without the romantic background refrains of Chopsticks.

THE ZEAL OF THE RECENTLY CONVERTED

Recent converts to any religion are especially enthusiastic. And so it was with me and educational evaluation. I saw myself joining an enlightened crusade of information-suppliers. We would collect pertinent evidence regarding a wide range of educational decisions, present that evidence to decision-makers who needed it, then watch with warranted pride as they opted for better choices (than would have been the case had our information not been at hand). The pathway to educational improvement was astonishingly straightforward. And I was on it.

That is the way I tried to behave for a number of years, namely, by corralling information that decision-makers needed for program-improvement decisions (Scriven's formative mission for evaluation) or digging up data to inform program-continuation decisions (Scriven's summative role for evaluation). During the late sixties and early seventies, I suspect I uttered the words "formative" and "summative" far more frequently than I said "ham" and "eggs." (And, because I had not yet been caught up with today's low-fat frenzy, I really liked ham and eggs.) I was, it is clear, thoroughly captivated by the potential of evaluation to make education better. Moreover, I believed that most reasonable people agreed with me. I was wrong.

As I said, I had imagined that I would conduct an evaluation study, describe its results in a pithy final report, deliver it to the pertinent stakeholders in a timely manner, then watch while dramatically more defensible decisions were made (more defensible, at least, than would have been the case had my illuminating evaluation report not existed). Again, I was wrong. (During the late sixties and early seventies, I soon had an unbroken string of "wrong agains" going when it came to educational evaluation.) By and large, the evaluation reports that my evaluator colleagues and I were issuing made precious little difference in anything.

UNRECOGNIZED OBSTACLES

Why was it that decision-makers were not deferentially accepting the guidance in the evaluation reports that other educational evaluators and I were whipping out? I think there were several reasons.

First, the evaluative data were rarely presented in such a way that one decision-option was ringingly endorsed while other decision-options were emphatically reviled. The world of education is more complex than we wish, and the multivariate nature of that world meant that evaluators (even of well circumscribed instructional interventions) often found themselves equivocating in their final reports. Scriven often urged us to engage in comparative evaluation by spelling out the virtues and vices of competing interventions. But there were not many situations in which everything, including costs, could be held constant except for the differences in the interventions. Results of major evaluations, therefore, were far more ambiguous than either the evaluators or the person funding the evaluation had anticipated. So even the pro-evaluation decision-makers who would have happily deferred to a set of compelling evaluative results became frustrated by the unclear decision implications of many evaluation studies.

Another reason evaluation-based pearls were often unappreciated by educational decision-makers was that real-world decisions are made far more often on the basis of political or personal considerations than on the basis of quantitative and/or qualitative evaluation studies. Many school-board members, for example, make choices based almost exclusively on political factors. Each month's board meeting finds a flock of quid pro quos flying back and forth among board members. And I have seen many instances in which school administrators have discounted the evidence presented by evaluators if the decision implied by such evidence would have had a negative impact on the administrator's favored co-workers. We like to be liked by the people we like.
I do not wish to suggest that decision-makers such as these are malevolent villains who are disrespectful of children's best interests. All they are is human. They are not likely to cast aside political or personal proclivities so that, with Aristotelian circumspection, they can select the best evidence-based course of action. That is not the way people behave. Those of us paddling the 1970s evaluation canoe should have recognized such realities. But many of us were so caught up with evaluation's potential that we thought decision-makers would do a bit of genuflecting when we delivered our reports. I admit it was naive, but please make an allowance for love-induced blindness.

A REASONABLE EXPECTATION

What we should have recognized back in those days is that educational evaluation can make a contribution to the quality of educational decision-making but, for the aforementioned reasons, that contribution is apt to be modest. And today's educational evaluator should take solace in this recognition. If the efforts of an educational evaluator can improve the quality of the educational enterprise by only a tiny percent, that improvement will beneficially influence the lives of many children. And that is an improvement that would not have occurred if the evaluator had not been there.


Had we commenced the educational evaluation game in this country with more muted blarings of trumpets and the release of smaller balloons, both the evaluation community and the decision-making community would have greeted evaluation's modest contributions with far less chagrin. Hopefully, today's educational evaluators will not over-perceive or over-promise what evaluation can do for education.

A SPECIES EMERGES

When ESEA was born, public school folks believed that they really had to evaluate their funded programs. And they did. Federal regulations and federal fervor sent the message, "evaluate or else." Because federal dollars pack a solid motivational wallop, educators evaluated. That is why prominent educational evaluators like Scriven, Stake, and Stufflebeam were in such demand. Educators believed evaluation was required, hence sought guidance from the prophets. Policymakers were so caught up with the potential payoffs of evaluation that state legislators and even locally elected officials routinely began to add evaluation requirements to newly established programs. Formal evaluation of educational interventions was eminently fashionable. As a consequence, the need for educational evaluators expanded exponentially. Educational specialists who had been trained in different fields, e.g., statistics, measurement, or counseling, quickly leafed through a few essays about evaluation and hopped aboard the evaluation express as bona fide evaluators. Graduate programs at various universities began to crank out flocks of freshly minted educational evaluators. At UCLA, for example, Marv Alkin and I developed what we thought was a dandy doctoral program for educational evaluators. We turned out evaluators, and they got evaluation jobs! For educational evaluation, the seventies were truly exciting times.

A SPECIES BEGINS TO VANISH

But, after a decade or so of disappointment with the less-than-earth-shaking impact of educational evaluation, legislatively imposed requirements for evaluation began to disappear. No longer were legislators obliging implementers of new programs to evaluate those programs systematically. Federal largesse began to shrivel while federal incentives for educational evaluation softened. At the district level, with tax support shrinking, once formidable evaluation staffs became small or nonexistent. If educators were not obliged to evaluate, they chose not to evaluate at all. This disinclination to evaluate one's programs on a totally volitional basis is altogether consonant with the way that human beings routinely behave. We do not choose to be evaluated, unless there is reason to do so, because we might be found wanting. Thus, in spite of the lofty and compelling arguments about the virtues of program evaluation, we can realistically expect few educators to place themselves voluntarily in the evaluative spotlight. That spotlight might detect blemishes. This recognition of human nature brings us to the problem of how to head off the increasingly rapid demise of the educational evaluator. I will conclude this analysis by proposing a scheme that may have a chance to retard the extinction of this potentially valuable species of professionals.


A TWO-PRONGED PERSUASION PLOY

I believe that, without the very unlikely renewal of federally imposed evaluation activities, educational evaluation will become an even less frequent activity unless we alter the way educators evaluate. Unlike strategies based on an advocacy of evaluation because "it is the right thing to do" or because "children will benefit," I contend that we must start to sell educational evaluation because of its likelihood to directly and personally benefit the individuals who authorize it. High-sounding and abstruse rationales for educational evaluation, in my experience, are destined to fail. Given the possibility that educators can, if evaluated, be judged deficient, there is little chance that high-minded persuasion ploys will work.

What educational evaluators need to devise are ways to help those who must initiate and support educational evaluations (the evaluation study's authorizers) to see the realistic possibility of a personal benefit. Sometimes those authorizers will be the decision-makers whom the evaluation study should assist, for example, district superintendents. In other cases, the authorizers and decision-makers will be different, as when a district school board authorizes a study whose decision-maker audience consists of the district's principals.

Years ago, when I was a graduate student, I read a book by Donald Snygg and Arthur Combs. They argued quite persuasively that the dominant motivation for a human being was "to preserve and enhance the phenomenological self." That observation stuck with me because I have seen it confirmed so many times in others and in myself. We act when we see that we are personally apt to benefit. What I am recommending, therefore, is that educational evaluators identify the individuals who must authorize an educational evaluation, then persuade those authorizers that the conduct of the educational evaluation will, in a direct manner, preserve and enhance the manner in which the authorizer is regarded by others.
I am not suggesting that an evaluation authorizer's self-benefit be the only reason that the conduct of evaluations be endorsed. On the contrary, I believe that the overriding reason for conducting educational evaluations is that they can help improve the quality of schooling. In the wonderful film, People Will Talk, Cary Grant plays the role of a physician who reduces his perceived medical mission to the simple proposition that a doctor should "make sick people well." In the same vein, I suspect that most educators could boil their mission down to "teaching boys and girls" what should be learned.

Well, I propose that those of us who believe in the potential dividends of educational evaluation studies should provide the authorizers of evaluation studies with a double-barreled rationale for such studies. First, and most important, we must directly point out that the study's results will help improve the way we teach boys and girls. Second, and far less directly, we make it clear that the evaluation study's results are apt to have a personal benefit to the individual authorizing the evaluation. In other words, we hold out a two-for-one benefit package to the individual(s) who need to authorize a proposed evaluation study. "First, the evaluation will benefit children; second, it will benefit you." The second part of this two-for-one rationale, depending on the acumen of the authorizer, will need to be pitched at differing levels of subtlety.


ILLUSTRATIVE TWO-PRONGED PERSUASION PLOYS

To make more tangible how these two-pronged persuasion ploys might work, I will provide two fictitious examples of the rationale (in quotes) that an evaluator might present to an evaluation study's authorizer and (in italics) an authorizer's probable reaction to that rationale.

Example One

Authorizer: School District Superintendent

Rationale: "The most important payoff from the proposed evaluation study, of course, is that more of the district's children will learn how to truly comprehend what they read. But we shouldn't forget the likely impact of those improved reading skills on the annual achievement tests your board requires. Increased comprehension skills will surely boost those test scores."

Authorizer's Perception: The proposed evaluation study could help us teach boys and girls better. It might also get me a raise.

Example Two

Authorizer: Nine-Member Elected School Board

Rationale: "The proposed evaluation study is intended to identify the most cost-effective way to reduce the district's dropout rate. And, of course, because there are likely to be cost reductions associated with the proposed study's results, the district's taxpayers will remember those stewards who wisely expended local tax dollars."

Authorizers' Perception: The proposed evaluation can help reduce the dropout rate in a fiscally sound fashion and, as a consequence, help us teach boys and girls. It will also help get us re-elected.

In the main, the two-pronged persuasion ploy being recommended here is more apt to have appeal for the authorizers of evaluations that are intended to serve, at least initially, a formative function.

IMPLICATIONS FOR EVALUATOR-TRAINING

What I have just proposed is the essential element of a strategy to slow down and, possibly, even reverse the current demise of educational evaluation. The implications for evaluators already in the field should be obvious. That is, they should adopt such persuasion ploys in attempting to get their evaluation studies authorized. And then, of course, they should strive like crazy to deliver on both ends of the double-barreled promise, namely, on the educational dividends as well as the self-benefit payoffs.

For the preparation of evaluators, I see two implications. First, and most importantly, prospective evaluators must not be misled into thinking that the real world of education is breathlessly awaiting their contributions. These are the nineties, not the seventies. Today's educational evaluators will need to help create the need for evaluative studies. Few legal requirements exist that will incline educational policymakers to embark on evaluation studies, especially in tight-budget times. Second, I think prospective evaluators should be deliberately taught how to isolate potential self-benefit payoffs and how to present those potential benefits to evaluation study authorizers in a suitably subtle fashion. If the self-benefit side of the two-pronged persuasion ploy is too blatant, the authorizer might be forced to dismiss such benefits as too obviously self-serving. If the persuasion ploy is too subtle, the authorizer might miss the phenomenological kicker altogether. Ideally, after authorizers have listened to an appropriately crafted rationale for an evaluation study, they would make the final intuitive leap themselves when it dawns on them that "this evaluation study could benefit me!"

STILL ENAMORED AFTER ALL THESE YEARS

I worry about the demise of educational evaluation. If the number of educational evaluations in this country diminishes, I think we will teach boys and girls less effectively. But unless we begin to take some decisive action to alter what is happening in educational evaluation, I believe we will continue to see educational evaluation erode even further. Perhaps the two-pronged persuasion ploy I have recommended here will not work well enough to meaningfully slow down the disappearance of educational evaluation. But I know we have to try something. You see, I still believe in the worth of educational evaluation. That is the way it is with first loves.

REFERENCES

Cronbach, L. J. (1963). Course improvement through evaluation. Teachers College Record, 64, 672-683.

Scriven, M. (1967). The methodology of evaluation. In R. E. Stake (Ed.), Curriculum evaluation. American Educational Research Association Monograph Series on Evaluation, No. 1. Chicago: Rand McNally.

Stake, R. E. (1967). The countenance of educational evaluation. Teachers College Record, 68, 523-540.