From time to time, articles are published in EP that evoke comments from readers. In Response is reserved for this dialogue. Contributions should be to the point, concise, and easy for readers to track to targeted articles. Comments may be positive or negative, but if the latter, then keep them at least relatively nice! Personal attacks and offensive, degrading criticisms will not be published. Please keep the length of comments to the minimum essential.
Perhaps Professor Popham Has Overlooked Educational Evaluators' Migrations and Mutations

RICHARD VAN SCOTTER
INTRODUCTION

I read with some amusement James Popham's article in the last issue of EP, "An Extinction-Retardation Strategy for Educational Evaluators," in which he equates his initiation to evaluation with his first love back in second grade. As much affection as one might have for the value of well-executed program assessment, this analogy, no doubt, reveals more about Professor Popham and perhaps others in his trade than it does about the pleasures of either romance or evaluation.

The wonderful whimsy of Popham's prose aside, there are two issues on which I must disagree with him, and those are his two major theses. First, he asserts that educational evaluators are a vanishing breed, because when federal evaluation mandates were withdrawn, "If educators were not obliged to evaluate, they chose not to evaluate at all" (p. 270). Second, he proposes that, in the absence of legislatively imposed requirements for evaluation, the only way to get educators to evaluate their programs is to trick them into it by convincing them that conducting an evaluation would benefit them personally.
Richard Van Scotter, Vice President, Education Policy, Marketing, and Evaluation, Junior Achievement, Inc., Colorado Springs, CO 80906.

Evaluation Practice, Vol. 17, No. 1, 1996, pp. 79-82. Copyright © 1996 by JAI Press, Inc. All rights of reproduction in any form reserved. ISSN: 0886-1633
Popham's first assertion is at best an exaggeration and at worst simply wrong. His suggested solution is cynical and, from where I stand, unnecessary. Let me briefly describe why I differ with Popham on each of his two major assertions.

Educational Evaluators: A Vanishing Breed?
Popham's portrayal of fewer federally mandated and funded evaluation activities is accurate, but the inference he draws from it is not. It is a large leap from saying there are fewer mandated evaluation positions in schools to declaring that educational evaluators are near extinction. Popham's analysis overlooks the fact that many educational agencies, schools included, routinely contract with external evaluators to carry out educational program evaluations, even in the absence of any legal requirement.

In my frequent travels on Junior Achievement business, I find schools evaluating their programs without any governing board or legislative requirement to do so. For example, I was recently in a small city where I learned that the two school districts (one city, one suburban) had, in the past two years, contracted with external evaluators to conduct evaluations of the following programs: a drug-free schools program, a migrant student/parent education program, educational technology training systems, a year-round school program, a site-based decision-making program in two high schools, the efficacy of separate facilities for ninth graders, an innovative primary-grade reading curriculum, and a high school's computer-based writing curriculum. Not a single one of these evaluations was conducted in response to any requirement. The school staffs and administrators indicated that they routinely tried to arrange to evaluate every significant new program they install.

Now I know not every school district is as committed to evaluation as those I've just described. Popham's frustration with fickle educators is understandable, for there are many who only evaluate if forced to (and probably some who would even abandon student assessment if state, school board, and parental pressures were relaxed). However, those seem to me to be far less prevalent than they seem to Popham. Perhaps his earlier affection for evaluation waned as quickly as did his second-grade infatuation with his piano-playing classmate. Or, put differently, his conversion to evaluation may have caused him to be so preoccupied with heathens who still spurn the gospel of evaluation that he has overlooked the numerous converts who are quietly practicing educational evaluation.

Admittedly, the current scene is quite different from the ESEA era Popham describes, when evaluators were more likely to be on the school payroll, busily complying with legislative requirements to evaluate Title I or Title III programs. Now the educational evaluators seem more likely to be employed by evaluation contracting agencies, ranging from small consulting firms to large evaluation centers. But the evaluators in those agencies who spend their time evaluating educational programs are no less educational evaluators just because they are not housed in a school district or because the school district money for their paychecks passes through their agency's hands en route to their pockets. One could agree more with Popham if he limited his concern about "extinction" to school-based evaluators, rather than to educational evaluators, who are currently based in various agencies and ply their trade in a myriad of educational settings.

As an example, examine the frequent, nearly continual, evaluations that are undertaken by many developers of educational curriculum materials. Since these evaluations seldom are published in professional journals, Popham and many other evaluators may be oblivious to their existence. But it may warm his heart to learn that
there are program developers who place great stock in comprehensive evaluation. Junior Achievement (and, I suspect, many other education organizations) does want tangible evidence of program impact. We understand that this requires an extensive, in-depth, and unbiased assessment. And so we turn to well-qualified outside evaluators. We know that sound evaluations cost money and, unlike paying for a new Lexus or a bouquet of roses, there is no guarantee the results will be pleasurable. Yet any organization with integrity will insist on the evaluator telling it the undiluted truth about its programs and products.

Junior Achievement, Inc. has been spending substantial time and money, and taking the risk of encountering negative outcomes, by evaluating its school programs for more than a decade. As a result, our funders, board members, and staff have gained immense respect for evaluation's payoff. These days, Junior Achievement wouldn't consider introducing a new curriculum without conducting a thorough formative evaluation before implementing it, and seeking impact data once it had been implemented. While stakeholders want to see a program's effect on student learning, we recognize that good formative evaluation is the key that enables us to refine and improve our materials to the point where they can have a beneficial impact on students. For example, working with WIRE (Western Institute for Research and Evaluation) on the evaluation of our Elementary School Program over the past several years has convinced us that thorough formative evaluation reveals weaknesses and strengths of a program that are invaluable to program development. When honestly reviewed and courageously used, this information can go a long way toward ensuring a successful program. As we have worked with WIRE and other external evaluators across the past decade, we have certainly not noticed any shortage of good educational evaluators.

Worthen (1995) argued in an earlier issue of this journal that mandated evaluations originally used only for compliance with legislative edicts have been largely replaced by educational agencies autonomously initiating their own evaluations because they desire honest information about their programs. Our experience at Junior Achievement, and that of many other educational agencies we know, bears out this contention that evaluation is not declining so much as mutating into new forms. Or, to return to Popham's metaphor, I submit that his perception that educational evaluators are a vanishing breed may stem from his overlooking the fact that they have simply migrated to other breeding grounds, in which they are thriving and contributing much to those agencies who seek them out and solicit their help. Rather than bemoaning the passing of a species, it may be more useful to explore their new habitats and determine how the species is evolving and which subspecies are best suited for conducting particular educational evaluation tasks.

Are Appeals to Self-Interest Really Necessary?

Essentially, I agree with Professor Popham that, when properly marketed, evaluation can be appealing to educators. However, I'm not quite cynical enough to believe that they need to be flattered with seductive sweet talk. Like nurturing a solid long-term romance, it's best to level with your future mate from the outset, indicating that he or she doesn't begin with a perfect product, and that the road to happiness is paved with much work along the way.
Hopefully, good relationships, and educational evaluations, will be rooted more in altruism than in selfism. I accept and share Popham's belief in the worth of educational evaluation, and I am convinced that enough educators concur that it is in little danger of extinction, even without ploys to expand its use by appealing to educators' self-interest.
REFERENCES

Popham, W. J. (1995). An extinction-retardation strategy for educational evaluators. Evaluation Practice, 16(3), 267-273.

Worthen, B. R. (1995). Some observations about the institutionalization of evaluation. Evaluation Practice, 16(1), 29-36.