
Presidential Address

Roots: Back to Our First Generations

LEE SECHREST

American Evaluation Association, Chicago, Illinois, November 1, 1991

I have titled my remarks for this paper, my one and only chance to reach this entire, highly distinguished group of readers, "Roots: back to our first generations." Most of you will know immediately the inspiration for my title. The rest of you may not care for an explanation, but I will give one anyway. Abroad in the land, not entirely unlike Disneyland, of Evaluation is the notion of a new generation of approaches to evaluation, a Fourth Generation, it seems. A principal exponent of that-those? I am not sure-approach or approaches to evaluation is our Past-president, who made an impassioned plea on its behalf at our annual meeting in 1990 (Lincoln, 1991). I could not resist the temptation afforded by the opportunity to reply to, I am sure, a substantially overlapping audience. I have a different view of our field, of the contributions we may make, and of the opportunities it affords. I propose that we take a few steps back toward our origins and then, in a manner akin to punctuated equilibrium, that we mutate a bit and start off in another direction. I want our future generations of evaluators to look rather different from the portrayal of the Fourth Generation.

GENERATIONS?

The generational metaphor-and the new generation is fond of metaphors (see Smith, 1981)-is, itself, bothersome. Metaphors are meant to be loose constructions, but they are also somewhat constraining since they do, intentionally, direct thoughts in some particular directions rather than others. The first thing that bothers me about the generational metaphor is the image, the implication, of earlier generations being replaced by later ones in a sort of inevitable progression. "The first, second, and third generations? Away with them! They are tiresome, garrulous, and soak up too many scarce resources!"

Lee Sechrest, Professor, Department of Psychology, University of Arizona, Tucson, AZ 85721.

Evaluation Practice, Vol. 13, No. 1, 1992, pp. 1-7. ISSN: 0886-1633.
Copyright © 1992 by JAI Press Inc. All rights of reproduction in any form reserved.

But my view of wisdom-admittedly merely a personal construction-is that wisdom is, or at least should be, incremental and cumulative. Not at all like a group of adolescent rebels without a cause other than to replace their forebears. The generational metaphor needs to consider that, save for mutations, usually lethal, all of our genes come from our ancestors. The Fourth Generation does mean to replace what it sees as the preceding three. What I take to be the bible of the Fourth Generation (Guba and Lincoln, 1989) offers no quarter: it states that there can be no compromise, no integration. It expounds a philosophy-ontology, epistemology, and ethics-to substitute for the philosophy

that has been our foundation in the past. I do not intend to comment here on the philosophical position claimed as the underpinnings of the Fourth Generation. I am not a philosopher; it is all I can do to get through each day without worrying about whether there is any higher purpose served in doing so. The Fourth Generation also proposes a methodology, which I do not pretend to understand any better than I understand the philosophy, to replace our current methodologies. To the extent that I do understand the methodology, however, I do not particularly object to it. It might be a valuable addition to the others we have available (Cook and Reichardt, 1979). But that is the problem. The methodology espoused in the Fourth Generation, and I think much more generally by proponents of what is generically termed "qualitative evaluation," is not proposed as an addition to our field, to be used in conjunction with other, e.g., quantitative, methods, but as a complete replacement for other methods. That we could improve our field by getting rid of methods of proven, even if limited, value, is beyond my ken. I have never believed that we are likely to achieve methodological advance, that is, addition, by subtraction.

Now let me turn to a second problem with the generational metaphor. The idea of generations implies fairly regular and dependable transition from one generation to the next. I ask myself: could we of the preceding generations possibly have given rise to this new, Fourth Generation? Where in our makeup are the origins of this new creature so unlike us? I am reminded here of a Calvin and Hobbes cartoon strip in which Calvin's father observes some of Calvin's typically outrageous behavior for three panels and then says, "Actually, I have great genes, it's just that they're all recessive!" That is the way I feel about the new generation of evaluators so strongly and narrowly committed to qualitative methods. I sense a very real and large generation gap.

Finally, the generation metaphor disturbs me because of what I see as the too rapid, much too rapid, demise of whole generations. The ERS was only founded about 20 years ago, with all three of what are seen as the first generations flourishing. Now we are dead, or at least we are to pack up our design and analysis kits and slink off to some intellectual junkyard to rust away quietly and, one supposes, peacefully, without protest. Well, my generation, the quantitative generation, is too young to die. We are not like some animal species with a foreshortened version of a human life span, the twenty years of a horse or the seven years of a dog. We want our full life span at least.


FUTURE OPPORTUNITIES FOR PAST GENERATIONS
being manufactured. At the end of the visit Grissom was invited to say a few words to the assembled employees. He looked at them momentarily and then grunted his recommendation: "Do good work." That is a laudable, qualitative goal. But it has no standards-nothing to measure up to-and got Grissom into a bad water landing at the end of his space flight, and he later burned up in a faulty capsule. He might better have said something like "Manufacture at seven sigmas!" to use Wiggenhorn's terminology, i.e., manufacture to a very small tolerance of error.

THE DECLINE OF NUMERACY

Where are the quantitatively sophisticated evaluators going to come from? Not, I fear, from our ranks. The extent, and I think the quality, of quantitative and methodological training in the social sciences has declined substantially over the past two decades or so and is at a low point (Aiken et al., 1990; Merenda, 1990). In AEA, membership in the Qualitative Methods TIG has grown by leaps and bounds (to about 550 now), while membership in the Quantitative TIG has been limited (to about 150). Unfortunately, overlap in membership is also very low (only about 5 percent of Qualitative Methods TIG members belong also to the Quantitative TIG), suggesting that the two groups are substantially divergent. Why has quantitative training declined? Why is interest and, I believe strongly, capability, at such a low level? One answer might be that qualitative methods have been so greatly successful that they have simply replaced quantitative methods. I accept that qualitative methods have become quite popular, but I do not believe that is because of their overwhelming success. For example, I have heard a number of complaints on the part of those who fund evaluations about bias against qualitative approaches but no symmetrical complaints from the quantitative side (although they have their own complaints about funding problems). Certainly qualitative evaluation is not greatly popular because of success as it has traditionally been viewed. Our publications are filled with articles extolling qualitative evaluation approaches and with articles telling how to do qualitative evaluations. What they are not filled with are examples of successful, effective qualitative evaluations. At the GAO, which does more evaluation work than any other place in the world, qualitative evaluations are pretty rare. For example, Eleanor Chelimsky reports only seven case studies since 1981 and no other clearly identifiably qualitative evaluations.
Interestingly, The Fourth Generation (Guba and Lincoln, 1989) contains no examples whatsoever of fourth generation, or third-and-a-half level, evaluations, unless one counts a qualitative evaluation of a quantitative evaluation as such. My own opinion is that qualitative evaluation is proving so attractive because it is, superficially, so easy. Aversion to what is seen as the rigors of quantitative methods, to their difficulty, is, I think, what is accounting for the declining emphasis on quantitative training in the social sciences. We are heading toward national mathematical illiteracy, "innumeracy," as John Allen Paulos (1989) puts it, and we are accepting that innumeracy in the field of evaluation.


Recently, I participated in a review of a group of research proposals, and one proposal was from a qualitative researcher. He was, he claimed, intending to devote .0325 percent of his time to the project. He may not have understood the difference between a percent and a proportion, and he had his decimal off anyway because he wanted support for 32.5 percent of his time. Another amusing example is provided by a group that put out an announcement of a meeting for qualitative research types and asked for reply by “Feb. 30.” I do jest in providing these specific examples, but I am dead serious about the decline of interest and competence in quantitative approaches to evaluation. If I am wrong in this matter, it will require very little effort to set me straight, e.g., to show that self-declared qualitative evaluators actually have high-level quantitative skills, that qualitative researchers really mean their methods to complement quantitative methods, that qualitative methods are actually quite successful in supplying information that is dependable and highly valued by its recipients, and so on. I stand ready to be corrected, and am not at all reluctant to think of being corrected, for my own view of evaluation all along has been that it requires all the skills and methods we can muster of every kind. I am not amused, however, by the overt hostility and nasty tone of the invitation just referred to with the Feb. 30 reply deadline. That invitation displayed a hostility toward science and quantitative efforts in evaluation that I find shocking, deplorable, and even offensive. 
Following is a portion of the text of that invitation: If you share our frustration with the experimentalist/scientific tone to recent AEA meetings; if you feel there isn't a strong enough voice for a new moral vision of evaluation; if you have been forced to collect data you can't defend but a client has demanded it; if you would like to be proactive in redirecting the intellectual agenda for evaluation, then you should JOIN US for a few days of stimulating discussion. We hope that this group might meet on a regular basis (perhaps one of our first tasks is to name ourselves, e.g.s, [sic] the Fourth Generation Evaluators, The New Power Generation, The Moral Evaluators) to advance the study of evaluation.

The text provides ample evidence of the point I want to make about hostility toward earlier generations of evaluators. One wishes for a bit of Chinese-like ancestral filiality. The most offensive part of the text, however, is the arrogation of the term "moral" by these new-generation, rebellious evaluators. As constructionists, they should know that morality, like so many other things, is in the eye of the beholder. They do not look so extraordinarily moral to me. The early promise of the field of evaluation was that a wide range of methods would be available and would be brought to bear as appropriate on every evaluation question. That promise was met in any number of highly successful evaluation programs of which I am aware. Harrison and Arlene McKay and their colleagues developed a highly successful nutritional and educational rehabilitation program for barrio children in Cali, Colombia, and Ronald Gallimore and Roland Tharp developed a similarly successful educational program for Hawaiian children. Tharp and Gallimore (1979) described their development efforts quite persuasively in an Evaluation Studies Review Annual volume, coining the term "evaluation succession" for the process. They describe in detail the skillful and insightful way in which quantitative and qualitative methods were woven together, integrated with high effectiveness, to develop, evaluate, and demonstrate the effectiveness of an important educational innovation. True, a lot of social programs have not fared so well, and it may be that the often dismal results from evaluations of people's favorite social interventions contributed greatly to the decline in quantitative evaluations. "Shoot the messenger" was the message sent out. Janet Spence, former President of the American Psychological Association, pointed out in several public talks that the terms "hard-headed" and "soft-hearted" are not opposites; they are orthogonal, in fact. Quantitative evaluators whom I know want very much to find favorable program effects; they want to ameliorate the social condition. They are simply not willing to fudge very much to do so.

HAPHAZARD TRAINING

A major problem faced by our field today is, I believe, the lack of organized, formal, recognized training programs for evaluators. We never had many, and several that I know once existed are now nowhere to be found. That lack is, I suspect, very much implicated in the difficulties we are having in bringing together the full range of approaches and skills useful in program evaluation. Our younger colleagues are being trained narrowly in too many instances. They are being trained haphazardly, here and there by individual, often isolated, professors, who may or may not be fully qualified for doing training. Their training is likely to reflect the idiosyncratic views of those individual faculty rather than the wider range of perspectives, methods, approaches, and so on. At a time when our research and statistical methods-structural equations modeling, confirmatory factor analysis, regression-discontinuity designs, and so on-are finally catching up to the complexities of our problems, it is disappointing that training in those methods should be declining. Our students are not currently being trained systematically in programs designed specifically to produce program evaluators. They are not being exposed to a deliberately assembled faculty selected to represent diverse views and a range of methodologies. They are not being educated within the guidelines of carefully considered curricula designed to produce exposure to the right materials and the right experiences at the right times. How can we expect to produce broadly knowledgeable and generally skilled program evaluators under present circumstances? I submit that we cannot. It is up to us to do something about it. AEA ought to take the lead in attempting to restore the kinds of training programs that existed in the 1970s and to develop those programs to reflect the many advances that have been achieved in our methods, especially quantitative methods, over the years,
and the circumstances that prevail with respect to social programs and needs for their evaluations. Program evaluation has a potentially bright future. We ought to prepare for it.


REFERENCES

Aiken, L.S., West, S.G., Sechrest, L., and Reno, R.R. (1990). Graduate training in statistics, methodology, and measurement in psychology: A survey of Ph.D. programs in North America. American Psychologist, 45, 721-734.

Cook, T.D. and Reichardt, C.S. (Eds.) (1979). Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, CA: Sage Publications.

Guba, E.G. and Lincoln, Y.S. (1989). Fourth Generation Evaluation. Newbury Park, CA: Sage Publications.

Lincoln, Y.S. (1991). The arts and sciences of program evaluation. Evaluation Practice, 12(1), 1-7 (February).

Merenda, P.F. (1990). Brief note on graduate training in statistics, methodology, and measurement in psychology. Perceptual and Motor Skills, 71, 1106.

Paulos, J.A. (1989). Innumeracy: Mathematical Illiteracy and its Consequences. New York: Farrar, Straus & Giroux.

Smith, N.L. (Ed.) (1981). Metaphors for Evaluation. Beverly Hills, CA: Sage Publications.

Tharp, R.G. and Gallimore, R. (1979). The ecology of program research and evaluation: A model of evaluation succession. In: L. Sechrest, S.G. West, M.A. Phillips, R. Redner, and W. Yeaton (Eds.), Evaluation Studies Review Annual, Volume 4, 39-60.