Responsive evaluation: A personal reaction


STUDIES IN EDUCATIONAL EVALUATION, Volume 2, No. 1, Spring 1976

RESPONSIVE EVALUATION: A PERSONAL REACTION

RICHARD E. GALLAGHER
Wayne State University

In examining the Responsive Model, a design for program evaluation, I have approached it not from the point of view of the evaluation theorist, which I am not, but from that of a faculty member in a health sciences institution who has some responsibility for the evaluation of school programs. My comments and reactions are based not only on Dr. Stake’s presentation here, but also upon other published and unpublished materials of his. The summaries of responsive and pre-ordinate evaluation designs that follow are drawn freely from his work.

THE RESPONSIVE APPROACH: A PERSONAL REACTION

I am persuaded by Dr. Stake’s arguments that there is an important place in our program evaluations for designs that emphasize:

1. Emerging educational issues.
2. Observation of program participation or activities.
3. A more continuous attention to audience information needs.

I have difficulty, however, in understanding how the responsive approach can be characterized as being more “naturalistic”. What is unnatural about the other approaches to evaluation?

Dr. Stake has stated unequivocally his position that a major purpose of an evaluation study is to help solve the problems of a particular program rather than to gain a generalizable understanding about programs in general. I believe that is a legitimate goal, albeit a pre-ordinate one, for evaluation studies. I do not believe, however, that it can be the sole goal of program evaluation. I find it difficult to conceive of an education program in which one would not be interested in a measurement of the quality of output based on external criteria - not for the sake of obtaining generalizable findings or isolating the effect of a single variable, but simply as a measure of quality control.

Dr. Stake has argued that evaluation plans should yield results that are more relevant - “more relevant to what is of importance, is happening in learning, in teaching, in administration; more relevant to the concerns of participants and audiences of a particular program.” The kind of data Dr. Stake is talking about is very experiential in nature and appears to depend heavily upon written narratives and portrayals formulated by the evaluator and based upon the observations of various observers, both participants and audiences.


Dr. Stake is not unmindful of the measurement concerns presented by the sources of data or the methods of data collection that he advocates. He has stated in his presentation that this approach will yield measurements that are less accurate, samples that are less representative, and findings that are less generalizable - but more relevant! Ultimately, however, the relevance of the data is dependent upon its quality.

I do not quarrel with the concerns or focus of responsive evaluation. I do not object to the source or kind of data collected. I wonder, however, why, in de-emphasizing what he has labeled the pre-ordinate approach to evaluation, Dr. Stake has not chosen to apply the quantifying aspects of measurement, where appropriate, to the kinds of data for which he makes such a persuasive case.

My final observation has to do with Dr. Stake’s belief that the evaluator is faced with two antithetical roles: “. . . One being the researcher with abstract data no one else understands, so who then, in effect, acts as a surrogate decision-maker (pre-ordinate evaluation) - or the researcher with experiential data to share, so who then arranges things to help decision-makers be surrogate researchers.” I found that statement to be very moving but not accurate. In one sense, the evaluator cannot avoid elements of decision-making in his evaluative role. I believe that there is decision-making of the type Dr. Stake is concerned about in both types of evaluation. In the pre-ordinate design the evaluator’s judgements and decision-making are more explicit, public, and replicable. In the responsive design, the evaluator’s judgements and decision-making are less explicit, less public, and replicable to an unknown degree.

THE AUTHOR

RICHARD GALLAGHER is Director of Educational Research and Services and Associate Professor of Medical Education in the School of Medicine at Wayne State University, Detroit, Michigan.