APPLYING THE EVALUATION STANDARDS IN A DIFFERENT SOCIAL CONTEXT
DAVID NEVO
Tel-Aviv University, School of Education

The development, field testing, and publication of the Standards for Evaluations of Educational Programs, Projects, and Materials by the Joint Committee on Standards for Educational Evaluation (Joint Committee, 1981) is probably one of the most prominent endeavors of the last decade in the field of educational evaluation. It is still too early to assess the impact of the Standards on the study of evaluation and on the improvement of its practice, but the constructive discussions they have stimulated so far among evaluators¹ have already demonstrated their potential contribution to evaluation in education and other related fields. Regardless of the support or resistance that they gain in various professional circles, they can hardly be ignored by any evaluator. Evaluators around the world must consider the Standards and their potential benefits, and if they decide not to utilize them, they must at least develop a justification for rejecting them. No evaluator should develop his or her attitude toward the Standards without a careful consideration of their characteristics.

Evaluators outside the United States also need to pay attention to those characteristics that might be unique to the social context and the evaluation perceptions of the United States, the country in which the Standards were conceived and developed. This paper will discuss some of the issues that should be considered in an attempt to apply the Standards in a social context other than that of their country of origin. Those issues will be discussed from the perspective of one educational system (the Israeli educational system), which differs from the American system in size, structure, tradition, ways of operation, and the way evaluation is perceived. Such a discussion is somewhat limited in its scope and might apply to some educational systems and not to others, but as an example it might be useful to many of them. We shall point out some relevant differences between the two educational systems and conclude with some inferences regarding the applicability of the Standards for an educational system in a different social context.

¹See, for example, a series of critiques of the Standards which appeared recently in Evaluation News (Vol. 2, No. 2, May 1981).

Requests for reprints should be sent to David Nevo, Senior Lecturer, School of Education, Tel Aviv University, Ramat-Aviv 69978, Tel Aviv, P.O.B. 39040, Israel.

DIFFERENCES IN THE SOCIAL CONTEXT OF THE EDUCATIONAL SYSTEM

1. The Israeli educational system is a centralized system with a national curriculum, standard salary schedules for teachers across the country, and a central ministry of education that exercises its authority by means of national matriculation examinations at the end of high school and other means of control. Although its overall size might be equal to the size of some of the large American school districts, it differs from them in being a national system of a sovereign state. At the same time, the power of the Israeli local school agencies is minimal, and their status stands in strong contrast with that of the American local school district. Most of the illustrative cases provided at the end of each standard are in the context of the American local school district, and are thus of limited relevance to the Israeli system.

2. Evaluation activities are relatively rare in the Israeli educational system (Lewy et al., 1981). There is no long tradition of utilizing empirical data for decision making and policy development. Funds provided for evaluation are scarce, and mandatory evaluation based on legislation requiring evaluation, such as ESEA of 1965 or PL 94-142 of 1975 in the United States, does not exist at all in the Israeli system. In many cases evaluation is avoidable, and even where it has been inevitable it has not created enough interest among administrators to set standards for the conduct of evaluation or to assess its quality.
3. Almost all program evaluation activities in the Israeli educational system are conducted by university faculty members, and in most cases as part of their regular research interests rather than as a special attempt to serve decision making or policy development. This is significantly different from the American scene, where many of the evaluations are conducted by local and state administrative departments and by numerous commercial firms. Thus, being associated with the prestige of institutions of higher education, Israeli evaluators might be less exposed to critique by school administrators. On the other hand, any attempt to review their performance is in danger of clashing with the principle of academic freedom.
4. Compared to the United States, Israel is a small country with a small educational system and a very small number of people who are involved in the conduct of educational evaluation. The "evaluation community" of Israel is small in size and in most cases based on close relations among its members. Any attempt to conduct "meta-evaluation," intended to assess the work of one member of the group by another, may be perceived as unreliable within the educational system and as a source of tension within the professional community of evaluators. The use of non-Israeli evaluators to assess the work of their Israeli colleagues usually involves high costs to cover travel and other expenses. These difficulties are in addition to the regular resistance to meta-evaluation which appears to exist in educational communities everywhere.
DIFFERENCES IN THE PERCEPTION OF EVALUATION
The Standards are not geared to any particular "evaluation model" or "evaluation theory," and as stated by their developers they "encompass a valid and widely shared conception of evaluation and the conventional wisdom about its practice" (Joint Committee, 1981, p. 6). This claim is supported by the fact that the Standards were developed through the deliberations of a committee of 17 members, representing 12 professional organizations associated with educational evaluation in the United States, and extensive input from many people. No systematic attempt has yet been made to determine the precise evaluation perceptions of Israeli evaluators, but it is apparent that many of them perceive evaluation in a way that might be in some disagreement with the conception of evaluation on which the Standards are based. Following are some of these points of disagreement.

1. The Joint Committee defined evaluation as "the systematic investigation of the worth or merit of some object" (Joint Committee, 1981, p. 12). This judgmental definition is shared only by a minority of evaluators in Israel. The Tylerian approach, defining evaluation as "the process of determining to what extent the educational objectives are actually being realized" (Tyler, 1950, p. 69), is very popular among Israeli evaluators. Another widely accepted approach in this country fails to make a distinction between "research" and "evaluation" and perceives as evaluation any investigation of relationships among variables or other theoretical propositions, not necessarily associated with the assessment of the merit or worth of educational objects. In a few cases "evaluation" is perceived as a synonym for "sloppy research."

2. The Joint Committee recognized two major functions of evaluation: the formative function (which is served when evaluation is used to improve an object while it is still being developed) and the summative function (which is served when evaluation is used for accountability, certification, or selection).
In many cases, it is apparent that evaluation in Israel is not serving any formative function, nor is it being used for accountability or other summative purposes. However, it is being used for other functions suggested by the professional literature (Dornbusch & Scott, 1975; Weiss, 1977; King & Thompson, 1983; Lewy & Nevo, 1981; Nevo, 1983). One such function is what might be called the psychological or sociopolitical function: to create awareness of special activities, motivate evaluees, or promote public relations. Another function unrecognized by the Joint Committee is the use of evaluation for the exercise of authority. In formal organizations it is the privilege of superiors to evaluate their subordinates, and not vice versa. Evaluation is sometimes used in this administrative function to demonstrate the superiority of one person over others or of one organization over another. Not to recognize the last two functions, or to refer to them as misuses of evaluation, might not be an appropriate solution in a system in which they are practiced quite often.

3. A basic assumption of the Joint Committee, which has been expressed in many of the Standards, is that evaluators should identify their clients and serve their information needs. To use the words of the Standards: "In essence, evaluators are advised to gather information which is relevant to the questions posed by clients and other audiences . . ." (Joint Committee, 1981, p. 7). The client has never been the idol of the Israeli evaluator, who would rather worship the so-called "scientific community." Being a member of this distinguished community, he or she is also expected to know which kinds of information are important and therefore should be collected. Such an evaluator would not waste too much time identifying the specific audiences of the evaluation and serving their information needs, and would not interact with clients more than is required to develop rapport and ensure cooperation. Unlike that of the Standards, his or her approach is not very client-oriented.
4. The Standards encourage the use of a variety of methods of inquiry. The Joint Committee did not declare allegiance to any specific methodology. Surveys, experimental and quasi-experimental designs, as well as jury trials, advocate teams, and Modus Operandi Analysis are all perceived in the Standards as legitimate methods to be used in the conduct of evaluation. This is not the case in our country. Survey methods and experimental designs are the most acceptable, and sometimes the only acceptable, methods for conducting a worthwhile evaluation. Several large-scale surveys have been conducted, but (as anywhere else in the world . . .) a true experimental design has hardly ever been implemented by an Israeli evaluator. However, we never lost faith in the future . . .
5. The Joint Committee suggested four groups of standards (utility, feasibility, propriety, and accuracy) with an implicit preference for utility standards. Although they recommended that, for the time being, all standards be considered equally important, it appears that their published order, in which the utility standards come first and the accuracy standards come last, was not without purpose. In the Israeli scene, preference is given to accuracy standards, with secondary importance ascribed to utility and feasibility standards. Accuracy standards seem to be the main, and sometimes the only, concern of many influential members of the local evaluation community. Strangely enough, they are also supported by many school people.
WHAT IS APPLICABLE IN A DIFFERENT SOCIAL CONTEXT?
In light of the differences in the social contexts of the American and Israeli educational systems, and the differences between the evaluation perceptions underlying the Standards and those prevalent among Israeli evaluators, can the Standards still be applied in this different social context? Can they be applied in any other different context? The answer to these questions depends on how one perceives the Standards and what one means by applying them. Those who perceive the Standards as a set of mechanical rules and foolproof prescriptions for "how to succeed in evaluation"² will be disappointed to find out that many of the details of their content are tied to the social context of the American educational system and based on the evaluation perceptions of the members of the Joint Committee. Thus, they may not be applicable in a different social context, nor can they be used by those with different perceptions of evaluation. However, those who perceive the Standards as guiding principles for improving evaluation practice, developing its theoretical frameworks, and promoting public credibility for educational evaluation will find in the Standards and in the process of their development an invaluable contribution, regardless of the social context in which they operate or their evaluation perception. The pioneering project of the Joint Committee and the result of its endeavor provide several principles which should be applied by evaluators regardless of their social origin and "evaluation persuasion." Following are some of those principles.

1. Evaluators should not only preach to others that they should evaluate their deeds but also evaluate their own evaluations.
Such exemplary conduct is especially important for a profession that believes that evaluation is an inevitable part of any human undertaking but has so little empirical evidence to demonstrate its usefulness and its contribution to the improvement of education or other related areas.

2. In setting standards for evaluation there is a crucial need to promote communication and collaboration among a wide range of parties associated with evaluation and having an interest in its consequences. The fact that a group of 17 members representing 12 organizations could reach consensus on 30 standards is a demonstration that such collaboration is possible.³

3. The process of developing standards for evaluation and maintaining their ongoing vitality and dynamism requires a significant investment of human energy and other resources. The 4 years that have been devoted to the development of the Standards provided a solid basis for their quality, but if no additional resources are provided in the United States or in other countries on a continuous basis to facilitate revisions, upgrading, and further development, they may turn into a static and rigid entity and lose their spirit as a means for the renewal and innovation of educational evaluation.

4. Evaluation cannot limit itself to standards of accuracy and must also consider other standards such as utility and feasibility. Regardless of the specific content of one standard or another, it seems clear that evaluators cannot ignore the expectations of the societies in which they live and sacrifice society's information needs for scientific accuracy. Attempts to draw a clear distinction between research and evaluation should be encouraged.
²The Joint Committee stated clearly that "the standards presented are not mechanical rules; they are guiding principles" (Joint Committee, 1981, p. 9), but unfortunately the mechanistic application of the standards is not totally unpopular among evaluators.
³For the benefit of the profession, an attempt should be made to study the infrastructure of the Joint Committee so that a valuable lesson can be learned about effective ways of collaboration among individuals and groups associated with evaluation.
5. The Standards should not be used only to guide evaluation practice and assess its merit and worth, but should also serve as a framework for research on evaluation and as a source for the development of evaluation theory.⁴ The significance of the Standards in years to come will be proved not only by the extent to which they are used to regulate evaluation practice, but also by the extent to which they stimulate research on evaluation and promote the development of evaluation theory.

⁴In relation to this recommendation, see Stufflebeam (1983) and the special issue of Studies in Educational Evaluation (9(1), 1983) devoted to research on evaluation.
CONCLUSION

An attempt has been made in this paper to point out some of the issues that should be considered in applying the Standards in a social context different from the one in which they have been developed. Our analysis of the case of Israel suggests that although the specific content of the Standards seems to be geared to the American educational system, and thus may not be applicable to other social contexts, they suggest several principles that could be applied in many social contexts to improve evaluation practice and stimulate the systematic study of evaluation and its conceptualization. Our conclusion may be useful to evaluators outside the United States who are considering the application of the Standards. It might also have some implications for the Joint Committee as it continues the development and dissemination of the Standards. We suggest that in addition to its attempts to revise the content of the Standards, the Joint Committee also devote its efforts to extending and elaborating the basic principles of the Standards and to gaining wide support for their rationale. We also suggest that the Joint Committee make deliberate attempts to discourage evaluators in other countries, as well as in the United States, from using the Standards in a rigid way as a set of mechanical rules and prescriptions.
REFERENCES

DORNBUSCH, S. M., & SCOTT, W. R. (1975). Evaluation and the exercise of authority. San Francisco: Jossey-Bass.

JOINT COMMITTEE ON STANDARDS FOR EDUCATIONAL EVALUATION. (1981). Standards for evaluations of educational programs, projects, and materials. New York: McGraw-Hill.

KING, J. A., & THOMPSON, B. (1983). Research on school use of program evaluation: A literature review and research agenda. Studies in Educational Evaluation, 9(1), 5-22.

LEWY, A., KUGELMASS, S., BEN-SHAKHAR, G., BLASS, N., BORUCH, R. F., DAVIS, D. J., NEVO, B., NEVO, D., TAMIR, P., & ZAK, I. (1981). Decision-oriented evaluation in education: The case of Israel. Philadelphia: International Science Services.

LEWY, A., & NEVO, D. (Eds.). (1981). Evaluation roles in education. New York: Gordon and Breach.

NEVO, D. (1983). The conceptualization of educational evaluation: An analytical review of the literature. Review of Educational Research, 53(1), 117-128.

STUFFLEBEAM, D. L. (1983). Reflections on the movement to promote effective educational evaluation through use of professional standards. Studies in Educational Evaluation, 9(1), 119-124.

TYLER, R. W. (1950). Basic principles of curriculum and instruction. Chicago: University of Chicago Press.

WEISS, C. H. (Ed.). (1977). Using social research in public policy making. Lexington, MA: Lexington-Heath.