Evaluation and Program Planning, Vol. 14, pp. 173-179, 1991
Printed in the USA. All rights reserved.
0149-7189/91 $3.00 + .00
Copyright © 1991 Pergamon Press plc

ROLE CONFLICTS FOR INTERNAL EVALUATORS

SANDRA MATHISON
SUNY at Albany
ABSTRACT

Internal evaluators, by the nature of their work, must perform several different roles in their organizational context. These roles are related to being a professional evaluator, a member of a community focusing on some substantive issue or topic, and a member of some organization. These various roles are inherently conflictual, making the work of internal evaluators an exercise in negotiating and managing conflict.
INTRODUCTION

The professionalization of program evaluation combined with the increased use of internal evaluators has raised some fundamental conflicts for evaluators working within organizations. With any newly created function in an organization there is a period of uncertainty and negotiation over roles, duties, and responsibilities. In this paper, I contend that even after this initial period of negotiation, the internal evaluator is left to assume several fundamentally conflicting roles in the conduct of her work. I will explicate these roles and the nature of the conflicts, and suggest some ways to manage these inevitable conflicts. To do this I will use my most recent experiences as the Evaluation Coordinator of the University of Chicago School Mathematics Project as an example, but other examples could easily have been used.
THE UNIVERSITY OF CHICAGO SCHOOL MATHEMATICS PROJECT AND THE EVALUATION COMPONENT
The University of Chicago School Mathematics Project (UCSMP) is a foundation-funded curriculum and teacher development project which began in 1983 and will be funded at least through 1989. The UCSMP operates out of the Departments of Education and Mathematics at the University of Chicago and is divided into components: an elementary curriculum development component, a secondary curriculum development component, an elementary teacher development component, a resource component, and an evaluation component. Each of these components is headed by a University of Chicago faculty member and staffed with soft money positions and graduate students. The directors of each component report to an overall project director. The function of the Evaluation Component was initially conceived by the University of Chicago administration as a means by which the claims of the curriculum developers could be monitored. The need for an evaluation function was not born from a perceived need by the project itself nor the initial funding agency, the AMOCO Foundation. However, other project funding sources, such as NSF, have provided monies on the condition that evaluations be conducted.

The Evaluation Component has two part-time faculty directors who have delegated responsibility for managing the component to the UCSMP Evaluation Coordinator, a position I occupied from 1985 to 1988. The Evaluation Component has its own budget and a staff of nine. At the most general level, the Evaluation Component is responsible for evaluating the output of the other components of the project. Typically this involves working with a group of users identified by developers and/or the Evaluation Component. Some evaluation work is seen as formative and some as summative, a variety of evaluation methodologies are employed, and evaluation studies are small and large in scale.

An earlier version of this paper was presented at the 1987 American Evaluation Association meeting in Boston, MA. Requests for reprints should be sent to Sandra Mathison, School of Education, SUNY Albany, Albany, NY 12222.

There are many issues that arise from the experience of having been the UCSMP Evaluation Coordinator. For example, it is apparent that the conduct of internal evaluations is not primarily a contractual agreement between evaluator and program director. Unlike an external evaluation which is based on a negotiated contract, internal evaluators, even when an evaluation plan is negotiated, must be prepared to change that plan. It is a nonbinding agreement that can be superseded at any time in the best interests of the organization generally or the program specifically. And, the expectation is that evaluators can and should respond to the changing needs of program developers. Seldom do drastic changes occur, but it is certainly possible. In fact, on a few occasions the Evaluation Component has abandoned certain data collection activities for the "good" of the project being evaluated.

Another issue arising when evaluation is done from within an organization is that the evaluation office must vie, along with everyone else in the organization, for resources. Within the UCSMP, all components compete for funds. There is no agreed upon proportion of the budget designated for evaluation work and each fiscal year the negotiation over who gets what share takes place. Typically, the evaluation office (and probably research offices, generally) is at a disadvantage. The evaluation is not the heart of the organization's work (in the case of the UCSMP this is curriculum development) and can more justifiably have its resources limited. Because evaluations are often informative but not remedial, they are seen as nice to have although possible to get along without. And, not least significantly, evaluators on occasion criticize the work of developers, and consequently evaluation work may become less popular and sometimes unpopular.

Another issue, which will be the focus of the rest of this paper, is the several different roles internal evaluators assume in the course of their work.

ROLES ASSUMED BY INTERNAL EVALUATORS

There are three dominant roles evaluators assume: that of a professional evaluator, a member of a substantive field, and a member of an organization. We all assume diverse roles in our lives, for example, balancing career and family, so there is nothing unique about having to play more than one role. These roles that evaluators perform are, however, inherently in conflict, and many of the issues about internal evaluation arise from the conflicting nature of these roles. First, what are these roles?

The role as a professional evaluator is easiest to describe and in personal terms probably the most important to any evaluator. Most internal evaluators have been trained as evaluators, typically doing graduate work in some program that has "Evaluation" in the title. Through this training and subsequent work experiences, evaluators develop a notion of evaluation as a separate field with its own issues, methodologies, ethics, and so on. Typically evaluators belong to evaluation associations (like the American Evaluation Association (AEA) or the American Educational Research Association's (AERA) Division H), subscribe to and read evaluation journals, have shelves lined with evaluation books, and some teach evaluation courses to the next generation of professional evaluators. Doing program evaluation is what evaluators see as their work.

Both internal and external evaluators evaluate something. There is a particular substantive area in which one evaluates and to some lesser or greater degree the evaluator also becomes part of that substantive field. In my case this is the community of mathematics educators. Belonging to a substantive community is compelling for an internal evaluator because she is almost exclusively involved in evaluating within that substantive field. Unlike an external evaluator who moves on to the next evaluation job, the internal evaluator must operate in the same substantive field over an extended period of time. The effect of this identification with a substantive field is the introduction of a new peer group for the evaluator. One is expected to attend math education professional meetings, to publish in mathematics education journals, and to give talks to interested groups, such as school districts, on evaluation findings. And, when an internal evaluator gets the rare opportunity to do contract evaluation or consulting, it is typically in the substantive field in which she has been working.

As an internal evaluator goes about the business of being a professional evaluator within some substantive field, she is also a member of an organization. Apart from evaluation activities and specific program activities, organizations have certain goals and purposes. It is within the context of these organizations that all members work together toward some more or less agreed upon ends. It is supposed that everyone within the organization has, at a general level, a shared vision and purpose in the execution of their work. There are, of course, different audiences within an organization each having different needs and purposes in mind when evaluations are conducted. Assuming most organizations are bureaucratic and political, evaluators must play by the organization's rules. And, to the outside world, few distinctions are made about roles played within the organization: Everyone is a public relations officer for the organization (Fig. 1).
Figure 1. Relationships among roles played by internal evaluators. [The figure depicts the three roles (professional evaluator, member of substantive community, member of organization) arranged in a triangle, with conflict linking the professional evaluator role to each of the other two roles.]
CONFLICT AMONG ROLES

How are these various roles that internal evaluators play in conflict? I will focus on the conflicts arising between the roles of professional evaluator and member of a substantive community, and then those between professional evaluator and member of an organization. I leave aside possible conflicts that might arise between being a member of a substantive field and a member of an organization, not because there are no conflicts here but because they are of secondary importance from an evaluation point of view.¹

PROFESSIONAL EVALUATOR VERSUS MEMBER OF SUBSTANTIVE FIELD

The conflicts inherent in these two roles affect an evaluator personally as well as the ways in which program evaluation is conducted. These two roles present the internal evaluator with different peer groups which are important for intellectual exchanges, professional development, and career advancement. Two areas in which these roles are manifest are membership in professional associations and publishing in relevant professional journals.

Take, as an example, professional associations. Most of us received our training as evaluators in some general social science field, and thus identify ourselves as evaluators in education, psychology, sociology, public health, or whatever. As a consequence, we perceive the greatest professional benefits accruing from being active in those areas. Typically we are not very specialized in a social science field; although I am in education, I perceive mathematics education to be but a subset of the whole field. So, when one chooses associations and meetings to attend, an evaluator's first choices are probably an evaluation association (such as AEA) and a general social science association (such as AERA, the American Psychological Association, or the American Sociological Association). These preferences will often be at odds with an expectation that evaluation findings ought to be primarily disseminated to a more narrowly defined substantive community, and that the interests of the organization and the evaluation will best be served if the evaluator's interests also lie within this restricted substantive area. This is sometimes accompanied by a lack of appreciation of evaluation as a separate field by others within the organization. In my case, the expectation is that evaluation results be reported at annual meetings of organizations such as the National Council of Teachers of Mathematics. The math educators in the UCSMP have probably never heard of AEA and for the most part do not participate in AERA, which creates little overlap of professional organizations of import to UCSMP evaluators and curriculum developers. It depends on the particular organization for which an evaluator works, but one will be encouraged, if not required, to participate first and foremost in associations in the substantive area. Because attendance at professional meetings is usually subsidized by the organization, a certain degree of leverage is built in. Sometimes funds are available only for meetings in the substantive area. Or, funds are available for other professional meetings only if there is a clear relationship between the content of the meeting and the content of the evaluation. If resources, including time and money, were unlimited there would be no conflict, but resources are usually quite limited and evaluators must make choices and sometimes sacrifices.

These same concerns apply to the issue of what journals to publish in: evaluation journals, general social science journals, or substantive journals. These professional activities are important, and internal evaluators are faced with competing choices: a focus on a substantive area identified with the organization, or a focus on evaluation promoted by a self-identification as a professional evaluator. At the risk of sounding too careerist, these dual peer groups present internal program evaluators with difficult choices which seriously affect personal career development.

Related to these professional activities is the potential to become identified as an evaluator of X, in the case of the UCSMP, mathematics education programs. None of us wants to have our evaluation expertise restricted, or have the larger social science community perceive our expertise to reside in a restricted substantive area. However, a long and singular association with a particular field will de facto have such an effect. Opportunities to work in evaluation outside one's organization are likely to be in this same field because often those are the people with whom internal evaluators associate and who are familiar with the evaluator's work. Even in consulting or contract evaluations it is difficult to break out of the substantive field the internal evaluator is involved with.

¹I am grateful to Rosalie Torres (1991) for pointing out significant conflicts between these two roles that might have potentially serious effects for internal evaluators. For example, if the profitability of the substantive part of the organization the internal evaluator is associated with is questioned, the evaluator's job might be in jeopardy.
Another, less personal, conflict that arises from these two roles is what evaluation methodologies will be used (see Torres, this issue). In mathematics education, and education generally, there is great attention to outcome measures and standardized test scores particularly. Although frequently such data do not answer questions of interest to the mathematics curriculum developers, there is an epistemology in any field that legitimizes certain knowledge and delegitimizes other ways of knowing. The use of any evaluation methodologies must be at minimum palatable to those whose work is being evaluated; otherwise, the effort is wasted. The issue is not whether one or another evaluation approach is "right," but rather to what degree evaluation methods are constrained by the field's epistemology. The evaluator is faced with balancing her own notion of "good" methodology against the risk of rendering the evaluation results useless because they do not conform to math educators' notions of appropriate procedures or valid knowledge.

The conflicts that arise from assuming a role as a professional evaluator and a member of some substantive community are therefore both personal and professional. Because internal evaluators belong to two peer groups they are presented with choices that affect their personal careers as well as the way they do evaluation.
PROFESSIONAL EVALUATOR VERSUS MEMBER OF THE ORGANIZATION

What about conflicts created by simultaneously being a professional evaluator and a member of some organization? Every organization has a mission and a purpose, and sometimes organizations employ evaluators to strive for generally agreed upon goals. In fact, most discussions of internal evaluation are based on a decision-making model of evaluation which primarily defines the evaluator's role as serving the needs of the organization (Love, 1983). For the UCSMP, the common goal is to recast math education (for students and teachers) for the 1990s and beyond. To do so, the project focuses on mathematics of contemporary importance, utilizes technological advances, and attempts to increase student proficiency in real-life problem solving. Obviously, the curriculum developers take this mission seriously, and project evaluators are also expected to work toward these same goals. This expectation is manifest in a variety of ways. Evaluators, like everyone else on the project, are expected to act as goodwill ambassadors for the UCSMP. In a project such as the UCSMP, public relations are important because everyone is part of a potential market for materials produced by the project. Evaluators are expected to simultaneously be advocates and critics of the project.

A problem for all evaluators, but of particular salience for internal evaluators, is cooptation. The degree
to which the organizational goals are internalized by the UCSMP evaluation staff varies, but for the most part the staff become project advocates. Evaluators are disappointed when some aspect of the project goes badly and when they have to report critical findings. Sometimes knowingly, sometimes unknowingly, we intervene to make the project more successful. For example, more than once primary school teachers have been observed attempting, rather unsuccessfully, to implement instructional strategies they have learned in teacher in-service workshops. Sometimes the teacher's frustration and demands for assistance have led to such actions by evaluators as teaching a demonstration lesson in the classroom, or monitoring students as they try something new, or giving the teacher a personal critique of her instruction. It is all well and good to say this is not the evaluator's role, but it is only possible to say that if one has not been put in such a position. The desperation of one teacher and its potential effects on other project teachers in a school overcome abstract arguments about objectivity and impartiality.

It is not merely humanitarianism that provokes evaluators to act in ways that will make the project more successful. Sometimes one's job is dependent on the successful continuance of a project. Sometimes one's performance is evaluated on criteria (such as good public relations or good service to curriculum developers)
associated with the success of the project. It is usually in the evaluator's best interests to have the project do well.

This is also manifest in what evaluation findings are reported and how. There are frequent exhortations in the evaluation utilization literature about reporting frequently, verbally, and informally (Alkin, Daillak, & White, 1979; Patton, 1978). There is a great deal of what I call hallway planning and reporting done in the UCSMP. While this is often a good way to convey findings, it is also a good way for people to ignore or forget what has been said. It is easiest to ignore a conversation, not quite so easy to ignore a memo, and harder to ignore a formal evaluation report. Decisions about how and if evaluation results will be disseminated, inside and outside the project, are constantly negotiated. Evaluators want to make reports based on full and honest disclosure, while project curriculum developers want to make reports that promote the project. This is not to suggest that either evaluators or curriculum developers are deceptive, but that they often have different motives for reporting evaluation findings.

There is also a focus on positive evaluation findings. When I have completed an evaluation which is fairly positive, the curriculum developers are happy and appreciate the work of the evaluators. When a critical evaluation has been completed, the evaluators are more likely to be accused of not understanding the curriculum developers' intents, of being diverted by uninteresting issues, of not knowing enough about mathematics education, or, infrequently, of outright incompetence. If results are negative and do not provide a means with which project materials can be marketed, they will likely be ignored or subverted. And, often, there is little the evaluator can do about it.

Being a member of an organization also influences how evaluations are conducted.
Just as substantive fields recognize certain procedures and methods as valid, so do organizations; this is what Kennedy (1983) refers to as an organization's problem-solving style. She suggests that evaluators must adapt to organizational problem-solving strategies, but paradoxically such adaptation "assured continuing organizational support for the evaluation enterprise, and it often resulted in failure to meet the professional standards of the evaluator's role." The power of organizational forces to shape the conduct of program evaluation is great.

Most organizations are bureaucracies and operate to some degree on what are referred to as "standard operating procedures." SOPs are the rules of conduct that allow an organization to function regardless of who occupies certain roles. There are procedures for paying salaries, for ordering office supplies, and so on. To some extent evaluation methods become part of an organization's standard operating procedures (Mathison, 1983). This reduces uncertainty on the part of program staff, makes doing evaluations more efficient, and conveys an often misguided notion of fair treatment by evaluators. For example, a teacher survey instrument, once developed, will likely be used repeatedly in a variety of situations because it has been agreed upon by evaluators and program developers. The evaluation methodologies, to a lesser or greater extent, become part of the organizational operations.

Besides being a part of the organization's standard operating procedures, there is a tendency for the evaluation to focus on the questions of greatest interest to administrators/managers. This focus on serving the needs of administrators is partly a result of the structure of organizations and an evaluator's place within that structure. For many of the reasons already described, administrators become the most important audience for evaluation. It is sometimes difficult to attend to larger issues about curriculum development that arise from teachers' experience when one is pressed to provide bottom-line information, such as analyses of student achievement data. Granted, there are many internal evaluators who perceive this to be the most desirable situation and that indeed the main purpose for evaluating is to provide information to decision makers (Chelimsky, 1987; Love, 1983; Sonnichsen, 1987; Stufflebeam et al., 1972). Others see the approach as less benign (House, 1980). The point is not to eschew the practice of program evaluation that has high utility for administrators but rather to avoid slavish adherence to only the interests of administrators. The interests of teachers involved in the evaluations of UCSMP textbooks are also important, and by inference the interests of potential future users of the materials.
This is a problem shared by internal and external evaluators but the typical internal evaluator’s “position at the elbow of the decision-maker” (House, 1980) makes it more difficult for evaluators to expand their purview to include many interests, in addition to those of decision makers. The issue of maintaining independence is a more pressing one for internal evaluators. Structurally and organizationally evaluators are aligned with administrators. There are few incentives, save one’s ethical stance as an evaluator, to look beyond the questions asked by and for decision makers. However, for evaluation to maintain its integrity, internal evaluators must see the big picture and have the moral gumption to pursue potentially unpopular issues in the conduct of program evaluation. Organizations work against self-reflection and self-criticism and thus internal evaluators must often go against the organizational zeitgeist. Many of the conflicts created by being an evaluator and a member of an organization arise from different motivations and the inevitable fact that one will have to
face the same project staff after completing an evaluation. Getting repeat business is a problem for all evaluators, and the intensity of this problem is great for internal evaluators. One very critical evaluation could potentially cost an evaluator her job or, less drastically, provide the logic for ignoring subsequent evaluation studies. And, the perceptions of completed evaluations influence an evaluation office's access to resources and the freedoms allowed an evaluator.

MANAGING ROLE CONFLICTS
Although the picture of conflicts I have painted seems pessimistic, there are ways of managing the conflicts to minimize personal stress and to avoid compromising one's evaluation integrity. The aspect of internal evaluation not discussed here is the potential for conducting program evaluations that are sufficiently sensitive to a particular context and that take advantage of the politics of an organization to promote the use of evaluation findings. This potential makes it worth recognizing the inevitable conflicts I have described and concentrating on managing, rather than avoiding, these conflicts. In fact, the utilization of evaluation is dependent on successful management of these inevitable conflicts.

Kennedy (1983) describes four categories of roles assumed by school district in-house evaluators: the technician, the participant, the management facilitator, and the independent observer. Each of these roles assumes a different prioritization of the three major roles described earlier in this paper. Although Kennedy does not identify a role of participant observer, such a role could combine an interest in clients' and decision makers' needs from a sympathetic but neutral position. The prescriptions offered here are in keeping with a more complex, multifaceted perspective of the internal evaluator's role.

Although it seems trite, perhaps the most important strategy for managing conflict is for the internal evaluator to be an educator. Not only an educator in the sense of revealing something through evaluation (Wise, 1980), but also an educator about evaluation: about what evaluation is, about what evaluation can and cannot do, about evaluation as an area of intellectual interest. Just because an organization decides it should have its own evaluation office, there is no guarantee of a practical understanding of what evaluation is.
It is wise for an internal evaluator to assume that her capabilities and functions will be ill-understood by others within the organization and to concentrate on developing a better understanding within the organization. For example, I have identified the perceptions of evaluation held by each UCSMP component director and try to work within their frameworks. By recognizing their definitions of evaluation I can work toward educating them about what my role should be. Sometimes this is a calm rational effort, and sometimes it is not. For example, I succeeded, after 2 years of discussions and some unilateral action, in convincing a component director that sampling students within classes for achievement testing would provide sufficient data for understanding the effects of the curriculum: a seemingly small point, but a necessary point of discussion and ultimately agreement which will make future evaluations for this component director more effective.

Another strategy for managing conflict is to take nothing for granted. At the risk of seeming pedantic, the evaluator must assume responsibility for clarifying what is expected and articulating conflicts when they occur. Only the evaluator can make it clear to others in the organization when conflicting demands are being made. For example, if it is important, an evaluator should negotiate what professional associations she will be involved in and strike a compromise that satisfies the needs of the evaluator and the needs of the organization. Or, attempt to develop some guidelines for deciding when to intervene because the use of project materials is failing miserably. Or, promote open discussions about whose interests should be served in the conduct of evaluation studies. Or, negotiate who owns evaluation data by setting general policy about access to and use of data. The specific issues will vary from one organization to the next, but the need to continue a dialogue about conflicts is common.

The crucial characteristic of independence must also be actively promoted and maintained by internal evaluators. Not only must they monitor their own evaluation practice to ensure evaluations (including questions asked, methods used, and audiences served) are independent and take a critical posture toward that which is being evaluated, but monitoring of evaluators' independence must also be done by others. Metaevaluation (see, e.g., Stake, 1986) and evaluation auditing (Schwandt & Halpern, 1988) are strategies sometimes employed to monitor the conduct of program evaluations, although neither strategy seems quite adequate for the task.
However, it is ultimately in the best interests of internal evaluators and the organizations with which they work to evaluate the evaluations, but this should be done on a broad basis and not merely reflect a particular model of program evaluation.

Evaluators probably have to be prepared to give a little more than they take when compromises are made. Compromises must not jeopardize an evaluator's ethics, but must also recognize organizational operations and politics. The political nature of evaluation takes on new and more dimensions for internal evaluators, who must attempt to control their own environment while not jeopardizing relationships with other staff.

Doing evaluation is stressful work. Doing internal evaluation is especially stressful, both personally and professionally. However, internal evaluation is important, useful work, and the conflicts which are the source of these stresses can be managed: not eliminated, but managed.
REFERENCES

ALKIN, M.C., DAILLAK, R., & WHITE, P. (1979). Using evaluations. Beverly Hills: Sage.

CHELIMSKY, E. (1987). The politics of program evaluation. New Directions for Program Evaluation, no. 34. San Francisco: Jossey-Bass.

HOUSE, E.R. (1980). Evaluating with validity. Beverly Hills: Sage.

KENNEDY, M.M. (1983). The role of the in-house evaluator. Evaluation Review, 7, 519-541.

LOVE, A. (Ed.). (1983). Developing effective internal evaluation. New Directions for Program Evaluation, no. 20. San Francisco: Jossey-Bass.

MATHISON, S.M. (1983). The standardization of evaluation practice: A case study of a Canadian community college. Unpublished master's thesis, University of Illinois at Urbana-Champaign.

PATTON, M.Q. (1978). Utilization-focused evaluation. Beverly Hills: Sage.

SCHWANDT, T.A., & HALPERN, E.S. (1988). Linking auditing and meta-evaluation: An audit model for applied research. Newbury Park, CA: Sage.

SONNICHSEN, R.C. (1987). An internal evaluator responds to Ernest House's views on internal evaluation. Evaluation Practice, 8, 34-36.

STAKE, R.E. (1986). Quieting reform: Social science and social action in an urban youth program. Urbana, IL: University of Illinois Press.

STUFFLEBEAM, D., FOLEY, W.J., GEPHART, W.J., GUBA, E.G., HAMMOND, R., MERRIMAN, H.O., & PROVUS, M.M. (1972). Educational evaluation and decision making. Itasca, IL: F.E. Peacock Publishers.

TORRES, R. (1991). Improving the quality of internal evaluation: The evaluator as consultant-mediator. Evaluation and Program Planning, 14, 189-198.

WISE, R.I. (1980). The evaluator as educator. New Directions for Program Evaluation, no. 5. San Francisco: Jossey-Bass.