THE INHERENTLY POLITICAL NATURE OF PROGRAM EVALUATORS AND EVALUATION RESEARCH

JACQUELINE R. McLEMORE and JEAN E. NEUMANN

Case Western Reserve University

ABSTRACT

This paper discusses the importance of addressing the political dynamics inherent in any evaluation research process. The theoretical orientation contributing to the denial and avoidance of politics is articulated and contrasted with one which posits politics as normative and useful. A connection is drawn between stakeholder and case study approaches to evaluation. It is argued that both are effective routes to evaluations which facilitate decision making. A case example of a study of Community Development Block Grant agencies is used to detail problems accompanying avoidance of politics. Guidelines for effectively addressing and using politics are offered.

INTRODUCTION

The interaction of politics with any kind of research process is likely to provoke reaction from the steadiest, calmest researcher. In the case of evaluation research, clarity about one's conceptualization of evaluation determines whether politics is considered separate from research or an essential part of it. Traditional views of scientific research emphasize objectivity and consider politics as bias which requires control. More action-oriented and subjective orientations use politics to generate valid information and build commitment for programmatic changes. Stakeholder-based evaluation and case study research designs accept politics as an essential and enriching component of the evaluation research process. Positive inclusion of political groups in each stage of the evaluation increases the quality of data collection and analysis, the accuracy of theory generation and hypothesis testing, and the likelihood that results will be used by organizational members.

ON THE NATURE OF PROGRAM EVALUATION

Earlier conceptualizations of evaluation research emphasize its summative purposes. However, contemporary researchers focus primarily on formative evaluation or, at least, some combination of both. This shift in emphasis can be linked to the history and development of evaluation activity and to specific advances in organizational theory. Gurel (1975) traces the surge of program evaluation activity to Robert McNamara's early years in the Department of Defense. McNamara required agencies under his administration to demonstrate "rationalized decision-making," a demonstration documented by systems for evaluating program effectiveness and efficiency. Both effectiveness and efficiency were defined in terms of cost benefit (Gurel, 1975, p. 11). This approach spread to other Federal departments, establishing a spare and limited conceptualization.

Deming (1975) illustrates this narrow orientation with his definition of evaluation, "a pronouncement concerning the effectiveness of some treatment or plan that has been tried or put into effect" (Deming, 1975, p. 53). The summative quality of "a pronouncement" holds evaluation as a static activity. The past tense places evaluation in a temporal relationship to the program itself. A treatment or plan occurs and is followed by the evaluation. This linear, cause and effect framework, implied by Deming, assumes an evaluator external to a program. In fact, an external evaluator is necessary to ensure adequate policing of federal funds. Wise (1980) expands the notion of effectiveness to include unearthing causal relationships between given interventions and outcomes. He advocates dissemination of successful interventions to settings beyond a specific evaluated program. Indeed, this approach offers more breadth than the perspectives of McNamara and Deming.


The linear, cause and effect framework remains summative for the particular program while becoming formative for future applications of a given intervention. The idea of dissemination helped evolve the early priority of establishing cost effectiveness with summative evaluation into facilitating programmatic learning and social change with formative evaluation. A gradual shift in favor of emphasizing formative approaches can be gleaned from a plethora of other writings. For these contemporary researchers, program evaluation is characterized as a social activity aimed at determining the merit or value of specific social programs (Carnall, 1980; McCorcle, 1983; Nunnally, 1975; Weiss, 1972). A "social activity" conceptualization moved evaluation from a policing activity using empiricist research methods to assess efficiency, to an inquiry activity using empiricist methods to investigate how to change human behavior. However, in these discussions of summative or formative evaluation, evaluating is still external to the program even though the idea of results impacting other programs appears. No longer an end in its summative self, evaluation research enters, and attempts to form, the larger social system. In fact, some authors discuss evaluation as having essential responsibility for social change (Bunda, 1985; Kirkhart, 1985). Evaluation of programmed activities by people external to the organization has a high degree of congruency with other aspects of U.S. culture: schoolteachers grade, experts investigate, impartial parties assess. As more social programs initiated attempts at societal changes, summative evaluation became a kind of cultural banner of objectivity. Program personnel developed budgets, hired staff, identified goals, targeted populations, and implemented activities.

Eventually, accountability to funding agencies and society-at-large had to be proven. Were these programs resulting in any changes? Could their results be replicated by other programs? Program evaluation became a requirement for most third-party funding. In fact, it has become so much a part of administration that Sjoberg avers that "evaluation is implicit in all social orders" (Sjoberg, 1975, p. 33). It is precisely the pervasiveness of evaluation activity that affiliates evaluation research with organizational behavior research. An integration of evaluation research, organizational theory, and action research illustrates this connection (McCorcle, 1983). While progress has been made from a limited conception of evaluation research as linear and cause and effect to something more dynamic and non-linear, major progress in this direction has been constrained by the conception of evaluation as something outside the organization, as a form of documentation and record-keeping, or as a form of research. All of these consider politics as separate from the research process. However, the location of the evaluator as internal or external to the organization is different from conceiving evaluation as a part of the organization. Considering evaluation as implicit in all social orders and seeing program evaluation as a component of the organization mean that politics will impact the evaluation process. The more researchers conceive of evaluation as behavior characteristic of social systems, rather than as activity outside of an organization's boundaries, the more dynamic and non-linear conceptualizations of evaluation become. Such a conceptualization suggests an acceptance of politics as behavior which is normative in social systems and, as such, is a source of information useful in evaluation processes.

PROGRAM EVALUATION AND THINGS POLITICAL

There are five presuppositions which form the ground of our argument. These are as follows:

• Organizations are viewed as open systems and, as such, are dependent on the larger environment for a host of resources.
• Evaluation is implicit in organizing and is common in organizations in this culture.
• Evaluation processes increase uncertainty and turbulence.
• There are always a multiplicity of desired outcomes (and preferred paths to these outcomes) in any organizational area.
• The pursuit of multiple demands of stakeholders in the organization is evidenced by political behavior.

An open systems perspective prescribes a dynamic interplay of interrelated elements within and outside of the organization. Organizations viewed as open systems are dependent on their environment for certain crucial resources. In contrast, closed systems are viewed as self-sustaining and not needing to input resources of any kind. Further, the boundaries in closed systems are more rigid than in open systems. Finally, a systems view of organizations prescribes input, throughput, and output processes of some kind. When organizations are viewed as open systems, the linkage to evaluations is evident (McLemore & Neumann, 1984). Just as organizations may be more or less formal, so too are evaluation processes. Evaluation may be casual and sporadic or quite routinized and well-budgeted.

In formal organizations, growth and development are contingent on the input and throughput processes. New information and other resources must be taken in and processed in ways which include interpretation, "what does this mean?," and evaluation, "is this right, appropriate, best?" (see Daft & Weick, 1984, for more discussion of this idea). Interpretation and evaluation imply that not all resources will be deemed appropriate and, therefore, some will be discarded, shelved, or otherwise ignored. Coalitions and politics play a major role in the activities of the organization (Crozier, 1964; Cyert & March, 1963; Weber, 1947). Gray and Ariss have noted that ". . . organizations are construed as coalitions of participants with differing motivations, who choose organizational goals through a process of continual bargaining" (Gray & Ariss, 1985, p. 708). By changing "participants" to "stakeholders," it is clearer that organizational coalitions extend through and beyond the boundaries of the internal organizational environment. Again, this is consistent with the open systems view of organizations. Bargaining and competing for goals, outcomes, and particular paths to these desired outcomes result in turbulence or uncertainty within the organization. These activities also occur at the organizational level, between organizations or programs.


Evaluation processes, by their very nature, call into question priorities and relationships. Retrospective analyses and formative conclusions of an evaluation may threaten any episodic calm being evidenced in the organization. Political behavior abounds at these times. Pfeffer (1981) comments that politics "involves those activities taken within the organization to acquire, develop, and use power and other resources to obtain one's preferred outcomes in a situation in which there is uncertainty or dissensus about choices." These conditions are rife when evaluations are in progress. Multiple stakeholders mean that there are multiple goals and outcomes. Even the coalescing of interests among stakeholders fails to significantly decrease the multiplicity of desired outcomes. The formation of coalitions is political in itself. Political behavior is an inherent reality for organizations and for the evaluation processes that are characteristic of organizations in our culture. When there are multiple demands and several possible ways to meet the demands, political behavior ensues. To conceive of evaluation, human activity that it is, as being set apart from all this is reckless, though not unexpected, given the scientific traditions which have nurtured evaluation research.

THE "OBJECTIVITY" OF RESEARCH AND AVERSION TO POLITICS

Defining evaluation research as political contradicts the older, more traditional view of research as strictly objective. As Moursund explains the traditional view, ". . . there is an intensive effort to maintain objectivity - to separate the researcher from his subject matter" (Moursund, 1973, p. 14). In fact, "the ethos of social science is the search for 'objective' truth" (Myrdal, 1969, p. 1). Objectivity exists as a concept in contrast to subjectivity. Rarely does a discussion of one appear without the other offered as counterpoint. Simon's basic text on research methods illustrates: "Scientific communication must be objective, rather than subjective, in both the words and the concepts used. 'Subjective' here means those thoughts that are inside one person's head and that are unavailable for checking by other people. 'Objective' here means those statements that are public and checkable" (emphasis in original, Simon, 1978, p. 23). Myrdal challenges the assumption of objectivity in social research, citing tradition, environment, and personality as three sources unavoidably influencing a researcher (Myrdal, 1969, p. 3). He would have no difficulty considering evaluation research as political.

"Indeed, no social science or particular branch of social research can pretend to be 'amoral' or 'apolitical.' No social science can ever be 'neutral' or simply 'factual,' indeed not 'objective' in the traditional meaning of these terms. Research is always and by logical necessity based on moral and political valuations, and the researcher should be obliged to account for them explicitly" (Myrdal, 1969, p. 75).

Politics and subjectivity are factors at every stage of any research. Because of the historical tradition out of which evaluation research grew, researchers demonstrate an aversion to politics and a lack of awareness of their own subjectivity in the inquiry process. To admit fully to political motives places one's credibility as a rigorous, neutral researcher into question. "Values, as such, have no place in the traditional research scheme - or, perhaps more accurately, the supreme value is the acquisition of knowledge to which all else is secondary" (Moursund, 1973, p. 14). A quote from the Block Grant study, reported elsewhere in this issue of Evaluation and Program Planning, captures the researchers' aversion to politics even though they claim to be "cognizant of the significant political factors" (Iutcovich & Iutcovich, 1987):

"To overcome this problem [of having no control over how evaluation research results are used by client systems], one might argue that evaluation researchers need to acquire positions of power and authority in order to control our destiny. Of course, many would argue against this since such positions would jeopardize our credibility and ability to conduct impartial, objective evaluations. And, furthermore, we all know that power corrupts, thus we would probably end up no different from those who we claim are presently interfering with our proper conduct of evaluations. Besides, who is to say that social scientists have a corner on the market when it comes to 'morality'?" (Iutcovich & Iutcovich, 1987)

Obvious from this quote are the researchers' subjective reactions to the politics of evaluation. Feelings of powerlessness and frustration fuel a self-righteous tone and rationalization. "We researchers are above politics," they seem to say. As will be discussed later in this article, these same researchers engaged in extremely political acts at every stage of their evaluation. Had they not been operating out of traditional assumptions about research and attempting to "conduct an impartial, objective evaluation," they might have been able to address the political factors in a more forthright and effective manner.

Assumptions about the nature of social science and society determine whether or not it is acceptable for program evaluation to be a political process. The value judgment against politics leads researchers to behave in certain ways with client systems at each stage of the evaluation process. The same is true of those who judge politics neutrally, as a given to be managed during an evaluation. They behave in different ways, but ways equally driven by their assumptions about social science and society.

Burrell and Morgan (1979) make a significant contribution to understanding the various paradigms under which social scientists conduct research. They identify two broad dimensions along which debate in social science has developed: the objective-subjective dimension and the regulation-radical change dimension. From these dimensions, they form a model of paradigms. The objective-subjective dimension analyzes assumptions about the nature of social science; the regulation-radical change dimension analyzes assumptions about the nature of society. The regulation view emphasizes cohesiveness: ". . . the basic questions which it asks tend to focus upon the need to understand why society . . . tends to hold together rather than fall apart" (Burrell & Morgan, 1979, p. 17). In contrast, the radical change view of society emphasizes conflicts between interest groups. Theorists with this orientation attempt to find ". . . explanations for . . . radical change, deep-seated structural conflict, modes of domination and structural contradiction" (Burrell & Morgan, 1979, p. 17). Society has great potential but is not meeting basic human needs. When formed into a two-by-two diagram, these dimensions suggest four theoretical paradigms for conducting research: functionalist, interpretive, radical humanist, and radical structuralist (see Figure 1). Researchers operating under a functionalist paradigm take an objectivist approach to social science and view society in terms of regulation. Researchers operating under an interpretive paradigm take a subjectivist approach to social science and view society in terms of regulation. Researchers operating under a radical humanist paradigm take a subjectivist approach to social science and view society in terms of radical change. Researchers operating under a radical structuralist paradigm take an objectivist approach to social science and view society in terms of radical change.

Figure 1. (Two-by-two diagram: the regulation-radical change dimension crossed with the subjective-objective dimension, yielding the four paradigms.)

Clearly, evaluation research is firmly located within the functionalist paradigm. Not only are objectivist approaches favored, but regulation continues to be the raison d'etre for evaluation research. Even shifts toward an inquiry mode which facilitates organizational learning do not contradict a primary function of evaluation as being change for the better, i.e., toward greater effectiveness. Functionalists assume an integration theory of society (Burrell & Morgan, 1979, p. 13) in which stability, functional coordination, and consensus predominate. Politics and power tend to play a minimal role in such a view except as an integrative force. Conflicts of interest go unacknowledged, as do dysfunctional aspects of power differentials. Consequently, aversion to politics is embedded in "objectivity." To a great extent, denial of politics is enmeshed with the functionalist paradigm.

This objectivist, unitary view has not served evaluation research well. Ironically, evaluation researchers have often found themselves in the position of defending themselves against accusations from "pure" functionalists. Simply because evaluation research is "applied," pure functionalists bristle at the introduction of subjectivism into the research process, an automatic implication of applied, field-based methodologies. Indeed, formative evaluators, while still clearly functionalist by nature, borrow from the interpretive framework. Formative evaluation assumes that programmatic life "is an ongoing process, sustained and 'accomplished' by social actors" (Burrell & Morgan, 1979, p. 196). These social actors have data important for an evaluation, which researchers collect, analyze, and feed back to these same people for the purpose of programmatic improvement and change.

These data and methods are subjectivist. And the interpretive paradigm offers decades of methodological developments for collecting and analyzing subjective data. The data collection-analysis-feedback-change cycle in formative evaluation closely resembles action research, and it is here that we begin to see the link with yet another theoretical paradigm. Those who use some form of action research ". . . as a basis for their analysis of organizational situations usually do so in recognition of the fact that any social situation is characterized by a plurality of interests" (Burrell & Morgan, 1979, p. 209). Conceptually, pluralism implies conflict of interests and power differentials. Introducing pluralism into the functionalist paradigm introduces politics and power into the research process. This introduction comes from influences in the radical structuralist paradigm. Pluralism assumes political behavior. Rather than being judged negatively, political behavior and power struggles are viewed as normal dynamics of society. Instead of avoiding and denying, the researcher views politics as a normal dynamic of a program or organization. However, unlike the radical structuralists, functionalists who recognize pluralism consider effective dealing with political behavior as having a harmonious effect.


Conflict is seen as a force toward integration, consensus, and cooperation for the good of the whole. Pluralistic functionalists believe that conflict is resolvable. (This, after all, is the basis of democracy.) The evaluation researcher so inclined considers political behavior a variable to be addressed from the start of the research process to the finish. The purpose for doing so is to increase the effectiveness of the evaluation and any decisions which might result. Thus, the researcher also engages in political behavior, something basic to all people. Myrdal summarizes this idea: "As social scientists, we are deceiving ourselves if we naively believe that we are not as human as the people around us and that we do not tend to aim opportunistically for conclusions that fit prejudices markedly similar to those of other people in our society" (Myrdal, 1969, p. 43). So, evaluation research methodologies which deal successfully with politics borrow an appreciation of subjectivism from the interpretive paradigm and an appreciation of pluralism from the radical structuralist paradigm. More than just a theoretical stretch, subjectivism and pluralism hold definite implications for methodology. These methodological implications further dictate how the researcher behaves with the political dynamics in the client system.

EVALUATION APPROACHES WHICH DEAL WITH POLITICS

Both the stakeholder approach and the case study research design reflect an appreciation for pluralism and for subjectivism in evaluation research. The former builds pluralism into the evaluation process and the latter eschews pre-judging any organizational phenomenon, including political behavior. As such, these alternatives to traditional, objectivist orientations offer some clear guidelines for addressing political issues in evaluation research. Administrators initiating an evaluation might do so in order to further their own political agendas or in an attempt to diminish the role of politics in decision making. As we have been arguing, politics must be assumed in any program evaluation. Marshall lists those general situations in which this is true: "Where intergovernmental relations are affected, where goals include social action and reform, where money and careers are at stake, evaluation may be utilized more for political reasons than for decision-making" (Marshall, 1984, p. 254). This list elicits a grin: doesn't that cover almost any situation in which a program evaluation would be initiated?

Bryson and Cullen (1984) provide a simple, yet profound, definition of those evaluation situations likely to be "politically difficult":

"We argue that the degree of agreement on goals is synonymous with the political difficulty of an evaluation situation. We argue further that political difficulty increases as the number of groups involved increases and as the degree of value agreement among them declines" (Bryson & Cullen, 1984, p. 270). It was precisely for such situations that the stakeholder approach was developed.

Much has been written about this pluralist methodology since it was first developed in the mid-1970s. Involvement of relevant individuals, groups, and organizational units in the design and conduct of the evaluation characterizes the stakeholder approach. "Stakeholders are individuals, or groups, who have a direct interest in the program being evaluated" (Lawrence & Cook, 1982, p. 327). This research approach assumes pluralistic interests. In reviewing the decade-long history of stakeholder evaluations, Weiss reports that the National Institute of Education, sponsor for and advocate of early explorations, ". . . seems . . . to have expected the stakeholder approach to reduce the level of conflict in decision making about the program. Multi-lateral participation in the evaluation process would help the several groups involved in the program to develop mutual understanding and to appreciate one another's viewpoints" (Weiss, 1983, p. 12).


Some authors have complained that multi-level representation in stakeholder groups increases conflict and requires all parties to engage in more frequent, detailed, and affect-laden interactions than is perhaps necessary (Cohen, 1983; Gold, 1983; Murray, 1983). But "political difficulty" is just that. Bringing differences to the surface where they can be dealt with openly increases the chance for resolution: a rising above the "backstage" of gossip, power plays, and other forms of political maneuvering. The stakeholder approach ". . . represents a recognition of the political nature of the evaluation process. Unlike early formulations of the mission of evaluation, it does not cast evaluation as the impartial, objective judge of a program's worth" (Weiss, 1983, p. 11). Instead of being ignored or denied, the informational needs of stakeholders are built into the overall objectives and design of the evaluation. Since stakeholders are the prospective users of the evaluation results, their active participation in the evaluation process increases the likelihood of implementation.

There is some indication that case study research is the most compatible design for use with the stakeholder approach (Bryk, 1983). Lawrence and Cook, evaluators who advocate the use of surveys with stakeholder groups, lay the groundwork for a rationale in favor of a case study approach. They emphasize the importance of a "pre-evaluation effort" as a way to understand the total context in which the evaluation will take place: ". . . the environment within which the evaluation is carried out can affect, in negative as well as positive ways, the conduct of an evaluation, and therefore the results obtained from it. Yet, evaluators all too often ignore the political and operational dynamics of the program environment, and of the evaluation itself" (Lawrence & Cook, 1982, p. 328). The case study approach offers the flexibility of design and methodology to address this ambiguous and complex situation.

Yin has written extensively on this approach. His definition is especially illuminating: "A case study is an empirical inquiry that: investigates a contemporary phenomenon within its real-life context; when the boundaries between phenomenon and context are not clearly evident; and in which multiple sources of evidence are used" (Yin, 1984, p. 23). Further, "case studies are the preferred strategy when 'how' or 'why' questions are being posed" and "when the investigator has little control over events" (Yin, 1984, p. 13). Clearly, each of these conditions characterizes a typical evaluation situation.

By definition, program evaluations take place at the program's site. Distinguishing between the evaluating activity and the program's dynamics is practically impossible (and, as we have argued, theoretically undesirable). Other boundaries, equally confusing, include service delivery from administration, central from outlying units, program from funding agencies, program from other service deliverers, original services from supplemental services, etc. Evaluation questions frequently fall into the "how" and "why" genre: "How should we restructure to be more effective?" "Why aren't we getting the clients we want?" "How has our treatment affected the clients?" "Why are we having a particular difficulty?" These sorts of questions require creativity in identifying multiple sources of data so that investigators feel adequately equipped to answer. Finally, and most obvious, behaviors relevant to a program evaluation rarely can be manipulated in any meaningful way.

Within the case study research design, a variety of methodologies may be employed depending on the informational needs of the stakeholders. Case study research does not automatically imply qualitative data collection and analysis methods (Lawrence & Cook, 1982; Yin, 1984); although, a "qualitative, illuminative, ethnographic, process-oriented evaluation" seems more amenable to addressing politically sensitive data (Weiss, 1983; Marshall, 1984). This sensitivity grows from the essentially value-laden quality of political behavior. Methodologies which do not involve the researcher directly with the people whom she or he is evaluating lose the sensitivity to discern value-laden data. Cohen, in comparing a stakeholder approach using in-depth interviews with other methodologies, reports, "In practice, this meant that a single evaluation contract would become a vehicle for managing and expressing far more values than it had in the past" (Cohen, 1983, p. 74). The link between value-laden data and political behavior suggests an argument in favor of subjectivist approaches to data collection and analysis. Such approaches assume that the researcher does not know in advance what he or she will find in an organization. The full complexity unfolds as the research progresses. All data become relevant, thus increasing the validity of the research. Politics is not ruled out initially nor is it given more weight than warranted. Marshall (1984) states firmly, ". . . the case study methodology emphasizes processes, cross-organizational perspectives, and descriptive findings with clear policy implications . . . [for managing] organizational and political tensions in evaluation" (Marshall, 1984, p. 265).

Both the stakeholder approach and the case study design require researchers to engage in behaviors explicitly for the management of politics. An evaluation process is not a stakeholder evaluation unless multiple groups are involved at the outset in dealing with the four problems addressed by a research design: "what questions to study, what data are relevant, what data to collect, and how to analyze the results" (Yin, 1984, p. 24). While "the inclusion of multiple groups in the evaluation process is an attempt to redress the inequitable distribution of influence and authority" (Weiss, 1983, p. 73), the primary purpose is more practical. Lawrence and Cook discuss the results of early involvement: "Knowledge of stakeholder perceptions and information needs affected the subsequent course of the study in four ways: (1) improving evaluators' understanding of the program; (2) indicating stakeholder expectations of the evaluation; (3) permitting explanation of the evaluation, as then envisioned, to important constituencies; and (4) guiding evaluation objectives and proposed study design" (Lawrence & Cook, 1982, p. 332). Thus, early involvement speeds up entry and provides an initial assessment of the evaluation's context. Along with inclusion of multiple stakeholders, aligning the evaluation questions with stakeholders' needs smooths the way for negotiating a compromise research design that satisfies the needs of conflicting groups (Marshall, 1984, p. 255).


Weiss calls this identifying "the key decision pending" (Weiss, 1972, p. 16), while Marshall calls it identifying "a precise policy and management question" (Marshall, 1984, p. 257). These themes of involvement and alignment repeat themselves in iterations throughout the evaluation process. Cycles of data collection, analysis, and feedback, typical of field studies, provide several points at which opportunities for more involvement and alignment appear. As long as stakeholders participate in decision making relevant to the research design, the spirit of pluralism will be served. When data collection methodologies, including sampling techniques, appropriate to a case study design are used, the spirit of subjectivism will be served. While critical of the stakeholder approach, Murray (1983) nonetheless ". . . learned that the stakeholder approach is a useful device for getting the leading players to cooperate, for understanding a program intimately, for attracting attention to interim evaluation findings, and perhaps even for getting decision makers to take evaluation findings into account when they make decisions" (Murray, 1983, p. 59). Marshall, an advocate of case study evaluation research, finds that the quality of data produced by this method ". . . reduces the political impact of negative findings, allows the sponsoring agency to avoid crises, and applies directly to policy debates" (Marshall, 1984, p. 255).

POLITICS IN ACTION: THE CASE OF KURC AND THE COMMUNITY DEVELOPMENT BLOCK GRANT AGENCY EVALUATION

The evaluation of six human service agencies receiving Community Development Block Grant monies is an excellent example of what can go awry when the political dynamics of the evaluation research process are inadequately addressed. The situation presents a classic dilemma in which multiple layers of interest groups meet dwindling resources. Additionally, these dilemmas surfaced even though the researchers initiated the work with awareness of the political implications.

The Community Development Block Grant article unfolds a sequence of highly political snafus occurring during the evaluation conducted by a private research corporation, identified as KURC (Keystone University Research Corporation). KURC's client, a city department known as the Office of Policy Planning and Management (OPPM), contracted to evaluate the agencies it funded. The evaluation was intended to provide a base of knowledge from which OPPM could make informed decisions regarding anticipated cuts in allocations. The scenario included interface with the watchful Federal government and a hovering advisory board, the Community Development Commission (CDC). CDC membership included administrators from some of the funded agencies.

The authors mention that many of the CDC representatives were black and Latino, a significant fact. Without adding much more complexity, the environmental picture of this evaluation research project is illustrated in Figure 2. Each group in the diagram has vested interests with at least one other constituency. For example, it is obvious that the city needs to maintain its relationship with HUD: a clear case of resource dependency with one's larger environment.

Figure 2. (Environmental context of the evaluation; the six funded agencies appear as A1 through A6.)


With resources and regulations changing rapidly, there were important reasons for wanting to respond to those needs. The picture becomes more complex when it is revealed that a Mayoral primary was scheduled at the same time the recommendations from KURC were to be revealed. The Mayor was concerned with maintaining his relationship with minorities; consequently, there was great concern with the report. KURC itself adds to the multiplicity of interests. They identify themselves as having a liberal philosophy, a social science research orientation, and a drive to anchor their business and make it economically viable. Therefore, we depict the overall research context in Figure 3. While OPPM is the client, any information shared with them affects other areas of the environmental context. KURC's contract with OPPM enmeshed them in a system where all contacts have far-reaching significance.

Figure 3. (Overall research context: the city government, KURC, the minority community, and the agencies A1 through A6.)

The authors of the CDBG article aver their awareness of the political dynamics. Nevertheless, they proceed, apparently without caution. In the process, several politically oriented choices, made by the researchers, stand out:

• KURC reframed the clients' question at the onset of the project while discussing the purpose of the evaluation.
• KURC's choice to present the research design approved by HUD and OPPM to the CDC members indicated a choice to disallow CDC's influence in shaping the project, thus diminishing their probable ownership.
• KURC allowed OPPM to push through the research despite resistance from CDC.
• KURC's description of the findings carried recommendations for managing the allocation problems.
• KURC agreed not to present their own findings lest some kind of uproar occur.
• KURC appealed to the Mayor when the CDC refused to authorize payment.
• KURC agreed to a closed meeting to review the findings.

Clearly, these evaluators behaved politically. If these political dynamics had been conceived of as part of the design and implementation process, and had been thus addressed, we predict that an evaluation more useful to the many stakeholders in these programs would have resulted. While the authors of the CDBG article do not detail all the steps in their process, some information can be inferred. Their political "choice points" may be examined in the light of the evaluation research sequence, along with some alternatives for addressing the politics of the situation. We use Marshall's five stages in the evaluation research process (Marshall, 1984).

Stage One

A precise policy and management question is identified. The inevitable politics of this step could have been addressed if the researchers had:

• formed an evaluation coordinating committee comprised of representatives from OPPM, CDC, and staff from some of the programs.
• consulted with someone from each of the stakeholder groups prior to presenting a completed design.
• developed a formal, written document covering all aspects of the evaluation and then circulated it widely for comments and suggestions.

Stage Two

Identify the evaluation's purpose, audiences, and the biases, politics, constraints, and needs of the audiences.


The politics of this phase could have been addressed through the following:

• involving staff groups from each of the agencies in developing a picture of the organization as it was originally designed and as it is now, a past state and a current state.
• conducting structured interviews with a sample from OPPM, CDC, each of the agencies, and someone from the larger community.
• having a sub-group from the constituent groups generate a list of biases or constraints to doing the evaluation and posting the list for easy and frequent reference.
• using on-site observations.

Stage Three

Identify the appropriate research design and the essential human and monetary resources. This phase of the evaluation work could be advanced through:

• an informal polling of some "experts" external to the project to get some ideas on the most appropriate design, given the data about the organizations.
• providing a stakeholder group with information on the options which best fit the organization, having them discuss it and make recommendations to the evaluation team.
• developing a matrix of how the designs fit with the particulars of each program, then making design decisions based on the organizational idiosyncrasies as well as on the traditional features explicated in a study.
• writing a letter (this may be an iterative process) which spells out the resources, be they in-kind, tacit, or concrete.


Stage Four

Manage the reporting of the findings. The politics of this step can be fairly unwieldy. Suggestions for managing them include:

• clearly and repeatedly discussing the criteria being used.
• structuring meetings with representatives from each of the stakeholder groups during which participants collectively attach meaning to the data which have been collected.
• allowing periodic, informal reviews of drafts before they are formally presented.

Stage Five

Evaluators participate in policy making to increase the chances of utilization of the findings. This can be addressed if the evaluators are willing to:

• check periodically to make certain the evaluation is relevant to the decisions that need to be made.
• provide a final report of the process and findings.
• develop an alliance of key parties who would be likely to support the changes following the evaluation. With respect to the CDBG study, OPPM, CDC, and perhaps someone with influence in the Mayor's office could be included in this group.

These suggestions indicate how the CDBG evaluation could have unfolded in quite a different manner, had political considerations been made at each stage of the process. Political considerations are mercurial in character. Therefore, the conception of a process where the context and players in the political arena change according to the "work agenda" of the evaluation is important. Iterations of involvement and alignment are crucial.

GUIDELINES FOR MANAGING POLITICS IN EVALUATION RESEARCH

There can be no recipe for perfect management of political dynamics in the conduct of evaluation research activities. There are, however, some guidelines which steer the process in the direction of evaluations which are useful for decision making versus evaluations which gather cobwebs in agency drawers.

Guideline One

Be inclusive rather than exclusive when involved in any stage of the evaluation research process. Inclusiveness extends to the definition of environments relevant to the organization, as well as to who is involved in discussions at any of the phases identified by Marshall or others. It is better to broadly include those who might be interested in some aspect of the program under study than to rule out someone who later catches the researchers unaware and unprepared.

In the case of CDBG, if the minority community had been construed as a stakeholder early on, it is likely that some of KURC's problems could have been avoided. Further, it is wise to consider evaluation research as a kind of negotiation process. With multiple preferred outcomes, conflicting views of what the organization's goals may be, and radically different views on how to proceed, researchers who accept roles beyond that of chronicling historical data must also negotiate. Consensus development, mediation, and identification of several options for achieving objectives are important skills.

Guideline Two

Maintain flexibility in the conception and implementation of the process. This, of course, flies in the face of the previously discussed traditionalist approaches to research.


While flexibility may be experienced as wavering towards a particular bias, this is not the case. Flexibility is important because the key players may change along the way. Additionally, the game and its rules are likely to change at many points. The adoption of rigid, theoretical blinders militates against a rich, full analysis of what is occurring within the organization.

Guideline Three

Make a point to initiate discussions about the politics of the evaluation process with all stakeholders at each stage.

These discussions frame politics as normative and not as something to deny or avoid. Explicit discussions of politics also provide opportunities to design strategies with a high probability of success. Awareness and acceptance of political dynamics go a long way towards enriching the outcomes of evaluations. Most evaluators have values relating to social change. Therefore, understanding and embracing the political dynamics of this process is imperative. The result may be research processes which improve the quality of life in organizations and in the larger social environment.

REFERENCES

BRYK, A. S. (Ed.). (1983). Stakeholder-based evaluation. San Francisco, CA: Jossey-Bass.

BRYSON, J. M., & CULLEN, J. W. (1984). A contingent approach to strategy and tactics in formative and summative evaluation. Evaluation and Program Planning, 7, pp. 267-290.

BUNDA, M. A. (1985). Alternative systems of ethics and their application to education and evaluation. Evaluation and Program Planning, 8, pp. 25-36.

BURRELL, G., & MORGAN, G. (1979). Sociological paradigms and organizational analysis. Portsmouth, NH: Heinemann Educational Books.

CARNALL, C. A. (1980). The evaluation of work organization change. Human Relations, 33, pp. 885-916.

COHEN, D. K. (1983). Evaluation and reform. In A. S. Bryk (Ed.), Stakeholder-based evaluation. San Francisco, CA: Jossey-Bass, pp. 73-82.

CROZIER, M. (1964). The bureaucratic phenomenon. Chicago: University of Chicago Press.

CYERT, R., & MARCH, J. (1963). A behavioral theory of the firm. Englewood Cliffs, NJ: Prentice-Hall.

DAFT, R. L., & WEICK, K. E. (1984). Towards a model of organizations as interpretation systems. Academy of Management Review, 9, pp. 284-295.

DEMING, W. E. (1975). The logic of evaluation. In E. L. Struening & M. Guttentag (Eds.), Handbook of evaluation research. Beverly Hills, CA: Sage Publications, pp. 53-68.

GOLD, N. (1983). Stakeholders and program evaluation: Characterizations and reflections. In A. S. Bryk (Ed.), Stakeholder-based evaluation. San Francisco, CA: Jossey-Bass, pp. 63-72.

GRAY, B., & ARISS, S. S. (1985). Politics and strategic change across organizational life cycles. Academy of Management Review, 10, pp. 707-723.

GUREL, L. (1975). The human sides of evaluating human service programs: Problems and prospects. In E. L. Struening & M. Guttentag (Eds.), Handbook of evaluation research. Beverly Hills, CA: Sage Publications.

IUTCOVICH, J. M., & IUTCOVICH, M. (1987). The politics of evaluation research: A case study of Community Development Block Grant funding for human services. Evaluation and Program Planning, 10, pp. 71-81.

KIRKHART, K. E. (1985). Analysing mental health evaluation: Moral and ethical dimensions. Evaluation and Program Planning, 8, pp. 13-23.

LAWRENCE, J. E. S., & COOK, T. J. (1982). Designing useful evaluations: The stakeholder survey. Evaluation and Program Planning, 5, pp. 327-336.

MARSHALL, C. (1984). The case study evaluation: A means for managing organizational and political tensions. Evaluation and Program Planning, 7, pp. 253-266.

McCORCLE, M. (1983). Bridging evaluation research and planned organizational change. Unpublished doctoral dissertation. New Haven, CT: Yale University.

McLEMORE, J. R., & NEUMANN, J. E. (1984). The case of Shelhot: Evaluation research as an intervention into conflict at organizational interfaces. Working paper. Cleveland, OH: Case Western Reserve University.

MOURSUND, J. P. (1973). Evaluation: An introduction to research design. Monterey, CA: Brooks/Cole Publishing Company.

MURRAY, C. A. (1983). Stakeholders as deck chairs. In A. S. Bryk (Ed.), Stakeholder-based evaluation. San Francisco, CA: Jossey-Bass, pp. 59-62.

MYRDAL, G. (1969). Objectivity in social research. New York: Pantheon Books.

NUNNALLY, J. C. (1975). The study of change in evaluation research: Principles concerning measurement, experimental design, and analysis. In E. L. Struening & M. Guttentag (Eds.), Handbook of evaluation research. Beverly Hills, CA: Sage Publications, pp. 101-137.

PFEFFER, J. (1981). Power in organizations. Marshfield, MA: Pitman Publishing.

SIMON, J. L. (1978). Basic research methods in social science: The art of empirical investigation (2nd ed.). New York: Random House.

SJOBERG, G. (1975). Politics, ethics, and evaluation research. In M. Guttentag & E. L. Struening (Eds.), Handbook of evaluation research, Volume 2. Beverly Hills, CA: Sage Publications.

WEBER, M. (1947). Theory of social and economic organization. New York: Free Press.

WEISS, C. H. (1972). Evaluation research. Englewood Cliffs, NJ: Prentice-Hall.

WEISS, C. H. (1983). The stakeholder approach to evaluation: Origins and promise. In A. S. Bryk (Ed.), Stakeholder-based evaluation. San Francisco, CA: Jossey-Bass, pp. 3-14.

WISE, R. I. (1980). The evaluator as educator. In New Directions for Program Evaluation. San Francisco, CA: Jossey-Bass, pp. 11-18.

YIN, R. K. (1984). Case study research: Design and methods. Beverly Hills, CA: Sage Publications.