THE EVALUATOR AS PROGRAM CONSULTANT

MITCHELL FLEISCHER
Indiana University of Pennsylvania

Evaluation and Program Planning, Vol. 6, pp. 69-76, 1983
Copyright © 1983 Pergamon Press Ltd. Printed in the USA. All rights reserved. 0149-7189/83 $03.00 + .00

ABSTRACT

This paper presents a model for the role of the evaluator in which the evaluator acts as a Program Consultant who operates in three domains: Program Development, Planned Change, and Evaluation Technology. This permits the performance of evaluations that have greater validity and utility. The model is presented both from the theoretical perspective of linkage and from the context of the evaluation of a Case Management program operated by a Community Action Agency. Each of the domains is discussed with examples from the Case Management Evaluation. The paper concludes with a discussion of the contexts in which the model is best applied.

INTRODUCTION

This paper presents a model for the role of the evaluator in which the evaluator acts as a Program Consultant. A Program Consultant is defined as one who operates in three domains: Program Development, Planned Change, and Evaluation Technology. Operating in all three areas permits the performance of evaluations that can have both higher quality and greater utilization than is possible operating under a more narrow definition of the evaluator's role. The model is based on two themes found in both the evaluation literature and the knowledge utilization literature. One theme is the concern with increasing the utilization of research in general, and evaluation research in particular. The second theme is the concern of evaluators to increase their control over the program environment so as to improve the quality of their evaluations. Although this concern is often thought of as a concern with improving the validity of a research design, it also includes such problems as the accurate collection of data and access to records. While few would argue that utilization and quality in evaluations are not desirable, they are also very difficult to achieve. In fact, the impression is sometimes given that the two are, to some extent, incompatible. The reason for this incompatibility is that utilization is typically enhanced by ceding influence over the evaluation to the likely users of its results (Patton, 1978). Clearly, the evaluator's influence is diminished to the extent that others gain it. Some authors, such as Patton, suggest that this loss of control is desirable, while others, such as Tornatzky (1979) and Fairweather (1980), suggest that the problem is so serious that evaluators need to take on a role of "scientist/administrator" that is in complete control of the program. The model of evaluator as Program Consultant presented here is one in which an optimal combination of both utilization and evaluator control can be achieved. The model will be presented from both a theoretical and a practical context. The practical context will be drawn from an evaluation recently completed at a rural Community Action Agency (CAA). The program that was evaluated involved providing the assistance and support of paraprofessional case managers to low income families. The general goals of the program were first, to insure that the families received the appropriate assistance and training from the full range of human service agencies in the community, and ultimately, to increase the extent to which these families were self-sufficient. The evaluation was conducted as a randomized experiment, accompanied by an in-depth process analysis.

This is a revision of a paper presented at the Evaluation Research Society, Baltimore, MD, 1982. Requests for reprints should be sent to Mitchell Fleischer, Department of Psychology, Indiana University of Pennsylvania, Indiana, PA 15705.

ROLE OF THE EVALUATOR

Traditionally, the evaluator's role has been viewed as a relatively narrow one. The evaluator is seen as a technician or specialist whose sole purpose is to design and conduct an evaluation, with an occasional attempt to follow up after the evaluation is over (or a report turned in) to improve utilization. This is the role presented in most textbooks on evaluation, including one of the earliest (Suchman, 1967), one of the most widely used (Weiss, 1972), and one of the more recent (Posavac & Carey, 1980). Over the past few years a broader role has been proposed for the evaluator. This has been most clearly stated by Atkisson, Brown, & Hargreaves (1978), who suggest that the evaluator needs to be an "integrator, coordinator, and decision maker" (p. 82). Davis & Salasin (1975) have proposed that the evaluator should act as a change consultant. Others who have recommended a broadening of the evaluator's role include Tornatzky (1979), Morell (1979), Patton (1978), and Alkin, Daillak, & White (1979). Essentially, this position is that while the evaluator must take responsibility for the technical aspects of evaluation, that alone is simply inadequate. In order for evaluations to be performed and utilized effectively the evaluator needs to do much more than design good measures, carry out a good design, or write a readable and useable report. In fact, evaluators typically need to overcome resistance to carrying out the evaluation, resistance to filling out data collection instruments, and problems of programs "disintegrating" before their very eyes, among others. What are the elements of this broader role, and what are the consequences of taking it?

ELEMENTS OF THE BROADER ROLE

One basis for this broader role has been suggested by Atkisson, Brown, & Hargreaves (1978) in their discussion of Havelock's (1973) notion of linkage. A linkage role is one that facilitates the communication of knowledge between a user system and a resource system. The need for such linkage roles in evaluation has been emphasized by Caplan (1979), who describes the evaluator as a translator between the "two communities" of "research" and "practice." Havelock describes nine linking roles, or different ways that linkage can be provided. Only four of these are relevant to the present discussion.

The first of Havelock's roles is that of "Conveyor." The Conveyor is a relatively passive role in which information is passed along in a form usable to the consumer. Havelock gives examples of this role, such as a science reporter or an agricultural extension specialist. The second of Havelock's roles is the "Consultant." The Consultant takes on a variety of tasks that will directly aid the user system in its use of information. Thus, the consultant might assist in the identification of resources, act as a group facilitator, help with adaptation, or provide advice about process. The third role is the "Knowledge-Builder as Linker." In this role the "creator" of knowledge takes direct action to act as an advocate for the use of "his/her" knowledge. The final role relevant to this discussion is that of the "Leader," who creates linkage either by example or by directing knowledge use. Each of these four roles can be used by the evaluator. In particular, there are two sets of circumstances that provide the opportunity for their use.

Linkage in the Evaluation Task: The Evaluator as Change Agent
As noted above, the typical view of the evaluator's role in the evaluation task is a technical one. However, there are several ways this can be broadened through the use of Havelock's linkage roles. Primarily, these involve the evaluator acting as a "change agent." Change agent has been defined as "any individual or group operating to change the status quo in a client's system such that the individuals involved must relearn how to perform their roles" (Zaltman & Duncan, 1977, p. 186). Such a definition implies the assumption of one or more linkage roles, since the necessity to "relearn" implies the transfer of knowledge or information. The evaluation task involves change in two ways. First, implementation of an evaluation is in itself a serious change in organizational practice. It can be seen as a linkage or knowledge transfer task in that knowledge about evaluation needs to enter and be used by the organization (user system). Thus, all evaluators are acting as change agents, and can probably do so more effectively by becoming aware of the various linkage roles. Second, the utilization of evaluation results often involves changes in organizational structure and/or practice. This also involves the obvious linkage problem of transferring knowledge from the evaluator (resource system) to the line and administrative components of the organization (user).

Consultant. The evaluator as change agent most often acts in the linkage role of Consultant. For example, Patton's (1978) description of "utilization-focused evaluation" requires the evaluator to act as a consultant at such times as the development of goals and objectives and the construction of measures. This is particularly evident in his discussion of "the personal factor." However, the consultant role really goes beyond such obvious evaluation tasks. Evaluations involve such activities as convincing line personnel to collect data, creating a new evaluation unit, moving office space, etc. The problem-solving, facilitation, and process orientation of the consultant can be extremely important under these circumstances (Zeigenfuss & Lasky, 1980).

Knowledge-Builder as Linker. In this linkage role the evaluator acts as an advocate for the utilization of evaluation results. This can include active persuasion attempts or, more subtly, using a variety of communication techniques to insure a fair hearing for the results. Davis and Salasin (1975) discuss this role in depth in their discussion of the evaluator as "change consultant."

Leader. The evaluator can act in the "Leader" linkage role when he/she is in the functional position of "scientist-administrator" (Tornatzky, 1979) or "administrator-evaluator" (Atkisson et al., 1978). The evaluator in this position is able to direct subordinates to implement an evaluation plan or to use particular results. In such a position, the evaluator is also able to lead by example, through personal participation in the evaluation or by personal use of results.

Linkage in Program Development

In order to understand the role of the evaluator in the program development process it is necessary to have a model for that process. For convenience, a simplified process model will be proposed here, based on several sources (Fairweather & Tornatzky, 1977; Lewis & Lewis, 1983) as well as the author's experience. Program development can be seen as having six stages: Needs Assessment, Initial Planning, Later Planning, Implementation, Evaluation, and Redevelopment. The potential role of the evaluator will be discussed under each stage.

Needs Assessment. The role of the evaluator in needs assessment has been very widely recognized. Most recent evaluation texts include chapters on needs assessment techniques (Atkisson, Hargreaves, Horowitz, & Sorenson, 1978; Posavac & Carey, 1980; Rossi, Freeman, & Wright, 1979). Three of Havelock's linkage roles are useful at this stage. The evaluator can act as a Conveyor by reviewing the scientific literature on the population in question and reporting back to agency personnel in a form appropriate for them as users. The evaluator can act as a Consultant by working with agency personnel to focus the needs assessment questions into areas that are of interest to them and within their realm of action. Finally, the evaluator can assume the role of Knowledge-Builder as Linker by acting as an advocate for the use of data collected during the needs assessment.

Initial Planning. Several activities take place during this stage: Goals are set for the program; a search is made for possible alternatives to meet those goals; modifications are made in the chosen alternative; and a proposal is prepared for approval or funding. The evaluator can take on at least two linkage roles during this stage. First, as a Conveyor, the evaluator can pass along (and translate) information about evaluations of the alternative programs being considered that should be invaluable in making decisions about them. In addition, since many, if not most, evaluators have experience and training in some discipline (e.g., psychology, public health, etc.) other than evaluation, they are able to convey knowledge from that discipline that may not be available to other members of the planning team. In the second role, as a Consultant, the evaluator can provide assistance by doing one of the things evaluators usually do well: stating program objectives and operations in clear, concise, operationalized terms. In this role the evaluator also can help to see that the evaluation process is "built in" from the beginning, rather than being added on in the middle.

Later Planning. This is a stage of more detailed planning that takes place after approval/funding has been received. It includes such activities as the detailed development of procedures and staff training. The evaluator's primary linkage role here is as a Consultant, to insure that evaluation procedures are integrated fully into the detail of program operation. The evaluator can also act as a Consultant during this stage to insure full participation by all relevant decisionmakers in the evaluation. For example, decisionmakers can be involved in the development of data collection instruments if that would insure their greater use of the data (Patton, 1978).

Implementation. During this stage the evaluator's primary linkage role is that of Consultant. The evaluator's main function here is to monitor the program and provide feedback to management and others. This can mean either using evaluation in the "formative" sense or monitoring to keep the program "steady," so that procedures are not changed covertly. An additional function during implementation is to keep all relevant parties informed and participating in the evaluation process.

Redevelopment. From time to time, either at the end of the evaluation or during one or more formative steps, the program is reassessed and changes are made. The evaluator can assume several linkage roles here. First, as a Knowledge-Builder as Linker, the evaluator communicates the results of the evaluation and can act as an advocate for their use. Second, just as during the initial planning stage, the evaluator can assume the role of Conveyor by providing information available through his/her professional discipline. Third, as a Consultant, the evaluator acts to facilitate modifications in the program based on evaluation results and other sources of data. Finally, in the case of an "administrator-evaluator" assuming the Leader role, modifications could be directed.

THE CASE MANAGEMENT EXAMPLE

The Case Management Evaluation was conducted in a Community Action Agency (CAA). The location of the agency was a rural county of about 92,000 population, with an economic base primarily of coal mining, agriculture, and light industry. Economic conditions were deteriorating steadily during the course of the evaluation, due to the severe national recession. The agency itself was of relatively large size for the county, with about 50 employees and a budget of close to a million dollars.

The Case Management Program that was evaluated was a federally funded project designed to be a systematic extension of services already provided by the CAA. A typical service that the agency offered to low income families was the provision of information about local services, as well as formal referral to agencies providing such services. In rare instances this Information and Referral (I&R) service was expanded to include having the paraprofessional community workers of the CAA accompany the client on a visit to an agency, provide assistance with budgeting, or provide informal advice and counseling. Because of a perception that these expanded services were very effective in motivating a client to use services and become self-sufficient, a decision was made to attempt to provide them as a formal "Case Management" service and to evaluate their effectiveness.

The formal "evaluation" experience of the agency prior to the Case Management Evaluation was minimal. The agency did have a fairly good computer-based (at a university computer center) information system. However, no formal evaluations had ever been conducted of any of its programs, although the agency had been subjected to evaluation "site visits" from state and federal funding sources. The evaluator in this evaluation was a university-based consultant with a background in community psychology.

The evaluation will be briefly described here in order to provide a context for the evaluator's role. A more complete description is available from the author. The basic design was a randomized experiment, with low income families randomly assigned to receive either the new Case Management intervention or the usual one-time Information and Referral service. Client data were collected at three time points: prior to random assignment, and at 7 and 14 months after the commencement of case management. In addition, a variety of data was collected concerning implementation and intervention processes.

Linkage in Program Development

Initial Planning. During the two years prior to the Case Management Program, the evaluator, as a university faculty member, had developed a good ongoing relationship with the CAA and its staff. The evaluator had assisted the agency with several community development projects that were prepared by the agency for local governments, as well as with the establishment of its information system. Based on this long-standing relationship, the administrators and staff were able to trust the evaluator both in terms of his abilities and his intentions toward the agency. Thus, when the initial concept for the Case Management Program was raised, the evaluator was immediately asked to participate as a member of the planning team. Although the basic concept was already set, the evaluator was able to act as a Conveyor by providing input for decisions involving such things as what activities were to be included in case management, how long they would last, which types of client families would be included, and the like. In the Consultant role, the evaluator provided the focus for discussions about how the program would be evaluated. However, these decisions were not exclusively the evaluator's. All team members were involved in both discussions and decisions about the evaluation, and in fact, many changes were made in the evaluator's initial recommendations. The evaluator also helped to prepare several sections of the application for grant funding.

Later Planning. Once the program had been funded, the evaluator acted in the Consultant role and was closely involved in many of the decisions about how the program would operate. For example, the first part of the case management process (as well as of the traditional I&R service provided to the control group) was to complete a needs assessment questionnaire on each client family, detailing available resources and problem areas. The evaluator was able to assist in the design of this instrument so that it would serve the "double duty" of pre-test data collection and needs assessment instrument. The evaluator also participated in the hiring of the agency's new data processing personnel (hired specifically for the Case Management Project), and in planning the training of the new Case Managers who were to be hired.

Implementation. Although he was hired on a consulting basis, the evaluator maintained a regular presence in the agency during the course of the program. For the first several months he participated in most of the staff meetings with the Case Managers. This allowed the Case Managers to become familiar with the evaluation and the evaluator. Through these staff meetings the Case Managers were able to participate in decisionmaking about data collection and the data collection instruments. They were also informed on a regular basis about the progress of the evaluation and any findings that were available. Participation in the staff meetings served an important additional purpose for the evaluator, in that it provided him the opportunity to learn a great deal about the process of case management. Much of the meeting time was taken up with problems and "war stories," all of which provided considerable insight into what it was the case managers were doing. Often, the stories proved to be more enlightening than the case managers' written case notes.

Several problems arose during the course of the program that the evaluator, acting as a Consultant, was able to assist with. One problem, which is probably endemic to programs of this type, was that the "line" staff continually tried to change the nature of the intervention. In Case Management this meant that the Case Managers had to be frequently reminded that the ultimate purpose of the program was to increase the clients' self-sufficiency. Thus, behaviors that would make the clients dependent on the case managers had to be avoided. Although the case managers agreed with this in principle, such dependence-producing behaviors were difficult to avoid. Because the evaluator was attuned to studying the implementation of the intervention, he was able to remind both the program's administrators and the case managers about the problem and to suggest possible solutions. Another problem, which arose midway through the project, was an inability to find a sufficient number of eligible cases for the second "wave" of the project. The evaluator was able to suggest procedures by which an adequate number of clients could be found.

Redevelopment. The major conclusion derived from the program, both in an interim report and in a final report, was that, while the program was successful at increasing client use of local human service programs, it had no apparent effect on increasing self-sufficiency. Thus, it was suggested that the program be retained in abbreviated and modified form in order to meet one of the primary goals of the CAA, to make services more available to low income families. Also recommended, however, was the development of new programs that would provide greater incentives for participation, and might attempt to "empower" (Rappaport, 1981) the clients to aid their self-sufficiency. Thus the evaluator was able to act in the role of Conveyor. The evaluator then participated in the preparation of several grant proposals that would implement these suggestions. At the time of this writing, at least one of the proposals had received funding. In addition, the suggestion to maintain case management in modified and reduced form was carried out. Because of the extent of informal communication between the evaluator and program personnel, these changes and new developments were actually accomplished before the written final report was turned in.

The Evaluator as Change Agent

There were two major events that took place during the Case Management Evaluation that most clearly demonstrate the value of the Change Agent linkage roles. In addition, the evaluator was able to play a linkage role in gaining utilization.

The Revolt of the Supervisors. In order to understand this "revolt" it is necessary to have some knowledge of the organizational structure of the CAA. At the start of the Case Management Program, the CAA was organized as follows: Below the Executive Director were two units, a Community Development Unit and a Human Services Unit. Case Management was conducted within the Community Development Unit. This unit consisted of a Unit Director, several technical specialists, and the Information and Referral (I&R) Component (the location of the Case Management Program), headed by the I&R Manager. Below the I&R Manager was a supervisor, and below the supervisor were four Community Workers. The I&R Manager had once been a Community Worker under the existing supervisor. When Case Management was implemented, an additional supervisor was transferred from the other unit of the CAA, and four new community workers were hired. Two of the "old" community workers and two of the "new" community workers were chosen to act as Case Managers in the new program. Two of the Case Managers were supervised by the old supervisor and two by the new one.

Within a few weeks of the beginning of Case Management, problems began to arise in the I&R Unit that appeared to threaten not only the evaluation, but the program itself. One example of the kinds of problems that were arising occurred during the second week of the program, as the Case Managers were being trained to complete formal goal plans with their clients. The new supervisor was leading the training session. About halfway through the session several Case Managers said they would refuse to complete these goal plans, because they felt clients would refuse to sign them, and that if a signature were forced it would reduce their rapport with the clients. The old supervisor supported the three dissident case managers quite strongly, while opposed to them were the new supervisor, the evaluator, the Program Director (I&R Manager), and one (the most experienced) case manager. The issue was never satisfactorily resolved, although the program director ultimately ruled that goal plans must be completed. The only concession was that the clients could initial the plans, rather than sign them. Interestingly, it turned out that goal plans were never used on a systematic basis, in spite of repeated efforts by the program director to encourage and require their use. This type of conflict between the two supervisors, and between supervisors and program director, became increasingly frequent over the first two months of the program. Finally, the program director asked the evaluator for advice, as morale and performance seemed to be dropping to dangerously low levels.

The evaluator, acting in the linkage role of Consultant, suggested what is a fairly typical organizational change process (French & Bell, 1978). First, information should be collected about the possible problems. This then should be returned as feedback to the members of the unit for discussion. Information was collected in several ways. First, the evaluator/change consultant held discussions with the program director, the unit director, the supervisors, and several of the case managers to get their perspectives. Second, a survey feedback questionnaire was designed, similar to that used by Tornatzky et al. (1980), but oriented toward issues that seemed salient to the context of the CAA. Unfortunately, the survey feedback questionnaire revealed only that there was considerable disagreement within the unit about what the problems were. Group discussion about these results was not very productive. In addition to the efforts at feedback, another consultant was brought in to lead a Team Building meeting (Huse, 1975). This was intended to increase communication within the unit and to build a sense of team spirit. An all-day session, the Team Building meeting went smoothly, if in a seemingly superficial manner. Near the end of the meeting, however, people opened up much more. This was the result of a discussion about the relationships between supervisors and case managers. The evaluator (not very innocently) pressed what seemed to be a minor point about these relationships, and the whole meeting exploded. Fierce accusations were shouted across the room, and several individuals were in tears. Yet, a great deal of new knowledge was revealed about the problems plaguing the group. Within two weeks of the explosive team building meeting, both supervisors had been fired, and one of the case managers had quit. New supervisors were hired (one of whom was a promoted case manager) who were much more compatible with each other and with the ideals of the program, and no further problems of this type arose over the remainder of the program. While the evaluator obviously did not "resolve" the problems the program was having, he did play an important role in their resolution.

Agency Reorganization. Midway through the Case Management Evaluation the director of the CAA unilaterally decided that the agency should be reorganized. The rationale for his decision was that the I&R component had grown too large, primarily as a result of the Case Management project, and was making management of the Community Development Unit difficult. A consultant was brought in from the state Department of Community Affairs to provide aid and advice. A staff group was created to make a formal recommendation about the reorganization. The group included both unit directors, the I&R Manager, several other staff from the CAA, and the evaluator. Although the consultant from the state led the group for the first meeting, she was only able to participate in a few additional meetings of the group. As the only "outsider," the evaluator, again assuming the Consultant role, was asked to lead the group in her absence. The group finally made recommendations to the director that the agency should have three divisions, and that I&R should become a separate division, with some added responsibilities. This made the conduct of the evaluation somewhat easier, since almost all decisions could now be made by the I&R Manager (now a unit director), shortening the chain of command.

Utilization. In order to insure utilization of the evaluation findings the evaluator assumed the Knowledge-Builder as Linker role. In this role the evaluator did several things that made the link between himself as the knowledge resource and the agency as the user. The most obvious of these was the preparation of formal reports. In addition to a final report, an interim report was prepared, based on data collection that took place halfway through the program. Each report was as nontechnical as possible. The interim report was not intended as a "formative" document, although it could have served that purpose. Rather, it served to provide management with formal feedback on program operations that was used to help keep the program "on track" in terms of what the case managers were doing. In addition, the evaluator got feedback from management about what was useful to them. A less obvious form of linkage leading to utilization was the informal meetings that the evaluator held with both management and line personnel prior to preparing the formal reports. At these meetings the results were discussed so that they could be used immediately, without the delay inherent in preparing a report. The meetings also served to provide the evaluator with feedback on his interpretations of the data. During these extended discussions about the results the evaluator was able to suggest program modifications and show clearly how they related to the evaluation findings.

APPLICABILITY OF THE MODEL TO EVALUATION PRACTICE

Evaluator Skills
It should be clear from the preceding examples that for the evaluator to act as a Program Consultant, he/she needs a fairly wide range of skills. As a Conveyor the needed skills are an ability to search for relevant information and a background in some professional discipline, combined with an ability to translate that information from scientific jargon into practical language. In the Consultant linkage role the skills needed are in four areas: technical expertise (in terms of problem

solving), communication skills, interpersonal skills, and administrative skills (Kelley & Mathis, 1982). The Knowledge-Builder as Linker role requires skills similar to those of the Consultant. Surprisingly, it should not be all that unusual to find a range of skills such as this in an evaluator. Several papers have been written describing the kinds of skills an evaluator ought to have, and to a great extent they overlap well with the kinds of skills needed by an evaluator acting in these roles (Beane & Korn, 1980; Hargreaves et al., 1978). More important, several of those programs claiming to train evaluators include these kinds of skills in their training programs (Connor, Clay, & Hill, 1980). Thus, it seems reasonable to expect a substantial number of evaluators to have the skills needed by a Program Consultant.

Type of Evaluation
No claim is made that this model is applicable to all

types of evaluations. The clearest applicability is for the type of evaluation that Morel1 (1979) has called a “modality test,” where a specific model of a program is being tried and judged. Thus, when one is attempting to determine the effectiveness of a new or modified program, it makes great sense for the evaluator to act according to the program consultation model, as was the case in the Case Management Evaluation. The model also seems to have great relevance for ongoing, comprehensive evaluation “systems,” those that might be called a pure form of “formative” evaluation. This type of evaluation typically involves an inhouse evaluator, who might be a regular member of a permanent planning team, and, as an in-house staff member is also available as an internal change agent. The model begins to lose relevance when considering less sophisticated evaluation systems, that are involved in the more limited task of “monitoring” the program or agency. Such a system suggests a less influential role for the evaluator; perhaps one in which the evaluator has only minimal involvement in program development. Nevertheless, the desirability of having evaluator participation in program development is clearly still present, if only to insure the integration of the evaluation with program activities. As a less influential individual, it seems unlikely that the evaluator would

75

be called on to perform change agent duties under such circumstances. Finally, the model probably has no applicability at all to what might be called a “one shot” evaluation visit. This is because no long term relationship is developed during such an evaluation. If Program Consultation requires anything, it is a long term relationship. Type of Evaluator

In the Case Management Evaluation, the example given was that of an evaluator who was hired as a consultant. An obvious question is: would the model apply to an in-house evaluator? As noted above, it should. Thus, the real issue is, what are the advantages and disadvantages of one versus the other? On the one hand, the internal evaluator has the advantage of relative ease in developing long-term relationships with individuals within the organization. On the other hand, many external evaluators have the opportunity to develop such relationships, but many may not wish to bother, especially since it may involve volunteer work for a period of time. Another advantage of the internal evaluator is greater knowledge of the organizational system and the individuals within it. Finally, the in-house person may not appear to be as much of a threat as might an outsider. There are, of course, disadvantages to the in-house role. The issue of "turf" becomes much more salient, particularly if the insider has some degree of power that may be seen as expanding (perhaps correctly) when the Program Consultant role is assumed. Relatedly, the insider may have participated in numerous previous "battles" within the organization. The scars remaining from these disputes (on both sides) may impair the evaluator's effectiveness in many areas, but particularly as a change agent. Another in-house disadvantage (and an advantage for a consultant) is that the in-house evaluator may lack the "mystique" that the outside "expert" often carries. This expertise, or the lack of it, may have little basis in reality, but it affects the evaluator's credibility a great deal. Finally, the in-house person may lack the perspective that an outsider often brings to a situation.

SUMMARY

A model has been presented in which the evaluator acts as a Program Consultant, one who performs in the context of three roles: evaluation technician, program development linkage agent, and change agent. An example of how this larger role was carried out was provided, as well as some suggestions about when and where the model might be most applicable. Although the model might be described as "ideal" in some sense, it should not be viewed as idealistic, in that evidence has been provided here, and elsewhere (e.g., Alkin, Daillak, & White, 1979; Patton, 1978), that the broader role proposed by the model can be effectively carried out. While there is no expectation that all evaluators will act as, or even strive to act as, Program Consultants, to the extent that they do so, better evaluations, utilized to a greater degree, are likely to result.


REFERENCES

ALKIN, M. C., DAILLAK, R., & WHITE, P. Using evaluations. Beverly Hills: Sage, 1979.

ATKISSON, C. C., BROWN, T. R., & HARGREAVES, W. A. Role and functions of evaluation in human service programs. In C. C. Atkisson, W. A. Hargreaves, M. J. Horowitz, & J. E. Sorenson (Eds.), Evaluation of human service programs. New York: Academic Press, 1978.

ATKISSON, C. C., HARGREAVES, W. A., HOROWITZ, M. J., & SORENSON, J. E. (Eds.), Evaluation of human service programs. New York: Academic Press, 1978.

BEANE, W. E., & KORN, J. K. How not to become an evaluator: A review of selected training materials in print. Evaluation and Program Planning, 1980, 3, 297-300.

CAPLAN, N. The two-communities theory and knowledge utilization. American Behavioral Scientist, 1979, 22, 459-470.

CONNOR, R. F., CLAY, T., & HILL, P. Directory of evaluation training. Washington, D.C.: Pintail Press, 1980.

DAVIS, H. R., & SALASIN, S. E. The utilization of evaluation. In E. L. Struening & M. Guttentag (Eds.), Handbook of evaluation research, Vol. 1. Beverly Hills: Sage, 1975.

FAIRWEATHER, G. W. Community psychology for the 1980s and beyond. Evaluation and Program Planning, 1980, 3, 245-250.

FAIRWEATHER, G. W., & TORNATZKY, L. G. Experimental methods for social policy research. New York: Pergamon Press, 1977.

FRENCH, W. H., & BELL, C. H. Organizational development. Englewood Cliffs, NJ: Prentice-Hall, 1978.

HARGREAVES, W. A., ATKISSON, C. C., HOROWITZ, M. J., & SORENSON, J. E. The education of evaluators. In C. C. Atkisson et al. (Eds.), Evaluation of human service programs. New York: Academic Press, 1978.

HAVELOCK, R. G. Planning for innovation through dissemination and utilization of knowledge. Ann Arbor, MI: Institute for Social Research, 1973.

HUSE, E. F. Organization development and change. New York: West Publishing, 1975.

KELLEY, R. E., & MATHIS, C. L. Deciding on a consulting career. Consultation, 1982, 1, 42-49.

LEWIS, J. A., & LEWIS, M. D. Management of human service programs. Monterey, CA: Brooks/Cole, 1983.

MORELL, J. A. Program evaluation in social research. New York: Pergamon Press, 1979.

PATTON, M. Q. Utilization focused evaluation. Beverly Hills: Sage, 1978.

POSAVAC, E. J., & CAREY, R. G. Program evaluation: Methods and case studies. Englewood Cliffs, NJ: Prentice-Hall, 1980.

RAPPAPORT, J. In praise of paradox: A social policy of empowerment over prevention. American Journal of Community Psychology, 1981, 9, 1-27.

ROSSI, P. H., FREEMAN, H. E., & WRIGHT, S. R. Evaluation: A systematic approach. Beverly Hills: Sage, 1979.

SUCHMAN, E. A. Evaluative research. New York: Russell Sage Foundation, 1967.

TORNATZKY, L. G. The triple threat evaluator. Evaluation and Program Planning, 1979, 2, 111-116.

TORNATZKY, L. G., FERGUS, E. O., AVELLAR, J. A., & FAIRWEATHER, G. W. Innovation and social process. New York: Pergamon Press, 1980.

WEISS, C. H. Evaluation research. Englewood Cliffs, NJ: Prentice-Hall, 1972.

ZALTMAN, G., & DUNCAN, R. Strategies for planned change. New York: Wiley, 1977.

ZEIGENFUSS, J. T., & LASKY, D. I. Evaluation and organizational development. Evaluation Review, 1980, 4, 665-676.