
Evaluation and Program Planning, Vol. 11, pp. 335-339, 1988
0149-7189/88 $3.00 + .00 Copyright © 1988 Pergamon Press plc
Printed in the USA. All rights reserved.

EFFECTIVE ENGAGEMENT OF DECISIONMAKERS IN PROGRAM EVALUATION

TIMOTHY W. HEGARTY and DOUGLAS L. SPORN
Food and Drug Administration

ABSTRACT

The necessity for evaluators who produce program information to forge links with decisionmakers who use program information is a major theme in the literature on evaluation use. This paper describes techniques successfully developed in one federal agency to forge those links. The techniques are designed to engage decisionmakers at five key points in the evaluation process: (a) during the identification of candidate programs for evaluation; (b) in the selection among candidates; (c) while the evaluation is being conducted; (d) during the reporting of evaluation results; and (e) in assessments of the impact of completed evaluations.

INTRODUCTION

"In view of the timing involved in informing us, we assume it is actually an investigation. . . ." That's what a program manager once wrote when he heard that an evaluation involving his program was underway. His words and the tone of his memo suggested he considered the evaluation to have all the characteristics of a grand jury probe. This was not the response the agency evaluators wanted to engender or hear. They heard it because they disregarded one of the techniques that is key to engaging program managers and other decisionmakers in constructive evaluations. This paper describes these techniques. It also tells how the evaluators responded to the perceived investigation.

The program manager in question works for the Food and Drug Administration (FDA). FDA is a federal agency responsible for assuring the safety of human food, animal feeds, and cosmetics, and the safety and efficacy of human drugs, medical devices and radiological products, biologics, and animal drugs. The agency has a traditional line/staff configuration. The Office of Planning and Evaluation, a staff office to the Commissioner of FDA, manages the agency's evaluation process. Agency evaluators conduct the evaluations. The programs and activities evaluated are line functions. The primary managers of the programs are located in one of five Centers of the agency.

Requests for reprints should be sent to Timothy W. Hegarty, Office of Planning & Evaluation (HFP-13), Food and Drug Administration, 5600 Fishers Lane, Rockville, MD 20857.

Results of studies on the empirical use and nonuse of evaluations are recurring features in the evaluation literature. Studies of nonuse appeared first (Cohen & Garet, 1975; Deitchman, 1976; Weiss, 1972). Connolly and Porter (1980) identified categories to organize the various theories explaining nonuse. Studies of use appeared later (Barkdoll, 1980; Patton, 1978; Wholey, 1983). Springer (1985) identified categories to organize the various themes he found in the literature on facilitating the use of evaluation. A major theme in that literature is the need for evaluators who produce information to forge links with decisionmakers who need and can use the information. This paper is a contribution to that theme.

The techniques described in this paper are refinements of approaches which have been developed over the past nine years and continue to evolve. Barkdoll (1980) discusses the evolution and use of some of the techniques up to 1980. He describes the techniques used to conduct supportive consultant (Type III) evaluations, which are characterized by consultation and consensus among evaluators, program managers, and other parties with an interest in a program. Supportive consultant evaluations contrast with the prior Type II evaluations, which were rigorously analytical and not antagonistic to, but somewhat aloof from, program managers. Both contrast with earlier Type I evaluations, characterized by Barkdoll as "investigative reporting," which were confrontational. Barkdoll identifies three factors which determine the applicability of supportive consultant evaluations: (a) trust among program managers, evaluators, and agency officials; (b) evaluators talented in interpersonal as well as analytical skills; and (c) a cooperative management style.

The techniques described here have been developed and applied to enhance the effectiveness of supportive consultant evaluations. One of the propositions underlying supportive consultant evaluations is that program managers are often the most important individuals affecting the impact of evaluations. These techniques are designed to positively engage those individuals in the evaluation of their programs. The techniques are used at the five key points of the FDA evaluation process: (a) during the identification of candidate programs for evaluation; (b) in the selection among candidates; (c) while the evaluation is being conducted; (d) during the reporting of evaluation results; and (e) in assessments of the impact of completed evaluations.

IDENTIFICATION OF CANDIDATES FOR EVALUATION

During the first five years of supportive consultant evaluations, agency evaluators identified the list of candidates. The Commissioner selected from the list. The identification criteria were straightforward. The first year, all programs were candidates. The second year, all were candidates but those evaluated the first year. And so on. Occasionally, agency evaluators would suggest to the Commissioner programs they considered more "important" candidates than others. Agency evaluators notified program managers, and their Center Directors, of the "good news" that their programs were selected for evaluation. The evaluators then met with the program managers. These initial meetings were often "white knuckle" affairs for both the program manager and the agency evaluators. The evaluators had to invest considerable energy to overcome the initial resistance and suspicion of the program manager.

Beginning in 1982, agency evaluators decided to test a different technique for identifying candidates for evaluation. They encouraged managers to nominate their programs. Agency evaluators decided on the test for four reasons. First, self-nomination would facilitate a positive engagement between evaluator and program manager. Second, program managers have information relevant to the timing of an evaluation. Third, program managers would be given an equal chance at competing for scarce evaluation resources. Fourth, enough managers would welcome an evaluation which helped them better manage their programs.

The nomination process has been conducted twice since 1982 and marks the second generation of supportive consultant evaluations. Senior agency evaluators meet with program managers individually. The evaluators describe the benefits of participating in an evaluation, citing positive experiences of other program managers, such as the development of more efficient program strategies or confirmation of the appropriateness of current ones, establishment of cooperative relationships with stakeholders, and the highlighting and resolution of important program issues. They obtain the program manager's nomination for a place on a list of candidates, at the top or at the bottom.

Thirty-seven program managers were interviewed in the first nomination process. Eighteen nominated their programs to be top candidates. Of the 19 who did not, eight said they had no issues to evaluate; seven said the timing was inappropriate because of pending reorganizations; four said their term projects were winding down and scheduled for completion. Twenty-seven program managers were interviewed in the second nomination process. (Managers of programs evaluated as a result of the first process were not interviewed. Intervening reorganizations consolidated other programs.) This time ten of the 27 nominated their programs to be top candidates. Of the 17 others, nine said the timing was inappropriate; eight said they had no issues to evaluate.

One reason agency evaluators decided to test self-nomination was the conviction that enough program managers would welcome an evaluation. That conviction has been reinforced by the number of nominations and the importance of the programs nominated. Consider size of programs in dollars, for example, as an indicator of importance. Managers of programs ranked first, second, and third in size have nominated themselves as candidates. So too have managers of programs ranked seventh, tenth, thirteenth, sixteenth, and others in between. The effect of self-nomination is the engagement of program managers before the selection of evaluations, and a fast, positive start once selection is made.

SELECTION AMONG CANDIDATES

The second phase of the evaluation process is the selection of the projects to be evaluated. The Commissioner or Deputy Commissioner makes the final selection. They have the nominations of the program managers in hand. (The list by then has been reviewed by the Center Directors responsible for the programs as an initial screen. Center Directors have vetoed no candidates and strongly endorsed several.) They also may add their own candidates. In addition, agency evaluators provide the Commissioner an estimate of the number of evaluations they can undertake.

From the first list of 18 program manager nominations, the Commissioner selected six, adding none to the list. To the second list of 10 nominations a new Commissioner added three candidates of his own. Agency evaluators could undertake only four new evaluations that year. The Commissioner selected two of the 13, neither of them his own candidates; rejected four; and postponed decision on the remaining seven while agency evaluators researched his three nominations. They provided the Commissioner enough information within six weeks to answer his three questions sufficiently. He then selected two more. The result consisted of four selections from the original list of ten top nominations.

Two benefits attached to this selection process. First is the engagement of the Commissioner and Deputy Commissioner, ensuring that the evaluation agenda is consistent with their agenda and that they have a stake in the results. Second is reinforcement of the engagement of program managers. They know that other key decisionmakers are interested in the evaluation results. All program managers interviewed are notified of the results of the selection process. The evaluators who notify the managers selected bring good news this time.

CONDUCT OF EVALUATIONS

The third phase of the evaluation process is the design and conduct of the evaluation. One technique has been found to be particularly effective during this phase. The program manager is engaged by now. The Commissioner and Center Director have committed to the evaluation. In a complex government agency such as FDA, these are not enough engaged decisionmakers to ensure that the evaluation will be useful. Neither the program manager nor the Center Directors have control over all the resources and policy decisions of a program. There are almost always other stakeholders in the program who consequently have a stake in the outcome of the evaluation. Agency evaluators invest a significant amount of time engaging all significant stakeholders in the design and conduct of the evaluation.

Engaging stakeholders early is necessary for several reasons. First, stakeholders often have a perspective on a program which the program manager does not, and which is critical to the study design. Second, they have data and information which the evaluator needs. And third, they influence the use of an evaluation: positively if they are engaged; negatively if they are ignored or feel threatened.

Soon after selection, evaluators identify and meet with stakeholders to: (a) inform them of the program manager's questions; (b) discuss the questions the stakeholders need answers to; (c) identify sources of data and information; and (d) obtain their input to the study design. The evaluator's goal is, in Michael Patton's words, "to be sure that the people who are going to be the primary users of evaluation findings are the same people who decide what the focus of the evaluation will be" (Patton, 1978, p. 87).

A few years ago agency evaluators were reminded of the importance of adequately engaging stakeholders. The evaluators were several weeks into an evaluation of a large program. They had met several times with the program manager and his staff. They were gaining familiarity with a large data base. It was too early, they thought, to engage the stakeholders. Besides, they were too busy learning about the program. Then they put their preliminary thoughts in writing and sent them to the program manager for review. That memo did not look like a study plan to them, but it did to others who happened to see it.

One of the people who read their initial thoughts was a critical stakeholder in the manager's program. He concluded that an evaluation was well underway, behind his back. He said so in a memo to the program manager: "Any effort like this should have been conducted as in past evaluations with members (from his organization) on the study team. In view of the timing involved in informing us, we assume it is actually an investigation. . . ." (Personal communication, 1982).

The evaluators quickly scheduled a meeting with the stakeholder and his staff. Antagonism characterized the meeting at the start, but it dissipated as the evaluators explained their behavior and turned the meeting into a session on study design. The stakeholder and program manager subsequently became coclients of the evaluation. The evaluators were rebuked for ignoring one of their own techniques for positively engaging stakeholders. There was good news in the rebuke, however. That stakeholder expected to be involved. This is the expectation the evaluators want.

REPORTING EVALUATION RESULTS

The fourth phase of the FDA evaluation process is reporting results. The technique used in this phase complements those of the previous phases; that is, once the program manager and stakeholders are engaged, this technique is designed to keep them engaged. The technique is simply to share evaluation information in sequence as it is developed. The communication probably appears to the receivers as something like Wholey's "sequential purchase of information" (Wholey, 1983, p. 119). The sequential sharing has several benefits:

• It is a form of discovery which allows decisionmakers to use information as it is developed;
• It aids interpretation by eliciting explanations for findings which evaluators can then test as the evaluation continues; and
• It provides several pauses in which everyone involved can reexamine the direction of the evaluation.

Agency evaluators rely heavily on sharing information in face-to-face meetings. A device they use to display information is the small chart pad on a table easel. The advantage of this device is that it allows the evaluators during their presentation to sit with the program manager and stakeholders as part of a group. The evaluators can control the focus and order of the presentation. The program manager or stakeholders can control its pace: they can hurry it up, or they can slow it down and talk. These meetings soon develop an atmosphere that is relaxed, informal, participative, and open. Small reproductions of the presentation charts are often left behind after the meetings, with an introductory narrative and explanatory and connecting paragraphs.

The value of this discovery technique was demonstrated recently during an evaluation presentation. An agency evaluator was presenting results of an analysis of trends in the number of inspections of several kinds of pharmaceutical establishments, for example, manufacturers, warehouses, and repackers. The data showed a steep increase in warehouse inspections compared with the others. This was an unanticipated finding. The program manager and a principal stakeholder were the only decisionmakers at the meeting. They stopped the presentation when it came to the trend, discussed the reasons for the trend, concluded it would be inappropriate for the trend to continue, and decided to reduce inspection intensity of pharmaceutical warehouses. The evaluator then continued the presentation, having just made a substantial improvement in the effectiveness of the program.

The sequential sharing of information influences the final report of the evaluation. Agency evaluators have found that recommendations are often unnecessary in a final report. Actions which recommendations would address have already been taken or planned during the sharing of information. Instead, the report identifies the actions already taken by decisionmakers and their ongoing efforts at further improvement.

ASSESSMENT OF IMPACT

Assessing the impact of completed evaluations is the fifth phase in the FDA evaluation process. Identifying potential improvements in the process is a part of that assessment. A supervisory evaluator conducts a follow-up meeting with the program manager and stakeholders after an evaluation to discuss the use they have made, and intend to make, of the evaluation information.

One set of follow-up meetings is often not sufficient to identify the impacts of an evaluation. Some decisions do not get implemented for a year or two after an evaluation, and are not explicitly associated with it. On the other hand, the follow-up meeting may result in the program manager requesting assistance in implementing the decisions. One evaluation, for example, resulted in agreement among the managers of a relatively young program on its public health goals, objectives, and priorities.

One priority was to determine the untoward effects of a diagnostic procedure. At the follow-up meeting the program manager asked for assistance in the determination through an analysis of national survey data. Agency evaluators did the analysis, outside of the process described in this paper.

Identifying improvements in the evaluation process usually requires only one set of follow-up meetings. Among the suggestions for improvement received and acted upon are: (a) to meet even more frequently, (b) to provide more interim documentation of findings, and (c) to reduce the time to complete an evaluation.

The follow-up meetings serve another purpose. They continue the positive engagement among evaluators, program managers, and stakeholders in the sharing of information.

SUMMARY

Engaging decisionmakers in supportive consultant evaluations is more than informing, asking questions, being friendly, and delivering products on time. It is establishing a reciprocal relationship in which, while remaining independent, both "consultants and managers can work toward mutual interests" (Turner, 1982, p. 120). The interests of consultant evaluators are in solving program puzzles and seeing the solutions used. To do this they have to engage managers whose interests are in having program puzzles solved and using the solutions. Agency evaluators have found that engaging program managers in an effective reciprocal relationship is best begun early, during the identification of candidates for evaluation and their selection. They have also found that the engagement must continue during the conduct of evaluations and the reporting of results because evaluations are an iterative discovery process sustained by interim interpretations and conducive to sequential use. Continuing the engagement beyond the "final report" is necessary to identify impact, assist implementation, and assess the evaluation process.

REFERENCES

BARKDOLL, G.L. (1980). Type III evaluations: Consultation and consensus. Public Administration Review, 40(2), 174-179.

COHEN, D., & GARET, M. (1975). Reforming educational policy with applied social research. Harvard Educational Review, 45, 17-41.

CONNOLLY, T., & PORTER, A. (1980). A user-focused model for the utilization of evaluation. Evaluation and Program Planning, 3, 131-140.

DEITCHMAN, S. (1976). The best-laid schemes: A tale of social research and bureaucracy. Cambridge, MA: MIT Press.

PATTON, M.Q. (1978). Utilization-focused evaluation. Beverly Hills, CA: Sage.

SPRINGER, J.F. (1985). Policy analysis and organizational decisions. Administration and Society, 16, 475-508.

TURNER, A.N. (1982). Consulting is more than giving advice. Harvard Business Review, September-October, 120-129.

WEISS, C.H. (1972). Evaluating educational and social action programs: A treeful of owls. In C.H. Weiss (Ed.), Evaluating action programs (pp. 3-27). Boston, MA: Allyn & Bacon.

WHOLEY, J.S. (1983). Evaluation and effective public management. Boston, MA: Little, Brown and Company.