Learning and Teaching Others Practical Evaluation Skills
LEARNING PRACTICAL EVALUATION "ON THE JOB"
Rolf H. Sartorius
Minneapolis, Minnesota
This article describes my experience as staff evaluator with the American Refugee Committee (ARC) from March through December 1988. It details my insights, emotions, and successes and failures as an evaluator with no prior evaluation experience. Recommendations are presented for others caught in similar circumstances.

BLISSFUL IGNORANCE

I had been on the job hunt for several months, and my research assistantship at the University of Minnesota was rapidly running out of funding. My M.A. in public policy and international development was recently completed, and I had interned for six months in Nairobi. I felt ready for an international development job. I contacted the new executive director of ARC and asked if any research or evaluation work was needed. During the first meeting, I said that although I had no evaluation experience, my solid background in international development research would enable me to learn how it should be done, and that I was a quick study. After two more meetings, I had a job. The director and I agreed that a month of background reading should be adequate preparation for me to begin evaluating.
ANGUISHED IGNORANCE

I actually spent two months reading at ARC and was very isolated during that time. This was a stressful period because I had been hired to do a job I had no idea how to do. In early March, I made a visit to Mike Patton to ask for reading suggestions. He suggested that I take his course in the spring and mentioned that the greatest problem for novice evaluators was gaining confidence. I wondered whether reading could substitute for experience in building confidence. Reading was efficient and safe: it would postpone risk taking in the short run and help me avoid mistakes in the long run.

During the second interview, the ARC executive director gave me a copy of Patton's (1986) Utilization-Focused Evaluation (UFE). It was one of my first evaluation readings, but it meant little without suggestions for concrete methods. Nonetheless, I took seriously the book's ideas that building a sizable methods repertoire and training staff in evaluation were of key importance. These ideas implied no small amount of expertise and called for a great deal more research. I continued to survey the general literature and hunted hard for the scarce literature on evaluation for international private voluntary organizations.

Another source of anxiety stemmed less from my lack of knowledge and expertise than from concern about how I was perceived by staff. The literature indicated that the initial impression of the evaluator would be a lasting one. What did people think of me when I spent all day reading? What did staff think of me? How would I establish my credibility? How much did I need to know before I could begin to evaluate? What would the director think (or do) if I did not begin evaluating soon? What would happen if my first evaluation bombed? These questions arose from a grounding in the literature.

The director said that he wanted a good show when he formally introduced me at a general staff meeting. At this first public appearance, I announced that I would not be doing the traditional sort of evaluation: that the emphasis would be on doing useful evaluations to help improve programs, that we would learn this "stuff" together, and that I would help them to think it through. People seemed indifferent. I began to make visits to one of the programs I was to evaluate. Although I still was not ready to evaluate, it was time to explore. People were friendly but also somewhat leery. They thought they were being evaluated.
ON THIN ICE
Late into the first month on the job, ARC's Development Department asked me to write some "evaluation sentences" for one of their largest grant proposals. When I saw it in draft form, it was not hanging together very well. I wrote comments to that effect and made suggestions on how to conduct the evaluation. Development was not happy about the comments. Later I met with them to discuss the proposal and the evaluation clauses. I suggested monitoring and evaluating only the most important parts of the proposed program to avoid generating a lot of costly and useless information. They told me to "lighten up" and thought I was arrogant. The director later told me I was "too used to communicating with academics" and that I needed to modify my approach.

Shortly afterward, I began writing a proposal to fund my own position. This provided an ideal opportunity to clarify my role at ARC, in my own mind and in the minds of the directors, and to lay out a tentative plan of action. I consolidated what I had learned in my reading and detailed a series of evaluation training workshops for staff. The proposal was submitted in April and I, in the meantime, was allowed to proceed as though it had been approved. It was at about this time that I began to worry about how I would do at the first workshop, knowing that my "credibility" would be on the line. I had prepared note cards for the session weeks in advance, and just glancing at them made me uneasy. I had enrolled in Patton's course, and by coincidence his first workshop was held two days before my own. This provided me with a very useful model and boosted my confidence. Still, the night before the workshop, I slept restlessly and very little.

LEARNING FROM SUCCESS: THE FIRST TWO WORKSHOPS
The first workshop began with a "participant observer" exercise straight out of UFE (p. 40). People were asked to get up and evaluate the room. After receiving instructions, there were looks of surprise and embarrassment. We all milled about nervously. The executive director put his ear to the wall as if to listen. People laughed. I stopped the exercise after a few minutes as people began to eye each other from across the room. The discussion that followed brought out many of the points that one would hope to make during such an exercise: that different observers observe
different things; that instructions beforehand about what exactly to observe would make things easier; and so on. Next, I began my "lecture," with note cards in hand. I talked about evaluation history, basic evaluation concepts, "user-focused" evaluation, what evaluation could do for programs, how many evaluations were never used, the importance of focus, and the importance of evaluation in project planning. People nodded and there was good discussion. We worked through the "Daniel" case (Patton, 1987), which allowed the Minnesota staff to air their views in front of the executive director. At the end of the session I felt very relieved and excited. Later during the day I received a number of compliments on the meeting.

The main objective of the second workshop was to develop a list of evaluation questions for each of ARC's projects. I began the session by explaining the criteria for a good, "user-focused" evaluation question (Patton, 1986, pp. 69-70) and instructed participants to list as many questions as possible for their own projects. The directors were asked to list questions for any or all projects that interested them. After a pregnant pause, people began writing. Roughly fifty questions were generated, with the project managers' questions being the most clearly focused. I reviewed the questions with the group, asking for clarification and explanation. People eagerly participated. After completing the lists, I said that project-by-project evaluations would try to hit the most important questions. Then I asked the project managers to volunteer one of their projects as our case study. People laughed, since one manager had already "been volunteered." The next day the director of the Minnesota program said that his staff seemed to be "buying into the process pretty well now." I took this to be a measure of success.
LEARNING FROM ERROR: THE THIRD AND FOURTH WORKSHOPS

The main objectives of the third workshop were technical: to clarify the focus of the case study, to review the project's theory of action, and to develop a list of stakeholders. The executive director and a few others were unable to attend this and the fourth session. I began the session by explaining a full-sized "evaluation clock" (Santo Pietro, 1983). Then I presented the theory of action for the case study; this did not seem to register clearly, possibly for lack of context. The next part was a stakeholder analysis in which staff were asked to list the various groups or individuals who "had a stake" in the project. The group produced a fairly
comprehensive list, but was not eager to participate. Rather than continue, I decided to come back to it next time. Finally, I led the group to a discussion of the project manager's evaluation questions. This quickly became a three-way debate among the Minnesota program director, the project manager, and myself. The others became passive and bored. In chats with staff afterward, it was clear that the session had been a failure; many of the topics did not seem relevant, and people had little opportunity to participate. I had done a poor job facilitating and had allowed the discussion to get out of hand.

The fourth workshop was temporarily postponed because staff were unusually busy. People were appreciative. Although the topic for the fourth workshop was fairly technical ("Appropriate Evaluation Methods, Measurement, and Design"), I was determined not to repeat my earlier mistakes. I would combine lecture with more participatory exercises. The session began with a review of the "evaluation clock" to pinpoint where we were in the case study. Next was an overhead slide presentation on "practical evaluation." After a short break, we reviewed a handout on the theory of action for the case study, looking critically at its goals and objectives. Progress toward each goal could be monitored using a different set of indicators. A second handout listed the stakeholders that the group had identified previously. I asked the group about the major values/concerns for each stakeholder. Finally, we talked about the evaluation design for the case study, the interview guide, and my first interview. After the workshop, people were excited and enthused and offered very positive feedback.

LEARNING THROUGH PRACTICE
The focus of the case study had been determined by the project manager, who indicated an interest in learning more about how the project's American volunteer mentors were faring. This was good material for a case study: three of the Minnesota projects used volunteers to tutor refugees, and the study did not require training translators. The project manager seemed to enjoy participating in the selection of mentor interviewees, although she warned that if I talked only with people "who had nasty things to say about the project, I'll have your neck!" I reminded her that we were striving for balance and that she need not worry.

The initial interviews were somewhat awkward as I was not skilled at putting the interviewees at ease. After each session I made critical notes on
my interviewing technique. Later interviews were much improved as I memorized the interview guide and became more relaxed. Transcribing the tapes was very tedious, taking a full two weeks. (Later evaluations would use more cost-effective group interviews and more careful note taking and highlighting of key quotes.) When the data were compiled, drafts of the report were circulated to the project manager and her program director. I pushed them to develop recommendations, and we met to discuss their ideas.

During the weeks before, the executive director had become anxious to see results. I had been at ARC for over three months and had not produced an evaluation. Moreover, my grant proposal had not yet been approved. There was a temporary return to a period of stress and self-doubt. During July my grant proposal was approved, the same day that my first report was distributed. I was redeemed.

During the summer and fall I continued to work through the Minnesota program, project by project. Experience was greatly expanding my capabilities, while staff actively participated in the evaluation process. We began to design and implement evaluation and monitoring systems. Detailed evaluation plans were written for each project and were included as a line item in each project budget. A fifth workshop on building evaluation into project planning is scheduled. My hope is that as my capabilities as staff evaluator have increased, so has ARC's institutional capacity to use and benefit from evaluative information. Staff interviews during the first part of 1989 will look at the short-term impact of the evaluation process.

My recommendations for new evaluators or seasoned evaluators considering new approaches are as follows:

• Develop a thorough grounding in the appropriate literature
• Explore and adapt a range of related evaluation techniques
• Keep current on the literature
• Develop a detailed work plan with generous time frames
• Train staff in the uses of evaluation and try hard to get them on your side
• Schedule occasional retraining sessions
• Be sensitive to staff time constraints
• Actively involve staff in the evaluation process
• Begin by evaluating small-scale, less demanding projects
• Do not be afraid to take calculated risks
• Record and benefit from your successes
• Record and learn from your mistakes
• Make a concerted effort to develop interpersonal, interviewing, and group facilitation skills
• Include institutional development and evaluation sustainability on your list of priorities.
REFERENCES

Patton, M. Q. (1986). Utilization-focused evaluation (2nd ed.). Beverly Hills, CA: Sage.
Patton, M. Q. (1987). Creative evaluation. Newbury Park, CA: Sage.
Santo Pietro, D. (1983). Evaluation sourcebook for private and voluntary organizations. New York: American Council of Voluntary Agencies for Foreign Service.
TEACHING EVALUATION THROUGH THE PRACTICUM EXPERIENCE
David P. Moxley
Wayne State University
Richard J. Visingardi
Michigan State University
Teaching and learning program evaluation are challenging tasks for educators and students. At the Developmental Disabilities Institute located at Wayne State University in Detroit, Michigan, learning to evaluate service delivery to developmentally disabled people is a core competency area for the Institute's graduate social work interns. The mission of the Institute is to provide resources such as technical assistance, applied research, and program evaluation to community agencies serving people with developmental disabilities. The evaluation practicum has emerged within the Institute as a major vehicle for teaching social work students the many realities and demands of conducting evaluations within actual agency settings. An agency with evaluation needs is identified, and one student or a group of students initiates an evaluation, beginning with problem definition and question formulation and moving through all major steps of the project. A social work faculty member serves as preceptor for the completion of the project.