Evaluation and Program Planning 22 (1999) 363–372
www.elsevier.com/locate/evalprogplan

Involving program stakeholders in reviews of evaluators' recommendations for program revisions

Paul R. Brandon
Evaluation Office, Curriculum Research and Development Group, University of Hawai'i at Manoa, 1776 University Avenue, Honolulu, HI 96822, USA
Abstract
Why is the use of stakeholder expertise a sound rationale for involving program staff and beneficiaries in reviewing evaluators' recommendations for program revisions? What procedures might evaluators use for involving these stakeholders in this task? What can we conclude about the likely success of these procedures? This article seeks to address these questions. In the context of educational program evaluation, some of the theoretical and practical issues about the participation of stakeholders in reviewing recommendations are discussed. The participation of program stakeholders for the purpose of tapping their expertise is explained; a recent evaluation, in which program stakeholders reviewed evaluators' recommendations for program revisions, is described; and the implications of the article for stakeholder-based evaluation and practical participatory evaluation are presented. © 1999 Elsevier Science Ltd. All rights reserved.
Keywords: Stakeholders; Evaluation recommendations
1. Background
1.1. Involving stakeholders to increase evaluators' understanding of programs
Stakeholder participation in program evaluation in the United States and Canada has been widely studied and advocated since the early 1970s. Early work was on stakeholder-based evaluation (SBE) (Bryk, 1983; Mark & Shotland, 1985). More recently, the focus has been on the practical participatory-evaluation approach (PPE) (Cousins & Earl, 1992, 1995; Cousins & Whitmore, 1998).1 By focusing on the issues and values that are closest to stakeholders who intend to use evaluation findings, SBE and PPE are intended to enhance the utilization of these findings.

An alternative rationale for involving stakeholders, which has been less widely discussed than the utilization rationale, is to involve them in evaluations simply to enhance evaluators' understanding of the programs (Brandon, 1998). This is a stakeholder-expertise rationale. Huberman and Cox (1990, p. 165) said, "The evaluator is like a novice sailor working with yachtsmen [i.e., program staff and program beneficiaries] who have sailed these institutional waters for years, and know every island, reef and channel".2 Stakeholders' program knowledge is helpful when evaluators are developing evaluation questions, developing evaluation instruments, or interpreting evaluation findings.

1 Another type of participatory evaluation is transformative participatory evaluation (Cousins & Whitmore, 1998). The purpose of this approach is to empower program participants (Fetterman, Kaftarian & Wandersman, 1996). This approach has been prevalent in international-development settings and is not discussed in this paper, which focuses on American and Canadian settings.

2 Some contributors to the evaluation literature prefer the term program clients to the term program beneficiaries. The latter term is used here because it is the more appropriate of the two in the context of educational evaluation, which is the focus of this article.
The expertise rationale borrows from the theory of expert-novice differences. Experts have factual and practical knowledge unknown to novices, and they know patterns and themes of which novices are unaware (Glaser & Chi, 1988). Experts also can identify information that is relevant to decision making and communicate their knowledge to non-experts effectively (Shanteau, 1988).

1.2. Stakeholders' reviews of evaluators' recommendations for program revisions
Stakeholders can be particularly helpful when reviewing evaluators' recommendations for program revisions. Recommendations to program personnel are commonly expected in evaluation reports. For example, Braskamp, Brandenburg, and Ory (1987, p. 66) quoted an evaluation client who said, "I don't believe data ever speak for themselves. Only evaluators know what the results are. I need recommendations. Expect them. You're only half done if no recommendations are made". Many evaluators concur with this opinion, believing that "recommendations are one of the most critical products of any evaluation" (Hendricks & Papagiannis, 1990, p. 121) and that "evaluators should almost always offer recommendations" (Hendricks & Handley, 1990, p. 110).

Evaluation findings alone, however, usually do not provide a sufficient basis for developing appropriate recommendations for program revisions (Sadler, 1984; Scriven, 1991, 1993). For recommendations to be well informed, evaluators need to become expert in programs' historical, administrative, managerial, and operational contexts. However, they usually do not have the time or funding to gain this expertise, particularly when evaluating small programs. Without input from stakeholders who are intimately familiar with program context, recommendations might only be hypotheses about future actions. By tapping stakeholders' knowledge about program context, evaluators are not constrained to using the limited information that they gain during the course of a study.

The stakeholder groups that have the appropriate program expertise for reviewing evaluation recommendations are program staff or faculty and program beneficiaries such as students.3 Of all program stakeholders, these groups have the most expertise about the history, administration, and management of programs. In brief evaluations of small programs, program faculty and staff know more than external evaluators about program history, administration, management, and daily operations. Program beneficiaries know about aspects of program implementation or outcomes that neither evaluators nor program personnel know. Also, when they contribute their program expertise, program beneficiaries can help ensure that evaluations are not coopted by program faculty or staff (Dawson & D'Amico, 1985; Perry & Backus, 1995).

The use of program staff's and program beneficiaries' expertise is most appropriate when reviewing external evaluators' recommendations for revisions of small single-site programs, for at least two reasons. First, decisions about small programs are often made on site. (This is also the type of setting in which stakeholder participation is likely to enhance the use of evaluation findings, as espoused in the SBE and PPE approaches to stakeholder participation.) In contrast, administrators of large multi-site programs are unlikely to find that local stakeholders' knowledge of the vagaries of program implementation and outcomes is useful for decision making. Second, in evaluations of small programs, evaluators can conduct a thorough review of recommendations, because they potentially can tap all program staff's and beneficiaries' expertise. Staff or faculty and beneficiaries in a small program have the full body of information that is available about program context; the knowledge they apply in a review of recommendations is the most that can be obtained from any source. In contrast, in an evaluation of a large program, it is unlikely that evaluators can enlist the help of all stakeholders, resulting in incomplete (and perhaps biased) reviews of recommendations, because the full range of stakeholder knowledge about program context cannot be tapped.

3 Several stakeholder groups, such as school board members, taxpayers, or the public at large, are potential participants in evaluations that emphasize evaluator-stakeholder interaction. Evaluators most value the participation of program personnel and beneficiaries because they have significant expertise about program context. Other groups may have important contributions to make to evaluations, but, at least in the evaluations of curricula and educational programs, these contributions are less likely to be based on expertise than on values, political interests, and so forth. The issue of the participation of other stakeholder groups (Mark & Shotland, 1985) in other evaluation contexts deserves further exploration and discussion in the evaluation literature.

1.3. Methods for involving stakeholders for the purpose of tapping their expertise
When tapping stakeholders' program expertise, evaluators should use carefully developed methods. How might these methods be structured?

First, when involving program stakeholders in reviewing evaluators' recommendations for program revisions, the meetings in which evaluators present their recommendations to stakeholders should be carefully organized and conducted. Evaluators will not fully gather stakeholder expertise if meetings are haphazardly planned or poorly conducted. The recommendations that are reviewed in the meetings should be clear, jargon-free statements presented in efficient formats useful for oral presentation, such as chart essays (Torres, Preskill & Piontek, 1996). Experienced facilitators should conduct stakeholder meetings. The purpose of stakeholder participation should be explicitly and openly explained at the beginning of the meetings, and instances of prior successful involvement of similar stakeholder groups should be described to the participating stakeholders. This is a particularly important motivational tool for stakeholders such as program beneficiaries, who might be reluctant to participate if their opinions have routinely been ignored in the past.

Second, stakeholder groups should be given opportunities to provide expertise in a nonthreatening manner. This will help ensure that the full range of stakeholder expertise is gleaned. For example, program beneficiaries should meet without program staff present, and stakeholders' comments in group meetings should be recorded anonymously.

Third, meeting procedures should help ensure that group discussions are not affected more by stakeholder personality than by stakeholder knowledge of program context. Organizational theory shows that decision making is affected not only by stakeholders' expertise but also by their personalities (Bacharach & Lawler, 1980), and considerable research has demonstrated that group discussions can be inordinately affected by outspoken group members' opinions (Fitzpatrick, 1989). To help ensure that stakeholders' personalities affect discussions less than their program expertise does and that all participating stakeholders' relevant insights are discussed, evaluators should ask probing questions and repeatedly encourage meeting participants to speak openly.

Fourth, meetings with stakeholders should be brief, convenient, and efficient. Program stakeholders are more likely to be motivated to participate fully if excessive time commitments are not asked of them.4 Ideally, evaluators should meet with stakeholders at regular program or curriculum events, such as staff meetings or cohort-wide student meetings.

4 Several researchers have voiced concerns about the time requirements for stakeholders participating in evaluations (Ayers, 1987; Brandon, Newton & Harman, 1993; Dawson & D'Amico, 1985; Donmoyer, 1990; Greene, 1987; Mark & Shotland, 1985; Tovar, 1989). Extensive stakeholder involvement often is particularly impractical for program beneficiaries, who in most cases will be donating their time to participation in evaluations. Therefore, some carefully crafted but time-consuming methods for tapping stakeholder expertise, such as the Multi-attribute Utility Technique (Edwards, Guttentag & Snapper, 1975) and concept mapping (Trochim & Linton, 1986), are often not feasible for collecting program information from beneficiaries, particularly in brief evaluations of small programs.
2. Stakeholders' review of evaluators' recommendations for program revisions: an empirical example
2.1. Background
A method for stakeholders' reviews of evaluation recommendations for program revisions was developed, tested, and applied in an external evaluation of a medical-school problem-based learning (PBL) curriculum. In PBL, students meet in small groups where, guided by tutors, they direct their own learning. The evaluation of the PBL curriculum focused on the five curriculum units that constituted the first two years of medical school.5 One set of evaluation tasks (among several others) was to develop, pilot-test, and administer the Curriculum Implementation Questionnaire (CIQ). The purpose of administering the questionnaire was to collect data on the extent to which curriculum components had been implemented fully and adequately. The CIQ was administered to tutors and students at the end of each curriculum unit.

At the conclusion of each of four curriculum units over a two-year period, the evaluators used the CIQ findings when preparing drafts of recommendations for revisions in the curriculum. The recommendations were to be included in annual reports to the school's policy-making committee (the primary evaluation audience). However, the evaluators were cautious about making recommendations, because they were unfamiliar with the nuances of the history, administration, management, and daily operations of the curriculum (see Cooper, Brandon & Lindberg, 1998). Therefore, before writing the final versions of the annual reports, they set out to involve stakeholders in reviewing the recommendations.

The evaluators believed that the recommendations would be better informed if the faculty participating as PBL tutors had an opportunity to make suggestions about draft versions of the recommendations. Likewise, because it was likely that the students' experience of the curriculum gave them knowledge that neither evaluators nor faculty had, the evaluators believed that students also should make suggestions about draft versions of the recommendations. Steps were taken to involve both groups. The heads of curriculum units were asked to present the evaluators' request to review recommendations to the tutors, and the school administrator in charge of the curriculum was asked to identify an appropriate time for the evaluators to meet with the students. In the end, because of time constraints on the tutors and logistical difficulties, arrangements were made to meet only with the students.

5 Students' first two years at the medical school are organized into units: three in the first year and two in the second.

2.2. Methods for reviewing the recommendations
The evaluators met with students once at the end of one school year and three times the following year, for a total of four meetings. Two meetings were conducted with each of two cohorts of students, called here Cohort 1 and Cohort 2. The meetings were held immediately following brief, regularly scheduled end-unit meetings of the students with faculty and administrators. Cohort-1 students were at the end of their first year of medical school in the first meeting and at the end of the first unit of their second year in their second meeting; Cohort-2 students were at the ends of each of the first two units in the first year of medical school in their meetings. The length of the meetings ranged from about 40 min to about 1 h.

The purposes of the meetings were to (a) present summaries of the key findings of the CIQ, (b) present draft versions of evaluators' recommendations for program revisions based on the findings, and (c) elicit students' opinions about the recommendations.

The meetings were carefully crafted. The evaluators presented one-page chart essays on overheads, with paper copies distributed to the students (Jones & Mitchell, 1990; Torres et al., 1996). Each chart essay addressed a curriculum component and included an evaluation question, a brief overview of the CIQ data collection, a succinct summary of the CIQ findings collected on the students in the unit, and the evaluators' recommendations for curriculum improvement (and, in many chart essays, commendations). The recommendations were based on the evaluation staff's interpretation of the findings in light of their accumulated understanding of the program. In two of the meetings, five chart essays each were discussed, and in the other two, six each were discussed. A sample chart essay is shown in Fig. 1.

[Fig. 1. Sample of a chart essay presented in meetings to review recommendations.]

The meetings were efficient in several ways. First, they were held at the conclusion of brief cohort meetings that had already been scheduled for the final day of each unit. Second, none of the meetings lasted more than an hour, during which time the students discussed each of the chart essays. Third, the chart essays presented information in a complete and understandable format. Only one chart essay was displayed at a time, and the meeting facilitator focused the discussion on the findings and recommendations about the least-implemented aspects of the curriculum. Fourth, the meetings did not require extensive additional time or resources of the evaluation staff. The chart essays had already been prepared for the final report; other than conducting the meetings and summarizing meeting notes, little staff time was necessary.
Table 1
Student participation in the review of recommendations

                                                     Cohort 1                      Cohort 2
                                                     1st meeting   2nd meeting    1st meeting   2nd meeting
Participation                                        (N = 46)      (N = 45)       (N = 55)      (N = 45)
Female (%)                                           55.8          55.0           45.1          47.7
Male (%)                                             44.2          45.0           54.9          52.3
N and % of students making at least one comment      20 (43%)      21 (47%)       16 (29%)      15 (33%)
N and % of students making more than one comment     4 (9%)        11 (24%)       7 (13%)       6 (13%)
Total number of comments                             28            51             49            30
To encourage an open, nonthreatening, and full discussion, the facilitator (a) asked the faculty and administrators to leave the room at the beginning of the meeting, (b) assured the students that all their comments would be reported anonymously, and (c) encouraged clarification, rebuttals, or elaboration. In addition, to help ensure that all comments were clear and complete, the principal investigator asked probing questions and restated students' comments when necessary. Each of these features of the meetings encouraged students to participate openly and fully, thereby helping ensure that stakeholder expertise was fully gleaned.

2.3. The extent to which students participated fully

Program stakeholders' review of the recommendations can benefit evaluators to the extent that entire groups of stakeholders participate fully. Otherwise, some important knowledge about programs might not be gleaned. To examine the extent to which students participated fully in the reviews of recommendations, data were collected on (a) the total number of student comments made, (b) the number of students making at least one comment each, and (c) the number of students making more than one comment, as shown in Table 1. For each of the two cohorts, the number of students attending was about the same in both the first and second meetings.

Student participation in all meetings was at a level expected by the evaluators: not high, but not exceptionally or unacceptably low, either. About half of the Cohort-1 students and about one-third of the Cohort-2 students made comments during a meeting. For each cohort, the number of students making at least one comment was about the same in both meetings.

As might be expected, some chart essays evoked more student interest and attention than others. In observations of three of the four meetings, the number of student comments for each chart essay was recorded. For most chart essays, students made comments. The results show a wide variation in the number of comments, ranging from 3 to 22 per chart essay in one Cohort-1 meeting, from zero to 14 per chart essay in the first Cohort-2 meeting, and from zero to 20 per chart essay in the second. In all meetings students readily made comments without prompting. Many times students nodded their heads in agreement with the recommendations or with other students' comments without speaking aloud.

At the conclusion of each of the meetings, the evaluators distributed a questionnaire asking students to rate the extent of their participation.6 The average levels of self-reported participation are given in the first row of Table 2. The results show moderate levels of participation, confirming observations made during the meetings.

6 Questionnaire response rates ranged from 89 to 98% of the students attending the end-unit meetings.

2.4. The extent to which students participated equitably

To help ensure that the full body of student expertise was tapped during the review of the recommendations, the evaluators strived to prevent some students, such as outspoken males, from dominating the discussion. To examine the extent to which gender affected participation, differences between male and female levels of participation were examined in analyses of variance (ANOVA) of the questionnaire data collected at the four meetings. The independent variable in each of the four analyses was gender, and the dependent variable was students' self-reports of the extent of their personal participation in the meetings. Percentage distributions for the independent variable are shown in Table 1. Descriptive statistics for the dependent variable, by gender, and the ANOVA results are shown in Table 3.

The ANOVA results are mixed. For Cohort 1, statistically significant differences between the gender groups' levels of participation were found for both end-unit meetings. Males showed higher levels of participation than females. These differences suggest that participation was not fully equitable between gender groups. (Cohort 2 males also showed higher levels of participation than females, but these differences were small and not statistically significant.)
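To make the analysis concrete, the following is a minimal sketch, in Python, of the kind of per-meeting analysis described above: a one-way ANOVA with gender as the independent variable and the 4-point self-report of participation as the dependent variable. The ratings shown are hypothetical stand-ins used for illustration; they are not the study's data, and this is not the study's original analysis code.

# A one-way ANOVA of self-reported participation (1 = not at all ... 4 = a lot)
# by gender, run separately for each meeting, as described above.
# The rating lists below are hypothetical, for illustration only.
from scipy import stats

female_ratings = [2, 1, 2, 3, 1, 2, 2, 1, 3, 2]
male_ratings = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]

f_ratio, p_value = stats.f_oneway(female_ratings, male_ratings)
print(f"F = {f_ratio:.2f}, p = {p_value:.3f}")
# With only two groups, the one-way ANOVA F ratio equals the square of the
# independent-samples t statistic for the same comparison.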
Table 2
Student ratings of the meetings to review recommendations(a)

                                                Cohort 1                      Cohort 2
Aspect of meeting                  Statistic    1st meeting   2nd meeting    1st meeting   2nd meeting
Extent to which students           M            2.12          2.37           1.77          2.07
personally participated in         s            0.92          1.05           1.00          0.88
the meeting(b)                     SE           0.14          0.17           0.14          0.13

Extent to which students agreed    M            3.53          3.39           3.10          3.29
with modifications to the          s            0.63          0.79           0.68          0.56
evaluator's recommendations        SE           0.10          0.13           0.10          0.09
suggested during the meeting

Extent to which students thought   M            3.31          2.82           2.71          2.77
that the evaluator's               s            0.60          0.95           0.70          0.75
recommendations were modified      SE           0.09          0.15           0.10          0.11
in the meeting

(a) Students responded on a scale of 1-4, with 1 = not at all, 2 = a little, 3 = some, and 4 = a lot.
(b) In the first administration of the instrument, to Cohort 1, the questionnaire asked, "To what extent did you personally contribute to the changes made in the recommendations today?" (italics added). In the other three administrations, the questionnaire asked, "To what extent did you personally participate in the discussion here today?" (italics added).
Somewhat contradictory findings are seen in the second row of Table 2. Cohort 1 students, including females, showed strong agreement with the modifications made to the recommendations during the meetings. Furthermore, analyses of gender differences on this variable (given in the second part of Table 3) show that females' agreement with the modifications was higher than males' agreement in all meetings, with the results for one Cohort-1 meeting being statistically significant.

What conclusions might we reach about the Cohort-1 findings on self-reported levels of participation vs. the findings about the level of agreement with the modifications made to the recommendations? The first possible conclusion is that the findings for Cohort 1 raise suspicions about the extent to which within-group differences can be sufficiently diminished in stakeholder meetings. A second, more positive conclusion is that female students' agreement with the modification of recommendations made during the meetings suggests that all students were sufficiently encouraged to participate. A third conclusion is that female students' reports of their levels of participation were based on more stringent standards of participation than the male students' standards, thus leading females to report lower participation when in fact they did not participate less. (Actual gender differences in participation are unknown, because observational data on participation were not classified by gender.) A fourth conclusion is that female students' agreement with the recommendations simply reflected the effects of female socialization, which influences women to be agreeable. Together, this mix of possible conclusions raises some doubts about the equity of male-female participation but also suggests that we cannot show with sufficient certainty that gender differences affected the meetings or diminished the breadth of students' review of the evaluators' recommendations.

Table 3
Gender differences in participation and agreement with the modifications to the recommendations(a)

Extent to which students personally participated in the meeting(b):

                   Females                    Males
                   N     M      s             N     M      s             F ratio
Cohort 1
  1st meeting      23    1.87   0.87          19    2.42   0.90          4.05(c)
  2nd meeting      21    2.05   0.86          17    2.76   1.15          4.83(c)
Cohort 2
  1st meeting      23    1.70   0.82          28    1.75   1.08          0.04
  2nd meeting      21    1.95   0.80          22    2.18   0.96          0.72

Extent to which students agreed with modifications to the evaluator's recommendations suggested during the meeting:

                   Females                    Males
                   N     M      s             N     M      s             F ratio
Cohort 1
  1st meeting      24    3.67   0.56          19    3.36   0.68          2.45
  2nd meeting      21    3.62   0.67          17    3.12   0.86          4.10(c)
Cohort 2
  1st meeting      22    3.18   0.59          27    3.07   0.73          0.31
  2nd meeting      20    3.45   0.51          21    3.14   0.57          3.27

(a) Students responded on a scale of 1-4, with 1 = not at all, 2 = a little, 3 = some, and 4 = a lot.
(b) In the first administration of the instrument, to Cohort 1, the questionnaire asked, "To what extent did you personally contribute to the changes made in the recommendations today?" (italics added). In the other three administrations, the questionnaire asked, "To what extent did you personally participate in the discussion here today?" (italics added).
(c) Significant at the 0.05 level.

2.5. Effect of students' review of recommendations

Nearly all the students' comments reflected their knowledge about program operations and context. Comments were specific suggestions about activities that PBL curriculum administrators might revise or initiate. The suggestions ranged from comments about entire curriculum components to comments about specific people. The students' comments were neither novel nor controversial to the extent that the evaluators believed it was necessary to have them validated by other stakeholder groups. The comments reinforced the evaluators' beliefs in the appropriateness of most of their recommendations for curriculum revisions.
Many of the comments gave the evaluators additional impetus to emphasize their recommendations in the written reports to the school policy-making committee. The evaluators used the comments to revise these reports. They did not report the comments verbatim but used them to hone their original recommendations and, in one case, to reverse a recommendation.

The questionnaire distributed to students after they reviewed recommendations at each of the four end-unit meetings asked students to rate the extent to which the evaluators' recommendations were modified in the meetings. On a 4-point Likert-type scale, mean ratings ranged from 2.71 to 3.31, as shown in the third row of Table 2. These findings suggest that the students believed their participation affected the recommendations and help confirm that the evaluators' recommendations were modified. Students were also asked to rate the extent to which they agreed with the changes made in the recommendations. Mean responses to this question, given on a 4-point Likert-type scale, ranged from 3.10 to 3.53 (see the second row of Table 2). These findings show strong student agreement with the modifications to the recommendations, thereby suggesting that the modifications reflected the opinions of the majority of the group.
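As an aside on the statistics reported in Tables 2 and 3, the following minimal sketch (in Python, with hypothetical ratings rather than the study's data) shows how the M, s, and SE values for a 4-point questionnaire item can be computed.

# Summary statistics of the kind reported in Table 2 for a 4-point item:
# M (mean), s (sample standard deviation), and SE (standard error of the mean).
# The ratings below are hypothetical, for illustration only.
import math
import statistics

ratings = [4, 3, 3, 4, 2, 3, 4, 3, 3, 4]  # hypothetical 4-point Likert responses

m = statistics.mean(ratings)
s = statistics.stdev(ratings)      # uses the n - 1 (sample) denominator
se = s / math.sqrt(len(ratings))   # SE = s / sqrt(N)
print(f"M = {m:.2f}, s = {s:.2f}, SE = {se:.2f}")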
3. Conclusions
This study contributes to the discussion of methods for involving stakeholders in program evaluation, particularly for the purpose of reviewing evaluators' recommendations for program improvement. The evaluators in the study used their program knowledge when preparing draft recommendations for program revisions, but knew that they might have had insufficient expertise about the nuances of program history, management, administration, and operations to write well-informed recommendations. Therefore, they asked students to review the draft recommendations. The review helped the evaluators obtain information for gauging the extent to which they should modify the recommendations before writing the final versions of evaluation reports. The meetings were efficient and carefully crafted, information was collected in a nonthreatening manner, and steps were taken to ensure that no student or group of students dominated the discussion. Both male and female students perceived that participation resulted in modifications to the evaluators' recommendations for program revisions, and they agreed with the modifications that were made. Male students' self-reported levels of participation were greater than female students' levels in one of the two student cohorts, but not enough to conclude firmly that gender differences significantly affected the review of recommendations.

The methods described here add not only to studies showing how to involve stakeholders in reviewing recommendations but also to the empirical SBE and PPE literature on stakeholder participation in both the early and late stages of evaluations. By and large, this literature (at least the portion focusing on educational program evaluation) has not included carefully or thoroughly described methods for involving stakeholders. Instead, stakeholder participation has commonly been described in language such as "the stakeholder group reviewed the information [about conducting evaluations] provided by the evaluator" and "made the final decisions regarding the areas to include in the study and methods they were capable of using" (Ayers, 1987, p. 267). Only in exemplary instances has this not been the case (e.g., see Greene's (1987) discussion of methods for stakeholder participation in the early stages of evaluation). In contrast, the present study gives an in-depth description of the procedures for collecting information from program stakeholders. This description can help SBE and PPE evaluators take the steps necessary to ensure that stakeholder knowledge is fully tapped.

Another difference between the approach described in this article and the approaches typically described in the empirical SBE and PPE literature is that the former emphasizes the importance of enlisting program beneficiaries' participation. Full beneficiary participation has been emphasized little in the published literature on applying the SBE and PPE approaches to educational evaluation. For example, program beneficiaries or their proxies, such as the parents of young students, have been involved little in participatory educational evaluations (e.g., Cousins & Earl, 1995). This is in large part because the purpose of SBE and PPE is to enhance intended evaluation use by program personnel. Program beneficiaries do not contribute to the use of evaluation findings; therefore, SBE and PPE evaluators encourage beneficiaries' participation in evaluation studies less than they encourage the participation of program personnel. As Cousins and Whitmore (1998, p. 6) said, "Part of the rationale for limiting participation to stakeholders associated closely with program support and management functions is that the evaluation stands a greater chance of meeting the program and organization decision makers' time lines and needs for information". This article shows that, at least in small educational evaluations involving stakeholders for the purpose of fully tapping the range of relevant knowledge about program context, it is critical to include beneficiaries in the groups of participating stakeholders.
Students must be sufficiently articulate and knowledgeable to participate; therefore, students in lower-education settings are too young to contribute their program-beneficiary expertise. However, the methods presented here are viable for representatives of lower-education students, such as parents, and for students in secondary-, higher-, and adult-education settings.

Beneficiary participation alone, however, does not ensure a complete review of recommendations. This article demonstrates that student participation helped evaluators develop recommendations for program revisions, but program tutors also had knowledge that they might have contributed. Without systematically collecting their feedback in a manner similar to that described here for students, the review of recommendations was incomplete and potentially biased in favor of students.

Reviews of recommendations are most feasible when they can be added to an existing curriculum event. In the study described here, students reviewed recommendations at the conclusion of brief meetings regularly conducted at the end of each curriculum unit; a request to gather students in another manner would have been logistically demanding and might have been seen as making excessive demands on students' time. Indeed, logistical and time constraints prohibited the tutors from participating in a review of the recommendations. Thus, the time requirements of stakeholder involvement, as discussed in previous research, continue to be an issue (Henry & Rog, 1998).

If evaluators are to take steps to involve stakeholders in evaluation tasks, they should have a sound rationale for stakeholder involvement. In most of the evaluation literature, the rationale that evaluators have used for involving stakeholders has focused on enhancing the use of evaluation findings. In contrast, the rationale for involving stakeholders in the study presented in this article simply had to do with using stakeholders' considerable program expertise to inform evaluators' recommendations. This rationale, which is of a technical nature, might not have the cachet of the utilization rationale, which is more of a political nature, but it is sufficient for involving stakeholders, at least in the evaluation of small programs.
References
Ayers, T. D. (1987). Stakeholders as partners in evaluation: a stakeholder-collaborative approach. Evaluation and Program Planning, 10, 263-271.
Bacharach, S. B., & Lawler, E. J. (1980). Power and politics in organizations. San Francisco: Jossey-Bass.
Brandon, P. R. (1998). Stakeholder participation for the purpose of helping ensure evaluation validity: bridging the gap between collaborative and non-collaborative evaluations. American Journal of Evaluation, 19, 325-337.
Brandon, P. R., Newton, B. J., & Harman, J. W. (1993). Enhancing validity through beneficiaries' equitable involvement in identifying and prioritizing homeless children's educational problems. Evaluation and Program Planning, 16, 287-293.
Braskamp, L. A., Brandenburg, D. C., & Ory, J. C. (1987). Lessons about clients' expectations. In: J. Nowakowski, The client perspective on evaluation: new directions for program evaluation, vol. 36 (pp. 63-74). San Francisco: Jossey-Bass.
Bryk, A. (Ed.) (1983). Stakeholder-based evaluation: new directions for program evaluation, 17. San Francisco: Jossey-Bass.
Cooper, J. E., Brandon, P. R., & Lindberg, M. A. (1998). Evaluators' use of peer debriefing: three impressionist tales. Qualitative Inquiry, 4, 265-279.
Cousins, J. B., & Earl, L. M. (1992). The case for participatory evaluation. Educational Evaluation and Policy Analysis, 14, 397-418.
Cousins, J. B., & Earl, L. M. (Eds.) (1995). Participatory evaluation in education: studies in evaluation use and organizational learning. Washington, DC: Falmer.
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. In: E. Whitmore, Understanding and practicing participatory evaluation. New directions for evaluation, 80 (pp. 5-23). San Francisco: Jossey-Bass.
Dawson, J. A., & D'Amico, J. J. (1985). Involving program staff in evaluation studies: a strategy for increasing information use and enriching the data base. Evaluation Review, 9, 173-188.
Donmoyer, R. (1990). Curriculum evaluation and the negotiation of meaning. Language Arts, 67, 274-286.
Edwards, W., Guttentag, M., & Snapper, K. (1975). A decision-theoretic approach to evaluation research. In: E. L. Struening, & M. Guttentag, Handbook of evaluation research (pp. 139-181). Beverly Hills: Sage.
Fetterman, D. M., Kaftarian, S. J., & Wandersman, A. (Eds.) (1996). Empowerment evaluation: knowledge and tools for self-assessment and accountability. Thousand Oaks, CA: Sage.
Fitzpatrick, A. R. (1989). Social influences in standard setting: the effects of social interaction on group judgements. Review of Educational Research, 59, 315-328.
Glaser, R., & Chi, M. T. H. (1988). Introduction: what is it to be an expert? In: M. T. H. Chi, R. Glaser, & M. J. Farr, The nature of expertise (pp. xv-xxviii). Hillsdale, NJ: Lawrence Erlbaum.
Greene, J. C. (1987). Stakeholder participation in evaluation design: is it worth the effort? Evaluation and Program Planning, 10, 379-394.
Hendricks, M., & Handley, E. A. (1990). Improving the recommendations from evaluation studies. Evaluation and Program Planning, 13, 109-117.
Hendricks, M., & Papagiannis, M. (1990). Do's and don't's for offering effective recommendations. Evaluation Practice, 11, 121-125.
Henry, G. T., & Rog, D. J. (1998). A realist theory and analysis of utilization. In: G. T. Henry, G. Julnes, & M. M. Mark, Realist evaluation: an emerging theory in support of practice. New directions for evaluation, 78 (pp. 89-102). San Francisco: Jossey-Bass.
Huberman, M., & Cox, P. (1990). Evaluation utilization: building links between action and reflection. Studies in Educational Evaluation, 16, 157-179.
Jones, B. K., & Mitchell, N. (1990). Communicating evaluation findings: the use of a chart essay. Educational Evaluation and Policy Analysis, 12, 449-462.
Mark, M. M., & Shotland, R. L. (1985). Stakeholder-based evaluation and value judgments. Evaluation Review, 9, 605-626.
Perry, P. D., & Backus, C. A. (1995). A different perspective on empowerment in evaluation: benefits and risks to the evaluation process. Evaluation Practice, 16, 37-46.
Sadler, D. R. (1984). Evaluation and the logic of recommendations. Evaluation Review, 8, 261-268.
Scriven, M. (1991). Beyond formative and summative evaluation. In: M. W. McLaughlin, & D. C. Phillips, Evaluation and education: at quarter century. Ninetieth yearbook of the National Society for the Study of Education, Part II (pp. 19-64). Chicago: National Society for the Study of Education.
Scriven, M. (1993). Hard-won lessons in program evaluation: new directions for program evaluation, 58. San Francisco: Jossey-Bass.
Shanteau, J. (1988). Psychological characteristics and strategies of expert decision makers. Acta Psychologica, 68, 203-215.
Torres, R. T., Preskill, H. S., & Piontek, M. E. (1996). Evaluation strategies for communication and reporting: enhancing learning in organizations. Thousand Oaks, CA: Sage.
Tovar, M. (1989). Representing multiple perspectives: collaborative-democratic evaluation in distance education. The American Journal of Distance Education, 3(2), 44-56.
Trochim, W. M. K., & Linton, R. (1986). Conceptualization for planning and evaluation. Evaluation and Program Planning, 9, 289-308.