Aligning collaborative and culturally responsive evaluation approaches

Evaluation and Program Planning 35 (2012) 552–557


Karyl Askew a,*, Monifa Green Beverly b, Michelle L. Jay c

a School of Education, University of North Carolina – Chapel Hill, United States
b College of Education, University of Central Florida, United States
c Department of Educational Studies, College of Education, University of South Carolina, United States

Article history: Available online 31 December 2011

Keywords: Collaborative evaluation; Culturally responsive evaluation; Cultural competence

Abstract

The authors, three African-American women trained as collaborative evaluators, offer a comparative analysis of collaborative evaluation (O'Sullivan, 2004) and culturally responsive evaluation approaches (Frierson, Hood, & Hughes, 2002; Kirkhart & Hopson, 2010). Collaborative evaluation techniques immerse evaluators in the cultural milieu of the program, systematically engage stakeholders and integrate their program expertise throughout the evaluation, build evaluation capacity, and facilitate the co-creation of a more complex understanding of programs. However, the authors note that without explicit attention to considerations raised in culturally responsive evaluation approaches (for example, issues of race, power, and privilege), the voices and concerns of marginalized and underserved populations may be acknowledged, but not explicitly or adequately addressed. The intentional application of collaborative evaluation techniques coupled with a culturally responsive stance enhances the responsiveness, validity, and utility of evaluations, as well as the cultural competence of evaluators.

© 2012 Elsevier Ltd. All rights reserved.

1. Introduction

Collaborative evaluation (O'Sullivan, 2004; Rodríguez-Campos, 2005) is an approach to evaluation that intentionally incorporates program stakeholders into the evaluation process and views their participation as essential for generating evaluation findings that are meaningful, useful, and effective. Culturally responsive evaluation (Frierson, Hood, & Hughes, 2002; Kirkhart & Hopson, 2010) is an approach to evaluation in which the recognition of and appreciation for a program's culture, including the cultural backgrounds of program stakeholders, is viewed as fundamental to any assessment of the program's value, merit, and worth. Both approaches seek to promote evaluations that are specific, relevant, and valid for programs and their clientele by placing significant value on garnering input from program staff and stakeholders. In so doing, both approaches acknowledge the importance of valuing stakeholder knowledge as part of a larger effort to better understand a program's operation and impact. Consequently, supporters of both approaches argue that evaluations that fail to adequately address multiple aspects of the cultural context are likely to be significantly flawed (Cousins & Earl, 1995; SenGupta, Hopson, & Thompson-Robinson, 2004).

While collaborative evaluation offers clear guidelines to systematically engage stakeholders and foster mutually beneficial evaluator–stakeholder partnerships, it lacks the explicit directive to investigate issues of race, power, and privilege that affect program administration and outcomes. Conversely, while culturally responsive evaluation places central importance on the role of race, power, and privilege in understanding program dynamics, the approach may benefit from additional tools used to sustain stakeholder participation.

The similarities between collaborative evaluation and culturally responsive evaluation can be traced to their roots in "participant-based" (Fitzpatrick, Sanders, & Worthen, 2003) and "client-centered/responsive" evaluation approaches (Stufflebeam, 2001) that have evolved over the last 20 years. Indeed, in his critical review of 22 evaluation approaches in use at the beginning of the new millennium, Stufflebeam (2001) argued that such approaches belong to a larger group of "social agenda/advocacy" evaluation approaches that are ". . .strongly oriented to democratic principles of equity and fairness and employ practical procedures for involving the full range of stakeholders" (p. 63).1,2 With regard to responsive evaluations specifically, he noted that the evaluators who embrace them are charged with continually interacting with various clients and stakeholders and responding to their evaluative needs.

* Corresponding author at: 2017 J D Court, Chapel Hill, NC 27516, United States. Tel.: +1 919 360 7421; fax: +1 919 967 4040. E-mail address: [email protected] (K. Askew). doi:10.1016/j.evalprogplan.2011.12.011. © 2012 Elsevier Ltd. All rights reserved.

1 The official label he used was "client-centered studies (or responsive evaluation)."
2 The other three included "constructivist evaluation," "deliberative democratic evaluation," and "utilization-focused evaluation."


In collaborative evaluation, being responsive entails not only acknowledging and responding to clients' needs but also adapting to the differences among clients with regard to their "evaluation readiness and expertise" (O'Sullivan, 2004, p. 25). Thus, in collaborative evaluation, understanding a program and its cultural context is requisite for being responsive. However, the "culture" of the context remains relatively undefined. In contrast, adherents to culturally responsive evaluation, such as Hood (2001), argue that "to be responsive fundamentally means to attend substantively and politically to issues of culture and race in evaluation practice" (p. 32). Thus, explicit attention to these matters is at the heart of that approach.

We are African American women evaluators trained in the collaborative evaluation approach who, for reasons related to our personal/professional positionality, also embrace a culturally responsive evaluation stance. As evaluators who straddle two unique approaches to our work, we viewed a comparative analysis as an opportunity for better understanding and improving our practice. Thus, for the past several years, we have been in continual dialogue about our ongoing attempts to make sense of the commonalities and differences that exist between the two approaches that govern our professional work as evaluators. In essence, we have sought the answer to one primary question: What are the essential elements of conducting a collaborative evaluation that lend themselves to cultural responsiveness? In this article, we attempt to develop a more nuanced understanding of the relationship between these two approaches, with an explicit eye toward examining collaborative evaluation practices and their ability to promote cultural responsiveness.
We believe that our formal training in collaborative evaluation, along with our personal commitment to advocating for marginalized and underserved populations, led us to our current practice – one that lies at the junction of both approaches. We argue, therefore, that collaborative evaluation is an orientation to evaluation that can promote culturally responsive practice through the application of prescribed tools and strategies. To support our argument, we begin by providing an overview of both collaborative and culturally responsive evaluation approaches and strategies. Next, we demonstrate how we systematically compared the two approaches in order to achieve a deeper understanding of how specific tenets and practices from each interact in our work. Our analysis highlights similarities and points of departure. We close by reflecting on the potential shortcomings of a collaborative evaluation approach that lacks explicit attention to a culturally responsive stance, and we suggest that drawing on both perspectives can enhance the cultural responsiveness of evaluators and the evaluations they conduct. Ultimately, we hope that our analysis serves to deepen our readers' understanding of both approaches and provide them with a meaningful opportunity to reflect upon the potential value and utility these approaches may hold for their own practice.

2. Collaborative evaluation

A member of the family of "collaborative, participatory and empowerment" approaches to evaluation, collaborative evaluation emphasizes the importance of including stakeholders throughout the evaluation process.3 In comparison to distanced evaluations that call for little or no contact between evaluators and program practitioners, a defining feature of collaborative evaluation is its emphasis on systematic stakeholder engagement throughout the evaluation process (O'Sullivan, 2004; Rodríguez-Campos, 2005).
This explicit focus on stakeholder involvement is viewed as critical to the ultimate aim of "collaborative, participatory approaches," which is "to make evaluation findings more meaningful and empowering to stakeholders, more useful for decision makers and more effective within an organization" (Torres et al., 2000, p. 28).

Two major assumptions lie at the heart of all collaborative evaluation approaches. The first is that program stakeholders possess valuable program knowledge, understanding, and insights about their programs and the clientele they serve. Consequently, they bring vital information to bear on the conduct of any evaluation of their program. Thus, including program staff and stakeholders as partners in the evaluation process is viewed as essential to the success of the evaluation (and ultimately to the success of the program). To the degree that they are able, capable, and willing, staff and stakeholders are encouraged to fully participate in the evaluation process. A second assumption fundamental to these approaches is that stakeholder engagement and staff participation in the evaluation process is a necessary prerequisite for evaluation utilization, also viewed as essential for program success. Indeed, proponents of collaborative evaluation argue that one of its greatest benefits is the increased likelihood of crafting a credible evaluation, with results and recommendations that stakeholders, by virtue of their participation in the process, not only understand but also are prepared to address (O'Sullivan, 2004; Stufflebeam, 2001). In light of these assumptions, we understand the term collaboration in collaborative evaluation to imply that both the evaluator and program staff/stakeholders share responsibility for the evaluation: its process and its product.

3 O'Sullivan (2004) noted that in current evaluation practice, collaborative evaluation may be used interchangeably with participatory evaluation.
In articulating the essential value of collaborative evaluation approaches, Cousins (2001) said that engagement in such approaches engenders "deep levels of understanding, by evaluators and program practitioners alike, of the phenomenon under study situated within the local context" and noted that, in utilizing these approaches, "organizational learning capacity will be enhanced" (p. 116). Thus, many mutual benefits are presumed to accrue from the partnership between program practitioners/stakeholders and evaluators within collaborative evaluations. For program staff, the opportunities to learn evaluation approaches, methods, and techniques are not only viewed as valuable in and of themselves but are also viewed as vital to an organization's ability to both assess and conduct evaluations of its program (O'Sullivan, 2004). For the evaluator, the benefits of the approach include the increased likelihood that the process will yield multiple perspectives on the program; shed light on the cultural context of the program; and reveal potential hidden agendas, unintended consequences, and sensitive topics that might otherwise go unaddressed (Preskill & Torres, 1999). Further, evaluation reporting may benefit from an increase in the quality of data due to the active participation of program staff. All of these aspects are thought to contribute to a more thorough, deeply nuanced rendering of the program (O'Sullivan, 2004), and the production of a credible and actionable evaluation featuring ". . .better questions, solutions, and results" (Rodríguez-Campos, Martz, & Rincones-Gómez, 2010, p. 116).

2.1. Collaborative evaluation techniques

According to O'Sullivan (2004),4 collaborative evaluation employs four techniques that place the organization's and stakeholders' culture at the center of the evaluation effort: collaborative planning, evaluation technical assistance, evaluation capacity building, and evaluation fairs.
4 O'Sullivan's approach to collaborative evaluation, which she refers to as "evaluation voices," combines elements of cluster evaluation with collaborative models of community development, and "works well with multiple programs of similar intent" (2004, p. 28).


Collaborative evaluation planning. Promotes stakeholder involvement as the program staff frames the evaluation from the beginning and collaborates in the process. Evaluation activities can be integrated into existing program events and woven into the fiber of the program.

Technical assistance. The evaluator's expertise enhances stakeholders' ability to answer critical evaluation questions. It allows program participants to become part of the evaluation team by progressively increasing their evaluation competence.

Capacity building. Enhances the knowledge and personal expertise of program personnel to effectively understand evaluation, enabling programs to better manage and execute their evaluation responsibilities.

Evaluation fairs. Provide stakeholders with an opportunity to share their evaluation strategies and program accomplishments. This cluster networking builds evaluation expertise from within by empowering stakeholders to be the experts on their programs and to share their experiences, limitations, and successes with each other.

3. Culturally responsive evaluation

Culturally responsive evaluation has been identified as an evaluation approach, a framework, and a stance (Hopson, 2009). As an approach, it guides the manner in which evaluations are conducted: culturally responsive evaluators execute their work in ways that invite and legitimize diverse perspectives and complex understandings of the program, its clients, and the broader environmental context (Hood, 2009). As a framework, it is a flexible organization of steps and procedures (Frierson et al., 2002; Kirkhart & Hopson, 2010); for each stage of the evaluation, practitioners offer guidelines for being proactive and reactive in order to capture an accurate picture of the program, its operation, and its impact. As a stance, it requires the evaluator to raise issues of differential service delivery and access attributed to race, gender, economic status, and power (Hood, 2009; Hopson, 2009).
Derived from responsive evaluation approaches (e.g., Stake, 1983) and race-based theories (e.g., Ladson-Billings & Tate, 1995; Stanfield & Dennis, 1993), culturally responsive evaluation casts the evaluator in the role of conscientious advocate for members of traditionally underserved and underrepresented groups and communities (Hopson, 2009). The evaluator is expected to diligently interrogate the underlying assumptions and deficit perspectives held by program personnel about underrepresented populations. The aim of culturally responsive evaluation is to enhance the social, political, and economic conditions of persons from traditionally underrepresented and underserved communities by executing valid evaluations (Hood, 2009; Nelson-Barber, LaFrance, Trumbull, & Aburto, 2005). Valid evaluations minimize erroneous assessment, interpretation, and representation arising from a lack of understanding of how concepts transfer, if at all, across cultural boundaries (Kirkhart, 2005; Nelson-Barber et al., 2005).

A culturally responsive evaluation approach is particularly useful when a program's personnel and clientele include persons from different cultural backgrounds (Symonette, 2004). Evaluators use this approach to invite, highlight, and legitimize differences in the lived experiences between groups in order to examine how the multiple layers of culture intersect to influence program implementation and to explain differential access and outcomes across and within groups (Hood, 2009).

All members of the evaluation team have a responsibility to increase their awareness, knowledge, and appreciation of the culture of program participants (Hood, 2009; SenGupta et al., 2004; Symonette, 2004). To be effective as instruments of the evaluation, evaluators must interrogate their own assumptions, biases, and understandings of the cultural context in which the program operates (Symonette, 2004). Moreover, evaluators must

become students of culture and context (Hood, 2009). Differences between the evaluator and program participants in age, race, gender, socioeconomic markers (e.g., educational credentials), and language fluency can influence the success of the evaluation (Nelson-Barber et al., 2005). Scholars debate whether evaluators can achieve cultural competence in a culture other than their own (Betrand, 2006; Nelson-Barber et al., 2005; SenGupta et al., 2004; Symonette, 2004). To achieve competence in a culture, one must have shared lived experiences: a collective understanding of cultural norms, values, language, and behavioral codes (Nelson-Barber et al., 2005). Shared lived experiences provide a sense of what it means to live and exist through the eyes of another. For example, without these experiences, an African American male evaluator may experience challenges in identifying, recording, understanding, and interpreting the experiences of a Native American female program provider or participant (e.g., Hood, 2009). However, culturally responsive evaluation scholars do agree that all evaluators can take steps to be responsive to the cultural context (Hood, 2009; SenGupta et al., 2004).

3.1. Culturally responsive evaluation techniques

Frierson et al. (2002) offered guidelines for explicitly placing culture and cultural context at the center of nine stages of the evaluation. Expanding on this framework, Kirkhart and Hopson (2010) proposed additional questions and directives5 that provide a more nuanced understanding of Frierson et al.'s work. The nine steps are summarized below and specified in Table 1.

Assemble the evaluation team. Attend to the sociocultural context of the evaluand by assembling a team of evaluators who are knowledgeable of and sensitive to that context.

Engage stakeholders. From beginning to end, seek out and involve members from all stakeholder groups, attending to distributions of power.

Identify evaluation purpose and intent. Examine the social and political climate of the program and the community in which it operates, paying particular attention to the equitable distribution of resources and benefits.

Frame the right questions. Using a democratic process, assess whether the evaluation questions reflect the concerns and values of all significant stakeholders, including the end users.

Design the evaluation. Design comprehensive and appropriate evaluations that take advantage of qualitative and quantitative methods to examine and measure important cultural and demographic variables.

Select and adapt instruments. Instruments should be identified, developed, adapted, and validated for the target population, using culturally sensitive language.

Collect the data. Select data collection methods that are appropriate to and respectful of the cultural context of the program and the target population.

Analyze the data. Involve representatives from various stakeholder groups, as cultural interpreters, to review data and validate evaluators' inferences and interpretations.

Disseminate and utilize results. Distribute findings broadly, using multiple modalities, in ways that are consistent with the original purpose of the evaluation and can be understood by a wide variety of audiences.

5 Kirkhart and Hopson's (2010) questions/directives are included in the summary of each step provided within this section.


Table 1
Mapping collaborative evaluation and culturally responsive evaluation. Each culturally responsive strategy (rows) is mapped against the four collaborative evaluation techniques (columns): collaborative planning, evaluation technical assistance, evaluation capacity building, and evaluation fairs.

1. Assemble evaluation team
- Be informed by the sociocultural context of the evaluand
- Ask if the evaluation team's lived experiences are appropriate to the evaluand's context

2. Engage stakeholders
- Develop a stakeholder group that represents the population(s) served
- Include direct and indirect consumers
- Pay attention to distributions of power
- Include multiple voices in preparation and activities

3. Identify evaluation purpose and intent
- Ask how well the program is connecting with its intended consumers
- Ask if the program is operating in ways that are respectful of cultural context
- Ask if program resources are equitably distributed
- Ask who is benefiting and who is burdened by the program
- Ask what conceptual models are culturally relevant to the program

4. Frame the right questions
- Include questions that are relevant to significant stakeholders
- Determine what will be accepted as evidence
- Notice whose voices are heard in the choice of questions and evidence
- Ask if the lived experience of stakeholders is reflected in the choices

5. Design the evaluation
- Select a design appropriate to both evaluation questions and cultural context
- Seek culturally appropriate methods that combine qualitative and quantitative approaches
- Collect data at multiple points in time; extend the time frame as needed
- Construct control and comparison groups in ways that respect cultural context and values

6. Select and adapt instruments
- Establish reliability and validity of instruments for the local population
- Use norms appropriate to the group(s) involved in the program
- Make language and content of instruments culturally sensitive
- Adapt instruments as needed

7. Collect the data
- Use data collection procedures that are responsive to cultural context
- Attend to nonverbal and verbal communications
- Train data collectors in technical procedures and culture
- Use shared lived experiences to provide optimal grounding for data collection

8. Analyze the data
- Disaggregate data to examine diversity within groups
- Examine outliers, especially successful ones
- Seek cultural interpreters to capture nuances of meaning
- Use stakeholder review panels

9. Disseminate and utilize results
- Use culturally responsive communication mechanisms
- Inform a wide range of stakeholders
- Make use consistent with the purpose of the evaluation
- Consider community benefit

4. Mapping collaborative evaluation and culturally responsive evaluation

We systematically mapped each of the culturally responsive strategies outlined by Frierson et al. (2002), and further specified by Kirkhart and Hopson (2010), onto the four collaborative evaluation techniques of O'Sullivan (2004). We inspected each culturally responsive strategy and came to consensus regarding whether or not the strategy was represented in any, some, or all of the four collaborative evaluation techniques. Specifically, we asked whether the collaborative approach called for the particular culturally responsive evaluation strategy; if it did not, we viewed it as not aligning with the culturally responsive approach. We concluded that the four collaborative evaluation techniques outlined by O'Sullivan (2004) align, in part, with the nine-step strategies for achieving culturally responsive evaluations. The aspects that did not map directly onto the collaborative evaluation techniques were those culturally responsive strategies that specifically address the broader culture and cultural context of program participants and providers, particularly those from marginalized or underserved communities (see Table 1).

Taking the nine steps as our reference, we believe that collaborative evaluation techniques and culturally responsive strategies align when assembling an evaluation team and engaging stakeholders. Both instruct evaluators to examine the sociocultural context of the program and assemble an appropriate evaluation


team. Both approaches also ensure that stakeholders are engaged and their voices are heard throughout the evaluation. In addition, both approaches focus on understanding the program's context in order to grasp how best to orient the evaluation given the program's unique history and circumstances.

The steps where the collaborative techniques do not fully align with the culturally responsive evaluation strategies are the following: identifying purpose and intent, framing the right questions, designing the evaluation, selecting/adapting instruments, collecting data, analyzing data, and disseminating/utilizing results. We want to be clear that these steps may be overlooked if collaborative evaluators are not strategically operating from a culturally responsive approach. For example, with regard to purpose and intent, Kirkhart and Hopson (2010) suggested that culturally responsive evaluators ask if the program operates in ways that are respectful of cultural context. In addition, they suggest that the evaluator should ask if the program's resources are equitably distributed, who is benefiting from the program, and if these benefits are equitably distributed. Collaborative evaluators may ask about environmental factors and how the program connects with intended participants, but may not ask if the program is respectful of cultural context or if the program's resources and benefits are equitably distributed. Collaborative evaluators may create evaluation questions with significant stakeholders, but may not examine whose voices are heard in the choice of evaluation questions or may not always ask whether the lived experiences of all stakeholders are reflected in the choices made during the evaluation. Collaborative evaluators may collect qualitative and quantitative data at multiple points in time but may not determine if the language and content of the instruments are culturally sensitive. Collaborative evaluators may establish validity and reliability of instruments but may not use appropriate norms for the groups involved in the program. In addition, collaborative evaluators may partner with members of the organization and community but may not select data collectors who have shared lived experiences with program participants to ensure culturally responsive data collection. Moreover, collaborative evaluators may not use the broader cultural context as a component for interpreting disaggregated data to examine diversity, or use a cultural interpreter to capture nuances of meaning. Finally, collaborative evaluators may consider community benefit but may not use culturally responsive communication mechanisms to engage all levels of stakeholders.

When used effectively, collaborative evaluation techniques immerse evaluators in the cultural milieu, integrate community expertise, and facilitate the co-creation of more complex understanding. However, we note that without explicit attention to considerations raised by the culturally responsive framework, the voices and concerns of underrepresented populations may be acknowledged but not explicitly or adequately addressed. To address them, the collaborative evaluator must include cultural brokers, inquire about the equitable distribution of program resources, and advocate for equitable distributions of power throughout the evaluation process.

impacted by the evaluator–stakeholder relationship as a result of the evaluation process. First, we assert that while the use of collaborative evaluation techniques assists evaluators in accurately capturing the organizational context of the program, they may not necessarily attend to the broader cultural context in which the program operates (including explicit attention to the racial, ethnic, or cultural backgrounds of program clients). In short, collaborative evaluators risk missing opportunities to meaningfully engage program clients, particularly clients who may belong to marginalized or underserved populations. For example, in evaluating an academic program that sought to increase the number and percentage of low-income students enrolled in academically rigorous, higherlevel academic courses, we sought an understanding of the cultural context of the program via conversations with program staff (who developed the program) and teachers (who implemented the curriculum). However, we failed to directly engage the program clients (the students), many of whom were students of color, in the overall development and execution of the evaluation and thus missed the opportunity to have their voices and experiences influence our questions and our conclusions. Further, culturally responsive evaluators assert that spaces and places should be intentionally created, and permission explicitly sought and granted, to bring up and respond to issues of race, power, and privilege. This practice can be especially helpful with programs, like the one noted above, that explicitly target marginalized or underserved populations, but may be designed and/or administered by people who are culturally dissimilar (e.g., Jay, Eatmon, & Frierson, 2005; Zulli & Frierson, 2004). 
Second, we conclude that while the use of collaborative evaluation techniques fosters stakeholder evaluation capacity building, collaborative evaluators may overlook they ways in which the process does and should also impact their own professional development (Hood, 2009; SenGupta et al., 2004; Symonette, 2004). Adopting a culturally responsive stance can reframe the evaluation as an opportunity for evaluators to enhance their own cultural competence (e.g., SenGupta et al., 2004; Symonette, 2004). For example, in working with a college access program that served 20 ethnically and racially diverse districts, our collaborative evaluation approach ensured that we sought input from program directors regarding the different components and technical aspects of their individual programs, but we neglected to solicit information on how the different racial and socioeconomic dynamics of their communities may have impacted the implementation of their programs. Consequently, we missed out on a potential transfer of cultural knowledge from stakeholder to evaluator. A culturally responsive stance assumes the transformative potential of a bidirectional exchange between evaluator and stakeholder of content and cultural knowledge. Such a process increases the chances that program operation and impact will be better understood and recorded, while allowing for shared ownership of the evaluation process and product, the procurement of socially responsible actionable outcomes, and mutually beneficial professional relationships.

5. Discussion

Our purpose in mapping collaborative evaluation techniques (O'Sullivan, 2004) and culturally responsive evaluation strategies (Frierson et al., 2002; Kirkhart & Hopson, 2010) was to seek out similarities and distinctions between the two approaches. We found considerable overlap: both emphasize program context and stakeholder engagement in defining evaluation priorities and processes. However, there were two major points of departure between the two approaches: the specification of culture and cultural context, and the emphasis on who is, and should be, transformed by the evaluation process.

6. Conclusions

In this article, we have attempted to examine the logic behind what, in part, we understood and intuitively executed in our own practice. We have often wondered whether our culturally responsive orientation is part of what we do or simply a function of who we are, and whether our collaborative evaluation practices and processes enhance our ability to listen to the voices of the underrepresented groups we are entrusted to represent. In our practice, we have found the two approaches to be complementary and have resisted the pull to abide solely by the tenets of


one or the other. Reflecting on these questions has led us to see moments in our own evaluation practice where we failed to bring attention to cultural concerns that would have enhanced multicultural validity (Kirkhart, 2005). We assert that we are collaborative evaluators with a culturally responsive lens. This lens is fashioned by the nature of who we are and our social justice agenda; moreover, it is honed by our study of culturally responsive evaluation practices. This comparative analysis of the two evaluation approaches, together with continual reflection on our techniques, ensures that we will be intentional and strategic about being both collaborative and culturally responsive. While we have not yet arrived at completely satisfactory answers to all the questions that fascinate us about the marriage of these two approaches, we do believe that, in the context of the collaborative evaluation approach, critical reflections are a necessary and ongoing part of making the journey toward cultural competence conscious and deliberate (Symonette, 2004).

References

Betrand, T. C. (2006). Cultural competency in evaluation: A Black perspective. Unpublished dissertation, Florida State University, Tallahassee, FL.

Cousins, J. B. (2001). Do evaluators and program practitioner perspectives converge in collaborative evaluation? The Canadian Journal of Program Evaluation, 16(2), 113–133.

Cousins, J. B., & Earl, L. M. (Eds.). (1995). Participatory evaluation in education: Studies of evaluation use and organizational learning. London: Falmer.

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2003). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Upper Saddle River, NJ: Prentice Hall.

Frierson, H. T., Hood, S., & Hughes, G. B. (2002). Strategies that address culturally responsive evaluation. In J. Frechtling (Ed.), The 2002 user-friendly handbook for project evaluation. Arlington, VA: The National Science Foundation.

Hood, S. (2001). Nobody knows my name: In praise of African American evaluators who were responsive. New Directions for Evaluation, 92, 31–43.

Hood, S. (2009). Evaluation for and by Navajos: A narrative case of the irrelevance of globalization. In K. E. Ryan & J. B. Cousins (Eds.), The SAGE international handbook of educational evaluation (pp. 447–463). Thousand Oaks, CA: Sage.

Hopson, R. K. (2009). Reclaiming knowledge at the margins: Culturally responsive evaluation in the current evaluation moment. In K. E. Ryan & J. B. Cousins (Eds.), The SAGE international handbook of educational evaluation (pp. 429–446). Thousand Oaks, CA: Sage.

Jay, M., Eatmon, D., & Frierson, H. T. (2005). Cultural reflections stemming from the evaluation of an undergraduate research program. In S. Hood, R. K. Hopson, & H. Frierson (Eds.), The role of culture and cultural context in evaluation: A mandate for inclusion, the discovery of truth, and understanding in evaluative theory and practice (pp. 197–212). Greenwich, CT: Information Age.

Kirkhart, K. E. (2005). Through a cultural lens: Reflections on validity and theory in evaluation. In S. Hood, R. K. Hopson, & H. T. Frierson (Eds.), The role of culture and cultural context: A mandate for inclusion, the discovery of truth, and understanding in evaluative theory and practice (pp. 21–39). Greenwich, CT: Information Age.


Kirkhart, K. E., & Hopson, R. K. (2010). Strengthening evaluation through cultural relevance and cultural competence. Paper presented at the American Evaluation Association/Centers for Disease Control 2010 Summer Evaluation Institute.

Ladson-Billings, G., & Tate, W. F. (1995). Toward a critical race theory of education. Teachers College Record, 97, 47–68.

Nelson-Barber, S., LaFrance, J., Trumbull, E., & Aburto, S. (2005). Promoting culturally reliable and valid evaluation practice. In S. Hood, R. K. Hopson, & H. Frierson (Eds.), The role of culture and cultural context in evaluation (pp. 61–85). Greenwich, CT: Information Age.

O'Sullivan, R. G. (2004). Practicing evaluation: A collaborative approach. Thousand Oaks, CA: Sage.

Preskill, H., & Torres, R. T. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage.

Rodríguez-Campos, L. (2005). Collaborative evaluations. Tamarac, FL: Llumina Press.

Rodríguez-Campos, L., Martz, W., & Rincones-Gómez, R. (2010). Applying the model for collaborative evaluations to a multicultural seminar in a nonprofit setting. Journal of MultiDisciplinary Evaluation, 6(13), 109–117.

SenGupta, S., Hopson, R. K., & Thompson-Robinson, M. (2004). Cultural competence in evaluation: An overview. New Directions for Evaluation, 102, 5–19.

Stake, R. E. (1983). Program evaluation, particularly responsive evaluation. In G. F. Madaus, M. Scriven, & D. L. Stufflebeam (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (pp. 287–310). Norwell, MA: Kluwer-Nijhoff.

Stanfield, J. H., & Dennis, R. M. (Eds.). (1993). Race and ethnicity in research methods. Newbury Park, CA: Sage.

Stufflebeam, D. (2001). Evaluation models. New Directions for Evaluation, 89, 7–98.

Symonette, H. (2004). Walking pathways toward becoming a culturally competent evaluator: Boundaries, borderlands, and border crossings. New Directions for Evaluation, 102, 95–109.

Torres, R., Stone, S. P., Butkus, D., Hook, B., Casey, J., & Arens, S. (2000). Dialogue and reflection in a collaborative evaluation: Stakeholder and evaluator voices. New Directions for Evaluation, 85, 27–38.

Zulli, R. A., & Frierson, H. T. (2004). A focus on cultural variables in evaluating an Upward Bound program. New Directions for Evaluation, 102, 81–93.

Karyl Askew is a doctoral candidate in the School of Education at the University of North Carolina at Chapel Hill. Her concentration is in educational psychology, measurement, and evaluation. Askew investigates the role of informal science education and college-access programs in the educational and vocational choices of underserved populations. She has conducted evaluations of statewide college-access and science-outreach initiatives, as well as museum-based education programs.

Monifa Green Beverly is an Assistant Professor of Educational Research in the Department of Educational and Human Sciences at the University of Central Florida. She specializes in qualitative research methods, program evaluation, and foundations of education. Her research interests include Critical Race Theory and educational issues concerning people of color, Black education/history, college readiness programs, and cultural competence in research and evaluation.

Michelle L. Jay is an Assistant Professor of Foundations of Education in the Department of Educational Studies at the University of South Carolina, where she teaches courses on the role of race and racism in education, sociology of education, and qualitative research methods. Jay's research interests include critical race theory in education, anti-biased/anti-racist education practice, and collaborative and culturally responsive evaluation approaches. Her work has been published in New Directions for Evaluation, the International Journal of Qualitative Studies in Education, and Multicultural Perspectives.