Evaluating minority retention programs: Problems encountered and lessons learned from the Ohio science and engineering alliance


Evaluation and Program Planning 31 (2008) 277–283. doi:10.1016/j.evalprogplan.2008.03.006


Jeffry L. White (a), James W. Altschuld (b), Yi-Fang Lee (c)

(a) Ashland University, 401 College Ave., 225 Schar Hall, Ashland, OH 44805, USA; (b) The Ohio State University, 1900 Kenny Road, Columbus, OH 43212, USA; (c) National Chi Nan University, 1, University Road, Puli, Nantou 545, Taiwan

Article history: Received 5 June 2007; received in revised form 8 January 2008; accepted 31 March 2008

Abstract

The retention rates for African-Americans, Hispanics, and Native-Americans in science, technology, engineering, and mathematics (STEM) are lower than those of White or Asian college students. In response, the National Science Foundation formed statewide partnerships of universities to develop programs to address this disparity. The deliberations and experiences in evaluating one such partnership are retrospectively reviewed. Problems and issues encountered during conceptualization and implementation are presented. Lessons learned from this endeavor should generalize to similar situations and provide guidance for others new to or interested in evaluating STEM retention programs as well as those evaluating collaborative endeavors. © 2008 Elsevier Ltd. All rights reserved.

Keywords: Minority; Retention; STEM evaluation

1. Purpose

The ideas in this paper are based on a retrospective analysis of the authors' experiences as they planned and implemented an evaluation of the Ohio Science and Engineering Alliance (OSEA). Problems encountered in conceptualizing the evaluation and issues that arose in implementing it are briefly examined. Although seasoned evaluators (in one case with more than 35 years of evaluation practice), we were unfamiliar with the nature of minority retention programs and relatively new to work in a very complex consortium environment. This led us to approach the conceptualization of variables and strategies in a somewhat unfettered manner, using intuition and the literature as guides. Factors observed during the evaluation that contributed to its success should be useful to others engaged in evaluating consortium-type endeavors or those just entering into the evaluation of retention programs for underrepresented minorities in STEM.

2. Setting

The authors were asked to evaluate the OSEA, a consortium of 15 public and private institutions. Its goal is to increase the recruitment and retention of underrepresented minorities (URM) earning baccalaureate degrees in science, technology, engineering, and mathematics (STEM) (National Science Foundation, 2006; Ohio Science and Engineering Alliance, 2003). A longer-range goal was that more such students would pursue graduate study in STEM. Beyond these outcomes, what OSEA might do had to evolve as the consortium began to develop.

The Alliance was created in recognition that URM departure rates in Ohio were similar to those observed nationwide (Hayes, 2002). Simply stated, the number earning degrees in STEM is low compared to White and Asian students (Armstrong & Thompson, 2003; Astin, Parrott, Korn, & Sax, 1997; Hayes, 2003; Jonides, 1995; Marguerite, 2000; Mashburn, 2000; Morrison, 1995; Swail, Redd, & Perna, 2003; Wyer, 2003). Since 2003, OSEA has conducted statewide student research forums, fostered a student exchange program across universities, and sponsored internships and faculty training workshops. A governing board of university administrators meets annually and a faculty steering committee meets three to four times per year; the latter group is responsible for planning most of the OSEA programs.

What the consortium would specifically do across the 15 institutions had to be negotiated. Many of the universities already had programs in place for minority students in STEM, and while some of them were similar, enough differences existed to make evaluating such entities in so many locations prohibitively expensive. Aside from these programs, there was very little in the way of common efforts across the schools.


There were limited resources for evaluation. Via the Louis Stokes Alliances for Minority Participation (LSAMP), the Alliance received $750,000 per year from the National Science Foundation for 5 years for the 15 OSEA members (National Science Foundation, 2005; Ohio Science and Engineering Alliance, 2003). Less than $30,000 (approximately 4%) was available annually for evaluation purposes. To leverage the available resources, a graduate student volunteer joined the evaluation team (one faculty member at 10% time and one part-time research assistant) in exchange for the ability to conduct dissertation research.

3. Problems and issues in conceptualization

The team, as noted, was experienced but not knowledgeable about the complexities of minority retention issues in STEM. Because of this, it began by reviewing the evaluations of the 28 LSAMP projects that preceded the one in Ohio, along with related technical publications (American Association for the Advancement of Science, 2001; Nagda, Gregerman, Jonides, von Hipple, & Lerner, 1998; National Academy of Science, 1987; National Science Board, 2002, 2003; National Science Foundation, 2003; Ohio Board of Regents, 2004; Ohio Science and Engineering Alliance, 2003, 2004, 2005; Smith, 2000; Unrau, 1999; US Department of Education, 2004a; US Office of Science and Technology Policy, 2000). The impression that emerged was that the focus was primarily on head counts of participation and graduation rates of minorities in STEM fields. It was also interesting to the authors that no experimental evaluation was evident in the LSAMP literature, a stark contrast to the federal government's increasing reliance on experimental design (US Department of Education, 2004b). The team then moved to the general retention literature, which led to the first point in Table 1.

Table 1. Problems and issues in conceptualizing STEM evaluations

Problem: Complex phenomenon
  Issue: Admission and graduation rates are not helpful in understanding why URM prematurely depart from STEM
  Comments: Start with technical publications (LSAMP reports), then expand to the general retention literature and to research and evaluation studies; concept mapping is helpful

Problem: Approach considerations
  Issue: Research and evaluation approaches are useful for thinking about this specific evaluation
  Comments: Evaluations were summative, formative, needs assessment, or cost–benefit analysis; research seeks to confirm or support theoretical foundations

Problem: Choice of guiding theory
  Issue: Two main assumptions guide most retention studies
  Comments: Developmental or environmental models, and/or a combination of both

Problem: Methodology considerations
  Issue: Predominantly quantitative with some mixed methods
  Comments: Literature indicates a need for more qualitative data

Problem: Data collection and sampling considerations
  Issue: Evaluators use multiple data collection techniques and a single-group perspective
  Comments: Questionnaires and database analysis are popular; evaluations need other groups' perspectives, added in part from research

Problem: Variables and significant findings
  Issue: Numerous variables and retention factors
  Comments: Balance between significant factors and retention program activities

Problem: Be attentive to critical variables
  Issue: Variables important to URM retention in STEM are missing from evaluation and research studies
  Comments: With numerous possible variables, one must decide which are most important for the evaluation

3.1. Complexity of the phenomenon

While numbers are useful, they do not provide a detailed understanding of the reasons underlying early departure. It became clear that what happens in retention programs is neither simple nor easy to characterize. For the population of interest here, the relevant factors may include adjustment to college life, integration into the STEM fields, feelings of self-worth and competence, being a member of a minority group, faculty and peer relationships, academic guidance, cultural issues, etc. (Braxton, 2000; Earwaker, 1992; Haselgrove, 1992; Taylor & Miller, 2002).

With this in mind, a more concentrated search of research and evaluation studies in the field for the period 1988–2003 was undertaken using keywords such as retention, persistence, STEM, and minorities. This effort yielded approximately 100 sources from ERIC, Education Abstracts, Higher Education Abstracts, Sociological Abstracts, Social Science Citation Index, and Dissertation Abstracts. Based on criteria such as descriptions of the methodology and an emphasis on minority retention in STEM, the sources were pared down to 27 research and evaluation studies. These were examined in terms of methods used, samples, variables, significant findings, and other aspects of their implementation. Commonalities helpful for the OSEA evaluation were noted, as was a significant degree of variability within and between the research and evaluation studies, which added a challenging feature to an already complex problem. What resulted from our examination is given below.

3.2. Approach considerations

The evaluations differed from research in a number of ways. Evaluators examined programs in terms of effectiveness (summative), strengths and weaknesses during implementation (formative), needs assessment, or cost–benefit analysis. Summative evaluation appeared most often, with formative and cost–benefit analysis being less prominent. The evaluations had two purposes: providing evidence about what retention programs were doing and the degree to which student outcomes were improved. Researchers, on the other hand, tended more toward verification of or contribution to the theoretical foundations of retention. From that perspective, we reviewed theories to identify possible variables for the OSEA evaluation. This led to the second entry in Table 1: What theoretical assumption(s) should we use for the project?

3.3. Choice of guiding theory

Two main categories of epistemology guide most retention studies (Upcraft & Gardner, 1989). From a developmental stance, retention is viewed in terms of students adjusting to change and making progress toward their educational goals (Chickering, 1969; Sanford, 1967). The second, person–environment, perspective looks more at integration into and involvement in campus life (Astin, 1975, 1984; Bronfenbrenner, 1979; Tinto, 1975, 1987). Success in the latter comes from the ability to assimilate into different aspects of the institution and its social milieu.

The two epistemologies share, to a degree, an emphasis on interaction with faculty, peers, and others, which seems particularly relevant for minority retention and the evaluation of programs designed to decrease attrition. Students who were better at forming new relationships and social networks while making academic progress were more likely to persist. Thus, candidate variables for the OSEA project might be student–student and student–faculty interactions. The literature was also informative in terms of the methodologies used in research and evaluation endeavors.

3.4. Methodology considerations

The methodology differed somewhat depending on whether the work was evaluation or research. Most research and evaluation studies were quantitative or employed mixed-methods designs. Researchers were more likely to opt for a single method and engaged in more descriptive investigations. Even though quantitative approaches were prominent in research and evaluation, some studies indicated the need for more qualitative data.

Table 2. Sample of retention variables studied (variables and their attributes)

Personal: Interest, motivation, satisfaction, attitude, perceived changes/perceptions, personal goals, etc.
Academic: Pre-college academic achievement (GPA, SAT, ACT), college academic achievement (GPA, quality credit averages, Stanford Test of Academic Skills), multiple study skills, habits, class absenteeism, etc.
Institutional: Supportive climate, commitment, environment, quality and type of institution (public or private), faculty contact, cooperation among students, clear expectations, communicating high faculty expectations, respect for diverse talent, student–faculty ratio, etc.
Cultural: Pre-college values, cultural identity, ethnic/cultural values, family support, etc.
Sociological: Social integration, support/climate, extracurricular activities, role models, living in university housing, etc.
Psychological: Alienation, feelings about college, self-esteem, self-beliefs, comfort, etc.
Economic: Financial aid, program cost, part-time employment, etc.
Background: Demography (ethnicity, gender, age, SES, parents' educational level, income), credit completion ratio, participation (related to student working), retention status (dismissal, dropout, or persisting with/without probation), etc.
Others: Program involvement, organizational fit, etc.

3.5. Data collection and sampling considerations

Evaluators were inclined to use multiple data gathering techniques, whereas researchers were prone to employ only one. Written questionnaires and database analyses were popular across both types of activities, but researchers used more face-to-face interviews than evaluators. Sampling was similar: single samples of students, faculty, or administrators were usually involved. Researchers were more expansive, including other groups such as program leaders, experts in the field, and institutional representatives.

3.6. Variables and significant findings

The search for potential variables for the OSEA evaluation proved to be quite interesting. Table 2 shows some of the variables that have been studied (1988–2003). They include personality traits, institutional characteristics, psychological factors, cultural elements, high school preparation, socialization skills, financial aid, interest in STEM, etc. Variables for evaluations ranged from intrinsic factors such as motivation and personal or professional goals to general satisfaction with the university and perceived institutional support. Academic support was the most frequently investigated, followed by mentoring. Another recurring theme was the availability of retention programs; most evaluations dealt with an array of services on university campuses. Research endeavors tended to look at retention activities in terms of academic performance, social integration, psychosocial development, and attrition rates.

Additionally, the results were equivocal. The first-year bridge program, for example, was found to be advantageous in one case (Takahashi, 1991) but not in another (Prather, 1996). The difference could have been due to the nature of participants, campus environments, program structures, means of implementation, and so forth.


3.7. Missing variables

While there were many variables to choose from for the OSEA evaluation, we observed a gap in prior evaluations. Noticeably absent from the literature were cultural values (Mertens & Hopson, 2006; Seymour & Hewitt, 1997). Cultural values have not been extensively assessed (Maras, 2000; Seymour & Hewitt, 1997), even though well-known writers in the field see them as influencing attrition (Astin, 1993; Ibarra, 1999; Kuh, 2001; Kuh & Love, 2000; Kuh, Schuh, Whitt, & Associates, 1991; Pascarella & Terenzini, 1991). Mertens and Hopson (2006) contend that culture warrants consideration in STEM program evaluation. Toward that end, the OSEA evaluation included a set of cultural items (White, Altschuld, & Lee, 2006b) in conjunction with traditional ones such as academic preparation, communication skills, coping abilities, interaction skills, financial aid, etc.

3.8. Summary of main features derived from the literature

First, quantitative methods (mostly surveys) were more prevalent, with lesser use of experimental designs. The lack of comparison groups could explain why there was limited evidence of retention program effectiveness. Second, evaluations utilized multiple data collection techniques more often (10 studies) than did the research studies (3). Third, while both tended to rely on single samples, researchers were somewhat more open in regard to sampling. Fourth, there was no prevailing pattern to the variables investigated; they spread across the landscape from financial aid, high school preparation, and early role models to social integration and interest in STEM (Carter, 2002; Fletcher, 1998; Marguerite, 2000; Mitchell, 1993; Rodriguez, 1993; Seymour & Hewitt, 1997; Thompkins, 1991; Wright, 1989). Finally, the role culture plays in the persistence of minorities in STEM was acknowledged in general but seldom assessed directly.

4. Problems and issues in implementation

After selecting variables, we were confronted with other problems that had to be addressed before implementing the evaluation. The first one (Table 3) was associated with gaining entrée to the subjects across a number of different campuses.

4.1. Access to samples

4.1.1. Creating the sampling frame

Concerns emerged early in the OSEA evaluation about access to URM students in STEM at 15 institutions across Ohio. This could be a problem in other LSAMP evaluations that need to construct a database across multiple campuses. Preliminary discussions were initiated with the institutional representatives approximately 12 months before starting the evaluation to address access and privacy issues. These efforts paid off and led to a sampling frame of more than 4000 students 2 months before the study began. With the exception of one school (citing local conditions), all participated. Without early contact and negotiation, much more time would have been added to the process.

4.1.2. Institutional approval (IRB) for the evaluation

With multiple institutions there will be difficulties with human subject protection and Institutional Review Board (IRB) approval. Approval was initially obtained from the lead institution as a way to gain entry to the other 14 schools (all but one of which then agreed to participate). The project would have been in jeopardy if the authors had been required to get authorization from each university separately.
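To make the frame-building tasks in Section 4.1.1 (and the CIP-based definition of STEM discussed later in Section 4.1.3) more concrete, the following is a minimal sketch of how per-campus extracts returned on a common template might be merged, restricted to NSF-designated STEM majors, and de-duplicated. It is illustrative only and not the OSEA team's actual procedure; the file names, column template, and CIP codes are hypothetical.

```python
# A minimal, illustrative sketch (not the OSEA procedure) of combining per-campus
# student extracts into one sampling frame. All names below are hypothetical.
import pandas as pd

TEMPLATE_COLUMNS = ["student_id", "campus", "last_name", "first_name",
                    "email", "cip_code", "class_rank"]  # assumed request template

def build_sampling_frame(extract_paths, stem_cip_codes):
    """Merge campus extracts, keep only STEM CIP codes, and drop duplicate records."""
    frames = []
    for path in extract_paths:
        df = pd.read_csv(path, dtype=str)
        # Force every campus file onto the same template; missing fields become NaN.
        frames.append(df.reindex(columns=TEMPLATE_COLUMNS))
    frame = pd.concat(frames, ignore_index=True)
    # Keep only students majoring in NSF-designated STEM disciplines.
    frame = frame[frame["cip_code"].isin(stem_cip_codes)]
    # A student listed twice by the same campus should appear only once.
    return frame.drop_duplicates(subset=["student_id", "campus"])

# Hypothetical usage with two campus files and two CIP codes:
# frame = build_sampling_frame(["campus_a.csv", "campus_b.csv"], {"14.0801", "26.0101"})
```

A shared template of requested data fields, which the Alliance did provide to each university, is what makes the merge step trivial and is what reduced institutional response time.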

Table 3. Problems and issues in implementing STEM evaluations

Problem: Access to samples
  Issue: Student sampling frame. Comment: Requires working with (merging) multiple databases
  Issue: Human subjects (IRB) approval. Comment: Privacy issues govern an institution's approval
  Issue: Defining STEM. Comment: More than 300 academic majors; CIP codes are updated periodically

Problem: Timing
  Issue: Sample shelf-life. Comment: Very high student attrition; start the study as soon as possible

Problem: Multiple population considerations
  Issue: Student participation. Comment: Need for incentives and support at the institutional level
  Issue: Faculty/program administrator sampling frame. Comment: Labor intensive to create a database of faculty, staff, and program administrators

Problem: Survey administration considerations
  Issue: Electronic communication issues. Comment: Inactive e-mail addresses, SPAM filters, returned e-mails, etc.

Problem: Considerations when working with a governing board for a consortium
  Issue: Need more time. Comment: Pace is slower across a number of institutions with multiple decision makers
  Issue: Anticipate frequent personnel changes. Comment: Faculty/staff are volunteers; newcomers must be brought up to speed on evaluation activities
  Issue: There will be differential support. Comment: Not all are willing to participate, within the steering committee and within the institutions
  Issue: Background of the evaluation team. Comment: Different (non-STEM) perspectives require time to build chemistry and trust
  Issue: Full participation is not practical. Comment: Evaluators cannot open up all evaluation activities; work with the project director and selectively limit participation

4.1.3. Defining STEM

Another subtle issue in sampling is determining which students to include, given how the academic programs that count as STEM are defined. More than 300 entries in the Classification of Instructional Programs (CIP) constitute STEM as specified by the National Science Foundation (WebAMP, n.d.). This can be problematic because the CIP is periodically updated and changed. To ensure a comprehensive sampling frame, each university was provided a spreadsheet of CIP codes to identify URM students in those disciplines and a template with the specific data fields requested for the evaluation. This reduced the time it took to respond to our initial request.

4.2. Timing

There were difficulties with timing that will exist in other STEM consortia evaluations. Deciding when to send out initial communications and the survey was complicated by different institutional calendars (semesters, quarters, trimesters). By starting the evaluation shortly after students returned from winter break, conflicts with midterm examinations were avoided.

4.2.1. Shelf-life of the sample

Related to timing, 60% of URM change majors and 35% drop out soon after beginning their academic careers (Seymour, 1992; Seymour & Hewitt, 1997). The rapidity of attrition will significantly affect a study designed to understand why students are leaving in the first place. Since the sampling frame was compiled somewhat early in the academic year, it will contain inaccuracies due to a high dropout rate or switching to a non-STEM major (George, Neale, Van Horne, & Malcom, 2001; Hayes, 2003; Levin & Wycokoff, 1991; Morrison, 1995; Stith & Russell, 1994). While it was important to get the survey out as fast as possible, the earliest it could be distributed was at the start of the winter term.

4.3. Multiple population considerations

The decision to change majors, drop out, or persist in STEM is one most minority students do not make alone; it typically involves the input of family, friends, faculty, etc. (Seymour & Hewitt, 1997). Thus it was imperative to include other perspectives in the OSEA evaluation. Yet, of the studies located, most focused on one group rather than multiple ones. This may be partly due to difficulties in obtaining the participation of some constituencies.

4.3.1. Student willingness to participate

We explored the willingness of students to respond during focus group interviews (White, Altschuld, & Lee, 2006a). Many commented that the demands of STEM preclude having time to participate in surveys. To counteract this, faculty, staff, and program administrators were asked to make announcements, post notices, and send follow-up e-mails. A drawing for a gift certificate was used to enhance response rates. Serious thought should be given to the type of incentive: we had planned to use a bookstore gift certificate, but this changed after students suggested a gift card from a national retailer.

4.3.2. Creating the faculty/program administrator sampling frame

Because faculty are critical actors in the retention environment (Astin, 1993), obtaining the opinions of faculty and program administrators was an important part of the OSEA evaluation. Student interaction with faculty is significant for retention of URM (Stith & Russell, 1994) and may have more of an impact on the "college experience than any other involvement variable" (Astin, 1977, p. 223).

Besides teaching, faculty mentor, recruit, provide research opportunities, and serve as role models. Since no sampling frame existed for faculty, it had to be compiled manually from university Web sites, workshop participation, and departmental listings, a time-consuming process. Once completed, attention switched to program administrators, where the process was equally labor intensive. Contacting this group required snowball-sampling techniques: faculty referrals, referrals from campus officials, and information from university Web sites. The work associated with developing lists of appropriate samples is a major endeavor in STEM evaluations. But it is only one aspect of a complicated process of administering electronic surveys to all of the pertinent groups.

4.4. Survey administration considerations

4.4.1. Electronic communication

Electronic surveys were used. Typically they produce return rates ranging between 14% and 30% (Dillman, 2004). Besides local contacts to prompt student returns, the authors sent follow-up e-mails at 2-week intervals. Direct mailings were necessary when an e-mail was undeliverable (from the sample of 1300 students, nearly 200 were in this category). These cases required extra time to check for ISP excess capacity, expired delivery time, invalid addresses, etc. before resending or contacting students by regular mail. (A small bookkeeping sketch illustrating this follow-up process appears at the end of Section 4.)

One way to reduce undeliverable e-mails is to make sure that the lists are accurate and up to date. Besides the university e-mail addresses, the authors requested all electronic contact information, such as home e-mail accounts. For some students this was their primary connection (they only periodically check school addresses). The important thing is not to rely solely on the e-mail address provided by the institution.

SPAM filters also come into play. They intercept correspondence with subject headings such as "survey," "your comments needed," "assistance with study," etc. Phrases that would trigger the filters were avoided. It is a good idea to consult with IT specialists to reduce the pitfalls that can arise. After ensuring the electronic communication issues were satisfied, the authors shifted their focus to the context in which the evaluation would take place.

4.5. Contextual considerations

4.5.1. Evaluation in a consortium environment requires more time

While a consortium allows entrée to faculty and students in many institutions, it is not without costs. The pace of work will be much slower and will affect the timetable of evaluation activities. This primarily comes from the governance framework of a consortium and undoubtedly will generalize to other LSAMP projects. The problems here fall into several categories.

4.5.2. Frequent changes in governance personnel

Because faculty members on the OSEA steering committee are volunteers, its composition tends to vary. This can result in delays, as newcomers need time to become familiar with the evaluation activities. Turnover can also affect support for the evaluation.

4.5.3. Differential evaluation support

Not every institution may be open to evaluation. Even when a steering committee member is supportive, it does not mean the Registrar's Office will be cooperative. Moreover, some committee members may not perceive themselves as equals. Faculty from small universities might feel they have less status than those from major institutions. This can translate into variability in enthusiasm within the committee as well as across institutions.


4.5.4. Background of the evaluators

A seemingly benign consideration is the academic orientation the evaluators bring to the project. The authors' non-STEM background required more time to build trust because of differences between the social and physical sciences. This observation may be more pronounced in consortia like the one in Ohio. The evaluators needed to develop a working relationship with the steering committee; chemistry with the group and with individual members had to form before trust and respect could emerge. One way to ease potential conflicts is to use terminology in a way that scientists understand. Early in the OSEA project, the authors sensed they would do better if they did not 'speak' in social science jargon when describing evaluation activities. Instead of referring to reliability, an analogy was made to the calibration of scientific equipment. The point is not to assume that the orientation of one group (the evaluators) is necessarily that of another (the steering committee).

4.5.5. Full participation of the governing body is not practical

Collegiality is desirable, but full participation in the evaluation process can slow it to a crawl. Rather than reviewing every detail of the evaluation, we worked primarily with the program director while capitalizing on the steering committee in other ways, such as data interpretation or suggested implementation strategies. Another tactic is to use a small number of schools to develop and field test the instrument before going to full distribution. By limiting participation in some aspects and opening it up in others, evaluators can maximize the advantages of a consortium while minimizing some of the potential delays. The key is to be patient, choose the right times for focused suggestions and comments, and stay the course.
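As referenced in Section 4.4.1, the following is a minimal bookkeeping sketch, under assumed field names, of tracking undeliverable addresses, scheduling 2-week follow-up reminders, and screening subject lines for phrases likely to trip SPAM filters. It illustrates the kind of record-keeping involved rather than the system actually used for the OSEA survey; the trigger words, addresses, and dates are invented.

```python
# Illustrative only: a simple tracker for electronic survey administration
# (Section 4.4.1). Field names, trigger words, addresses, and dates are assumed,
# not taken from the OSEA project records.
from dataclasses import dataclass, field
from datetime import date, timedelta

SPAM_TRIGGERS = {"survey", "your comments needed", "assistance with study"}  # assumed list

@dataclass
class Invitee:
    email: str
    alternate_email: str = ""        # home account requested in addition to the campus address
    undeliverable: bool = False      # flagged when the campus address bounces
    responded: bool = False
    last_contact: date = field(default_factory=date.today)

def needs_follow_up(person: Invitee, today: date, interval_days: int = 14) -> bool:
    """Follow-up e-mails went out at roughly 2-week intervals to non-respondents."""
    return (not person.responded) and (today - person.last_contact) >= timedelta(days=interval_days)

def contact_address(person: Invitee) -> str:
    """Fall back to an alternate address, then regular mail, when the campus e-mail bounces."""
    if not person.undeliverable:
        return person.email
    return person.alternate_email or "SEND_BY_REGULAR_MAIL"

def subject_is_safe(subject: str) -> bool:
    """Avoid subject lines containing phrases known to trip SPAM filters."""
    lowered = subject.lower()
    return not any(trigger in lowered for trigger in SPAM_TRIGGERS)

# Example: a bounced campus address with no response after three weeks.
student = Invitee("jdoe@campus.example.edu", alternate_email="jdoe@home.example.com",
                  undeliverable=True, last_contact=date(2005, 1, 10))
print(needs_follow_up(student, today=date(2005, 2, 1)))           # True
print(contact_address(student))                                   # jdoe@home.example.com
print(subject_is_safe("OSEA student experience questionnaire"))   # True
```

Even a simple tracker like this makes it easy to see which of the nearly 200 bounced addresses, out of a frame of roughly 1300 students, still need a regular-mail contact.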

5. Lessons learned

The authors' experiences can benefit others involved in evaluating consortium work or STEM minority retention alliances. Consortia are not limited to STEM or minority retention initiatives, and the lessons learned presented in Table 4 offer guidance for evaluating LSAMP-STEM retention programs but are also somewhat generalizable.

5.1. Utilize a comprehensive approach

First, think broadly when starting the evaluation. Because there are so many possible variables and factors, we developed concept maps as a useful tool for gaining an overview of student retention. Next, we examined retention theories and cultural issues for their application to minority students in STEM. Lastly, we looked at research and evaluation studies for methods, data collection strategies, sampling techniques, variables, and results. The comprehensive view helped in seeing the overall picture before narrowing in on what we would actually evaluate.

The issues were multifaceted and intimidating. This should not dissuade but encourage others. Rather than an impediment, complexity was an opportunity to scan the landscape for contrasts, comparisons, and glaring omissions that could be used in the evaluation. This approach was as illuminating to the steering committee as to the authors. When the array of possible variables was presented to them, the silence was deafening as they sought to grasp the implications for evaluation. Though they had worked on many STEM efforts before, they had never sensed nor fully appreciated the scope of potential variables in Table 2.


Table 4. Lessons learned in conceptualization and implementation

Lesson: Comprehensive approach
  Implications: Use concept maps, guiding theory, and key variables, and review the research and evaluation literature for ideas; interview minority retention experts or others experienced in STEM evaluations

Lesson: Expanded time requirements
  Implications: Accept a slower work pace and anticipate more time for evaluation activities

Lesson: Pay attention to group dynamics
  Implications: Be attentive to within- and between-group relationship issues

Lesson: Learn the subtle nuances of what to avoid in language
  Implications: Avoid social science research and evaluation jargon whenever possible

Lesson: Numbers don't tell the whole story
  Implications: Dig deeper; the issues are multi-faceted and complex

As they grappled with the issues facing the evaluation, a sense of shared experiences (teamwork and culture) emerged across the committee and the evaluation team. Our perception was that this sense was critically important for evaluation in a consortium framework.

5.2. Working with a consortium requires much more time

The tempo is more deliberate and may often appear mired as a result of changes in steering committee personnel or shifting support for the evaluation.

5.3. Working with a consortium requires attention to the group dynamic

Full participation of the steering committee in the evaluation was neither possible nor desirable. Evaluators should pick and choose when and where to include everyone.

5.4. Working in STEM requires learning subtle nuances of how to describe evaluation procedures and findings for an audience from the sciences

Simply put, do not use social science terms and vernacular; rather, translate them into the language of the audience. This goes a long way toward building relationships.

5.5. Enrollment and graduation numbers do not tell the whole story

Numbers are valuable, but they do not tell the whole story of why minorities leave (or stay in) STEM fields. It is necessary to dig more deeply for the subtle features of activities designed to improve retention. This was evident in a subsequent investigation we conducted of the differences in survey responses provided by students and faculty; there was a sharp contrast in views (Lee, Altschuld, & White, 2007). By going beneath the surface, we were able to explore why the groups held varying opinions.

References

American Association for the Advancement of Science. (2001). In pursuit of a diverse science, technology, engineering, and mathematics workforce: Recommended research priorities to enhance participation by underrepresented minorities. Washington, DC: National Science Foundation.
Armstrong, E., & Thompson, K. (2003). Strategies for increasing minorities in the sciences: A University of Maryland, College Park, model. Journal of Women and Minorities in Science and Engineering, 9(2), 40–50.
Astin, A. W. (1975). Preventing students from dropping out. San Francisco: Jossey-Bass.

Astin, A. W. (1977). Four critical years: Effects of college on beliefs, attitudes, and knowledge. San Francisco: Jossey-Bass.
Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Personnel, 25, 297–308.
Astin, A. W. (1993). What matters in college: Four critical years revisited. San Francisco: Jossey-Bass.
Astin, A. W., Parrott, S. A., Korn, W. S., & Sax, L. J. (1997). The American freshman: Thirty-year trends. Los Angeles: Higher Education Research Institute, University of California at Los Angeles.
Braxton, J. M. (Ed.). (2000). Reworking the student departure puzzle. Nashville, TN: Vanderbilt University Press.
Bronfenbrenner, U. (1979). The American family: Current perspectives. Cambridge, MA: Harvard University Press.
Carter, V. M. (2002). Existence and persistence: The effects of institutional characteristics on persistence and graduation rates at four-year colleges and universities. Unpublished doctoral dissertation, Georgia State University, Atlanta.
Chickering, A. W. (1969). Education and identity. San Francisco: Jossey-Bass.
Dillman, D. (2004). Internet surveys: Back to the future. The Evaluation Exchange, 3, 6–7.
Earwaker, J. (1992). Helping and supporting students: Rethinking the issues. Society for Research into Higher Education & Open University Press.
Fletcher, J. T. (1998). A study of the factors affecting advancement and graduation for engineering students. Dissertation Abstracts International, 59(11), 4076 (UMI No. 9912921).
George, Y. S., Neale, D. S., Van Horne, V., & Malcom, S. M. (2001). In pursuit of a diverse science, technology, engineering, and mathematics workforce: Recommended research priorities to enhance participation by underrepresented minorities. Washington, DC: American Association for the Advancement of Science.
Haselgrove, S. (1992). Why the student experience matters. In S. Haselgrove (Ed.), The student experience. Society for Research into Higher Education & Open University Press.
Hayes, R. Q. (2002, June). 2001–02 CSRDE report: The retention and graduation rates in 360 colleges and universities. Norman, OK: Center for Institutional Data Exchange and Analysis, University of Oklahoma.
Hayes, R. Q. (2003, July). Executive summary: Retention and graduation rates of 1995–2001 freshmen cohorts entering in science, technology, engineering and mathematics majors in 211 colleges and universities. Norman, OK: Center for Institutional Data Exchange and Analysis, University of Oklahoma.
Ibarra, R. A. (1999). Multicontextuality: A new perspective on minority underrepresentation in SEM academic fields. Making Strides (American Association for the Advancement of Science), 1(3), 1–9.
Jonides, J. (1995). Evaluation and dissemination of an undergraduate program to improve retention of at-risk students. ERIC Document Reproduction Service No. ED414841.
Kuh, G. D. (2001). Organizational culture and student persistence: Prospects and puzzles. Journal of College Student Retention: Research, Theory and Practice, 3(1), 23–39.
Kuh, G. D., & Love, P. G. (2000). A cultural perspective on student departure. In J. M. Braxton (Ed.), Reworking the student departure puzzle (pp. 196–212). Nashville, TN: Vanderbilt University Press.
Kuh, G. D., Schuh, J. H., Whitt, E. J., & Associates. (1991). Involving colleges: Successful approaches to fostering student learning and development outside the classroom. San Francisco: Jossey-Bass.
Lee, Y.-F., Altschuld, J. W., & White, J. L. (2007). Effects of the participation of multiple stakeholders in identifying and interpreting perceived needs. Evaluation and Program Planning, 30(1), 1–9.
Levin, J., & Wycokoff, J. (1991). The responsibility for retention: Perceptions of students and university personnel. Education, 8, 461–469.
Maras, J. P. (2000). The perceived effectiveness of climate sensitivity and retention infrastructures designed to improve domestic minority student retention at a predominately white, mid-west, comprehensive, public university. Dissertation Abstracts International, 61(07), 2539 (UMI No. 9978845).
Marguerite, B. H. (2000). Pathways to success: Affirming opportunities for science, mathematics, and engineering majors. Journal of Negro Education, 69(1–2), 92–111.
Mashburn, A. J. (2000). A psychological process of student dropout. Journal of College Student Retention, 2, 173–190.
Mertens, D. M., & Hopson, R. K. (2006). Advancing evaluation of STEM efforts through attention to diversity and culture. New Directions in Evaluation, 109, 35–51.
Mitchell, G. D. (1993). Factors related to minority student enrollment and retention in the College of Agriculture and School of Natural Resources at the Ohio State University. Dissertation Abstracts International, 54(08), 2851 (UMI No. 9401322).
Morrison, C. (1995). Retention of minority students in engineering: Institutional variability and success. NACME Research Letter (National Action Council for Minorities in Engineering, New York), 5, 3–23.
Nagda, B. A., Gregerman, S. R., Jonides, J., von Hipple, W., & Lerner, J. S. (1998). Undergraduate student–faculty research partnerships affect student retention. The Review of Higher Education, 22, 55–72.
National Academy of Science. (1987). Nurturing science and engineering talent: A discussion paper. Washington, DC: The Government–Industry Research Roundtable.
National Science Board. (2002). Science and engineering indicators: 2002. Report No. NSB-02-1, Arlington, VA. Retrieved December 4, 2003 from http://www.nsf.gov/sbe/srs/seind02/toc.htm.


National Science Board. (2003). The science and engineering workforce: Realizing America's potential. Retrieved July 22, 2004 from http://www.nsf.gov/nsb/documents/2003/nsb0369/nsb0369_5.pdf.
National Science Foundation. (2003). Women, minorities and persons with disabilities in science and engineering: 2002. Retrieved January 14, 2004 from http://www.nsf.gov/sbe/srs/nsf03312/pdf/women02.pdf.
National Science Foundation. (2005). Celebrating achievement through performance indicators. The University of Alabama at Birmingham, Louis Stokes Alliances for Minority Participation.
National Science Foundation. (2006). Celebrating achievement through increasing opportunities. The University of Alabama at Birmingham, Louis Stokes Alliances for Minority Participation.
Ohio Board of Regents. (2004). Enrollment in Ohio's public institutions of higher education, student count by sex, race and degree level awarded. Retrieved March 20, 2004 from http://www.regents.state.oh.us.
Ohio Science and Engineering Alliance. (2003). LSAMP: The Ohio Science and Engineering Alliance. NSF Cooperative Agreement No. HRD-0331560. The Ohio State University, Columbus, OH.
Ohio Science and Engineering Alliance. (2004). LSAMP: The Ohio Science and Engineering Alliance. Retrieved January 15, 2004 from https://www.fastlane.nsf.gov/servlet/showaward?award=0331560.
Ohio Science and Engineering Alliance. (2005). 2005 Glenn-Stokes Summer Research Internship: Status of applications and acceptances. Columbus, OH: The Ohio State University.
Pascarella, E. T., & Terenzini, P. T. (1991). How college affects students: Findings and insights from twenty years of research. San Francisco: Jossey-Bass.
Prather, E. (1996). Better than the SAT: A study of the effectiveness of an extended bridge program on the academic success of minority first-year engineering students. Dissertation Abstracts International, 57(03), 1049 (UMI No. 9622371).
Rodriguez, C. M. (1993). Minorities in science and engineering: Patterns for success (persistence). Dissertation Abstracts International, 54(11), 4006 (UMI No. 1342012).
Sanford, N. (1967). Where colleges fail: A study of the student as a person. San Francisco: Jossey-Bass.
Seymour, E. (1992). The problem iceberg in science, mathematics and engineering education: Student explanations for high attrition rates. Journal of College Science Teaching, 21, 230–238.
Seymour, E., & Hewitt, N. M. (1997). Talking about leaving: Why undergraduates leave the sciences. Boulder, CO: Westview Press.
Smith, T. Y. (2000). Science, mathematics, engineering and technology retention database. Research News on Graduate Education, 3(2). Retrieved January 12, 2004 from http://ehrweb.aaas.org/mge/Archives/5/smet.html.
Stith, P. L., & Russell, F. (1994). Faculty/student interaction: Impact on student retention. Paper presented at the annual meeting of the Association for Institutional Research, New Orleans, LA, June.
Swail, W. S., Redd, K. E., & Perna, L. W. (2003). Retaining minority students in higher education: A framework for success. ASHE-ERIC Higher Education Report, Jossey-Bass Higher and Adult Education Series.
Takahashi, J. S. (1991). Minority student retention and academic achievement. Dissertation Abstracts International, 52(04), 1227 (UMI No. 9128836).
Taylor, J. D., & Miller, T. K. (2002). Necessary components for evaluating minority retention programs. NASPA Journal, 39, 266–282.


Thompkins, G. O. (1991). Influences and academic interests of freshman minority engineering students at Michigan State University: Implications for minority student retention. Dissertation Abstracts International, 59(07), 2394 (UMI No. 9839707).
Tinto, V. (1975). Dropout from higher education: A theoretical synthesis of recent research. Review of Educational Research, 45, 89–125.
Tinto, V. (1987). Leaving college: Rethinking the causes and cures of student attrition. Chicago: University of Chicago Press.
Unrau, L. (1999). Recruiting, retaining minorities in science programs examined. Rice News, 8(25), March 18. Retrieved December 3, 2003 from http://www.rice.edu/projects/reno/rn/19990318/recruiting.html.
Upcraft, M., & Gardner, J. (1989). A comprehensive approach to enhancing freshman success. In M. Upcraft & J. Gardner (Eds.), The freshman year experience: Helping students to survive and succeed in college. San Francisco: Jossey-Bass.
US Department of Education. (2004a). National Center for Education Statistics. Retrieved March 21, 2004 from http://nces.ed.gov.
US Department of Education. (2004b). Policy and Program Studies Service. Review of the Fund for the Improvement of Postsecondary Education (FIPSE) Comprehensive Program. Washington, DC.
US Office of Science and Technology Policy, National Science and Technology Council. (2000). Ensuring a strong US scientific, technical and engineering workforce in the 21st century. Washington, DC.
WebAMP. (n.d.). National Science Foundation online data reporting system for the Louis Stokes Alliances for Minority Participation. Retrieved March 12, 2004 from http://discovery.qrc.com/nsf/ehr/amp.
White, J. L., Altschuld, J. W., & Lee, Y.-F. (2006a). Persistence of interest in science, technology, engineering, and mathematics: A minority retention study. Journal of Women and Minorities in Science and Engineering, 12(1), 47–65.
White, J. L., Altschuld, J. W., & Lee, Y.-F. (2006b). Cultural dimensions in science, technology, engineering and mathematics: Implications for minority retention research. Journal of Educational Research and Policy Studies, 6(2), 41–59.
Wright, R. G. (1989). Factors affecting retention of Black and Hispanic students in a community college system and implications for transfer to senior institutions of higher education in Texas. Dissertation Abstracts International, 50(11), 3444 (UMI No. 9007553).
Wyer, M. (2003). The importance of field in understanding persistence among science and engineering majors. Journal of Women and Minorities in Science and Engineering, 9(3–4), 76–90.

Jeffry L. White is Assistant Professor of Educational Foundations at Ashland (Ohio) University. His primary foci are in quantitative methods and minority student retention.

James W. Altschuld is Professor Emeritus of Quantitative Research, Evaluation and Measurement in Education at The Ohio State University. His research and writing interests are in needs assessment and the training of evaluators.

Yi-Fang Lee is Assistant Professor of Comparative Education at National Chi Nan (Taiwan) University. She specializes in methodology and evaluation studies.