Tools, Products, Services

An Evaluator's Workstation: Adding a Qualitative Research Tool

JEFFREY BEAUDRY

INTRODUCTION

With the continuing waves of new, versatile, and more powerful microcomputers, a significant set of tools has been placed in the hands of program evaluators. The hardware capabilities of new generations of microcomputers have enabled software designers to broaden the depth and range of computer applications (Gray, 1988). From word processing to spreadsheets, from graphics to statistics, the technology to support program evaluation has become impressive and extremely useful, thus yielding the term "evaluator's workstation" (Beaudry, 1989). A basic triad of computer software for an evaluator's workstation should include packages for word processing, statistical analysis, and qualitative data analysis. While program evaluators are usually exposed to statistical analysis packages, this paper will focus on the added value of using software designed specifically for qualitative research.

Since 1987, with the advent of microcomputers like the IBM PS/2 Model 50 series, desktop computers have provided increased data processing speed and vastly expanded memory, putting them on a level with mini-mainframe computers (Gray, 1988). Appropriately selected software can be stored on the computer's hard disk, thereby creating a permanent environment for an evaluator's workstation. Computer use must be carefully integrated into the complex tasks of program evaluation to realize its benefits. (Naturally, there are no guarantees that the quality of program evaluation is necessarily improved by computer technology!) It is crucial to advance the technology of program evaluation to improve the collection and analysis of both qualitative (text-based) and quantitative (numeric) data for program evaluation reports.

The configuration of software discussed in this paper consists of WordPerfect 5.1 (WordPerfect Corporation, 1990), and The Ethnograph

* Jeffrey S. Beaudry, Assistant Professor, Division of Administrative and Instructional Leadership, St. John's University, Jamaica, NY 11439.

Evaluation Practice, Vol. 13, No. 3, 1992, pp. 215-222. ISSN: 0886-1633

Copyright © 1992 by JAI Press, Inc. All rights of reproduction in any form reserved.


(Seidel, Kjolseth, and Seymour, 1988), a complementary set of software programs providing powerful tools to deal with the mechanics of electronic data handling and more sophisticated report writing. Through such a configuration, text-based interview and observation data can be integrated with test scores, survey data, and other quantified data analysis in a single evaluator's workstation.

SELECTING SOFTWARE

First of all, let's go to the question "Why choose WordPerfect and The Ethnograph?" Due to the constraints of available resources, the selection of software programs and computer systems is a problem which confronts all program evaluators (LeBlanc, 1988). Even the most powerful word processing programs like WordPerfect 5.1 contain basic "word search" capabilities, but nothing to match the needs of analysis and reanalysis of textual data with complex, embedded codes. Actually, there are other word processing programs, such as WordStar for IBM users and Microsoft Word for Macintosh users, that are equally comprehensive, capable of data input (i.e., transcription of interviews), and can be fully integrated with text analysis programs. The important thing is to match your computer hardware and software with the needs you have for a text-based analysis package.

Prior to 1984, there were no commercially available software packages for microcomputers which could be used to execute data-based searches of text (i.e., interviews, observations, and fieldnotes). There is a brief mention of computers in ethnography by Fetterman (1989). The most detailed source of information on this point is the book, Qualitative Research: Analysis Types and Software Tools by Tesch (1990). Besides giving the most comprehensive discussion of qualitative research software tools to date, this book provides an absolutely refreshing and insightful review of the field of qualitative research.

The capabilities of qualitative analysis programs differ from database managers in the following ways:

1. You are not required to break text into segments at the time of data entry (to be located in "fields"), nor are you restricted to working with an entire text document in which only words are located by the computer, not segments. You can easily define and redefine text segments, one segment can be "nested" in another, and segment boundaries can overlap.

2. After you have coded your text segments, a computer search for codes provides you with a compilation of all appropriate segments, containing only the relevant material, not entire "fields." There is no need to design "report" formats; all programs print their search results automatically in the form best suited for analysis purposes.
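The two capabilities above can be pictured with a small data model. The sketch below is illustrative only (the names `Segment` and `search` are mine, not The Ethnograph's), but it shows how nested and overlapping segments coexist over a line-numbered transcript, and how a code search compiles only the relevant segments:

```python
# Illustrative sketch: coded segments over a line-numbered transcript.
# The Segment structure and search() function are hypothetical, not
# The Ethnograph's own file format or commands.

from dataclasses import dataclass

@dataclass
class Segment:
    code: str    # code word, e.g. "selection"
    start: int   # first transcript line of the segment
    end: int     # last transcript line (inclusive)

# Segments may nest (one inside another) and overlap freely.
segments = [
    Segment("school/exp", 172, 201),
    Segment("match", 176, 199),        # overlaps school/exp
    Segment("match/plus", 181, 191),   # nested inside match
]

def search(segments, code):
    """Compile every segment carrying a code -- only relevant lines, no 'fields'."""
    return [s for s in segments if s.code == code]

print([(s.start, s.end) for s in search(segments, "match")])  # → [(176, 199)]
```

Because segments are defined after data entry, redefining a boundary is just editing a `start` or `end` value rather than re-entering text into fixed fields.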

What does a text-based analysis program offer that you can't get from either physical manipulation of the data or from a word processing program alone? Those who have developed "cut-up-and-put-in-folder" systems and "file-card" systems (Bogdan and Biklen, 1982; Tesch, 1990) have solved problems in getting to the results in qualitative data analysis, but have not taken advantage of the intricacies of the process of data analysis offered by text-analysis programs such as The Ethnograph. In essence, the combination of word processing with a text-based analysis program offers advantages such as (a) safe storage and quick, accurate retrieval of interview and fieldnote data, (b) the ability to do complex coding and complicated data search and manipulation, (c) a greater variety of organized output and visual displays of interview data, and (d) the capability of doing basic, descriptive statistics such as word counts and frequencies of responses.
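Advantage (d) is easy to picture. A minimal sketch in Python (illustrative, not part of either package; the transcript fragment is invented) computes a word count and word frequencies for a piece of interview text:

```python
# Minimal sketch of descriptive statistics on transcript text:
# total word count and the most frequent words.

from collections import Counter
import re

transcript = ("It was pretty well mixed. There were white and black teachers. "
              "So there were never any problems.")

# Lowercase and keep only word characters and apostrophes.
words = re.findall(r"[a-z']+", transcript.lower())
counts = Counter(words)

print(len(words))             # total word count for the fragment
print(counts.most_common(2))  # the most frequent words
```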

AN EXAMPLE OF QUALITATIVE ANALYSIS

The need to add a qualitative research tool to my evaluator's workstation grew out of work on an evaluation of a grant entitled the "Teacher Opportunity Corps" (TOC). The goal of the program is to recruit and support African-American, Hispanic-American, and Native American students in their efforts to become teachers. I was asked to collect "anecdotal remarks" of program participants to: (a) assess the achievement of program goals and objectives of recruiting, retaining, and educating an ethnically diverse cohort of students, and (b) describe efforts to promote the cooperation of administrators and faculty both within the university and from external agencies. To augment data from achievement tests, psychological profiles, and grade point averages, I chose to do interviews of program participants, students, staff, faculty, and school principals to triangulate perspectives concerning the program objectives and program processes.

For the qualitative aspects of the program evaluation, a total of 53 interviews, averaging 30-45 minutes each, were conducted over a two-year period. The sample of interviews included students (n = 45), faculty (n = 6), staff (n = 2), and school principals (n = 2). Interviews of program participants provided an in-depth source of information regarding strengths and weaknesses of the program, based on participants' own perceptions and personal knowledge. In addition, there are few reliable survey instruments which measure knowledge of or attitudes towards multicultural education.

Interview guides with open-ended questions focused on participants' perceptions of the TOC program stakeholder groups. Participants were asked to discuss the most effective aspects of the program, recommendations for program improvement, and personal interpretations of concepts related to multicultural education.
General areas of questioning were as follows: (1) program services, (2) special courses taken, (3) types of field experiences in schools, (4) teacher/role models from elementary and secondary education, (5) personal definition of “multicultural education”, (6) instructional strategies for education that is multicultural, and (7) whether teachers should be matched with students of the same ethnic or racial background.

ANALYSIS OF QUALITATIVE DATA (PHASE ONE): CODING (DE-CONTEXTUALIZATION)

In a traditional manner, tape-recorded interviews and fieldnotes were transcribed directly into WordPerfect. Using macro definitions in WordPerfect, the text was formatted on the left half of the page. The blank half-page facilitated the initial operations of The Ethnograph, such as line numbering, code mapping, and the preparation of text for input of computer codes and code searching. Coding represents the process of breaking apart or "de-contextualizing" the interviews, while the next phase of code searching involves the reassembling of data across interviews, or "re-contextualizing" (Seidel et al., 1988; Tesch, 1990).

One of the disadvantages of The Ethnograph is that code mapping must be done by hand on a printout of the interview. This adds a step to the process, but also forces the evaluator/researcher to re-examine and reflect on the data. Other text-based analysis programs, such as Textbase Alpha and HyperQual, allow coding to be done directly on the computer screen.

Interviews for TOC were analyzed according to the questions being asked and according to the group responding to the question. Data reduction consisted of reading and coding of data with the numbered, printed copy of the interview transcript. In the context of the TOC program evaluation, there were both structured and semi-structured interview questions. The structured questions, such as "How did you hear about the TOC grant?", could be coded very directly under the category of "selection." In turn, the process of analysis could take all of the segments coded for "selection" and put them into one data document. On the other hand, the open-ended, exploratory questions often required multiple, overlapping coded segments. For example, the codes developed for the TOC interviews appear in Table 1. The codes are grouped by use: (a) strictly for descriptive categories, (b) a mixture of descriptive and exploratory categories, and (c) strictly for exploring relationships and emergent themes.
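The step of pulling all "selection" segments into one data document can be made concrete. The following Python sketch is hypothetical (the `transcripts` and `codings` structures, and the quoted remarks, are invented for illustration, not The Ethnograph's formats); it compiles every segment carrying one code across interviews into a single output:

```python
# Hypothetical sketch of compiling one code across interviews:
# gather every segment coded "selection" into a single document.

transcripts = {
    "student1": ["I heard about TOC from my advisor.", "It sounded supportive."],
    "student2": ["A flyer in the education office mentioned the grant."],
}
# (interview_id, code, line indices) -- coding done beforehand by the researcher.
codings = [
    ("student1", "selection", [0]),
    ("student2", "selection", [0]),
    ("student1", "attitude", [1]),
]

def compile_code(code):
    """Return each matching segment, tagged with its interview and line number."""
    out = []
    for interview, c, lines in codings:
        if c == code:
            for i in lines:
                out.append(f"{interview}:{i + 1}  {transcripts[interview][i]}")
    return out

for line in compile_code("selection"):
    print(line)
```

The output keeps each segment's source identifier and line number, so a quotation in the evaluation report can always be traced back to its original interview context.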
Coding procedures can vary according to the need to balance efficiency and thoroughness with the depth of the research questions. For the TOC evaluation report it was necessary to portray specific programmatic aspects of the grant, which could be seen as the descriptive categories of analysis. In addition, the interview questions probed the participants' understanding of concepts like multicultural education and matching teachers and students on an ethnic and racial basis. Out of these interview segments, more exploratory or emergent relationships were identified using ethnicity, race, age, and gender, among other more abstract concepts, as important related variables.

The Ethnograph software is not difficult to use, especially if you have prior experience in qualitative research. Levels of coding have been set up to match researchers' experience: (a) novice, which automatically reviews code sets, and (b) expert, with no auto-review. A single interview can take up to 9999 lines of text. Each interview can have an identifier as a heading. Types of coded segments


TABLE 1. Code Words Developed for Analysis of TOC Data

CODE               DESCRIPTION

I. Program Evaluation
   a. General      General description of program
   b. Attitude     General attitudes towards program
   c. Selection
   d. Major        Academic Major
   e. Computers    Use of TOC computer lab
   f. Improve      Suggested Improvements of TOC
   g. Serv-sem     Participation in services and seminars sponsored by TOC
   h. Connect      Cooperation with other groups/offices

II. Program Components
   a. ED 231       Course designed for TOC
   b. ED 241       Course designed for TOC
   c. ED 701       Course designed for TOC
   d. ED 38        Regular student teaching course
   e. Field/exp    Field experiences
   f. At-risk      Definition of "at-risk" student(s)
   g. Risk-strat   Strategies for teaching "at-risk" student(s)
   h. Parents
   i. Barriers     Barriers for becoming a teacher
   j. Family
   k. Teach        Attitude toward teaching
   l. Teach/fut    Teaching as a future occupation

III. Exploratory and Emergent Themes/Topics
   a. School/exp   Prior personal school experiences
   b. Model 1      Pedagogical role model - elementary school
   c. Model 2      Pedagogical role model - junior & senior high school
   d. Match        Matching and mismatching ethnic groups
   e. Match/plus   Especially useful comment on matching
   f. Work
   g. Volunteer    Volunteer work
   h. Language     Languages spoken in the home
   i. Support      Support from TOC group
   j. Private      Private/public schools
   k. Image        Images of school
include: (a) overlapping segments, and (b) nested segments. Limits to coding segments include: (a) twelve codes per segment, (b) seven levels of nests and overlaps, and (c) for independent nests and overlaps within a nest, "it is permissible to squeeze as many independent sets of segments as possible within a larger nesting segment as long as the 7 levels rule is not violated" (Seidel et al., 1988, p. 7-6). An example of a complex coded segment appears in Table 2.

TABLE 2. Complex Coded Segment with Overlapping Codes

Student5, TOC. Margin code symbols in the original printout: #-SCHOOL/EXP, $-MATCH/PLUS, %-MATCH (start codes: +SCHOOL/EXP +MATCH +MATCH/PLUS).

172-174  I:  Back to the grammar school. What were the student make-up as far as race and ethnicity of the students?

176-177  S5: All black students, maybe one white child.

179      I:  What about the teachers?

181-191  S5: It was pretty well mixed. There were no Hispanic teachers or anything like that, but there were white and black teachers. So there were never any problems. High School we had three black teachers and a couple Hispanic teachers. And the majority were Caucasian teachers. Virginia State I had one white teacher and everybody else was black. It was an all black school basically. It wasn't real.

193      I:  It was too isolated?

195-201  S5: When you graduate and go into life you're not going to be in the dominant race. Cause I'm black and I'm not going to be in the dominant race. In a black populated college, doesn't make me see the real world. Cause that's not how the real world is.

Note: In the printed output, bracket columns of #, $, and % symbols in the right margin mark the span of each overlapping coded segment.
Entering code sets for each interview is a job which can be done by secretaries or research assistants, provided that the researcher has already ensured reliability in the coded interviews. Verification of code sets must be made with the original, coded interview after the interview has been transferred to The Ethnograph. The output of the coded version of the interview can be sent to the computer screen, diskette, and/or printer at any step in the data analysis process. The diskette output of text searches can be imported as ASCII files directly into WordPerfect in a number of different data displays.


TABLE 3. Alphabetic and Frequency List of Codewords for an Interview

ALPHABETICAL list of codewords used in coding Student1, 8/9/1989 16:30

N  CODEWORD      N  CODEWORD      N  CODEWORD      N  CODEWORD
1  AT-RISK       1  BARRIERS      1  COMMENT       1  COUNTRY
1  ED23          2  ED24          2  ED38          1  FAMILY
1  IMPROVE       1  MAJOR         1  MATCH         1  MATCH/PLUS
1  MODEL 1       1  MODEL 2       1  PARENTS       1  PS40
1  RISK-STRAT    1  SCHOOL/EXP    1  SELECTION     1  SERV-SEM
1  TEACH         1  TEACH/FUT     1  VOLUNTEER     1  WORK
A summary table of codes for each interview is an example of the data displays, one of which is shown in Table 3. These summary tables give an account of the code sets and information about the location of the code sets within each interview or across interviews. For example, the researcher has information about the following: (a) alphabetical code lists, (b) frequencies of code words, (c) alphabetical lists of speaker/section identifiers, and (d) frequencies of speaker/section identifiers. A list of the code words not found in the interviews is also generated, indicating possible interviewer inconsistencies. Codes for the data can be modified at any time.
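A display like Table 3 can be generated from a flat list of code applications. The sketch below is illustrative (the `codes_used` data are invented, and the layout only approximates the printed table); it tallies codeword frequencies, sorts them alphabetically, and wraps them into four columns:

```python
# Illustrative sketch: build an alphabetical frequency list of codewords
# for one interview, wrapped into four columns as in Table 3.

from collections import Counter

# One entry per application of a code in the interview (invented data).
codes_used = ["ED24", "MATCH", "ED38", "ED24", "AT-RISK", "ED38", "TEACH"]

freq = Counter(codes_used)
rows = sorted(freq.items())  # alphabetical order of codewords

print("N  CODEWORD    " * 4)
for i in range(0, len(rows), 4):                   # four codewords per line
    line = "".join(f"{n}  {code:<11}" for code, n in rows[i:i + 4])
    print(line.rstrip())
```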

ANALYSIS OF QUALITATIVE DATA (PHASE TWO): SEARCHING AND OUTPUT (RE-CONTEXTUALIZATION)

Searching for coded segments is what Tesch (1990) calls the "re-contextualizing" of data. This process replaces the manual "cutting and pasting" of interviews with an electronic, data-based manipulation of text. In the step of "re-contextualizing," categories are identified and emergent themes are noted (Tesch, 1990). Search methods can employ various levels of logical relationship between and among coded segments. The example in Table 2 displays the segment for an interview which included the code segments for "school/exp," "match," and "match/plus." "Big picture" or "small picture" views determine the amount of contextual interview data included in a search. In this process of textual analysis the researcher is guided by analytic frameworks provided by the client's needs or by prior theoretical frameworks for more descriptive results, or the analysis could be more exploratory.

Catalogs of data files can be constructed to partition interviews by groups, time of interview, interviewer, or whatever grouping is chosen. In addition, subsets or subcatalogs of data files can be created. This leads to levels of interpretation of data across subjects, times, settings, or other groups of interest. For example, the data from the TOC study were grouped by overall program, and subsets corresponding to students, faculty, and principals were also created. In addition, a template for a face sheet can be designed for each catalog. This allows the evaluator to "be able to search data by, and sort segments by, variables such as Age, Sex, and Ethnicity" (Seidel et al., 1988, p. 14-1).
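The face-sheet idea can be sketched as a simple lookup joined to the coded data. Everything below is hypothetical (the field names, interview identifiers, and data are invented for illustration, not The Ethnograph's face-sheet format); it restricts a code search to interviews whose face-sheet variables match given criteria:

```python
# Hypothetical sketch of face-sheet searching: attach variables such as
# Age, Sex, and Ethnicity to each interview, then restrict a code search
# to the matching subgroup.

face_sheets = {
    "student1": {"Age": 22, "Sex": "F", "Ethnicity": "Hispanic-American"},
    "student2": {"Age": 31, "Sex": "M", "Ethnicity": "African-American"},
    "faculty1": {"Age": 45, "Sex": "F", "Ethnicity": "African-American"},
}
# (interview_id, code) pairs produced by earlier coding.
codings = [("student1", "match"), ("student2", "match"), ("faculty1", "support")]

def search_by(code, **criteria):
    """Interviews containing `code` whose face sheet matches every criterion."""
    return [iid for iid, c in codings
            if c == code
            and all(face_sheets[iid].get(k) == v for k, v in criteria.items())]

print(search_by("match", Sex="M"))  # "match" segments from male interviewees
```

With no criteria, `search_by("match")` returns the whole catalog's hits, which corresponds to searching the overall program rather than a subset.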

CONCLUSION

The evaluator's workstation offers advantages to both the design of evaluation and, even more so, to the flexibility of data analysis. Furthermore, the process of qualitative data analysis has been significantly enhanced by the combination of technological tools like WordPerfect and The Ethnograph. And yet, questions of critical concern to program evaluators come to mind, e.g., How much effect does this combination of data analysis have on the utilization of evaluation results? Can evaluators afford the tradeoffs in terms of the time and resources it takes to pursue qualitative evaluation?

After reflecting on my experience, I believe that program evaluation courses should have components which demonstrate the use of technological tools, highlighting applications combining word processing, quantitative analysis, and qualitative analysis. Managing computer environments for the purposes of doing better program evaluations has become part of the hidden curriculum in graduate programs teaching evaluation, and is certainly one of the most challenging professional competencies developed on the job. The array of tools of the program evaluation trade has been diversified and strengthened. Ironically, the addition of text-based analysis to the technological tool box may enhance the humanistic dimension of program evaluation.

REFERENCES

Beaudry, J. (1989). Microcomputers carrying the load: An evaluator's workstation. Paper presented at the Annual Meeting of the American Evaluation Association, San Francisco, CA.

Beaudry, J. (1991). Renata Tesch: Qualitative research: Analysis types and software tools. Evaluation Practice, 12(1), 79-82.

Bogdan, R. and S. Biklen. (1982). Qualitative Research for Education: An Introduction to Theory and Methods. Boston: Allyn and Bacon.

Fetterman, D. (1989). Ethnography: Step by Step. Newbury Park, CA: Sage.

Gray, P. (1988). "The New Generation of Computers." Evaluation Practice, 9(1), 50-58.

LeBlanc, L. (1988). A Method for Evaluating and Selecting Microcomputer Software Packages. Evaluation Practice, 9(4), 57-71.

Seidel, J., R. Kjolseth, and E. Seymour. (1988). The Ethnograph: A User's Guide. Littleton, CO: Qualis Research Associates.

Tesch, R. (1990). Qualitative Research: Analysis Types and Software Tools. Philadelphia: Falmer Press.

WordPerfect Corporation. (1990). WordPerfect 5.1. Orem, UT: Author.