Articles
Outreach and Program Evaluation: Some Measurement Issues¹
ALAN J. RICHARD, DAVID C. BELL, WILLIAM N. ELWOOD, and CHERYL DAYTON-SHOTTS

Alan J. Richard • Affiliated Systems Corporation, 3104 Edloe, Suite 330, Houston, Texas 77027-6022; Fax: (713) 439-1924; [email protected].

Evaluation Practice, Vol. 17, No. 3, 1996, pp. 237-250.
ABSTRACT
Literally, the word “outreach” evokes the image of “reaching out,” attempting to touch what is currently beyond one’s grasp. Activities that help organizations to “reach out” are important elements of most services, and some organizations employ outreach paraprofessionals whose primary function is to help recruit participants into a program. Because outreach is supposed to occur before individuals receive the goods or services offered by an organization, evaluations generally do not measure the amount of outreach contact. However, outreach and program intervention activities tend to overlap when an organization encounters resistance from a population of potential consumers. Drawing on 10 years of experience evaluating HIV risk-reduction interventions for drug users who are not in treatment, the authors argue that outreach workers are part of the intervention continuum, and have important effects not only on recruitment, but also on service delivery. We argue that the evaluation and measurement issues raised by pre-enrollment outreach contacts can best be addressed by improving quantitative pre-enrollment data collection. Finally, we present and discuss specific elements of a system for measuring pre-enrollment contacts.
INTRODUCTION AND BACKGROUND

This article addresses some of the challenges that organizational outreach activities pose for evaluators, focusing on the somewhat unique case of outreach to drug users who are not in treatment. First, a general description of outreach is presented, and potential conflicts between outreach and evaluation are described. Second, the programmatic history of outreach is reviewed, and three functions evolving from the practice of outreach are discussed.
Third, the current debate over outreach among researchers evaluating HIV risk-reduction programs for drug users is discussed, and two positions that have emerged within that debate are considered. Finally, an alternative to these two positions, suggested by our own research, is offered.

Outreach: Definition and Purpose
“Outreach” is a compound noun (or, secondarily, an adjective). To perform “outreach” is, literally, to “reach out.” “Outreach,” then, names an activity, a process, rather than a state. Moreover, an organization’s “reaching out” for something implies an extension of the organization into an area currently external to it. In particular, the organization reaches out when it has something to offer; it is offering services to someone in need of those services but not yet receiving them. Thus, organizational “outreach” is an activity by which an organization reaches out to those who have not yet availed themselves of the organization’s services.

The degree of outreach activity an organization must perform is primarily a function of the barriers between the organization and those who need the organization’s services. Potential consumers who need services may experience various barriers. They may be unaware of the organization’s existence, and so may be unaware of its services in general. Or they may be aware of the organization’s existence, but unaware that it provides the service. Even if potential consumers are aware of an organization’s services, they may not be able to access those services without some assistance. For instance, residents in a low-income neighborhood may need dental care but be unaware that a local hospital provides indigent dental care, or they may be aware but lack transportation to the hospital. To deliver these dental services successfully, the hospital may need to overcome these barriers by advertising its services and providing transportation. These activities are examples of outreach activities.

Inherent Challenges with Field Experiments in Outreach Evaluation
One of the most important purposes of evaluation is to provide feedback regarding the effectiveness of organizational activities for achieving organizational goals (Tomlinson, Bland, Moon, & Callahan, 1994). From a research perspective, the ideal situation for evaluating the effects of a product, service, or program is one in which potentially confounding factors have been reduced to a minimum. For this reason, randomized experimental designs using matched treatment and comparison groups are preferred by many evaluators conducting impact assessments (Campbell & Stanley, 1963; Chen, 1990; Mohr, 1992). Longitudinal randomized experimental designs allow a researcher to compare salient features of two groups before and after specific organizational activities have intervened to offer some service and produce some change in the treatment group. However, most program evaluations involve services that an organization has previously provided routinely, or plans to provide routinely, in a “real-world,” non-laboratory setting. Thus, most experimental evaluations take the form of field experiments (Dennis & Boruch, 1994). In order for evaluators to measure the effects of program activities, both treatment and comparison groups must be identified, and evaluators must be able to measure the baseline features of these groups so that later measurements can determine whether the intervention (the organizational activities provided to the treatment group) has resulted in significantly improved outcomes.

Here the first of several challenges arises. Even in evaluations of programs with no outreach component, the process of evaluation often contains its own “outreach” component. Just as an organization must overcome barriers to the delivery of services, evaluators must overcome barriers to research participation in a treatment and a comparison group. Both treatment
and comparison groups must be contacted before the treatment “begins.” Thus, both treatment and comparison groups must receive enough outreach contact to secure their participation in the evaluation. Most evaluators assume that these sorts of outreach contacts have only minimal influence on the evaluation because evaluators interpret contact as “recruitment,” rather than “outreach.” In many cases, they are correct. Standardized recruitment methods, such as random digit telephone dialing, can be assumed to affect treatment and comparison groups equally, so long as a high response rate is maintained. However, the greater the amount of outreach activity necessary for successful recruitment, the more likely it is that outreach effects will become confounded with program effects.

The more barriers there are to a group’s inclusion in an evaluation, the more outreach activity is required to recruit a sample for evaluation. These outreach activities may consist of forms of persuasion that involve the provision or promise of services. This would not threaten the integrity of the study if evaluators could ensure that the services provided did not overlap with the services provided by the organization being evaluated. However, in situations where significant barriers must be overcome, or where ethics and policy dictate that all outreach contacts be offered some service of a specific type (e.g., shelter referral for programs targeting the homeless; bleach and condoms for HIV risk-reduction programs), the outreach process may involve the provision of significant levels of services similar to those being evaluated (Dennis, 1994).

Most standard outreach recruitment methods currently used by researchers (e.g., newspaper advertisements, random digit telephone dialing) have been in use for so long that they have become invisible. When evaluators assemble a sample from enrollees in an organizational program, the outreach activities used by the program to secure enrollees are often equally invisible. However, when the recruited population is so different from the populations ordinarily recruited for evaluation that specialized outreach methods must be used, the problem of outreach can become quite visible to researchers. Like a number of other implementation issues of general importance (Conrad, 1994), the problem of unmeasured outreach recruitment has been “discovered” in the course of federally sponsored efforts to reduce HIV transmission among drug users who are not in treatment.
The Merger of Recruitment, Intervention, and Data Collection in Outreach Evaluation

Although every organization employs some outreach, outreach was first acknowledged as an official program element in federal public health projects resulting from the Economic Opportunity Act of 1964 (Leviton & Schuh, 1991). Outreach activities were intended to support multiservice centers that were built in low-income neighborhoods to provide integrated social, health, and job training services to low-income people (Hollister, Kramer, & Bellin, 1974). Community residents whose incomes matched the mean household income in these neighborhoods were given the opportunity to become paraprofessional “community health aides,” and were charged with implementing clinic outreach services. This outreach strategy reflected a belief that many low-income people would be hard to reach without the aid of their peers, and that there were barriers that their peers could help them surmount.

Three essential outreach functions evolved from the practices of community health aides: recruitment, intervention, and data collection. The primary function of community health aides was to recruit low-income residents into health clinic programs, where they were referred to other center programs. However, some potential clinic attendees were either unable or unwilling to attend a clinic. Therefore, community health aides were
also instructed to intervene by conveying simple health maintenance messages to neighborhood residents in their homes. Finally, community health aides were unofficially used to collect data on community norms and attitudes. These data were used to guide center administrators in their efforts to “get at” the root causes of ill health and poverty in specified neighborhoods. Even though the data collection efforts of community health aides were informal and not intended for research purposes, accuracy and completeness were regarded as important goals, because the success of neighborhood centers was perceived as depending on their ability to identify the real needs of neighborhood residents (Diehr et al., 1975; Freeborn et al., 1978).

When the National Institute on Drug Abuse (NIDA) initiated efforts to prevent the further spread of HIV among drug users, the model of outreach it used was similar to the community health aide model. As in the community health aide model, paraprofessional outreach workers (who were often “indigenous” to targeted communities) were hired to recruit individuals into storefront clinics, and to provide some preventative health services (e.g., bleach and condom distribution) to those who would not or could not participate in clinic programs. “Indigenous” outreach workers who were familiar with, and to, community drug use networks were preferred, because drug control agencies expected barriers to participation in HIV prevention programs. These programs were implemented at a time when the government was exerting increased pressure on illegal drug users (Pollock, 1993). One of the main barriers to participation was the drug users’ fear of arrest or other punishment. Program planners reasoned that in such a climate, drug users would not believe that information on their drug use and sexual behaviors was valued for HIV epidemiology and prevention unless the information was elicited from them by individuals “like themselves.” Individuals who were “indigenous to the targeted social networks” were believed to be able to establish rapport with group members (Wiebel, 1993).

Since launching HIV prevention activities, NIDA has become increasingly interested in assessing the effectiveness of specific interventions. To do this, interventions have come to be conducted in controlled environments, rather than in the “street.” However, to recruit participants, NIDA researchers have relied on paraprofessional outreach workers. Paraprofessional outreach workers were also believed to be an important tool for minimizing attrition in what is known to be a geographically unstable population (Dennis, 1994; Williams, McCoy, Menon, & Khoury, 1993). The use of outreach workers for recruitment has raised questions regarding the extent to which storefront intervention effects are confounded with the effects of outreach intervention activities. So far, suggestions about how to eliminate the problem of data contamination due to outreach have focused on (1) substantially limiting outreach activities to those necessary for recruitment, and (2) eliminating paraprofessional outreach workers.

Limiting Outreach to Recruitment
It has been suggested that potential problems arising from outreach intervention can be prevented by limiting outreach activities to recruitment. Thus, investigators have observed that “the most important part of field researchers [outreach workers’] training will be to achieve acceptance of the researcher’s role in the field, observing, asking questions, but not attempting to intervene unless help is requested” (McCoy & Nolan, 1989, p. 121). However, an experiment conducted by the Southwest Study Group (SSG), a collaborative research effort operating under NIDA’s Community Research Branch, has suggested that attempts to eliminate substantive outreach intervention activities from the outreach process by revising outreach workers’ “working orders” may be futile, meaning that outreach workers still deliver what can be called “components of the treatment.”

The SSG study randomly assigned study participants to two groups, one of which received HIV testing and counseling services, while the other received no formal intervention. Both groups were recruited using street outreach techniques (Williams et al., 1994). Outreach workers were instructed to act only as recruiters. For ethical reasons, persons assigned to receive outreach only were offered HIV testing and counseling at the conclusion of the study, after post-test data had been collected. To guard against floor effects, only injection drug users (IDUs) with a relatively high level of syringe-related risk were recruited for the study. Data were collected at intake into the study and 30 days after intake. The duration of each outreach contact was reported by outreach workers.

Significant behavior changes were reported by subjects in both the “outreach only” and the “outreach plus formal intervention” conditions. Furthermore, there were no significant differences in behavior change between subjects assigned to the outreach only and formal intervention groups. Researchers concluded that outreach was a significant behavioral intervention, and that the greater part of behavior change resulting from risk-reduction interventions was likely due to outreach activities, rather than to the formal interventions. Examination of process data showed that, contrary to expectations, substantial risk-reduction activities (risk-reduction information, materials, and skills training) were occurring during outreach. These activities were justified by outreach workers as recruitment strategies. Despite efforts to restrict outreach workers to recruitment and to discourage pre-enrollment intervention, “recruitment” efforts appeared to have overlapped with “intervention” activities.
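To make the design point concrete, the following is a minimal simulation sketch in Python. It is not based on the SSG data; the effect sizes, sample size, and outcome scale are invented purely for illustration. The sketch shows why a two-arm comparison in which both arms are recruited through outreach can estimate only the increment added by the formal intervention: whatever change outreach itself produces occurs in both arms and cancels out of the between-arm contrast.

```python
# Hypothetical illustration (not SSG data): why a two-arm design in which both
# arms receive outreach cannot attribute the change produced by outreach itself.
import random
import statistics

random.seed(42)

OUTREACH_EFFECT = -8.0      # assumed drop in a risk-behavior score due to outreach contact
INTERVENTION_EFFECT = -2.0  # assumed additional drop due to the formal (storefront) intervention
N_PER_ARM = 500

def thirty_day_change(gets_formal_intervention: bool) -> float:
    """Simulated 30-day change in a risk-behavior score for one participant.

    Both arms are recruited through street outreach, so both receive the
    outreach "dose"; only one arm also receives the formal intervention.
    """
    change = OUTREACH_EFFECT
    if gets_formal_intervention:
        change += INTERVENTION_EFFECT
    return change + random.gauss(0, 10)  # individual variability

outreach_only = [thirty_day_change(False) for _ in range(N_PER_ARM)]
outreach_plus = [thirty_day_change(True) for _ in range(N_PER_ARM)]

print("Mean change, outreach only:     %.1f" % statistics.mean(outreach_only))
print("Mean change, outreach + formal: %.1f" % statistics.mean(outreach_plus))
# The between-arm difference recovers only the formal-intervention increment.
print("Between-arm difference:         %.1f"
      % (statistics.mean(outreach_plus) - statistics.mean(outreach_only)))
```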
The Impact of Eliminating Paraprofessional Outreach Workers on Outreach Evaluation

It has been suggested that the failure to “control” the activities of outreach workers is symptomatic of more serious weaknesses in the provider-client model of service provision (Broadhead & Heckathorn, 1992). Based on extensive experience in the use of outreach workers (Broadhead & Fox, 1990), Broadhead and Heckathorn (1992) claim that outreach workers are often absorbed over time into the service bureaucracy, and lose touch with the “street knowledge” for which they were hired. Conversely, outreach workers who retain their contact with the streets are likely to be unreliable employees. They conclude that the role of paraprofessionals as well as professionals in HIV risk reduction should be absolutely minimized, and that the paraprofessional outreach worker’s position should be eliminated.

To this end, Broadhead and colleagues (1995) compared the cost-effectiveness of a program utilizing “indigenous” outreach workers (former drug users ethnically matched to targeted neighborhoods) to that of a peer-driven intervention (PDI) which dispensed entirely with paraprofessional outreach workers. In the latter program, an initial group of drug users, recruited through advertisements in local newspapers, was briefly educated regarding risk-reduction techniques. These peer educators were paid a small amount of cash for every eligible drug user they recruited into the study, and were paid additional money for correct answers given by their recruits on an AIDS knowledge survey. Results indicated that AIDS knowledge test scores for drug users recruited into the PDI program were significantly higher than test scores for drug users recruited into the traditional outreach program, despite the lower cost of the PDI program.
Broadhead and colleagues’ experiment indicates that PDI is more useful than traditional outreach for recruiting clients who score high on standard AIDS knowledge assessments, and that it costs less. From a program planner’s perspective, this is likely to be its initial attraction. However, careful program planners will defer judgment on the effectiveness of PDI until more data are available. Since AIDS knowledge measures are not reliable predictors of risk behavior among drug users (Becker & Joseph, 1988; Des Jarlais & Friedman, 1988; Longshore, Hsieh, & Anglin, 1992), the use of AIDS knowledge as an outcome measure is an important limitation of Broadhead and colleagues’ study. It is not clear from the study results that PDI is as successful as paraprofessional outreach for generating the kinds of results that are most interesting from a policy perspective.

For evaluators, the initial attraction of PDI is that it appears to eliminate the problem of unmeasured pre-enrollment intervention by eliminating the activities of paraprofessional outreach workers. In reality, however, PDI moves the bulk of intervention activity outside the control of program personnel and evaluators. Thus, it reduces the opportunity for accurate measurement of outreach processes. Only the training sessions offered to peer educators can be monitored by project staff. Moreover, the reward system in PDI programs like the one evaluated by Broadhead et al. does not encourage peer educators to be forthcoming regarding their intervention strategies. Whereas paraprofessional outreach workers are salaried employees of the program they represent, peers in a PDI program remain independent of the program, and are paid strictly according to the performance of their recruits. Serious validity questions arise in a system where the activities supposed to generate outcomes cannot be directly monitored by staff, where interventionists are rewarded according to their ability to generate performance outcomes, and where these interventionists regard themselves as “outside” a program.

When outcomes are measured without measuring the process variables that lead to them, final performance results may or may not represent what they are supposed to represent. Thus, Broadhead and colleagues’ results could indicate that drug users in the PDI program were better educated in the substantive issues surrounding AIDS. However, they cannot rule out the possibility that peer educators found a simpler, and perhaps less honest or valid, way of obtaining the same results. One method for obtaining high scores on an AIDS knowledge test that is simpler than teaching content, for example, is to coach recruits on the proper order of responses (e.g., for the first question, answer “true;” for the second question, answer “false,” etc.). Another method is to recruit individuals who are already highly knowledgeable about AIDS. Evaluators simply cannot assume that high scores mean that peer educators have successfully taught drug users about AIDS.

Outreach as a Communicative Process
To more fully understand why outreach workers continue to perform substantive intervention activities in the street, even when instructed to limit activities to recruitment, we conducted an ethnographic study of paraprofessional outreach workers in Houston (Elwood, Montoya, Richard, & Dayton, 1995). Although some published studies of outreach have indicated that outreach workers in risk-reduction programs have ethical objections to placing limits on outreach intervention (Broadhead & Margolis, 1993; Deren et al., 1992), our findings differed somewhat from these earlier reports. Houston outreach workers did not voice ethical objections to the elimination of outreach intervention, but they did voice practical concerns. These outreach workers viewed outreach data collection, outreach intervention, and outreach recruitment as components of an integrated communicative process, wherein each activity
functioned to support the others. Changes in one activity produced changes in each of the others. According to outreach workers who had served in earlier outreach intervention efforts, the new imposed limits on outreach intervention decreased the likelihood that the most resistant drug users would ever participate. Thus, these limits had inadvertently encouraged selective recruitment.

Over five years of episodic field observation and four months of intensive participant observation, our ethnographers observed a number of instances in which data collection and intervention were necessary for successful recruitment. In our study, the more experienced outreach workers routinely collected extensive data on neighborhoods, drug use groups, and key informants prior to recruitment. Much of the data were memorized, but some were also written down. These data assisted outreach workers in remembering where particular events had taken place, where particular individuals could be found, and which communicative strategies were most effective with particular groups of drug users. Outreach workers also routinely delivered intervention services, answering questions about HIV transmission or drug treatment availability and passing out bleach and condoms, as part of the recruitment process. Sometimes, needle disinfection methods were demonstrated to field contacts. These services were necessary to legitimize outreach workers’ presence as nondrug users in the drug subculture and to distinguish them from police informants, drug ingenues, or social workers.

Thus, outreach workers in Houston were performing the three classical outreach functions (recruitment, intervention, and data collection) that had evolved from the practices of earlier generations of community health aides. Our research suggested that these three functions were not arbitrarily linked. This is not to suggest that all outreach efforts entail equal emphasis on each of these “classical” functions. Rather, it is when substantial barriers to participation are encountered that an outreach worker must call on data collection and intervention to enhance recruitment efforts.

For instance, the extensive pre-enrollment activities performed by NIDA outreach workers can be best understood in the context of the illegality of the drug use subculture. In order to remain “hidden” from ridicule and punishment, chronic drug users who are not in treatment develop complex codes of interaction, by means of which “insiders” are distinguished from “outsiders” (Wiebel, 1990). These codes may be locally inflected, and they may involve ethnic identification. However, their function is not to distinguish ethnic groups or neighborhoods, but to distinguish illegal drug users from pretenders. Many codes involve the ingestion of drugs. Others involve a tacit acknowledgment that one uses drugs. Undue curiosity regarding the habits of drug users, coupled with an unwillingness to ingest drugs, can raise serious suspicions (Grund, Blanken, Adriaans, Kaplan, Barendregt, & Meeuwsen, 1992). By displaying their knowledge of HIV risk reduction, NIDA outreach workers demonstrate that they are trained to offer public health advice. By offering bleach bottles and condoms, NIDA outreach workers indicate that they have access to the kinds of materials expected of HIV risk-reduction specialists.
Using these tools, NIDA outreach workers establish a legitimate “peripheral membership role” in the drug use subculture that explains their presence as nondrug users within drug use networks (cf. Adler, 1990).
QUANTITATIVE MEASUREMENT OF OUTREACH INTERVENTION
When barriers to program participation exist, neither eliminating outreach intervention nor eliminating outreach workers is a viable means of solving the problems raised by pre-enrollment outreach contacts. Eliminating outreach intervention disturbs the communicative
process necessary for reducing a population’s barriers to program participation. Eliminating paraprofessional outreach workers exacerbates the problem of pre-enrollment contact by decreasing scientific and administrative control over the recruitment process. More accurate measurement of the outreach process, not the attenuation of that process, is the appropriate response to the challenge of pre-enrollment outreach contact.
[Figure 1. Field Activity Report Form. The front of the card records the contact length in minutes, the contact site, the weather, the outreach worker, client identifiers (the last four digits of the social security number and other identifiers), all activities that occurred during the contact, the contact’s demeanor (e.g., friendly, combative, lively, lethargic, open, closed), the contact’s mood on a 0-4 scale from depressed to happy, and whether the contact is a client (no/yes). The reverse lists the activity codes (1 Bleach Distribution, 2 Community Update, 3 Condom Demonstration, 4 Condom Distribution, 5 Crisis Intervention, 6 Family Concerns, 7 Informal Counseling, 8 Job Referral, 9 Medical Referral, 10 Needle Cleaning Demonstration, 11 Proper Needle Use Discussion, 12 Proper Needle Use Encouragement, 13 Network Member(s) Discussion, 14 Pamphlet Distribution, 15 Police Report, 16 Provide Food/Drink, 17 Recovery Encouragement, 18 Recovery/Treatment Referral, 19 Safer Sex Discussion, 20 Safer Sex Encouragement, 21 Shelter Referral, 22 Support Group Referral, 23 Transportation to Center, 24 Transportation Elsewhere, 25 HIV Counseling, 26 Methadone Program Referral, 27 Other), the site codes (street, clinic, home, restaurant, bar, store, phone, shelter, car, research center, other), and the weather codes (cold, cool, warm, hot; dry, rain, humid, extreme humidity).]
In 1992, we developed an outreach-worker-completed instrument for reporting pre-enrollment contacts (Affiliated Systems Corporation, 1992). Called the Significant Contact Report (SCR), the instrument was a useful tool for evaluating the Center for Substance Abuse Treatment’s (CSAT) Community AIDS Prevention Project (CAPP), a street intervention program for Houston drug users who were not in treatment. The SCR captured demographic information about the individual encountered and information about the contact itself (e.g., relevant topics covered, condoms or bleach bottles distributed). Despite its usefulness, several limitations were observed. Some of these had to do with the awkwardness of traditional pen and paper instruments in field environments, and others had to do with the types of data captured by the instrument. Our experiences evaluating CAPP, together with our more formal study of the outreach process, have resulted in the current design of a Field Activity Report (FAR), shown in Figure 1 (Affiliated Systems Corporation, 1995). Based on the SCR, the FAR captures additional information on the outreach worker’s impressions of a particular contact (warmth, openness, liveliness). It also accommodates identification by number and by verbal identifier(s).
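As an illustration of the kind of record the FAR yields, the following is a minimal sketch of how a single contact might be represented for analysis. The field names are ours, chosen to follow the categories shown in Figure 1; they are not drawn from any actual ASC data dictionary.

```python
# A minimal, illustrative representation of one FAR contact record.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FieldActivityReport:
    outreach_worker: str                    # name or ID of the worker completing the card
    contact_length_minutes: int
    site_code: int                          # e.g., 1 = street, 2 = clinic (see Figure 1 reverse)
    weather_code: str                       # e.g., "1" = cold, "b" = rain
    activity_codes: List[int] = field(default_factory=list)  # all that apply, 1-27
    client_mood: Optional[int] = None       # 0 (depressed) to 4 (happy)
    card_id: Optional[str] = None           # unique appointment-card number, if one was given
    ssn_last4: Optional[str] = None         # last 4 digits of the contact's social security number
    other_identifier: Optional[str] = None  # nickname or other mnemonic used by the worker

# Example: a street contact that included bleach distribution (1) and a
# safer sex discussion (19), recorded against appointment card "00001".
contact = FieldActivityReport(
    outreach_worker="J. Doe",
    contact_length_minutes=15,
    site_code=1,
    weather_code="3",
    activity_codes=[1, 19],
    client_mood=3,
    card_id="00001",
    ssn_last4="6789",
)
```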
[Figure 2. Appointment Card. The card shows the program name and address (ASC Research, 1001 Tuam, Houston, TX 77002), hours of operation (MWF 3:00-5:00 p.m.), a phone number, the client I.D. number (e.g., 00001), the outreach worker’s name, the appointment day and time, and a blank for other notes.]
The instrument is designed so that it can be printed on a small, distinctive index card that can be easily carried and quickly completed in the field. Together with the appointment card depicted in Figure 2, the FAR provides the foundation for a simple outreach measurement system incorporating five features we believe to be essential to such a system: casual unobtrusiveness, simplicity, generality, integration, and training.

Casual Unobtrusiveness
Collecting data in the field means that at least some potential study participants may resent obvious data collection activities directed toward them. To avoid interfering with outreach activities, measurement instruments should be “casually unobtrusive.” Such instruments need to be public to avoid suspicion and hostile response. They need to be general and analytic so that they do not raise suspicions regarding how information may be used. And they need to be unobtrusive so that the measurement process itself will not change behaviors.

Simplicity

The simplicity of an outreach intervention instrument is related to its reliability. The more interpretation is required, the greater the variability in the response. Outreach workers conduct their activities under varying field conditions, and variations in context will have more impact when outreach workers’ encoding tasks are more difficult. Therefore, the more outreach workers can operate on “autopilot” in completing a questionnaire, the more reliable the results will be. Additionally, it is unreasonable to expect an outreach worker to complete a long questionnaire several times a day for months on end. Outreach workers should be able to complete the instrument routinely without having to ignore or defer other activities.

Generality

Outreach workers know they are valued for their “local knowledge,” and may fear that such knowledge is being recorded in order to “automate” outreach and eliminate the need for this marketable skill. In many programs, outreach workers are required to meet monthly recruitment quotas. Outreach workers attribute some of their success in meeting these quotas to proprietary recruiting tricks. Thus, outreach workers may perceive detailed information about their recruitment activities as threatening to their jobs. The prospect of sharing such information with other workers may not be welcomed.

Researchers can be sensitive to the concerns of outreach workers without sacrificing comprehensive, accurate reporting of pre-enrollment contacts. Quantified data are necessarily general. Specific events are coded as belonging to more general classes of activities. If researchers are willing to involve outreach workers in the instrument design process, the categories included on the pre-enrollment contact instrument(s) will probably capture most important types of activities, while omitting information about specific outreach recruitment “tricks.”

Consider the “pinch,” a recruitment technique utilized by outreach workers in Houston, Texas (Elwood, Montoya, Richard, & Dayton, 1995). The “pinch” is used to recruit an entire network of drug users who were previously resistant to participation despite repeated contacts. The “pinch” begins when one member of the drug use network decides to get tested and receives notice of a negative HIV test, a “clean bill of health.” This member, already familiar
with outreach workers, boasts of the results to them. In response, they suggest that the participant would be well advised to refuse to inject with other network members until they “come in and get tested.” From the outreach worker’s point of view, the “pinch” may be regarded as a proprietary recruitment trick, and thus “off limits” to researchers. From a researcher’s perspective, the “pinch” may constitute an unrecorded intervention activity, and must be recorded. Our solution was to code the “pinch” as an instance of the more general intervention category on the FAR called “Network Discussion” (see Figure 1). The name of this category was determined in meetings between outreach workers, staff ethnographers, and other research staff. Use of the category allows the outreach worker to protect recruitment information regarded as proprietary, and allows the researcher to compare the “pinch” and similar intervention activities to other types of intervention activities, such as “bleach distribution,” “crisis intervention,” or “condom discussion.”

Admittedly, the use of general categories risks reducing the reliability of the data, as compared, say, to detailed ethnographic descriptions which could be coded later by separate researchers or research assistants. However, it is unlikely that each outreach worker uses more than a few basic recruitment strategies that could double as intervention, and careful training in data collection should enable outreach workers to reliably classify each instance in which these strategies are used.
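A hypothetical sketch of this coding step follows: specific, locally named strategies are rolled up into the general activity categories negotiated with outreach workers. Apart from the “pinch,” the strategy names and the mapping itself are invented for illustration.

```python
# Illustrative only: rolling locally named field strategies up into the
# general FAR activity categories agreed on with outreach workers.
GENERAL_CATEGORY = {
    "pinch": "Network Member(s) Discussion",   # coded as activity 13 on the FAR
    "bleach drop": "Bleach Distribution",      # hypothetical strategy name
    "clinic walk-over": "Transportation to Center",  # hypothetical strategy name
}

def code_strategy(strategy_name: str) -> str:
    """Return the general FAR category for a specific field strategy.

    Unrecognized strategies fall back to "Other", preserving the worker's
    proprietary detail while still counting the contact as an intervention.
    """
    return GENERAL_CATEGORY.get(strategy_name.lower(), "Other")

print(code_strategy("Pinch"))      # -> Network Member(s) Discussion
print(code_strategy("new trick"))  # -> Other
```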
Integration

Measurement of outreach is most useful when outreach data can be integrated into a larger data set that includes data on other program activities. Effective integration involves the ability to match particular outreach cases to cases in the larger data set. We recommend that this be accomplished using business-sized appointment cards or similar devices (see Figure 2). Each card has a unique identifying number, which the outreach worker can record on the FAR (Figure 1) immediately after handing the card to an individual encountered in the field. Outreach workers can request that the individual keep the card and carry it around, and can continue to refer to the number on the card when recording subsequent contacts with that individual. Individuals contacted in the field who subsequently enroll in the program associated with outreach activities would be required to present their appointment card. Thus, every program enrollee could be matched to a specific FAR case.

Some individuals who are contacted in the field more than once will be unable or unwilling to show their appointment card when encountering an outreach worker a second time. In anticipation of this, the FAR includes a field for recording the last four digits of an individual’s social security number, and a second field for recording other identifiers, such as nicknames or single-word mnemonic devices that outreach workers already use informally to identify field contacts. Because outreach contacts are not asked for their “real” name, requesting the last four digits of a contact’s social security number is not generally regarded as threatening. Once the outreach worker returns to the storefront field station or other location where the FAR cards are stored, the outreach worker looks up the individual’s previous FAR form using recorded identifiers, and attaches the appropriate I.D. number.

Appointment cards have been previously used to match enrollees with peer recruiters in Broadhead and colleagues’ (1995) PDI program. During the Cooperative Agreement experiment, we used appointment cards to remind enrollees of dates when results of their HIV test would be available, and dates when they would become eligible for a follow-up interview (worth $20 remuneration). Most drug users kept these cards for the six-month period between
intake and follow-up, and many would spontaneously display these cards to project personnel, even though this was not a program requirement. These experiences suggest that appointment cards are a useful and feasible means of matching enrollees to pre-enrollment contact data.

Training

As with any system of pen and paper data collection, successful use of this system depends on adequate training in the specific instruments and data collection methods used, and even in the philosophy behind data collection. Accurate data collection is an essential element of outreach as well as evaluation. For this reason, outreach workers have often been referred to as “ethnographic research assistants” (Broadhead & Fox, 1990; Connors, 1989; Sterk-Elifson, 1993). While the type of data collection ordinarily performed by outreach workers in the course of their work differs from the type of data collection required for research or evaluation, outreach workers are able to appreciate the practical importance of accurate information. Thus, if data collection activities are emphasized from the beginning of an evaluation, if outreach workers are given a voice in the design of data collection instruments, and if the practical importance of items on instruments is fully explained to them, paraprofessional outreach workers can be not only passable, but superior data collection personnel (Fiedler, 1978; Lessler & Kalsbeek, 1992; McCoy & Nolan, 1989).
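To summarize the matching logic described above under Integration, here is a hedged sketch of the record-linkage step: an enrollee is matched to prior FAR contacts by appointment-card number, with the last four social security digits or another identifier as fallbacks when the card is not presented. The function and field names are illustrative, not part of any actual ASC system.

```python
# Illustrative record linkage between program enrollees and pre-enrollment
# FAR contacts, using identifiers in descending order of reliability.
from typing import Dict, List, Optional

def match_far_contacts(
    far_records: List[Dict[str, Optional[str]]],
    card_id: Optional[str] = None,
    ssn_last4: Optional[str] = None,
    other_identifier: Optional[str] = None,
) -> List[Dict[str, Optional[str]]]:
    """Return all pre-enrollment FAR contacts attributable to one enrollee.

    Identifiers are tried in order: the unique appointment-card number first,
    then the last four SSN digits, then a nickname or other mnemonic.
    """
    for key, value in (("card_id", card_id),
                       ("ssn_last4", ssn_last4),
                       ("other_identifier", other_identifier)):
        if value is None:
            continue
        hits = [r for r in far_records if r.get(key) == value]
        if hits:
            return hits
    return []

# Example: two field contacts were logged; one later enrolls and presents card "00001".
far_log = [
    {"card_id": "00001", "ssn_last4": "6789", "other_identifier": "Red"},
    {"card_id": "00002", "ssn_last4": None, "other_identifier": "Slim"},
]
print(match_far_contacts(far_log, card_id="00001"))
```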
CONCLUSION

Like all outreach, outreach in risk-reduction programs targeting active drug users on the street involves three essential functions: data collection, service delivery, and recruitment. Because successful outreach in these programs must counter resistance as well as remove barriers to program participation, pre-enrollment outreach intervention is unusually intense. This intensity magnifies pre-enrollment contact measurement problems that have always existed. Because successful recruitment in the context of barriers presupposes intervention, these problems cannot be solved by eliminating intervention and focusing outreach exclusively on recruitment. Nor can they be solved by eliminating paraprofessional outreach workers. The measurement of outreach intervention can be substantially improved by implementing a relatively simple system of measurement designed for field contexts. This system depends on the ability of researchers to use more fully the data collection function that is already implicit in the paraprofessional outreach process.
ACKNOWLEDGMENTS

The authors would like to thank Walter A. Goodpastor, M.S.W., and Tina McPherson for their assistance in the preparation of this manuscript.
NOTES

1. Much of the research described herein was supported by Grant DA96906-05 to collaborating investigators from the Community Research Branch of the National Institute on Drug Abuse. Opinions expressed herein are solely those of the authors.
REFERENCES

Adler, P. (1990). Ethnographic research on hidden populations: Penetrating the drug world. NIDA Monograph, 98, 96-112.
Affiliated Systems Corporation. (1992). Significant Contact Report. Houston: ASC.
Affiliated Systems Corporation. (1995). Field Activity Report. Houston: ASC.
Becker, M. H., & Joseph, J. G. (1988). AIDS and behavioral change to reduce risk: A review. American Journal of Public Health, 78(4), 394-409.
Broadhead, R. S., & Fox, K. J. (1990). Takin’ it to the streets: AIDS outreach as ethnography. Journal of Contemporary Ethnography, 19(3), 322-348.
Broadhead, R. S., & Heckathorn, D. D. (1992, July 19-24). User-driven vs. traditional outreach to combat AIDS among drug injectors: Assessing a national program and a new approach. Paper presented at the 8th International Conference on AIDS, Amsterdam, the Netherlands.
Broadhead, R. S., Heckathorn, D. D., Grund, J. P. C., Stern, L. S., & Anthony, D. L. (1995). Drug users versus outreach workers in combating AIDS: Preliminary results of a peer-driven intervention. The Journal of Drug Issues, 25(3), 531-564.
Broadhead, R. S., & Margolis, E. (1993). Drug policy in the time of AIDS: The development of outreach in San Francisco. The Sociological Quarterly, 34(3), 497-522.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston, MA: Houghton Mifflin.
Chen, H. (1990). Theory-driven evaluations. Newbury Park, CA: Sage.
Connors, M. M. (1989). Outreach ethnography: Anthropology in the expanded field of IV drug use and AIDS. Paper presented at the American Anthropological Association Meetings, Washington, D.C.
Conrad, K. J. (Ed.). (1994). Critically evaluating the role of experiments. San Francisco, CA: Jossey-Bass.
Dennis, M. (1994). Ethical and practical randomized field experiments. In J. S. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 155-197). San Francisco, CA: Jossey-Bass.
Dennis, M., & Boruch, R. (1994). Improving the quality of randomized field experiments: Tricks of the trade. In K. J. Conrad (Ed.), Critically evaluating the role of experiments (pp. 87-102). San Francisco, CA: Jossey-Bass.
Deren, S., Rees, D. W., Tortu, S., Friedman, S., Tress, S., Sufian, M., Pascal, J., & Stull, C. (1992). AIDS outreach workers: An exploratory study of job satisfaction/dissatisfaction. AIDS Education and Prevention, 4(4), 328-337.
Des Jarlais, D. C., & Friedman, S. R. (1988). The psychology of preventing AIDS among intravenous drug users. American Psychologist, 43(11), 865-870.
Diehr, P., Jackson, K. O., & Boscha, M. V. (1975). The impact of outreach services on enrollees of a prepaid health insurance program. Journal of Health and Social Behavior, 16, 326-340.
Elwood, W., Montoya, I., Richard, A., & Dayton, C. (1995). To hang in the hood: The description and analysis of outreach activities. Journal of Psychoactive Drugs, 27(3), 249-259.
Fiedler, J. (1978). Field research: A manual for logistics and management of scientific studies in natural settings. San Francisco, CA: Jossey-Bass.
Freeborn, D. K., Mullooly, J. P., Colombo, T., & Burnham, V. (1978). The effect of outreach workers’ services on the medical care utilization of a disadvantaged population. Journal of Community Health, 3(4), 306-320.
Grund, J. P. G., Blanken, P., Adriaans, N. F. P., Kaplan, C. D., Barendregt, C., & Meeuwsen, M. (1992). Reaching the unreached: Targeting hidden IDU populations with clean needles via known user groups. Journal of Psychoactive Drugs, 24(1), 41-47.
Hollister, R. M., Kramer, B. M., & Bellin, S. S. (1974). Neighborhood health centers. Lexington, MA: D. C. Heath.
Lessler, J. T., & Kalsbeek, W. D. (1992). Nonsampling error in surveys. New York: Wiley.
Leviton, L. C., & Schuh, R. G. (1991). Evaluation of outreach as a project element. Evaluation Review, 15(4), 420-440.
Longshore, D., Hsieh, S. C., & Anglin, M. D. (1992). AIDS knowledge and attitudes among injection drug users: The issue of reliability. Journal of AIDS Education and Prevention, 4(1), 29-40.
McCoy, H. V., & Nolan, C. (1989). Staff guidelines for the community outreach intervention. Unpublished manuscript, University of Miami School of Medicine.
Mohr, L. B. (1992). Impact analysis for program evaluation. Newbury Park, CA: Sage.
Sterk-Elifson, C. (1993). Outreach among drug users: Combining the role of ethnographic field assistant and health educator. Human Organization, 52(2), 162-168.
Tomlinson, C., Bland, L., Moon, T., & Callahan, C. (1994). Case studies of evaluation utilization in gifted education. Evaluation Practice, 15(2), 153-168.
Wiebel, W. W. (1990). Identifying and gaining access to hidden populations. NIDA Monograph, 98, 4-11.
Williams, M. L., Camacho, L. M., Bowen, A., Jones, A., Rhodes, F., Trotter, R., & Simpson, D. D. (1994). A study of the effectiveness of AIDS risk reduction sessions in community-based outreach programs. Unpublished manuscript.
Williams, M. L., McCoy, C. B., Menon, R., & Khoury, E. (1993). Mobility as a factor in the spread of HIV among intravenous drug users. In B. Brown & G. Beschner (Eds.), Handbook on risk of AIDS: Injection drug users and sexual partners (pp. 328-336). Westport, CT: Greenwood Press.