Evaluation and Program Planning 24 (2001) 13–22
www.elsevier.com/locate/evalprogplan
The effect of the Tarrant County drug court project on recidivism

A. Bavon*

Department of Public Administration, University of North Texas, PO Box 310617, 167 Wooten Hall, Denton, TX 76203-0617, USA

Received 4 July 1999; received in revised form 27 December 1999; accepted 5 June 2000

* Tel.: +1-940-565-2318; fax: +1-940-565-4466. E-mail address: [email protected] (A. Bavon).
Abstract

The purpose of this study is to examine the impact of a drug court program on the criminal recidivism of its clients. The study uses the nonequivalent comparison group evaluation design to measure program impact by examining differences in outcomes between program participants and a comparison group. The results show that program retention and completion rates increased steadily over the 3-year study period. Program participants also performed better on a number of the indicators of recidivism than the comparison group. However, while small substantive project effect sizes can be identified, the study finds no statistically significant difference in recidivism between program participants and the comparison group. © 2001 Elsevier Science Ltd. All rights reserved.

Keywords: Drug courts; Criminal recidivism; Effect size
1. Introduction

The nation has taken a keen interest in drug courts. A 1992 report on recidivism in the Texas criminal justice system noted that about 47% of the offenders sentenced to prison reported current use of one or more drugs, compared to 5.8% of the general population. Moreover, the same report suggests that the need for drugs may be a factor in the commission of some crimes. For instance, 29% of offenders drawing prison terms reported immediate drug use (within 24 h) at the time of the crime. In addition, 46% of drug offenders (and 21% of non-drug offenders) who were revoked for a new conviction reported that the need for drugs was a factor in their crime (Criminal Justice Policy Council, 1992). Nationally, data on drug use collected on defendants in 23 cities indicate that 51–83% of arrested males and 41–84% of arrested females were under the influence of at least one illicit drug at the time of arrest (US Department of Justice, National Institutes of Justice, 1996).

Increasingly, policymakers are seeking ways to address the drug problem because the cost of drug use to the individual as well as to society is staggering. In 1997, the total economic cost of alcohol and drug abuse in Texas was estimated at $19.3 billion, a $2.3 billion increase from 1994 (Liu, 1998). The cost to the nation as a whole in 1992 was estimated at about $246 billion, including health care
expenditures, lost productivity effects, and other effects on society, including crime (Harwood, Fountain & Livermore, 1999).

Problems associated with drug use have plagued the nation in the recent past, in part because solutions have been quite elusive. In the late 1980s, a re-examination of the relationship between criminal justice processing and alcohol and drug treatment services led to the concept of treatment-oriented drug courts, which started with the Dade County drug court in 1989 (US Department of Justice, Office of Justice Programs, 1998). Ten years later, almost 300 drug courts had been implemented in 48 states as well as Guam, the District of Columbia, Puerto Rico, a number of Native American Tribal Courts, and one federal district court. Also, as of June 1998, there were more than 425 drug courts in various stages of development nationwide (US Department of Justice, Office of Justice Programs, 1998). The recent growth has been attributed in part to the enactment of the Violent Crime Control and Law Enforcement Act of 1994, which provided $56 million in drug court funding between 1995 and 1997 (Belenko, 1998, p. 7). The Tarrant County (Texas) DIRECT Project is one of the beneficiaries of this initiative.

The goal of drug court programs that emphasize treatment is to help the offender break the cycle of addiction, criminality, arrest, prosecution, conviction, incarceration, release, readdiction, and rearrest (US General Accounting Office, 1997). Thus, a primary question that all evaluations of drug courts focus on is whether the program has been successful in breaking this cycle.
Fig. 1. The DIRECT Project Logic Model.
The purpose of this study is to examine the effects of the Tarrant County drug court on criminal recidivism. It does so by using the nonequivalent control group evaluation design, which compares observed outcome changes for program participants with those of a comparable group that was eligible but opted not to participate in the program. While drug use outcomes are important goals of drug court programs, the scope of this evaluation is limited to criminal justice outcomes only, primarily because the DIRECT Project does not have a post-project treatment component to track relapse incidence. Consequently, no post-DIRECT substance-abuse data are available for past clients of the project. This limitation notwithstanding, a study focusing on criminal recidivism should be of great interest to policymakers, researchers, and taxpayers. Drug courts have become a novel way of dealing with today's drug-related crime problem, and they are mushrooming all over the nation thanks to the infusion of federal funds through the Violent Crime Control and Law Enforcement Act of 1994 and the 1996 Federal Crimes Bill. In addition, "the imprimatur provided by the Crime Bill recognition of the importance of drug court activity has also generated support for drug courts from many other sectors, public and private, with financial as well as policy and in-kind contribution" (Drug Court Program Office, 1997, p. 1). Furthermore, drug court outcome-related issues are important because a number of jurisdictions are developing special dockets modeled after the drug court for other classes of chronic criminal offenders, such as domestic violence matters (Drug Court Program Office, 1997, p. 8). Questions pertaining to the efficacy of drug courts should therefore be of interest to key stakeholders.

The rest of the paper is organized as follows. First, it presents the context for the paper by describing the Tarrant County DIRECT Project. Next, it reviews the literature related to the evaluation of drug courts. Third, it presents the method adopted to evaluate the DIRECT Project by specifying the evaluation design, sample selection and data collection procedures, and the statistical technique to
be used in assessing the impact of the program. Fourth, it presents the results of the analysis. Finally, it summarizes the findings, presents conclusions and recommendations, and discusses the implications of the study for future research.

2. The Tarrant County DIRECT Project

The Tarrant County DIRECT Project was established in 1995 with a mission to break the cycle of substance abuse and criminal behavior of minor drug offenders ages 17 and over. Offenders who meet the eligibility requirements and volunteer to participate in the program must agree to complete a 12-month treatment program. The typical DIRECT Project referral is an offender who is charged with possession of less than three grams of a controlled substance, possession of more than 4 oz but less than 1 lb of marijuana, or obtaining or attempting to obtain a controlled substance by fraud. As of May 1998, the project had registered 292 participants since its inception, comprising 130 active participants, 73 successful graduates, and 89 who were removed and referred back to the traditional criminal justice system.

The rationale behind the drug court is that participants will react to the incentives and disincentives in ways that will ultimately result in reduced relapse into drug use and reduced criminal recidivism. In its simplified form, the hypothesized relationship can be diagrammed as shown in Fig. 1. The treatment component utilizes a parallel programming concept, referring clients to multiple resources appropriate for their needs and ultimate success in the program. Specific tasks are identified to impact the client's ability to remain drug- and/or alcohol-free, to reduce the client's risk of recidivism, and to increase the likelihood of the client becoming a productive member of the community. Such a multi-faceted approach is warranted because research shows that indicators of lifestyle and stress are just as important as the other
socio-economic and demographic factors that have been identified as significantly related to drug use (Flewelling, Rachal & Marsden, 1992). The project activities are defined by client, program, and community needs and include substance abuse education, substance abuse treatment, education enhancement, employment attainment, stress management, coping skills education, addressing gender issues, addressing culture issues, parenting skills, and learning to utilize community resources and support groups. Available community resources are used for task completion. In addition to utilizing available community resources, task 'clinics' are held at the program site to enhance accomplishments. These 'clinics' focus on topics such as job interview skills, completing applications for employment, defining feelings and needs in a socially acceptable manner, and communication skills. Additional resources and supports, such as job readiness, training, education, and vocational and employment services, are accessed through linkage with other agencies and community systems, including the Texas Workforce Commission, the Texas Rehabilitation Commission, the Fort Worth Independent School District, Goodwill Industries, the United Way of Tarrant County, the Tarrant Council on Alcoholism and Drug Abuse, and the University of North Texas Health Sciences Center. The case management team is responsible for coordinating and communicating with all agencies involved in the program.

The Tarrant County administrator is responsible for the overall administration of the DIRECT Project. The day-to-day operation of the program is administered by a project supervisor and a staff of three case managers, a case manager aide, and a receptionist. The drug court component is headed by a judge. A prosecuting attorney, a defense attorney, and two bailiffs provide legal and other support during court sessions. Funding for the program comes from federal, state, and county government sources.

3. Literature review

Findings from the growing list of drug court evaluations suggest that drug courts can increase treatment success and reduce recidivism. One of the early programs, based in Miami, was studied by Goldkamp and Weiland (1993), who found that 60% of the defendants processed in that program had favorable outcomes in terms of treatment and lower rearrest rates, and were less likely to be sentenced to prison or jail. Tauber's (1993) evaluation of the FIRST program in Oakland also showed a substantial reduction in recidivism relative to a comparison group. Finigan (1998) conducted an evaluation of the Multnomah County (Oregon) STOP Program and concluded that the drug court program participants showed significant reductions in recidivism. There are also, however, a number of studies that found no significant differences in recidivism between
program participants and comparison groups (Belenko, Fagan & Dumanovsky, 1994; Deschenes & Greenwood, 1994; Granfield & Eby, 1997; Smith, Lurigio, Davis, Estein & Popkin, 1994). Thus, there is some contradictory evidence of the impact of drug courts on recidivism.

The literature also provides some guidelines regarding the appropriate outcome indicators used in measuring program efficacy. According to the Criminal Justice Policy Council, recidivism is an indicator of the recycling of offenders in the criminal justice system and can be measured by determining the percentage of offenders released from prison or placed under community supervision who are re-arrested or re-incarcerated after 1, 2, or 3 years (Criminal Justice Policy Council, 1996, p. 1). Variations of this measure have been used in the drug treatment/drug court literature. For instance, Goldkamp and Weiland (1993) focused on rearrest rates and time to arrest, while Van Stelle, Mauser and Moberg (1994) used re-arrest rates, conviction rates, and sentences imposed/served as indicators of recidivism in their study. The evaluation by Gottfredson, Coblentz and Harmon (1996) of the Baltimore City drug program used the following recidivism indicators: any arrests, any convictions, number of days incarcerated as a result of a new offense, and the number of days "free in the community" (i.e. the number of days from entry into the program until re-arrest or revocation). Bell, Murray, McGee and Rinaldi (1998) looked at a number of indicators, including the presence of new charges after the end of drug court, time to first new charges (filing date), number of post-drug court charges, and consequences of charges (prison/jail time). Similarly, Finigan's (1998) study used subsequent arrests, convictions, and incarcerations, and types of crimes committed as the key measures of recidivism. Generally, however, recidivism rates are calculated simply as the percentage of individuals rearrested after going through the drug court program. While the follow-up period varies, most studies have tried to include at least 1 year of follow-up. Some have also calculated the average number of rearrests per client, or the length of time to the first rearrest (Belenko, 1998).

The appropriate evaluation design to measure the impact of drug courts has also been discussed in the literature. According to Deschenes, Turner and Greenwood (1995), because most of the evaluations use quasi-experimental designs, the general concern is how to address some of the specific threats to internal validity, especially history and selection bias. Some of the studies use comparisons to control for threats to history. Typically, post-program outcomes are analyzed for a sample of drug court offenders relative to an appropriate comparison group that may include similar offenders whose cases were adjudicated before the local drug court began operating, eligible offenders who were referred to the drug court but did not enroll, or a matched sample of drug offenders sentenced to probation. Short of using an experimental design, it is important that comparison groups be selected and an appropriate
statistical analysis be conducted to control for these threats. According to Belenko, while evaluations that have compared post-program recidivism for drug court graduates and comparison groups find much lower recidivism rates, the more appropriate comparison is between all drug court participants (whether or not they graduated) and a comparison group (Belenko, 1998, p. 17). Similar concerns with the use of the appropriate evaluation design were raised in a General Accounting Office review of the drug courts' records (US General Accounting Office, 1997).

The literature suggests different types of statistical analysis for evaluating program efficacy using the non-equivalent control group design. Kenny (1975) discussed four statistical tests: analysis of covariance, analysis of covariance with reliability correction, raw change score analysis, and standardized change score analysis. Fitz-Gibbon and Morris (1987) suggest using the t-test to examine whether there are any significant differences between pre-test and post-test scores for the treatment and comparison groups. If the pre-test scores for the two groups are significantly different, however, they suggest using a combination of analysis of covariance, post-hoc matching, and analysis of gain scores to compare post-test scores. Similar analytical techniques have been recommended by Reichardt (1979). Finigan (1998) used the analysis of covariance (ANCOVA) technique in his evaluation of the Multnomah County drug court, in which measured pre-test means were used as a covariate of the post-program outcomes. Basically, the approach controls for initial selection differences by statistically matching individuals in the two groups on their pre-test scores and using the average difference between the groups on the post-test to estimate the treatment effect. Trochim (1998a) warns that, while the analysis of covariance is intuitively the expected approach (given that it has a pre-test variable, a post-test variable, and a dummy variable for classifying participants), unreliability in the covariates leads to biased estimates. He suggests using the reliability-corrected analysis of covariance instead. Others have advocated the use of multivariate regression to control for the effects of other variables (demographic and socio-economic) that affect program outcomes (Mohr, 1995). According to Trochim (1998b), the t-test, one-way analysis of variance (ANOVA), and regression analysis are mathematically equivalent and would yield identical results.
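As an illustration of the covariance-adjustment logic described above (not part of the original evaluation), the following sketch fits a post-test outcome on a group indicator while adjusting for the pre-test score; the Python/pandas/statsmodels tooling, the toy data, and the variable names are assumptions made purely for illustration.

    # Hypothetical illustration of the ANCOVA approach discussed above:
    # regress the post-test outcome on group membership, controlling for pre-test.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy data: 'group' is 1 for drug court participants, 0 for comparison cases;
    # 'pretest' is the pre-program measure; 'posttest' is the outcome of interest.
    df = pd.DataFrame({
        "group":    [1, 1, 1, 1, 0, 0, 0, 0],
        "pretest":  [2, 0, 1, 3, 2, 1, 0, 3],
        "posttest": [1, 0, 0, 2, 2, 1, 1, 3],
    })

    # ANCOVA as a linear model: the coefficient on 'group' estimates the treatment
    # effect after statistically adjusting for pre-test differences.
    model = smf.ols("posttest ~ pretest + group", data=df).fit()
    print(model.params["group"])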
Recent publications in the evaluation and treatment methodology literature draw attention to the dangers of overdependence on significance testing, which can lead researchers and policymakers to overlook important substantive issues not captured by significance tests (Borenstein, 1999; Cohen, 1992; Dennis, Lennox & Foss, 1997b; Kellow, 1998; Kirk, 1996; Posavac, 1998). Martinez-Pons (1999) has identified at least five major approaches that have been suggested for addressing the more serious criticisms leveled at the null hypothesis testing approach: the use of confidence intervals, effect sizes to supplement the information provided by the p-value, alternative hypothesis testing in lieu of null hypothesis testing, replication, and Bayesian statistics. Effect size is an index of the strength of a relation or effect between two sets of variables. It can be calculated in a number of ways for different statistical procedures, and different versions have been proposed for detecting differences among groups. The use of confidence intervals, simple mean differences, and effect sizes has featured prominently in the writings of these and other authors. In spite of its identified drawbacks, Kellow encourages evaluators to report effect sizes because "effect size estimates are independent of sample size and thus provide valuable information regarding the true impact of the intervention" (Kellow, 1998, p. 131).

4. Methodology

The literature review serves as a guide to the methodology adopted in this study. This section discusses the evaluation design, data collection, outcome measures used, and sample selection.

4.1. Evaluation design

The implementation of the DIRECT Project is essentially a policy change designed to achieve specific policy goals, and its impact can be treated as a quasi-experiment (Campbell & Stanley, 1966; Cook & Campbell, 1979). Following suggestions in the literature, this study adopts the nonequivalent control group design, in which the results of the treatment group are contrasted with those of a comparison group. In the evaluation literature, the notation for such a design is as follows:

O1   X   O2
O3       O4

The design notation shows that we have two groups, a program group (O1, O2) and a comparison group (O3, O4); each is measured before (O1, O3) and after (O2, O4) the program, and only the program group receives the treatment (X). The comparison group in this study will consist of similarly situated substance-abusing defendants who did not participate in the DIRECT Project.

The aim of the non-equivalent comparison group design is to draw causal inferences about a program's effects and to address many of the common threats to internal validity in order to determine what would have happened had the project not been implemented (the counterfactual). The design helps answer our basic research question: does criminal recidivism differ for those who participated in the DIRECT Project compared to those who did not, assuming no a priori differences among groups? Methodologically, this will be achieved by a test of significance of the difference between the two groups on outcome measures such as reconviction rates. The independent t-test of means will be used to estimate the differences in outcomes between the two groups. Recognizing that the limited sample size
available for the study could influence the statistical significance of the results, the analysis will also present the means, confidence intervals, and effect sizes of the outcome indicators to give a broader view of the effects of the program.

4.2. Data collected

Data for the analysis were collected from three primary sources. The first is the DIRECT Project Closure List, which provides certain basic information including identification number, case number, date of birth, admission and closure dates, and status in the program. Demographic and other socio-economic data (marital status, education level, number of children, etc.) were obtained from the DIRECT Project client files. For information on criminal history, the closure list was matched with the Criminal Justice Crime Information System (CJCIS) computer files for information on arrests, type of crime (traffic, drug-related, property, etc.), seriousness of crime (misdemeanor or felony), the resolution of the case in the criminal justice system (dismissal, conviction to various types of probation, and fines and/or jail sentences imposed), prior criminal history, and recidivism.

4.3. Outcome measures

Breaking the cycle of drug use and crime is the primary rationale for the establishment of all drug court programs. It is therefore no accident that all drug court studies in the past have used recidivism as the primary measure of program success. Following that tradition, the primary outcome measure of interest in this study is recidivism. The most common indicator of recidivism used in drug court reports is the rearrest rate (Belenko, 1998; US General Accounting Office, 1997; US Department of Justice, Office of Justice Programs, 1998). Rearrest rates were calculated by determining whether information on an arrest was recorded in the CJCIS. The criterion date is 1 year after final contact with the DIRECT Project. In other words, the DIRECT closure date for each individual was matched against any arrest record since that date, and an arrest was counted if it fell within the 1-year window. In addition to rearrests, the study also used other indicators of recidivism, including the duration of time between disengagement from DIRECT and the next arrest (Bell et al., 1998; Gottfredson et al., 1996), sentences imposed (Finigan, 1998; Van Stelle et al., 1994), and booking rates, an indicator of brushes with the law that may or may not lead to charges being filed against the offender.
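The 1-year rearrest window described above can be sketched as follows. This is an illustration only, not the project's actual data processing; the column names (client_id, closure_date, arrest_date), the toy records, and the use of Python with pandas are assumptions.

    # Hypothetical sketch of the rearrest criterion: an arrest counts as recidivism
    # if it occurs after the DIRECT closure date and within 365 days of it.
    import pandas as pd

    closures = pd.DataFrame({
        "client_id": [101, 102, 103],
        "closure_date": pd.to_datetime(["1997-03-01", "1997-06-15", "1998-01-10"]),
    })
    arrests = pd.DataFrame({
        "client_id": [101, 103, 103],
        "arrest_date": pd.to_datetime(["1997-09-20", "1998-04-01", "1999-05-02"]),
    })

    # Join each arrest to the client's closure date and keep arrests in the window.
    merged = arrests.merge(closures, on="client_id", how="left")
    in_window = merged[
        (merged["arrest_date"] > merged["closure_date"])
        & (merged["arrest_date"] <= merged["closure_date"] + pd.Timedelta(days=365))
    ]

    # Flag clients with at least one in-window arrest and compute the rearrest rate.
    closures["rearrested_1yr"] = closures["client_id"].isin(in_window["client_id"])
    print(closures)
    print("1-year rearrest rate:", closures["rearrested_1yr"].mean())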
4.4. Sample selection

The sample for the impact analysis includes offenders listed on the closure lists from fiscal years 1995–1996 through 1997–1998. The sample is 264 subjects, comprising 157 clients in the participant group (72 graduates and 85 dropouts) and a comparison group of 107 opt-outs. Although the DIRECT Project Closure List has a total of 298 offenders, the final sample of 264 reflects cases that were deleted because of double counting (project dropouts and readmits), cases missing in the computer files, or cases for which no crime information existed in the CJCIS database. The number of individuals included in each particular analysis varied due, in part, to incomplete data, particularly on the demographic characteristics of those who opted out of the program. The criterion date for determining recidivism is 1 year after leaving the DIRECT Project. Both project participants (graduates and dropouts) and the comparison group were subjected to the same standard.

5. Results

This section of the study reports certain basic descriptive characteristics of the sample, followed by a presentation of the indicators of recidivism, the primary outcome measure. Finally, an attempt is made to answer the impact question: to what extent can the results found be attributed to the DIRECT Project?

5.1. Sample characteristics

The sample in this study is predominantly white (60%), single (59%), male (69%), and mostly employed (78%). A little more than half have at least a high school diploma (54%) and have no children (51%). About 92% had no crime record in the year preceding the arrest that led to the DIRECT Project referral. Almost 86% did not have an arrest record 1 year after leaving the DIRECT Project, either as program graduates or dropouts. About 75% of the 38 people who were charged with a crime after exiting the DIRECT Project committed the crime leading to arrest/charges within 6 months. The criminal history of the sample appears to be dissimilar to that of other drug courts. This is not surprising, since the DIRECT Project was designed to target low-level drug abusers.

How do the backgrounds of the two groups, DIRECT Project participants and the comparison group, compare? To answer this question, sample characteristics (age, marital status, employment status, crime incidence, etc.) were compared for the subsets of DIRECT Project participants and the comparison group using chi-square analysis and t-tests to detect any differences in measured characteristics between the two groups. As Table 1 shows, background data on the DIRECT Project participants and the comparison group indicate that, with the exception of the mean age at which the DIRECT offense was committed, none of the differences observed were statistically significant.
Table 1
Descriptive statistics of demographic/socio-economic attributes by offender status (*P < 0.05)

Attributes                         DIRECT participants   Comparison group   Difference
Mean age at DIRECT                 28.07                 30.97              t = 2.41*
Mean educational level             11.82                 11.27              t = -1.33
Race (White = 1)                   59.5%                 40.5%              chi-square = 1.471
Gender (male = 1)                  58.2%                 41.8%              chi-square = 0.367
Employment status                  84.0%                 16.0%              chi-square = 1.73
Marital status (married = 1)       72.5%                 27.5%              chi-square = 2.60
Number of pre-DIRECT offenses      0.35                  0.36               t = 0.157
5.2. Recidivism

According to the USDOJ, recidivism rates reported by drug courts continue to range from 2 to 20%. Furthermore, in almost all jurisdictions, recidivism is substantially lower for participants who complete the drug court program and, to some extent, for those who do not complete the program as well. The rest of this section reports on how the program participants performed on these outcome indicators relative to the comparison group to give some indication of program effectiveness. The summary findings are reported in Table 2.
5.2.1. Rearrest rates

The result of matching the closure dates with arrest records showed that 38 of the 264 offenders, or 14.4%, were arrested and charged with a crime within 1 year after graduating from or dropping out of the DIRECT Project. The recidivists' ages ranged from 17 to 44, and almost 29% were 17–19 years old. Together, the 17–24 year olds make up about 58% of all the recidivists. The majority of the recidivists (55%) were rearrested for drug/alcohol-related offenses, with property offenses representing the next most prevalent offense type at 26%.

While the analysis shows the overall rearrest rate is 14.4%, the rate for DIRECT Project participants alone is 12.7% and the rate for the comparison group alone is 16.8%, a four percentage point difference. Of the 20 program participants who were rearrested after leaving the DIRECT Project, 18 (or 90%) were program dropouts. Simply stated, only two of the 72 DIRECT Project graduates were involved in a criminal offense leading to an arrest within 1 year after graduation. Considered on their own, only 2.8% of the DIRECT Project graduates committed crimes that led to recidivism. By contrast, 21.2% of the dropouts became recidivists during the criterion period. It is noteworthy that of the 18 dropouts who became recidivists, 61% (11) dropped out during Phase 1 of the treatment.

5.2.2. Bookings

There were 106 bookings during the 1-year period following exit from the DIRECT Project, representing 40.2% of the total sample. Of the 106 bookings, 63 (or 59% of all booking cases) involved DIRECT Project participants. Considering that there are 157 project participants, these 63 cases represent 40.1% of all DIRECT Project participants. As was shown in the rearrest case example above, dropout cases represent a substantial proportion of all program participant bookings. In this instance, the 52 program dropouts who were booked represent about 83% of the 63 participant booking cases.

5.2.3. Jail/prison days sentenced

A total of 19 offenders were sentenced to prison/jail for their post-DIRECT Project rearrests; 10 of them were DIRECT Project participants and nine were not. For the DIRECT Project participants, the average sentence for the convictions was 277 days, compared to 437 days for non-participants.

5.2.4. Time to arrest

The average number of months between termination from the DIRECT Project and the commission of a new offense was about 5.3 months for DIRECT participants and 8.2 months for the comparison group. This suggests that it took less time for participants to get involved in crimes leading to an arrest.

To summarize, analysis of the recidivism data shows that program participants performed better on a number of the indicators than the comparison group. In particular, the breakouts in the rearrest rates show that project graduates are the least likely to recidivate and that the longer a person stays in the program, the more likely the person is to benefit from the intervention. These findings generally conform with other studies reported (Belenko, 1998) and make it imperative that program retention and completion be made a focal point of drug court programs.¹

¹ A related study used logistic regression to examine the influence of client demographic and behavioral characteristics, such as prior criminal record, on the likelihood of successful project graduation. The study found that men are less likely than women to graduate from the DIRECT Project. Also, younger offenders and those with prior arrest records are less likely to graduate than older offenders or those with no previous arrest records. Finally, Hispanics and Asians are less likely to graduate than either White or African-American offenders in the program (Bavon, 1999).
Table 2
Independent t-test results for treatment and comparison group subjects on select variables

Recidivism variable                  DIRECT            Comparison   Standard    Effect    t        95% confidence
                                     participant mean  group mean   deviation   size(a)            interval (low, high)
All arrests 1 year post-DIRECT       0.14              0.19         0.46        -0.11     -0.869   (-0.15, 0.059)
Duration between DIRECT and arrest   5.3               8.2          2.20        -0.13     -1.136   (-0.80, 0.22)
Number of days sentenced             277               437          556.3       -0.38     -0.624   (-702, 381)
Bookings                             0.73              0.98         1.76        -0.14     -1.324   (-0.64, 0.13)

(a) The effect size was calculated using the spreadsheet developed by Dennis, Lennox and Foss (1997a).
5.3. DIRECT Project impact

To what extent can we attribute these results to the efficacy of the DIRECT Project? In other words, what is the impact of drug courts on the lives of clients compared to similar drug offenders? The results of an independent t-test of means to detect any significant differences between the two groups are presented in Table 2. Using the traditional null hypothesis testing approach, the results show that there is no statistically significant difference between the two groups on these indicators, which suggests that the observed differences may be attributable to chance.²

² An analysis was conducted to determine whether the DIRECT Project had any significant impact on post-DIRECT recidivism. Given the small to moderate differences observed on pre-DIRECT arrest rates, all between-group comparisons were accomplished with analysis of covariance (ANCOVA) on post-DIRECT recidivism, with the corresponding pre-DIRECT arrest rates as the covariate. The results were not statistically significant.

As noted earlier, the fact that program results do not show statistical significance does not mean they are not substantively important. The confidence intervals give additional information for detecting program effectiveness. In this study, the confidence interval is the likely range within which the difference between the means lies. All the 95% confidence intervals include zero and, therefore, the corresponding significance tests yield p-values higher than 0.05. Ordinarily, under traditional null hypothesis testing, this result would be deemed statistically insignificant and, by implication, dismissed outright. The confidence intervals, however, provide a substantive way to gauge program effect. In this example, 1 year after breaking ties with the DIRECT Project, an average of 14 out of 100 participants were rearrested while 19 out of 100 opt-outs were arrested. Thus, the mean difference is five per 100 (0.05), with a 95% confidence interval of -0.15 to 0.059. This means that at one extreme, the DIRECT Project might decrease the number of arrests by as many as 15 per 100; at the other extreme, it might increase recidivism by as many as six per 100.

Another way of measuring project effectiveness is the use of effect size. The effect size used in this study is Cohen's d, derived as the mean outcome for the participant group minus the mean outcome for the comparison group, divided by the standard deviation. The pooled standard deviation is used if the two groups have the same variance; otherwise the separate standard deviation is used. The effect size formula can be stated as follows:

Cohen's d = (Mt - Mc) / SDpooled

where Mt is the participant (treatment) group mean and Mc is the comparison group mean. For the difference between the two groups, an effect size lower than 0.20 can be considered trivial from a practical standpoint, one of 0.20 can be considered small but not trivial, one of 0.50 is moderate and readily noticeable, and one of 0.80 can be considered strong (Cohen, 1992; Kirk, 1996). Following this guideline, the effect of the DIRECT Project can be considered small at best.
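As a concrete check of this formula, the short sketch below reproduces the effect size for the 1-year rearrest indicator from the values reported in Table 2 (participant mean 0.14, comparison mean 0.19, standard deviation 0.46); the helper function and the use of Python are illustrative assumptions rather than part of the original analysis.

    # Illustrative calculation of Cohen's d as defined above:
    # d = (participant mean - comparison mean) / pooled standard deviation.
    def cohens_d(mean_treatment: float, mean_comparison: float, sd_pooled: float) -> float:
        """Standardized mean difference between treatment and comparison groups."""
        return (mean_treatment - mean_comparison) / sd_pooled

    # Values for the 1-year rearrest indicator as reported in Table 2.
    d_rearrest = cohens_d(0.14, 0.19, 0.46)
    print(round(d_rearrest, 2))  # about -0.11: below 0.20, hence trivial by Cohen's guideline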
The results show that the treatment (the DIRECT Project) reduces the 1-year recidivism rate by 0.11 standard deviations and the duration of time between the program and arrest for a new offense by 0.13 standard deviations, both relatively trivial effect sizes. The effect size for the number of days sentenced to prison or jail is -0.38, larger in magnitude than the other effects yet still relatively small by Cohen's guidelines. It is clear from this data analysis that the DIRECT Project has had minor substantive effects on key indicators of criminal recidivism, even if these results are not statistically significant.

6. Discussion

While this study produced results that are of potential practical significance, it is also clear that the intervention did not generate statistically significant differences. Lack of statistical significance is not unique to this study. Gottfredson et al. reported few statistically significant estimates for the differences in recidivism outcomes between program and comparison groups in the Baltimore City drug court study (Gottfredson et al., 1996, p. 17). Belenko noted that even though there were large differences in recidivism rates between participants and comparison groups in the Delaware Juvenile Drug Court, they were statistically insignificant. He attributes the lack of significance to what may be a reflection of the "small sample size for the drug court participants" (Belenko, 1998, p. 30). In their study of the Denver Drug Court, Granfield and Eby (1997) observed that the small sample of 100 cases from each cohort restricts the statistical power of the analysis and concluded that the results of the evaluation should be considered preliminary. Another study of the same program with a sample of 300 offenders also did not show a significant difference in rearrests and revocations (Granfield, Eby & Brewster, 1998). Finally, problems with recidivism comparisons based on small samples were discussed by Greenwood (1994), who indicated that samples of less than 200 treatment and comparison subjects lead to difficulty in drawing statistically accurate conclusions.

The methods literature suggests that such analytical outcomes are not unusual under such circumstances. According to Lipsey, "the basic dilemma is that high power requires large effect size, a large sample size, or both" (Lipsey, 1998, p. 65). Neither circumstance prevailed in this study. Given that the treatment and comparison groups came from the same cohort of low-level offenders, most of whom had had minimal brushes with the law prior to the offense that got them referred to the DIRECT Project, it is not surprising that there were relatively few differences in the characteristics of the two groups. In addition, the differences between the treatment and comparison groups on the post-DIRECT outcomes were not large. If the effect size is thought of simply as the difference between the means of the treatment and comparison populations, the size of that difference influences the likelihood of statistical significance: the larger the effect, the more probable statistical significance and the greater the statistical power. In this study, the effect sizes were relatively small, which contributed to the lack of statistical significance.

The second limiting factor is the relatively small sample size available for the study. After cleaning up the data, there were a total of 267 cases comprising both the treatment and comparison groups. This number was further limited by missing items on some key demographic and socio-economic variables necessary for some types of analyses. While the desired level of power for detecting any given effect can be attained by making the samples large enough, in this practical situation there were relatively few subjects available. Indeed, the sample in this study is actually the population of offender referrals to the DIRECT Project. Clearly, the DIRECT Project treatment group cannot be enlarged, since there were only a finite number of offenders who were eligible and opted to participate when offered the opportunity. While the size of the comparison group could be increased using other groups, there is the danger of selecting people who may not be directly comparable to the treatment group, creating selection bias problems that could pose a threat to the internal validity of the findings.

Finally, an additional related consideration is program newness. Belenko (1998) identifies at least two drug court programs that were evaluated at two periods with different results. The first evaluations, conducted during the early phase of the programs, showed no significant difference in recidivism rates between program and comparison groups. Later evaluations, with additional follow-up data and rearrest rates, did find significant recidivism effects. The DIRECT Project is relatively new, having been in existence for only 3 years. As reported earlier, none of the first-year participants graduated from the program. In addition, even though this evaluation sought to track post-DIRECT recidivism rates for 1 year after the break in DIRECT Project contact, some of the participants in fiscal years 1997–1998 had only about 5–6 months of follow-up data available at the time of the analysis, since the data for this evaluation had to be collected before the full year had elapsed.
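The power constraint discussed above can be made concrete with an a priori power calculation. The sketch below (not part of the original study) solves for the per-group sample size needed to detect a small effect (d = 0.20) with 0.80 power at a 0.05 alpha in a two-tailed test, which comes out close to the figure of at least 392 per group cited in the conclusions; the use of Python's statsmodels package is an assumption made for illustration.

    # Illustrative a priori power calculation for an independent-samples t-test:
    # subjects needed per group to detect a small effect (d = 0.20) with 80% power.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.20, power=0.80, alpha=0.05,
                                       ratio=1.0, alternative="two-sided")
    print(round(n_per_group))  # roughly 393 per group, far more than were available here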
7. Conclusions

This study examined the Tarrant County DIRECT Project to determine its effectiveness in reducing recidivism among program participants. It used several approaches to determine whether the DIRECT Project had an impact and what the size of that impact might be. Taking a broad perspective, it can be concluded that the program has made modest gains in achieving program goals. The results of the impact analysis, although not statistically significant, indicate that DIRECT Project participants were arrested/charged fewer times and were sentenced to less prison time on average than the non-participants. The substantive results show the DIRECT Project graduates doing very well on all the indicators of program success relative to project dropouts. It can be concluded that the program was more effective for this group of clients than for those who dropped out or opted out of it. Generally, the impact analysis findings compare favorably with the findings of other studies that have examined post-program recidivism for drug court graduates and comparison groups; those studies found much lower recidivism among graduates than among the comparison groups (Belenko, 1998).

The limitations identified suggest that this study should be considered an underpowered preliminary study with potentially unreliable effect sizes. Any future study to address this shortcoming needs to take into consideration Cohen's suggestion to use a power analysis approach with an appropriate sample size. For this study, the appropriate sample size would be at least 392 in each group to be able to detect a small population effect size at 0.80 power with a 0.05 alpha in a two-tailed test.

8. Recommendations

The limitations of the study notwithstanding, the outcome evaluation results suggest the DIRECT Project appears to be on track toward the desired effect of reducing recidivism among program participants. Within 5 years of program implementation, the DIRECT Project should have matured enough to answer the definitive question about the impact of the program. Future evaluation of a more mature project should furnish more compelling evidence of the efficacy of treatment for substance abusers in the criminal justice system.

A related issue is that even though the program appears to be meeting its objective, it is impossible to say exactly which program aspects, components, or activities contributed to the success story. An evaluation of program implementation and processes can be a useful tool for answering that question.

Finally, an important consideration is the need to incorporate an aftercare treatment component into the program. The literature suggests that aftercare treatment plays a significant role in ensuring that program participants continue to stay off drugs and away from criminal activity. The Tarrant County DIRECT Project currently does not have an aftercare program, and as such, program graduates do not get the nurturing and reinforcement they need to make that clean break. In addition, it is difficult to determine the long-term effect on one of the goals of drug courts: staying drug-free. Under the current arrangements, graduates could potentially continue to use drugs so long as they manage to avoid arrest and indictment for crime.

Acknowledgements

The author wishes to thank the three anonymous reviewers and Professors Robert Bland and Charldean Newell of the Department of Public Administration, University of North Texas, for their helpful comments. The author
also acknowledges funding provided by the Tarrant County Administration as well as the support of Cora Moseley, Les Smith, and program managers of the Tarrant County DIRECT Project. The views expressed in this report do not necessarily reflect the official position of the Tarrant County Administration.
References

Bavon, A. (1999). The Tarrant County D.I.R.E.C.T. Project evaluation final report. Fort Worth, TX: Tarrant County Administration.
Bell, M., Murray, C., McGee, S., & Rinaldi, L. (1998). King County drug court evaluation: Final report. Unpublished manuscript.
Belenko, S. (1998). Research on drug courts: A critical review. New York: The National Center on Addiction and Substance Abuse, Columbia University.
Belenko, S., Fagan, J., & Dumanovsky, T. (1994). The effects of legal sanctions on recidivism in special drug courts. The Justice System Journal, 17, 53–82.
Borenstein, M. (1999). The case for confidence intervals in controlled clinical trials. Chicago, IL: SPSS Inc. Retrieved May 28, 1999 from the World Wide Web: http://www.spss.com/cool/papers/borenstein.htm
Campbell, D., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Skokie, IL: Rand McNally College Publishing Company.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago, IL: Rand McNally College Publishing Company.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112 (1), 155–159.
Criminal Justice Policy Council (1996). Recidivism as a performance measure: The record so far. Austin, TX: State of Texas.
Criminal Justice Policy Council (1992). Recidivism in the Texas criminal justice system: Sentencing dynamics study report 5. Austin, TX: State of Texas.
Dennis, M. L., Lennox, R. D., & Foss, M. A. (1997a). Power analysis worksheet. http://www.chestnut.org/LI/downloads/index.html
Dennis, M. L., Lennox, R. D., & Foss, M. A. (1997b). Practical power analysis for planning substance abuse prevention and services research. In K. J. Bryant, M. Windle & S. G. West, Recent advances in prevention research methodology: Lessons from alcohol and substance abuse research. Washington, DC: American Psychological Association.
Deschenes, E. P., Turner, S., & Greenwood, P. (1995). Drug court or probation? An experimental evaluation of Maricopa County's drug court. The Justice System Journal, 18 (1), 55–73.
Deschenes, E. P., & Greenwood, P. (1994). Maricopa County's drug court: An innovative program for first-time drug offenders on probation. The Justice System Journal, 17 (1), 99–116.
Drug Court Program Office (1997). 1997 Drug court survey report: Executive summary. Washington, DC: US Department of Justice.
Finigan, M. (1998). An outcome program evaluation of the Multnomah County S.T.O.P. diversion program. Unpublished manuscript.
Fitz-Gibbon, C. T., & Morris, L. L. (1987). How to design a program evaluation. Newbury Park, CA: Sage Publications.
Flewelling, R. L., Rachal, J. V., & Marsden, M. E. (1992). Socioeconomic and demographic correlates of drug and alcohol use. Rockville, MD: National Institute on Drug Abuse.
Goldkamp, J. S., & Weiland, D. (1993). Assessing the impact of Dade County's Felony Drug Court: Final report. Washington, DC: US Department of Justice, Office of Justice Programs, National Institutes of Justice.
Gottfredson, D. C., Coblentz, K., & Harmon, M. (1996). A short-term evaluation of the Baltimore City Drug Treatment Court program. Unpublished manuscript.
Granfield, R., Eby, C., & Brewster, T. (1998). An examination of the Denver Drug Court: The impact of a treatment-oriented drug offender system. Law and Policy, 20 (2), 183–202.
Granfield, R., & Eby, C. (1997). An evaluation of the Denver Drug Court: The impact of a treatment-oriented drug offender system. Unpublished manuscript.
Greenwood, P. W. (1994). What works with juvenile offenders? A synthesis of the literature and experience. Federal Probation Quarterly, 58 (4), 63–67.
Harwood, H., Fountain, D., & Livermore, G. (1999). The economic costs of alcohol and drug abuse in the United States, 1992. Rockville, MD: US Department of Health and Human Services.
Kellow, J. T. (1998). Beyond statistical significance tests: The importance of using other estimates of treatment effects to interpret evaluation results. American Journal of Evaluation, 19 (1), 123–134.
Kenny, D. A. (1975). A quasi-experimental approach to assessing treatment effects in the nonequivalent control group design. Psychological Bulletin, 82 (3), 345–362.
Kirk, R. E. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56, 746–759.
Lipsey, M. W. (1998). Design sensitivity: Statistical power for applied experimental research. In Handbook of applied social research methods. Thousand Oaks, CA: Sage Publications.
Liu, L. Y. (1998). Economic costs of alcohol and drug abuse in Texas: 1997 update. Austin, TX: Texas Commission on Alcohol and Drug Abuse.
Martinez-Pons, M. (1999). Statistics in modern research: Applications in the social sciences and education. Lanham, MD: University Press of America.
Mohr, L. B. (1995). Impact analysis for program evaluation (2nd ed.). Thousand Oaks, CA: Sage Publications.
Posavac, E. J. (1998). Toward more informative uses of statistics: Alternatives for program evaluators. Evaluation and Program Planning, 21, 243–254.
Reichardt, C. S. (1979). The statistical analysis of data from nonequivalent group designs. In T. D. Cook & D. T. Campbell, Quasi-experimentation: Design and analysis issues for field settings. Chicago, IL: Rand McNally College Publishing Company.
Smith, B., Lurigio, A., Davis, R., Estein, S., & Popkin, S. (1994). Burning the midnight oil: An examination of Cook County's night drug court. The Justice System Journal, 17, 41–52.
Tauber, J. S. (1993). The importance of immediate and intensive intervention in a court-ordered drug rehabilitation program: An evaluation of the F.I.R.S.T. Diversion Project after two years. Oakland, CA: Municipal Court, Oakland-Piedmont-Emeryville Judicial District.
Trochim, W. (1998a). The statistical analysis of the nonequivalent control group design. Retrieved May 28, 1999 from the World Wide Web: http://trochim.human.cornell.edu/kb/statnegd.htm
Trochim, W. (1998b). T-test for differences between groups. Retrieved May 28, 1999 from the World Wide Web: http://trochim.human.cornell.edu/kb/stat_t.htm
US General Accounting Office (1997). Drug courts: Overview of growth, characteristics, and results (GAO/GGD-97-106). Washington, DC: US General Accounting Office.
US Department of Justice, Office of Justice Programs (1998). Looking at a decade of drug courts. Washington, DC: US Department of Justice.
US Department of Justice, National Institutes of Justice (1996). Drug use forecasting: Annual report on adult and juvenile arrestees. Washington, DC: US Department of Justice.
Van Stelle, K. R., Mauser, E., & Moberg, D. P. (1994). Recidivism to the criminal justice system of substance-abusing offenders diverted into treatment. Crime and Delinquency, 40 (2), 175–196.