Drug and Alcohol Dependence 53 (1999) 125 – 145
Measuring interstate variations in drug problems

William E. McAuliffe *, Richard LaBrie, Nicoletta Lomuto, Rebecca Betjemann, Elizabeth Fournier

Department of Psychiatry, Harvard Medical School, National Technical Center for Substance Abuse Needs Assessment, North Charles Research and Planning Group, 875 Massachusetts Avenue, 7th Floor, Cambridge, MA 02139, USA

Received 6 February 1998; accepted 22 June 1998
Abstract

This article describes the Drug Problems Index (DPI), a composite index measuring the interstate severity of drug abuse problems. The DPI's components (drug-coded mortality, drug-defined arrest, and drug-treatment client rates) were selected because they were linked closely with drug abuse, data were available for all states, and there was published evidence of their validity. The variables were reliable, and their convergent validity was estimated in a multi-trait, multi-method matrix. We found evidence consistent with the DPI's construct validity in its relations with other consequences of drug abuse. The DPI correlated significantly with the Block Grant drug need index but not with model estimates of drug dependence based on the National Household Survey. © 1999 Elsevier Science Ireland Ltd. All rights reserved.

Keywords: Drug problem severity; Social indicators; Interstate variations; Synthetic estimation; Drug surveys; Validation methods
1. Introduction This article describes the development and validation of the Drug Problems Index (DPI), a composite of drug abuse indicators designed to compare states with regard to their drug abuse and dependence problems. A measure of the quality of life (Cagle and Banks, 1986), the index’s components include indicators theoretically linked to drug abuse and validated in previous research. The study examines the convergent and construct validity of the DPI.
1.1. Existing measures of interstate drug problems and evidence of their validity

Few measures have focused on interstate variations, and no measure has won wide acceptance as valid for this purpose (DeWit and Rush, 1996). Existing measures include a population-based index, survey-based
measures, social indicators, and a new approach that combines all three.

The current Block Grant formula uses a population-at-risk measure of drug abuse. Congress has mandated that the Substance Abuse Prevention and Treatment (SAPT) Block Grant ($1.3 billion in 1997) be allocated on the basis of relative need (Substance Abuse Funding News, 1997a). The drug abuse measure assesses a state's need for prevention and treatment services by the proportion of the country's population at risk of drug abuse that lives in the state. The population at risk of drug abuse is defined as the total number of people aged 18–24 plus the number aged 25–64 who reside in urbanized areas (Burnam et al., 1997, p. 7). Because this measure correlates strongly with a state's total population size (r = 0.99), questions can be raised about whether the formula adequately reflects the variations among state populations in their rates of drug abuse and dependence. Early epidemiological research found wide variation in drug abuse rates among small-area populations (e.g. Chein et al., 1964; Nurco and Balter, 1969).
0376-8716/99/$ - see front matter © 1999 Elsevier Science Ireland Ltd. All rights reserved. PII S0376-8716(98)00123-9
The only national surveys that currently estimate substance use rates at the state level are the Centers for Disease Control and Prevention's (CDC) risk factor surveys. The Behavioral Risk Factor Surveillance System's (BRFSS) telephone surveys of adults measure only alcohol use and limited aspects of alcohol abuse (e.g. driving after drinking) for every state (Bradstock et al., 1985, 1987, 1988; Liu et al., 1997), and the Youth Risk Behavioral Surveillance System's (YRBSS) school surveys estimate adolescent use of alcohol and several illicit drugs in 45 states (CDC, 1991; Kolbe et al., 1993; CDC, 1995a; Gast et al., 1995; Kann et al., 1995, 1996). Few studies have attempted to validate the substance use estimates produced by these surveys, and the findings have been mixed (e.g. Thompson et al., 1993; Gast et al., 1995; Stein et al., 1995).

In a recent evaluation of the current Block Grant's allocation formula, Burnam et al. (1994, 1997) developed a measure of state rates of drug dependence using a method known as 'synthetic estimation' (Rhodes, 1993). Noting the lack of a widely accepted benchmark of substance abuse at the interstate level, the authors estimated a logistic regression model relating demographic variables to drug dependence measurements in the National Household Survey on Drug Abuse (NHSDA). Burnam et al. applied the resulting equation to individual-level state census data to obtain state-level synthetic estimates. Although the investigators carefully evaluated the statistical properties of their model and the financial impact of substituting their synthetic estimates for the current Block Grant need formula, Burnam et al. (1997, pp. 56–58) devoted little attention to evaluating the validity of the synthetic estimates. Other researchers have questioned the validity of synthetic estimates (Furst et al., 1981; Ciarlo et al., 1992; DeWit and Rush, 1996; Folsom et al., 1996, p. 65; Folsom and Judkins, 1997, pp. 1–21).
Responding to policy makers' need for state-level drug use estimates, the federal government has recently awarded a 5-year, $192 million contract to develop survey-based state-level estimates of substance use and abuse. The expanded NHSDA will include a minimum of 1000 interviews in every state, and the sampling plan will be modified for estimating state-level parameters. The NHSDA contractors will use a newly developed statistical model to estimate state rates (Folsom et al., 1996; Substance Abuse Funding News, 1997c; SAMHSA, 1997; US Department of Commerce, 1998).

Until recently, the NHSDA estimated drug use prevalence for only California (Gfroerer and Brodsky, 1991). To expand the estimations to more states, Folsom et al. (1996) developed a 'survey weighted empirical Bayesian' model. The model used data from the 26 states that had the largest number of cases (300 or more) in the combined NHSDA samples for 1991–1993. The statistical methodology also used county-
level indicator data (drug treatment clients, arrests, and alcohol deaths) and census block group-level demographic data (Folsom et al., 1996, p. 17). Folsom et al. (1996) used the state model to estimate alcohol and drug use in the last month, past-year drug dependence and alcohol dependence rates, and past-year treatment utilization and treatment need. According to the NHSDA request for proposals (SAMHSA, 1997), the survey has five major purposes, including 'assisting federal, state, and local agencies in the allocation of resources' (p. 7). The Office of National Drug Control Policy (ONDCP) plans to use the NHSDA as its measure of success in 'reducing the public treatment gap' (ONDCP, 1998, p. 105), and the Center for Substance Abuse Treatment recently informed states that they should determine how they would use the state-level NHSDA estimates in their future state treatment needs assessments (SAMHSA, 1998).

Despite extensively evaluating the model's statistical properties, Folsom et al. (1996) conducted limited assessments of the validity of its 26 state estimates. The model estimates for past-month alcohol use correlated 'over 0.85' with BRFSS telephone survey estimates of the same variable, but estimates for past-year drug abuse treatment admissions did not correlate significantly (rank-order correlation = 0.38) with average-length-of-stay-adjusted treatment client rates from the 1992 and 1993 National Drug and Alcohol Treatment Unit Survey (NDATUS). Similarly, the model estimates of recent arrests (including drug and non-drug offenses) failed to correlate significantly with the Uniform Crime Reports (UCR) past-year arrest statistics, corrected for multiple arrests (rank-order correlation of 0.35). Although Wilson et al. (1983) suggested that survey and indicator methodologies may measure different aspects of the overall problem, Folsom et al. (1996, pp.
3–33) argued that the lack of agreement between the survey-based estimates and the arrest and treatment indicators is difficult to interpret because the validity of the indicators is open to question.

Concerns about the validity of substance abuse indicators are long-standing (see Gruenewald et al., 1997, and DeWit and Rush, 1996, for reviews). DeFleur (1975) questioned the reliability of drug-arrest-based indicators as a result of her interviews with Illinois police. The GAO (1990) and Bennett (1995) noted that variations over time and across police departments in arrest definitions and organizational priorities may reduce the validity of arrest statistics. Underestimation of drug arrests results because UCR reporting procedures count an arrest as drug related only when the drug charge was the most serious crime for which the person was arrested (see Appendix A). The GAO also noted shortcomings in treatment utilization statistics (e.g. from NDATUS and NASADAD) and the overlap between arrest and treatment data
because the criminal justice system refers many clients to treatment. Shai (1994) and Pollock et al. (1991) pointed out examples of inconsistencies in the coding of substance-abuse-related deaths. Many authors (e.g. DeFleur, 1975) have suggested that indicator statistics may reflect changes in funding and activity levels of criminal justice and public health organizations that do not reflect, directly or indirectly, changes in the prevalence of drug abuse.

We found only two studies that investigated the use of indicators to measure drug abuse variations among states. Ball and Chambers (1970) ranked states using FBI drug arrest statistics and drug treatment admission rates at the Lexington and Fort Worth Public Health Service Hospitals. The two measures correlated 0.80 (our calculation). Ford (1984, 1985) used census and arrest indicators (e.g. percent nonwhite, percent nonwhite juvenile arrests) as independent variables in a series of regression models of 1980 utilization statistics (met demand) for each of five drug abuse treatment modalities in the states. The models explained a significant amount of variance, but the author did not cross-validate the model estimates against other measures of drug abuse.

Most of the evidence regarding the validity of substance abuse indicator data comes from studies of substate areas or from time-series studies that may not generalize to interstate measurement (e.g. Person et al., 1977; Frank et al., 1978; Cleary, 1979; Flaherty et al., 1983; Beshai, 1984; Woodward et al., 1984; Crider, 1985; Wilson and Hearne, 1985, 1986; Schlesinger et al., 1993; Schlesinger and Dorwart, 1992; Simeone et al., 1993; Beenstock, 1995; Sherman et al., 1996; Mammo and French, 1996; Pampalon et al., 1996). Thus, despite the need for a validated measure of interstate variations in drug abuse problems, there is little evidence that existing measures possess validity. Given substantial state concerns about the fairness of federal block grant allocations (e.g.
Substance Abuse Funding News, 1997b), the lack of a validated measure of state drug service requirements is a major drawback.
1.2. Validation of a social indicator approach

We will attempt to meet this need with a measure based on arrest, treatment, and mortality statistics. To evaluate concerns about the validity of these measures, we will assess the indicators using several standard validation methodologies, including convergent, discriminant, and construct validation (e.g. Cronbach and Meehl, 1967; Nunnally, 1978; McAuliffe, 1984; Cagle and Banks, 1986; Ciarlo et al., 1992). Measurement specialists (e.g. Nunnally, 1978) define 'validity' as the proportion of the variance of measurements that reflects the concept of interest, and as such validity is a continuous variable ranging from 0 to 100%. Identifying examples of error in an indicator or inconsistencies
between two measures of the same phenomenon does not by itself reveal the extent of invalidity that results from those errors when the indicator is used for a particular purpose. That is, it is necessary to assess what proportion of the measurements' variance is due to random or systematic error. For example, while the undercounting of drug arrests by the UCR could bias the absolute levels, this error could be sufficiently constant across states that it has little impact on the validity of the variance of arrest statistics as a measure of interstate differences.

In this effort, we will test the following hypotheses:
1. A reliable and substantially valid measure of drug abuse problem variations among states can be developed using indicator data.
2. Variations in drug abuse problem rates among states are substantial and are incompletely reflected by a measure such as the Block Grant need index that is based solely on a state's age structure and urbanicity.
3. The Drug Problems Index scores lead to different conclusions than do the drug dependence estimates from survey-based methods.
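The point that a constant measurement bias need not harm interstate comparisons can be illustrated with a brief simulation. The rates below are hypothetical, and the constant undercount of 40 per 100,000 is an arbitrary assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" drug-defined arrest rates for 50 states.
true_rates = rng.uniform(50, 750, size=50)

# Suppose reporting procedures undercount every state's rate by a
# constant 40 per 100,000: absolute levels are biased downward...
observed = true_rates - 40.0

# ...but the variance and the correlation with the true rates are
# unchanged, so validity *as a measure of interstate differences*
# is preserved even though the levels are wrong.
assert np.isclose(np.var(observed), np.var(true_rates))
assert np.isclose(np.corrcoef(observed, true_rates)[0, 1], 1.0)
```

A bias that varied from state to state, by contrast, would attenuate the correlation and reduce validity for interstate comparison, which is why the analyses below focus on the consistency of the indicators across states rather than on their absolute levels.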
2. Methods

Because no 'gold standard' exists for measuring the severity of drug abuse problems among states, our methodology emphasizes validity in both the construction and assessment of the DPI. First, we employed theory and empirical evidence of validity to select substantially reliable and valid component variables. Inevitably, all measurement hinges on theoretical assumptions regarding the correspondence between the candidate measure(s) and the concept of interest. Accordingly, we selected only variables that we termed 'drug-defined' or 'drug-coded,' where the original data collection process clearly identified the presence of drug abuse or the most closely associated problem behaviors. For example, in our index we included only 'drug-defined arrests' (possession, sales) rather than all arrests, or even those general arrests (e.g. prostitution or burglary; see below) in which large percentages of arrestees are drug users.
2.1. Reliability and validity of index components

Empirical validation of the DPI included assuring and assessing the reliability of the index's components as well as the reliability of the composite index itself, for reliability sets an upper limit on validity (Nunnally, 1978). To minimize data errors, we inspected the raw counts and rates of multiple years of the indicator data. In a few cases (see Appendix A), there were obvious outliers, e.g. a state's rate in a particular year varied
from its rate in other years far more than was typical. In such cases, we called the state or federal agency that collected or processed the data; if the value was in error and a corrected value was available, we used the corrected value. To minimize the impact of undetected or uncorrectable annual variations, we summed 3 years (1991 to 1993) of data for every variable. We calculated Alpha (Cronbach, 1951) to assess the reliability of the 3 years of data for each variable (e.g. drug-defined arrests).

We investigated whether the theoretically selected, empirically reliable variables reflected the same underlying concept by assessing their 'convergent' and 'discriminant' validity. To do so, we examined a multi-trait, multi-method (MTMM) correlation matrix that included parallel measures of both drug and alcohol abuse problems (Campbell and Fiske, 1959; Scherpenzeel and Saris, 1997). Validity is evident if the drug abuse measures correlate more highly with each other than with the alcohol abuse measures. The statistical analysis also determined whether the correlations might be due to 'method effects' (e.g. similarities between measurements that are based on the same data collection method). Although we designed our composite index to 'average out' method effects by including just one measure from each of as many different data collection methods as possible, the success of this strategy depends on how free of overlap the different data collection methods are.
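As a sketch of the reliability step, Cronbach's Alpha for 3 years of a single indicator can be computed directly from its definition. The data and helper name below are illustrative, not the study's:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Alpha for an (n_states x k_items) matrix; here the 'items'
    are the 1991, 1992, and 1993 rates of one indicator."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three hypothetical, highly consistent annual rates for five states.
rates = np.array([
    [57, 60, 55],
    [290, 285, 300],
    [316, 320, 310],
    [733, 720, 740],
    [166, 170, 160],
], dtype=float)

alpha = cronbach_alpha(rates)
assert alpha > 0.95  # stable year-to-year rates yield high reliability
```

Because the interstate differences dwarf the year-to-year fluctuations within each state, Alpha approaches 1.0, which is the pattern Table 2 reports for all three DPI components.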
2.2. Index construction

We created the unweighted composite DPI by summing the variables' z-scores using unit (1.0) nominal weights (i.e. Z1 + Z2 + Z3). Converting to z-scores eliminates each variable's unit of measurement and equalizes the variances. The effective weight of each variable in the composite (i.e. the proportion of the variance of the composite that it explains) equals the sum of its correlations with itself (1.0) and with the other variables in the composite, divided by the variance of the composite (Nunnally, 1978; see values in Table 3). To aid interpretation of the DPI scores, we scaled them so that the lowest state value was 0 and the highest state value was 100.
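The construction steps above can be sketched as follows. The state rates are synthetic; only the procedure (z-scoring, unit weighting, effective weights, and 0–100 rescaling) mirrors the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-state rates for the three components (50 states),
# built to be positively intercorrelated, as in Table 3.
arrests = rng.gamma(4, 80, 50)
clients = 0.8 * arrests + rng.normal(0, 60, 50)
mortality = 0.01 * clients + rng.normal(0, 0.8, 50)

X = np.column_stack([arrests, clients, mortality])

# 1. Convert each component to z-scores (removes units and variance).
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# 2. Unweighted composite with unit nominal weights: Z1 + Z2 + Z3.
composite = Z.sum(axis=1)

# 3. Effective weight of component i = row sum of the correlation
#    matrix divided by the variance of the composite (Nunnally, 1978).
R = np.corrcoef(Z, rowvar=False)
eff_weights = R.sum(axis=0) / composite.var(ddof=1)
assert np.isclose(eff_weights.sum(), 1.0)  # weights partition the variance

# 4. Rescale so the lowest state scores 0 and the highest scores 100.
dpi = 100 * (composite - composite.min()) / (composite.max() - composite.min())
assert dpi.min() == 0.0 and dpi.max() == 100.0
```

Note that equal nominal weights do not imply equal effective weights: a component that correlates more strongly with the others contributes a larger share of the composite's variance.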
2.3. Validation of the composite index

To assess the resulting composite index scores, we employed 'construct validation' (Cronbach and Meehl, 1967). According to this method, the validation of a set of drug problem measurements stems from their successful use in testing plausible or widely-held substantive hypotheses regarding drug abuse problems. We related the DPI scores to the state rates of other drug abuse problem measures, including contagious diseases (IV-AIDS, hepatitis, TB, and syphilis), property crimes (robbery, burglary, and prostitution), and living situations (homelessness and incarceration) that research has shown are caused in part by drug abuse. Although the theoretical connections between drug abuse and these variables did not meet our standard for including them in the DPI itself, we used the relationships in construct validation analyses.
2.4. Comparison of the DPI with existing measures of drug abuse severity

After assessing the construct validity of the index, we compared it to the SAPT Block Grant formula's Drug Need measure, the NHSDA-based synthetic estimates developed by Burnam et al. (1994, 1997), and the Bayesian model estimates developed by Folsom et al. (1996).
2.5. Data sources

We describe the data sources for this study and related data quality issues in Appendix A.
3. Results
3.1. Selection of component indicators

Based on the theoretical criteria described above, data availability for all states, and evidence of validity from the literature, we selected the rates per 100,000 of Drug-coded Mortality (cases with a drug-coded multiple cause), UCR Drug-defined Arrests, and NDATUS Drug-treatment-only Clients as the constituent indicators of the DPI. Previous studies have reported evidence of the validity of treatment utilization, arrest, and mortality statistics as indicators of drug abuse problems (Ball and Chambers, 1970; Person et al., 1976; Frank et al., 1978; Woodward et al., 1984; Schlesinger and Dorwart, 1992; Simeone et al., 1993; Beenstock, 1995). Recent studies of arrestees for drug possession have found that 86% of males and 84% of females tested positive for drug use (Schlaffer, 1997; also see Craddock et al., 1994); the figures for drug sales were 76% and 80% for males and females, respectively. Despite high levels of underreporting of recent drug use by arrestees, Swartz (1996) found that 45% of Illinois arrestees for all charges self-reported symptoms that meet DSM-III-R criteria for a lifetime diagnosis of drug abuse or dependence.

We selected the NDATUS client rate over the NASADAD admissions statistics as the treatment utilization measure because NDATUS counts individuals who are actively enrolled in treatment on a single day in both public and private facilities, whereas NASADAD counts annual admissions in only publicly funded facilities. The number of admissions over the
Table 1
Correlations of NDATUS treatment client rate with other measures of treatment utilization

                                                           NDATUS drug-only  NASADAD drug     NHSDA direct estimates of
                                                           client rate       admissions rate  past-year drug treatment
NDATUS Drug-treatment-only Client Rate                     —
NASADAD drug admissions rate                               0.47*             —
NHSDA direct estimates of past-year drug treatment rate    0.28              0.04             —
Folsom et al. model estimates of past-year drug
  treatment rate                                           0.23              0.03             0.54*

* P<0.05.
course of a year could reflect multiple events for the same individuals, and NASADAD data were missing for one or more years from several states (Washington, Oregon, and Wyoming).

To assess the relative validity of the NDATUS and NASADAD measures, we compared them to each other, to the direct NHSDA survey estimates of past-year drug treatment (Folsom et al., 1996), and to the model estimates of past-year treatment developed by Folsom et al. (1996) (Table 1). The NDATUS measure correlated most strongly with the other three measures. Despite the differences between the NDATUS client measure and the NASADAD admissions statistics, the two correlated significantly (0.47). The NHSDA direct estimates of the number of people who reported having received treatment in the past year and the related Folsom et al. model estimates did not correlate significantly with the NDATUS drug-only client rates. As noted earlier, although Folsom et al. (1996) used a length-of-stay-adjusted version of the NDATUS rates of drug-only and drug-plus-alcohol clients, the authors found no significant rank-order correlation between the direct NHSDA past-year treatment estimates and their NDATUS client measure. The NASADAD admission statistics and the two survey-based estimates were effectively uncorrelated.

The NDATUS clients are categorized into three groups: alcohol only (35% nationally), alcohol and drugs (40%), and drugs only (25%). On the basis of theoretical and empirical analysis, we selected the NDATUS statistics on drug-only clients rather than the combined drug-only plus drug-and-alcohol treatment clients as our drug-treatment-based measure.
The NDATUS statistics revealed that providers in many states (especially Massachusetts, Alaska, New Hampshire, Nebraska, and Texas) were more likely to use the drug-and-alcohol category than the drug-only category, whereas providers in a few states (Alabama, Arizona, New Mexico, New York, Rhode Island, and California) were more likely to use the drug-only category. Years of clinical experience working in several of these states led us to hypothesize that many of the clients in the drug-and-alcohol category may have had alcohol use disorders but were only users of illicit drugs rather than persons who met standard criteria for drug abuse or dependence. Although NDATUS defines this category adequately in the glossary that accompanies its survey questionnaire, even experts sometimes use the term 'drug abuse' to refer to any use of illicit drugs, whereas they reserve the term 'alcohol abuse' for excessive use that results in symptoms and is likely to require treatment (e.g. GAO, 1990, p. 12).

The empirical behavior of the drug-only client measure and the combined client measure supported our hypothesis. Although the two measures correlated substantially with each other (0.79, P<0.05), the correlation of drug arrest rates with drug-only client rates (0.63) was significantly greater than the correlation of arrest rates with the combined client rates (0.42). Drug mortality rates correlated significantly more strongly with the drug-only client rate (0.89) than with the combined client rate (0.70), and IV-AIDS statistics correlated significantly more strongly with the drug-only client rate than with the combined client rate (0.76 versus 0.57). The drug-only client rate correlated slightly more strongly with the NHSDA model estimates of past-year drug treatment than did the combined measure (0.28 versus 0.23), but the difference was not significant. The drug-only client rate correlated significantly less strongly than the combined client measure with NASADAD drug-related admissions (0.47 versus 0.62).

We decided against including IV-AIDS statistics as a drug-defined morbidity indicator in the DPI because data were missing for some states, IV-AIDS statistics reflect the prevalence of injection drug users only, and the rate of AIDS among injection drug users (IDUs) varies substantially across regions (e.g. the seroprevalence rates among IDUs on the West coast are much lower than the rates among IDUs on the East coast) (LaBrie et al., 1992).
Beginning in 1993, the CDC expanded tuberculosis surveillance efforts to include individual state data on the number of TB cases that reported injection drug use and non-injection drug use as risk factors. Unfortunately, only 17 states reported these data in the years of interest, and as of 1997 only
Table 2
Descriptive characteristics and reliability of DPI components

                                           Min   Median  Mean  Max   S.D.  r91,92, r92,93, r91,93  Reliability of 3-year composite
                                                                                                   (Cronbach's Alpha)
Drug-defined arrest rate/100 000           57    290     316   733   166   0.89a, 0.98, 0.87a      0.97a
Drug-treatment-only client rate/100 000    9     49      63    269   49    0.94, 0.97, 0.92        0.98
Drug-coded mortality rate/100 000          0.3   1.9     2.5   12.0  2.1   0.94, 0.97, 0.88        0.96

a Estimated values for four states were omitted from these calculations in order to avoid overestimating reliability.
40 states reported data according to the expanded definitions. Thus, while this measure has potential for future use, we could not include it in the DPI in the present analysis.
3.2. Descriptive characteristics, reliability, and validity of the index's components

All three of the selected components were somewhat skewed, but they were reliable, and the cases were sufficiently numerous to measure the extent of drug abuse (Table 2). New York had higher rates than the rest of the states in each case. Rhode Island's treatment client rate and California's drug arrest rate also stood out. As expected, there were far fewer drug-coded deaths than arrests or admissions at any one time, and for all three components the correlations between 1991 and 1993 were slightly lower than the correlations among adjacent years. All three indicators had Alphas (Cronbach, 1951) exceeding 0.95.

Intercorrelations among the three components agreed with the convergent validity hypothesis that the selected variables measured the underlying concept of drug problem severity. The Pearson correlations among the three drug measures exceeded 0.59 in every case, and there was little difference between the Pearson product-moment and Spearman rank-order correlations. The drug-only treatment client rate correlated more strongly with the drug-coded mortality rate (r = 0.89) than with drug-defined arrests (0.63), even though officials refer many arrestees to treatment. The Pearson correlations among the three parallel alcohol measures were positive, above 0.28, and significant in all cases. The average Pearson correlation among the three drug measures was 0.71, substantially greater than the average of 0.41 among the three alcohol measures.

'Method effects' existed only in the treatment client data. The rates of NDATUS drug-only clients correlated significantly (0.46) with the rates of NDATUS alcohol-only clients, even though we removed treatment clients with both drug and alcohol problems from both measures. The other two 'method' correlations, between drug and alcohol mortality rates and between drug and alcohol arrest rates, were not significant (Table 3).
The remaining correlations in Table 3 are among variables that share neither the substance nor the method, and therefore these correlations reflect the amount of overlap among the different methods and different substances. The six Pearson cross-correlations between measures with both different methods and different substances (e.g. alcohol deaths with drug arrests) averaged −0.015. Two of these Pearson correlations were significant: NDATUS alcohol treatment client rates correlated significantly with drug-coded mortality, while the DUI arrest rate correlated significantly negatively with the NDATUS drug-only client rate. The parallel Spearman rank-order correlations were in the same direction.

The unique characteristics of the states may partly explain the lack of perfect agreement among the DPI's three indicators. For example, high death rates parallel arrests and admissions for only some drugs. Nebraska, Wyoming, South Dakota, Iowa, and North Dakota had low rates of drug-related deaths. In those states, 70–80% of the drug-related arrests were for marijuana (GAO, 1990), and their treatment admission rates between 1991 and 1993 were especially low for opiates and cocaine. States with the highest proportion of arrests for opiates, synthetic narcotics, and cocaine, such as New York, California, New Jersey, and Maryland, tended to have relatively high death rates (GAO, 1990). New Mexico was an exception to this explanation: it had the second highest drug-related mortality rate but few arrests, and most of them were for marijuana violations. When we regressed the drug-coded mortality rates on the marijuana, cocaine, and opiate admission rates, we found that the opiate and cocaine admissions had positive relationships with mortality, but only the opiate admissions were significant. Marijuana admissions had a negative regression weight, but it was not significant.
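The drug-mix regression described above can be sketched as an ordinary least-squares fit. The data below are synthetic and constructed so that opiate admissions drive mortality, so the coefficients are illustrative only, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50  # states

# Hypothetical admission rates per 100,000 by drug type.
marijuana = rng.gamma(5, 20, n)
cocaine = rng.gamma(3, 15, n)
opiates = rng.gamma(2, 10, n)

# Mortality driven mainly by opiates, somewhat by cocaine, and
# not at all by marijuana (by construction).
mortality = 0.05 * opiates + 0.01 * cocaine + rng.normal(0, 0.3, n)

# OLS: mortality ~ intercept + marijuana + cocaine + opiates.
X = np.column_stack([np.ones(n), marijuana, cocaine, opiates])
beta, *_ = np.linalg.lstsq(X, mortality, rcond=None)

# The opiate coefficient dominates the marijuana coefficient,
# mirroring the qualitative pattern the authors report.
assert beta[3] > beta[1]
```

A mortality indicator will therefore track arrest and admission indicators closely only where the underlying drug mix involves substances with high fatality rates, which is the point the state comparisons above illustrate.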
States also vary in how they have chosen to respond to drug abuse: some states appear to stress a criminal-justice approach, whereas others seem more likely to take a treatment-oriented approach. In 13 states, there were more NASADAD treatment admissions for drug abuse in 1991–1993 than there were arrests for drug-defined offenses during the same period. In four states, the admission and arrest rates were about the same, and in the remaining 30 states that provided data there was a higher rate of arrests than admissions. Tennessee, Nevada, and Mississippi had three times as many arrests as admissions. Substance abuse directors in these three states reported having substantial substance abuse treatment waiting lists in 1993 (Gustafson et al., 1995).

Table 3
Multi-trait, multi-method correlation matrix of drug and alcohol indicators

Indicator (rate/100 000)         Drug      Drug       Drug      Alcohol   Alcohol    DUI
                                 clients   mortality  arrests   clients   mortality  arrests
Drug-treatment-only client       —         0.85*      0.68*     0.37*     0.01       −0.24
Drug-coded mortality             0.89*     —          0.55*     0.20      0.14       −0.33*
Drug-defined arrest              0.63*     0.60*      —         −0.11     −0.20      −0.16
Alcohol-treatment-only client    0.46*     0.37*      0.04      —         0.50*      0.26
Alcohol-coded mortality          0.03      0.13       −0.17     0.49*     —          0.41*
DUI arrest                       −0.31*    −0.26      −0.18     0.28*     0.47*      —

Note: the entries are Pearson product-moment correlations below the diagonal and Spearman rank-order correlations above the diagonal. In the original table, the Pearson convergent validities were set in bold and the Pearson method effects were underlined.
* Significant beyond 0.05, two-tailed.

Despite varying state responses, the drug indicators overlapped substantially. Based on the correlations in Table 3, the effective weights of the components of the z-score composite (i.e. the proportion of variance that each explains) were 35, 34, and 31% for the treatment client, mortality, and arrest rates, respectively. The corrected item-total correlations with the DPI scores were 0.85 for NDATUS Drug-only Clients, 0.83 for Drug-coded Mortality, and 0.63 for Drug-defined Arrests. In a factor analysis of the three drug indicators, the first principal component's weights were 0.95 for NDATUS Drug-only Clients, 0.93 for Drug-coded Mortality, and 0.81 for Drug-defined Arrests. The first principal component explained 81% of the variance in the indicators, suggesting a high degree of reliability. As measured by Alpha (Cronbach, 1951), the reliability of the unweighted 3-item z-score composite DPI was 0.88. The reliability of the parallel 3-item z-score composite of alcohol measures, the Alcohol Problems Index (API), was 0.68.
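The factor-analytic summary can be approximately reproduced from the published Pearson correlations among the three components by extracting the first principal component of their correlation matrix:

```python
import numpy as np

# Pearson correlations among the three DPI components as published:
# drug-only clients, drug-coded mortality, drug-defined arrests.
R = np.array([
    [1.00, 0.89, 0.63],
    [0.89, 1.00, 0.60],
    [0.63, 0.60, 1.00],
])

# eigh returns eigenvalues in ascending order; take the largest.
eigvals, eigvecs = np.linalg.eigh(R)
lam = eigvals[-1]
vec = eigvecs[:, -1]
vec = vec * np.sign(vec[0])  # fix the arbitrary sign for readability

explained = lam / R.shape[0]    # share of total variance, first PC
loadings = np.sqrt(lam) * vec   # component loadings on the first PC

assert round(explained, 2) == 0.81  # 81%, as reported in the text
assert np.all(loadings > 0.80)      # all three components load strongly
```

The computed loadings come out near 0.94, 0.93, and 0.81 for clients, mortality, and arrests, close to the 0.95, 0.93, and 0.81 reported above (small differences are expected because the published correlations are rounded to two decimals).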
3.3. Interstate variations in drug abuse

The DPI scores and the component rates for each state in Table 4 indicate that the states' populations vary widely in the severity of drug problems. The states with the lowest DPI scores were the mostly rural states of North Dakota, Iowa, South Dakota, Montana, West Virginia, Vermont, and Idaho. With the exception of Nevada, the states with the highest DPI scores were the mostly urban states of New York, California, Maryland, New Jersey, Rhode Island, and Connecticut. When comparing states in the top and bottom quartiles, we found ten-fold differences in the rates of
drug-only clients in treatment. Even when we combined the drug-only and drug-and-alcohol clients, we observed fivefold differences between the highest and lowest rates (Office of Applied Studies, 1993, 1995a,b). There were similarly fourfold differences in the drug-related mortality rates and fivefold differences in the drug-defined arrest rates. The DPI scores have a distinctive geographic distribution (Fig. 1). The upper plains and mountain states have the lowest-quartile DPI scores, while the northeastern and west-coast states have the highest-quartile DPI scores.
3.4. Comparison with previous studies

These results agree with previous conclusions regarding the extent of drug problems in the rural and urban areas of the country and in individual states (e.g. Ball and Chambers, 1970, p. 6; Hunt, 1974).
3.5. Comparison with IV-AIDS statistics

The IV-AIDS rates, which include all persons with AIDS who reported injection drug use as a risk factor, correlated significantly with the DPI and with all of its components (Table 5). In a multiple regression that included the DPI, dummy variables for four regions of the country, and interaction terms between the region dummies and the DPI, we found significant overall main and interaction effects. With the addition of region and the region-by-DPI interactions to the equation, the percentage of explained variance increased from 65 to 84% (the adjusted R² went from 64 to 81%).
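The regression just described (IV-AIDS rates regressed on the DPI, region dummies, and their interactions) can be sketched as follows. All data and effect sizes below are simulated for illustration, not the article's values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
dpi = rng.normal(size=n)                     # simulated DPI scores
region = rng.integers(0, 4, size=n)          # four regions, coded 0-3
dummies = np.eye(4)[region][:, 1:]           # three dummies; region 0 is the reference
iv_aids = (2.0 * dpi
           + dummies @ np.array([1.0, -0.5, 0.8])               # region main effects
           + (dummies * dpi[:, None]) @ np.array([0.6, 0.2, -0.4])  # interactions
           + rng.normal(scale=0.5, size=n))

def r_squared(features, y):
    # OLS fit with intercept; return unadjusted R^2.
    X = np.column_stack([np.ones(len(y)), features])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_base = r_squared(dpi[:, None], iv_aids)                        # DPI only
r2_full = r_squared(np.column_stack([dpi[:, None], dummies,
                                     dummies * dpi[:, None]]), iv_aids)
```

Because the base model's predictors are a subset of the full model's, the unadjusted R² can only rise when region and interaction terms are added, which is why the article also reports the adjusted R² gain.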
W.E. McAuliffe et al. / Drug and Alcohol Dependence 53 (1999) 125–145

Table 4
Drug Problem Index scores and component rates per 100 000

State                 DPI   Drug-treatment-only   Drug-coded        Drug-defined
                            client rate           mortality rate    arrest rate
 1. New York          100   269                   12.0              675
 2. California         63   125                    6.1              733
 3. Maryland           62   147                    6.3              626
 4. New Jersey         49   127                    4.1              552
 5. Rhode Island       48   192                    4.4              293
 6. Connecticut        47   105                    4.5              556
 7. Nevada             41    65                    4.0              583
 8. New Mexico         41   103                    6.5              240
 9. Arizona            41   104                    4.0              439
10. Oregon             36   102                    3.6              365
11. Massachusetts      35    59                    4.5              412
12. Michigan           33   111                    2.9              320
13. Florida            32    79                    2.0              476
14. Louisiana          30    60                    2.0              485
15. Colorado           30    91                    3.2              280
16. Washington         29    80                    3.5              268
17. Delaware           28    88                    3.7              217
18. Texas              27    54                    2.5              389
19. Illinois           27    68                    4.1              211
20. Kentucky           26    50                    0.9              504
21. Georgia            25    39                    1.9              459
22. Pennsylvania       25    70                    2.8              278
23. Tennessee          25    46                    1.4              471
24. North Carolina     25    47                    1.9              413
25. Mississippi        23    40                    1.0              478
26. Ohio               23    55                    1.6              370
27. Missouri           22    41                    1.9              376
28. Utah               22    51                    2.8              255
29. Virginia           21    57                    1.9              299
30. Kansas             19    55                    1.0              335
31. Hawaii             18    36                    1.8              299
32. Arkansas           16    41                    1.0              300
33. Oklahoma           15    34                    1.2              287
34. South Carolina     15    48                    2.0              163
35. Wisconsin          14    41                    1.2              225
36. Alabama            13    42                    1.1              226
37. Nebraska           13    38                    0.6              264
38. Maine              12    44                    1.2              163
39. New Hampshire      10    14                    1.4              210
40. Alaska             10    30                    1.4              157
41. Indiana             9    29                    0.9              172
42. Wyoming             8    37                    0.8              138
43. Minnesota           7    17                    1.1              146
44. Idaho               6    19                    1.0              129
45. Vermont             6    18                    1.4              102
46. West Virginia       5    20                    1.0              111
47. Montana             4    14                    1.5               57
48. South Dakota        3    10                    0.4              133
49. Iowa                1    13                    0.6               65
50. North Dakota        0     9                    0.3               77

Fig. 1. Drug problem index scores.

3.6. Correlations with diseases, crimes, and residential statuses associated with drug abuse

The DPI correlated significantly positively with all three state rates of drug-associated diseases in Table 6. The DPI correlated 0.80 with 1996 TB cases who were IDUs and 0.67 with those who were non-injection drug users; both correlations were significant. Syphilis is endemic in metropolitan areas and the rural southeastern USA (Kilmarx and St. Louis, 1995), and cocaine use has correlated strongly with syphilis at the individual level in clinical samples (Joachim et al., 1988; Rolfs et al., 1990; Mellinger et al., 1991; Finelli et al., 1993; DeHovitz et al., 1994; Kilmarx and St. Louis, 1995; Beilenson et al., 1996; Aktan et al., 1997, p. 16). Accordingly, we also correlated syphilis rates with an index of cocaine problems that paralleled the DPI but included cocaine and opiate arrests per 100 000 (only 1993 data were available), 1991–1993 cocaine-coded deaths per 100 000, and 1991–1993 NASADAD cocaine admissions per 100 000. The resulting correlation of 0.49 was significantly larger than the 0.32 correlation between syphilis and the DPI. In a multiple regression analysis, both the southern region and the cocaine index significantly predicted the state syphilis rate (R² = 0.60). The partial correlation between the cocaine index and syphilis, controlling for southern region, was 0.57 (P < 0.01). Thus, the results confirmed the expected relationships between the DPI and rates of contagious diseases associated with drug abuse, and the relationship between the parallel cocaine index and syphilis agreed with reports of a recent epidemic associated with cocaine use.

The DPI correlated significantly positively with the 1991–1993 rates of robbery, burglary, and prostitution (Table 6). The DPI and each of its components correlated significantly with the percentages of state residents in prisons, in homeless shelters, and living on the street.
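A first-order partial correlation such as the one reported above (cocaine index vs. syphilis, controlling for southern region) can be computed directly from the three pairwise correlations. The two region correlations below are hypothetical placeholders, since the article reports only the zero-order (0.49) and partial (0.57) values:

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_xy = 0.49   # cocaine index vs. syphilis (reported)
r_xz = 0.10   # cocaine index vs. southern-region dummy (assumed)
r_yz = 0.55   # syphilis vs. southern-region dummy (assumed)

cocaine_syphilis_partial = partial_corr(r_xy, r_xz, r_yz)
```

Note that when the control variable is strongly related to the outcome but weakly related to the predictor, as assumed here, the partial correlation can exceed the zero-order correlation, the pattern reported in the text (0.57 vs. 0.49).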
Table 5
Correlation of the Drug Problem Index and its components with the IV-AIDS rate

Indicator                            Correlation with IV-AIDS rate
Drug Problem Index (DPI)             0.80*
Drug-coded mortality rate            0.76*
Drug-treatment-only client rate      0.76*
Drug-defined arrest rate             0.63*

* P < 0.05.

Table 6
Correlations of the Drug Problem Index (DPI) with drug-associated diseases, crimes, and residential statuses

Indicator rates                                                Correlation with DPI
Tuberculosis                                                   0.59*
Syphilis                                                       0.32*
Hepatitis B                                                    0.27*
Robbery arrests                                                0.84*
Prostitution arrests                                           0.51*
Burglary arrests                                               0.46*
Percent of state population in prison                          0.48*
Percent of state population that are homeless in shelters      0.69*
Percent of state population that are homeless on the street    0.48*

* P < 0.05.

3.7. Comparisons with other methods of estimating interstate drug abuse variations

We further evaluated the DPI by comparing it to other methods of estimating drug abuse, including the Block Grant Drug Need Index per capita, the synthetic estimates of drug dependence developed by Burnam et al. (1994, p. 85; 1997), and the model-based estimates of drug dependence developed by Folsom et al. (1996) and Folsom and Judkins (1997) (Table 7). Also included in the matrix are parallel estimates of dependence on alcohol but not illicit drugs from the Folsom et al. study and our Alcohol Problems Index (API). The Burnam et al. study did not report synthetic estimates of alcohol dependence alone. Figs. 2–4 present maps of the three other drug variables to help interpret the correlational results.

Table 7
Correlations of the Drug Problem Index with alternative measures of substance abuse severity

Indexes                                            DPI     Block    Burnam   Folsom   API     Block    Folsom
                                                           Grant    et al.   et al.           Grant    et al.
                                                           drug     drug     drug             alcohol  alcohol
                                                           need     dep.     dep.             need     dep.
Drug Problem Index (DPI)                            —
Block Grant drug need per capita                   0.56*    —
Burnam et al. drug dependence synthetic estimate  −0.03    −0.20     —
Folsom et al. drug dependence rate estimate        0.06     0.31     0.82*    —
Alcohol Problems Index (API)                      −0.33*    0.02     0.55*    0.60*    —
Block Grant alcohol need per capita                0.22     0.29*   −0.22    −0.22    −0.07    —
Folsom et al. alcohol dependence rate estimate     0.31     0.05     0.48*    0.57*    0.29   −0.25     —

Note: all variables have 50 cases except for those from the Folsom et al. study (n = 26). * P < 0.05.

Fig. 2. Synthetic estimates of percent of population meeting RAND criteria for drug dependence only.

The DPI correlated significantly with the Block Grant Drug Need index, but the DPI did not correlate
significantly with the Burnam et al. synthetic estimates or with the drug dependence estimates derived from the Folsom et al. (1996) model. Inspection of the map in Fig. 2 reveals that the Burnam et al. estimates contrast sharply with the DPI scores presented in the map in Fig. 1. According to the Burnam et al. (1994, p. 85) drug dependence estimates, all of the states in the highest quartile of drug dependence rates were in the West, and none of the eastern states were in the top quartile1. New York, New Jersey, Illinois, and Michigan were among the states in the lowest quartile of state drug problems according to the Burnam et al. estimates.

The relatively low correlation between the DPI and the Folsom et al. model estimates is surprising, since both measures included NDATUS treatment client data, UCR arrest data, and mortality data. According to Folsom et al.'s model, all but one of the states with the highest percentages of drug-dependent persons are in the West (Fig. 3). Oregon tops the list, with an estimated 1.99% of the population drug dependent. The next highest rates are in Washington (1.96%) and California (1.91%). In the East, New York's model estimate (1.09%) ranked thirteenth out of 26 states, below Oklahoma's estimate (1.50%), Georgia's (1.21%), South Carolina's (1.19%), Kansas' (1.13%), and Virginia's (1.11%). By contrast, a majority of the states identified by the DPI as most plagued with drug problems are in the East (Fig. 1), and New York State's DPI score was nearly twice as high as the next highest score, California's (Table 4).
Similarly, Folsom et al.’s model estimates of 0.88% for both Illinois and Pennsylvania were nearly as low as the lowest estimated drug dependence rate in all states (West Virginia’s rate of 0.84%), whereas Illinois and Pennsylvania have DPI scores in the upper half of the DPI distribution. Comparison of Fig. 4 with the previous three figures reveals some similarity between the Block Grant Drug Need index and the DPI scores with regard to the location of the states with the most and least severe drug problems, but there are marked differences between the current Block Grant Drug Need index and the estimates of drug dependence developed by Burnam et al. and by Folsom et al. The Block Grant Drug Need Index did not correlate significantly with either the Burnam et al. drug dependence rate synthetic estimates or with Folsom et al.’s model drug dependence estimates (Table 7). Strongly correlated with each other, both the Folsom et al. and the Burnam et al. drug dependence estimates correlated more strongly with measures of alcohol abuse problems than with measures of drug abuse (Table 7). The Burnam et al. and Folsom et al. drug dependence estimates correlated more strongly with the API than with the DPI, and they correlated significantly with the Folsom et al. alcohol dependence rate estimate. These findings suggest the possibility that the two survey-based measures may lack discriminant validity.
1 In the 1997 version of their report, Burnam et al. divided the states into thirds; in that analysis, West Virginia, Vermont, New Hampshire, and Rhode Island were eastern states that qualified for the group of states with the highest drug dependence rates (Burnam et al., 1997, Figure 4.2).

4. Discussion
Using existing statistics from three distinct data sources, we created an indicator-based measure of the variations in drug abuse problem severity among the 50 states. By design, the measure focused on drug-use
Fig. 3. Folsom et al. (1996) model estimates of percent drug dependent.
disorders rather than casual drug use. The Drug Problem Index (DPI) employed only indicators (rates of drug-treatment-only utilization, drug-coded deaths, and drug-defined arrests) that were closely linked to the presence of high rates of drug use disorders (abuse and dependence). Combining 3 years of data, we found that each of the component measures possessed substantial reliability as well as convergent validity. Parallel alcohol measures were somewhat less reliable and valid. These findings for the DPI confirm earlier interstate analyses by Ball and Chambers (1970) and Ford (1984, 1985), as well as the results of intrastate, intercity, and foreign indicator studies by Person et al. (1976), Frank et al. (1978), Woodward et al. (1984), Schlesinger and Dorwart (1992), Simeone et al. (1993), Larson and Marsden (1995) and Beenstock (1995).

The resulting composite DPI was reliable (Alpha = 0.88), and evidence of its validity included its correlation with the state IV-AIDS rates, a measure of injection drug use that we excluded from the index because of incomplete data and unique regional variations. The DPI's relations to less-direct consequences of drug abuse (rates of other contagious diseases, drug-related crimes, incarceration, and homelessness) also helped confirm its construct validity. With the exception of IV-AIDS and recent information on tuberculosis, the health consequences of drug abuse that we related to the DPI were diseases whose cases were not coded for drug abuse but for which there is evidence from other sources that drug abuse is an important cause (Alexander, 1977; Haverkos and Lange, 1990; Haverkos, 1991; Cherubin and Sapira, 1993; Crowe and Reeves, 1994; Addiction Treatment Forum, 1995; Tennant and Moll, 1995; Beilenson et al.,
1996; Finelli et al., 1997). Recent data suggest an association between tuberculosis and drug abuse (CDC, 1995b; McKenna et al., 1995). In 1993, 2.4% of tuberculosis patients were estimated to be IDUs, and another 4.7% reported non-injecting drug use (CDC, 1994c, 1995b, 1997). Syphilis has long been associated with heroin use (Cherubin and Sapira, 1993), and during the late 1980s syphilis became associated with cocaine injection among males and trading sex for drugs among females (Joachim et al., 1988; Rolfs et al., 1990; Mellinger et al., 1991; Finelli et al., 1993; DeHovitz et al., 1994; Kilmarx and St. Louis, 1995; Beilenson et al., 1996; Aktan et al., 1997, p. 16). About a quarter of large samples (e.g. 300 000) of persons with hepatitis B were IDUs during the 1980s (Francis et al., 1984; Haverkos, 1991), although this rate has recently diminished to 11% (CDC, 1995c, p. 27).

Measures of property crimes, such as robbery and burglary, and prostitution are similar to the disease measures, since people with drug use disorders commit a substantial proportion of these crimes. Drug testing in 23 cities confirmed that 71% of males and 78% of females arrested for robbery in 1995 had used drugs in the previous 48 h (Schlaffer, 1997; also see Craddock et al., 1994). The corresponding statistics for burglary were 71% of males and 67% of females. Arrestees for car theft had similarly high rates of positive tests. Females arrested for prostitution tested positive 87% of the time, which was higher than for females arrested for drug possession and sales (84 and 80%, respectively; Schlaffer, 1997). No other crimes were more highly associated with recent drug use (Bureau of Justice Statistics, 1992; Craddock et al., 1994; Schlaffer, 1997, Table 2). Recent research has shown that many of these arrestees meet clinical criteria for drug use disorders (Swartz, 1996).
Fig. 4. Block grant drug need allocation per capita.
Despite the limitations of arrest rate statistics for measuring absolute levels of drug abuse, the results of the present study suggest that arrest statistics are useful for assessing relative differences in drug problems among large populations. Because increasing numbers of crimes and arrests are associated with drug abuse, we hypothesized that the DPI would correlate with the percentage of a state's population that is in prison. Six recent studies of prisoners reported that between 57 and 80% had drug use disorder diagnoses (Farabee, 1994, 1995; Farabee et al., 1996; Hudik et al., 1994; Illinois Department of Corrections, 1995; Fredlund et al., 1995). Drug abuse also commonly leads its victims to become homeless (Susser et al., 1989; Fischer, 1991; McCarty et al., 1991; Smith et al., 1993; Rahav and Link, 1995). Like the contagious disease rates, the homelessness and incarceration statistics stem from data collection systems missing from the DPI. The empirical confirmation of these hypotheses helps establish the construct validity of the DPI. This form of construct validation has an important practical dimension, for a measure of drug problems that predicts these health and social costs has obvious value.
4.1. Implications

Our results bear on the use of substance abuse indicator data in general. Analysis of the reliability of the drug abuse indicators revealed a high degree of stability from year to year, with Alphas for 3-year composites exceeding 0.95 for drug-related arrests, clients, and mortality (Table 2). These results are inconsistent with DeFleur's influential qualitative critique of agency-based indicator data, especially arrest rates (also see Bennett, 1995). Despite the anecdotal evidence of DeFleur (1975) from Chicago police officials suggesting that drug-defined arrest statistics may be unreliable in some situations, the findings in Table 2 confirm our earlier finding that a 3-year composite of drug arrest rates in Illinois counties had a high degree of stability (Cronbach's Alpha of 0.97) (McAuliffe, 1995). Although drug arrests must inevitably reflect police activity, that relationship would render the arrest statistics wholly invalid only if police activity were entirely unrelated to changes in drug abuse prevalence. If police activity increased as a consequence of growing drug abuse rates and resulting community concern, then the increased arrest rates would reflect increasing prevalence rates to some degree. Testimonials by selected police officials, such as those described by DeFleur (1975), suggest that some portion of the variance in police activity reflects factors other than prevalence, but the central validity question is how much. Our findings indicate that a substantial amount of the variance in the 3-year composite of state-level drug arrest rates is reliable. Moreover, the arrest rate correlated significantly with the other two drug indicators, and the composite of all three indicators performed as one would expect with regard to several important effects of drug abuse. Apparently, measurement errors in the absolute number of arrests at the town level over relatively short periods may have relatively little impact on the validity of relative estimates for states based on several years of data. Thus, results obtained by an appropriate empirical validation methodology suggest that indicator data may be useful for addressing some important drug abuse research and policy issues. Despite these encouraging findings, the field must not ignore the importance of continuing enhancements in
the scope and availability of indicator data. Recent refinements in tuberculosis reporting are an important example of how indicators can be made more useful, and the increased availability on CD-ROM of complete data on the causes of death had a beneficial impact on the present study. Access to NHSDA data, UCR data, and treatment admissions statistics on the Internet was an important advance. Studies that have found errors in indicator data (e.g. Pollock et al., 1991; Shai, 1994) serve as important reminders that the validity of indicator data should be carefully evaluated in each new use. As the federal government substantially increases its investment in obtaining state-level data, responsible federal and state officials should mount a concerted effort to eliminate known problems in existing data files. There are obvious improvements that should be made in the completeness of state reporting of UCR data, especially in rural areas (see Appendix A). UCR drug-defined arrest data could be more useful, and presumably more valid for the present purpose, if the information were available in greater detail (e.g. separating heroin and cocaine possession arrests from one another, as well as barbiturate and amphetamine arrests from one another). Changes in CDC case definitions should be made uniformly across all states at the same time. The NHSDA should adopt a standardized measure of drug dependence. Making these refinements does not appear to be costly and could improve the usefulness of the data substantially. Improved data quality would no doubt result in greater reliability, validity, and usefulness of these indicator models.

Examination of the DPI and its components revealed wide variations among the states in the rates of drug problems and, presumably, in the need for drug treatment resources. We used our drug-abuse-problem-rate indicators to assess the validity of the Block Grant Need index, which primarily reflects population size.
Although the DPI and the Block Grant Drug Need index should correlate with each other because the treatment admissions component of the DPI most likely reflects previous Block Grant allocations, only about 30% of the variance in the Block Grant drug need measure was accounted for by the DPI. As a result of the present findings, it is reasonable to ask whether the current Block Grant formula allocates treatment resources as well as it could to meet state variations in the problems created by drug abuse and dependence. The states most affected by the difference between the two measures were Oregon and New Mexico. Congress may wish to give the need components of the Block Grant formula yet another look.
4.2. Surveys and indicators

Comparison of the DPI with survey-based measures
and models revealed little evidence of agreement. The DPI and the synthetic estimates of drug dependence developed by Burnam et al. (1994) were negatively correlated. Even the estimates generated by the Folsom et al. model, which included arrest, treatment, and mortality indicator data as components, did not correlate significantly with the DPI. Folsom et al. (1996, pp. 68–69) reported that the 'direct' NHSDA estimates of arrests had a rank-order correlation of −0.07 with UCR arrest statistics corrected for multiple arrests, and the direct NHSDA estimates of past-year treatment had a rank-order correlation of 0.06 with Folsom et al.'s NDATUS drug treatment measure (drug-only and drug-plus-alcohol-case rates corrected for length of stay and multiple treatment episodes). We found a nonsignificant Pearson correlation (0.23) between the direct NHSDA arrest estimates and our own drug-defined arrest rate measure. The direct NHSDA past-year drug treatment rate correlated 0.28 with the NDATUS drug-only client rate and 0.04 with the NASADAD admissions rate (Table 1). Neither correlation was significant.

Several surprising findings from the Folsom et al. drug treatment model help illustrate the meaning of this lack of correlation between its estimates and the NDATUS and NASADAD statistics (Table 1). The NHSDA respondents in Oklahoma had the third highest percentage reporting treatment during the past year according to the Folsom et al. model, but that state was among the lowest 25% of states according to the NDATUS drug-treatment-only client rates and fourth from the bottom on the NASADAD drug admission rates. Similar results were apparent for Minnesota. By contrast, New York State had the highest NDATUS rate and was in the top quartile on the NASADAD rates, but the state was in the lower half of the Folsom et al. model estimates of drug treatment in the past year. Similar results were observed for New Jersey and Illinois.
This lack of correspondence between the survey-based measures and the drug indicators raises important research and policy questions for the field. Although Folsom and Judkins (1997) suggested that the indicator data may be the source of the disagreement between the survey and indicator estimates, our investigation has shown that the indicator data may have greater validity than previously thought. Other explanations should be considered. For example, Folsom and Judkins (1997) noted that the NHSDA’s sampling design was not optimized for estimating state-level parameters, a shortcoming that is to be eliminated in the planned expansion (SAMHSA, 1997). While the method of synthetic estimation has become popular in policy studies (Rhodes, 1993; Minugh et al., 1997), researchers have increasingly identified potential difficulties in the technique’s use (Furst et al., 1981; Ciarlo et al., 1992; DeWit and Rush, 1996; Folsom et al., 1996, p. 65; Folsom and Judkins, 1997, pp. 1–21).
Burnam et al. (1994) appeared to have followed correct procedures in implementing the methodology, but comparison of several key differences between the DPI and the resulting synthetic estimates raised questions about the plausibility of the latter. Inspection of Figs. 2 and 3 indicates that the Burnam et al. estimates have a distinct regional pattern that may partly reflect the small number of NHSDA interviews conducted in regions such as New England and the mountain and upper plains states (see Fig. 3). We have concluded from these results that there is a clear need for additional validation research on synthetic estimation of drug abuse. Of particular importance is research that compares the estimates with other measurement methods. Policymakers should exercise care before acting upon service recommendations developed using synthetic estimation. For example, comparison of Figs. 2 and 4 suggests that many questionable changes in Block Grant allocations could result if federal officials replaced the current Block Grant Drug Need index with the synthetic estimates developed by Burnam et al. (1994, 1997). Without more independent research regarding the validity of Folsom et al.’s estimates of drug dependence, federal officials may wish to incorporate more validation studies in their current plans to expand the NHSDA and to use the state estimates derived from the Folsom et al. modeling technique (Substance Abuse Funding News, 1997c). It is noteworthy that the Folsom et al. estimates of alcohol dependence did not correlate significantly with chronic alcohol use as measured by the BRFSS surveys (0.27, our calculation), thus indicating that the very strong agreement reported by Folsom et al. (1996) between their model estimates and the BRFSS’s measure of alcohol use in the past month may have been a misleading indication of the amount of agreement between the two survey methodologies. 
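Synthetic estimation, the method under discussion, applies subgroup prevalence rates estimated from a national survey to each state's demographic composition. A minimal sketch follows; the demographic cells, rates, and shares are invented for illustration and do not reproduce the Burnam et al. specification:

```python
# National prevalence of drug dependence per demographic cell (assumed values),
# indexed by (age group, urbanicity).
national_rate = {
    ("18-34", "urban"): 0.030,
    ("18-34", "rural"): 0.015,
    ("35+",   "urban"): 0.012,
    ("35+",   "rural"): 0.006,
}

# Each state's population shares across the same cells (hypothetical states).
state_shares = {
    "State A": {("18-34", "urban"): 0.25, ("18-34", "rural"): 0.10,
                ("35+", "urban"): 0.45, ("35+", "rural"): 0.20},
    "State B": {("18-34", "urban"): 0.10, ("18-34", "rural"): 0.25,
                ("35+", "urban"): 0.20, ("35+", "rural"): 0.45},
}

def synthetic_estimate(shares):
    # Weighted sum of national cell rates by the state's cell shares.
    return sum(share * national_rate[cell] for cell, share in shares.items())

estimates = {state: synthetic_estimate(shares)
             for state, shares in state_shares.items()}
```

Because the estimates depend only on demographic mix, two states with identical compositions receive identical estimates regardless of their actual drug problems, which is one reason the validation research urged above matters.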
Although a primary justification of the NHSDA expansion is to evaluate the effects of the planned youth-oriented federal initiatives (ONDCP, 1998), application of the resulting survey estimates in a range of policy and resource allocation decisions is also an explicit goal of the NHSDA's increase in size (SAMHSA, 1997). The use of drug dependence estimates in assessing both prevention and treatment service needs was illustrated by the Burnam et al. (1997, p. 56) study. States such as New York, Michigan, and Illinois are sure to object to the use of these survey-based methodologies in federal policy making and resource allocation. The estimates developed by Folsom et al. primarily reflect the findings of the NHSDA (see Table 1), but several authors have recently remarked on how little validation research has been conducted on the National Household Survey itself (e.g. Turner et al., 1992; Biemer and Witt, 1996; Miller, 1997). Most of the studies have been conducted by investigators associated with the agency and its contractors (Miller, 1997), and most of
the findings have been reported only in government monographs (Turner et al., 1992). Several years ago, the GAO (1993) criticized the NHSDA, focusing on the issue of self-report. This critique has been answered by only limited research (Biemer and Witt, 1996), for we know of no studies that have evaluated the NHSDA using recent technical developments such as hair analysis. Moreover, a range of other well-known shortcomings may undermine the validity of the NHSDA's estimates of state variations in drug dependence. The shortcomings include nonresponse bias, the NHSDA's questionable editing and imputation procedures, and its nonstandard drug dependence and treatment need measures (Caspar, 1992; Epstein and Gfroerer, 1995; Biemer and Witt, 1996; Bray et al., 1996; Miller, 1997; Leen, 1998; ONDCP, 1998). Whereas survey noncoverage and nonresponse have relatively little impact on estimates of ever use of an illicit drug or on marijuana use, they have more substantial effects on estimates of hard-core drug use (Caspar, 1992; Bray et al., 1996). Critics have especially targeted the NHSDA's estimates of the use of heroin and cocaine, the drugs that account for the largest proportion of the variance in deaths and treatment admissions (Miller, 1997; Leen, 1998). Despite the NHSDA's relatively large samples, even the national estimates of drug use are based on remarkably few cases. Miller (1997) reported that the national estimate of past-month cocaine use in 1996 was based on 31 interviews, and 35–40% of these estimates between 1994 and 1996 were based on imputed cases in which the respondent had reported not using the drug in the past month. Even after the sample size is increased substantially (from 19 000 to 50 000), it is difficult to see how the NHSDA could reliably estimate past-month cocaine use for individual states or measure cocaine dependence or the need for prevention or treatment of cocaine abuse.
Discussing a similar contrast between survey and mortality data in alcohol studies, Wilson et al. (1983) argued that the two methodologies may measure different aspects of the overall problem: whereas indicators represent older chronic alcoholics, surveys represent younger at-risk populations. Our own research in Rhode Island suggested that surveys identify many respondents who meet diagnostic criteria for marijuana use disorders, but relatively few of those respondents ever seek treatment services (McAuliffe et al., 1987). Thus, it appears possible that a drug problem index such as the DPI could be a more sensitive measure of state variations in hard-core drug abuse, whereas surveys that do not make special efforts to include hard-core addicts may be most valid for estimating variations in experimental drug use. Until further independent research on these hypotheses is completed, it is difficult to say with confidence what accounts for the troubling lack of agreement between drug-indicator-based measures and NHSDA-survey-based estimates.
We wish to encourage other researchers and policy makers, including those who participate in funding epidemiological grant research, to recognize how little work has been done in the last several decades to advance our methods of estimating serious drug abuse problems. The long-term welfare of the field requires that grant funds be set aside for methodological research on substance use epidemiological techniques. Too many fundamental questions concerning validity of our drug problem estimates remain unanswered (Miller, 1997).
Acknowledgements

Work on this article was supported by the North Charles Foundation and by a Center for Substance Abuse Treatment (CSAT) contract with the Rhode Island Public Health Foundation and the Division of Substance Abuse, Rhode Island Department of Health. It was carried out at the National Technical Center for Substance Abuse Needs Assessment.
Appendix A. Data sources

A.1. NDATUS drug-treatment-only client rates

The National Drug and Alcoholism Treatment Unit Survey (NDATUS) is administered by the Office of Applied Studies of the Substance Abuse and Mental Health Services Administration (Office of Applied Studies, 1993, 1995a,b). The sampling frame included 12303 specialty providers of substance abuse treatment, including public and private free-standing units and units in multi-purpose institutions. Identified mostly by state and federal agencies, these providers complete mailed questionnaires about all active clients in treatment on a specific reference day in the previous year (September 30, 1991 and 1992, and October 1, 1993). State substance abuse agencies encouraged providers to complete the forms, and SAMHSA's contractor contacted nonresponders by telephone in order to obtain a minimum data set. The response rate was 93% in 1993 (Office of Applied Studies, 1995b).

A.2. NASADAD drug-treatment admission rates

The National Association of State Alcohol and Drug Abuse Directors (NASADAD) reports treatment admissions data annually from state agencies (SAMHSA, 1993; Butynski et al., 1994; Gustafson et al., 1995). The data come from only those programs that received at least some funds administered by the state agency. The admissions statistics were derived
from the federal Client Data System (CDS), and state officials review the statistics for accuracy prior to publication. In 1991–1993, Washington had two missing observations (1991 and 1992), Oregon was missing 1992, and Wyoming was missing for all 3 years. Consequently, the sample size for this variable was 47.

A.3. UCR drug-defined arrest rates

The Federal Bureau of Investigation's (FBI) Uniform Crime Reporting (UCR) system reports statistics on arrests for violations of state and local laws pertaining to the possession, sale, growing, manufacturing, and making of narcotic drugs (FBI, 1994). The drug abuse arrest statistics count only those cases in which the drug offense was the most serious charge (GAO, 1990). Because the number of reporting units within a state varies from year to year, we formulated the Drug-defined Arrest Rate as the number of arrests per 100000 state residents covered by the statistics in the relevant year.² In both 1991 and 1993, two states had missing observations, although every state had data for at least two of the 3 years. We used multiple regression analyses to estimate the missing observations (1991 arrests regressed on 1992 and 1993 arrests; 1993 arrests regressed on 1991 and 1992 arrests). The percentage of explained variance was 0.80 in the 1991 analysis and 0.96 in the 1993 analysis. The FBI's Criminal Justice Information Services Division provided us with a special breakdown of 1993 UCR sale and possession arrests for opiates and cocaine. We also used UCR data on robbery, burglary, and prostitution arrests for 1991, 1992, and 1993 (FBI, 1992, 1993, 1994).

² The FBI (1994) states that the UCR covers 95% of the population, and this coverage includes 97% of metropolitan statistical areas (MSAs) as well as 86% of cities outside of the MSAs and in rural areas. However, the percentage of individual state populations covered in the 3 years averaged 80%, with one state as low as 24% in 1991, 35% in 1992, and 32% in 1993; some states had 100% in all 3 years. The states with the lowest average coverage in the 3 years were Mississippi (33.4%), Tennessee (46.2%), Vermont (50.6%), New Mexico (54.6%), Missouri (55.8%), Indiana (56.1%), Louisiana (58.2%), Ohio (59.7%), and North Dakota (61.8%). North Dakota was the only state to vary greatly from one year to the next, going from 61.8% in 1991, down to 24.3% in 1992, and then up to 77% in 1993. The correlations among the absolute population sizes from year to year were 0.99 in every case for the 3 years covered in our analysis. However, the percentage of the population covered may vary from year to year for individual states; the three correlations among the percentages covered in each year averaged 0.74. It would be reasonable to hypothesize that the UCR data for North Dakota might be less valid and reliable than the observations for the states that had 100% coverage every year (Hawaii and Maryland). We found no evidence of a systematic bias: the correlation between the percentage of coverage and the size of the drug arrest rates was not significant in any of the 3 years (1991: r = 0.02; 1992: r = 0.09; 1993: r = 0.06).
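The regression-based imputation of missing arrest observations described above can be sketched as follows. This is a minimal illustration, not the authors' code: the rates are invented placeholders, and the fit is ordinary least squares, which is what the text's "multiple regression analyses" implies.

```python
import numpy as np

# Hypothetical data: arrest rates (per 100,000 covered residents) for the
# 46 states with all three years observed. Values are invented for
# demonstration only; the actual UCR data differ.
rng = np.random.default_rng(0)
rates_92_93 = rng.uniform(100, 800, size=(46, 2))           # 1992 and 1993 rates
rates_91 = 0.6 * rates_92_93[:, 0] + 0.4 * rates_92_93[:, 1] \
    + rng.normal(0, 20, 46)                                  # observed 1991 rates

# Ordinary least squares: 1991 regressed on 1992 and 1993 (plus intercept).
X = np.column_stack([np.ones(46), rates_92_93])
beta, *_ = np.linalg.lstsq(X, rates_91, rcond=None)

# Predict 1991 for the two states missing that year (hypothetical predictors:
# intercept term, 1992 rate, 1993 rate).
missing = np.array([[1.0, 350.0, 400.0],
                    [1.0, 520.0, 480.0]])
imputed_91 = missing @ beta
```

The same procedure run in the other direction (1993 regressed on 1991 and 1992) fills the 1993 gaps.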
A.4. Drug-coded mortality rates

The National Center for Health Statistics (NCHS) of the CDC published the Multiple Cause of Death Files for 1991, 1992, and 1993 on CD-ROM (NCHS, 1997a,b,c). Coded by state or NCHS nosologists from death certificates (Hopkins et al., 1989), each record contains multiple causes (up to 20) and demographic characteristics. We identified all records that included at least one ‘drug-coded’ cause of death. The International Classification of Diseases, Ninth Revision (ICD-9) (US Department of Health and Human Services, 1980) causes that we termed ‘drug-coded’ were drug dependence (304.0 to 304.9), nondependent abuse of drugs (305.2 to 305.9), and accidental poisoning. The largest proportion of cases was due to accidental poisoning. We excluded poisoning cases that were coded as purposely inflicted. We defined a ‘drug-coded accidental poisoning’ as any accidental poisoning involving commonly abused drugs. The ICD-9 code for accidental poisoning includes accidental overdose, drug taking in error, and accidents in the use of drugs in medical and surgical procedures. To create our drug-coded accidental poisoning measure, we selected cases which had a poisoning ‘N’ code for the drugs of interest and an ‘E’ code which indicated that the poisoning was either accidental (E850.0 to E858.9) or of undetermined intent (E980.0 to E980.9). We included the undetermined category on the assumption that a majority of those cases were accidental overdoses associated with the abuse of those drugs.
The ‘drug-coded accidental poisoning’ category thus included deaths associated with ingestion of opiates (N965.0), surface anaesthetics (N968.5), other specified analgesics (N965.8), barbiturates (N967.0), psychodysleptics (N969.6), psychostimulants (N969.7), benzodiazepines (N969.4), chloral hydrate (N967.1), glutethimide (N967.5), and unspecified sedatives or hypnotics (N967.9) that were also assigned an accidental or undetermined-intent E code (E850 to E858.9 and E980 to E980.9). We counted only drug-coded deaths that occurred in persons 12–64 in an effort to eliminate accidental poisonings of children and the elderly, groups that rarely if ever need drug abuse treatment. However, the Drug-coded Mortality Rate employed the entire population in the denominator (see below). It was possible for a case counted as having a drug-coded death also to have one or more alcohol-coded causes.

A.5. Alcohol-coded mortality rates

We included only causes of death with explicit mention of alcohol according to the coding scheme used by the County Alcohol Problem Indicators (NIAAA, 1991, 1994). The ICD-9 codes were 291, 303, 305.0, 357.5,
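The case-selection rule described above can be sketched as follows. The function names and record layout are illustrative inventions, not the NCHS file format; only the N-code list, the E-code ranges, and the 12–64 age restriction come from the text.

```python
# A record counts as a drug-coded accidental poisoning if any listed cause is
# one of the drug N codes AND any cause falls in the accidental
# (E850-E858.9) or undetermined-intent (E980-E980.9) E-code ranges.
DRUG_N_CODES = {"965.0", "965.8", "967.0", "967.1", "967.5", "967.9",
                "968.5", "969.4", "969.6", "969.7"}

def is_accidental_e(code: str) -> bool:
    """True for E codes in the accidental or undetermined-intent ranges."""
    if not code.startswith("E"):
        return False
    value = float(code[1:])
    return 850.0 <= value <= 858.9 or 980.0 <= value <= 980.9

def is_drug_coded_poisoning(causes: list[str], age: int) -> bool:
    if not 12 <= age <= 64:        # restrict to persons 12-64
        return False
    has_n = any(c in DRUG_N_CODES for c in causes)
    has_e = any(is_accidental_e(c) for c in causes)
    return has_n and has_e

# Example: opiate poisoning (N965.0) with an accidental-intent E code counts.
assert is_drug_coded_poisoning(["965.0", "E850.0"], age=34)
```

A record carrying a suicide E code (e.g. the E950 range) or a decedent outside the 12–64 age band would be excluded by the same logic.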
425.5, 535.3, 571.0, 571.1, 571.2, 571.3, 790.3, E860.0, and E860.1. We counted only alcohol-related deaths for persons 12 or older in order to eliminate accidental deaths in non-alcoholics. The denominator of the Alcohol Mortality Rate is based on the entire state population. It is possible for a case counted as having an alcohol-coded death also to have one or more drug-coded causes.

A.6. Drug-related diseases

Statistics on state rates of IV-AIDS are reported to the CDC (1994a). To verify the meaning of the missing data and obtain corrected values if they were available, we called states with missing values. The drug-related diseases hepatitis B, tuberculosis, and syphilis are reported to the CDC's National Notifiable Diseases Surveillance System (CDC, 1992, 1993, 1994b, 1995b). There were no missing observations in the 1991, 1992, and 1993 hepatitis B, syphilis, and TB series. However, a new hepatitis case definition published by the CDC in 1990 was not immediately adopted by all states. By calling three states to verify large annual variations in their counts of hepatitis cases, we obtained corrected data in one case (Delaware).

A.7. NHSDA direct and Bayesian model estimates

Folsom et al. (1996) and Folsom and Judkins (1997) reported ‘direct’ survey estimates of past-year drug treatment for 26 states based on the combined NHSDA data from 1991 to 1993. The authors also reported model-based estimates of a range of variables, including dependence on illicit drugs, dependence on alcohol but not illicit drugs, and drug treatment received in the past year (see description in Section 1).

A.8. Rand criteria synthetic estimates

Burnam et al. (1994) (Table 4.4) reported estimates of the percentage of each state's population that met the ‘Rand criteria’ for drug dependence only. The Rand criteria were an attempt by Burnam et al. (1994) (p. 67) to approximate the criteria for drug dependence of the Diagnostic and Statistical Manual of Mental Disorders, third edition, revised (DSM-III-R) (American Psychiatric Association, 1987). This step was necessary because the study used NHSDA data that covered only one of the nine DSM-III-R criteria for a current diagnosis and partially covered four more (Epstein and Gfroerer, 1995). To satisfy the Rand drug dependence criteria, subjects must have reported having three or more of eight ‘problems’ in the past year with regard to a specific drug. The problems included (1) tried to cut down or unable to cut down, (2) tolerance, (3) feeling sick as a result of drug use, (4) psychological problems
due to drug use, (5) social problems due to drug use, (6) physical health problems, (7) used the drug daily for 2 weeks or more, and (8) felt dependent on the drug.

A.9. Population

Granting that population size was the primary determinant of the absolute size of a state's drug problem (NIAAA, 1994), our research focused on measuring the extent to which populations vary with regard to rates of drug use disorders per 100000 state residents. We employed estimates of the entire population for 1991, 1992, and 1993 obtained from the US Bureau of the Census (1996).³ Other demographic statistics, such as the percentages foreign born, homeless, and in prison, came from 1990 census data (US Bureau of the Census, 1993).

A.10. Block Grant drug need allocation index per capita

The SAPT Block Grant allocation formula's population-at-risk-of-drug-abuse component is expressed as a proportion of the total Block Grant funds that should go to the state. In order to make this measure comparable to the other rate-based measures in this study, we multiplied each state's proportion by half of the current total Block Grant amount and divided the dollars by the size of the state's total population (1995 ST-96-1 estimates from the US Bureau of the Census, 1996). The Need Index in this study ignores adjustments for cost of living and state fiscal capacity that also affect Block Grant allocations.
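The per-capita conversion described above can be sketched with a short calculation. The total grant amount, state proportions, and populations below are invented placeholders; only the arithmetic (proportion times half the total, divided by state population) follows the text.

```python
# Hypothetical total SAPT Block Grant award, in dollars.
TOTAL_BLOCK_GRANT = 1_200_000_000
drug_component = TOTAL_BLOCK_GRANT / 2      # half attributed to drug need

# Invented state proportions (share of total funds) and populations.
states = {
    "State A": {"proportion": 0.030, "population": 5_000_000},
    "State B": {"proportion": 0.010, "population": 1_200_000},
}

# Need Index: Block Grant drug-need dollars per state resident.
need_index = {
    name: s["proportion"] * drug_component / s["population"]
    for name, s in states.items()
}
# State A: 0.030 * 600,000,000 / 5,000,000 = 3.6 dollars per capita
```

Expressing each state's share in dollars per resident is what makes the formula component comparable to the study's other rate-based measures.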
References

Addiction Treatment Forum, 1995. Hepatitis Haunts MMTPs. Addiction Treatment Forum 4, 1–3. Aktan, G.B., Calkins, R.F., Johnson, D.R., Miller, M., 1997. The Michigan Substance Abuse, Tuberculosis, and Sexually Transmitted Disease Survey (MSATS). Michigan Department of Community Health Bureau of Infectious Disease Control and Center for Substance Abuse Services, Lansing, MI. Alexander, M., 1977. Indicators of Drug Abuse—Hepatitis. In: Richards, L.G., Blevens, L.B. (Eds.), The Epidemiology of Drug Abuse: Current Issues. National Institute on Drug Abuse, Rockville, MD, pp. 123–129.
³ We used the entire population as the base of rates instead of the drug-using population because the primary interest for this analysis is to measure the burden of drug abuse on the entire state. Other things held constant, states with large elderly populations have lower rates of abuse than states with small elderly populations. If the elderly population were removed from the denominator, the resulting rate would overestimate the burden of drug dependence on the entire state (e.g. costs for treatment services). Age structure is a relevant cause of the variations in the rates of drug dependence over states. The goal of the present investigation is to estimate the magnitude of these variations rather than to control for them.
American Psychiatric Association, 1987. Diagnostic and Statistical Manual of Mental Disorders (Third Edition, Revised): DSM-III-R. American Psychiatric Association, Washington, DC. Ball, J.C., Chambers, C.D. (Eds.), 1970. The Epidemiology of Opiate Addiction in the United States. Thomas, Springfield, IL. Beenstock, M., 1995. An indicator model of drug use in Israel. Addiction 90, 425–433. Beilenson, P., Garnes, A., Brathwaite, W., West, K., Becker, K., Israel, E., Seechuk, K., Dwyer, D., 1996. Outbreak of primary and secondary syphilis — Baltimore City, Maryland, 1995. Morbidity Mortality Weekly Rep. 45, 166–169. Bennett, A., 1995. Is Tallahassee really as plagued by crime as New York City? Wall Street Journal, Jan 5, 1995, 1. Beshai, N., 1984. Assessing needs of alcohol-related services: a social indicators approach. Am. J. Drug Alcohol Abuse 10, 417–427. Biemer, P., Witt, M., 1996. Estimation of measurement bias in self reports of drug use with applications to the National Household Survey on Drug Abuse. J. Off. Stat. 12, 275–300. Bradstock, M.K., Marks, J.S., Forman, M.R., Gentry, E.M., Hogelin, G.C., Trowbridge, F.L., 1985. The behavioral risk factor surveys III: chronic heavy alcohol use in the United States. Am. J. Prev. Med. 1, 15–20. Bradstock, M.K., Marks, J.S., Forman, M.R., Gentry, E.M., Hogelin, G.C., Binkin, N.J., Trowbridge, F.L., 1987. Drinking-driving and health lifestyle in the United States: Behavioral risk factors surveys. J. Stud. Alcohol 48, 147–152. Bradstock, M.K., Forman, M.R., Binkin, N.J., Gentry, E.M., Hogelin, G.C., Williamson, D.F., Trowbridge, F.L., 1988. Alcohol use and health behavior lifestyles among US women: the behavioral risk factor surveys. Addict. Behav. 13, 61–71. Bray, R.M., Wheeless, S.C., Kroutil, L.A., 1996. Aggregating survey data on drug use across household, institutionalized, and homeless populations. In: Warnecke, R. (Ed.), Health Survey Research Methods: Conference Proceedings. DHHS, Hyattsville, MD, pp.
105–110. Bureau of Justice Statistics, 1992. A National Report: Drugs, Crime, and the Justice System. NCJ-133652. Bureau of Justice Statistics, Rockville, MD. Burnam, M.A., Reuter, P., Adams, J., Palmer, A., Model, K., Rolph, J., Heilbrunn, J., Marshall, G.N., McCaffrey, D., Wenzel, S.L., 1994. Review and Evaluation of the Substance Abuse and Mental Health Services Block Grant Allotment Formula. DRU-635HHS. RAND Drug Policy Research Center, CA. Burnam, M.A., Reuter, P., Adams, J.L., Palmer, A.R., Model, K.E., Rolph, J.E., Heilbrunn, J.Z., Marshall, G.N., McCaffrey, D.F., Wenzel, S.L., Kessler, R.C., 1997. Review and Evaluation of the Substance Abuse and Mental Health Services Block Grant Allotment Formula. RAND, Santa Monica, CA. Butynski, W., Reda, J.L., Bartosch, W., McMullen, H., Nelson, S., Anderson, R.L., Ciaccio, M., Sheehan, K., Fitzgerald, C., National Association of State Alcohol and Drug Abuse Directors (NASADAD), and SAMHSA, 1994. State Resources and Services Related to Alcohol and Other Drug Problems: Fiscal Year 1992: An Analysis of State Alcohol and Drug Abuse Profile Data. (SMA) 94-2092. Department of Health and Human Services, Rockville, MD. Cagle, L.T., Banks, S.M., 1986. The validity of assessing mental health needs with social indicators. Eval. Program Planning 9, 127–142. Campbell, D.T., Fiske, D.W., 1959. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychol. Bull. 56, 81–105. Caspar, R.A., 1992. Followup of nonrespondents in 1990. In: Turner C.F., Lessler, J.T., Gfroerer, J.C. (Eds.), Survey Measurement of Drug Use. US Department of Health and Human Services, Rockville, MD, pp. 155–173.
CDC, 1991. Alcohol and other drug use among high school students — United States, 1990. Morbidity Mortality Weekly Rep. 40, 776–777, 783–784. CDC, 1992. Summary of notifiable diseases, United States 1991. Morbidity Mortality Weekly Rep. 40. CDC, 1993. Summary of notifiable diseases, United States 1992. Morbidity Mortality Weekly Rep. 41, 1–12. CDC, 1994a. AIDS Public Information Data Set. Six disks and technical report. December 1993. US Department of Health and Human Services, Rockville, MD. CDC, 1994b. Summary of notifiable diseases, United States 1993. Morbidity Mortality Weekly Rep. 42, 1–12. CDC, 1994c. Expanded tuberculosis surveillance and tuberculosis morbidity — United States, 1993. Morbidity Mortality Weekly Rep. 43, 361–366. CDC, 1995a. Youth risk behavior surveillance — United States, 1993. Morbidity Mortality Weekly Rep. 44. CDC, 1995b. Tuberculosis morbidity — United States, 1994. Morbidity Mortality Weekly Rep. 44, 387–389, 395. CDC, 1995c. Hepatitis Surveillance Report No. 56. Centers for Disease Control and Prevention, Atlanta, GA. CDC, 1997. Reported tuberculosis in the United States, 1996. Division of Tuberculosis Elimination (downloaded from www.cdc.gov). Chein, I., Gerard, D.L., Lee, R.S., Rosenfeld, E., 1964. Social and economic correlates of drug use. In: Chein, I., Gerard, D.L., Lee, R.S., Rosenfeld, E. (Eds.), The Road to H. Basic Books, New York, pp. 47–77. Cherubin, C.E., Sapira, J.D., 1993. The medical complications of drug addiction and the medical assessment of the intravenous drug user: 25 years later. Ann. Intern. Med. 119, 1017–1028. Ciarlo, J.A., Tweed, D.L., Shern, D.L., Kirkpatrick, L.A., Sachs-Ericsson, N., 1992. I. Validation of indirect methods to estimate need for mental health services. Eval. Program Planning 15, 115–131. Cleary, P.D., 1979. A standardized estimator of the prevalence of alcoholism based on mortality data. J. Stud.
Alcohol 40, 408 – 418. Craddock, A., Collins, J.J., Timrots, A., 1994. Fact Sheet: Drug-Related Crime. NCJ-149286. Office of National Drug Control Policy Drugs and Crime Clearinghouse, Rockville, MD, pp. 1–5. Crider, R.A., 1985. Heroin incidence: a trend comparison between national household survey data and indicator data. In: Rouse, B.A., Kozel, N.J., Richards, L.G. (Eds.), Self-Report Methods of Estimating Drug Use: Meeting Current Challenges to Validity. National Institute on Drug Abuse, Rockville, MD. Cronbach, L.J., 1951. Coefficient alpha and the internal structure of tests. Psychometrika 16, 297–334. Cronbach, L.J., Meehl, P.E., 1967. Construct validity in psychological tests. In: Mehrens, W.A., Ebel, R.L. (Eds.), Principles of Educational and Psychological Measurement. Rand McNally, Chicago, IL, pp. 243 –270. Crowe, A.H., Reeves, R., 1994. Treatment for Alcohol and Other Drug Abuse: Opportunities for Coordination. (SMA)94-2075. Center for Substance Abuse Treatment, Rockville, MD. DeFleur, L.B., 1975. Biasing influences on drug arrest records: implications for deviance research. Am. Sociol. Rev. 40, 88–103. DeHovitz, J.A., Kelly, P.J., Feldman, J., Sierra, M.F., Clarke, L., Bromberg, J., Wan, J.Y., Vermund, S.H., Landesman, S., 1994. Sexually transmitted diseases, sexual behavior, and cocaine use in inner-city women. Am. J. Epidemiol. 140, 1125–1134. DeWit, D.J., Rush, B.R., 1996. Assessing the need for substance abuse services: a critical review of needs assessment models. Eval. Prog. Planning. 19, 41–64. Epstein, J.F., Gfroerer, J.C., 1995. Estimating substance abuse treatment need from a national household survey. Presented at 37th International Conference on Alcohol and Drug Dependence; August 20 – 25, 1995; La Jolla, CA.
Farabee, D., 1994. Substance Use Among Male Inmates Entering the Texas Department of Criminal Justice — Institutional Division: 1993. Texas Commission on Alcohol and Drug Abuse, Austin, TX. Farabee, D., 1995. Substance Use Among Female Inmates Entering the Texas Department of Criminal Justice — Institutional Division: 1994. Texas Commission on Alcohol and Drug Abuse, Austin, TX. Farabee, D., Leukefeld, C.G., Watson, D.D., Townsend, M., Spalding, H., Purvis, R., 1996. Substance Abuse Treatment Needs Among Kentucky Prison Inmates. University of Kentucky Center on Drug and Alcohol Research, Kentucky Division of Substance Abuse, and Kentucky Department of Corrections, KY. Federal Bureau of Investigation, 1992. Uniform Crime Reports for the United States 1991. US Department of Justice, Washington, DC. Federal Bureau of Investigation, 1993. Uniform Crime Reports for the United States 1992. US Department of Justice, Washington, DC. Federal Bureau of Investigation, 1994. Uniform Crime Reports for the United States 1993. US Department of Justice, Washington, DC. Finelli, L., Budd, J., Spitalny, K.C., 1993. Early syphilis: relationship to sex, drugs, and changes in high-risk behavior from 1987–1990. Sex. Transm. Dis. 20, 89 – 95. Finelli, L., Gursky, E.A. and CDC, 1997. Transmission of Hepatitis C virus infection associated with home infusion therapy for hemophilia. Morbidity Mortality Weekly Rep. 46, 597 –599. Fischer, P.J., 1991. Homeless persons: A review of the literature, 1980 – 1990, Executive Summary. National Institute on Alcohol Abuse and Alcoholism, Rockville, MD. Flaherty, E.W., Kotranski, L., Fox, E., 1983. A model for monitoring changes in drug use and treatment entry. Prev. Hum. Serv. 2, 89 – 108. Folsom, R.E., Judkins, D.R., 1997. Substance Abuse in the States and Metropolitan Areas: Model Based Estimates from the 1991– 1993 National Household Surveys on Drug Abuse — Methodology Report. SAMHSA, Office of Applied Studies, Rockville, MD. 
Folsom, R.E., Lessler, J.T., Witt, M.B., Gfroerer, J.C., Wright, D.A., Gustin, J., Office of Applied Studies, and SAMHSA, 1996. Substance Abuse in States and Metropolitan Areas: Model Based Estimates from the 1991 – 1993 National Household Surveys on Drug Abuse Summary Report. US Department of Health and Human Services, Rockville, MD. Ford, W.E., 1984. Predicting drug abuse service and manpower needs. Paper presented at The National Conference for Human Service Professionals: Assessing Community Needs for Alcoholism, Drug Abuse, and Mental Health Services, November 1984, Tucson, AZ. Ford, W.E., 1985. Alcoholism and drug abuse service forecasting models: a comparative discussion. Int. J. Addict. 20, 233–252. Francis, D.P., Hadler, S.C., Prendergast, T.J., Peterson, E., Ginsberg, M.M., Lookabaugh, C., Holmes, J.R., Maynard, J.E., 1984. Occurrence of hepatitis A, B, and non-A/non-B in the United States. Am. J. Med. 76, 69 – 74. Frank, B., Schmeidler, J., Johnson, B.D., Lipton, D.S., 1978. Seeking truth in heroin indicators: the case of New York City. Drug Alcohol Depend. 3, 345 – 358. Fredlund, E.V., Farabee, D., Blair, L.A., Wallisch, L.S., 1995. Substance Use and Delinquency Among Youths Entering Texas Youth Commission Facilities: 1994. Texas Commission on Alcohol and Drug Abuse, Austin, TX. Furst, C.J., Beckman, L.J., Nakamura, C.Y., 1981. Validity of synthetic estimates of problem-drinker prevalence. Am. J. Public Health 71, 1016 – 1020. GAO, 1990. Rural Drug Abuse: Prevalence, Relation to Crime, and Programs. GAO/PEMD-90-24. GAO, Washington, DC.
GAO, 1993. Drug Use Measurement: Strengths, Limitations, and Recommendations for Improvement. GAO/PEMD-93-18. GAO, Washington, DC. Gast, J., Caravella, T., Sarvela, P.D., McDermott, R.J., 1995. Validation of the CDC’s YRBBS alcohol questions. Health Values 19, 38– 43. Gfroerer, J.C., Brodsky, M.D., 1991. Estimation of drug abuse prevalence in California using the National Household Survey on Drug Abuse. Presented at the meeting of the Sacramento Statistical Association, March 27, 1991, Sacramento, CA. Gruenewald, P.J., Treno, A.J., Klitzner, M.D., 1997. Measuring Community Indicators: A Systems Approach to Drug and Alcohol Problems. Sage Publications, Thousand Oaks, CA. Gustafson, J.S., Reda, J.L., McMullen, H., DiCarlo, M., Anderson, R., Brooke, M., Ciaccio, M., Foley, L., Gemma, D., Nelson, S., Sheehan, K., and National Association of State Alcohol and Drug Abuse Directors (NASADAD), 1995. State Resources and Services Related to Alcohol and Other Drug Problems: Fiscal Year 1993: An Analysis of State Alcohol and Drug Abuse Profile Data. National Association of State Alcohol and Drug Abuse Directors (NASADAD), Washington, DC. Haverkos, H.W., 1991. Infectious diseases and drug abuse: prevention and treatment in the drug abuse treatment system. J. Subst. Abuse Treat. 8, 269 –275. Haverkos, H.W., Lange, W.R., 1990. Serious infections other than human immunodeficiency virus among intravenous drug users. J. Infect. Dis. 161, 894 –902. Hopkins, D.D., Grant-Worley, J.A., Bollinger, T.L., 1989. Survey of cause-of-death query criteria used by state vital statistics programs in the US and the efficacy of the criteria used by the Oregon Vital Statistics Program. Am. J. Public Health 79, 570 – 574. Hudik, T.L., Huff, D., Prell, L., Roeder, L., Bell, B., Porter, K., and Moore, R.G., 1994. An Assessment of the Substance Abuse Treatment Needs of the Inmates of Iowa’s Correctional Institutions. 
Iowa Department of Human Rights, Division of Criminal and Juvenile Justice Planning, IA. Hunt, L.G., 1974. Recent Spread of Heroin Use in the United States: Unanswered Questions. Drug Abuse Council, Washington, DC. Illinois Department of Corrections, 1995. Adult Prison Study: Draft Version. Illinois Department of Alcoholism and Substance Abuse, IL. Joachim, G., Hadler, J.L., Goldberg, M., Sharrar, R.G., David, R., 1988. Relationship of syphilis to drug use and prostitution. Morbidity Mortality Weekly Rep. 37, 755–758. Kann, L., Warren, C.W., Harris, W.A., Collins, J.L., Douglas, K.A., Collins, M.E., Williams, B.I., Ross, J.G., Kolbe, L.J., 1995. Youth risk behavior surveillance—United States, 1993. J. Sch. Health 65, 163 – 171. Kann, L., Warren, C.W., Harris, W.A., Collins, J.L., Williams, B.I., Ross, J.G., Kolbe, L.J., 1996. Youth risk behavior surveillance — United States 1995. Morbidity Mortality Weekly Rep. 45 (Suppl.), 1 – 85. Kilmarx, P.H., St. Louis, M.E., 1995. Editorial: The evolving epidemiology of syphilis. Am. J. Public Health 85, 1053–1054. Kolbe, L.J., Kann, L., Collins, J.L., 1993. Overview of the youth risk behavior surveillance system. Public Health Rep. 108, 2–10. LaBrie, R.A., McAuliffe, W.E., Nemeth-Coslett, R., 1992. Seroprevalence of HTLV-I and HTLV-II among intravenous drug users. New Engl. J. Med. 326, 1783. Larson, M.J., Marsden, M.E., 1995. State-Level Substance Abuse Indicators for Youth. Brandeis University, Waltham, MA. Leen, J., 1998. Drug war success not in numbers. Washington Post, January 2, A1. Liu, S., Siegel, P.Z., Brewer, R.D., Mokdad, A.H., Sleet, D.A., Serdula, M.K., 1997. Prevalence of alcohol-impaired driving: results from a national self-reported survey of health behaviors. J. Am. Med. Assoc. 277, 122–125.
Mammo, A., French, J.F., 1996. On the construction of a relative needs assessment scale. Subst. Use Misuse 31, 753–765. McAuliffe, W.E., 1984. A validation theory for quality assessment and other health care measurement. In: Pena, J.J., Haffner, A.N., Rosen, B., Light, D.W. (Eds.), Hospital Quality Assurance, Risk Management and Program Evaluation: Tools for Clinicians and Administrators. Aspen Press, Rockville, MD, pp. 157–174. McAuliffe, W.E., 1995. A severe test of the NTC's Model Family of Studies: assessing drug addict treatment needs. Presented at National Technical Center Annual Workshop: Needs Assessment in a Changing Health Care Environment, November 14–15, 1995, Rockville, MD. National Technical Center for Substance Abuse Needs Assessment, Cambridge, MA. McAuliffe, W.E., Breer, P., White, N., Spino, C., 1987. A Drug Abuse Treatment and Intervention Plan for Rhode Island: Review Copy. Rhode Island Department of Mental Health, Retardation and Hospitals, Division of Substance Abuse, RI. McCarty, D., Argeriou, M., Huebner, R.B., Lubran, B.G., 1991. Alcoholism, drug abuse, and the homeless. Am. Psychol. 46, 1139–1148. McKenna, M.T., McCray, E., Onorato, I., 1995. The epidemiology of tuberculosis among foreign-born persons in the United States, 1986 to 1993. New Engl. J. Med. 332, 1071–1076. Mellinger, A.K., Goldberg, M., Wade, A., Brown, P.Y., Hughes, G.A., Lutz, J.P., Harrington-Lyon, W., 1991. Alternative case-finding methods in a crack-related syphilis epidemic — Philadelphia. Morbidity Mortality Weekly Rep. 40, 77–80. Miller, P.V., 1997. Is ‘up’ right? The National Household Survey on Drug Abuse. Public Opin. Q. 61, 627–641. Minugh, P.A., McAuliffe, W.E., LaBrie, R.A., Geller, S., Pollock, N., Lomuto, N., Betjemann, R., 1997. Final Evaluation of the Substance Abuse Treatment Needs Assessment Program. National Technical Center for Substance Abuse Needs Assessment, Cambridge, MA. National Center for Health Statistics (NCHS), 1997a.
1991 Multiple Cause-of-Death File. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics, Hyattsville, MD. National Center for Health Statistics (NCHS), 1997b. 1992 Multiple Cause-of-Death File. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics, Hyattsville, MD. National Center for Health Statistics (NCHS), 1997c. 1993 Multiple Cause-of-Death File. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Health Statistics, Hyattsville, MD. NIAAA, 1991. County Alcohol Problem Indicators: 1979–1985. NIAAA, Rockville, MD. NIAAA, 1994. US Alcohol Epidemiologic Data Reference Manual Volume 3, Fourth Edition: County Alcohol Problem Indicators 1986–1990. 94-3747. US Department of Health and Human Services, Rockville, MD. Nunnally, J.C., 1978. Psychometric Theory, 2nd ed. McGraw-Hill, New York. Nurco, D.N., Balter, M.B., 1969. Drug Abuse Study: Maryland 1969. Maryland State Department of Mental Hygiene, Annapolis, MD. Office of Applied Studies, 1993. National Drug and Alcoholism Treatment Unit Survey (NDATUS): 1991 Main Finding Report. (SMA) 93-2007. US Department of Health and Human Services, Rockville, MD. Office of Applied Studies, 1995a. Overview of the FY94 National Drug and Alcoholism Treatment Unit Survey (NDATUS): Data from 1993 and 1980–1993. Advance Report Number 9A. US Department of Health and Human Services, Rockville, MD. Office of Applied Studies, 1995b. Overview of the National Drug and Alcoholism Treatment Unit Survey (NDATUS): 1992 and 1980–1992. Advance Report Number 9. SAMHSA, Rockville, MD.
Office of National Drug Control Policy, Executive Office of the President (ONDCP), 1998. Performance Measures of Effectiveness: A System for Assessing the Performance of the National Drug Control Strategy, 1998–2007. White House, Washington, DC. Pampalon, R., Saucier, A., Berthiaume, N., Ferland, P., Couture, R., Caris, P., Fortin, L., Lacroix, D., Kirouac, R., 1996. The selection of needs indicators for regional resource allocation in the fields of health and social services in Quebec. Soc. Sci. Med. 42, 909–922. Person, P.H., Jr., Retka, R.L., Woodward, J.A., 1976. Toward a Heroin Problem Index—An Analytical Model for Drug Abuse Indicators (Technical Paper). National Institute on Drug Abuse, Rockville, MD. Person, P.H. Jr., Retka, R.L., Woodward, J.A., 1977. A Method for Estimating Heroin Use Prevalence. (ADM) 77-439. NIDA, Rockville, MD. Pollock, D.A., Holmgreen, P., Lui, K.-J., Kirk, M.L., 1991. Discrepancies in the reported frequency of cocaine-related deaths, United States, 1983 through 1988. J. Am. Med. Assoc. 266, 2233–2237. Rahav, M., Link, B.G., 1995. When social problems converge: Homeless, mentally ill, chemical misusing men in New York City. Int. J. Addict. 30, 1019–1042. Rhodes, W.M., 1993. Synthetic estimation applied to the prevalence of drug use. J. Drug Issues 23, 297–321. Rolfs, R.T., Goldberg, M., Sharrar, R.G., 1990. Risk factors for syphilis: cocaine use and prostitution. Am. J. Public Health 80, 853–857. SAMHSA, 1993. State Resources and Services Related to Alcohol and Other Drug Abuse Problems, Fiscal Year 1991: An Analysis of State Alcohol and Drug Abuse Profile Data. (SMA) 93-1989. US Dept. of Health and Human Services, Rockville, MD. SAMHSA, 1997. Request for Proposal (RFP) No. 283-98-9008, 1999–2003 National Household Survey on Drug Abuse (NHSDA). Substance Abuse and Mental Health Services Administration, Rockville, MD. SAMHSA, 1998.
Request for Proposal (RFP) No. 270-98-7052, State Treatment Needs Assessment Studies: Alcohol and Other Drugs. Substance Abuse and Mental Health Services Administration, Rockville, MD. Scherpenzeel, A.C., Saris, W.E., 1997. The validity and reliability of survey questions: a meta-analysis of MTMM studies. Sociol. Methods Res. 25, 341–383. Schlaffer, M., 1997. Fact Sheet: Drug-Related Crime. White House ONDCP Drug Policy Information Clearinghouse, Rockville, MD, pp. 1 – 5. Schlesinger, M., Dorwart, R.A., 1992. Institutional Dynamics of Drug Treatment in the U.S.: City and State Variation in Need and Treatment Capacity. Cambridge, MA. Schlesinger, M., Dorwart, R.A., Epstein, S., Clark, R., 1993. Weed, capacity and public choice; variation in treatment for drug abuse amoung the larger American cities. NIDA Treatment Services Research Monograph. National Institute on Drug Abuse, Rockville, MD. Shai, D., 1994. Problems of accuracy in official drug-related statistics. Int. J. Addict. 29, 1801–1811. Sherman, R.E., Gillespie, S., Diaz, J.A., 1996. Use of social indicators in assessment of local community alcohol and other drug dependence treatment needs within Chicago. Subst. Use Misuse 31, 691 – 728.
Simeone, R.S., Frank, B., Aryan, Z., 1993. Needs assessment in substance misuse: a comparison of approaches and case study. Int. J. Addict. 28, 767–792.
Smith, E.M., North, C.S., Spitznagel, E.L., 1993. Alcohol, drugs, and psychiatric comorbidity among homeless women: an epidemiologic study. J. Clin. Psychol. 54, 82–87.
Stein, A.D., Courval, J.M., Lederman, R.I., Shea, S., 1995. Reproducibility of responses to telephone interviews: demographic predictors of discordance in risk factor status. Am. J. Epidemiol. 141, 1097–1106.
Substance Abuse Funding News, 1997a. House and Senate funding levels for substance abuse-related programs. Substance Abuse Funding News 3.
Substance Abuse Funding News, 1997b. Proposed change for substance abuse Block Grant redistributes $44 million: state directors upset they weren't notified, claim client is the real loser if funds are redirected. Substance Abuse Funding News 1-2.
Substance Abuse Funding News, 1997c. Changes in household survey would improve state estimates. Substance Abuse Funding News 9.
Susser, E., Struening, E.L., Conover, S., 1989. Psychiatric problems in homeless men: lifetime psychosis, substance use, and current distress in new arrivals at New York City shelters. Arch. Gen. Psychiatry 46, 845–850.
Swartz, J.A., 1996. Results of the 1995 Illinois Drug Use Forecasting Study. Illinois Criminal Justice Information Authority, IL.
Tennant, F.S., Moll, D., 1995. Seroprevalence of Hepatitis A, B, C, and D markers and liver function abnormalities in intravenous heroin addicts. J. Addict. Dis. 14, 35–49.
Thompson, D.C., Rivara, F.P., Thompson, R.S., Salzberg, P.M., Wolf, M.E., Pearson, D.C., 1993. Use of behavioral risk factor surveys to predict alcohol-related motor vehicle events. Am. J. Prev. Med. 9, 224–230.
Turner, C.F., Lessler, J.T., Gfroerer, J.C. (Eds.), 1992. Survey Measurement of Drug Use: Methodological Studies. National Institute on Drug Abuse, Washington, DC.
US Bureau of the Census, 1993. 1990 Census of Population and Housing Summary Tape File 3C. US Department of Commerce, Washington, DC.
US Bureau of the Census, 1996. ST-96-1 Estimates of the Populations of States: Annual Time Series, July 1, 1990 to July 1, 1996. US Bureau of the Census, Population Division, Population Estimates Program, Washington, DC.
US Department of Commerce, 1998. [AWARD] 1999–2003 National Household Surveys on Drug Abuse. Commerce Business Daily 2078.
US Department of Health and Human Services, 1980. ICD-9-CM: International Classification of Diseases, 9th Revision, Clinical Modification, 2nd ed., Vol. 1, Diseases: Tabular List. US Government Printing Office, Washington, DC.
Wilson, R.A., Hearne, B.E., 1985. The Feasibility of a Drug Abuse Treatment Profile System. GPO, Washington, DC.
Wilson, R.A., Hearne, B.E., 1986. An assessment of the state of the art in drug abuse and alcoholism treatment needs estimation methods. In: Einstein, S. (Ed.), Treating the Drug User: Selected Planning Models, Issues, Parameters, and Programs. Sandoz Publications, Danbury, CT, pp. 186–221.
Wilson, R.A., Malin, H.J., Lowman, C., 1983. Uses of mortality rates and mortality indexes in planning alcohol programs. Alcohol Health Res. World 8, 41–53.
Woodward, J.A., Retka, R.L., Ng, L., 1984. Construct validity of heroin abuse estimators. Int. J. Addict. 19, 93–117.