Estimating Actual Rates of Drug Use

Socio-Econ. Plann. Sci. Vol. 27, No. 3, pp. 199-207, 1993. Printed in Great Britain. All rights reserved.
0038-0121/93 $6.00 + 0.00. Copyright © 1993 Pergamon Press Ltd

JOHN M. GLEASON¹ and DAROLD T. BARNUM²

¹College of Business Administration and Center for Health Policy and Ethics, School of Medicine, Creighton University, Omaha, NE 68178-0130 and ²College of Business Administration and Center for Human Resource Management, University of Illinois (M/C 240), P.O. Box 4348, Chicago, IL 60680, U.S.A.

Abstract: An increasing number of business and public sector employers are requiring drug tests of employees and applicants, and tests often are conducted on other groups as well. One outcome of widespread testing is the tendency to equate the rate of positive drug test results with the actual usage rate for the target population involved. In this paper, we show that these two rates seldom are equal. We develop a simple model that may be used to determine the actual rate of drug usage in a given environment, based on drug test rates (reported rates) and laboratory drug testing accuracy. We use recent data on drug test outcomes and laboratory accuracy to illustrate the procedure. The results suggest that the highest reported rates underestimate the actual rates; specifically, the actual rates are 3-15% higher than the reported rates. Similarly, the lowest reported rates overestimate the actual rates; specifically, the actual rates are 10-55% lower than the reported rates. It is not possible to determine the exact degree of mis-estimation for a particular case without knowing the applicable test rate and lab accuracy level, but the aforementioned ranges give an indication of the possibilities.


The reactions of public policy makers and public and private sector employers to workplace drug abuse are partly based on the percentage of the workforce presumed to be taking drugs. For example, in cases where it is thought that few workers take drugs, there have been efforts to avoid comprehensive drug testing [4, 7, 19]. Likewise, high prevalence rates have been used to justify massive testing and other intensive measures [8, 10, 11, 46]. That is, the level of efforts to reduce drug abuse, and the associated costs, are partly related to the extent of the perceived problem. High usage rates are more likely to generate high benefit-cost ratios for a wide range of prevention measures, while low abuse rates might not be cost-beneficial for the more comprehensive solutions [21, 31]. As Zwerling et al. conclude, "the prevalence of drug use is the most important factor in determining the cost-benefit ratio of a screening program" [50, p. 93].

The drug usage rate of a given group commonly is estimated by drug tests of those in the group [17, 26, 28, 31, 33, 41, 48, 50, 51]. However, the rate of positive test results for a group, herein referred to as the reported rate, is not necessarily an accurate estimate of the actual proportion of those tested who have drugs in their urine. The reported rate includes both true positive and false positive test results, and omits false negatives. Thus, reported and actual rates are identical only when there are no false positives or false negatives (or when the false results exactly offset each other).

The effects of incorrect drug test results on the probability of falsely accusing innocent people have been analyzed elsewhere [1, 2, 16]. Herein, the impacts of false test results on estimated drug-use rates are examined. A simple model for determining the true usage rate is developed. The model relies on drug test data, and on laboratory proficiency data regarding the accuracy of testing processes. Application of the model suggests that drug tests usually mis-estimate the actual percentage of a workforce on drugs; this mis-estimation may lead to inappropriate responses by public policy makers, business organizations and public agencies.

In the following sections, we review basic terminology, develop a model to determine the actual usage rate, present positive drug test rates from a variety of industries, discuss information from laboratory proficiency studies concerning accuracy of testing processes, and use the drug-test and laboratory proficiency data to illustrate the model.


TERMINOLOGY

Because of the specialized terminology, the following summary of terms utilized in this paper is offered. The "reported rate" of drug use is the proportion of those tested who test positive for drugs, whether or not they truly have evidence of drugs in their urine at the time they are tested. The "actual rate" of drug use is the proportion of those tested who truly have evidence of drugs in their urine at the time they are tested, whether or not they test positive. Neither the reported rate nor the actual rate attempts to identify the proportion of those tested who at some time in their past have used drugs. Both rates concern only the proportion of the target group whose urine suggests evidence of drugs at the time of the testing. Thus, they both attempt to measure the proportion of a group with drugs or drug metabolites in their bodies (called "system presence" [13]), not the proportion of the group that has ever taken drugs.

When a specimen is tested for drugs, one of four outcomes must occur:

- true positive: specimen with drugs tests positive for drugs;
- false negative: specimen with drugs tests negative for drugs;
- false positive: specimen with no drugs tests positive for drugs;
- true negative: specimen with no drugs tests negative for drugs.

Given that a specimen contains drugs, it must test either positive or negative. That is, the probability of a positive test result (given drugs are present) plus the probability of a negative test result (given drugs are present) must equal 1.0. This may be written:

P(+|Drugs) + P(-|Drugs) = 1.0.

In other words, when drugs are present in a specimen, the probabilities of a true positive and a false negative must total one. The probability of a true positive is referred to as "sensitivity," and the probability of a false negative is referred to as the "false negative rate." That is,

Sensitivity = P(+|Drugs) and False Negative Rate = P(-|Drugs).

The sensitivity and the false negative rates are complements; that is, sensitivity = (1 - false negative rate). Thus, both rates measure the same phenomenon: the ability to detect the presence of drugs.

Next, a specimen without drugs must test either positive or negative. That is, when no drugs are present, the test must result in either a false positive or a true negative. This may be written:

P(+|No Drugs) + P(-|No Drugs) = 1.0.

Thus, when drugs are not present in a specimen, the probabilities of a false positive and a true negative must total one. The probability of a false positive is referred to as the "false positive rate," and the probability of a true negative is referred to as "specificity." So,

Specificity = P(-|No Drugs) and False Positive Rate = P(+|No Drugs).

The specificity and the false positive rates are complements; that is, specificity = (1 - false positive rate). Thus, these two rates both measure the same phenomenon: the ability to detect the absence of drugs.

In drug testing, we are most concerned with incorrect results; that is, with false positives and false negatives. Sensitivity and specificity, however, are indirect indicators of the false result rates: the higher the sensitivity, the lower the false negative rate; and the higher the specificity, the lower the false positive rate.
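The four outcomes and the complement relationships above can be illustrated numerically. The following sketch uses hypothetical outcome counts chosen only to demonstrate the definitions; it is not data from the paper.

```python
# Hypothetical outcome counts for a batch of specimens with known contents:
# 100 specimens contain drugs, 100 do not.
true_pos, false_neg = 80, 20   # specimens with drugs: tested +, tested -
false_pos, true_neg = 2, 98    # specimens without drugs: tested +, tested -

sensitivity = true_pos / (true_pos + false_neg)            # P(+|Drugs) = 0.80
false_negative_rate = false_neg / (true_pos + false_neg)   # P(-|Drugs) = 0.20
specificity = true_neg / (false_pos + true_neg)            # P(-|No Drugs) = 0.98
false_positive_rate = false_pos / (false_pos + true_neg)   # P(+|No Drugs) = 0.02

# The complement relationships stated in the text:
assert abs(sensitivity - (1 - false_negative_rate)) < 1e-12
assert abs(specificity - (1 - false_positive_rate)) < 1e-12
```

Note that these hypothetical counts happen to yield the sensitivity (0.80) and false positive rate (0.02) reported in the Knight study discussed below.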

A MODEL FOR DETERMINING THE ACTUAL DRUG USAGE RATE

The rate of positive drug tests (the reported rate of drug use) is the sum of two terms: (i) the product of the actual rate of drug use and test sensitivity, and (ii) the product of the actual non-usage rate and the false positive rate. That is,

P(+) = [P(D) x P(+|D)] + [P(N) x P(+|N)],    (1)

where P(+) is the reported rate of drug use, P(D) is the actual rate of drug use, P(+|D) is the test sensitivity, P(N) is the actual rate of non-use, and P(+|N) is the false positive rate. Our objective is to determine the actual rate of drug use, P(D). Since the sum of the probability of usage and the probability of non-usage is 1, we can substitute [1 - P(D)] for P(N) in equation (1) to yield:

P(+) = [P(D) x P(+|D)] + [[1 - P(D)] x P(+|N)]
     = [[P(+|D) - P(+|N)] x P(D)] + P(+|N).

Solving for P(D) yields:

P(D) = [P(+) - P(+|N)]/[P(+|D) - P(+|N)].    (2)

As the equation illustrates, the actual rate of drug use can be estimated if one knows: (1) the reported rate of drug use; (2) the false positive rate; and (3) sensitivity. These rates are discussed in the following two sections.

REPORTED RATES OF DRUG USE

The U.S. Department of Labor conducted a survey of employer drug programs during the summer of 1988 [41]. Included in this survey is information on the results of drug tests on current employees as well as job applicants, by industry, for the 1987-1988 period. Positive test rates reported for employees in the survey are presented in Table 1. Because these rates are representative of those found in various testing situations, they are used to illustrate the model.

Table 1. Reported rates of drug use for current employees, based on drug test results

Industry                           Reported rate
Services                           0.031
Transportation                     0.056
Mining                             0.061
Communications/public utilities    0.078
Nondurable goods manufacturing     0.089
Construction                       0.120
Durable goods manufacturing        0.121
Retail trade                       0.188
Wholesale trade                    0.202

ESTIMATES OF DRUG-TESTING ACCURACY

In order to utilize the proposed model, it also is necessary to estimate the false positive rate and sensitivity for the laboratories doing the drug tests. Over the last 15 years, a number of empirical laboratory proficiency studies have been conducted in which prepared urine specimens have been sent to laboratories to determine the accuracy of their testing procedures [3, 5, 6, 9, 14, 15, 18, 20, 24, 27, 35, 47]. The three most recent blind studies of United States drug-testing facilities (studies that examined empirical data from significant numbers of laboratories and have been published in first-tier refereed journals) are those by Davis et al. in 1988 [9], Frings et al. in 1989 [14], and Knight et al. in 1990 [24]. The Davis study was conducted before the National Institute on Drug Abuse (NIDA) standards were proposed in 1987. The Frings study was conducted in November 1988, shortly after NIDA certification standards became required (in April 1988) for labs testing federal employees. Data for


the Knight study were collected before NIDA certification was possible, although its authors note in a subsequent publication [25, pp. 429-430]:

Methods now being used by NIDA laboratories . . . are essentially the same as those used to generate the majority of the results in our study. All laboratories we studied were then nationally accredited, headed by respected, credentialed directors and analysts, and many have since obtained NIDA . . . certification.

The results from all three studies are based upon testing processes in which confirmation tests of positive screens were conducted. That is, specimens that tested negative on the screen were assumed to be negative. But, specimens testing positive on the screen were confirmed positive by a second test before being reported as positive. The Davis and Knight studies were completely blind, because the laboratories did not know which specimens were part of the study. The Frings study was partly blind, because many participating laboratories knew that test samples would come from certain clients, but did not know which of the samples from those clients were part of the study. (Because specimens tested under blind conditions receive routine treatment, blind studies more closely approximate normal accuracy levels than do studies in which the labs know which samples are challenges. Not surprisingly, blind studies find substantially lower levels of accuracy than open studies.)

The false positive rates reported were 1.3% by Davis, 0.0% by Frings, and 2.0% by Knight. These are estimates of P(+|No Drugs), and represent the proportion of drugless specimens for which drugs were reported to be present. The test sensitivities reported were 68.9% by Davis, 85.3% by Frings, and 80.0% by Knight. These are estimates of P(+|Drugs), and represent the proportion of the drug challenges for which drugs were reported present.

Although no blind tests of multiple laboratories have been conducted for screening tests only, a 1991 study of commonly-used screening technologies utilized specimens gathered from a criminal justice population, using NIDA cutoff level guidelines [45]. For EMIT, a screening technology commonly used in NIDA-certified labs, the false positive rate was 2.5% for cocaine and 2.1% for marijuana, while sensitivity rates were 77% for cocaine and 71% for marijuana. For all involved technologies and drugs, the false positive rates ranged from 0.1 to 4.1%, and sensitivities ranged from 8 to 98%. The sensitivity rates are of special interest; if confirmation tests were added to the screen, sensitivity levels would not increase and most likely would decrease somewhat. Thus, this study gives another indication of sensitivity levels that might be expected in many testing situations, and shows that sensitivity estimates from the three blind studies cited previously are not worse than more recent empirical outcomes.

Finally, accuracy might be higher and more uniform at laboratories truly operating under formal guidelines such as those required since 1988 at NIDA-certified labs [40, 42] or guidelines recommended by the Task Force on the Drug-Free Workplace in 1991 [39]. NIDA standards require that its certified labs show a false positive rate of 0 and a sensitivity of at least 90% in proficiency testing [40]. Actual performance of NIDA-certified labs is not known, as the necessary records have not been released [25, 36]. However, referees for this article suggested that in actual testing situations a false positive rate of 0.5% and a sensitivity level of 95.0% might be expected from labs following NIDA guidelines.

As can be seen from the preceding discussion, there is some variation among the estimates of laboratory drug-testing accuracy. Moreover, variations can occur even among the labs in a single study. An illustration of such variation is provided in the Knight study [24]. For the labs sampled, which were all reference labs utilized on a regular basis by a large industrial employer, the false positive rates ranged from 0 at the best lab to 4% at the worst, while sensitivity levels ranged from 93% at the best lab to 66% at the worst. Interestingly, the lab with the worst false positive rate confirmed all tests with GC/MS (gas chromatography/mass spectrometry) technology, as did the lab that virtually tied for the worst sensitivity (68%), indicating that this NIDA-required "gold-standard technology" does not necessarily produce accurate results.

Differences in accuracy might be lessened somewhat if most labs in the nation truly adhered to a common set of standards requiring very high minimum accuracy levels. No such set of common standards presently exists. A limited group of workers, federal employees and safety sensitive


workers in federally-regulated transportation industries, must be tested in laboratories certified by NIDA [32, 40, 42]. However, although a common set of standards is theoretically required for laboratories testing workers in these groups, they are not always followed in practice [23, 43, 44]. As of 1992, only 11 states had relevant legislation regulating drug-testing accuracy of job applicants, and only 16 regulated drug-testing accuracy for current employees. Even in these states, the standards for drug testing accuracy were diverse [29, 30, 49]. Attempts have been made since 1988 to pass federal legislation which would require all drug-testing laboratories to meet NIDA-type guidelines [34, 49], and various groups have suggested model legislation [39, 46]. As of December 1992, no such law had been passed.

In sum, the accuracy of drug-testing for job applicants and employees (as well as for students and prisoners) most often is unregulated. Except in a few states, private sector testing not required by the federal government need not be conducted by certified labs and does not require confirmation of positive screens [12, 22, 34, 38, 45, 49]. As indicated in late 1991 by Professor Rodney Smolla of the College of William and Mary [37, p. 8]:

The current legal picture governing drug-testing is chaotic. . . . The uneven patchwork of state and federal legislation creates a maze of conflicting regulations . . . [the courts have not] set rigorous procedural or substantive standards . . .

At the Sixth International Conference on Drug Policy Reform in November 1992, Paul Marcus of the College of William and Mary noted that there still were inadequate legislative and judicial requirements for accurate testing for most workers. He again cited the need for a federal statute covering private and public sector workers that would mandate confirmation of positive screening tests and would establish licensing requirements for testing laboratories [38]. However, even if such a law were to be passed soon, theoretically required standards and what actually occurs in practice would likely be quite different for years to come [1, 2, 16, 23, 43, 44].

It appears to us, therefore, that for the foreseeable future there will continue to be a range of accuracy in United States drug testing laboratories. It would be inappropriate to assume that the accuracy of the very best labs, probably those which are NIDA-certified, represents the norm in the United States, much less in the rest of the world. Even accuracy estimates from the three blind empirical studies discussed herein may be better than is typical. However, the cited studies do present reasonable estimates of the range of average accuracies that could be expected from good U.S. laboratories in the near future; consequently, these estimates are used to illustrate the proposed model. Nevertheless, it should be clearly understood that none of the estimates necessarily approximates a United States average, and, of course, they do not necessarily represent accuracy levels elsewhere in the industrialized world. And, accuracy in a specific situation or country may be much worse or better than these estimates. Therefore, if our model is utilized to estimate the actual rate of drug use for a particular case, one should use accuracy levels from the actual laboratories involved in the testing.

APPLICATION OF THE MODEL

For purposes of illustrating the model, the drug test results from the Department of Labor (DOL) survey are used in conjunction with laboratory accuracy estimates from the Knight study and similar estimates suggested by the referees for this paper. The drug tests reported in the DOL survey were conducted in 1987-1988, roughly the same time period that data were collected for the Knight study. Also, the Knight study sampled laboratories engaged in workforce drug-testing, which also was the population considered by the DOL survey. While the accuracy rates of the laboratories used by the firms surveyed by the DOL may have been either higher or lower than those studied by Knight, we feel that the results of the Knight study are indicative of accuracy rates in the laboratories that conducted the drug tests reported in the DOL survey. The accuracy levels suggested by the referees as representative of what could be expected at labs following NIDA guidelines, herein called the NIDA accuracy levels, demonstrate what would occur if these higher

levels of accuracy were present. Again, however, we carefully note that all estimates are for illustration purposes only, and do not necessarily apply to any particular case.

Consider the positive drug test results reported for current employees in the DOL survey, as seen in Table 1. Positive test results (referred to herein as the reported rates) range from 3.1% in Services to 20.2% in Wholesale Trade. To illustrate the estimation of actual drug use rates, we first use these two extremes and Knight accuracy results in conjunction with eqn (2), above. From the DOL study, P(+) is 0.031 for Services and 0.202 for Wholesale Trade. From the Knight study, P(+|N) = 0.02 and P(+|D) = 0.80. Using eqn (2), for Services:

P(D) = [P(+) - P(+|N)]/[P(+|D) - P(+|N)] = (0.031 - 0.02)/(0.80 - 0.02) = 0.014.

And, for Wholesale Trade:

P(D) = [P(+) - P(+|N)]/[P(+|D) - P(+|N)] = (0.202 - 0.02)/(0.80 - 0.02) = 0.233.

Thus, the actual usage rate in Services was 1.4%, less than half the reported rate of 3.1%. And, the actual usage rate in Wholesale Trade was 23.3%, 15% higher than the reported rate of 20.2%. Thus, drug use was a smaller problem in Services and a larger problem in Wholesale Trade than is indicated from the results of drug tests. Similar data for current employees in other industries are shown in Table 2.

Table 2. Reported and actual rates of drug use for current employees (sensitivity = 0.8, false positive rate = 0.02)

Industry                           Reported rate   Actual rate   Actual/reported
Services                           0.031           0.014         0.45
Transportation                     0.056           0.046         0.82
Mining                             0.061           0.053         0.87
Communications/public utilities    0.078           0.074         0.95
Nondurable goods manufacturing     0.089           0.088         0.99
Construction                       0.120           0.128         1.07
Durable goods manufacturing        0.121           0.129         1.07
Retail trade                       0.188           0.215         1.14
Wholesale trade                    0.202           0.233         1.15

If the employees had all been tested in labs following NIDA standards (assuming accuracies suggested by the referees for this paper), the outcomes would have been those shown in Table 3. Thus, under NIDA accuracy levels the actual usage rate in Services would have been 2.8%, or 10% less than the reported rate of 3.1%; and the actual usage rate in Wholesale Trade would have been 20.8%, or 3% more than the reported rate of 20.2%.

Table 3. Reported and actual rates of drug use for current employees (sensitivity = 0.95, false positive rate = 0.005)

Industry                           Reported rate   Actual rate   Actual/reported
Services                           0.031           0.028         0.90
Transportation                     0.056           0.054         0.96
Mining                             0.061           0.059         0.97
Communications/public utilities    0.078           0.077         0.99
Nondurable goods manufacturing     0.089           0.089         1.00
Construction                       0.120           0.122         1.02
Durable goods manufacturing        0.121           0.123         1.02
Retail trade                       0.188           0.194         1.03
Wholesale trade                    0.202           0.208         1.03

Thus, the results in Tables 2 and 3 suggest that the highest reported rates (Wholesale Trade) underestimate the actual rates; specifically, actual rates are 3-15% higher than the reported rates. Similarly, the lowest reported rates (Services) overestimate the actual rates; specifically, actual rates are 10-55% lower than the reported rates. It is not possible to know the exact degree of mis-estimation without knowing the accuracy rates for the lab that does the tests, but these ranges give some indication of the possibilities.

It was noted earlier in this paper that when false results occur, the actual rate will equal the reported rate only when the false positive rate offsets the false negative rate. To determine the
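The computations above, and every row of Tables 2 and 3, can be reproduced with a few lines of code. The sketch below is our illustration of eqn (2) (the function name is ours, not part of the original study):

```python
def actual_rate(reported, sensitivity, false_pos):
    """Eqn (2): P(D) = [P(+) - P(+|N)] / [P(+|D) - P(+|N)]."""
    return (reported - false_pos) / (sensitivity - false_pos)

# Knight-study accuracy: sensitivity 0.80, false positive rate 0.02
print(round(actual_rate(0.031, 0.80, 0.02), 3))   # Services: 0.014
print(round(actual_rate(0.202, 0.80, 0.02), 3))   # Wholesale Trade: 0.233

# Referee-suggested NIDA-level accuracy: sensitivity 0.95, false positive rate 0.005
print(round(actual_rate(0.031, 0.95, 0.005), 3))  # Services: 0.028
print(round(actual_rate(0.202, 0.95, 0.005), 3))  # Wholesale Trade: 0.208
```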


reported rate at which this offset will occur (that is, the breakeven rate) we set the actual rate equal to the reported rate in eqn (2), and solve. Thus, substituting P(+) for P(D):

P(+) = [P(+) - P(+|N)]/[P(+|D) - P(+|N)].    (3)

Solving for the breakeven level of P(+), we get:

Breakeven P(+) = P(+|N)/[1 - P(+|D) + P(+|N)].    (4)

In words, the reported rate equals the actual rate only when the reported rate equals the ratio:

(False Positive Rate)/(1 - Sensitivity + False Positive Rate).    (5)
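Ratio (5) is straightforward to evaluate for the accuracy levels used in this paper. The sketch below is our own illustration (the function name is ours), using the Knight, referee-suggested NIDA-level, and Davis accuracy estimates:

```python
def breakeven_rate(sensitivity, false_pos):
    """Eqn (4): the reported rate at which reported and actual rates coincide."""
    return false_pos / (1 - sensitivity + false_pos)

print(f"{breakeven_rate(0.80, 0.02):.3f}")    # Knight accuracy: 0.091
print(f"{breakeven_rate(0.95, 0.005):.3f}")   # NIDA-level accuracy: 0.091
print(f"{breakeven_rate(0.689, 0.013):.3f}")  # Davis accuracy: 0.040
```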

If the reported rate is greater than this ratio, then the reported rate will underestimate the actual rate; if the reported rate is less than this ratio, then the reported rate will overestimate the actual rate. As previously noted, the point where the two rates are equal, and hence at which the reported rate is an unbiased estimate of the actual rate, is called the "breakeven rate."

The implications of this can be seen in Tables 2 and 3. For Table 2, where a false positive rate of 0.02 and a sensitivity rate of 0.8 were utilized, the breakeven rate is 0.02/(1 - 0.8 + 0.02) = 0.091. As can be seen in Table 2, reported rates greater than 0.091 underestimate the actual rate, while lower reported rates overestimate the actual rate. For Table 3, where a false positive rate of 0.005 and a sensitivity rate of 0.95 were used, the breakeven rate is 0.005/(1 - 0.95 + 0.005) = 0.091. The fact that the breakeven rate occurred at the same place was by chance, and would not always be true. For example, if the accuracy rates from the Davis study [9] are used, then the breakeven rate would be 0.013/(1 - 0.689 + 0.013) = 0.0401, which results in most true rates being underestimated.

POLICY IMPLICATIONS AND CONCLUSIONS

As has been demonstrated, reported rates of drug use may overestimate or underestimate the actual rates. These mis-estimates could lead to inappropriate decisions by those charged with public policy formulation, as well as by private and public organizational decision makers. For example, the reported rate for employees in service industries indicates that 1 out of 32 tested positive for drugs. Both organizational policy and public good might dictate routine testing based on these results. Yet, when adjusting this figure to the actual rate, utilizing the Knight accuracy estimates, it is seen that only 1 out of 71 employees may actually have drugs in their urine. This result presents a somewhat different picture of the urgency of implementing routine screening in this industry.

On the other hand, because it is also possible to underestimate the actual rate, the real extent of the problem may be undetected and cost-beneficial measures may not be taken. For example, the actual rate of drug use by Wholesale Trade employees may be up to 15% higher than the reported rate. Although test results in this industry might be felt to signal a problem regardless of mis-estimates, more intensive interventions might be viewed as cost-beneficial if the true rate were known to be higher.

A combination of the two preceding cases might occur if decision makers were allocating funds among communities. Assume two communities with reported rates of drug use of 3 and 12%. If the per-capita allocation of funds were based on the seriousness of the drug problem, then the higher-usage community would receive four times as much money per capita as the lower-usage community. If, however, the reported rates came from labs with Knight-study accuracy, the actual rates of drug use would be 1.3 and 12.8%. If these rates are used, the higher-usage community would receive almost ten times as much money per capita as the lower-usage community.
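The two-community arithmetic above can be checked with the same eqn (2) computation; this sketch is our illustration, using the Knight accuracy estimates:

```python
def actual_rate(reported, sensitivity, false_pos):
    # Eqn (2): P(D) = [P(+) - P(+|N)] / [P(+|D) - P(+|N)]
    return (reported - false_pos) / (sensitivity - false_pos)

low_reported, high_reported = 0.03, 0.12
print(round(high_reported / low_reported, 1))   # 4.0: allocation ratio using reported rates

low_actual = actual_rate(low_reported, 0.80, 0.02)    # about 0.013
high_actual = actual_rate(high_reported, 0.80, 0.02)  # about 0.128
print(round(high_actual / low_actual, 1))       # 10.0: allocation ratio using actual rates
```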
In such circumstances, utilization of reported vs actual rates could result in substantial shifts in fund allocation.

Finally, consider the case where drug-test accuracy levels are increasing over time. This may well be the situation in the United States today, if labs are increasingly adopting NIDA-type procedures. When accuracy is rising over time, then trends in drug use will be misrepresented by test results. Assume, for example, that the true rate of usage in a particular case was 24% in 1990 and had declined to 22% by 1992, a decrease of 8%. Also assume that the tests were conducted in a Knight


lab in 1990 and a NIDA lab in 1992. Thus, utilizing eqn (1), the reported rate in 1990 would have been [(0.24 x 0.8) + (0.76 x 0.02)] = 20.72%. And, the reported rate in 1992 would have been [(0.22 x 0.95) + (0.78 x 0.005)] = 21.29%, an increase of 3% over the 1990 reported rate. Thus, while in truth drug usage was decreasing, the increased accuracy of the test results makes drug usage appear to be increasing! A similar problem would apply in trying to compare test results from cases where the tests are conducted under NIDA-type standards with test results from cases where drug-testing is unregulated. Because public policy makers and employers might be misled by such differences, the importance of adjusting reported rates for test accuracy is clearly evident.

The preceding examples are for illustration only, although they are based on reasonable estimates of drug use and drug testing accuracy reported in the literature. Actual levels of use and accuracy will differ in each situation, so the outcomes in a particular case may be quite different from those illustrated here. Decision makers who are making use of drug test averages should thus also obtain accuracy estimates for the labs conducting their testing, and use this information to estimate the actual drug usage rates for their situation. Also, in those cases where public policy is being influenced by perceived prevalence of drug use, it is important that the reported rates be adjusted to reflect actual rates before decisions are made.

It is useful to note that, although this paper has focused on rates of drug usage, the techniques are equally applicable to any situation where empirical estimates are made, individuals are subject to misclassification, and it is possible to identify false positive and sensitivity rates.
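The 1990/1992 trend example above follows directly from eqn (1); the sketch below is our illustration of that computation (the function name is ours):

```python
def reported_rate(actual, sensitivity, false_pos):
    """Eqn (1): P(+) = P(D)*P(+|D) + [1 - P(D)]*P(+|N)."""
    return actual * sensitivity + (1 - actual) * false_pos

r1990 = reported_rate(0.24, 0.80, 0.02)    # Knight-level accuracy in 1990
r1992 = reported_rate(0.22, 0.95, 0.005)   # NIDA-level accuracy in 1992
print(f"{r1990:.4f} {r1992:.4f}")  # 0.2072 0.2129: reported rate rises while true use falls
```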

REFERENCES 1. D. T. Barnum and J. M. Gleason. Accuracy in transit drug testing: a probabilistic analysis. Trans. Res. Rec. 1266, l&-18 (1990). 2. D. T. Barnum and J. M. Gleason. Determining transit drug test accuracy: the multidrug case. Trans. Res. Rec. 1297, 20-29 (1991). (1987). 3. R. V. Blanke. Quality assurance in drug-use testing. Ciin. Chem. 33, 41B45B 4. R. Blumner and L. Siegel. Where is U.S. drug policy headed? Civil Liberties, p. 5 (Spring/Summer 1991). evaluation and assistance efforts: 5. D. J. Boone, H. J. Hansen, T. L. Hearn, D. S. Lewis and D. Dudley. Laboratory mailed, on-site and blind proficiency testing surveys conducted by the Centers for Disease Control. Am. J. Publ. Hlth 72, 13641368 (1982). 6. D. Burnett, S. Lader, A. Richens, B. L. Smith, P. A. Toseland, G. Walker et al. A survey of drugs of abuse testing by clinical laboratories in the United Kingdom. Ann. Clin. Riochem. 27, 213-222 (1990). I. Carriers question if $300 million tab for random drug tests is worth it. Trafl WId 226(7), 38-39 (1991). More private workers to face drug tests. New York Times, p. 36 (18 December 1989). 8. J. H. Cushman. quality in urine drug testing. JAMA 260, 9. K. H. Davis, R. L. Hawks and R. V. Blanke. Assessment of laboratory 174991754 (1988). 10. R. P. DeCresce, M. S. Lifshitz, A. C. Mazura and J. E. Tilson. Drug Testing in the Workplace. Bureau of National Affairs, Washington, D.C. (1989). random testing needs to be undertaken at the worksite. In Controversies in the Addiction 11. R. L. DuPont. Mandatory Field, Vol. 1 (Edited by R. C. Engs), pp. 105-l 11. Kendall/Hunt, Dubuque, Iowa (1990). should look before they leap onto drug testing bandwagon, experts warn. National Report on Substance 12. Employers Abuse 6(18), l-2 (1992). 13. Federal court upholds ‘system presence’ as company’s standard for drug testing. National Reporr on Substance Abuse 6(7), 1 & 6 (1992). testing in urine under blind conditions: an 14. C. S. Frings, D. J. Battaglia and R. M. White. 
Status of drugs-of-abuse AACC study. C/in. Chem. 35, 891894 (1989). testing in urine: an AACC study. Clin. Chem. 15. C. S. Frings, R. M. White and D. J. Battaglia. Status of drugs-of-abuse 33, 1683-1686 (1987). in employee drug-testing. RISK 2, 3-18 (1991). 16. J. M. Gleason and D. T. Barnum. Predictive probabilities and C. Uihlein. Drug testing in the workplace: a view from the data. William and Mary Law Rev. 17. M. R. Gottfredson 33, 127-145 (1991). 18. E. Gottheil, G. R. Caddy and D. L. Austin. Fallibility of urine drug screens in monitoring methadone programs. JAMA 236, 1035-1038 (1976). Court backs tests of some workers to deter drug use. New York Times, pp. I, 11 (22 March 1989). 19. L. Greenhouse. 20. H. J. Hansen, S. P. Caudill and D. J. Boone. Crisis in drug testing: results of CDC blind study. JAMA 253, 2382-2387 (1985). The unconvincing case for drug testing. Can. publ. Policy-Analyse de Politiques 27, 183-196 (1991). 21. L. E. Henriksson. and NIDA guidance. National Reporf 22. HHS exempts workplace drug testing from CLIA pending further consultation on Substance Abuse 6(19), 1 & 7 (1992). deficiencies’ in Interior Department’s drug testing program. National 23. Inspector General reports ‘serious operational Report on Substance Abuse 7(l), 1 & 6 (1992). 24. S. J. Knight, T. Freedman, A. Puskas, P. A. Martel and C. M. O’Donnell. Industrial employee drug screening: a blind study of laboratory performance using commercially prepared controls. J. Occup. Med. 32, 715-721 (1990).


25. S. J. Knight and C. M. O'Donnell. Proficiency testing of drug analysis laboratories: the author replies. J. Occup. Med. 33, 429-430 (1991).
26. Lab's data show workers, applicants tested positive at lower rate in 1991. National Report on Substance Abuse 6(6), 1 & 6 (1992).
27. L. C. LaMotte, G. O. Guerrant, D. S. Lewis and C. T. Hall. Comparison of laboratory performance with blind and mail-distributed proficiency testing samples. Publ. Hlth Rep. 92, 554-560 (1977).
28. S. E. McNagny and R. M. Parker. High prevalence of recent cocaine use and the unreliability of patient self-report in an inner-city walk-in clinic. JAMA 267, 1106-1108 (1992).
29. Morgan, Lewis and Bockius. State-by-state drug and alcohol testing survey. William and Mary Law Rev. 33, 189-252 (1991).
30. The National Report on Substance Abuse: A Bi-Weekly Newsletter, Vol. 6, Nos. 1-24. Bureau of National Affairs, Washington, D.C. (1991-1992).
31. J. Normand, S. D. Salyards and J. J. Mahoney. An evaluation of preemployment drug testing. J. Appl. Psychol. 75, 629-639 (1990).
32. Omnibus Transportation Employee Testing Act of 1991, Title V of Public Law 102-143, Department of Transportation Appropriations Bill.
33. Prevalence studies help assess need for workplace drug testing. National Report on Substance Abuse 5(23), 6-7 (1991).
34. Private sector testing: Hatch reintroduces legislation to regulate private drug testing. National Report on Substance Abuse 6(1), 1 & 7 (1991).
35. J. Segura, R. de la Torre, M. Congost and J. Cami. Proficiency testing on drugs of abuse: one year's experience in Spain. Clin. Chem. 35, 879-883 (1989).
36. Silverberg v. Department of Health and Human Services, C.A. No. 89-2743 (D.D.C.), as reported in Ref. [49].
37. R. A. Smolla. Proposal for a substance abuse testing act: introduction. William and Mary Law Rev. 33, 5-9 (1991).
38. Speakers call for random testing ban or limits by statute, state courts, or Von Raab reversal. National Report on Substance Abuse 6(24), 1 & 6 (1992).
39. Task Force on the Drug-Free Workplace, Institute of Bill of Rights Law. Proposal for a substance abuse testing program. William and Mary Law Rev. 33, 5-46 (Fall 1991).
40. U.S. Department of Health and Human Services, Alcohol, Drug Abuse and Mental Health Administration. Mandatory guidelines for federal workplace drug testing programs. Federal Register 53(69), 11970-11989 (1988).
41. U.S. Department of Labor, Bureau of Labor Statistics. Survey of Employer Anti-drug Programs. Report 760, Washington, D.C. (1989).
42. U.S. Department of Transportation. Procedures for transportation workplace drug testing programs; interim final rule. Federal Register 53(224), 47002-47021 (1988).
43. U.S. General Accounting Office. Drug Testing: Management Problems and Legal Challenges Facing DOT's Industry Programs (GAO/RCED-90-31). Washington, D.C. (1989).
44. U.S. General Accounting Office. Employee Drug Testing: DOT's Laboratory Quality Assurance Program Not Fully Implemented (GAO/GGD-89-80). Washington, D.C. (1989).
45. C. Visher and K. McFadden. A Comparison of Urinalysis Technologies for Drug Testing in Criminal Justice. National Institute of Justice, Washington, D.C. (1991).
46. The White House. National Drug Control Strategy. U.S. Government Printing Office, Washington, D.C. (1989).
47. J. F. Wilson, J. Williams, G. Walker, P. A. Toseland, B. L. Smith, A. Richens and D. Burnett. Performance of techniques used to detect drugs of abuse in urine: study based on external quality assessment. Clin. Chem. 37, 442-447 (1991).
48. E. D. Wish. Preemployment drug screening. JAMA 264, 2676-2677 (1990).
49. K. B. Zeese. Drug Testing Legal Manual, Release #7. Clark Boardman Callaghan, New York (1992).
50. C. Zwerling, J. Ryan and E. J. Orav. Costs and benefits of preemployment drug screening. JAMA 267, 91-93 (1992).
51. C. Zwerling, J. Ryan and E. J. Orav. The efficacy of preemployment drug screening for marijuana and cocaine in predicting employment outcome. JAMA 264, 2639-2643 (1990).