Evaluation and Program Planning, Vol. 5, pp. 319-326, 1982. Printed in the U.S.A. All rights reserved.
Copyright © 1983 Pergamon Press Ltd
ARCHIVAL DATA IN PROGRAM EVALUATION AND POLICY ANALYSIS

JAMES W. LUCKEY
University of North Carolina

ANDY BROUGHTON
Hutchings Psychiatric Center

JAMES E. SORENSEN
University of Denver
ABSTRACT

Evaluators have typically avoided using existing data sets, choosing instead to collect their own information to maintain control over both the content and quality of the data. As demands on action researchers increase without provision for additional resources, primary data collection may become a luxury and the use of archival data may increase. Though archival data has practical and methodological advantages, there are limitations associated with the utilization of such information. The general problems with secondary data sources include the accuracy, acceptability and accessibility of the information. Following a discussion of these general problems, an example from the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) reimbursement data set is presented to illustrate specific difficulties with using archival data for evaluation research. Strategies are then presented for minimizing these difficulties, including determining the feasibility of utilization of such data sources, methods for assessing their accuracy and factors to consider in the data acquisition process.
Requests for reprints should be sent to James W. Luckey, Department of Health Administration, University of North Carolina at Chapel Hill, 263 Rosenau Building 201H, Chapel Hill, N.C. 27514.

Accountability through the evaluation of human service programs has received increased emphasis in recent years. The social security, health, mental health and rehabilitation acts, among others, have either mandated or strongly urged built-in evaluation systems. Concurrent with these requirements has come increasing use of data-based support by policy analysts. Despite the growing overlap in function between ongoing data systems, evaluation research and policy analysis, gaps still remain among the three. One method to reduce these gaps has been the design of management information systems with the goal of evaluation specifically in mind (e.g., Chapman, 1976; Sorensen & Elpers, 1978). Another strategy has been retrospective analysis of existing data bases. This article focuses on the potential problems of the latter approach.

Evaluators historically have avoided using archival data, preferring instead to collect their own data to insure control over the content and process of data collected. However, there are some real advantages to using existing data sources. The most obvious, of course, are the potential savings in time, money and effort achieved by sidestepping the original data collection process. These considerations are likely to make archival data more attractive to evaluators as demands for analyses increase and resources decrease. Archival data also has some methodological advantages, one being that the information is generally non-reactive to the specific purpose of the present evaluation (Webb, Campbell, Schwartz, & Sechrest, 1966).
Another is that archival data are often available for extended periods of time and for a variety of populations. Depending on the nature of the data and the ingenuity of the analyst, either of these may allow for a variety of quasi-experimental designs in answering evaluation and policy-related questions (Campbell & Stanley, 1963; Cook & Campbell, 1979). Although strong arguments have been marshalled for the use of randomized experiments in evaluation (e.g., Boruch, 1976; Apsler, 1977), prospective, randomized experiments may not always be possible because of ethical considerations, or feasible because of resource or time constraints. Often the evaluator or policy analyst does not enjoy the luxury of enough time to produce research results to influence the decision-making process. The only alternative is to utilize data that are already available.

This article first identifies types of problems encountered in using archival data for policy analysis and evaluation. Following this general discussion, an example will be presented to illustrate specific difficulties and potential pitfalls of using existing data sets. While other authors such as Weiss (1974) have discussed the inadequacies of existing data sets, this article goes beyond a description of these potential problems and will present a series of strategies for anticipating, assessing and overcoming problems with archival data.

GENERAL PROBLEMS WITH ARCHIVAL DATA

Evaluations using archival data risk severe limitations. The general considerations in using an existing data collection system not specifically designed with evaluation in mind include the appropriateness, accuracy and accessibility of the information contained in the system.

Appropriateness

The primary concern about archival data is appropriateness for the particular evaluation effort. Appropriateness embodies both the type and form of the information available. When evaluators have the luxury of collecting their own data, the type and format can be tailored to the purpose of the evaluation. When existing data are used, however, there is the temptation to tailor the evaluation to the data.

Purpose of Data Collection. The appropriateness of the existing data is frequently problematic because the original rationale for collecting the data was different from the reason for the current evaluation. Data collected by fiscal intermediaries to document reimbursement for accounting reports are shaded by the reimbursement purpose (e.g., some services qualify for reimbursement while others, although equally acceptable in some fields of practice, do not). Data collected for research purposes are usually free of this possible contamination. In general, the less congruence between the two purposes, the less likely the data will be of real use in an evaluation. The evaluator is then faced with a decision of altering the evaluation, generating new data or scrapping the evaluation effort. Given the external pressure for accountability, the first option frequently is tempting.

Quantity-Oriented Data Bases. Appropriateness of the information is a common problem where data collected by governmental agencies emphasize documenting the quantity of effort (e.g., number and kinds of persons served). Such information may have some use in process evaluations but is usually of dubious value in outcome evaluations without the use of questionable assumptions. For example, following the introduction of a new law intended to protect the rights of involuntarily admitted patients, psychiatric hospital utilization patterns were examined (Luckey & Berman, 1979). The average length of stay decreased following this intervention and could be interpreted as an indication of successfully decreasing infringement on the rights of patients. However, an alternative conclusion could be a "revolving door" phenomenon exchanging a few longer hospitalizations for many short ones with more frequent disruptions in the patient's life. This latter conclusion was supported by a significant increase in the number of readmissions. Utilization patterns support both interpretations; an assessment of the desirability of these changing patterns of care could not be determined by this information alone.
Data Format. A related but less obvious difficulty has to do with the form of data collection or storage. Information may be aggregated, for example, across time, programs, geographic units, or in other ways different from those required for evaluation. In the CHAMPUS study cited later, for instance, the unit of interest for the evaluation project was the inpatient psychiatric admission, while the data had been collected and stored as provider reimbursement claims.

Accuracy
Accuracy of the archival data includes the quality of the data (reliability and validity) and the significance of the information.

Reliability. Quality of the data is a major concern in a large data collection system. One aspect is mechanical. The greater the number of discrete steps involved in going from original source to data analysis (i.e., collection, coding, keying, etc.), the greater the potential for the introduction of error into the system. When using archival data, the user has no control over the reliability of the information in the system. The only alternative is to check on the system to arrive at some estimate of the accuracy of the various steps.
Validity. A second aspect of quality is the validity of the data. A problem arises from both the number of levels involved in the data acquisition process and the lack of control the evaluator has over the system. Generally, the protocol for the data is provided by the administrative branch of the organization (i.e., data management personnel) while the information in the system is generated at the program level. Congruence between these levels in the perception of the meaning of a particular piece of information is important, especially when the evaluator has to rely on the description provided by the data management personnel. Factors other than differing perceptions of the meaning of the data also impinge on validity, such as response bias, perceptions of how the information is to be used, stability of the characteristics included, the number of possible alternatives for a given item and the level of abstraction of the information. The evaluator may be easily seduced into relying on the view of the immediate source of the information, the data system personnel, which may or may not correspond with the view of the people at the program level.

Significance. Concerns about the quality of the data relate to the data acquisition and storage process and are internal to the system. Concerns about the significance of the data, on the other hand, are evaluation based. One frequent limitation of archival data is the availability of only a small amount of information for a large population of cases, which sometimes leads the evaluator to impute more significance to the data than is justified, a problem similar to that of operational definitions in traditional experimental designs. In mental health data systems, for instance, one almost universally available item is diagnosis. In the absence of additional information, diagnosis may be equated with either severity of illness or level of functioning. Using diagnosis as a surrogate measure for severity or functioning would attach unrealistic significance to the available data.

Time Span. The longer the time period of the evaluation, the greater the concerns with the reliability and validity of archival data. Because of growth of the data system and turnover in personnel, both common occurrences, the passage of time can increase the number of people involved at both the program and data management levels. Staff turnover tends to increase the possibility of problems with both the reliability and validity of the data. Long time spans also increase the possibility of system-wide changes, either through refinements of the data management system, abrupt changes in the system reporting requirements or alteration of the meaning of an item through changes in factors external to the system (e.g., changes in the diagnostic system). While current personnel should be aware of the present status of the system, historic changes in instrumentation pose potential problems. Experience with state hospital admission data provides an example of the problems with changes in instrumentation over time. One key variable was the type of commitment used to admit a patient. Since commitment laws had been changed three times over the ten-year data period, coding schemes were suspected to have also changed. This suspicion was bolstered by a visual examination of the data revealing discontinuities in the commitment codes coincident with the legal changes. It was only when these observations were made that the data management personnel could be asked to search their files; eventually they were able to locate documentation on prior codes. Only with this additional information could valid conclusions be drawn.
Understanding the Program. A major difficulty with evaluators relying solely on archival data is the potential for an evaluation to become an exercise in the manipulation of large numbers of numbers without a real understanding of the program being evaluated. Basing perceptions only on official program descriptions or administrative viewpoints can result in a limited and biased view of the actual program objectives and operations.

Accessibility

While archival data has already been collected, existence does not assure accessibility. One issue is the confidentiality, or right to privacy, of the individual's information in a system. In general, access to most existing data systems requires removal of all identifying information for individuals. Tracking of individuals through the system may not be possible (e.g., matching repeat episodes for the same person) and matching the data at the individual level with other data sources may be nearly impossible. A frequent method used to assure confidentiality is for the system to provide only aggregate data to the evaluator. Checking the reliability and validity of the information in aggregate form is more difficult, however. In addition, aggregated data can mask important information and prevent an examination of any sub-aggregate trends.

Political Barriers. While not unique to archival data, negative exposure from an evaluation is a further barrier in gaining access to archival data.
Political aspects of evaluation have been widely discussed in the literature (e.g., Downs, 1971; Rossi, 1972). Refusal of access to data (often under the guise of confidentiality) can be a hidden reaction to this threat.
Hardware and Software Barriers. The final problem involved in accessibility is the electromechanical and software aspects of the system involved. Compatibility and capability of the varying systems used to collect and analyze the data are potential problems, since many existing data sets have massive dimensions by social science standards.

CHAMPUS - AN EXAMPLE
The Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) is a reimbursement system for health care services for both dependents of military personnel and those retirees who do not yet qualify for Medicare. Our evaluation experience with the CHAMPUS program focused on efforts to contain inappropriate utilization of mental health services through concurrent peer review. CHAMPUS has extensive mental health coverage and provides for both inpatient services with minimal copayment and almost unlimited outpatient services. Because of this extensive coverage, CHAMPUS has been offered by some as a model or prototype for the inclusion of mental health services for all carriers including any proposed national health insurance. However, increasing costs and reports of abuse raised concerns about such extensive coverage. The CHAMPUS data system is enlightening for two reasons. First, it provides specific examples of the types of problems encountered with utilization of existing data systems for research. Second, these problems raise concerns about policy recommendations based on descriptive information from this system which has appeared in recent literature, particularly those about mental health benefits (e.g., Dorken, 1976; 1977; 1980).
Concurrent Peer Review Project

The CHAMPUS example is based on an evaluation of two concurrent peer review demonstration projects. Though the focus was on beneficiaries diagnosed as schizophrenic, the comparison group covered all mental health diagnoses (Note 1). The experience with the CHAMPUS data base was broad-based since it cut across all mental health diagnoses, inpatient and outpatient services and four different locations in three states over a five-year period (FY 1974-1978). The initial data request made to CHAMPUS was for all claims, physical and mental health, for all those who received psychiatric care during the period of study. The result was 11 computer tapes with some 1.8 million claims. Clearly, an initial accessibility problem was the size of the data set and the possibility of consuming large amounts of resources just to manipulate it.

The problems encountered were not a result of poor cooperation by CHAMPUS staff. Both data processing and managerial personnel were extremely helpful, providing information about the data system and CHAMPUS procedures. They facilitated access to other sources of information, offered useful suggestions and provided validation for many of our observations.

Data Set Problems

Several technical difficulties were experienced with the data set. Foremost, the system was designed as an accounting system for reimbursement of insurance claims. This raised concerns about the appropriateness of using this system for an evaluation, both because of data format problems and because the system was being used for purposes other than those for which it was designed. Second, the CHAMPUS data set was a compilation of many sets of similar information originating from several sources, often using varying coding schemes. Variations arose because CHAMPUS used a system of insurance companies as fiscal intermediaries (FIs) and did not reimburse claims directly. Rather, claims were forwarded to the FI by the provider; the FI codes, keys, processes, and forwards the data to CHAMPUS in periodic batches. Because of the number of subsystems involved in the data collection process over the period of study, the accuracy of the data was a major concern. Third, the system of making adjustment entries was in transition during the period of study, which also raised concerns about the accuracy of the data over time. Finally, the size of the data set presented some accessibility problems. Resource constraints required careful planning to avoid depleting the entire computer budget just by extracting the necessary data.
Faulty Claims Data. Evaluation of the effects of peer review focused on utilization and reimbursement data. Even such straightforward variables presented difficulties. The first problem was the determination of what claims to include in the analysis (i.e., distinguishing original claims from adjusting entries for those claims). Because the system was designed for accounting purposes, an auditing trail was required. Initially, any adjustments for a claim were entered without deleting the original claim, but CHAMPUS changed the method of entering adjustments midway through the study period. Visual inspection of the data suggested adjustments were not always clearly identified. This lack of identification required a set of decision rules to exclude those claims which appeared to be adjustments only, or to include only the relevant portion of the adjustment. Failing to examine the raw information visually, or having only aggregated data, would have resulted in overlooking this problem and in double or possibly triple counting of values for an episode, thereby inflating both reimbursement and utilization values.
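To make the double-counting risk concrete, the sketch below (in Python) shows one way such decision rules might be expressed. The record layout, the field names (claim_id, entry_type, amount) and the rule that a lone adjustment is dropped are hypothetical illustrations, not the rules actually used in the CHAMPUS study.

```python
# A minimal, assumed record layout: one dict per claim line.
def net_claims(records):
    """Collapse original claims and later adjusting entries for the same
    claim into a single net record, so reimbursement and utilization
    values are not double- or triple-counted."""
    by_claim = {}
    for rec in records:
        claim_id = rec["claim_id"]
        if rec.get("entry_type") == "adjustment":
            # Apply the adjustment to the original claim if it is present;
            # a lone adjustment with no matching original is dropped.
            if claim_id in by_claim:
                by_claim[claim_id]["amount"] += rec["amount"]
        else:
            by_claim[claim_id] = dict(rec)
    return list(by_claim.values())

claims = [
    {"claim_id": "A1", "entry_type": "original", "amount": 900.0},
    {"claim_id": "A1", "entry_type": "adjustment", "amount": -150.0},
    {"claim_id": "B7", "entry_type": "original", "amount": 400.0},
]
print(net_claims(claims))   # A1 nets to 750.0; B7 stays at 400.0
```

Summing the file without such rules would credit claim A1 with both the original and the adjusting entry, which is exactly the inflation described above.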
Unit of Analysis. Another major acceptability problem was an incompatibility between the data collected by CHAMPUS and the unit of analysis desired for evaluation. Inpatient admissions was the desired unit for evaluation, but as a reimbursement system, CHAMPUS works with individual claims. The number of claims involved for a given admission varied with the length of time the patient was in the hospital and the billing procedures used by the facility. Again, it was visual inspection of the raw claim data which revealed the difficulties entailed in creating an admissions file. For most claims, the day portion of the date was missing, thereby precluding an exact determination of whether any two claims were contiguous (i.e., for the same admission). Two other variables in the data set could be used to define an admission, but in many cases the two contradicted each other. Once identified, the resolution of the admission file dilemma came from information external to the data set. This information was obtained during week-long visits to the demonstration project sites included as part of the evaluation procedure. The visit to one location uncovered utilization data collected independent of the CHAMPUS system. Though this manual system was insufficient for the evaluation, it did serve as a criterion to assess the relative accuracy of the two possible methods of generating admissions data. Use of one variable clearly minimized discrepancies between the local information and the CHAMPUS data set, though differences remained. Without careful scrutiny of the raw data, the discrepancies in the data would not have been discovered. An arbitrary choice between the two variables to create an admissions file had a 50% chance of generating erroneous information.
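The admissions-file dilemma can be illustrated with a small, hedged sketch. The two candidate variables are given invented names (episode_code, authorization_no), and the claims and the manual criterion count are invented for the example; the point is only that two plausible definitions of an admission can yield different utilization figures, and that an external criterion is needed to choose between them.

```python
def count_admissions(claims, key):
    """Count distinct admissions per patient under one candidate definition."""
    episodes = {(c["patient"], c[key]) for c in claims}
    return len(episodes)

# Month-level dates only, as in the CHAMPUS file; identifiers are hypothetical.
claims = [
    {"patient": "P1", "month": "1976-03", "episode_code": "E1", "authorization_no": "A9"},
    {"patient": "P1", "month": "1976-04", "episode_code": "E1", "authorization_no": "A9"},
    {"patient": "P1", "month": "1976-07", "episode_code": "E2", "authorization_no": "A9"},
]

manual_count = 2  # criterion taken from the independent, manually kept records
for key in ("episode_code", "authorization_no"):
    print(key, count_admissions(claims, key), "admissions; manual criterion:", manual_count)
# episode_code yields 2 admissions and matches the manual records;
# authorization_no collapses both stays into 1 and would understate utilization.
```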
Other Problems. Another difficulty resulted from the use of five FIs in the CHAMPUS data subset. There were different FIs across the provider locations and also changes in FIs over time. These variations created difficulties in identifying patients and facilities involved in the review, since each FI used its own coding scheme for certain items. For example, inspection of the raw data revealed that one FI used a non-numeric coding scheme for age. Failure to detect this coding would have biased the sample of patients included. Also, knowledge of the number of eligible beneficiaries in each location would have been useful, but CHAMPUS did not have this information; the number of military personnel at each site had to be used as a surrogate measure. Both scrutiny of the raw data and site visits were crucial in uncovering data set problems. In addition to the discovery of a validation data source, the site visits yielded invaluable information about the various procedures used by the hospitals for filing claims, the methods used by the FIs for processing claims and, most important, a detailed view of the scope, purpose and functioning of the demonstration projects. CHAMPUS serves as a useful example of archival data problems, including problems of size, control, compatibility, purpose of the data system and changes over time. Further, visual inspection of the data and site visits led to the discovery of problems and to methods of addressing them.

STRATEGIES FOR DEALING WITH ARCHIVAL DATA

Despite limitations, archival data can be used for evaluation, managerial or policy decisions. Valuable information does exist and has been either underutilized or ignored. Caution and common sense are required, and several strategies are useful in coping with the limitations of archival data. Experiential hindsight can be an asset to future forays into archival data.

Assessment of the Data Source

The first commandment in working with archival data is "know thy data." A continual learning process begins at the start of an evaluation and continues until the final report. One striking feature of working with data collected by others is the occurrence of "insights" about the information. To keep last-minute surprises and traps to a minimum, a reasonable understanding of the data is required before becoming committed to the project.

View of Data System. A cross-sectional view of the data system is needed early to effectively assess the appropriateness and accuracy of the data. Objective measures may not be possible at this time, but subjective impressions from a variety of sources can provide insights into the operation and acceptability of the data system. A useful first step is to obtain copies of official documents on the data system, including all forms on which the data are collected, key codes used for punching the data, definitions of variables, training manuals and other information related to the system.
Besides the official documentation describing the system, early exposure to the data itself is highly desirable. Old printouts of aggregated data (monthly reports, for example) identify the information included and may yield some insight into the utility and/or accuracy of the data. These reports can also provide an indication of the time lag involved in data processing by comparing the time of the event reported with the date of the report. Such reports may also provide the evaluator with an intuitive check on the data (i.e., do the figures make sense?); several monthly reports can form an initial cross-sectional check on validity and reliability. Similar checks can be done if other documents containing the same information, collected independently, are available.

Data Manager Perceptions. Besides documentation and sample data, another important variable is the perception of the personnel involved. Higher level management can provide a view of how the system is intended to work; the data processing personnel are more likely to have detailed knowledge of the actual functioning of the system and to be sources of information on weaknesses in the system. Their knowledge of the foibles of the system will provide a view of both the reliability of the overall system and the trustworthiness of individual items.

Key Decision-Maker Perceptions. The initial assessment of the system should also include the perceptions of key decision-makers in the organization. If the evaluation is to result in the implementation of changes, the credibility of the data base should be tested early. Frequently, unpopular evaluation results are attacked on methodological grounds (Rossi, 1972), but a similar strategy can be criticism of the data source. The involvement of the decision-makers at this stage reduces the likelihood of the latter kind of criticism.
Feasibility. The appropriateness and accuracy, as well as the accessibility, of the data are to be considered in assessing a potential data source. The simplest part of the accessibility question is purely mechanical: compatibility of machines (e.g., tape density), size of the data set, the form in which the data can be released and the type of information to be included. The more difficult aspect of assessing the accessibility question is political in nature because of the potential threat of negative exposure resulting from evaluation efforts. For an outside evaluator, a long-term negotiation process is required to address issues such as the purpose of the evaluation, who can release the results, and who is to pay for the data extraction process. The unique aspect of archival data is the introduction of an additional agent in the data collection process. Collecting one's own data generally implies control over the timetable. Requesting data from others places the evaluator on the provider's timetable, with the data extraction often being done as an addition to the provider's regular workload. Depending on the size of the request and the workload of the facility, considerable time delays may ensue.

Working Relationship. The preliminary assessment of the data source is a first step; if the data appears acceptable and accessible, a detailed assessment of the system is advisable. Since the cooperation of a variety of people will be required, building relationships with data processing personnel becomes critical. Often these personnel may feel their efforts are not fully appreciated because of under-utilization of the data in the past, and a request for data creates an additional workload for them. A good working relationship is invaluable, particularly when a major stumbling block with the data is encountered after it has been acquired. Because of their day-to-day knowledge of the system, they are the ones most likely to have the solution.
Data Acquisition

A thorough preliminary assessment improves the acquisition process. If one has to make early decisions about data and format (especially with a large data set requiring substantial data processing and long lead times), obtaining the raw data or major subsets is highly desirable. Advantages include continued assessment of the data and flexibility in the design and form of the evaluation. Independent sources of the same information (e.g., manual records) may be discovered after the data is in the hands of the evaluator but may cover only a subset of the evaluation data source. If the information from the original data system is aggregated so that a subset is not extractable, using other discovered information as a check on the data is not possible. In addition to external checks, a visual review of the raw data can be enlightening. Missing data or unexplained discontinuities or shifts over time will often raise important questions about data processing, recording methods or programmatic changes.
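One way such a review might be operationalized is sketched below, under the assumption of a simple flat extract file; the field names, file name and CSV layout are hypothetical, not a description of the CHAMPUS tapes.

```python
# Profile missing values and month-to-month volume shifts in a raw extract.
import csv
from collections import Counter

def profile(path, date_field="service_month"):
    missing = Counter()
    volume = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            volume[row.get(date_field, "unknown")] += 1
            for field, value in row.items():
                if value in ("", None):
                    missing[field] += 1
    print("Records per month (look for sudden jumps or gaps):")
    for month in sorted(volume):
        print(f"  {month}: {volume[month]}")
    print("Missing values by field:")
    for field, n in missing.most_common():
        print(f"  {field}: {n}")

# profile("champus_extract.csv")   # hypothetical extract file
```

A sudden drop in monthly volume, or a field that is missing for one year only, is exactly the kind of discontinuity that should prompt questions to data management personnel.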
Data Type and Format. The size of the data set, confidentiality or other practical considerations may require decisions about the type and form of the information extracted. The ideal situation is to obtain all information in raw form. If a decision is necessary to limit acquisition to some subset of the available variables, the two key criteria are usefulness to the evaluation effort and the accuracy of the information, both reliability and validity. A preliminary assessment of the data system may provide a tentative estimate of the accuracy of individual items. The level of abstraction involved in a particular item may also be a determinant. For instance, basic demographic information tends to be more accurate than some global measure (e.g., sex vs. level of functioning), but there is a trade-off because, generally, the more subjective information tends to be more useful.
Timing. A final consideration in the acquisition process is timing. In addition to the lead time required for data processing personnel to honor the evaluator's request, time is required for data to be processed through the information system. Depending on the size of the system and the number of steps involved, the time lag between the event and the final processing of information about the event may be a few days or several months. A data set must be complete for the time frame desired at the time of data acquisition by the evaluator to avoid bias because of a selection artifact. Suppose, for example, length of psychiatric hospitalization was being assessed for all patients admitted to a facility during a given calendar year. Acquiring the data the following February will skew the distribution. For those admitted early in the year, discharge information will be available for almost a full year; but for those admitted in December, discharge data will be available only for those who left the hospital within 60 days of their admission. The longer lengths of stay for those admitted late in the year will not be included because the patients still remained in the hospital.
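The selection artifact can be shown with a few invented admission and discharge dates; the sketch below assumes a February 1 extraction date and is purely illustrative.

```python
from datetime import date

# Hypothetical admissions: (admission date, eventual discharge date).
admissions = [
    (date(1976, 1, 10), date(1976, 6, 20)),   # discharged after 162 days
    (date(1976, 12, 5), date(1977, 4, 15)),   # still hospitalized at extraction
    (date(1976, 12, 20), date(1977, 1, 15)),  # short stay, already discharged
]
extraction_date = date(1977, 2, 1)

# Only stays completed by the extraction date appear in the acquired file.
observed = [
    (discharge - admit).days
    for admit, discharge in admissions
    if discharge <= extraction_date
]
print("Observed mean stay:", sum(observed) / len(observed))   # 94.0 days

complete = [(d - a).days for a, d in admissions]
print("True mean stay:", sum(complete) / len(complete))       # about 106 days
```

The December admission with the long stay is censored out of the extract, so the observed average understates the true length of stay for the year.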
Site Visit/Case Studies

Up to this point, the discussion has focused on the end point of the system, the data processing division. To perform a thorough assessment of the data, and to understand what significance should be placed on any results of the evaluation, an evaluator should consider the organizational level of the individual program plus all administrative and data processing levels between the individual program and the focus of the evaluation.

While the evaluation literature urges evaluators to understand the workings of a program being evaluated, this requirement is often overlooked, especially in large-scale programs with a variety of levels involved. Site visits to all or a selected sample of programs can lead to an understanding of the perceptions of purpose, operation, scope, origins and outcome of the program at each level. Official documentation provides a view of only the head of the elephant; one has to consider the legs also (i.e., that which makes it move).

Besides a process evaluation, site visits provide an additional opportunity to assess the quality of the data. One method is to physically follow the information through the system. Insights emerge by talking to the people who filled out the forms and assessing what the information means to them. Questions may focus on unavailable data, timing problems, and the importance or meaninglessness of data. This simple-minded approach of following the form through each step of the system will often provide more insight into the accuracy and meaning of the data than sophisticated and expensive reliability and validity studies. Independent auditors (such as Certified Public Accountants, CPAs) often use the foregoing approach in evaluating the internal controls operating to insure the accuracy and completeness of information produced by an information system. An inexpensive and quick check on the reliability of the system and its time lag is to feed several test cases into the system and then monitor the speed and accuracy of the output for those cases. The test-case approach and tracing single transactions throughout the entire system are popular techniques with independent auditors as well.
CONCLUSIONS

The use of archival data can be important to program evaluators and policy analysts. Through an empirical example, potential problems are outlined, including safeguarding the confidentiality of the data and the appropriateness, accuracy, and accessibility of the data. Strategies based on a full knowledge of the program and its data system are presented as possible ways of addressing these problems.

Final evaluation reports should document the problems encountered and the strategies used to cope with them. An explicit statement should be made if, for example, specific variables were not used in the evaluation because of concerns about their accuracy. For the variables used, similar statements should be offered if there is either objective information or subjective impressions about their relative accuracy. These statements will allow the report reader to weight various results and conclusions appropriately. A detailed assessment of the acceptability and accuracy of various pieces of information may also serve as an impetus for the improvement of the data system, facilitating future evaluation efforts.

REFERENCE NOTE

1. SORENSEN, J. E., ZELMAN, W. N., BROUGHTON, A., CLOW, H. K., LUCKEY, J. W., MEILE, R. L., & YOUNG, E. H. CHAMPUS experience with concurrent peer review: Case studies, utilization and cost-effectiveness analysis. Department of Defense Contract #MDA906-80-C-0003. National Institute of Mental Health Contract #278-78-4078 (OD). March, 1980.
REFERENCES

APSLER, R. In defense of the experimental paradigm as a tool for evaluation research. Evaluation, 1977, 4, 14-18.

BORUCH, R. On common contentions about randomized field experiments. In G. Glass (Ed.), Evaluation studies review annual: Volume 1. Beverly Hills: Sage Publications, 1976.

CAMPBELL, D. T., & STANLEY, J. C. Experimental and quasi-experimental designs for research. Chicago: Rand-McNally, 1963.

CHAPMAN, R. L. The design of management information systems for mental health organizations: A primer. (DHEW Publication No. ADM 76-333). Washington, D.C.: U.S. Government Printing Office, 1976.

COOK, T. D., & CAMPBELL, D. T. Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally, 1979.

DORKEN, H. CHAMPUS ten-state claim experience for mental disorder. In H. Dorken and Associates, The professional psychologist today: New developments in law, health insurance and health practice. San Francisco: Jossey-Bass, 1976.

DORKEN, H. CHAMPUS ten-state claim experience for mental disorder: Fiscal year 1975. American Psychologist, 1977, 32, 697-710.

DORKEN, H. Mental health services to children and adolescents under CHAMPUS: Fiscal year 1975. Professional Psychology, 1980, 11, 12-14.

DOWNS, A. Some thoughts on giving people economic advice. In F. Caro (Ed.), Readings in evaluation research. New York: Russell Sage Foundation, 1971.

LUCKEY, J. W., & BERMAN, J. J. Effects of a new commitment law on involuntary admissions and service utilization patterns. Law and Human Behavior, 1979, 3, 149-161.

ROSSI, P. H. Booby traps and pitfalls in the evaluation of social action programs. In C. Weiss (Ed.), Evaluating action programs: Readings in social action and education. Boston: Allyn and Bacon, Inc., 1972.

SORENSEN, J. E., & ELPERS, J. R. Developing information systems for human service organizations. In C. C. Attkisson, W. A. Hargreaves, M. J. Horowitz, and J. E. Sorensen (Eds.), Evaluation of human service programs. New York: Academic Press, 1978.

WEBB, E. J., CAMPBELL, D. T., SCHWARTZ, R. D., & SECHREST, L. Unobtrusive measures. Skokie, Ill.: Rand McNally, 1966.

WEISS, C. Evaluation research. New York: Prentice-Hall, 1974.