Journal of Vocational Behavior 13, 252-254 (1978)

Evaluating Vocational Interventions

GARY D. GOTTFREDSON
Johns Hopkins University

Increasing numbers of manuscripts submitted to the Journal of Vocational Behavior report attempts to assess the effects of tests, counseling, or other treatments on clients. This evidence of increased interest in evaluation research is to be welcomed. For too many years practitioners who attempt to help people with career planning, educational choices, job change, and problems of vocational adjustment have simply assumed that their interventions have beneficial effects. Evaluation research gives us the prospect of learning which of the available interventions promote the most positive outcomes. Perhaps more important in the long run, however, evaluation can also provide clues about ways to design more powerful interventions and lead to improvements in vocational theory.

The primary purpose of this editorial is to encourage more high-quality evaluations of vocational interventions. The recent account of experimental and quasi-experimental designs for research given by Cook and Campbell (1976) is a valuable resource that vocational researchers can use to guide the development of experimental and quasi-experimental research plans. That account also reiterates that experiments in which subjects are randomly assigned to alternative treatments or to treatment and control conditions are usually superior to the variety of quasi-experimental designs. Researchers interested in evaluating vocational treatments, however, are sometimes reluctant to employ randomization in their work. Consequently, a secondary purpose of this editorial is to suggest that this reluctance is rarely justified in research on common vocational interventions.

One common set of quasi-experimental designs used to assess the effects of vocational interventions involves the provision of a treatment believed to be beneficial to individuals seeking help and the comparison of the later status of these persons with the status of similar individuals who did not seek treatment and were not treated. A number of variations on this strategy appear in the literature, but the results of all such quasi-experiments will be open to the interpretation that they are due to differences between the two groups (such as differences in the desire to do something about one’s career, among others) rather than to the treatment itself.
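To make the selection problem concrete, consider a minimal simulation (an illustration added here, not drawn from any study cited in this editorial). The treatment in this sketch has no effect at all; clients’ motivation drives both their help-seeking and their later outcomes, yet the naive treated-versus-untreated comparison makes the treatment look beneficial.

```python
# Hypothetical illustration: a treatment with NO true effect looks
# effective when help-seekers are compared with non-seekers, because
# the two groups differed in motivation before any treatment occurred.
import random

random.seed(1)

def simulate_client():
    """One client: motivation drives both help-seeking and the outcome;
    the treatment itself contributes nothing."""
    motivation = random.gauss(0, 1)
    seeks_treatment = motivation > 0           # more motivated clients seek help
    outcome = motivation + random.gauss(0, 1)  # no treatment term at all
    return seeks_treatment, outcome

clients = [simulate_client() for _ in range(10000)]
treated = [y for sought, y in clients if sought]
untreated = [y for sought, y in clients if not sought]

mean = lambda xs: sum(xs) / len(xs)
print(f"treated mean outcome:   {mean(treated):.2f}")    # about +0.8
print(f"untreated mean outcome: {mean(untreated):.2f}")  # about -0.8
# The gap is pure selection bias: the true treatment effect is zero.
```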

The stimulating report by Smith and Glass (1977) of their examination of a large number of evaluations of psychotherapy and counseling illustrates a reason for concern about the quality of experimental designs for evaluating treatments. One result found in that report was that the less reactive the outcome measure and the greater the internal validity of the experimental design in an investigation, the smaller the estimated size of the treatment effect. Although reliable, the differences associated with research rigor were not large in the Smith and Glass sample of published studies and dissertations employing control groups (see Glass, 1978; Mansfield & Busse, 1977), but a similar analysis of studies in another research area (Astin & Ross, 1960) supplies further reason for concern.

To make progress in learning what kinds of vocational interventions are most useful we must be able to have confidence in the interpretations of our experimental results. In practical terms this means more experiments with random assignment to treatments. Successful implementation of quasi-experimental designs (Cook & Campbell, 1976) that rule out most of the plausible alternative interpretations of the research outcomes is frequently possible. Quasi-experiments will often be necessary, well executed, interpretable, and useful. Among other virtues, they sometimes provide a way to avoid intrusive or reactive procedures and to promote external validity. There is no intent here to disparage these virtues. But such research requires a great deal of thought and work on the part of the experimenter. In many cases, a convincing quasi-experiment would be much more elaborate than an even more convincing true experiment.

Why do vocational researchers sometimes not randomize? The primary reason seems to be a feeling that it is inappropriate to deny any subjects access to treatment that they desire. Implicit in this rationale is the assumption that withholding treatment deprives persons of a benefit. Now, the question of whether the treatment provides a benefit, no benefit, or harm is usually the question the research is intended to answer. Strupp and Hadley (1977) have argued that therapeutic interventions may sometimes have negative effects, and Cole (1977) has outlined a perspective suggesting that vocational interventions may have some negative outcomes. Furthermore, the absence of evidence to the contrary for some treatments makes it reasonable to assume that they may be ineffective. The social imperatives in evaluating interventions are well described by Eisenberg (1977), who uses the history of medical practices, including the once widely accepted practice of bloodletting, to illustrate that the failure to evaluate treatments rigorously has ethical consequences that can rival the consequences of denying treatment. Until we have better grounds for believing that we actually deny a control-group subject a substantial benefit, we do not have serious ethical worries in the case of many (not all) true experiments likely to be published in this journal.
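As an illustration of how little machinery the assignment step of a true experiment requires, the following sketch (the client names and condition labels are assumptions made for the example) randomly deals a pool of consenting clients into conditions:

```python
# Minimal sketch of random assignment: every client has the same chance
# of receiving each condition, so later outcome differences cannot be
# attributed to pre-existing differences between the groups.
import random

def randomly_assign(clients, conditions=("counseling", "control")):
    """Shuffle the pool, then deal clients to the conditions in turn."""
    pool = list(clients)
    random.shuffle(pool)
    groups = {condition: [] for condition in conditions}
    for i, client in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(client)
    return groups

groups = randomly_assign([f"client_{n}" for n in range(20)])
for condition, members in groups.items():
    print(condition, len(members))  # 10 and 10, determined by chance alone
```

Replacing the control condition with a second treatment turns the same procedure into an experimental comparison of alternative services, a point taken up below.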

Some practitioner-researchers may be reluctant to conduct true experiments because institutions or clients put a higher priority on the provision of services than on research to evaluate, and eventually to improve, services of questionable value. Expectations on the part of clients or employing institutions may create pressures that even dedicated researchers find hard to resist. One way out of this dilemma is to compare alternative services experimentally. A second kind of solution that is often acceptable is to delay treatment for a randomly selected portion of those applying for it. The use of a waiting list as a research tool is a relatively unobtrusive strategy in many settings, and all clients eventually receive the treatment that they seek. As the American Psychological Association’s Task Force on Evaluation and Accountability in National Health Insurance (1978) put it, “In the vast majority of cases the only really ethical position lies in providing the public with effective services or services whose effectiveness is under systematic evaluation” (p. 305). This ethical requirement applies to vocational services just as it applies to health services generally.
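A waiting-list design of the kind just described can be sketched in the same terms (again a hypothetical illustration, not a procedure prescribed by the editorial): a random portion of applicants is treated at once and the remainder after the observation period, so every applicant is eventually served.

```python
# Hypothetical sketch of the waiting-list strategy: randomly delay
# treatment for some applicants. The delayed group serves as a control
# during the waiting period, yet everyone is treated in the end.
import random

def waitlist_split(applicants, immediate_fraction=0.5):
    """Randomly choose who is treated now and who waits."""
    pool = list(applicants)
    random.shuffle(pool)
    cut = int(len(pool) * immediate_fraction)
    return pool[:cut], pool[cut:]  # (treated immediately, waiting list)

immediate, waiting = waitlist_split([f"applicant_{n}" for n in range(30)])
print(len(immediate), "treated immediately;", len(waiting), "on the waiting list")
# Compare the two groups at the end of the waiting period, then treat
# the waiting group as promised: no applicant is denied service.
```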

REFERENCES

American Psychological Association Task Force on Continuing Evaluation in National Health Insurance. Continuing evaluation and accountability controls for a national health insurance program. American Psychologist, 1978, 33, 305-313.
Astin, A. W., & Ross, S. Glutamic acid and human intelligence. Psychological Bulletin, 1960, 57, 429-434.
Cole, N. S. Evaluating standardized tests and the alternatives. In A. J. Nitko (Ed.), Exploring alternatives to current standardized tests (Proceedings of the 1976 National Testing Conference). Pittsburgh: University of Pittsburgh, School of Education, 1977.
Cook, T. D., & Campbell, D. T. The design and conduct of quasi-experiments and true experiments in field settings. In M. D. Dunnette (Ed.), Handbook of industrial and organizational psychology. Chicago: Rand McNally, 1976.
Eisenberg, L. The social imperatives of medical research. Science, 1977, 198, 1105-1110.
Glass, G. V. Reply to Mansfield and Busse. Educational Researcher, 1978, 7, 3.
Mansfield, R. S., & Busse, T. V. Meta-analysis of research: A rejoinder to Glass. Educational Researcher, 1977, 6, 3.
Smith, M. L., & Glass, G. V. Meta-analysis of psychotherapy outcome studies. American Psychologist, 1977, 32, 752-777.
Strupp, H. H., & Hadley, S. W. A tripartite model of mental health and therapeutic outcomes with special reference to negative effects in psychotherapy. American Psychologist, 1977, 32, 187-196.

Received: February 23, 1978