Research methodology: Strengthening causal interpretations of nonexperimental data


L. Sechrest, E. Perrin, and J. Bunker (Eds.). Research Methodology: Strengthening Causal Interpretations of Nonexperimental Data. (AHCPR Conference Proceedings, Tucson, AZ, April 1987). Washington, DC: U.S. Department of Health and Human Services, Public Health Service, Agency for Health Care Policy and Research, 1990.

Reviewed by: ZITA M. CANTWELL, City University of New York, 30 West 60th St., New York, NY 10023

The major papers and accompanying statements, presented at an AHCPR-sponsored conference designed primarily for researchers in health services, form the contents of this monograph. Commentators and conference participants were drawn primarily from fields in health services research; the major papers were oriented more generally toward applied social research.

Perhaps the best starting point for the reader of this collection of papers is the final contribution: F. Mosteller's overview, "Improving research methodology." First, that paper's title gives a more accurate description of the whole collection than does the title of the conference, "Strengthening causal. . . ." More importantly, Mosteller's statement, referenced to each of the major papers, functions as an effective, and needed, advance organizer for the reader.

A stated stimulus for the conference was the growing realization among researchers in health service fields that studies cannot be "confine[d] . . . to randomized experimental designs. . ." and that therefore ". . . improved methods were needed for interpreting nonexperimental data. . . ." Expansions of this theme are directed toward a concept of polarities between two sets of linked constructs, i.e., random assignment/experimental design/causal interpretation versus nonrandom assignment/nonexperimental design/no causal interpretation. The nuances of neither the constructs nor the linkages received much consideration, at least insofar as they might have structured emphases or directions for the major paper topics and discussant comments.

The collection is introduced with a paper on the current condition of research design applications in the health services. The statement is based on conclusions drawn from a critical analysis of the research methodologies used in 100 research studies published in two health services journals over a 14-month period and in 20 research proposals (a 10% sample) submitted over a 6-month period to the National Center for Health Services Research and Health Care Technology. Seven research design and analysis categories functioned as content review guides, namely: sampling design, measurement issues, internal validity, external validity, construct validity, statistical conclusion validity, and unjustified conclusions. Half or slightly more of the studies had problems of sampling, measurement, external validity, or statistical conclusion validity. At least a third drew unjustified conclusions; close to the same proportion had problems related to construct or to internal validity. A major conclusion, the need of researchers in the field to strengthen weak designs, provides the needs statement for the conference and its themes.

The major papers are concerned with specific topics in the design of research studies: generalization of results, the role of theory in the development of hypotheses, latent structure models, establishing confidence limits, confounding variables, the regression-discontinuity design, the contributions of meta-analysis, small area analysis, and a statement on the general meaning of causality to the researcher in the field of health services. Each major paper is a solid contribution to its domain. For example, the sophisticated researcher can find information about design methodologies in the articles on latent structure models and regression-discontinuity design. The paper on theory as method comes the nearest to dealing directly with the issues related to causality and the need to develop a strong defense for objectives and hypotheses. Expert contributions appear on the topics of external validity, meta-analysis, and confounding variables.

The keynote address of the conference, on the generalization of causality, speaks to the major theme of the conference and offers the thoughtful reader many prompts for reflection about cause-and-effect relationships in relation to the variabilities within research designs. This contribution also serves as a reminder that the ability to generalize findings at all rests on the scaffolding, or "logic net" (Platt's [1964] term), that supports the problem and hypothesis. However, significant points of this paper are not followed up within the conference structure.

The stated major conference themes of "causality" and "causal interpretation" are frequently expressed but not frequently explored with reference to the major paper topics. The distinction between laboratory research and field research studies is not made clearly; the possibilities for causal attribution in the two types of studies are not made subjects for speculation. Further, the distinctions between true experimental and quasi-experimental designs with reference to the stated conference theme of causality are not pursued. The emphasis is placed on random assignment, or its absence, and causality. The kinds of health services research alluded to in the conference rationale are those often studied through the use of field research paradigms. Field research studies frequently use quasi-experimental designs, designs that approach cause-and-effect attribution from a different perspective than the laboratory research usually conducted with true experimental designs.


But in a most critical aspect of any cause-and-effect attribution, the experimental and quasi-experimental approaches are similar: both require the development of conceptually strong and detailed "logic nets" and the use of the method of "strong inference" (Platt, 1964). Although Platt's contributions were alluded to briefly in one paper (". . . eliminating rival explanations . . ."), their implications in relation to causality played at best a relatively unvoiced role in the total conference structure.

The experienced evaluation researcher can use these papers to advantage. However, the researcher should have a grasp of the distinctions and similarities between "field research" and "evaluative research," to use Suchman's (1967) term, and a clear idea of the categories of information wanted from pursuing the collection. Suchman's five categories of evaluation criteria for a program's success or failure, namely effort, performance, adequacy of performance, efficiency, and process, can provide a stimulus for questions to address to the material, especially those concerning cause-and-effect relationships (1967, pp. 61-68).

The ideas, suggestions, and conjectures presented provide material for the practiced researcher, and they have the potential to spark some very constructive discussions. The collection will not be as useful to the neophyte researcher. It will be of least use to the researcher who lacks a clear understanding of the distinctions among the major categories of research and their associated general methodological issues, a system of evaluation criteria, and familiarity with the current issues basic to cause-and-effect interpretations of the results of research.

REFERENCES

Platt, J.R. (1964). Strong inference. Science, 146(3642), 347-353.
Suchman, E.R. (1967). Evaluative research. New York: Russell Sage Foundation.