Technology in Society, Vol. 15, pp. 371-381, 1993. Printed in the USA. All rights reserved.
0160-791X/93 $6.00 + .00 Copyright © 1993 Pergamon Press Ltd.
Political Decision Making in Science and Technology: A Controversy About the Release of Genetically Engineered Organisms

René von Schomberg
ABSTRACT In the policy-making process, there is increasing demand for scientific and technical information. But one of the major insights about science and technology recently highlighted by historians and philosophers is that, in their development, science and technology are accompanied as much by internal controversy as by consensus. This creates a problem for the policy decision maker: how can a consensus in the policy-making process be achieved in the context of scientific and technological controversy? This problem can be approached by analyzing the types of discussions in which scientific and technical experts participate, and that analysis is the topic of this paper.
Analyzing the types of discussions in which scientific and technical experts participate reveals that the central statements in a scientific-technological controversy claim plausibility rather than truth. In the case of such epistemic controversies, discussion is not bound to the level of statement and counter-statement. Indeed, often a whole domain of knowledge, and even the methods by which we are to gain new knowledge, become controversial. Plausibility claims are raised by weak arguments such as analogies, suppositions, and appeals to authority. As a result, we become aware of a fundamental lack of knowledge and information for dealing in an effective way with the problems encountered. Conversely, such forms of argumentation serve as indicators that a scientific controversy is going on. This article considers the case of epistemic controversy in disagreements about risks associated with the deliberate release of genetically engineered organisms into the environment. It begins by describing the problem as fundamentally trans-scientific - that is, as a scientific problem for which there is no clear means of resolution - and then identifies its multidisciplinary aspects. It concludes with an analysis of the kind of issues such epistemic controversies raise for policy formation, and makes suggestions for how these issues can be addressed.

René von Schomberg is Faculty Associate at the Faculty of Philosophy, Tilburg University, in the Netherlands, and Director of the International Center for Human and Public Affairs. He publishes in the field of Philosophy, Sociology of Science, and Technology Studies. He is editor of the book Science, Politics and Morality: Scientific Uncertainty and Decision-making. He is currently working on a book on reasoning in science and politics. I want to thank Professor Carl Mitcham, Pennsylvania State University, for editing this manuscript, and for his helpful comments for my further work, during my visit to Penn State in October, 1991.
Posing Questions About the Release of Genetically Engineered Organisms as a Trans-Scientific Problem
It is impossible to predict with complete accuracy the ecological impact of the introduction of a genetically engineered organism into the environment. The uncertainty of the ecological outcomes could be reduced by further inquiry and by field tests. But such tests could themselves have unexpected and undesirable impacts. A ban on field tests, which would protect against such deleterious consequences, would also make it impossible to gain the kind of information that could counter fear of such impacts or help develop means to counter the results. Thus, the scientific question concerning the environmental impact of genetically engineered organisms is transformed into a trans-scientific one concerning how to solve the problem at issue.

Moreover, even the claim about the uncertainty of the impact of the new technology is controversial. There are scientists who argue that there are no serious risks involved in the deliberate release of genetically engineered organisms. Others maintain that field tests involve unknown risks. The positions of equally eminent scientists can be drastically different. Two quotations from participants in the debate can illustrate this point. The response of Waclaw Szybalski (McArdle Laboratory for Cancer Research, University of Wisconsin) to a moderate article by Winston J. Brill (Vice President of Research and Development, Agracetus, Middleton, Wisconsin) in Science begins as follows:

Brill's article is a well-prepared, scholarly evaluation of the true benefits and hypothetical hazards of genetic engineering in agriculture. However, since it seems to represent the point of view of the theoretical scientist, a few additional practical points should be included stressing the societal point of view. 1) Brill refers to risks that are "very small" or "extremely unlikely," instead of saying that they are nonexistent [emphasis added] from the practical [his emphasis], societal point of view. . . .
2) Brill does not refer to the "early warning principle" which is built into genetic engineering experimentation and which offsets any present need for regulation [emphasis added].1
Szybalski thus thinks that the term “risk” is not applicable at all.
By contrast, Martin Alexander (Liberty Hyde Bailey Professor, Cornell University)2 admits a hazard, and proposes to control it. After explaining the structural lack of knowledge in the field of ecology, he concludes:

To ban field testing of genetically engineered organisms . . . would prevent practical exploitation of any of these products designed for use in agriculture, pollution control, or mining. Ecologists and ecotoxicologists have an answer to this dilemma: the microcosm. Microcosms are designed to simulate natural environments and the interactions among natural populations, and their dimensions allow them to be maintained in the laboratory or greenhouse. They can also be modified to provide the degree of containment and security appropriate for potential communicable agents. For these ecosystem models to be useful in this application, of course, they would have to be modified somewhat to accommodate the specific requirements of testing genetically engineered organisms. Moreover, the usefulness of the various microcosms as predictors of events in the field must be validated. This can only be done by field studies of benign surrogates for the genetically modified species, and later, by studying engineered organisms whose safety has been adequately ensured.3
Alexander thus wants research in laboratories and greenhouses in order to avoid the unknown hazards of field tests and to gain new knowledge. But in the last few sentences he proposes a massive research program that eventually must lead to something like test protocols for genetically engineered organisms. Alexander's research program actually implies that no field tests should be undertaken prior to the completion of investigations using his vague proposal for creation of an ecological "microcosm." And Alexander's point of view is not extreme. For example, according to Daniel Pimentel (Professor of Insect Ecology and Agricultural Science, Cornell University), "After conducting indoor tests, the developer of a modified product should conduct field tests on islands and similar isolated areas."4

The points of view of equally qualified scientists thus differ so much that there is not even a consensus about the adequacy of the available knowledge in regard to the problem of evaluating risks. On the one hand, scientists agree that it is impossible to predict the ecological impact of the introduction of a genetically engineered organism. On the other, they radically disagree about the relevance of an answer to this question in regard to the evaluation of risks. Moreover, some scientists doubt whether it makes sense even to make an effort to investigate the problem, a point revealed in Brill's original argument:
To allay concerns about the safety of a recombinant organism, it would be useful to follow testing protocols before the organism is generally released. However, the task of designing relevant tests for most situations seems to be enormous, if achievable at all.5

Brill does not want to wait for the results of the research program of the ecologists. He has serious methodological objections against greenhouse experiments, arguing that they would not yield any relevant results for calculation of the impact of a release of organisms under field conditions. But five ecologists react to Brill's point as follows:

Does our current lack of knowledge make it hopeless to attempt to predict possible danger? Past attempts to predict the ecological trajectory of introducing organisms have not been successful . . . . Experimental ecology has made rapid strides in the last 15 years . . . . Progress will, however, only be made if a genuine spirit of interdisciplinary cooperation is adopted, and the proponents of the various viewpoints on the risk of genetically engineered organisms work together to better define the important questions and to answer them.6
Multidisciplinary Aspects of the Problem of the Release of Genetically Engineered Organisms

Since there are no knock-down arguments available, it is not clear which discipline can provide the most promising answers. Both biotechnologists and ecologists want to contribute to the problem of evaluating risks. There seems to be some agreement on the necessity of an interdisciplinary approach to solving the problem. But a closer look shows that the controversy extends into a question of how such interdisciplinary work should be done. For example, Bernard D. Davis (Adele Lehman Professor Emeritus of Bacterial Physiology, Harvard Medical School) argues that in trying to assess the potential dangers, the experience of ecologists with transplanted higher organisms is less pertinent than are the insights of fields closer to the specific properties of engineered microorganisms: population genetics, bacterial physiology, epidemiology, and the study of pathogenesis.7 By contrast, Pimentel, with an eye on the greenhouse and lab experiments, has the following in mind: "Such efforts would require teams of microbiologists, ecologists, plant breeders, agronomists, wildlife specialists, public-health specialists, and botanists to work together."8

The different evaluations of the possible risks and hazards and the different proposals for an interdisciplinary approach derive from different interpretations of modern evolutionary theory. One cannot say that one interpretation is better than the other. One can only say that different interpretations cannot all be universally true in the long run. The
central controversial statements can only claim plausibility. I claim that the whole discussion can be represented in pairs of plausible claims. Each pair consists of contrary but equally plausible claims. For the sake of brevity, I will give the following example of such a pair.9

Statement One: Predictions of the risk of deliberate release can be based on the experience of traditional practices in agriculture (plant breeding). In traditional plant breeding the exact genetic changes are unknown. In the case of genetic experimentation, however, the specific modifications can be characterized. Therefore, there is no reason to expect that engineered organisms could cause greater problems than traditional techniques.

Statement Two: In some cases it is more likely that engineered organisms could cause more problems. In the case of genetic manipulation, we introduce new genes into species, whereas traditional practices only rearrange existing genes.

These two statements are both plausible. The plausibility of statement one does not affect the plausibility of statement two. (This would be different if truth claims were at stake: accepting one statement as true would imply the rejection of the other statement as false.) We cannot expect that the scientists will come to some kind of agreement in a discussion on this topic, since the differences derive from different methods and approaches that are more or less standard and fundamental (and therefore indisputable) for the specific discipline. Since there are no knock-down arguments available, each discipline can develop its own perspective on the problem.
Expert Problem Definition and Inconsistent Recommendations to Decision Makers
Ecologists and biotechnologists make different recommendations to policy makers based on different scientific arguments. Biotechnologists maintain that no particular regulation of the new technique is necessary: existing controls on traditional methods of plant breeding guarantee sufficient safety, given the analogy with natural and traditional processes. The opposite recommendation, espoused by ecologists, is founded on the argument that it is impossible to predict the ecological effects of a new release into the environment. They maintain that new tests should be developed to investigate the effects of the products of genetic technology. Both scientific communities call for regulations based on scientific evidence, but each puts forward incompatible assessments of the problem at hand.10 Ecologists define the problem as new, whereas biotechnologists maintain that nothing new is involved. Biotechnologists propose that decisions be made in accordance with available information, whereas ecologists believe that a decision can only be made after new knowledge has been acquired.
To justify policy-making decisions, we usually use a form of argument that could be reconstructed in terms of practical argumentation or in terms of the well-known model of rational decision making. The idea that scientific agreement is the foundation of a rational policy can be found in the model of rational decision making that is the basis of methods such as cost-benefit analysis. Using this model, we start with given problems in which the policy maker has clearly defined policy goals and a list of alternative means by which each of these goals could be realized. The choice of means is based on specified rules of preference and selection. Scientific information plays an important role in this model, since it determines the effectiveness of the means and the possibility of realizing the policy goals. The quality of a policy is expressed in terms of its effectiveness and efficiency. The ideal concept forming the basis of a rational decision model rests on the assumption that a consensus among scientific experts guarantees the quality of the policy.

These important assumptions cannot be made when we rely on information borrowed from an epistemic discussion in science. Against the background of epistemic discussions, there appears the uncertainty of our present knowledge: the incompleteness of the information and the inconsistency of the data. The scientific information constitutes a strategic resource that, depending on the different political options, can be interpreted in several ways. Within this context of political decision making and epistemic discussions, we find the following phenomena, by which the inadequate rationality of societal actions can be catalogued.

1. Data from an epistemic discussion are translated into expressions characterized by truth or probability (or illustrative data are translated into proof, or dangers are transformed into risks).11 Take, for example, the case of plausible knowledge claims translated into predictions with probability characteristics. Such probabilities must again and again be adjusted to suit new events, states of affairs, and catastrophes. This has been generally acknowledged after the Chernobyl disaster. In our example of genetic engineering, the data of field experiments have to prove the safety of genetic engineering.

2. In a particular policy, preference is given to a discipline that can provide facts (data). The hypothetical dangers formulated by opposing scientific disciplines do not sufficiently flow into the process of decision making.12 The emphasis on the inadequacy of our knowledge constitutes a loss of scientific authority for that particular discipline, rather than a problem for political decision making. The discipline that sides with the subjective preference of the policy maker is the winner (decisionism). In Europe we came across this when the release of organisms was considered dangerous in Denmark and resulted in a halt to the intentional release of genetically modified organisms; at the same time, it was regarded as harmless in Italy, leading to an absence of any control until the EC eventually developed a regulatory directive.
3. When an epistemic discussion attracts the attention of the general public, the scientific debate is usually politicized unnecessarily. This can be seen in the irrational struggle concerning data: interest groups look for support from experts who share their political objectives. And yet, one reacts affirmatively to the presuppositions of the rational decision-making model.13 As long as the controversy continues, there will be no decision. Depending on one's political preferences, one feeds the controversy with explosive new data or tries to escape scientific dissent. It is also the case that interest groups call for political decisions, but in the policy-making process, controversy is used to delay those decisions. I do not want to reinforce the idea that in the case of controversies, interests come into play. I would rather stress a point that is important for a theory of society: science has become a resource for strategic action, and has lost its functional authority. This means that science cannot shield societal actions from unnecessary discussions. In the policy-making process, a contradiction arises: an appeal to science seems necessary because of the complexity of the issue, but is not possible since there is a controversy; and what is impossible, namely an appeal to a source that can provide authoritative data, often becomes necessary.

4. Following the sequence of actions prescribed by the policy, collecting the facts before making the decisions leads to a situation in which unsolved or unsolvable scientific problems do not appear on the agenda. Therefore, science is put under pressure to produce hard facts in fields in which this cannot be done. What everyone wants are one-handed scientists.14

5. The scientific debate among disciplines claiming acknowledgement and authority in a new field of research is threatening to deteriorate into public campaigns for recruiting sympathy. Biotechnologists promulgate the promises and blessings of the new technology, while ecologists do the same for a threatened environment.

6. Even the legal system seems unable to cope with the problems, as is apparent from the following.

- The principle of the causal agent (blame) cannot be applied. Eventually, neither "actors" nor "victims" can be identified when using the new technology. For example, it is impossible for a victim of the Chernobyl disaster to go to a court of law in Europe and claim that the disease from which he or she is suffering has been caused by nuclear radiation.

- Legal norms can no longer be controlled in practice. The observance of standards is often the result of informal agreements and negotiations between public authorities and individuals. In some cases, legal norms cannot even be defined. In most of the western countries, the admissible maximum amount of radiation in the vicinity of nuclear reactors is determined by what laws regarding atomic affairs refer to as "the most recent state of affairs in science and technology."

- The legal system can no longer fulfill a normative role in relation to the admissibility of technological actions. This is illustrated in a judgement of the Bundesverfassungsgericht (supreme court) in Germany concerning the controversy surrounding the Kalkar plant: "It is not the duty of the lawgiver to determine the possible kinds of risks, factors of risk, the methods to determine such, or fix the limits of toleration."15 It is obvious that the judge had shifted this problem to the realm of politics.

- The conflicts in society cannot be settled under the conditions of the equality of power of a judicial judgement; they are left to a socially unequal power struggle in which human beings depend on the responsibility of individual citizens. This is a difficult and, in the future, undoubtedly unrealizable task, because the risks of the new technologies can no longer be observed by the individual citizen.
Against the background of these phenomena, the problems of legitimacy confronting the planning state arise. On the one hand, the state can no longer agree with the definitions of problems by interest groups. On the other hand, it does not know how to cope with the disapproval of the overpowering innovation processes about which citizens cannot make decisions. The problem of legitimacy is partly compensated for by the tendency of public management to negotiate with different groups in society. Examples are found in the representation of such groups in health and environment councils. I do not expect spectacular results from mutual concessions reached during negotiations, since such concessions normally arise within the framework of strategic actions and unequal conditions of power. It is far more important to note that if this is the road taken, the solution of the new problem - making decisions under conditions of scientific uncertainty by means of a justifiable procedure - is abandoned. That we need a procedural solution is apparent, since there does not yet exist an accepted institution in society that can determine who would participate in making decisions within the context of scientific controversies.
Discursive Procedures in Administrative Law
To conclude, let me show that a procedural process applied to such problems can generate the general framework for a solution to the problematic phenomena. So-called "discourse ethics" provides an understanding of the ways that questions about just procedural solutions can be answered without getting stuck in biased ethical criticisms of technology or a dogmatic unwillingness to make certain values the subjects of an argumentative test. Within discourse ethics, one argues
that a material ethics maintaining that one could, once and for all, prescribe norms cannot be founded on arguments. History has shown that our ethical and scientific insights are fallible and must always be revised in light of new situations and problems. The validity of norms can no longer be derived from sources that have been regarded as infallible. Since norms can only be found in discussions, their validity should be sought in the free mutual acknowledgement of these norms by (potential) discussion partners. Therefore, the conditions for the acknowledgement of norms are of the utmost importance. As a matter of fact, this is true for all procedural theories of justice and democracy. In a sense, we can say that the conditions for a discussion are simultaneously the conditions for a rational agreement about norms.

In light of our question about the ways to reach an agreement regarding policy, and given the background of epistemic discussions, we can make a list of conditions that have to be fulfilled in discursive procedures. From the analysis of the structure of epistemic discussions, we have to establish the idea that there should be an acknowledgement of scientific disagreement. It is not reasonable to expect that scientific experts, on the basis of scientific insights, will be in agreement in the near future. This means that in procedures concerning policy, participation of scientific experts with only one viewpoint cannot be justified. The problem of legitimacy cannot be solved if some of the parties concerned with the political process are excluded. Another condition is that all parties should have equivalent roles. Therefore, experts should only be allowed to supply information for the discussion and not have an advisory or determinative function with regard to policy.
Decisions having irreversible consequences for nonidentifiable groups or future generations who are not able to participate in the discussions carry a heavy burden of legitimacy and should, if possible, be avoided. Discursive procedures imply that norms that exclude anticipated universal interests will be rejected. In discursive procedures the norms are not grounded, but are selected negatively. All relevant aspects of the problem should be dealt with within the framework of a discursive procedure. In addition to eventual scientific problems, the following are also relevant:

- ethical questions, such as: What options are desirable? (In general, how do we want to live?);
- moral justifications, such as: What norms should, in the interest of all, be included? (e.g., questions about the division of risks in society);
- questions about justice, such as: What (social and technological) aims should be promoted or limited within the framework of the rules of law? (Example: Should biotechnological research be controlled by legal means?)
During the short history of technological policies in the western countries, we find few proposals for procedures that address this problem. During the 1970s there were heated discussions about a Science Court (especially in the United States). The focus of this proposal was an assemblage of scientists from different fields that would meet and act as judges to reach agreements, with the promise that normative aspects would be kept out of the discussions. This plan, however, was never realized. In light of the foregoing analysis, such a body would not be able to solve controversies, since the procedure in question is based on the assumption that the scientists would agree to set aside the normative aspects. The illusionary aspects of this became evident very early on.

Another proposal, however, was institutionalized: the so-called "technology assessment" procedure. Initially this implied the establishment of a planning instrument in which the expected effects and side effects of technology would be mapped and used as input in the process of making a decision. In this case, the possibility of a rational consideration of the pros and cons of technology was the guiding light. An Office of Technology Assessment was set up as early as 1973 in the United States. This office acts as an advisory board to the U.S. Congress. In some European countries, similar offices exist. In the Netherlands one has been operating since the mid-1980s. This office can advise the Minister either on its own initiative or by request. The carrying out of this proposal can be regarded as the first attempt at institutionalization, since its goal is actual and democratic guidance and control of technology. This essential element, however, hardly manifests itself in the real roles of these offices, and is not possible because an "evaluation" of technology always comes too late. Public information on new technologies only starts moving when the products of the new technologies have been realized.
Moreover, the information available on new products is quite often limited by patent laws. The six phenomena mentioned above are not eliminated by the presence of assessment offices (certainly not in their present form). And yet, there is no acknowledgement - at least not of a procedural nature - of any scientific uncertainty or scientific dissent. As far as commissions for interdisciplinary research are concerned, assessment offices seem to anticipate agreement among scientists. Ultimately, epistemic discussions are analyzed in terms of conflicting interests. This leads to the loss of the ability to select on the basis of universal interests. Notwithstanding these remarks, the Dutch office, for example, has contributed towards a social learning process that could develop in the direction of a discursive procedure. From this point of view, the office has a social function: social groups could approach the office and make known their desires and needs for information. It would, however, be an improvement if all the parties involved in discussions could be granted equal power in discursive procedures. To achieve this, the necessary
changes to administrative laws have to be made. The conditions for a discursive procedure should be legally settled and, above all, the rights and duties regarding the distribution of information should be put into place. The policy process would then no longer be evaluative: It would become constructive in the form of a continuous interaction between the exchange of information and determination of policy. In such a policy process, real democratically controlled learning processes can be implemented, and mistakes can be corrected. In any case, it is plausible that one day the unnecessarily politicized debate will be eliminated.
Notes

1. W. Szybalski, Letter, Science, Vol. 229, no. 4709 (July 12, 1985), pp. 112 and 115. The article being criticized is W. J. Brill, "Safety Concerns and Genetic Engineering in Agriculture," Science, Vol. 227, no. 4685 (1985), pp. 381-384.
2. Interestingly, continuing the two earlier connections, Alexander received his Ph.D. in bacteriology from the University of Wisconsin.
3. M. Alexander, "Ecological Consequences: Reducing the Uncertainties," Issues in Science and Technology, Vol. 1, no. 3 (Spring 1985), pp. 66-67.
4. D. Pimentel, "Down on the Farm: Genetic Engineering Meets Ecology," Technology Review (January 1987), p. 30.
5. W. J. Brill, "Safety Concerns and Genetic Engineering in Agriculture," Science, Vol. 227, no. 4685 (1985), pp. 381-384.
6. R. K. Colwell, E. A. Norse, D. Pimentel, F. E. Sharples, and D. Simberloff, Letter, Science, Vol. 229, no. 4709 (1985), p. 111.
7. B. Davis, "Bacterial Domestication: Underlying Assumptions," Science, Vol. 235, no. 4794 (1987), p. 1329.
8. D. Pimentel, op. cit., p. 30.
9. For an extensive analysis that also deals with all the properties of epistemic discourse, see my "Controversies and Political Decisionmaking," in René von Schomberg (ed.), Science, Politics and Morality: Scientific Uncertainty and Decisionmaking (Dordrecht: Kluwer, 1993), pp. 7-25.
10. Occasionally, solutions in the form of interdisciplinary research have been attempted. Usually, the experts argue endlessly about which disciplines should participate in the research.
11. For the transformation of dangers into risks, see A. Evers and H. Nowotny, Über den Umgang mit Unsicherheit. Die Entdeckung der Gestaltbarkeit von Gesellschaft (Frankfurt am Main: Suhrkamp, 1987).
12. In the policy-making process, one cannot deal with the concept of hypothetical risk either. See R. Kollek, "Ver-rückte Gene. Die inhärenten Risiken der Gentechnologie und die Defizite der Risikodebatte," Ästhetik und Kommunikation, no. 69 (1988), p. 34.
13. I do not mean that in the political realm one actually turns explicitly to the normative model of rational decision making. I only suggest that empirically founded arguments in the policy-making process could be optimally represented in this way.
14. See Arie Rip and Peter Groenewegen, "Les faits scientifiques à l'épreuve de la politique," in Michel Callon (ed.), La Science et ses réseaux. Genèse et circulation des faits scientifiques (Paris/Strasbourg, 1988), pp. 149-172.
15. Cited by Rainer Wolf in his article "Zur Antiquiertheit des Rechts in der Risikogesellschaft," in U. Beck (ed.), Politik in der Risikogesellschaft (Frankfurt a.M.: Suhrkamp, 1991), pp. 378-424.