The moral role differentiation of experimental psychologists


Soc. Sci. Med. Vol. 15F, pp. 27 to 31, 1981
Printed in Great Britain. All rights reserved
0271-5392/81/010027-05 $02.00/0
Copyright © 1981 Pergamon Press Ltd

H. A. BASSFORD
Department of Philosophy, Atkinson College, York University, Canada

Abstract-This essay asks whether the discipline of experimental psychology is morally role differentiated; whether, that is, the social functions or contributions of that discipline give rise to special norms which allow experimental psychologists to weigh some moral considerations less heavily than would be required in everyday situations. This question is important to experimental psychology because of the large number of research procedures which clearly would be immoral if carried out by the nonprofessional. The essay shows that any claim to moral role differentiation for the discipline must involve proposing first that the results of psychological experimentation are of great value to furthering human welfare (this claim is not disputed in the essay), and second that these general benefits override the specific harms or disutilities caused to the subjects of particular experiments. The essay argues that in most cases experimenters can roughly calculate the utilities arising from individual experiments, and so cannot appeal to the general benefits of research to excuse themselves from ordinary moral considerations in deciding whether to undertake particular experiments. The essay further argues that the utilitarian (cost-benefit) model itself must be modified by various considerations of human rights, which lay even more stringent moral considerations upon the psychologist. Accordingly, experimental psychology is only very weakly morally role differentiated. This result, however, does not significantly undercut psychological research, for most experimental procedures can be modified to conform with the relevant moral considerations. Further, a proper consideration of the rights model shows that many of the current concerns about obtaining informed consent are misplaced and put morally unnecessary burdens upon the experimental psychologist.

Let me begin by explaining what I have in mind by the term 'moral role differentiation'. If you or I were to put a noose around someone's neck and then hang them, we would rightly be considered moral monsters. But if the royal hangman were to do this while performing his official duties, his action would not be morally culpable. Indeed, at least until recent times, it would be morally laudable. This, of course, is because the hangman was thought to have a special and morally important role in society, and in carrying out that role he was excused from certain ordinary moral considerations. In any study of professional ethics, it is useful to examine whether the social functions of that profession give rise to any special professional norms which members of the profession must follow in order to perform their social functions. Modifying Alan Goldman [4], I shall say that if a profession has a special norm, and if that norm allows members of that profession to weigh less heavily what would ordinarily be morally important considerations, then that profession is morally role differentiated. What I shall ask in this paper is whether experimental psychology is morally role differentiated; whether, that is, the discipline has any special moral norms which override ordinary moral considerations.

At least on the face of things, any general browsing through the psychological literature makes the present sort of moral inquiry important, for there are significant numbers of research procedures which would clearly be immoral if performed in everyday contexts. Animals are regularly subjected to stress which is physically or emotionally painful or damaging. If I, for example, were to subject my pet monkey to virtually any of the deprivation conditions which Harlow has reported on over many years, the S.P.C.A. would certainly prosecute me. To a lesser but significant extent, humans are subjected to conditions which in nonexperimental contexts would clearly be morally dubious. People are sometimes studied in non-public situations without being informed of the study. At other times, subjects are subjected to stress or occasionally even to risks of physical harm without being informed of those risks. A substantial minority of experiments involve deception on the part of the investigator. Studies of journals of social psychology, for example, suggest that just under 20% of the total experiments in this area involve deception [9, 7]. In ordinary life, one needs very strong grounds in order to justify invading privacy, lying, hurting people or threatening their personal integrity. It is accordingly worthwhile asking what special overriding moral considerations are available to experimental psychologists.

The general location of any special professional norm for psychologists appears very quickly. Psychology is a science, and as such is dedicated to the advancement of knowledge. This goal can then be seen as the primary moral norm regulating the psychologist's professional activity. The report of the Ad Hoc Committee on Ethical Standards in Psychological Research to the American Psychological Association [2] exemplifies this sort of claim.

We begin with the commitment that the distinctive contribution of scientists to human welfare is the development of knowledge and its intelligent application to appropriate problems. Their underlying ethical imperative, thus, is to carry forward their research as well as they know how.

Now, to start considering this position, it is worth noting that there are two possible interpretations of
the claim being made. The first is that knowledge itself has intrinsic value and that the pursuit of this value frees one from otherwise applicable moral standards. The second is that knowledge gained through psychological research is sufficiently beneficial to mankind that ordinary moral considerations either do not apply to, or have a diminished claim upon, the researcher. While I believe that it is some version of the second claim that is normally put forward, it is useful to begin by considering the first.

No one (at least no one in academia!) will deny that the pursuit of knowledge is a legitimate goal in and of itself. When evaluating one's behaviour, the fact that one has increased knowledge is always a reason in one's favour. But the problem with epistemism is that it presumes knowledge to be the only or the most important intrinsic value, which contradicts the moral intuitions and moral theories of virtually everyone. Some of the medical experiments conducted by doctors in Nazi concentration camps (and I don't think it necessary to go into detail here) may well have increased medical knowledge, but they also led to those doctors being tried at Nuremberg. This is because the research grossly violated moral norms requiring concern for human individuals and human welfare. Accordingly, it is clear that the fact that a profession searches for human knowledge does not in and by itself provide moral role differentiation for that profession.

The second interpretation of the "scientific norm" mentioned above takes the overriding claim of welfare into account. Roughly, it proposes first that the results of psychological experimentation are of great value to furthering human welfare, and second, that in one form or another these general benefits override the specific harms or disutilities caused to the subjects of particular experiments. Although not all commentators are willing to do so [1], I am happy to accept the first proposal and to confine my examination to the resulting special norm.

The strongest possible version of the position is one which would exempt the investigator from conducting any careful moral evaluation of his proposed experiment. The argument proceeds as follows. Overall experience shows that psychological experimentation has great social utility. On the other hand, it is often noted how difficult it is to predict the benefits or harms, at either the social or individual level, that will flow from a particular experiment. Given these two facts, as long as the researcher is not perpetrating such great and obvious harms as occurred in the Nazi experiments or the Tuskegee syphilis studies, the experiment should be considered morally justified.

This conclusion would make life easier for the researcher, but unfortunately, it does not stand up to analysis. On the level of logic its two premisses seem incommensurate. The general position is premised upon the overwhelming social utility of experimental results, so must presuppose methods of calculating that utility. Given this, it would be very strange if there were no way to do a social cost/benefit analysis of at least some reasonably expected outcomes or of predicting any costs or benefits accruing to the participants in the experiment. This seems to be reasonably borne out on the factual level. The Ad Hoc Committee on Ethical Standards in Psychological Research
claims that there exist many "long established and reasonably effective mechanisms" for "assessing whether the results of an investigation will have scientific and practical value" [2]. Reynolds [7] provides detailed examples of how to do cost/benefit calculations. Accordingly, it appears that the minor premise of the argument can be rejected. There will, undoubtedly, be unforeseeable results of experiments, and in these cases the experimenter will not be culpable for the moral costs resulting from these factors. But many costs and benefits can be calculated, albeit often with considerable effort, so it will often be possible for a psychologist to know reasonably well whether his experiment is likely to provide more social or individual utilities than it will cause disutilities to its subjects. If it can be seen that there are greater disutilities than utilities, there are good grounds for not conducting the experiment. For in this case it would seem overall to be reducing total human welfare rather than furthering it, which is the value upon which this whole argument hinges. Accordingly, the researcher has a moral duty to consider the potential costs to his experimental subjects. When he discovers such costs, it is morally required, at a minimum, that he show these costs do not override the potential benefits.

This moral cost/benefit or utilitarian model, which is to be applied to individual experiments rather than giving blanket moral approval to experimentation, seems to have been adopted by many of those concerned with the ethics of social science research. It arises from a belief in the value of the research contrasted with the fact that some of that research has negative value for participants in that research. For example, the A.P.A.'s Ad Hoc Committee [2] says:

The basic problem faced by the investigator in planning research is how to design the study so as to maximize its theoretical and practical value while minimizing the costs and potential risks to the humans who participate in it. A particular study is ethically unacceptable to the extent that its theoretical or practical values are too limited to justify the impositions it makes on the participants or that scientifically acceptable alternative procedures have not been carefully considered. [See also 3.]
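To make the per-experiment balancing just described concrete, here is a minimal sketch in Python. It is purely illustrative: the class and function names, and the idea of attaching numeric "utility" scores, are assumptions for illustration, not a method given in the essay or in Reynolds [7]. The sketch simply encodes the rule that an experiment with identifiable costs to its subjects is acceptable on this model only if its calculable benefits at least offset those costs.

# Illustrative sketch only: a toy encoding of the per-experiment cost/benefit
# screening discussed above. The names and the numeric "utility" scores are
# assumptions, not anything proposed in the essay or in Reynolds [7].
from dataclasses import dataclass, field
from typing import List


@dataclass
class ProposedExperiment:
    name: str
    expected_benefits: List[float] = field(default_factory=list)  # calculable social/individual utilities
    expected_costs: List[float] = field(default_factory=list)     # calculable disutilities to subjects


def utilitarian_screen(exp: ProposedExperiment) -> bool:
    """Accept only if calculable benefits at least balance calculable costs.

    Unforeseeable outcomes are deliberately ignored, mirroring the point that
    the experimenter is answerable only for costs that could be identified.
    """
    return sum(exp.expected_benefits) >= sum(exp.expected_costs)


# Example: a study with an identifiable cost to its subjects passes the
# screen only because an offsetting calculable benefit has been shown.
study = ProposedExperiment("stress study", expected_benefits=[3.0], expected_costs=[1.5])
print(utilitarian_screen(study))  # True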

The position reached so far in this paper's philosophical excursion is thus similar to that of many concerned psychologists. Actually, I think the present argument shows that the Ad Hoc Committee's position is somewhat too strict. They suggest that the positive utilities must be shown to override the negative utilities. But this ignores the accepted fact that psychological research has produced social benefits overall. Given that not all results can be calculated, this factor would suggest that a given experiment will be acceptable as long as the calculable costs do not over-balance the calculable benefits.

In terms of the initial question of this paper, it is clear that appeal to the value of psychological research provides at best a very weak moral role differentiation. The imperative to do research does not override ordinary moral considerations. The general efficacy of psychological research provides a relevant consideration only when the calculation of negative and positive utilities in proposed experimental situations produces a balance of opposing considerations. In any other situation the psychologist must do the same sort of balancing of utilities as does the ordinary person, and he is as morally responsible for ignoring relevant moral considerations or for making inexcusable calculations as is the ordinary person. This is not to say, let me emphasize, that experimental psychologists must always undertake all of these moral calculations. Rather, I am arguing that they must do so when there are apparent disutilities for the subjects of the experimental research.

Very few discussions of such calculations appear in the literature. This may be because few morally risky experiments are conducted. The reports of experiments on animals, however, would appear to contradict this. Numbers of these involve subjecting the animals to suffering or actually causing them harm. Even granting that the interests of animals are of less importance than those of humans, the infliction of suffering on them does demand moral justification. The literature makes it clear that many of these experiments do provide tangible benefits for humans and indeed for general animal welfare [5]. Other experiments, however, produce negative utilities for the animal subjects but do not obviously produce any positive utilities. It is, I think, significant that in virtually none of these cases is there any attempt made to show what the possible benefits could be. This suggests that there is some cause for moral concern within the discipline.

In any case, I now want to argue that the evaluation model presented here is in fact not morally adequate and that a more adequate model will put even more stringent moral requirements upon some psychological experiments. The problem is that the present model is that of classical utilitarianism, which virtually every contemporary moral theorist holds to be in need of significant revision. Roughly, utilitarianism holds that there are certain interests which have value and that the morally best action is that which in the given circumstances maximizes those interests. One should, in other words, so act as to produce the greatest good for the greatest number. This is pretty clearly the principle which has been proposed here. A proposed experiment may produce certain harms to its subjects, but it may also produce certain benefits for them or others. If the total benefits outweigh the total harms, then more good will be produced by performing the experiment than by not performing it. Therefore, the experiment should be performed.

The problem with utilitarianism, or at least the one which is relevant here, is that there are certain interests which most people think should not be sacrificed even if their sacrifice would lead to a general increase in utility. These interests are those which are basic to the integrity of the person, and which have come to be called fundamental moral rights. This illegitimate abrogation of rights is precisely what was wrong with some of the Nazi medical experiments. It may well be the case that by selecting a group of people, wounding them grievously, and then studying various treatments of those wounds, doctors would be in a position to save many more lives in an anticipated military conflict. On balance it may well be the case that the total harm done is less than the total harm prevented. I am sure, however, that all of us would consider such action morally reprehensible, for it treats
its subjects with less than the minimum dignity which must be accorded to any human person. Interrelations of moral agents as moral agents can take place only on a basis of mutual respect for each other's dignity and autonomy. People have a right to their personal integrity, and so, as the various documents on human rights declare, to the rights to life, personal security and liberty. Certainly no liberal society would condone the above sort of behaviour, for liberal societies have as their very touchstone the concept of human rights. One must, then, modify utilitarianism by placing certain 'side constraints' [6] upon the pursuit of maximizing utility. One is justified in acting so as to produce the greatest good for the greatest number, but only so long as one's actions do not abrogate any basic moral rights.

This modification presents no problems for most psychological experimentation, and I think its demands might well be met without excluding any psychological knowledge goals, but the modification presents moral difficulties for some sorts of psychological research as presently conducted. [See 7.] The problems are as follows. Some procedures may lead to physical harm or produce such stress as to lead to lasting psychological difficulties or mental incapacities. This would appear to violate the subject's right to life and personal security. The use of deception or manipulation in experimentation appears often to violate the subject's right to liberty. The right to liberty is a right to the self-determination of one's activities so long as those activities do not interfere with the rights of other moral agents. If a person is manipulated or deliberately given erroneous information, that person cannot properly be said to be self-determined. Finally, some forms of deception or even of covert observation would appear to violate a person's right to privacy, since they would involve, to quote Article 12 of the United Nations' Universal Declaration of Human Rights, subjecting that person "to arbitrary interference with his privacy, family, home or correspondence...."

Whenever a proposed experiment falls into one of these categories, it is morally incumbent upon the experimenter either to abandon the experiment or to find a means to proceed without abrogating the subject's moral rights. An experimental psychologist would at best be excused from this requirement only when his experiment is absolutely vital to human society. R. R. Sears [8], for example, claimed in 1968:

The blunt fact is that, unless our scientific understanding of man can be brought to a far higher plane within the next couple of decades than it has been in the last couple of millennia, there will be no one left whose privacy can be defended.

If such a claim could be shown to be factual rather than emotional, then the researcher might well be allowed to override rights. But I think it is clear that there are very few times when the psychologist can legitimately make such claims on behalf of his research.

The conclusion is that the moral role differentiation which can be legitimately accorded to the psychologist is even more minimal than it appeared. This does not mean, however, that vast numbers of experiments must be scrapped, for in many cases the researcher can, with a little ingenuity, get reliable
results after obtaining what the literature calls 'informed consent'. In most cases, to have a right against someone is to be able to make a claim against that person. Implicit in this is the notion that one can waive particular claims, that one can give up certain rights against particular people in particular situations. Accordingly, if the psychologist informs the potential participant about risks or stresses, and if the participant agrees to undergo these risks or stresses, then the experiment will not violate the participant's rights.

It is worth noting that while the rights model sometimes burdens the psychologist with the moral requirement of obtaining consent, it leaves him, once he has obtained that consent, in a happier position than would be the case under the utilitarian model. This is because at least part of the moral burden imposed by the negative utilities of the experiment will have been shifted from the experimenter to the subject. When the psychologist makes the decision for the subject, however, he must shoulder the entire burden of moral responsibility. This is not to say that consent removes all responsibility from the experimenter. First, he is responsible for those risks which should have been identified and were not. Second, there is much moral debate about the extent of the rights which an individual can waive. While we allow people voluntarily to undergo great risks, we do not allow them to agree to be harmed knowingly. Consent, for example, is not an accepted defence against a homicide charge. So, perhaps those experiments which would knowingly cause grievous harm to right bearers must be ruled out. But, given the almost total lack of permanent injury in non-therapeutic research in any case (1 reported for 93,000 participants [7]), this does not seem terribly much to give up.*

* Though this will seem a much more stringent restriction if the currently raging debate about animal rights should be settled in favour of any of those animals used in psychological research.

As virtually all the commentators point out, the psychologist must be careful to see that he really has obtained consent. If the subject has been coerced or misled, either through lies or crucially incomplete information, then he cannot be considered to have given a morally binding agreement, and his right to self-determination will have been violated. This is a very serious requirement indeed, but it is not quite so serious as the literature suggests. The Ad Hoc Committee worries about whether the participant can be said to have consented when the experiment is too complex for him to fully understand, or when the experimenter is uncertain just how stressful the participant will find the experiment [2]. Reynolds [7] mentions these worries, and adds the worry that the participant may be in awe of the social status of the experimenter or may make an intuitive rather than a carefully considered decision. In fact these concerns reflect a paternalism which is inconsistent with the rights model, and which puts an unnecessary burden upon the experimental psychologist. One of the results of the rights model, which is the model basic to modern liberal democratic society, is that most people must be considered to be moral agents. This means they really must be accorded the
right to self-determination. If this is taken seriously, then, as long as a person is not coerced or misled, that person's explicit transfer or foregoing of a rights claim is all that is needed for the right to have been given up. It really does not matter whether he has acted on the spur of the moment, or for poorly thought out reasons, or because he is emotionally attached to the psychologist. One of the ways that one can respect human integrity is to allow people to make choices, even if they make poor ones.

This is not to suggest that the problem of not misleading people is not a serious one. If I fail to tell people that there are or may be risks, then even though they have agreed to be subjects of my experiments, I am still responsible for any harm or stress that they experience. People have a normal expectation that they will be warned of negative factors, and their consent is indeed conditioned by these normal expectations. Take as an example the sergeant-major in the old films who asks for volunteers. He always specifies that he wants volunteers for a dangerous mission. If he were just to ask for volunteers to "help him out a bit", and if the volunteers were to find themselves dropped behind enemy lines the next morning, then they would have a legitimate complaint. But they have no complaint if he tells them that it is dangerous, even though he does not detail the dangers. The psychologist, in asking people to consent to be experimental subjects, has the same obligation to specificity as does the sergeant, but he has only that great an obligation.

There remains one problem which the literature perceives to be quite serious. In many cases, psychologists use deception because they feel that is the only way they will be able to obtain vital data. The problem is that people will tend to modify their behaviour if they know it is being studied. Bower and de Gasparis [3] state the problem quite well:

How can you study fear as normally experienced if you tell the subjects you are doing so and permit them to prepare for the shock? How can you probe the extent of obedience if subjects are aware of what you are doing and can decide ahead of time how obedient they wish to appear in an experimental situation? If you say you are working for the Anti-Defamation League, may not respondents temper their expression of prejudice? Or can you find out how hospitable people are to strangers if the stranger identifies himself as a social science researcher studying hospitality? [p. 12].

The worry, then, is that significant numbers of socially valuable experiments will have to be foregone in order for the experimenter to respect the rights of his subjects. Although I cannot adequately explore this worry here, I do believe that a proper consideration of the rights model, combined with ingenuity and a willingness on the part of psychological investigators to expend more time and energy, will allow most investigations to be carried out with moral propriety. Many of the above questions, and many of the oft-expressed concerns, seem significant because they assume that full disclosure is necessary in order to have morally binding consent. But this is a higher demand than a concern for human rights requires. People can be told that, because of what the experiment needs to discover,
they cannot be informed of the purposes or details of the experiment. They can be told that they may encounter stress or risk. If in these circumstances they are willing to undergo the experiment, then they do so voluntarily, and the researcher is excused from moral responsibility for the negative utilities which result (assuming of course that the experiment does not violate any rights which cannot be voluntarily given up). This is certainly the case when it is made clear that the individual may discontinue participation in a given experiment at any time he wishes (Ethical Principle 5 of the A.P.A.).

There remains the fact that people will tend to behave in what are generally considered socially desirable ways if they know they are being studied by psychologists. First, some of my psychologist friends say they can discover my prejudices no matter how hard I try to hide them. The problem here is that of developing properly sophisticated questioning techniques. Second, the experimenter may need to be willing to remain with the subjects a long enough time so that they revert to their unguarded behaviour. This may involve a considerable expenditure of time on the psychologist's part. But given the moral proprieties, if the experiment does not justify such an expenditure, then it is probably not all that important in any case. This same procedure can be used in cases wherein insufficient numbers will undertake unspecified risks or wherein valid results cannot be obtained if the subjects have any notion that there may be risks. Psychologists will simply have to go to situations wherein the relevant risks or stresses are an expected part of the subject's life and wait until the subject encounters those risks or stresses. If the knowledge to be gained in these sorts of cases is not sufficient to justify this amount of expenditure of the psychologist's time, then there is clearly no reason to override the potential participant's rights, and these experiments should be abandoned.

One possible, though controversial, procedure remains open for those researchers who simply must conceal their manipulations from their subjects. Robert Nozick has recently suggested that a person's right may be violated in a particular situation, provided that the person is fully compensated for that violation [6]. In the case of a violation of self-determination the compensation would be such that the subject felt compensated for that violation. This
seems to be the sort of consideration the A.P.A.'s Ad Hoc Committee has in mind when they propose that the deception may be justifiable if, among other things, the research participant may be expected, upon later being fully informed, to find the deception reasonable, and if the "investigator takes full responsibility for detecting and removing stressful after effects" [2]. At best this procedure puts the researcher at considerable risk. There is no problem if the subject is satisfied the deception was justified. But if he is not so satisfied, then the researcher may have to undertake very onerous burdens in order to provide adequate compensation. In any case this sort of procedure must be much more thoroughly investigated before its moral propriety can be properly decided.

Given all this, it may well still be the case that there are some psychological experiments which researchers would like to perform and which would significantly advance human knowledge, but which cannot proceed without violation of basic moral rights. In these cases moral requirements simply will not allow the experiments to proceed. This is, for better or worse, one of the costs which must be paid for life in a liberal democratic society.
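Read as a bare decision procedure, the position the essay arrives at (utility balancing constrained by rights as side constraints, with consent able to waive some claims but never a claim against grievous harm) might be sketched as follows. This is a rough illustration resting on assumed names and structure, not a procedure given in the text.

# Rough illustrative sketch of the essay's final position: rights act as side
# constraints on cost/benefit balancing, uncoerced consent can waive some
# rights claims, and grievous harm cannot be consented to. The field names,
# structure and numeric utilities are assumptions, not the author's method.
from dataclasses import dataclass


@dataclass
class Protocol:
    benefits: float           # calculable positive utilities
    costs: float              # calculable disutilities to subjects
    violates_rights: bool     # e.g. deception, covert observation, risk of harm
    consent_obtained: bool    # uncoerced, not misled, free to withdraw at any time
    grievous_harm: bool       # harm that no waiver can cover


def morally_permissible(p: Protocol) -> bool:
    if p.grievous_harm:
        return False          # consent is no defence against knowingly causing grievous harm
    if p.violates_rights and not p.consent_obtained:
        return False          # side constraint: rights bar the experiment absent a valid waiver
    return p.benefits >= p.costs  # residual utilitarian balancing


# A deception study becomes permissible only once subjects have agreed, without
# being misled, to an experiment whose details cannot be disclosed in advance.
print(morally_permissible(Protocol(2.0, 0.5, violates_rights=True,
                                   consent_obtained=True, grievous_harm=False)))  # True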

REFERENCES

1. Abelson R. Persons. Macmillan, London, 1977.
2. American Psychological Association, Ad Hoc Committee on Ethical Standards in Psychological Research. Ethical Principles in the Conduct of Research With Human Participants. American Psychological Association, Washington, D.C., 1973.
3. Bower Robert T. and de Gasparis P. Ethics in Social Research: Protecting the Interests of Human Subjects. Praeger, New York, 1978.
4. Goldman Alan H. Business ethics: profits, utilities and moral rights. Philos. Publ. Aff. 9, 260, 1980.
5. Keehn J. D. In defence of experiments with animals. Bull. Br. Psychol. Soc. 30, 404, 1977.
6. Nozick R. Anarchy, State and Utopia. Basic Books, New York, 1974.
7. Reynolds Paul D. Ethical Dilemmas and Social Science Research. Jossey-Bass, New York, 1979.
8. Sears R. R. In defense of privacy. School Rev. 76, 23, 1968; quoted in Bower and de Gasparis, 1978.
9. Stricker L. J. The true deceiver. Psychol. Bull. 68, 13, 1967.