Pergamon

Stud. Hist. Phil. Sci., Vol. 27, No. 4, pp. 565-586, 1996
Copyright © 1996 Elsevier Science Ltd. Printed in Great Britain. All rights reserved 0039-3681/96 $15.00+0.00

Social Epistemology and the Ethics of Research

David Resnik*

1. Introduction

Many contemporary philosophers have begun to embrace and articulate a movement known as social epistemology.1 While many social epistemologists do not deny the importance of the individualistic perspective, all members of this loose alliance see a genuine need to frame epistemological questions in terms of the social structures of the communities in which individual agents acquire and produce knowledge. It is important to examine these social aspects of knowledge because the vast majority of our beliefs are acquired by reading books and newspapers, watching television, listening to other people, and other forms of communication. Many people know that the moon has no atmosphere. But how do they know this and what justifies this belief? Since very few people have ever been to the moon, most people must rely on the testimony of those who have, or on the authority of those scientists who have studied the moon. Social epistemology addresses these and other problems that emerge when one examines the social dimensions of knowledge and belief. Some of the topics discussed by social epistemologists include the division of cognitive labor, epistemic authority, the structure of communities, and principles of communication.

As some philosophers have begun to address the social aspects of knowledge in more depth, many people from the various sciences and fields of public policy research have begun to emphasize the importance of research ethics.2

*Department of Philosophy, University of Wyoming, Laramie, WY 82071-3392, U.S.A. Received 10 January 1995; in revised form 10 April 1995.

1See, for example, Alvin Goldman, 'Argumentation and Social Epistemology', The Journal of Philosophy 91 (1994), 27-49, and Liaisons (Cambridge, MA: MIT Press, 1992); Philip Kitcher, The Advancement of Science (New York: Oxford University Press, 1993); David Hull, Science as a Process (Chicago: University of Chicago Press, 1988); Helen Longino, Science as Social Knowledge (Princeton: Princeton University Press, 1990); and Steve Fuller, Social Epistemology (Bloomington: Indiana University Press, 1988).

2See, for example, Marcel LaFollette, Stealing into Print (Berkeley: University of California Press, 1992); Daryl Chubin and Edward Hackett, Peerless Science: Peer Review and U.S. Science Policy (Albany: State University of New York Press, 1990); William Broad and Nicholas Wade, Betrayers of the Truth: Fraud and Deceit in the Halls of Science (New York: Simon & Schuster, 1982); Frederick Grinnell, The Scientific Attitude, 2nd ed. (New York: Guilford Press, 1992); and Kristin Shrader-Frechette, The Ethics of Scientific Research (Boston: Rowman & Littlefield, 1994).

Although


scientists and sociologists of science have for many years recognized that research ethics play a role in inquiry, highly publicized cases of fraud, plagiarism, and other types of scientific misconduct have brought concerns about the ethics of research to the foreground. In response to these concerns, many scientific organizations, such as the NSF, the AAAS, the NAS, and the NIH, as well as the U.S. Congress, have sponsored studies of scientific conduct and misconduct, and have urged scientists to make a serious attempt to teach research ethics to science students.3 There is a growing awareness that good scientific practice requires much more than sound methodology and experimental design; it also requires a commitment to various standards of conduct.

It is tempting to view research ethics as nothing more than a type of applied ethics that raises no unique or interesting philosophical questions. One might view research ethics as merely an application of general ethical principles to the context of scientific research. But I believe that research ethics is much more than a type of applied ethics; it also has important connections to social epistemology. Many of the standards of research ethics can be justified, in part, in so far as they promote the attainment of scientific objectives, viz., the advancement of knowledge. Seen in this light, research ethics are closely related to principles of scientific methodology and epistemology. Since research ethics promote epistemic goals, they raise some interesting and unique problems, especially when they create conflicts between moral and political obligations and scientific obligations. Thus, I believe that these two fields, social epistemology and research ethics, share a great deal of common ground, and I shall attempt to explore this territory in this essay.

2. Social Epistemology

Epistemology since Descartes has been construed as an attempt to survey and evaluate an agent's cognitive resources, e.g. sensory perception, memory, reason, etc., to determine when her beliefs shall count as knowledge, to determine the limits of her knowledge, i.e. what she can and cannot know, and to articulate principles for justifying and improving beliefs. The classic problems of epistemology in this tradition include the analysis of knowledge and the problem of skepticism. Most social epistemologists do not deny the importance of this traditional perspective on epistemology, but they do believe that it presents us with an inadequate account of human knowledge. Since human beings are social animals, human knowledge has important social dimensions which need to be addressed in any epistemology.

3See, for example, Panel on Scientific Responsibility and the Conduct of Research, Responsible Science: Ensuring the Integrity of the Research Process (Washington: National Academy Press, 1992); House of Representatives Committee on Science, Space, and Technology, Subcommittee on Investigations and Oversight, 101st Congress, 1st Session, 28 June 1990, No. 73, Maintaining the Integrity of Scientific Research (Washington: U.S. Government Printing Office).

However,


although all social epistemologists share this common insight, they differ greatly on many philosophical and ideological issues.4 I cannot explore these different viewpoints here, but I will discuss a typology of different approaches to social epistemology and the assumptions that will guide my presentation. The typology classifies approaches as either objectivist or non-objectivist and either normative or descriptive. Since this is only a typology, we should expect that most positions actually defended by various writers will not fit neatly into any of these categories and will be considerably more sophisticated. However, the typology is useful for understanding my position and for contrasting it with other possible approaches.

With these caveats in mind, we can reflect on the distinction between objectivist and non-objectivist approaches. The objectivist approach to knowledge has a long history dating back to the ancient Greeks and is exemplified by many current assumptions in epistemology and the philosophy of science. An objectivist approach to epistemology holds that knowledge is in some way independent of our personal or social biases, political ideologies, moral beliefs, theories and concepts. Knowledge must be more than mere belief or opinion, since belief can be highly subjective. Socrates made this very point 2500 years ago when he sought to discredit the Sophists and defended the view that knowledge is justified, true belief. This standard account of knowledge is objectivist, provided that we view notions of 'truth' and 'justification' as independent of personal and social biases, opinions and so forth. According to one form of epistemological objectivism, epistemic realism, beliefs count as knowledge insofar as they are justified and exhibit the proper relation to the world. Many different schools of thought take an objectivist approach to scientific knowledge, including Bayesians, Popperians and Hypothetico-Deductivists.5

Philosophers and scientists have traditionally defended objectivist accounts of knowledge against non-objectivist critics, but the non-objectivist approach has gained more adherents in the wake of post-Kuhnian, post-modernist critiques of science and knowledge. There are a variety of non-objectivist approaches to knowledge and it is impossible to give them fair treatment in this paper. Most non-objectivist approaches define their positions by denying objectivist arguments and assumptions. For instance, non-objectivists argue that scientific theories are radically underdetermined by the data, that evidence is not theory-neutral, that there is no common scientific method, that truth is relative to historical or social conditions, and that social and political forces affect theory-choices in science. According to a popular non-objectivist approach to knowledge, known as social constructivism, knowledge is socially constructed and is an artifact of various social institutions, political and economic interests, and so forth. Other schools of thought committed to a non-objectivist approach to knowledge and science include the Strong Programme in the sociology of science, and the empirical relativist program defended by Harry Collins.6 The net effect of non-objectivist approaches to knowledge and science is an implicit denial of the standard, philosophical distinction between knowledge and opinion. Indeed, many writers in these traditions often do not maintain any steadfast distinction between 'knowledge' and 'belief'.

Most writers who call themselves 'social epistemologists' tend to adopt a non-objectivist account of knowledge and a non-objectivist approach to social epistemology. Indeed, many writers focus on the social aspects of knowledge in order to debunk the objectivist perspective and critique its arguments and assumptions. For many, the phrase 'objective, social epistemology' is therefore an oxymoron. However, in this essay I break with this intellectual orthodoxy because I believe that it is possible to develop an objective approach to social epistemology based on an objectivist account of knowledge. My view may sound somewhat 'old-fashioned' to readers enmired in post-modernist philosophy, but I think it can be reasonably defended.

The second distinction that I shall use in this typology is between normative and descriptive approaches to social epistemology. Philosophers have traditionally adopted normative approaches to epistemology in that they have defended principles for the evaluation of beliefs and epistemic principles and have attempted to tell people how they ought to form their beliefs. The general idea here is that epistemology is like moral philosophy in that its central concepts and principles do not merely describe human conduct; they prescribe it. Although philosophers who follow W. V. Quine's epistemological naturalism have challenged these assumptions in recent years, the normative approach remains well-entrenched. If we apply this approach to the social aspects of knowledge, it implies that social epistemology is also normative, in that this study attempts to defend concepts and principles which prescribe or guide the conduct of inquiring communities.7

4For a discussion of these different approaches, see Goldman's essay 'Foundations of Social Epistemics', in Liaisons, op. cit., note 1, pp. 179-208, and 'Social Epistemology, Interests and Truth', unpublished manuscript.

5For a defense of objectivism in epistemology, see Goldman's Epistemology and Cognition (Cambridge, MA: Harvard University Press, 1986). For a defense of objectivism in the philosophy of science, see Colin Howson and Peter Urbach, Scientific Reasoning: The Bayesian Approach (La Salle, IL: Open Court, 1989).

6For a non-objectivist approach to science, see Bruno Latour and Steve Woolgar, Laboratory Life (Beverly Hills, CA: Sage, 1979); Harry Collins, Changing Order (London: Sage, 1985); Fuller, op. cit., note 1, and Philosophy, Rhetoric and the End of Knowledge (Madison, WI: University of Wisconsin Press, 1993). For a critique of non-objectivist approaches to epistemology and the philosophy of science, see Larry Laudan, Science and Relativism (Chicago: University of Chicago Press, 1990); Ron Giere, Explaining Science (Chicago: University of Chicago Press, 1988); and Kitcher, op. cit., note 1.

7For a defense of normativism in epistemology, see Goldman, Epistemology and Cognition, op. cit., note 5. For a defense of normative philosophy of science, see Larry Laudan, 'Normative Naturalism', Philosophy of Science 57 (1990), 44-59, and Philip Kitcher, 'The Naturalists Return', The Philosophical Review 101 (1992), 53-114. My approach to epistemology and the philosophy of science draws heavily on the work of Goldman, Kitcher and Laudan.

A social epistemologist, on this


view, evaluates the social practices involved in knowledge production, communication and inquiry. However, I should note that many, perhaps even most, social epistemologists adopt a descriptive rather than a normative perspective on knowledge and social epistemology. According to this approach, epistemology describes and explains how human beings form beliefs about the world, and social epistemology describes and explains the social aspects of knowledge production. Thus, the descriptivist approach to social epistemology is an empirical investigation of the social structures of inquiring communities, as well as the economic, political, technological and cultural forces that influence knowledge production. Social epistemology, on this view, is a branch of the sociology of knowledge.8

Given these classifications, there are four distinct approaches to social epistemology: normative objectivism, normative non-objectivism, descriptive objectivism and descriptive non-objectivism. Following Goldman, I will adopt a normative, objectivist approach to social epistemology.9 Readers may already see that this view is somewhat of a minority position in social epistemology, since most social epistemologists approach this topic from a non-objective and/or non-normative point of view. In order to understand this approach and why I believe we should accept it, it is important to compare it to other approaches to social epistemology.

8For a defense of a non-normative approach, see David Bloor, Knowledge and Social Imagery (London: Routledge, 1976).

9Goldman, Liaisons, op. cit., note 1.

I believe we should accept an objectivist approach to social epistemology because a non-objectivist approach would collapse the distinction between social epistemology and moral/social/political philosophy. It would allow that a belief's epistemic status might in some way depend on our moral beliefs, our social institutions, or our political ideologies. Thus, for example, Mendelian genetics would not have counted as scientific knowledge in the Soviet Union from 1930 to 1960, because it did not accord with Marxist ideology, although Mendelian genetics would count as knowledge in the U.S. during the same era. I want to maintain that whether Mendelian genetics should count as knowledge does not depend on its relation to some political ideology. I do not deny that moral and political factors and personal biases often play a role in determining which beliefs are accepted or rejected, but I think it is reasonable to maintain that knowledge ought to be objective. I would allow that a belief or theory could be rejected (or accepted) for moral/social/political reasons, but this would have to do with its moral/social/political implications, not with its epistemic justification or status per se. Those who abandon the objectivist approach to knowledge usually attempt to show how knowledge is simply the product of specific social institutions or economic or political interests, or they deny the distinction between epistemology and moral/social/political


philosophy. This view quickly leads to epistemological relativism, subjectivism, 'post-modernism', and various other 'isms'. I would like to avoid these 'isms', though I cannot explore or critique them in depth here.10

I believe that we should seek a normative approach to social epistemology because we should not abandon epistemology's traditional, normative standpoint. Ever since Plato, philosophers have interpreted key epistemic concepts, such as 'justification', 'evidence' and 'warrant', as providing us with epistemic standards or norms. Epistemology is concerned with how we ought to form our beliefs, how we ought to gather evidence for beliefs, and so forth. If we do not adopt a normative approach to social epistemology, then we collapse the distinction between social epistemology and other social sciences, such as the sociology of knowledge, social psychology, cultural anthropology and so forth. By doing so, we would not only abandon epistemology's traditional normative standpoint, but we would also relinquish its status as an autonomous discipline. Social epistemology would simply be a chapter of social science. Although I cannot fully defend a normative stance here, I would like to maintain that social epistemology is normative and that it has its own subject matter and methods distinct from other social sciences.11

Now that we can see what my basic assumptions are and why I hold them, I shall briefly state my approach to social epistemology. On my approach, which bears a strong resemblance to Goldman's and Kitcher's approaches, social epistemology attempts to justify and articulate rules (or principles) that should govern knowledge-seeking communities. These rules pertain to the social practices, institutions and social structures involved in the attainment of the community's cognitive aims. The rules can be viewed as hypothetical imperatives which are justified insofar as they promote the cognitive aims of the community. As hypothetical imperatives, the principles have an empirical rather than an a priori basis because they assert connections between means and ends. Thus, although the rules of social epistemology are intended to prescribe conduct, they also describe it insofar as such conduct allows scientists to effectively pursue their aims. Thus, empirical studies of the history, sociology and psychology of science can justify particular policies and rules of social epistemology, in that these studies provide evidence for how scientists can best go about pursuing their cognitive aims. Although I cannot defend this point in depth here, I accept a pluralistic account of science's cognitive aims.12

10For a critique of epistemological/scientific relativism, see Kitcher, The Advancement of Science, op. cit., note 1; Laudan, Science and Relativism, op. cit., note 6.

11For further defense of epistemology's normative stance, see Jaegwon Kim, 'What is "Naturalized Epistemology"?', Philosophical Perspectives, vol. 2, ed. James Tomberlin (Atascadero, CA: Ridgeview, 1988), and Kitcher, op. cit., note 7.

12For more on this issue, see Stephen Stich, The Fragmentation of Reason (Cambridge, MA: MIT Press, 1990).

Thus, knowledge-seeking communities


seek to acquire true beliefs and avoid false ones, and they also seek to understand and explain the world and make successful predictions. In addition to these ultimate epistemic aims, knowledge-seeking communities have proximate epistemic aims or goals that promote ultimate aims and can be used to evaluate epistemic practices. Some of these proximate epistemic aims include reliability, fecundity, speed, and efficiency, as well as such criteria of theory choice as explanatory power, simplicity, conservatism and the like.13

Although social epistemology has some clear implications for science, it also applies to other knowledge- (or information-) directed social institutions, such as the law, education, advertising and the media. The rules studied by social epistemology include scientific methods, rules for controlling communication and the flow of information, principles for allocating social resources and for dividing cognitive labor, standards for recognizing epistemic authority, and rules of argumentation and rhetoric.

Many of our social institutions have non-epistemic aims as well as epistemic ones. For instance, the U.S. legal system aims to produce true verdicts of guilt or innocence, but it also aims to protect the rights of the accused and promote justice. Rules of evidence reflect this tension between the legal system's epistemic goals and its non-epistemic ones. An intuitive principle of epistemology and philosophy of science holds that one should gather all available evidence in fixing beliefs, but rules of evidence go against this principle in that they put restrictions on evidence gathering.14 If the acquisition of true beliefs about guilt and innocence were the sole aim of the legal system, then there should be no restrictions on evidence gathering. But there are some restrictions, imposed in order to protect the rights of the accused and to promote justice. The U.S. has laws against certain types of surveillance and searches, and grants defendants the legal right not to incriminate themselves through their testimony.15

13For more on proximate aims, see William Lycan, Judgement and Justification (Cambridge: Cambridge University Press, 1988), and Goldman, Epistemology and Cognition, op. cit., note 5.

14Goldman, Liaisons, op. cit., note 1.

15See Stich, The Fragmentation of Reason, op. cit., note 12.

Viewed as a social institution, science also has non-epistemic (or practical) aims as well as epistemic ones. Some of science's practical aims include control of the environment, power, technological innovation and development, education, and the promotion of human flourishing. Although I will not defend this view here, I accept a pluralistic approach to science's practical aims. However, since epistemic and practical aims often support different standards of conduct, we need to decide how to resolve conflicts between different scientific standards or prioritize scientific objectives. For example, in deciding whether to place restrictions on certain types of research, such as research on cloning human beings, we may find that epistemic goals, e.g. the advancement of

knowledge, conflict with non-epistemic ones, e.g. human happiness. Social epistemology, as I understand it, concerns itself only with the evaluation of social institutions and practices in so far as they promote epistemic aims, and it does not address the issue of how we should balance epistemic and non-epistemic goals in the evaluation of social institutions and practices. This question is a topic for ethics and social and political philosophy, but not for social epistemology.16

3. Research Ethics

I will now discuss the connection between social epistemology and research ethics. Research ethics, I suggest, are those standards used to prescribe and evaluate scientific conduct in general. Conduct includes all actions performed during the research process, including communication as well as other social interchanges. Conduct evaluated by these standards ranges from the actions of particular scientists to the policies and practices of scientific institutions. The scope of research ethics is quite broad, encompassing such topics as the gathering, storage and interpretation of data, scientific communication, publication and peer review, the allocation and use of scientific resources, scientific education, hiring and promotion practices, the social structure of laboratories, the selection of research problems, scientific collaboration, the use of human and animal subjects, restrictions on research, the use of controlled substances, science funding, and science's relation to business, the military, and the greater community.

For the purposes of this essay I will take it as a fact that scientists are guided by certain standards of conduct. This is a perfectly reasonable conclusion to reach if we view science as a social institution and we acknowledge that no social institution can function unless its members share some common values, norms and commitments.17 Science without norms is not science; it is utter chaos. But the important issue, as far as I am concerned, is not whether there are standards of scientific conduct, but which standards ought to guide conduct and why. In order to answer these distinctly normative questions we need to explore the foundations of research ethics, or their ultimate justification.

16For an interesting perspective on this issue, see R. M. Hare, Moral Thinking (Oxford: Oxford University Press, 1981).

17See Thomas Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962); Robert Merton, The Sociology of Science (New York: Free Press, 1973).

There are two different approaches to this topic and I think both of them are correct, in part. According to one approach, research ethics can be justified insofar as certain standards of conduct promote science's cognitive aims. Thus, it is wrong to falsify data, in part, because data falsification undermines the search for scientific knowledge. Standards can promote scientific aims either by directly


promoting epistemic outcomes or by promoting cooperation among scientists and the public support of science. Cooperation and public support are important in science because they play a key role in enabling any social practice or institution to achieve its aims. This type of justification for research ethics is precisely the type of justification we can discuss from the point of view of social epistemology, and it constitutes an internal foundation for research ethics.

However, standards of conduct can also be justified insofar as they are based on broader ethical, social or political standards. Such standards are general norms that apply to all people in all social or institutional roles. Thus, one might argue that it is wrong to falsify data because data falsification is a form of lying, and it is (morally, ethically) wrong to lie. This type of justification constitutes an external foundation for research ethics in that it does not appeal to the aims of science as a social institution, but to standards of conduct in the society (or societies) in which science is practised. From this point of view, research ethics is simply a type of applied ethics, viz. ethical principles applied to the context of scientific research. Some of these basic ethical principles might include:

• Nonmaleficence: Do not harm other people.
• Beneficence: Help other people.
• Utility: Promote a greater balance of good/bad consequences for society.
• Fairness: Treat people fairly.
• Autonomy: Do not interfere with the choices of competent individuals.
• Respect: Respect other people.

Many philosophers who do what goes by the name 'applied ethics' have argued that various general principles (such as the ones mentioned here) can guide ethical decision-making by implying more specific rules and by justifying moral choices in particular situations.18 For instance, in the context of scientific research, the principles of nonmaleficence, beneficence and utility might imply that scientists have a moral duty to do scientific research that benefits people and avoid research that harms people. The principles of fairness and nonmaleficence might imply that scientists should not plagiarize each other's work, that they should give credit where it is due, and so on. Moreover, many applied ethicists prefer to employ general principles or rules rather than moral theories, since these principles can be supported from a number of very different theoretical perspectives. For instance, Kantians, utilitarians and contractarians can all agree that we have a duty not to harm other people, even though they defend radically different ethical theories.

18Tom Beauchamp and James Childress, Principles of Biomedical Ethics (New York: Oxford University Press, 1979).

Thus, on this external approach to research ethics, all a scientist needs to do

in deciding how to act morally in research is to consult various moral principles and apply them to her situation.

I think both of these approaches to the foundations of research ethics provide us with some important insights, that neither of them captures the 'whole truth' about research ethics, and that they complement each other. Thus, we can locate research ethics in a domain where social epistemology intersects with ethics and social and political philosophy. In articulating and justifying standards of research ethics, we can (and should) appeal to both internal and external sources of justification. Thus, it makes sense to say that data falsification is wrong because it undermines the search for knowledge and because it is a form of lying, which is morally wrong. I accept this view because it provides the best overall account of the standards of conduct that should (and perhaps do) govern science.

If we held that research ethics had only an internal foundation, i.e. if it were no more than social epistemology, then it would follow that scientists should attempt to do whatever it takes to promote epistemic aims so long as their conduct does not undermine scientific cooperation or the public support for science. Thus, to take an extreme example, an internalist perspective on scientific ethics might imply that perhaps the only reason why scientists should treat human subjects with respect and dignity is that showing a lack of dignity/respect will undermine the public support for science. If scientists can make greater advancements in knowledge by violating the autonomy and dignity of human subjects, and they can get away with it, then they should do so, according to the internal justification. But I think most of us (including most scientists) would not accept this implication, since scientists, like all people, must also answer to broader ethical and social norms. The standards of conduct that we find in science, and those that should govern science, go beyond social epistemology and should be based, in part, on these external standards.

However, we could also not provide an adequate account of research ethics if we held that these standards of conduct had only an external foundation. If research ethics had only an external foundation, then it would follow that all of the standards of conduct we find in science should reflect moral/social/political norms, and that standards of conduct that do not accord with these norms should be rejected or revised. But the standards of conduct we find in science do not and should not completely accord with moral/social/political norms. For example, scientists, unlike 'ordinary' people, are held to the highest standards of honesty. Many people would hold that there is nothing terribly wrong with a 'white lie' if the lie is told to prevent harm to someone else or to promote their happiness. But there are no 'white lies' in science, since any attempt to distort or hide the truth can do great harm to the search for knowledge. Thus, a scientist's obligation to be honest is much stronger than the ordinary person's obligation.


Let’s consider another example. Although one might argue that we have a general obligation to share information, we also recognize that we have a moral right to privacy or confidentiality, and that there is a great deal of information we are not required to share. Indeed, in order to protect another person’s privacy we may be required to not share a great deal of information. But now consider secrecy in science. Secrecy is sometimes permissable in science, of course. For instance, scientists are justified in keeping information secret in order to protect ongoing research or to protect the identities of referees in the peer review process.19 But scientists still have a very strong obligation to share information, an obligation that is much stronger than any general, ethical obligation to be open. There are therefore standards of secrecy, privacy, and openness in science that differ in important respects from our ordinary standards. Thus, although research ethics can be supported from an ethical/political viewpoint, they also reflect goals and concerns that are unique to science, viz. the pursuit of knowledge. These goals and concerns imply that a scientist’s institutional obligations may be very different from an ordinary person’s obligations. 4. A Code of Conduct for Science Given this perspective on research ethics, I shall briefly articulate and defend a code of conduct for science.20 The code is as follows: 1. Scientists should not fabricate, falsify, or misrepresent data or results. 2. Scientists should avoid errors in the reporting of data and results, the statistical interpretation of data, the calibration of instruments, and other aspects of the research process. 3. Scientists should share information, data, results, ideas, theories and technologies. 4. Scientists should be free to pursue all areas of inquiry. 5. Scientists should give credit where credit is due and not where it is not due. 6. 
Scientists should participate in the education, recruitment and training of future scientists. 7. Scientists should obey the law. 8. Scientists should respect human and animal research subjects. 9. Scientists should attempt to determine the social consequences of their research, inform the public about these consequences, and consider not conducting research if they deem the research to have dire social consequences. 10. Scientific research should be evaluated by members of the scientific community before it is made public. “See Grinnell The Scient$c Altitude, op. cit., note 2, and Sissela Bok, Secrets: On the Ethics of Concealment ani Revelation (New York: Vintage Books, 1984). “1 do not claim that my proposed code is entirely original, since various scientific disciplines have adopted their own codes. Merton (op. cit., note 17) also defends ethical codes for scientists.

576

Studies in History and Philosophy of Science

11. Scientists should not discriminate against their colleagues on the basis of sex, race, national origin, or other characteristics not directly related to scientific competence or merit.
12. Scientists should treat their colleagues with respect.

In justifying this code, I shall appeal to the two foundations (internal and external) for research ethics mentioned in the previous section. Readers should note that the internal arguments I present require empirical backing, in that they concern connections between means and ends in scientific practice. As such, they may be treated as somewhat speculative and uncertain in that they could be strengthened or weakened by further studies of the history, sociology and psychology of science. However, I shall assume that the external (moral, political) arguments I present do not necessarily suffer from this problem, if we admit that moral and political standards do not require an empirical foundation. Of course, this suggestion takes us into controversial areas of meta-ethical debate, which I do not intend to explore here.

In discussing (1) it is important to say what is meant by the terms 'fabrication', 'falsification' and 'misrepresentation'. Fabrication is the act of making up data that have not been recorded or observed; falsification is the act of changing data; and misrepresentation is the act of intentionally presenting data in a misleading or deceptive way through trimming, fudging or the clever use of statistics. The common element in all of these different types of scientific misconduct is intentional deception: fabrication, falsification and misrepresentation are all deliberate attempts to get other people to believe things that the person knows not to be true. Honesty in science can be justified from a moral point of view in that all people have an obligation to be honest, but as we have seen, this justification does not seem sufficient.
Hence, we need to provide additional justification for the importance of honesty in science: the advancement of knowledge will be severely hampered if scientists intentionally deceive each other. Dishonesty produces false beliefs instead of true ones, it slows down the process of finding true beliefs, and it makes inefficient use of scientific resources by forcing scientists to put effort into correcting false beliefs. Although (1) seems like a fairly simple principle, it is not always easy to interpret and apply it. In particular, it is often difficult to distinguish between misrepresentation of data and legitimate (though perhaps unorthodox) scientific practice. Since scientists often disagree about techniques for analyzing and interpreting data, the line between deception and methodological or theoretical disagreement is often quite fuzzy.21 In general, the best way to distinguish between dishonesty and disagreement is to focus on the researcher's motives: if the researcher intends to deceive her audience, then she is being dishonest;

21Ullica Segerstrale, 'The Murky Borderland Between Scientific Intuition and Fraud', International Journal of Applied Ethics 5 (1990), 11-20.


if not, then she may simply be using unorthodox or perhaps mistaken methods.22 Although dishonesty is intentional deception, scientists can also unintentionally deceive each other. Errors in the reporting and interpretation of data, in the drawing of inferences, or in the calibration of instruments can also lead to deception. From a moral point of view, error is by no means as serious an offense as dishonesty, since we do not expect ordinary people to never make mistakes. However, from a scientific point of view error is a very serious offense, since it can yield the same consequences as dishonesty: an honest mistake can lead to deception as easily as an intentional fraud. While it is impossible to avoid errors in science (scientists are human beings, after all), scientists have an obligation to minimize errors and to correct them. One way to minimize errors is to not be too hasty in doing research and presenting results. The old adage, "haste makes waste", is appropriate here. Although errors result from different motives than dishonesty, they also make research unreliable and inefficient. However, we should note that scientists may offer some reasons for being hasty based largely on the current research environment. There are several features of this environment that induce haste: (a) priority is of utmost importance in science and many scientists rush to publish in order to be the first to discover or propose something; (b) funding pressures often compel scientists to produce results as soon as possible; and (c) tenure and promotion pressures can also compel scientists to produce results as soon as possible. In an effort to obtain results in order to have grants renewed or obtain tenure, scientists may take short-cuts, rush experiments, and so forth. Do any of these conditions of the research environment justify haste? One could argue that they do not. A white person in South Africa might appeal to environmental conditions to explain their racial bias, but these conditions would not justify such bias, since one could argue that the environmental conditions in South Africa are themselves unjust and need to be changed. Thus, conditions can provide justification for conduct only if they themselves have some justification. Bringing this discussion back to the research environment, we might ask whether priority concerns, funding pressures, or tenure and promotion practices have some justification. If they do, then a certain amount of haste may be justified.

22Most people would agree that motives are important in evaluating conduct from a moral point of view but wonder why motives should matter from a scientific point of view, since fraud and error both produce similar results. But motives matter in that a person who intentionally deceives her peers is more likely to produce counterproductive results than one who simply makes a mistake. A person with dishonest motives may try to get away with as much deception as she can, hide her dishonesty, and so forth, while a person who makes a mistake is more likely to try to correct her errors, prevent further deception, and so forth.


Although I cannot examine these different conditions in depth here, I will suggest that there may be some scientific reasons for placing an emphasis on priority in science. That is, priority can be justified in that it promotes science's goals. Priority promotes these aims, one might argue, by making scientists work faster and harder and compelling them to do work that is innovative and/or original. (There may also be some non-scientific reasons that justify an emphasis on priority, but I will not explore them here.) In short, the emphasis on priority can be justified in science because increased effort, speed and originality yield increases in knowledge. If this is the case, then there are other important scientific values that counterbalance the value of carefulness. We must therefore recognize that carefulness is not an absolute standard of research ethics and that scientists must sometimes sacrifice carefulness in order to achieve other aims.23

(3) and (4) both pertain to the free exchange of information among scientists and the free pursuit of scientific ideas; (3) states a principle of openness and (4) states a principle of intellectual freedom. Freedom can be justified from a moral point of view in that freedom of expression is an important part of personal autonomy. Openness can be justified from a political point of view in that it promotes fair, impartial, democratic debates. However, (3) and (4) can also be justified from a scientific point of view. Openness is important for two reasons: (a) scientists can more quickly and efficiently pursue scientific objectives when they share information than when they do not; and (b) scientific criticism and the evaluation of scientific knowledge-claims cannot operate very quickly, reliably or efficiently when information is not shared. When scientific ideas are not subjected to criticism, the chances of deception and false belief increase. Thus, openness is good, from the point of view of social epistemology, in that it is faster, more efficient and more reliable than secrecy. Intellectual freedom is important in science for similar reasons: (a) the quest for knowledge operates more efficiently and quickly when scientists are free to pursue all areas of inquiry; and (b) scientific criticism and debate work better when scientists are free to criticize old ideas and propose new ones. Thus, intellectual freedom is also more efficient, reliable and fast than intellectual control. As Feyerabend and many other writers have pointed out, freedom is important in preventing dogmatism.24

23See Merton, op. cit., note 17. In his Sociological Ambivalence (New York: Free Press, 1976) Merton discusses how conflicts of institutional and social norms arise in science.

24The history of Soviet science illustrates the importance of freedom and openness in science. Freedom and openness were severely limited from the 1920s to the 1960s and Soviet science suffered as a result. Soviet geneticists were not allowed to do or publish any research that contradicted the views of Lysenko and they were not permitted to learn about Mendelian genetics. Lysenko's views were accepted for political reasons in that they reinforced Marxist theories concerning the malleability of human nature. Mendel's views were banned because they undermined this political ideology. See D. Joravsky, The Lysenko Affair (Cambridge, MA: Harvard University Press, 1970).


(5) instructs scientists not to plagiarize each other's work or steal each other's ideas. It also implies that scientists should make sure to give proper citations and not to list people as authors of scientific papers for trivial reasons.25 The justification for a principle of credit is fairly obvious: since scientists want to receive scientific rewards, e.g. grants, status, recognition, money, and these rewards are based on getting credit for accomplishments, and since scientists will not cooperate with each other without rewards, science needs a principle of credit allocation that reflects scientific accomplishments. There are both scientific and non-scientific justifications for a principle of credit. From a scientific viewpoint, the scientific community needs some mechanisms to promote scientific cooperation (and collaboration), since scientists are basically self-interested individuals.26 From a moral viewpoint, scientists should be given proper credit in order to promote fairness, gratitude and other important moral concerns and to protect property and individual rights. Although the principle of credit is clearly an important rule in science's code of conduct, I suspect that its application depends, in large part, on certain cultural conditions: in a society where people are less self-interested or where there is no conception of property, a principle of credit might not play an important role in science. In such a society, science's reward system might operate quite differently from the reward system we find in Western science.

I will offer only a brief explication and justification of principles (6)-(12), since a more in-depth discussion would take us too far afield.27 (6) is justified on the grounds that the pursuit of science will end unless the scientific community educates, recruits and trains the next generation of scientists. All social institutions must participate in education in order to survive, and science is no exception. It can be justified from a non-scientific viewpoint in that the societies in which science is practised value education in general and science education in particular. The application of (6) also depends on certain cultural conditions: (6) might not be a very important principle of research ethics in a society where scientists play no role in education and educational institutions.

(7)-(11) are justified, from a scientific viewpoint, because they promote the public support of science, and science cannot flourish without public support in most societies. The public support of science can be undermined when scientists disobey the law during research, abuse human or animal subjects, do not address the consequences of their research, discriminate against other scientists, or report results to the public before these results have been shown to be scientifically valid.28 (7)-(11) can be justified from a moral or political viewpoint in that they promote moral/political values, such as justice, respect for human beings and animals, happiness, avoidance of harm, and so forth. Finally, (12) can be justified from a scientific viewpoint in that it is important in promoting cooperation and collaboration among scientists: scientists who show no respect for each other are less likely to cooperate than those who do. It can be justified from a moral point of view in that all people have a moral duty to treat each other with respect. The application of (12) also depends on various social conditions in that different societies may have different notions of mutual respect.

24(cont.) For more on scientific freedom, see Paul Feyerabend, Against Method (London: Verso, 1975), and Jacob Bronowski, Science and Human Values (New York: Harper & Row, 1956). Longino and Kitcher both stress the importance of freedom, op. cit., note 1. For a recent treatment of this issue, see M. Ross Quillian, 'A Content-Independent Explanation of Science's Effectiveness', Philosophy of Science 61 (1994), 429-448.

25LaFollette, Stealing into Print, op. cit., note 2.

26See Hull, Science as a Process, op. cit., note 1.

27I discuss these principles in more depth in a book I am writing, tentatively titled Research Ethics.

5. Objections and Replies

In the remainder of this essay I will answer some questions about this portrayal of research ethics in order to help clarify and justify my views.

1. Aren't these principles very general, vague and uninformative? How can they possibly be useful in guiding scientific conduct?

I admit that the principles are general, vague and uninformative. But this is all that one can expect from a general code of conduct. No general code can be specific enough to tell people what to do in every situation. But a general code can still be useful in guiding conduct if it can be applied to a variety of situations, problems and dilemmas. In applying this code, it is important to take the practices, traditions and concerns unique to each situation into account. Thus, for example, an important question that (5) does not answer is: "How should credit be allocated in a particular context, e.g. writing a physics paper?" In answering this question, one needs to understand the important details of physics, e.g. its research traditions, its communication practices and so forth. Given this context, it might turn out, for example, that the best way to allocate credit is to list all authors who have made a significant contribution to a paper in descending order. Thus, the author who has made the most important contribution would be listed first, and so on. Philosophers may be able to offer some general guidance in answering these specific questions, of course, but we still need to draw on significant details from the context of research.

28One other social condition assumed here is that the public is generally not qualified to assess scientific knowledge claims; these claims need to be evaluated by experts. When scientists report results to the public before they have been evaluated by other scientists, they risk embarrassing the scientific community and misleading the public if it turns out the results are not valid. The debate over cold fusion provides a salient example of the dangers of reporting results to the public too soon. See J. Huizenga, Cold Fusion: The Scientific Fiasco of the Century (Rochester: University of Rochester Press, 1992).


2. Don't these principles sometimes conflict, and what should scientists do when they do?

Yes, they do sometimes conflict. For instance, (4) conflicts with (9) when a scientist has to decide whether she should conduct research that could have some dire social consequences, e.g. research on cloning human beings. Since conflicts among the various principles can arise, all the principles will have at least a few exceptions. Thus, they should be regarded as prima facie principles of conduct: they should be followed, other things being equal, but not under all circumstances.29 How should scientists resolve such conflicts? Conflicts can be settled by appealing to two sources of justification, one internal, the other external. From an internal point of view, scientists should settle conflicts by performing the action that they believe best promotes the aims of science. From an external point of view, scientists should settle conflicts by performing the action that they believe has the most support from a moral, political or social point of view. I admit that resolving conflicts of rules is a messy business and that I can offer no overall procedure for adjudicating conflicts. The problem becomes especially acute when conflicts within science's code of conduct expose tensions among science's goals, e.g. technological control vs the search for knowledge, and between science's goals and broader ethical, political and social values, e.g. the advancement of knowledge vs human rights. This brings us to the next concern.

4. Don't scientific obligations sometimes conflict with other institutional obligations or with moral, political or legal obligations? What should scientists do in the face of these types of conflicts?

Yes, scientific obligations can conflict with many other obligations we have in society. For example, consider scientists who worked on the Manhattan Project. From a scientific point of view, they had an obligation to share information about the atomic bomb, but their research was kept confidential for the sake of national security. For a recent example from business, consider the researcher working for a tobacco company who wrote a paper on nicotine's addictive properties. The paper was accepted for publication in a journal, but the company forced him to withdraw it.30 The topic of resolving conflicts between scientific obligations and other obligations takes us well beyond the scope of social epistemology into the realm of moral, social and political philosophy. The issue is a very large one and it raises many important questions that I cannot fully address here. I am, however, sympathetic to the view that ethical obligations ordinarily override all other obligations one might have. By "ordinarily", I mean under most circumstances.

29This view is analogous to W. D. Ross's view of moral rules: see The Right and the Good (Oxford: Clarendon Press, 1930). The issue of conflicts is discussed by Merton in Sociological Ambivalence, op. cit., note 23.

30Philip Hilts, 'Tobacco Firm Withheld Results of 1983 Research', The New York Times, 1 April 1994, A1.


I think there are some situations in which one might be justified in setting aside ethical obligations in order to fulfill other obligations. But since ethical obligations ordinarily override all other obligations, one must have a compelling argument for setting aside ethical obligations. That is, when conflicts arise, the burden of proof rests with the argument for setting aside ethical obligations. The kind of argument one might make in favor of setting aside an ethical obligation might go as follows:

(1) Practice P is a morally worthwhile practice; it produces socially valued outcomes, e.g. happiness, knowledge, technological control, etc.
(2) Practice P cannot produce these socially valued outcomes if people set aside its obligations when they conflict with moral obligations in circumstances C.
(3) Therefore, people should not set aside P's obligations in circumstances C.

To see how this line of reasoning might work, let's consider the sport of football. Suppose that this sport is held to be socially worthwhile by many people. The sport requires its players to act in such a way that they end up harming other people. It is morally wrong to harm other people, even when they agree to be harmed. But football could not continue to function as a sport if its players continually tried to avoid harming each other; harm is a part of the game itself. Of course, this argument does not justify malicious and premeditated harm: football is not completely immune to moral criticism and football players still have to obey moral rules. Thus, football players have a limited form of autonomy in their sport in that they can set aside moral obligations in playing their sport, to a certain degree.31

Now how might this point apply to science? Let's look at restrictions on research. One might argue that certain forms of research should be restricted because they could produce significant harms, injustices, and other morally bad outcomes. For instance, it is often claimed that research on human germ-line gene therapy could produce many harms by homogenizing the human gene pool, creating severe genetic defects, enabling governments to create a genetic caste system, and promoting discrimination and bias.32 If we make the analogy between football and science, we might reason as follows: science is a morally worthwhile human enterprise in that it produces many socially valued (or good) consequences. But it also produces bad consequences, and in order to prevent the bad consequences from occurring, we would also have to stop the good ones. Thus, since the production of bad consequences is part of the "game" of science, scientists are justified in setting aside their ethical obligations to refrain from doing harm in order to go about the business of research. Science could not function if scientists had to constantly worry about producing bad consequences. But of course, this argument does not justify malicious and premeditated harm: science is not completely immune to moral criticism and scientists still have to obey ethical rules. Thus, scientists have a limited form of autonomy in their profession in that they can set aside moral obligations in doing research, at least to a certain degree. This limited autonomy does not grant scientists immunity from moral criticism nor does it free them from taking responsibility for the consequences of their research, but it should allow them to pursue research without constantly trying to please various people and organizations who might take an interest in regulating or criticizing science.

Perhaps an example will help to clarify the preceding argument. The study of human genetics, as many writers have noted, may have profound implications for mankind and geneticists must certainly be mindful of the potential consequences of their research.33 In the last decade, numerous books, conferences, governmental committees and television shows have examined the moral, social and political dimensions of research on human genetics. Although I happen to agree that we need to carefully monitor and perhaps control this research, there are many areas of research on human genetics that are not likely to have any clearly identifiable social consequences. For instance, research on the transcription and translation of DNA should probably be allowed to take place without any close outside supervision even if we carefully monitor or control research on human gene therapy. If public hearings were held on all research pertaining to human genetics, it is likely that such research would grind to a halt.

31The view developed here is similar to the view Alan Gewirth develops in 'Professional Ethics: The Separatist Thesis', Ethics 96 (1986), 282-300.

32See W. Anderson, 'Human Gene Therapy: Why Draw a Line?', Journal of Medicine and Philosophy 14 (1989), 81-93.
If we agree that this research is, on the whole, socially valuable, then we should grant geneticists a certain degree of autonomy and only monitor or control this research when we have reasons to be apprehensive about its possible social consequences.

5. You have claimed that science has both epistemic and non-epistemic (or practical) aims. But couldn't it be the case that science's epistemic aims are only justified in so far as they promote its non-epistemic aims? For example, one might argue that the advancement of knowledge is justified only because knowledge enhances power, happiness, and so on.

This is not an arcane problem, since its resolution could have a profound effect on the shape of research ethics. If science aims to make us happy, then why not restrict research that could make us unhappy? If science aims to promote some political ideology, then why not restrict research that could undermine that ideology? My answer to this question is that science's epistemic aims can be justified in their own right. Although I do not deny that we can value epistemic goods as a means to practical goals, I believe that we also value epistemic goods for their own sake: many people would want to know why the dinosaurs became extinct, or when the universe began, even if this knowledge could not help us achieve any

33See David Suzuki and Peter Knudtson, Genethics (Cambridge, MA: Harvard University Press).


practical goals. We often seek explanation, truth, understanding, knowledge and other epistemic goods for their own sake and not for anything else. Indeed, highly theoretical sciences like astrophysics reflect our commitment to knowledge for knowledge's sake. Although it is possible that astrophysics may some day help us achieve our practical goals, we do not study astrophysics in order to bring about this remote possibility. However, I think it is worth keeping this objection in mind because it helps us see that science has practical aims as well as epistemic ones. Although many philosophers would claim that science's aims are only epistemic,34 this position does not take into account the close relationships between science and technology and between pure science and applied science. One might be able to claim that science's aims are only epistemic by drawing a sharp distinction between science and technology on the one hand and pure science and applied science on the other: on this view, the aims of technology and applied science are practical and epistemic, while the aims of pure science are only epistemic. But there are no sharp distinctions between science and technology or between pure and applied science. Today, science is often driven by technological advancements and these advancements often precede genuine scientific understanding. On the other hand, many types of technology draw heavily on scientific theories and methods. The line between science and technology may have been sharp in ancient Greece, but as we approach the 21st century, science and technology have become intertwined.35 The same point holds for the distinction between pure and applied science. Many of the so-called "pure" sciences, such as quantum physics, have important practical implications and some of them even make use of insights drawn from applied sciences. "Applied" sciences, such as aerodynamics, make extensive use of concepts and theories drawn from the pure sciences. Of course, one might try to argue that science's aims are only epistemic by distinguishing between science and engineering, but similar types of arguments dissolve this distinction as well. If these distinctions do not stand up, the most reasonable view holds that science's aims are both epistemic and practical.36

6. You have claimed that science has various epistemic aims, but you are wrong. Science has only one fundamental, epistemic aim: to provide a complete description of the observable world. All other aims, such as explanation, understanding, simplicity and so forth, are mere means to this goal. This point could have an important impact on science's standards of conduct.

This objection asks me to say a bit more about science's epistemic aims, a topic that I have tried to avoid thus far. For some time now there has been an

34See Karl Popper, The Logic of Scientific Discovery (London: Routledge, 1959).

35Ian Hacking, Representing and Intervening (Cambridge: Cambridge University Press, 1983); Rudi Volti, Society and Technological Change, 3rd ed. (New York: St Martin's Press, 1995).

36Of course, some sciences, such as astrophysics, are more epistemic in their orientation, while others, such as medicine, are more practical in their orientation, but this observation does not affect my general point.


ongoing, hotly contested debate about science’s epistemic aims. Although there are many different perspectives on this debate, for our purposes we can view it as a contest between realist and empiricist construals of those aims. Realists hold that science aims to give us a true description of the world that extends beyond what is directly observable; empiricists hold that science aims to give as an empirically adequate description of the world, i.e. a description that does not extend beyond what is directly observable .37 Although I recognize the philosophical importance of this debate, I think its resolution will not have a great deal of impact on the ethics of research. Why? First, let’s consider the internal point of view on standards of conduct, i.e. social epistemology’s perspective. If we accept the commonsense idea that truth is a necessary condition for knowledge, then it should not make much difference whether the truths we seek are about observable entities (i.e. objects, properties, processes or events) or non-observable ones. In either case, true beliefs are valued.38 Thus, on either construal of science’s aims, it will run counter to the aims of science to falsify evidence or produce sloppy research. I think we can also make a similar point concerning science’s other standards of conduct, such as openness, freedom, and so forth. Thus, the realist/empiricist debate makes little difference to the justification of standards of conduct in science, from an internal point of view. Moreover, this ongoing debate would seem to make even less difference from an external point of view in that moral/political/social standards and values should still generate obligations for scientists regardless of the outcome of the realism/empiricism debate. 6. Conclusion In this essay I have explored the connections between social epistemology and research ethics and I have argued that research ethics is where social epistemology meets moral/political/social philosophy. 
Empirical studies of the history, sociology and psychology of science also have some direct implications for research ethics, since I hold that principles of social epistemology are hypothetical imperatives that assert connections between means and ends.39 As such, these principles depend on some empirical evidence about how communities of inquirers can most effectively attain their cognitive goals, and

37 Bas van Fraassen, The Scientific Image (Oxford: Clarendon Press, 1980).
38 I discuss this point in more depth in my book Research Ethics.
39 It is not my intention to review or discuss the extensive literature on the history, sociology, and psychology of science in this essay, since I am only attempting to sketch a philosophical approach to research ethics. However, I will mention the following references for the interested reader: William Shadish and Steve Fuller (eds), The Social Psychology of Science (New York: Guilford Press, 1994); Michael Mulkay, Sociology of Science (Indianapolis: Indiana University Press, 1991); Steve Fuller, Marc De Mey, Terry Shinn, and Steve Woolgar (eds), The Cognitive Turn: Sociological and Psychological Perspectives on Science (Dordrecht: Kluwer Academic Publishers, 1989); John Ziman, An Introduction to Science Studies (Cambridge: Cambridge University Press, 1984).

this evidence needs to come from empirical studies of science. There are many pressing questions that still need to be answered, and I do not claim to have answered any of them to my own (or anyone else's) satisfaction in this essay. Instead, I have sought only to motivate further discussion of research ethics by mapping out its conceptual domain. Philosophers have largely ignored research ethics for quite some time, but it is an important and fertile topic not only for social epistemologists and science policy researchers but for all philosophers concerned with epistemology and the philosophy of science. A more in-depth study of research ethics will also yield new perspectives on epistemology, the philosophy of science, and the relation between science, ethics and politics.

Acknowledgements: I would like to thank Steve Darwall, Jim Forrester, Alvin Goldman, Susanna Goodin, David Hull, Helen Longino, Geoffrey Sayre-McCord, Claudia Mills, Ed Sherline, Marshall Thomsen, and two anonymous referees for helpful comments.