Moral trust & scientific collaboration

Studies in History and Philosophy of Science 44 (2013) 301–310. http://dx.doi.org/10.1016/j.shpsa.2013.04.002


Karen Frost-Arnold
Hobart & William Smith Colleges, 300 Pulteney St., Geneva, NY 14456, USA

Article info

Article history: Received 27 August 2012; Received in revised form 9 March 2013; Available online 17 May 2013

Keywords: Collaboration; Trust; Social epistemology; Self-interest; Authorship; Industrial science

Abstract

Modern scientific knowledge is increasingly collaborative. Much analysis in social epistemology models scientists as self-interested agents motivated by external inducements and sanctions. However, less research exists on the epistemic import of scientists’ moral concern for their colleagues. I argue that scientists’ trust in their colleagues’ moral motivations is a key component of the rationality of collaboration. On the prevailing account, trust is a matter of mere reliance on the self-interest of one’s colleagues. That is, scientists merely rely on external compulsion to motivate self-interested colleagues to be trustworthy collaborators. I show that this self-interest account has significant limitations. First, it cannot fully account for trust by relatively powerless scientists. Second, reliance on self-interest can be self-defeating. For each limitation, I show that moral trust can bridge the gap—when members of the scientific community cannot rely on the self-interest of their colleagues, they rationally place trust in the moral motivations of their colleagues. Case studies of mid-twentieth-century industrial laboratories and exploitation of junior scientists show that such moral trust justifies collaboration when mere reliance on the self-interest of colleagues would be irrational. Thus, this paper provides a more complete and realistic account of the rationality of scientific collaboration.

© 2013 Elsevier Ltd. All rights reserved.


1. Introduction

When is it rational for members of the scientific community to trust each other? And what grounds trust in one’s scientific colleagues? The rationality of trust has been extensively studied in the context of testimony in general, and testimony within science in particular.1 But members of the scientific community trust each other to do more than tell the truth. Trust also plays a role in undergirding collaboration in science.2 Just as believing a colleague’s testimony carries risks, so does collaboration. But in view of such risks, what rationally justifies collaboration? In this paper, I argue that members of the scientific community rationally trust each other, in part, on the basis of evidence of the moral character of their colleagues.

Section 2 outlines risks collaboration poses for members of the scientific community, and then presents two explanations for trust in one’s colleagues. On the prevailing account, trust is a matter of mere reliance on the self-interest (RSI) of one’s colleagues. However, a second account explains trust as a matter of moral trust (MT) in the moral motivations of one’s colleagues. Section 3 argues that the RSI account has significant limitations. First, RSI cannot fully account for trust by relatively powerless scientists (Section 3.1). Second, reliance on self-interest can be self-defeating (Section 3.2). For each limitation, I show that moral trust can, and often does, bridge the gap—when they cannot rely on the self-interest of their colleagues, members of the scientific community place trust in the moral character of their colleagues.

This conclusion is important for philosophers and policy makers alike. It expands the analysis of trust in science beyond the testimony literature, and it shows that a complete account of the rationality of science requires greater attention to scientists’ moral psychology.

E-mail address: [email protected]

1 On trust and scientific testimony, see Hardwig (1985, 1991), Barber (1987), Blais (1987), Rescher (1989), Adler (1994), Shapin (1994), Scheman (2001), Fricker (2002), Rolin (2002), Code (2006), Sztompka (2007), Wilholt (2009), Grasswick (2010) and Anderson (2011).
2 The role of trust in grounding collaboration has been studied less than trust’s role in grounding testimonial practices. For exceptions, see Rescher (1989), Shamoo & Resnik (2003, pp. 56–59) and Whitbeck (1995). On the epistemic significance of collaboration and sharing, see Fallis (2006), Longino (2002), Thagard (1997, 2006), Tollefsen (2006) and Wray (2002, 2006).


While philosophers influenced by rational choice theory have made great progress in understanding the rationality of science by modeling scientists as self-interested agents,3 this paper argues that such a project yields an incomplete picture of scientific rationality. Including scientists’ moral assessments of their colleagues yields a more realistic analysis. By recognizing that scientists, like people in general, have both self-interested and other-interested motivations, this analysis follows David Hull’s methodological dictum that ‘[w]hatever is true of people in general had better apply to scientists as well’ (1988, p. 304).

Finally, there continues to be great concern among scientists and policy makers about how to promote productive and ethical collaboration. To create effective policies, we first need to understand both the risks of collaboration and the reasons why scientists take these risks. I show that policies can be self-defeating when based on the assumptions that scientists are solely self-interested and that scientists view each other as merely rational egoists. Thus, we need more nuanced policies that recognize the critical role of moral trust in promoting scientific collaboration.

2. Explanations of collaboration

2.1. The risks of collaboration

Collaboration is a risky enterprise for scientists.4 Many of the risks stem from harm one’s collaborator might cause. In working with another scientist, one risks one’s partner performing sloppy, wasteful, or fraudulent work that damages one’s reputation. In addition, consider the sharing of ideas or materials (e.g., reagents, stocks of model organisms, or computer models) that is often part of collaboration. Some of the risks involved include: the receiver plagiarizing and taking credit for the materials or ideas, the receiver using the materials or ideas to complete the donor’s own research project faster and thereby scooping the donor’s work, the receiver using the materials or ideas to complete other research projects faster and thereby gaining a better reputation than the donor, and the donor wasting time preparing the materials for sharing instead of making progress on the donor’s research projects.

Of course, collaboration and sharing can also be beneficial to those who participate. Publications and reputations can be built on fruitful collaborations, and participation in sharing networks gives researchers access to much-needed resources. Some research can only be done in collaboration (Wray, 2007). Given these possible risks and benefits, the rational scientist will attempt to assess whether any particular instance of collaboration is worth the risk.5 While many considerations play into such calculations, one important part of determining whether it is reasonable to collaborate is weighing whether one ought to trust one’s colleagues.

I use ‘trust’ here in a broad sense to describe the phenomenon of making plans based on the assumption that someone will do something or care for some valued good.6 When person A trusts person B to perform action φ (or trusts B with valued good C),7 A takes the proposition that B will φ (or that B will care for C) as a premise in her practical reasoning, i.e., A works it into her plans that B will φ (or that B will care for C) (cf. Frost-Arnold, 2012, p. 8). When one counts on someone in this way, one is vulnerable to having one’s plans undermined. For example, when I trust my collaborator not to steal my ideas, I make plans for my research agenda on the basis of the assumption that she will not unfairly scoop me. In doing so, I am vulnerable to having my research plans undermined; if she lets me down, then I may have to make costly changes to my line of research. But what grounds such trust? In the next section, I canvass two explanations for trust in one’s collaborators.

2.2. Two explanations of trust among scientists

The first explanation of trust among scientists is premised on the idea that scientists expect each other to be rational, self-interested beings. This self-interest approach argues that scientists trust each other because they believe sanctions for untrustworthiness make it in their colleagues’ self-interest to be trustworthy (Adler, 1994; Blais, 1987; Fricker, 2002; Rescher, 1989; Sztompka, 2007).8 The existence of such sanctions makes this trust rational on the self-interest explanation: untrustworthy collaborators will be detected and punished. For example, one might argue that sharing is grounded in the kind of reciprocity that motivates cooperation in iterated prisoners’ dilemmas. On this account, scientists are reliable stewards of materials or ideas that a colleague has shared with them because it is in the recipient’s interest to maintain a sharing relationship with the donor for future reciprocation (Rescher, 1989). Knowing that one’s colleague values an ongoing relationship rationalizes trust in her. Furthermore, scientists can sometimes rely on community-level sanctions to motivate trustworthiness in their colleagues. Thus, one might explain scientists’ trust in their colleagues as simply rational expectations about the self-interested behavior of their utility-maximizing peers.

The structure of self-interest explanations couples a simple picture of the trusted agent (in this case, the trusted scientist) with a complex view of the social environment in which the trustor encounters the trusted party. The reward and punishment mechanisms that make it in B’s self-interest to be trustworthy do much of the work in rationalizing trust in one’s colleagues. Sometimes the reward and punishment mechanisms are at the level of community (e.g., institutional punishment for stealing a collaborator’s ideas), and sometimes they are more personal (e.g., one party ends a collaborative relationship).9 In either case, A need know nothing more about B other than that B is self-interested, knows about the sanctions, and is rational.

3 The properly-organized, self-interested behavior of scientists has been credited for generating objectivity (Railton, 1994; Wray, 2007), truth and knowledge acquisition (Goldman & Shaked, 1991; Hull, 1988, 1997), and an efficient division of cognitive labor (Kitcher, 1993; Strevens, 2006). See Strevens (2011) for a summary.
4 For economy of expression, I will often abbreviate ‘members of the broader scientific community’ to ‘scientists.’ I include under this heading those who are essential parties to scientific collaboration, e.g., graduate students and scientific managers who set up and maintain collaborations. One reason to include such participants in the research process, rather than focusing solely on relationships between senior scientists of equal standing, is that (as Baier (1994, p. 106) notes) issues of trust are particularly pressing in unequal relationships.
5 For more on the costs and benefits of collaboration, see Fallis (2006) and Wray (2006).
6 Note that I take trust to be a three-part relation in which A trusts B to φ (or A trusts B with valued good C). In addition, note that φ may be the omission of an act, e.g., scientist A trusts her colleague B to avoid stealing her ideas. One might prefer to take trust to be a two-part relation in which A trusts B. However, such analyses often fail to recognize that when A trusts B, A rarely, if ever, trusts B with everything. Instead, our trust in others is often context-dependent or localized to a specific range of actions or goods.
7 There is some debate in the trust literature about whether trust is best analyzed by an entrusting model (A trusts B with good C) or by an action model (A trusts B to φ). I take no position on that debate here.
8 See Hardin (2002) for an influential summary of self-interest approaches to trust in contexts other than science.
9 Some sociologists of trust (e.g., Giddens, 1990) argue that a central feature of modern life is that the institutional mechanisms that ground trust eclipse the personal components to such an extent that we now place trust in organizations and institutions rather than people. However, others (e.g., Shapin, 2008) argue that the personal still matters. I model trust as a relation between agents (rather than between an agent and an institution) because much of the trust with which I am concerned is between colleagues within an organization (rather than between a lay person and an expert). While it makes sense to say that a lay person trusts the scientific establishment to produce useful knowledge, I doubt it makes sense to talk of one scientist trusting the scientific establishment to ensure that her colleague produces useful data for their joint paper. Instead, the scientist trusts her colleague. This is not to deny that their relationship is mediated and structured by institutions.


If A has doubts about whether to trust B, A ought to settle them by gaining information about the social environment of the reward and punishment mechanisms.

Two purported virtues of this account of trust between scientists are its simplicity and its conformity with the ‘moral equivalence’ principle. First, self-interest theorists tout their approach as simple yet powerful. They argue that by making only the barest assumptions about the agents involved (they are rational and self-interested), we can explain much, if not all, trust without needing to draw on ‘vague’ moral notions like virtues and moral character (Blais, 1987, p. 370; Hardin, 2002, p. 6). Second, self-interest explanations of scientists’ behavior have been popular in social epistemology of science because they avoid attributing special virtues to scientists. Shapin (2008, p. 21) calls the claim that scientists are no more or less moral or virtuous than the rest of humanity the ‘moral equivalence of scientists’ thesis. Similarly, maintaining that the success of science cannot be explained simply in terms of scientists’ altruistic and disinterested search for truth, Hull (1988, p. 304) states as a basic axiom that ‘[w]hatever is true of people in general had better apply to scientists as well.’ And he claims people in general are self-interested beings.

However, these purported virtues are also the source of the self-interest approach’s limitations; explaining trust exclusively using self-interest explanations flies in the face of both the moral equivalence thesis and recent experimental results in behavioral science. Suppose we abandon the ideal of the disinterested scientist because it violates moral equivalence; is the only alternative a self-interested scientist? The answer is no, because disinterested and self-interested are not exhaustive. Other-interested scientists are another possibility. People often have other-interested motivations; they are motivated to act for the good of another even when it is not in their self-interest to do so.10 Recent experimental work has shown that many people are not rational egoists (Ostrom, 2005). This research ‘challenges the assumption that human behavior is driven in all settings entirely by external inducements and sanctions’ (Ostrom, 2005, p. 253). There is widespread evidence for intrinsic motivations such as civic virtues and the desire to avoid harm to others. Thus, if we are to follow Hull’s moral equivalence dictum that ‘whatever is true of people in general had better apply to scientists as well,’ we ought to search for explanations of scientists’ behavior in terms of other-interested motivations as well as self-interested motivations.

This suggests a second explanation for trust in science: scientists trust each other on the basis of evidence of their colleagues’ moral character.11 I call this the moral trust explanation.12 Evidence that a potential collaborator B is honest, loyal, fair, and/or cares about her peers can provide a scientist A with reason to trust B to avoid damaging A’s reputation with sloppy or fraudulent collaborative work. Similarly, evidence of B’s good moral character can give A reason to share ideas or materials with B; evidence that B cares about fairness and abhors exploiting people’s vulnerabilities can rationalize A’s expectation that B will not steal A’s ideas or use A’s reagents to scoop A.


The moral trust explanation eschews the simple view of scientists as self-interested agents motivated by external payoffs. Instead it views scientists as having various complex moral motivations, and it maintains that scientists sometimes trust each other to be morally motivated.

A note about terminology is in order. ‘Trust’ is a polysemic term. Some authors distinguish moral trust (which they call simply ‘trust’) from mere reliance. While there is debate about the right way to delineate this distinction, some points are clear. In contrast to interactions of mere reliance, moral trusting relationships carry moral weight, because they possess the possibility of betrayal (Baier, 1994, p. 99). We feel the moral reactive attitude of betrayal when the person we are counting on fails to live up to our normative expectations of her (Faulkner, 2011, p. 24). In morally trusting we count on the moral motivations of the trustee, whereas in merely relying we do not look to any moral motivations. Expecting that someone will do something because it is in her self-interest is a paradigm case of mere reliance (Baier, 1994, pp. 98–99). In contrast, expecting that someone will do something because she cares about me, has good will towards me, or has various pro-social virtues (e.g., honesty, fairness) are all cases of moral trust. As Holton (1994, p. 66) notes, in moral trust the trustor counts on the trusted’s intrinsic moral motivations, whereas self-interested fear of external punishments is insufficiently internal to be the basis of moral trust. Finally, it should be clear that the type of trust I discuss here is not blind trust. Sometimes our trust in others is based on evidence that they will do something (or care for some good). This is the type of trust I am concerned with. Confusions can stem from the polysemy of ‘trust’ when ‘trust’ is taken to refer to faith, which is not similarly grounded in evidence. To summarize, I use the term ‘trust’ broadly to refer to any cognitive attitude of taking the proposition that someone will do something (or care for some good) as a premise in one’s practical reasoning.

Thus, the phenomenon to be explained is trust between scientists: members of the scientific community making plans based on the assumption that a fellow member will do something or care for some valued good. The self-interest and moral trust explanations provide different explanations of this phenomenon. For the self-interest approach, trust is a matter of mere reliance on the self-interest of one’s colleagues. For the moral trust approach, trust is a matter of moral trust in the moral motivations of one’s colleagues. Thus we have on the table two accounts of collaboration: the popular mere reliance on self-interest account (RSI), and the unduly neglected moral trust account (MT).13

Note that both accounts of collaboration include normative and descriptive elements. Both approaches provide normative accounts of when trust in one’s collaborator is rational. For RSI, it is rational to trust one’s colleague to φ (or care for C) when one is justified in believing that it is in her self-interest to φ (or care for C). For MT, it is rational to trust one’s colleague to φ (or care for C) when one is justified in believing that her moral character will motivate her to φ (or care for C).
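For ease of comparison, the two normative conditions just stated can be put schematically. The notation is my own shorthand, not the author’s: Trust(A, B, φ) abbreviates the planning attitude defined in Section 2.1, JB_A[p] abbreviates ‘A is justified in believing that p’, SI_B(φ) abbreviates ‘it is in B’s self-interest to φ’, and MC_B(φ) abbreviates ‘B’s moral character will motivate her to φ’.

\[ \mathrm{Trust}(A,B,\varphi) \;\Longleftrightarrow\; A \text{ takes the proposition that } B \text{ will } \varphi \text{ as a premise in her practical reasoning} \]

\[ \textbf{RSI:}\quad \mathrm{JB}_A\big[\mathrm{SI}_B(\varphi)\big] \;\Rightarrow\; \text{it is rational for } A \text{ to trust } B \text{ to } \varphi \]

\[ \textbf{MT:}\quad \mathrm{JB}_A\big[\mathrm{MC}_B(\varphi)\big] \;\Rightarrow\; \text{it is rational for } A \text{ to trust } B \text{ to } \varphi \]

Nothing in the argument below hangs on this notation; it simply makes visible that the two accounts differ only in the content of the justified belief that rationalizes trust.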

10 I use the term ‘other-interested motivations’ rather than ‘altruistic motivations’ for two reasons. First, the latter has been used to refer to scientists’ ‘disinterested search for truth’ (Hull, 1988, p. 287). My argument is distinct from the debate between Hull and Wray (2000) over whether scientists’ disinterested curiosity contributes to the success of science. Second, I take no position on the finer details of the psychological egoism debate between e.g., Batson (2002) and Sober & Wilson (1998). My argument is that scientists sometimes trust each other to be motivated by intrinsic, other-directed motivations like caring or a disposition of fairness, rather than always relying on external inducements to make it in their colleagues’ self-interest to be trustworthy. I will not engage in the debate about whether all motivations, even caring and the desire to avoid exploiting others, can ultimately be given egoistic explanations (e.g., as ultimately the result of the desire to avoid the empathic psychological pain of seeing someone suffering).
11 Of course, trust in others’ moral character is not unique to science. Many of the arguments I make about the role of trust in science apply to other epistemic and social communities.
12 For influential moral trust explanations of scientific testimony (rather than collaboration), see Hardwig (1985, 1991), Shapin (1994) and Scheman (2001).
13 MT should not be confused with a Mertonian account of the ethos of science. While both MT and Merton refer to scientists as having virtues, the two analyses operate at different levels. Merton gives an account of the institutional values of science, but the debate between MT and RSI is at the motivational rather than institutional level of analysis (see Merton, 1973a, p. 276). MT and RSI are explanations of the kinds of motivations that scientists trust each other to act upon. Merton himself is not primarily concerned to explain what motivates scientists to act according to the ethos of universalism, communism, disinterestedness, and organized skepticism. However, when he does address this question, he refers to both fear of punishment and the internalization of the values into the superego of the scientist (Merton, 1973a, p. 269). Thus, one could give an RSI argument that scientists merely rely on each other to uphold the ethos of science (e.g., Sztompka, 2007), or one could give an MT argument that scientists trust the everyday moral motivations of their colleagues to motivate them to act in accordance with the ethos.


Each account is also supposed to provide the foundation for descriptively accurate explanations of actual scientists’ behavior. Using a principle of charity, philosophers and historians of science often attempt to explain scientists’ collaborative behavior (both in general and in particular cases) by showing that, under the specified conditions, collaboration is the rational course of action. Thus, to explain why scientists take the risk to collaborate, RSI theorists show that the trusting scientists in question have good reason to expect that it is in their colleague’s self-interest to act trustworthily. In contrast, MT theorists explain the existence of scientific collaboration by showing that the trusting scientists have good reason to expect that their colleagues have a trustworthy moral character. Thus, these two different normative accounts of the rationality of scientific collaboration provide the material for two alternative explanations of actual scientific practice.

One barrier to greater exploration of MT is the view that RSI alone is sufficient to capture all the interesting features of the social epistemology of science. I argue that this is incorrect, and that MT better explains certain features of the social epistemology of science. There are cases of scientific collaboration in which it would not be rational for the trusting scientists to merely rely on the self-interest of their colleagues. But we do not need to dismiss these collaborative practices as mysterious or irrational, because MT suggests that we can apply the principle of charity by providing an alternative explanation for the collaboration—the trusting scientists are looking for evidence of moral trustworthiness in potential collaborators.

Before I detail these limitations of RSI, some final clarifications about the nature of my thesis are necessary. In arguing that RSI has significant limits, I grant that some instances of collaboration can be explained entirely in terms of reliance on self-interest. In addition, I accept that some collaborative practices can be explained by both RSI and MT. Scientists operate from both moral and self-interested motives, and scientists recognize this fact about their colleagues. Scientists both morally trust and merely rely on each other. Promoters of RSI, on the other hand, have not been so ecumenical. While they usually acknowledge that self-interest is not the only motivation people have, they frequently follow such qualifications by claiming that self-interest alone can explain the important features of science or human behavior in general (e.g., Blais, 1987, p. 370; Hardin, 2002, p. 6; Hull, 1997, p. S125). This claim is my target, and I aim to redress the relative neglect of MT in the literature.

3. Limitations of RSI

3.1. The first limitation of RSI: powerlessness

3.1.1. The general problem of powerlessness

To see the first limitation of RSI, it is useful to analyze the self-interest accounts of collaboration using two solutions to the problem of cooperation between Hobbesian rational egoists. First, as Hobbes (1994, §xv.4) notes in his response to the fool, Hobbesian agents can find it in their self-interest to cooperate in the state of nature when cooperation will maintain a useful friendship. Second, cooperative behavior is rational for Hobbesian agents when external constraints imposed by the sovereign make cooperation in one’s self-interest. With these two solutions, RSI can nicely explain why it is rational for one to rely on another when one has the ability to detect unreliable behavior and prevent it by placing retaliatory constraints on the relied upon, thereby making it in the other’s self-interest to be reliable.

However, not everyone who trusts others can detect potential defection and punish it with effective retaliation. Some people lack the power to influence the behavior of the other by making it in the other’s self-interest to act as expected.
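To make the point concrete, here is a toy calculation of my own (it is not in the original text, and it anticipates the iterated prisoner’s dilemma reading developed in the next paragraph). Let g be B’s immediate gain from defecting on A, s the sanction A can credibly impose, r the per-round value to B of continued cooperation with A, and δ (with 0 < δ < 1) B’s discount factor. A purely self-interested B then cooperates only when

\[ g \;<\; s \;+\; \frac{\delta}{1-\delta}\, r . \]

If A can neither punish defection (s ≈ 0) nor offer anything B wants in the future (r ≈ 0), the inequality fails for any positive g, so nothing in B’s self-interest favors cooperation; this is precisely the position of the powerless trustor described below.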

Consider the first Hobbesian solution. According to iterated prisoner’s dilemma interpretations of this solution, there are two ways that A can make it in B’s self-interest to cooperate. Either A can punish uncooperative behavior, or A has something that B hopes to gain from future cooperative interactions. However, if A is unable to punish B’s defection effectively and has nothing to offer B as future reciprocation, then A lacks this power to make it in B’s self-interest to cooperate. Members of the scientific community with low professional standing can be in this position, as those with greater standing may perceive them as having nothing profitable to offer in return. Similarly, RSI theorists argue that A can rationally expect to influence B’s behavior if A has reason to believe that B values their relationship (Blais, 1987; Hardin, 2002; Rescher, 1989; Sztompka, 2007). A can influence B’s behavior through the threat of cutting off the relationship when B acts contrary to A’s expectations. However, individuals who are caught in a relationship and have no viable opportunity to end it are unable to exercise this type of power. Vulnerable graduate students who cannot switch advisors without heavy costs illustrate this problem. In addition, some scientists have expressed concerns about the ‘unintended consequences when large funding agencies force people to collaborate’ (Ledford, 2008). Funding arrangements are just one mechanism through which scientists can find themselves caught in relationships they have little power to end.

Now consider the second Hobbesian solution—sometimes it will not be A herself who enforces the constraints; instead A depends on community-level sanctions to make it in B’s self-interest to act as A expects (Adler, 1994; Blais, 1987; Sztompka, 2007). For example, if a scientist believes that a collaborator has plagiarized her ideas, she can appeal to university administrations or journals to enforce sanctions. However, not all scientists are similarly situated to be protected by such community-level sanctions. There are scientists who fail to report breaches of community norms for fear of retaliation, scientists who complain but find their grievances dismissed in favor of unscrupulous colleagues with greater credibility, and scientists who depend on colleagues to do things for which there is no or insufficient community sanction (or who inhabit communities with poorly functioning sanctioning mechanisms). These are all types of scientists whose relative powerlessness makes it difficult to explain their collaborative behavior according to RSI.

To summarize, the first limitation of RSI is its inability to explain the collaborative practices of powerless individuals who cannot rely on it being in the self-interest of their colleagues to be reliable. I have given general descriptions of the kind of scientists who find themselves in positions of powerlessness. The next section provides a concrete illustration by analyzing complaints about senior scientists abusing authorship conventions at the expense of their junior colleagues.

3.1.2. Coercive authorship and the search for trustworthy mentors

Authoring peer-reviewed publications is the key to advancing one’s career in science. Hence, it is of primary importance to scientists that their research contributions are sufficiently reflected in the list of authors for any resulting publications.
To ensure the legitimacy of the authorship process, governing bodies in the scientific community have set forth guidelines for determining who should be listed as an author. For example, the foremost guidelines for authorship in biomedicine are promulgated by the International Committee of Medical Journal Editors (ICMJE) (ICMJE, 2010; Strange, 2008). Failure to follow the authorship guidelines of a journal may prevent a scientist from publishing research. Additionally, a scientist who violates authorship guidelines takes the risk that another scientist will object, which can prompt embarrassing corrections and letters to be published (Committee on Publication Ethics, 2008).


Despite these community-level norms and threatened sanctions, which would seem to make it in the self-interest of scientists to provide journals with accurate lists of authors, promiscuous authorship (the listing of a non-contributor as an author) still exists (Strange, 2008, p. C567). There are many types of promiscuous authorship, but coercive authorship most undermines RSI. Coercive authorship occurs when someone is given author status ‘in response to their exertion of seniority or supervisory status over subordinates and junior investigators’ (Strange, 2008, p. C567).14

Junior scientists regularly collaborate and share ideas and resources with senior scientists. Junior scientists act both as donors (e.g., giving senior researchers materials or ideas during shop talk sessions) and recipients (e.g., receiving lab space and materials or listening to a senior colleague’s ideas). This sharing carries significant risks for junior researchers. As donors, they may find their ideas stolen by a senior colleague (Anderson et al., 2007, p. 454) or their materials or data given by a senior colleague to a third party who scoops their research (Campbell et al., 2002, p. 478). As recipients, junior colleagues take the risk that the senior colleague may use her donation as a pretext for demanding that she be listed as an author on publications issuing from the junior colleague’s research (Strange, 2008, p. C573). Adding a senior scientist as author poses significant risks for junior scientists, since, due to the ‘Matthew effect,’ senior scientists are given more credit than junior colleagues for discoveries presented as joint work (Merton, 1973b).

Despite these risks, junior colleagues regularly include senior scientists as co-authors. Sometimes this co-authorship is due to pressure junior scientists feel (Anderson et al., 2007, p. 455). A survey of the corresponding authors of papers published in four of the top medical journals found that ‘junior faculty and individuals whose job is dependent on publications were significantly more likely to feel obligated to consider adding an author who doesn’t meet ICMJE criteria when that person has administrative power over them’ (Mainous et al., 2002, p. 462). Junior researchers’ relative powerlessness makes them vulnerable to what Kwok (2005, p. 554) calls ‘the white bull,’ a senior scientist ‘who uses his academic seniority to distort authorship credit and who disguises his parasitism with carefully premeditated deception.’ A white bull can use power over a junior scientist to pressure her to add the bull as an author to publications, thereby undermining the junior scientist’s career.

Given these risks, for it to be rational for junior scientists to work with senior scientists,15 they need to be able to trust their senior colleagues not to steal the credit for their work.16 But can we explain this trust solely in terms of rational reliance on the self-interested behavior of senior scientists?

The problem with RSI accounts of junior scientists’ trust in senior colleagues is that sometimes the detection and punishment mechanisms do not provide sufficient disincentives for coercive authorship. First, junior scientists often cannot call upon the punitive retaliations that are part of the iterated prisoner’s dilemma solutions to the problem of cooperation. Because junior scientists need their senior colleagues’ sponsorship, lab space, and other resources, they often cannot present a credible threat to end the relationship upon discovering untrustworthy behavior.


Second, while there are community-level norms surrounding authorship, junior scientists are put in a difficult position when senior colleagues violate the norms:

Most [junior scientists] do not hold permanent appointments and as a result may be afraid to confront their supervisors (often full professors) over authorship decisions, which may make it hard for them to get an extension of their contract. I can testify that life can be made very difficult for a junior researcher who raises questions about whether a more senior colleague demanding coauthorship has made the substantive contribution to a project upon which authorship would be justified. (Wagena, 2005, p. 308)

Blowing the whistle on a senior colleague has a history of backfiring on junior scientists (Kwok, 2005, p. 19). So while community-level sanctions for untrustworthiness may be on the books, the precariousness of a junior researcher’s position can make it unwise for her to call upon them.

In addition, one proposed solution to avoid authorship disputes can also be manipulated by senior scientists. Several authors have suggested that potential collaborators engage in a pre-collaboration discussion about roles and responsibilities (including who will be listed as what type of author), culminating in either an informal agreement or a formal ‘collaborator’s pre-nuptial agreement’ (Gadlin & Jessar, 2002, p. 12). This might appear to protect junior colleagues, since they could potentially have an agreement about their authorship rights in writing. However, ‘[t]hese systems fail when a collaborator uses power asymmetry and intimidation to coerce junior collaborators to agree to unfair arrangements regarding authorship and recognition’ (Kwok, 2005, p. 554).

Finally, administrators, journal editors, and institutions such as the Office of Research Integrity, who enforce sanctions, are often unwilling to jump into the middle of thorny authorship disputes (Woolston, 2002, Credit where credit is due, para 6; Kwok, 2005, p. 555). In fact, the ICMJE guidelines explicitly disavow a role for journal editors in settling authorship disputes (ICMJE, 2010). So even if the junior scientist does blow the whistle, sanctions may still not be forthcoming.17

In sum, under current scientific institutions, a senior scientist often need not fear that her self-interest will be harmed if she engages in coercive authorship. This undermines RSI’s favored explanation of junior scientists’ collaboration with senior scientists.18 Of course, context matters. Some junior scientists belong to institutions with vigorous and well-functioning punitive mechanisms and are well-positioned to withstand the possible retaliations of blowing the whistle on their mentor (e.g., they are considered ‘rising stars’ and will easily find another mentor if retaliation occurs). Such junior scientists may have good reason to believe it is in their mentor’s self-interest to avoid coercion. However, given the pervasive concern about the vulnerability of junior scientists, we can safely assume that this is not universal. In this way, coercive authorship exemplifies the problem RSI faces in explaining the trust of powerless members of the scientific community.

14 For obvious reasons, it is difficult to obtain clear data on the prevalence of coercive authorship. Flanigan et al. (1998, pp. 223–224) cite evidence that a ‘substantial’ number of peer-reviewed medical articles have honorary or ghost authors (19% of the 809 articles studied had honorary authors and 11% had ghost authors), but they did not determine the extent to which honorary authorship was coerced.
15 Anderson et al. (2007, p. 455) cites interviews with junior scientists who report avoiding working with senior colleagues whom they suspect might engage in coercive authorship.
16 It could also be rational for them to share if the cost of coercive authorship is outweighed by benefits that could not reasonably be expected otherwise. I will deal with this alternative in Section 3.1.4, by considering the possibility that junior scientists merely act as if they trust their mentors.
17 Graduate students distinguish between a ‘resource they could turn to’ in cases of abuse by a mentor and a resource ‘they would turn to’ (Fagen & Wells, 2004, p. 83). Fagen & Wells cite several quotes from graduate students who are well aware that their institutions’ mechanisms to protect students do not work, e.g. ‘There is an ombudsman’s office on campus, but this faculty is notorious for ignoring all efforts by that office to attend to or resolve student-initiated academic concerns. This office will admit that professors may do as they like’ (Fagen & Wells, 2004, p. 83).
18 In addition, Sztompka (2007, p. 216) provides the following RSI explanation: top scientists have strong incentives to be trustworthy since, due to the Matthew effect, ‘they have much more to lose’ than junior scholars. But, as I have shown, while senior scientists have more to lose, they are at less risk of losing it.


The next section argues that if we look at the reasons graduate students cite for their choice of advisor, we find an alternative explanation: junior scientists trust the moral motivations of their senior colleagues.

3.1.3. The moral trust explanation

Junior scientists can attempt to reduce the risk of collaboration with senior scientists by looking for evidence of moral motivations in potential senior mentors. When junior scientists do this they are acting in ways not predicted by RSI—they are not looking for evidence that it is in the mentor’s self-interest to be reliable. Recent studies of the reasons why graduate students choose their advisors provide some evidence that junior scientists look for evidence of morally trustworthy character in senior colleagues. One study of reasons for graduate student attrition found that while advisor choice plays a critical role in graduate student success, students are given little formal guidance about how to choose an advisor (Lovitts, 2001, p. 122). Instead, students rely on word of mouth about who is a good advisor. As one graduate student noted, recommendations about moral character (‘Oh, he’s a nice guy,’ or ‘She’s a good person’) influence advisor choice (Lovitts, 2001, p. 123).

A 2001 national study of 4114 graduate students in 11 disciplines provides more anecdotal evidence from students’ own accounts of the process of choosing an advisor (Golde & Dore, 2001a). The study asked advanced students to give advice to entering students. It is striking how many of these pieces of advice mention the need to look for an advisor who cares about the student, who is fair, or who abhors exploiting students. All of these are markers of moral motivations. Here are some examples:

‘Find a good advisor who cares about you and your work who will take an active and positive role in getting you successfully through grad school. The topic is not as important as your advisor.’ (Molecular Biology student)

‘It doesn’t matter what research you end up doing, a bad advisor can screw you over. The personality of the potential advisor and a proven track record are much more important than your actual work. They have complete control over whether you succeed or fail. For one person to have this much power over you is very scary, so choose advisors carefully and negotiate a thesis project before you formally enter the lab.’ (Molecular Biology student)

‘Talk to the grad students already in the lab about how they get along with their advisor. Is he/she a good advisor? Fair? Involved? . . .’ (Ecology student)

‘Make sure you get a good advisor; one who is there to teach you, not make you their indentured servant.’ (Chemistry student)

‘Make sure you pick someone who won’t exploit you excessively and who is willing to go to bat for you (even at his own expense, if necessary) . . .’ (Chemistry student)

(qtd. in Golde & Dore, 2001b, all emphases mine)

This, therefore, appears to be a situation in which vulnerable members of the scientific community, who cannot reasonably merely rely on their senior colleagues, fall back on moral trust to ground their choice of collaborators. Thus, MT must be part of the explanation of collaboration in science. RSI by itself fails to capture important features of the reasons that ground trust by the powerless in science.
3.1.4. Objections and replies

At this point, a proponent of RSI might argue that instances of relative powerlessness do not present cases of reasonable trust that fall outside the scope of RSI explanations. She might argue that the powerless simply do not trust. This is Russell Hardin’s approach:

There are inherent problems in trusting another who has great power over one’s prospects. If a much more powerful partner defaults . . . she might be able to exact benefits without reciprocating. Moreover, she might be able to dump partners willy-nilly and replace them with others, while they cannot dump her with such blissful unconcern because there may be few or no others who can play her role. (Hardin, 2002, p. 101)

Hardin does not take this as evidence that his RSI approach cannot account for trust by the powerless. Instead, he raises these issues to explain why ‘[i]n general, therefore, the weaker party cannot trust the more powerful much at all. Inequalities of power therefore commonly block the possibility of trust’ (Hardin, 2002, p. 101). While Hardin is right that the powerless do often distrust the powerful, this is only half the story. The powerless do often trust the powerful. And when they do, we often cannot give an RSI explanation for their trust. Thus, this response is clearly wrong if its claim is that junior scientists, for example, do not collaborate with senior colleagues.

However, there is a more charitable interpretation of this RSI objection. One might point out that we sometimes act as if we trust, while maintaining a distrustful attitude (Frost-Arnold, 2012, pp. 12–15). Occasionally, we do so when the expected rewards of placing ourselves in others’ hands outweigh the expected harm that would be done to us if our trust is betrayed. So we act as if we trust, but we do so with a suspicious and wary attitude, and thus do not, in a sense, actually trust. Perhaps, the self-interest theorist might argue, junior scientists are in this position. The potential reward for taking the risk of collaborating with senior colleagues is a future career in science, which requires that one pass through a period of apprenticeship. Yes, the harm done by coercive authorship is significant, but if one needs to work with a senior scientist in order to gain access to the scientific community, then it may be worthwhile to act as if one trusts by sharing one’s work with one’s mentor, even at the risk of coercive authorship. Thus, someone who thinks that all the important cases of trust in science can be explained in terms of reliance on self-interest might argue that my objection does not provide an example of trust in science that cannot be explained by RSI, because junior scientists only act as if they trust their senior colleagues.

In response, I concede that this objection can certainly account for some of the instances of junior scientists’ collaboration with senior colleagues; but I maintain that it is implausible that all junior scientists are merely acting as if they trust their mentors. Surely some graduate students work with mentors they deem untrustworthy. But to maintain that all vulnerable graduate students have an attitude of distrust towards their mentors seems implausible. Surveys of graduate students show high levels of satisfaction with their advisor choice (Golde & Dore, 2001a, p. 37), and the analyses of graduate student attitudes reveal students expressing their gratitude to morally upstanding mentors who care for their students as people (Lovitts, 2001, pp. 128–129).19 Thus, this second version of the RSI objection is also inadequate.

3.2. The second limitation of RSI: self-defeating detection & punishment

3.2.1. Problems with detection & punishment in general
According to RSI, it is rational for A to rely on B when detection and punishment mechanisms exist that make it in B’s self-interest to be reliable.

19 One might wonder whether these positive responses reflect after-the-fact gratitude by students who merely acted as if they trusted and found that their gamble paid off. This may account for some of the satisfaction of some graduates in Lovitts’ (2001) survey. However, the students in Golde & Dore’s (2001a) survey were current students who were still at risk of exploitation by their advisor. I thank an anonymous reviewer for raising this issue.


As a corollary, RSI endorses the institution of detection and punishment mechanisms to deter untrustworthiness. But what if detection of unreliable behavior is unlikely or itself unreasonable to pursue? In this section, I argue that such situations exist and provide another area of collaborative practices in science that fall outside the scope of RSI.

The problem of unlikely detection has been discussed in previous arguments for the role of trust in scientists’ moral character. In the testimony literature, John Hardwig (1991) argues that the peer review and replication mechanisms for detection of fraud provide insufficient disincentives for self-interested scientists. Thus, he argues, scientists’ trust in the testimony of their peers is grounded in belief in the good moral character of their colleagues. In addition, Whitbeck (1995) argues for a similar conclusion in a brief discussion of collaboration. She points to the limitations on heads of laboratory to monitor all of the activities of their junior colleagues.20 Whitbeck also notes the limitations of interdisciplinary collaborators to detect unreliability by their collaborators, whose work they are not competent to judge (Whitbeck, 1995, p. 405).

There are a number of responses available. Some have disputed Hardwig’s premise that the detection mechanisms are insufficient to uncover fraud (Adler, 1994; Blais, 1987). This is an empirical question best determined by such proposals as an audit of scientific publications to determine the extent of fraud (cf. Rennie, 1989). Others might argue that the existence of scientific fraud means that the detection mechanisms are insufficient and should be bolstered. This argument could be used by the RSI theorist to point out that the low probability of detection of unreliable behavior is not an argument against their claim that reliance is rational in those cases where adequate detection mechanisms are in place. However, Hardwig (1991, p. 707) anticipates such a response and suggests the stronger premise that it is impossible to provide adequate detection mechanisms in science; for example, he says, ‘There are no “people-proof” institutions.’ I will not pursue that debate; instead I will argue that whether or not it is possible to people-proof institutions, sometimes there are good reasons not to even try. There are situations in which detection and punishment mechanisms are unreasonable, self-defeating, and counterproductive to epistemic goals. I begin by outlining the general arguments for this claim, and then I argue that they apply to scientific trust.

Numerous authors argue that excessively checking up on the trusted party is counter-productive. Baier (1994, p. 139) argues that excessive checking up on the trustee undermines healthy trust relationships. Similarly, O’Neill (2002) argues that recent decades have seen a problematic proliferation of a ‘culture of accountability’ in many areas of professional life. Demands for greater accountability have led to requirements that professionals provide detailed documentation of their activities and submit to regular audits. One of O’Neill’s concerns about the culture of accountability is that it undermines the proper aims of professions (O’Neill, 2002, p. 49). Using examples from education and health care, O’Neill (2002, pp. 54–57) shows that professionals who work under stringent detection mechanisms may well be motivated by fear of punishment to live up to the (often arbitrary) expectations and the burdensome reporting requirements placed on them, but they can grow to resent the attitude of suspicion in which they work and the ways in which constantly having to prove their trustworthiness distracts them from the real aims of their profession. Thus reliance on detection mechanisms can actually be ineffective in motivating valuable, trustworthy behavior and can even motivate untrustworthy behavior when people strongly resent being monitored.
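The worry can be summarized with a rough inequality of my own (the argument below states the same condition in prose). Let D be the additional motivation to behave trustworthily that monitoring and the threatened sanctions supply, and let R be the intrinsic trustworthy motivation lost when the monitored party feels distrusted and resentful. Reliance on detection and punishment mechanisms is self-defeating whenever

\[ R \;>\; D , \]

that is, whenever the motivational cost of signalling distrust exceeds the deterrent benefit of the sanctions.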

20 Barber (1987, p. 130) makes a similar point.


Addressing the other side of the coin, several ethicists have argued that while distrustful checking can motivate untrustworthiness, moral trust can motivate trustworthiness. Holton (1994), Jones (2004), and McGeer (2008) emphasize the rationality of therapeutic trust, ‘trust undertaken with the aim of bringing about trustworthiness’ (Jones, 2004, p. 5). Trusting someone, which involves forgoing attempts to reduce one’s vulnerability by checking up on the trustee (Jones, 2004, p. 8), can inspire the trustee to live up to one’s expectations. Why would the absence of moral trust (reflected in reliance on detection and punishment mechanisms) and the presence of moral trust have these opposing effects on the trustworthiness of the trustee? One reason is that to be distrusted and subjected to monitoring of one’s trustworthiness is a sign of disrespect. We do not feel motivated to live up to the trust of those who show us such disrespect. Social scientists have provided evidence that systems of external detection and punishment diminish intrinsic pro-social motivations when agents feel that their self-determination and self-esteem are decreased (Ostrom, 2005, p. 260). Perhaps the trustworthiness inspired by trust can be explained by a converse psychological mechanism that motivates us to live up to the trust of those who show signs of respecting us and taking us to be trustworthy.

For these reasons, it can sometimes be unreasonable to follow RSI’s advice to use detection mechanisms that would trigger punishment upon detection of untrustworthiness, thereby providing incentive for the trustee to act trustworthily. When is it unreasonable? When the detection mechanisms are likely to make the person being monitored feel distrusted enough to decrease their motivation to be trustworthy, and when that decrease in motivation is greater than the opposing motivation to be trustworthy provided by the fear of punishment. In other words, RSI breaks down as an explanation of trust in situations when mechanisms for making it in the self-interest of the trustee to be trustworthy are self-defeating.

The RSI theorist might object that I have overemphasized the role of excessive checking up in RSI, and she might argue that Pettit’s (1995) work on trust-responsiveness provides an RSI explanation for trust in situations when detection mechanisms are self-defeating. This RSI theorist argues that reliance on the self-interest of another can be rational, even in the absence of a stringent culture of accountability. The key is that, as Pettit points out, humans value the good esteem of others. This makes it rational for A to rely on B, because A’s reliance expresses A’s good esteem of B. In relying upon B, A shows that A thinks B has the praiseworthy trait of reliability. Since A’s good esteem is valued by B, it is in B’s self-interest to be reliable in order to keep it. Thus, ‘the act of trust can prove inherently motivating: can provide an incentive in the economy of regard for the trustee not to let me down’ (Pettit, 1995, p. 220). There is a problem with this RSI defense.
As McGeer argues, Pettit’s esteem-seeking mechanism is unstable because it depends upon B being unaware that A is relying upon B for these reasons:

[T]rustees cannot know or suspect that they are only being trusted because the trustor is relying on the likelihood of their having a desire for good opinion; for then trustees will know or suspect that trustors do not really hold them in high regard (as actually possessing trust-attracting virtues), but only imagine them to be manipulable because they possess the less admirable trait of seeking others’ good opinions. (McGeer, 2008, p. 16)

When B suspects this reason for A’s reliance, B loses the incentive to act reliably and is likely to react negatively, since no one likes to feel manipulated. Thus, while it may explain the rationality of trust in situations where the trustee is unlikely to suspect that the trustor is merely relying on her self-interest, this RSI explanation for collaboration cannot explain all cases of trust.


To summarize this subsection, when we look at RSI’s explanation of trust, we find that it relies on unstable and potentially self-defeating mechanisms. Again, this is not to deny that such explanations enjoy some explanatory power. When reliance on self-interest requires minimal, unobjectionable detection mechanisms, and when the relied-upon party is unlikely to feel that such mere reliance is a sign of low esteem, then it is rational to rely on the self-interest of another to motivate her to be reliable. But what about when these conditions do not apply, and how do these issues play out in scientific contexts? In the next section, I argue that we can explain some concrete instances of scientific trust as cases of moral trust in the moral character of scientists.

3.2.2. Problems with detection in science & the moral trust solution

Having argued previously that trust in the moral and social character of the gentleman scientist was central to the epistemic practices of early modern science (Shapin, 1994), Shapin (2008) challenges the view that such personal considerations are alien to the world of late modern science, which is often presented as part of the late modern expansion of trust in institutions rather than trust in individuals. To prove the importance of personal familiarity in late modern science, Shapin analyses the modes of organization of twentieth-century industrial scientists. By the mid-century, the majority of American scientists were working in industry (Shapin, 2008, p. 110). Industrial labs were social environments where collaboration was valued, and the composition of the teams was carefully constructed by managers who hired the scientists and set up the labs. Starting around the 1950s, these industrial managers (most of whom had earlier been bench scientists themselves) shared their experiences and exchanged craft lore in journals (Shapin, 2008, pp. 129–131). Since the managers were responsible for the selection, hiring, and day-to-day running of industrial labs, their reflections reveal much about the considerations that determined who gained entrance to industrial scientific collaborations, and the social-epistemic environment in which they worked.21

Issues of trust were perennial concerns, since all the parties were taking a risk in the creation of an industrial lab. The scientists who left secure, if not lucrative, academic positions for industry risked losing their status in the scientific community, since industrial scientists focused less on publishing and attending conferences. In addition, they risked losing the ability to choose their own research projects (Shapin, 2008, p. 139). Of course, not all scientists prized the kind of autonomy available in academia. But for those who still wanted some freedom to direct their research and participate in academic circles, trust in their managers not to exercise excessive control was needed. For their part, the industrial managers, as representatives of the corporation’s interests, risked wasting company resources on unsuccessful or unprofitable projects. Thus, managers needed to be able to trust the scientists not to waste company funds on pet projects. But what grounded this trust? Was it mere reliance on self-interest, or was it moral trust in moral motivations?
One might argue that the trust of the industrial managers was a matter of mere reliance on the self-interest of the researchers, since the scientists were company employees. Scientists who wasted company money or took lab resources to pursue unsanctioned research projects could be fired. Thus, one might argue that there were ample incentives to be a trustworthy industrial scientist. Surely this story accounts for some of the trust placed in the rising numbers of industrial scientists; however, taking it to be the whole story misses a key feature of the social-epistemic environment of industrial science. The academic scientists whom managers hoped to lure to industry were accustomed to significant freedom. Managers thus recognized that they would have trouble hiring and keeping first-rate scientists in a lab with a stringent culture of accountability. Shapin (2008, p. 154) cites a 1945 text as arguing that '"Punching of time-clocks, pettiness relative to time off, . . . criticism for apparently doing nothing but looking out of the window" rightly "incenses research men," and increases the possibility that the company's most valuable assets will walk out the door.' In sum, given the tendency of the scientists to react poorly to excessive monitoring and lack of autonomy, the trust placed in them cannot be explained by a simple story of mere reliance on punishment coupled with ongoing detailed monitoring to determine whether sanctions should be implemented.

So what did ground the managers' trust? Evidence of the moral character of industrial scientists provided such reasons for trust. Rather than placing scientists under strict control mechanisms, managers sought researchers who would 'consider that they are on their honor' to use their time well (qtd. in Shapin, 2008, p. 156). In describing the ideal research scientist, research managers included moral virtues: for example, one list includes 'honesty, accuracy, dependability, loyalty and cooperativeness' (qtd. in Shapin, 2008, p. 184). Managers did not just extol the merits of hiring such trustworthy scientists; they actually sought evidence of moral motivations during the hiring process. One 1948 letter of reference form is striking in its concern for the recommender to comment on the moral character of the applicant. Questions include 'Do you believe that the applicant is: Honest . . . Sober . . . Dependable . . .' and 'Is there anything which would tend to reflect unfavorably on applicant's character or reputation? . . .', and recommenders are asked to comment on the applicant's dependability ('Consider reliability, willingness, consistent industry, and honesty') (qtd. in Shapin, 2008, p. 185). Thus, while managers considered stringent control of scientists to be counterproductive, they considered seeking evidence of moral virtue to be worthwhile. Of course, most employers value virtues such as honesty and reliability in employees. But this further shows the importance of following the moral equivalence dictum that 'whatever is true of people in general had better apply to scientists as well': it draws our attention to the role of moral virtues in science.

Managers also sought evidence of moral virtue to deal with another issue of trust in the industrial lab. Not only did managers need to be able to trust their employee researchers, but the scientists also needed to be able to trust their fellow team members. Many at the time regarded industrial science as much more collaborative than academic science (Shapin, 2008, p. 182). Thus, part of the industrial manager's job was setting up a lab where scientists could trust each other. Importantly, this was not primarily achieved by setting up punitive mechanisms for uncooperative behavior. Instead, managers ensured that the team would function by looking for evidence of the moral virtues that foster collaboration.
Executives at a New York chemical company claimed that their top researchers 'should be "gentlemen," not in the snobbish sense, but in the broad meaning of the term, involving the qualities of fairness and consideration for others' (qtd. in Shapin, 2008, p. 183). The letter of recommendation form cited earlier also asks for comments on the applicant's cooperativeness ('Consider ability to get along with people in various capacities, willingness, loyalty') (qtd. in Shapin, 2008, p. 187).

21 One might worry about bias in these managerial texts and their potential role as apologiae for the expansion of industrial science. For Shapin's response, see (2008, pp. 130–131).

Several texts from the period endorse virtues of self-sacrifice for the good of one's colleagues: one manager defines 'scientific integrity' as 'the ability to consider another man's work as favorably as you would your own or another group's needs as favorably as you would those of your own group,' and another manager professes to want young researchers with the following virtues: '"honesty" first, then, "cooperation" which includes willingness to submerge personal desires in joint accomplishment' (both qtd. in Shapin, 2008, p. 185). In sum, managers looked for evidence of moral virtue to deal with the issues of trust that were central to the social-epistemic environment of the industrial lab.

Note that the considerations here are partly epistemic: the managers eschewed a stringent culture of accountability because it was deemed epistemically counterproductive; it would not produce a culture in which research would thrive. Similarly, well-functioning teams were valued because they were believed to produce more results. During this period, industrial managers extolled the epistemic virtues of collaboration and highlighted the epistemic vices of individualism in science (Shapin, 2008, pp. 190–194). Finding scientists with moral virtues suited to collaborative work not only enabled managers to hire employees who could get along in a collaborative corporate context; it also had epistemic significance for managers, since industrial collaboration was seen to have epistemic advantages.

Thus, mid-twentieth-century industrial science shows another instance where moral trust in the character of scientists laid the foundation for collaboration. In an environment in which it would have been counterproductive to rely on the detection and punishment mechanisms promoted by RSI, members of the scientific community can search for evidence of moral motivations in their colleagues. Moral trust, therefore, can provide reasons for collaboration when mere reliance fails.

I have now provided two arguments that RSI cannot account for all cases of trust in science. First, RSI cannot fully account for trust by the relatively powerless. Second, reliance on self-interest can be self-defeating. In addition, I have provided case studies (of junior scientists and of industrial science) showing that, in situations where RSI fails, members of the scientific community ground their moral trust in evidence of the moral virtue of their colleagues. This demonstrates that a full explanation of collaboration in science needs to go beyond RSI and also recognize MT.

4. Conclusion

Like the rest of humanity, scientists are complex beings. They are motivated by both self-interest and moral regard for their colleagues. They have self-interested desires for credit and reputation, and they have moral virtues and a distaste for taking advantage of others. Unlike many philosophers who have focused exclusively on the self-interested drives of scientists, members of the scientific community know that many of their colleagues have moral virtues. Accordingly, they do not merely rely on each other's self-interest to prevent exploitation of the risks inherent in collaboration; they also often morally trust each other. This moral trust is particularly salient in situations where powerlessness and self-defeating detection make it irrational to merely rely on one's colleagues.

Acknowledgements

I thank Cory Andrews, Carole Lee, Sandra Mitchell, Lisa Parker, Nicholas Rescher, Laura Ruetsche, and Kieran Setiya for helpful comments on earlier versions of this paper.
References

Adler, J. (1994). Testimony, trust, knowing. Journal of Philosophy, 91, 264–275.
Anderson, E. (2011). Democracy, public policy, and lay assessments of scientific testimony. Episteme, 8, 144–164.


Anderson, M., Ronning, E., De Vries, R., & Martinson, B. (2007). The perverse effects of competition on scientists' work and relationships. Science and Engineering Ethics, 13, 437–461.
Baier, A. (1994). Moral prejudices. Cambridge, MA: Harvard University Press.
Barber, B. (1987). Trust in science. Minerva, 25, 123–134.
Batson, C. D. (2002). Addressing the altruism question experimentally. In S. G. Post, L. G. Underwood, J. P. Schloss, & W. B. Hurlbut (Eds.), Altruism and altruistic love: Science, philosophy, and religion in dialogue (pp. 89–105). New York: Oxford University Press.
Blais, M. (1987). Epistemic tit for tat. Journal of Philosophy, 84, 363–375.
Campbell, E. G., Clarridge, B. R., Gokhale, M., Birenbaum, L., Hilgartner, S., Holtzman, N. A., et al. (2002). Data withholding in academic genetics. Journal of the American Medical Association, 287, 473–480.
Code, L. (2006). Ecological thinking: The politics of epistemic location. New York: Oxford University Press.
Committee on Publication Ethics (2008). Changes in authorship: (d) Request for removal of author after publication. Accessed 24 August 2012.
Fagen, A., & Wells, K. (2004). The 2000 national doctoral program survey. In D. Wulff & A. Austin (Eds.), Paths to the professoriate (pp. 74–91). San Francisco: Jossey-Bass.
Fallis, D. (2006). The epistemic costs and benefits of collaboration. The Southern Journal of Philosophy, 44, 197–208.
Faulkner, P. (2011). Knowledge on trust. New York: Oxford University Press.
Flanigan, A., Carey, L. A., Fontanarosa, P. B., Phillips, S. G., Pace, B. P., Lundberg, G. D., et al. (1998). Prevalence of articles with honorary authors and ghost authors in peer-reviewed medical journals. The Journal of the American Medical Association, 280, 222–224.
Fricker, E. (2002). Trusting others in the sciences: A priori or empirical warrant? Studies in History and Philosophy of Science, 33, 373–383.
Frost-Arnold, K. (2012). The cognitive attitude of rational trust. Synthese, 1–18. Available at 10.1007/s11229-012-0151-6.
Gadlin, H., & Jessar, K. (2002). Preempting discord: Prenuptial agreements for scientists. The NIH Catalyst, 10, 12.
Giddens, A. (1990). The consequences of modernity. Stanford, CA: Stanford University Press.
Golde, C., & Dore, T. (2001a). At cross purposes: What the experiences of doctoral students reveal about doctoral education. Philadelphia: A Report for The Pew Charitable Trusts. Accessed 24 August 2012.
Golde, C., & Dore, T. (2001b). Quotes from students. Accessed 24 August 2012.
Goldman, A., & Shaked, M. (1991). An economic model of scientific activity and truth acquisition. Philosophical Studies, 63, 31–55.
Grasswick, H. (2010). Scientific and lay communities: Earning epistemic trust through knowledge sharing. Synthese, 177, 387–409.
Hardin, R. (2002). Trust and trustworthiness. New York: Russell Sage Foundation.
Hardwig, J. (1985). Epistemic dependence. Journal of Philosophy, 82, 335–349.
Hardwig, J. (1991). The role of trust in knowledge. Journal of Philosophy, 88, 693–708.
Hobbes, T. (1994). Leviathan. Indianapolis: Hackett (First published 1651).
Holton, R. (1994). Deciding to trust, coming to believe. Australasian Journal of Philosophy, 72, 63–76.
Hull, D. (1988). Science as a process. Chicago: University of Chicago Press.
Hull, D. (1997). What's wrong with invisible-hand explanations? Philosophy of Science, 64, S117–S126 (Proceedings).
International Committee of Medical Journal Editors (ICMJE) (2010). Uniform requirements for manuscripts submitted to biomedical journals. Philadelphia, PA: International Committee of Medical Journal Editors.
Jones, K. (2004). Trust and terror. In P. DesAutels & M. U. Walker (Eds.), Moral psychology: Feminist ethics and social theory (pp. 3–18). New York: Rowman & Littlefield.
Kitcher, P. (1993). The advancement of science: Science without legend, objectivity without illusions. New York: Oxford University Press.
Kwok, L. S. (2005). The white bull effect: Abusive coauthorship and publication parasitism. Journal of Medical Ethics, 31, 554–556.
Ledford, H. (2008). Collaborations: With all good intentions. Nature, 452, 682–684.
Longino, H. (2002). The fate of knowledge. Princeton, NJ: Princeton University Press.
Lovitts, B. (2001). Leaving the ivory tower: The causes and consequences of departure from doctoral study. Lanham, MD: Rowman & Littlefield.
Mainous, A. G., III, Bowman, M. A., & Zoller, J. S. (2002). The importance of interpersonal relationship factors in decisions regarding authorship. Family Medicine, 34, 462–467.
McGeer, V. (2008). Trust, hope and empowerment. Australasian Journal of Philosophy, 86, 1–18.
Merton, R. K. (1973a). The normative structure of science. In R. K. Merton & N. W. Storer (Eds.), The sociology of science (pp. 267–278). Chicago: Chicago University Press.
Merton, R. K. (1973b). The Matthew effect in science. In R. K. Merton & N. W. Storer (Eds.), The sociology of science (pp. 439–459). Chicago: Chicago University Press.
O'Neill, O. (2002). A question of trust. New York: Cambridge University Press.
Ostrom, E. (2005). Policies that crowd out reciprocity and collective action. In H. Gintis, S. Bowles, R. Boyd, & E. Fehr (Eds.), Moral sentiments and material interests (pp. 253–275). Cambridge, MA: MIT Press.
Pettit, P. (1995). The cunning of trust. Philosophy & Public Affairs, 24, 202–225.


Railton, P. (1994). Truth, reason, and the regulation of belief. Philosophical Issues, 5, 71–93.
Rennie, D. (1989). How much fraud? Let's do an experimental audit. The AAAS Observer, 3, 4.
Rescher, N. (1989). Cognitive economy: An inquiry into the economic dimension of the theory of knowledge. Pittsburgh, PA: University of Pittsburgh Press.
Rolin, K. (2002). Gender and trust in science. Hypatia, 4, 95–118.
Scheman, N. (2001). Epistemology resuscitated: Objectivity as trustworthiness. In N. Tuana & S. Morgen (Eds.), Engendering rationalities (pp. 23–52). Albany, NY: State University of New York Press.
Shamoo, A., & Resnik, D. (2003). Responsible conduct of research. New York: Oxford University Press.
Shapin, S. (1994). A social history of truth. Chicago: University of Chicago Press.
Shapin, S. (2008). The scientific life. Chicago: University of Chicago Press.
Sober, E., & Wilson, D. S. (1998). Unto others: The evolution and psychology of unselfish behavior. Cambridge, MA: Harvard University Press.
Strange, K. (2008). Authorship: Why not just toss a coin? American Journal of Physiology—Cell Physiology, 295, C567–C575.
Strevens, M. (2006). The role of the Matthew effect in science. Studies in History and Philosophy of Science, 37, 159–170.
Strevens, M. (2011). Economic approaches to understanding scientific norms. Episteme, 8, 184–200.
Sztompka, P. (2007). Trust in science. Journal of Classical Sociology, 7, 211–220.
Thagard, P. (1997). Collaborative knowledge. Noûs, 31, 242–261.
Thagard, P. (2006). How to collaborate: Procedural knowledge in the cooperative development of science. Southern Journal of Philosophy, 44, 177–196.
Tollefsen, D. (2006). Group deliberation, social cohesion, and scientific teamwork: Is there room for dissent? Episteme, 3, 37–51.
Wagena, E. J. (2005). The scandal of unfair behaviour of senior faculty. Journal of Medical Ethics, 31, 308.
Whitbeck, C. (1995). Truth and trustworthiness in research. Science and Engineering Ethics, 1, 403–416.
Wilholt, T. (2009). Bias and values in scientific research. Studies in History and Philosophy of Science, 40, 92–101.
Woolston, C. (2002). When a mentor becomes a thief. Science Careers. Accessed 24 August 2012.
Wray, K. B. (2000). Invisible hands and the success of science. Philosophy of Science, 67, 163–175.
Wray, K. B. (2002). The epistemic significance of collaborative research. Philosophy of Science, 69, 150–168.
Wray, K. B. (2006). Scientific authorship in the age of collaborative research. Studies in History and Philosophy of Science, 37, 505–514.
Wray, K. B. (2007). Evaluating scientists: Examining the effects of sexism and nepotism. In H. Kincaid, J. Dupré, & A. Wylie (Eds.), Value-free science: Ideal and illusions? (pp. 87–106). Oxford: Oxford University Press.