Please cite this article as: Resnik, D.B., & Elliott, K.C. (2019). Value-entanglement and the integrity of scientific research. Studies in History and Philosophy of Science. https://doi.org/10.1016/j.shpsa.2018.12.011 (received 26 March 2018; revised 26 November 2018; accepted 24 December 2018).

Value-Entanglement and the Integrity of Scientific Research

*David B. Resnik, Bioethics, National Institute of Environmental Health Sciences, National Institutes of Health, 111 Alexander Drive, Research Triangle Park, NC, 27709, USA. [email protected].

Kevin C. Elliott, Associate Professor, Lyman Briggs College, Department of Fisheries and Wildlife, and Department of Philosophy, Michigan State University

*corresponding author

Abstract

Throughout much of the 20th century, philosophers of science maintained a position known as the value-free ideal, which holds that non-epistemic (e.g., moral, social, political, or economic) values should not influence the evaluation and acceptance of scientific results. In the last few decades, many philosophers of science have rejected this position by arguing that non-epistemic values can and should play an important role in scientific judgment and decision-making in a variety of contexts, including the evaluation and acceptance of scientific results. Rejecting the value-free ideal creates some new and vexing problems, however. One of these is that relinquishing this philosophical doctrine may undermine the integrity of scientific research if practicing scientists decide to allow non-epistemic values to impact their judgment and decision-making. A number of prominent philosophers of science have sought to show how one can reject the value-free ideal without compromising the integrity of scientific research. In this paper, we examine and critique their views and offer our own proposal for protecting and promoting scientific integrity. We argue that the literature on research ethics, with its focus on adherence to norms, rules, policies, and procedures that together promote the aims of science, can provide a promising foundation for building an account of scientific integrity. These norms, rules, policies, and procedures provide a level of specificity that is lacking in most philosophical discussions of science and values, and they suggest an important set of tasks for those working in science and values, namely, assessing, justifying, and prioritizing them. Thus, we argue that bringing together the literature on research ethics with the literature on science and values will enrich both areas and generate a more sophisticated and detailed account of scientific integrity.


Key words: science, philosophy, epistemic values, non-epistemic values, value-free ideal, integrity, responsible conduct of research

1. Introduction

Throughout much of the 20th century, most philosophers of science accepted the value-free ideal, which Douglas (2009) defines as the view that "the value judgments internal to science, involving the evaluation and acceptance of scientific results at the heart of the research process, are to be as free as humanly possible of all social and ethical values" (45). Adherents of the value-free ideal often acknowledge that "epistemic values", which promote the advancement of knowledge, should still be allowed to influence scientific reasoning (McMullin 1983). These values include commonly cited criteria for developing, testing, and accepting hypotheses and theories, such as simplicity, explanatory power, generalizability, testability, novelty, and rigor (Longino 1990; Resnik 2007; Douglas 2009; Elliott 2011b; Steel 2015).[1] Proponents of this position acknowledge the descriptive/historical thesis that non-epistemic values often do influence science but defend the normative/philosophical claim that scientists still ought to strive to make research value-free (Hudson 2016; Resnik 2007, 2009; Betz 2013, 2017). One of the main arguments in favor of value-freedom as an ideal is to protect the integrity of science, since allowing non-epistemic values to affect scientific judgment and decision-making could lead researchers to sacrifice epistemic aims for social, moral, or political ones, which could have disastrous consequences for science and society.[2] To protect the integrity of science, on this view, scientists should use only epistemic values in evaluating and accepting scientific results (Haack 1998).

[1] Some philosophers (e.g. Rooney 1992, 2017; Longino 1996) have challenged the distinction between epistemic and non-epistemic values, but we will not explore their arguments here. We will assume that something like this distinction is necessary to coherently maintain that scientists ought to minimize the impact of moral, social, or political values on their research (Steel 2010). If we are to maintain the thesis that the ideal of value-freedom in scientific inquiry does not rest on a conceptual mistake, we need to adopt something like the epistemic/non-epistemic value distinction (Douglas 2009).

[2] For example, from the late 1930s to the 1960s, Soviet scientists were forced to accept Trofim Lysenko's Lamarckian views of heredity because they supported the ideology of the Communist Party. Scientists who studied or taught Mendelian genetics could be imprisoned, exiled, or executed. Lysenkoism had a negative impact on Soviet science and stunted research in many fields for several decades (Resnik 2009).

Since the 1990s, however, many philosophers of science have argued that non-epistemic values can and should play an important role throughout scientific judgment and decision-making, including problem selection, research design, methodology, data analysis and interpretation, and hypothesis acceptance (see Table 1).[3] While there is some disagreement about precisely how non-epistemic values should impact scientific research, as well as the legitimate scope or extent of these value influences, many scholars and researchers reject the value-free ideal and accept a position that we will call value-entanglement because it emphasizes the multiple ways that non-epistemic values can and should impact science (Douglas 2009; Elliott 2017).

[3] The literature on science and values is vast. See, for example, Kuhn (1978); McMullin (1982); Laudan (1984); Anderson (2004); Biddle (2013); Brown (2013); de Melo-Martín and Intemann (2016); Douglas (2000, 2009); Elliott (2011b, 2017); Hicks (2014); Kincaid et al. (2007); Kourany (2010); Lacey (1999); Longino (1990); Resnik (1998, 2007, 2009); Schroeder (2017); Shrader-Frechette (1996); Steel (2010, 2015); Steele (2012).

Table 1: Areas of Scientific Judgment and Decision-Making that May be Influenced by Non-Epistemic Values (based on National Academy of Sciences 2002; Steneck 2004; Shamoo and Resnik 2015)[4]

• Problem Selection (e.g. studying a topic chosen by a private or public research sponsor)
• Literature Search (e.g. ignoring previous studies that contradict a hypothesis favored by one's values)
• Research Design (e.g. underpowering an experiment so that it is not likely to yield statistically significant results pertaining to the adverse effects of a chemical; choosing methods that minimize animal pain or suffering)
• Concept Formation (e.g. key terms, such as "intelligence", "race", "disease", and "aggression", may be defined in terms of social or other values)
• Data and Sample Collection, Testing, and Experimentation (e.g. protecting humans or animals involved in experiments)
• Data Recording and Record-Keeping (e.g. fabricating or falsifying data)
• Data Storage (e.g. transcribing and then destroying audiotapes of interviews with human subjects to protect confidentiality)
• Data Analysis (e.g. choosing a method of data analysis likely to yield results favorable to a hypothesis that promotes one's interests or ideology)
• Data Interpretation (e.g. interpreting data in a direction favorable to one's financial or other interests or values)
• Paper Writing (e.g. using language in a paper that promotes one's values or interests; hyping a hypothesis or idea in a paper)
• Publication (e.g. not publishing a study that goes against one's financial interests; selectively publishing data; communicating important public health results directly to the media without first submitting them to a journal; not naming someone as an author on a paper to avoid disclosing their conflict of interest)
• Peer Review (e.g. allowing one's financial, personal, political, or other interests to influence one's review of a paper)
• Data Sharing (e.g. not sharing data to protect one's financial interests; removing personal identifiers from data to protect the confidentiality of human subjects)
• Hypothesis/Theory Acceptance (e.g. accepting a hypothesis or theory because it promotes one's financial interests or moral or political values)

[4] This table is merely descriptive and illustrative; it does not pass judgment on whether particular non-epistemic value influences are problematic. Typically, proponents of the value-free ideal have tried to limit non-epistemic values to particular areas of scientific judgment (such as data and sample collection, testing, and experimentation), whereas critics of the value-free ideal have argued that a broader array of judgments (e.g. data analysis or data interpretation) can appropriately be influenced by non-epistemic values (Douglas 2009).

In recent years, a variety of arguments have been offered for abandoning the value-free ideal. Some critics of the ideal have argued that scientific reasoning must incorporate values of some sort, and it is typically impossible, impractical, or unhelpful to formulate a distinction between epistemic and non-epistemic values (Longino 1990; Rooney 1992, 2017). A second argument is that scientists are often engaged in accepting and rejecting hypotheses, and when doing so they typically have a responsibility to incorporate non-epistemic values in their judgments about how much evidence to demand (Douglas 2009, 2017; Elliott and Richards 2017). A third justification is that scientists are forced to make a wide variety of choices (about terminology, concepts, background assumptions, data analysis, data interpretation, and so on) that are underdetermined by purely epistemic considerations, and thus it is best for scientists to reflect on the social ramifications of these choices and the values that ought to guide them (Biddle 2013; Dupré 2007; Elliott 2017; Longino 1990).

Rejection of the value-free ideal creates some new and vexing problems, however. The first is that relinquishing this philosophical doctrine may undermine the integrity of scientific research if practicing scientists decide to allow non-epistemic values to impact their judgment and decision-making in problematic ways (Douglas 2009; Elliott and McKaughan 2014; de Winter 2016). For example, a scientist working for a pharmaceutical company could decide that she is justified in fabricating or falsifying clinical trial data to promote her company's values and interests (Resnik 2007), or a nutrition researcher politically opposed to genetically modified crops could decide that he is justified in manipulating his study design and data analysis in order to produce evidence to promote his ideology (Resnik 2015). Abandoning the value-free ideal could give free rein to researchers, private companies, non-profit organizations, or even government agencies with an interest in skewing, distorting, or corrupting science for moral, social, political, or other purposes (Pielke 2007; Resnik 2007, 2009; Elliott 2011b; Steel 2015). According to Douglas (2009), "There must be some important limits to the role values play in science" (87).

The second problem is that accepting science's inherent entanglement with non-epistemic values raises the issue of which (or whose) values should influence science (Elliott 2011b; Kitcher 2001; Schroeder 2017). Answering this question requires philosophers of science to engage in reflection on moral and political theory and consider the values that ought to influence science. Some have suggested that science should benefit society by, for example, promoting public health (Krimsky 2003; Elliott 2011b), while others have argued that science should advance democratic ideals, such as freedom and equality of opportunity (Kitcher 2001, 2011), or rational deliberation about policy issues (Pielke 2007; Resnik 2009). Since engaging in philosophical reflection about the non-epistemic values that ought to influence science raises a whole host of difficult normative issues that philosophers of science may not be adequately prepared to deal with, discussions and collaborations with philosophers who are working in moral or political theory would seem to be appropriate. In this paper, we will focus on the first problem with the value-entanglement view.[5]

[5] We intend to focus on the second problem in a future paper.

Many critics of the value-free ideal have tried to formulate criteria for distinguishing legitimate influences of non-epistemic values that do not damage scientific integrity from problematic influences of values that do harm scientific integrity. We will argue, however, that these accounts tend to be inadequately specified and tend not to address the full range of decision points through which values influence scientific practice. Fortunately, the literature on research ethics can provide a foundation for developing a more promising account of scientific integrity. On our view, an account of scientific integrity should start with adherence to a set of norms, rules, policies, and procedures that together promote the aims of science. These norms, rules, policies, and procedures are embodied and supported by a research environment that includes many different organizations and interested parties, including researchers, academic institutions, regulatory agencies, public and private sponsors, professional associations, and journals.

These norms, rules, policies, and procedures provide a level of specificity that is lacking in most philosophical discussions of science and values, and they suggest an important set of tasks for those working on science and values, namely, assessing, justifying, and prioritizing them. Thus, we argue that bringing together the literature on research ethics with the literature on science and values will enrich both fields and help generate a more sophisticated and detailed account of scientific integrity. Our defense of this approach will proceed as follows. In section 2, we will define scientific integrity; in section 3, we will consider and critique some proposed solutions to the problem of protecting scientific integrity; in section 4, we will articulate our view; in section 5, we will illustrate in a particular case how it provides opportunities for integrating work in research ethics with scholarship in science and values; and in section 6, we will address some objections to it.

2. Scientific Integrity

Before examining proposals for protecting the integrity of scientific research from problematic influences of non-epistemic values, it is essential to define scientific integrity. According to a common dictionary definition, integrity is "1. adherence to moral and ethical principles; soundness of moral character; honesty. 2. the state of being whole, entire, or undiminished…3. a sound, unimpaired, or perfect condition" (Dictionary.com 2018). The Stanford Encyclopedia of Philosophy distinguishes between five types of integrity: "(i) integrity as the integration of self; (ii) integrity as maintenance of identity; (iii) integrity as standing for something; (iv) integrity as moral purpose; and (v) integrity as a virtue" (Cox et al. 2017). The Encyclopedia of Ethics defines integrity as a type of moral virtue in which one's actions are consistent with one's moral identity (Diamond 2001). In the research ethics literature, integrity is usually equated with adherence to ethical and legal norms for the conduct of inquiry (Shamoo and Resnik 2015). For example, fabricating or falsifying data or manipulating statistics to achieve a particular result would violate the integrity of research (Shamoo and Resnik 2015).

As one can see from these definitions, integrity could apply to many different things, including individuals, physical objects, or organizations (Diamond 2001). In the debate about science and values, philosophers are concerned mostly with preventing or discouraging practices (or behaviors) that undermine the integrity of science as a discipline or profession (Douglas 2009; Douglas 2014; de Winter 2016). A definition of integrity in science should therefore focus on what is constitutive of science. If we characterize science in terms of its aims or goals, then practices that interfere with the attainment of science's aims would diminish the integrity of science (de Winter 2016; de Winter and Kosolosky 2013; Douglas 2014) (see Figure 1).

Philosophers have proposed a number of different aims for science, including epistemic goals (e.g. knowledge, truth, error avoidance, explanation, understanding) and non-epistemic goals (e.g. economic growth and promotion of human flourishing) (Resnik 1998; Kitcher 2001; Elliott and McKaughan 2014). Since the main concern with abandoning the value-free ideal is the incursion of non-epistemic values into science, for the purposes of this paper we will define scientific integrity in terms of science's epistemic aims. Moreover, in order to avoid begging questions about which epistemic aims are most important, we will focus on the minimal and widely agreed-upon goal of empirical adequacy (Douglas 2009; van Fraassen 1980).[6] However, one could formulate somewhat different accounts of scientific integrity if one took different aims to be fundamental.

[6] By focusing on empirical adequacy, we are not denying that truth might be an additional goal for science, and thus we are not taking a stand on debates about scientific realism.

With these considerations in mind, we can distinguish between two different senses of scientific integrity, outcome integrity and process integrity. Outcome integrity consists in the attainment of the goals of science; process integrity consists in conformity to norms that tend to promote those goals. One can say that science has outcome integrity insofar as it tends to produce outcomes (i.e. statements, beliefs, hypotheses, or theories) that are empirically adequate, and it lacks outcome integrity insofar as it produces outcomes that are not empirically adequate. Science has process integrity insofar as it conforms to norms that promote the production of empirically adequate outcomes, and it lacks process integrity insofar as it deviates from those norms.[7] Although it is tempting to think that producing empirically adequate outcomes is all that matters to scientific integrity, both senses of integrity are important. For example, suppose that a scientist is convinced that his hypothesis concerning the structure of a protein is empirically adequate but needs more data to have his article accepted for publication. If that scientist fabricates the remaining data, we would say that he has violated the integrity of science, even if his hypothesis is ultimately confirmed, because he has used improper means to obtain his results (de Winter 2016). Conversely, we would say that a scientist promotes integrity if she carefully follows norms for the production of empirically adequate conclusions, even if the hypothesis she defends is ultimately refuted.

[7] This is similar to de Winter's definition of epistemic integrity, which focuses on non-deceptiveness. For de Winter (2016), practices that lead to deceptiveness undermine the epistemic integrity of science. It is also worth noting that Longino (1990) also focuses on norms embodied in the social structure of science.

3. Proposals for Protecting Scientific Integrity

3.1 Direct vs. Indirect Roles for Values

We shall now consider several proposals that opponents of the value-free ideal have made for protecting and promoting scientific integrity. One strategy for preserving scientific integrity is to distinguish different roles that values can play in scientific research. For example, in her influential book, Science, Policy, and the Value-Free Ideal, Douglas (2009) emphasizes that values may impact science in various ways but holds that only some of these threaten its integrity. She draws a distinction between direct and indirect roles for values in scientific decision-making to identify problematic value influences and claims that "[b]y distinguishing between direct and indirect roles for values, the fundamental integrity of science is protected" (112). Although other philosophers (e.g. Hempel 1965; Heil 1983; Kitcher 2001) have appealed to or articulated something like this distinction, Douglas' account merits closer examination because it has had considerable influence on the science and values debate (Steel 2015).

Douglas formulates her distinction in several different ways (Elliott 2011a; Elliott 2013).[8] For the purposes of this paper, we will focus on her logical or epistemic formulation, which holds that values in a direct role "operate much in the same way as evidence normally does, providing warrant or reasons to accept a claim" (Douglas 2009, 96; see also Douglas 2018, 3). Douglas does not think that values should operate in a direct role when scientists are accepting or rejecting hypotheses. She (2009) cautions that values "should not be construed as providing epistemic support for a claim" (97) and "should never suppress evidence, or cause the outright rejection (or acceptance) of a view regardless of evidence" (113). For example, if one decided to accept a hypothesis because it promotes one's financial interests, regardless of the evidence concerning its truth, then one would be allowing values to function in a direct epistemic role.

[8] Douglas also says that values have a motivational component. Under a different formulation of the distinction, values in a direct role "determine our decisions in and of themselves, operating as stand-alone reasons to motivate our choices" or "provide direct motivation for the adoption of a theory" (Douglas 2009, 96). However, motivating belief is logically distinct from functioning as a reason for belief (Elliott 2011a). For example, if a close family member dies, I may be motivated to believe in an afterlife because I want to see that person again. However, this would not necessarily be a reason, in the aforementioned sense, for believing in an afterlife.

According to Douglas, values operate in an indirect role when they "act to weigh the importance of uncertainty, helping to decide what should count as sufficient" evidence or support for making a choice (2009, 96, emphasis in original). Whereas Douglas thinks that values should not be allowed to play a direct role in assessing hypotheses or interpreting evidence, she argues that values acting in the indirect role can "completely saturate science, without threat to the integrity of science" (96). In an indirect role, values may "serve as a reason to accept or reject the current level of uncertainty, or to make the judgment that the evidence is sufficient in support of a choice, not as reason to accept or reject the options per se" (2009, 97). Values can help us select the standards we should use to assess the evidence or strategies for interpreting data, but they should not function as evidence or data. For example, scientists who decide to require a high level of evidence to accept a hypothesis concerning the safety of a new drug because they want to protect the public's health would be using values in an indirect role.

Incidentally, Douglas prefers not to speak of epistemic values and instead divides them into cognitive values and epistemic criteria (2009, 92; see also Douglas 2013). The cognitive values are qualities like simplicity, explanatory power, and scope, which assist scientists in their thinking but do not ensure the truth or reliability of theories. The epistemic criteria are minimal characteristics like internal consistency and empirical adequacy, which apply to all acceptable theories. Douglas does not think that any values, including cognitive ones, should serve in a direct role when evaluating the warrant for hypotheses or theories. Instead, values like simplicity and explanatory power should only serve in an indirect role.[9] Thus, while many philosophers (e.g., Resnik 2007; Steel 2010) view the distinction between epistemic and non-epistemic values as relevant to protecting the integrity of science, Douglas does not. What matters is the role that values play in judgment and decision-making: "maintaining a distinction between the kinds of values to be used in science is far less important than maintaining a distinction in the roles that values play" (Douglas 2009, 98).

[9] It is worth noting that many philosophers, such as Resnik (2007) and Steel (2015), would regard empirical support as a type of epistemic value. Thus, a scientist who accepts a hypothesis because it is supported by the evidence would be appealing to an epistemic value.
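To make the contrast between the two roles concrete, consider a toy decision rule of our own devising (this is not Douglas's formalism, and all probabilities and costs below are invented for illustration). In an indirect role, values set how much evidence counts as sufficient; they never alter the evidence itself. A minimal sketch in Python:

```python
# Toy decision rule (our illustration, not Douglas's own account): values fix
# the evidence threshold (indirect role) but never count as evidence (direct
# role). All numbers are hypothetical.

def sufficient_evidence(p_hypothesis: float,
                        cost_false_accept: float,
                        cost_false_reject: float) -> bool:
    """Accept only if the expected cost of accepting, given the evidence-based
    probability, is lower than the expected cost of rejecting."""
    expected_cost_accept = (1 - p_hypothesis) * cost_false_accept
    expected_cost_reject = p_hypothesis * cost_false_reject
    return expected_cost_accept < expected_cost_reject

# The evidence alone fixes p_hypothesis; non-epistemic values fix only the costs.
p = 0.9  # probability that a new drug is safe, given the trial evidence

# Public-health values: wrongly declaring the drug safe is judged far worse.
print(sufficient_evidence(p, cost_false_accept=100, cost_false_reject=1))  # False
# With milder stakes, the very same evidence suffices for acceptance.
print(sufficient_evidence(p, cost_false_accept=5, cost_false_reject=1))    # True
```

On this toy picture, a direct role for values would amount to inflating p itself because one wants the hypothesis to be true, which is precisely what Douglas rules out.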

Since other writers (e.g. Steel 2010; Elliott 2011b; Elliott 2013) have critiqued the cogency of Douglas's distinction between direct and indirect roles for values in science, we will not rehash their arguments here. Instead, our criticism of her view will focus on whether her distinction between direct and indirect roles for values is adequate for protecting the integrity of science. Our short answer to this question is that it is not: values can still threaten the integrity of science even when they function in an indirect role (Elliott 2011a; Steel 2015). For example, Daniel Steel and Kyle Whyte (2012) suggest that Douglas's approach would be problematic in a case where a pharmaceutical company thought that the standards of evidence for accepting the results of a clinical trial should be exceptionally high (perhaps because the trial produced results that conflicted with the company's financial interests). This would apparently be an indirect role for values, but Steel and Whyte argue that it could be epistemically problematic for the company to dismiss the study and fail to publish it, thereby depriving the scientific community of important information that could be incorporated in subsequent meta-analyses.

Another worry about Douglas's distinction is that she allows values to play a direct role in the early stages of scientific inquiry, when scientists are deciding what topics to study and how to study them. However, many writers (e.g. Krimsky 2003; Resnik 2007; Shrader-Frechette 2007; Michaels 2008; Holman and Elliott 2018) have described ways that private companies have undermined the integrity of science to promote their financial interests by strategically manipulating these early stages of inquiry, including study designs (Elliott and McKaughan 2009; Holman and Bruner 2017). For example, a chemical company interested in producing evidence that one of its products is not dangerous to human health can underpower[10] an experiment in which rodents are exposed to the product so that the study is not likely to yield statistically significant results pertaining to adverse effects (Resnik 2007). Alternatively, a chemical company might decide not to measure an important outcome related to health, such as changes in blood calcium levels, so that the experiment would yield no evidence of this effect. Pharmaceutical companies have suppressed data and results related to adverse effects of their drugs in order to promote their products. For example, Merck failed to include data related to cardiovascular risks of its drug Vioxx in a paper published in the New England Journal of Medicine (Resnik 2007). Other companies have refrained from publishing clinical trials that did not yield favorable results (Resnik 2007; Holman and Elliott 2018). Finally, companies can skew the research record in a direction favorable to their interests by combining funding and publication strategies (Resnik 2007; Michaels 2008). For example, if a drug company funds 10 clinical trials of its product and only publishes 5 studies that produce positive results, the research record will reflect this bias.

[10] A study is underpowered if the sample size is too small to produce statistically significant evidence of a particular effect in a population. Underpowered studies are prone to Type II errors, i.e., failures to detect an effect that is actually present.
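The arithmetic behind these strategies is easy to exhibit. The following simulation is our own hypothetical illustration (the effect size, significance level, sample sizes, and trial counts are invented, not drawn from the cases above): when a real adverse effect is present, a small study usually fails to reach statistical significance, and publishing only the "no effect" trials skews the published record further.

```python
# Hypothetical illustration (our own; effect size, alpha, and sample sizes are
# invented): how underpowering and selective publication can each obscure a
# real adverse effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.5  # assumed standardized adverse effect of exposure
ALPHA = 0.05
N_SIMS = 2000

def detection_rate(n_per_group):
    """Fraction of simulated studies in which the true effect is significant."""
    hits = 0
    for _ in range(N_SIMS):
        control = rng.normal(0.0, 1.0, n_per_group)
        exposed = rng.normal(TRUE_EFFECT, 1.0, n_per_group)
        hits += stats.ttest_ind(exposed, control).pvalue < ALPHA
    return hits / N_SIMS

# Underpowering: the small study usually misses the effect (a Type II error).
print("power, n=10 per group: ", detection_rate(10))   # roughly 0.18
print("power, n=100 per group:", detection_rate(100))  # roughly 0.94

# Selective publication: fund 10 small trials and publish only those that found
# no significant harm; the published record then understates the hazard.
pvalues = [stats.ttest_ind(rng.normal(TRUE_EFFECT, 1.0, 10),
                           rng.normal(0.0, 1.0, 10)).pvalue
           for _ in range(10)]
published = [p for p in pvalues if p >= ALPHA]  # suppress the significant harms
print(f"published {len(published)} of 10 trials, all reporting 'no effect'")
```

Each individual study here is analyzed "correctly"; the bias enters entirely through the design and publication choices made before and after the analysis.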

To her credit, Douglas (2009) is aware that decisions made at earlier stages of inquiry must be "handled with care" so as not to "determine the outcome of research" (101). However, the distinction between direct and indirect roles for values is relatively unhelpful for preventing problems of this sort. Douglas remarks that "the best we can do is to acknowledge that values should not direct our choices in the early stages of science in such a pernicious way" (Douglas 2009, 101). Douglas (2009) focuses most of her attention on problematic roles for values at the last stage of inquiry, i.e., hypothesis and theory acceptance. Clearly, allowing non-epistemic values to supplant empirical evidence threatens the integrity, objectivity, and reliability of research. However, as we have seen, scientists and sponsors can stack the deck for or against a hypothesis or theory long before it reaches this final stage. An approach to protecting the integrity of science should consider strategies for preventing non-epistemic values from biasing, distorting, or corrupting the research enterprise at many different stages of inquiry (Resnik 2007; Elliott and McKaughan 2009; Holman and Bruner 2017; Steel 2015). In part for this reason, Douglas (forthcoming) herself has recently argued that a comprehensive approach to preserving scientific integrity will require a combination of strategies, including distinguishing different types of values, distinguishing roles for values, choosing the right values, and promoting adequate functioning of the scientific community and institutions. In section 4 of this paper, we argue that the norms, rules, policies, and procedures discussed by the literature in research ethics can provide an important foundation for scientific integrity alongside the other items that Douglas lists.

In his book, Interests and Epistemic Integrity in Science, Jan de Winter (2016) has developed an account of epistemic integrity based on a distinction between direct and indirect value influences that has much in common with the work of Douglas (2009). According to de Winter (2016), a value influence is direct if it operates as a stand-alone reason for a decision; an influence is indirect if it merely influences a decision as one of multiple reasons. For example, if drug company scientists interpret data from a clinical trial with the sole aim of promoting the company's profits, the company's financial interests (and perhaps also the scientists' interests) would be functioning as a stand-alone reason for the interpretation. According to de Winter (2016), direct influences of non-epistemic values on the characterization of data, interpretation of evidence, or acceptance of theories (or hypotheses) should be avoided in science, but indirect influences are harmless and even desirable.

Although de Winter distinguishes between direct and indirect influences and Douglas distinguishes between direct and indirect roles, this difference does not help clarify the former's distinction. The difference between a stand-alone reason and a mere influence is unclear, since a mere influence may still threaten the integrity of science by exerting a disproportionately high impact on decision-making even if it does not function as a stand-alone reason. For example, suppose that scientists working for a drug company are interpreting data pertaining to a clinical trial of one of its products. If they allow the company's interests to exert a strong influence over their data interpretation, we would still say that this would threaten scientific integrity, even if the influence is not a stand-alone reason. Additionally, as we have seen, non-epistemic values can threaten the integrity of science even when they affect judgments and decisions relating to problem selection, publication, research design, and aspects of science other than characterization of data, interpretation of evidence, or acceptance of hypotheses. While de Winter recognizes the importance of promoting integrity in these other areas, his proposal, like Douglas', does not adequately address them.

3.2 Values in Science

Unlike Douglas and de Winter, Daniel Steel (2010, 2015) does not rely on a distinction between direct and indirect value influences to protect the integrity of science. Instead, Steel leans heavily on the distinction between epistemic and non-epistemic values (see also Steel and Whyte 2012). Epistemic values, according to Steel (2015), "promote truth-seeking aims" (161) and include such desiderata as empirical accuracy, internal consistency (i.e., logical consistency), external consistency (i.e. consistency with confirmed background beliefs and theories), testability, and simplicity. Non-epistemic values include moral, social, economic, and political values. Steel argues that to protect the integrity of science, researchers should adopt what he calls the "values in science standard", i.e., "[n]on-epistemic values should not conflict with epistemic values in the design or interpretation of scientific research that is practically feasible and ethically permissible" (2015, 178). While it is not entirely clear what Steel (2015) means by "conflict", it is reasonable to assume that he means something like "trump" or "supersede", since elsewhere he says that non-epistemic values should not "override" epistemic values (171). For example, a scientist who ignores empirical evidence against a hypothesis when interpreting data in order to promote her financial interests would be violating this standard because she would be allowing her economic interests to supersede empirical accuracy. A scientist who designs an experiment in order to predetermine an outcome that promotes his political opposition to genetically modified foods would also be violating this standard because he would be allowing his political ideology to override empirical accuracy.

Steel's view is more comprehensive than Douglas's and de Winter's because he explicitly recognizes that non-epistemic values can threaten the integrity of science at earlier stages and articulates a standard for preventing this from happening. Scientists should avoid appealing to non-epistemic values not only in interpreting data and accepting hypotheses or theories but also in designing experiments that are practically feasible and ethical. For example, a concern for human or animal welfare may impact experimental design, but not a desire to promote a particular commercial product. However, one might worry that Steel is not entirely explicit about addressing all the stages of science where integrity is at risk, including problem selection, publication, data sharing, data analysis, or peer review. A company could bias the research record solely through its funding, publication, and data sharing decisions (Holman and Bruner 2017; Elliott and McKaughan 2009; Resnik 2007).

Another problem with Steel's view is that there may be situations in which the experimental design choices made by researchers threaten the integrity of science even though it is not clear that non-epistemic values are overriding epistemic ones. For example, suppose that a chemical company's scientists decide not to measure a particular adverse effect in an animal toxicology study, such as changes in blood calcium levels, so that their results will not show evidence of this effect. Although the decision is guided by non-epistemic values, it is not clear that these values have overridden epistemic ones, since epistemic values alone in this case might not determine which outcomes researchers should measure. Such choices may be guided by the researchers' scientific aims (i.e., the questions they are trying to answer) or perhaps by a particular agenda they are trying to promote (e.g., a company's financial interests), but it is not clear that the epistemic values included in Steel's framework would require scientists to measure particular outcomes. One can generate results that are true or empirically adequate even though they do not capture the whole truth.

Similar problems can arise when researchers choose a method of analyzing the data that meets epistemic and methodological standards but nevertheless is biased toward a particular result. For example, researchers funded by a pharmaceutical company might decide to use the on-study approach in analyzing data from a Phase III clinical trial of a new drug as opposed to the intent-to-treat approach. In the on-study approach, one analyzes data only from human subjects who have completed the entire study and have complied with protocol requirements, such as taking the medication as directed. In the intent-to-treat approach, one analyzes data from all subjects who enroll in the study, including those who dropped out or were withdrawn to protect them from harm or because they were not complying with the protocol (Gupta 2011). The on-study approach tends to produce more accurate information concerning efficacy, because it focuses on data from subjects who have completed the study and complied with the protocol, while the intent-to-treat approach tends to produce more complete data concerning safety, because it includes data from subjects who did not finish the entire study because they were withdrawn to protect them from harm or dropped out because they could not tolerate side effects (Gupta 2011). Epistemic goals related to truthfulness do not determine which approach the researchers should use for analyzing the data, because they do not determine whether one should be more interested in producing results related to efficacy or safety. Nevertheless, one might argue that deciding to use the on-study approach in order to minimize safety concerns with a new drug would be inappropriate because it would give higher priority to a company's financial interests in obtaining regulatory approval as opposed to public health concerns.
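A toy calculation of our own (all subject data below are invented) shows how this choice of analysis population, by itself, can shift a trial's apparent safety profile:

```python
# Hypothetical sketch (invented data): the same trial can yield different
# adverse-event rates depending on which analysis population is chosen.
# 'completed' marks subjects who finished the protocol; 'adverse' marks a
# safety event. Dropouts here often left precisely because of side effects.
subjects = [
    {"completed": True,  "adverse": False},
    {"completed": True,  "adverse": False},
    {"completed": True,  "adverse": True},
    {"completed": True,  "adverse": False},
    {"completed": False, "adverse": True},   # withdrawn after side effects
    {"completed": False, "adverse": True},   # dropped out, couldn't tolerate drug
    {"completed": True,  "adverse": False},
    {"completed": False, "adverse": True},   # dropped out, couldn't tolerate drug
]

def adverse_rate(group):
    """Share of subjects in the group who experienced a safety event."""
    return sum(s["adverse"] for s in group) / len(group)

on_study = [s for s in subjects if s["completed"]]  # completers only
intent_to_treat = subjects                          # everyone who enrolled

print(f"on-study adverse-event rate:        {adverse_rate(on_study):.0%}")         # 20%
print(f"intent-to-treat adverse-event rate: {adverse_rate(intent_to_treat):.0%}")  # 50%
```

Both numbers are accurate given their respective analysis rules; the value-laden step is deciding which population the question about the drug should be answered over.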

The problem for scientific integrity illustrated by both of these examples is that researchers may make decisions at various stages of inquiry that tend to bias the results in one direction or another even though the results are accurate, truthful, and internally and externally consistent, given the experimental design and method of data analysis (Elliott and McKaughan 2009; Holman and Bruner 2017; Resnik 2007). Thus, Steel's approach does not adequately protect scientific integrity because it does not include a sufficiently complete and specified account of bias or how to avoid it. For example, one of the most important ways of minimizing bias is to subject research to rigorous peer review when planning a study or publishing results, since reviewers can identify subtle methodological concerns or misleading ways of presenting study results (Shamoo and Resnik 2015).

3.3 Transparency, Representativeness, and Engagement

One of the authors of this paper, Kevin Elliott, offers a different strategy for preserving scientific integrity (Elliott 2017, 2018b). He acknowledges that non-epistemic values can and should affect scientific judgment and decision-making related to many different aspects of research, such as problem selection, research design, and so on (see Table 1). Unlike Douglas (2009) and de Winter (2016), Elliott does not rely on a distinction between direct and indirect roles or influences to protect the integrity of science. He also does not try to prevent epistemic values from being overridden by non-epistemic ones; in fact, he argues that non-epistemic values can appropriately "trump" epistemic values under some circumstances (Elliott and McKaughan 2014; see also Brown 2013, 2017). Instead, his recent book, A Tapestry of Values, focuses on three conditions for incorporating non-epistemic values into science in a legitimate fashion: transparency, representativeness, and engagement (Elliott 2017). Regarding the first condition, he says that "scientists should be as transparent as possible about their data, methods, models, and assumptions so that others can identify the ways in which their work supports or is influenced by particular values" (2017, 14; see also Elliott and Resnik 2014). Regarding the condition of representativeness, he says, "[w]hen clear, widely recognized ethical principles are available, they should be used to guide the values that influence science. When ethical principles are less settled, science should be influenced as much as possible by values that represent broad societal priorities" (2017, 14-15). Regarding the condition of engagement, he says that value influences "should be scrutinized through appropriate processes of engagement between different scholars and other stakeholders" (2017, 10). Elliott suggests that when these conditions are met, values can typically be incorporated into science in an appropriate fashion.

The notion of transparency plays a particularly important role in Elliott's approach to protecting scientific integrity, not only in his book but in other articles too (Elliott and McKaughan 2014; Elliott and Resnik 2014). For example, when he and Daniel McKaughan (2014) argue that non-epistemic values can sometimes legitimately override epistemic values, they respond to concerns about losing scientific integrity by insisting that scientists need to try to be clear about the value-laden decisions they are making and their reasons for doing so (see also de Winter 2016). By doing so, others can determine whether they would employ similar value judgments or whether they would employ different value judgments and therefore arrive at different conclusions. On his view, disclosure of value assumptions not only protects but also enhances scientific integrity:

    Whether explicitly or implicitly, scientists make value judgments when they decide how to handle these choices [e.g. deciding what topics to study and how best to study them]. While proponents of the value-free ideal worry that incorporating values in science will detract from scientific objectivity, we have seen that objectivity can actually be enhanced when implicit value judgments are brought into the open and subjected to thoughtful scrutiny and deliberation…scientists can legitimately incorporate values into their work if they are sufficiently transparent about them (Elliott 2017, 177).

Unfortunately, while Elliott's three conditions are helpful, they are insufficient by themselves for preserving scientific integrity. For one thing, he does not precisely specify whether they are necessary or sufficient; he regards them more as rules of thumb or norms (Elliott 2018b). This would not be problematic if he provided enough guidance about how to apply them, but his book provides only minimal guidance in specific cases about which conditions must be met in order to ensure integrity and how to determine whether those conditions have actually been met. For example, while his representativeness condition calls for making value decisions in ways that accord with social and ethical priorities, it is unclear how to handle cases in which there is significant social and ethical disagreement. Elliott would call for more engagement in response to these disagreements, but he does not provide rules for achieving appropriate engagement or guidelines for determining when engagement has been successful (Kourany 2018). The transparency condition is also vague; scientists can never be perfectly transparent about the value judgments involved in their research, and it is unclear exactly how much transparency is needed in order to maintain scientific integrity.

Because Elliott's three conditions remain unspecified, it is not clear that he can show when value influences or biases violate scientific integrity. For example, although he asserts that researchers and university administrators should find ways to protect science from "powerful interest groups" (Elliott 2011b, 191), it is not clear that he can distinguish problematic cases of biasing research to promote a company's financial interests from appropriate efforts to influence research in order to promote societal goals, such as public health. Elliott needs a clear account of which (or whose) values should influence science (Steel 2015) and how to achieve adequate transparency, representativeness, and engagement to preserve scientific integrity.

4. Protecting Scientific Integrity: A Way Forward

In the previous section, we considered three strategies for rejecting the value-free ideal while protecting and promoting scientific integrity. While these proposals identify some problematic influences on scientific judgment and decision-making, they do not adequately address all of the ways that non-epistemic factors may bias or corrupt research, nor do they articulate fully comprehensive strategies for promoting scientific integrity. In other words, they are incomplete, and we suggest that they can be fruitfully elaborated by integrating them with the norms, rules, policies, and procedures discussed in the literature on research ethics.

In this section of the paper, we will describe our own proposal for protecting and promoting scientific integrity. The proposal makes use of the following insights that emerge from our critique of the other proposals. First, scientific integrity may be at risk throughout the entire research process. While previous philosophical accounts of scientific integrity have tended to focus on protecting integrity in the later stages of inquiry (e.g. data interpretation, hypothesis acceptance), it is also important to consider how to protect integrity at other stages (e.g. problem selection, research design, data analysis, publication, peer review). Second, previous accounts of scientific integrity do not provide enough practical guidance for addressing real-world applications. They tend to focus on general philosophical principles (e.g., not overriding epistemic values or promoting conditions like engagement) but do not provide details about how to achieve those principles. Third, building on the previous point, practical guidance for achieving scientific integrity will need to include guidance not only for individual scientists but also for academic institutions, government agencies, professional societies, and scientific journals (Douglas 2018).

Like Douglas and the other writers discussed in this paper, we reject the value-free ideal and acknowledge that non-epistemic values can and should play an important role throughout the different stages of scientific inquiry. However, we do not think that protecting scientific integrity hinges on a single philosophical distinction (such as direct vs. indirect value influences). Instead, as Douglas (forthcoming) suggests, an adequate account of scientific integrity may require bringing together a number of different elements, and we argue that the literature on research ethics (specifically, its focus on norms, rules, policies, and procedures) can play a foundational role in building this sort of pluralistic account. Returning to our account of scientific integrity provided in Section 2, we contend that integrity is a matter of taking appropriate steps to ensure that the aims of science, such as empirical adequacy, are met. Building on the research ethics literature, we suggest that these aims can be promoted by cultivating a research environment that embodies a commitment to norms for inquiry (National Academy of Sciences 2002; Steneck 2004; see Table 2). This environment includes research institutions, research sponsors, regulatory agencies, professional associations, journals, and other organizations or parties with a stake in scientific research (see Figure 1). These institutions establish rules, policies, and procedures that guide scientific judgments and decisions in accordance with the norms (see Figure 2 and Table 3).

The advantage of building an account of scientific integrity on this foundation is that it provides the additional specification and practical guidance that is lacking in previous accounts stemming from the philosophy of science. The norms, rules, policies, and procedures that promote the aims of science are embedded in a range of institutions, many of which provide guidance in the form of ethics statements, policy statements, or legal statutes. Thus, although integrity ultimately comes down to judgments and decisions made by individual researchers, scientific organizations play an instrumental role in promoting good science (National Academy of Sciences 2002; Shamoo and Resnik 2015). Organizations can advance the cause of research integrity by developing, implementing, and enforcing rules, policies, and procedures that support norms for inquiry and by providing the material and human resources needed for good science (National Academy of Sciences 2002).

Table 2: Norms for Scientific Inquiry (based on Shamoo and Resnik 2015)

• Honesty (honestly communicating with other scientists and the public)
• Evidentiary support (drawing inferences based on empirical evidence and/or sound logical, statistical, or mathematical arguments)
• Rigor (subjecting research to rigorous tests; critically scrutinizing research; considering the limitations of one's methods and results)
• Objectivity (minimizing or controlling experimental, theoretical, and other biases)
• Carefulness (minimizing human and instrumental errors; keeping good research records)
• Openness/transparency (disclosing methods and assumptions; sharing data, results, ideas, and materials)
• Intellectual freedom (freedom of thought, discussion, debate, publication)
• Fair sharing of credit (giving proper credit on publications and other scientific works; acknowledging previous research; respecting others' intellectual property)
• Mutual respect (treating one's colleagues and students with respect)
• Social responsibility (promoting benefits and minimizing or avoiding harms from one's research)
• Protection of human and animal research subjects (only for research disciplines that work with animals or humans)

Figure 1: Elements of the Research Environment that Guide Behavior in Accordance with Scientific Norms. [Diagram: elements guiding researcher behavior: research institutions (academic, private laboratories, government laboratories); legal oversight (regulatory agencies, elected representatives, courts); professional associations; professional journals; research sponsors (government, private); and the public and the media.]

Figure 2: Relationship between Scientific Aims, Norms, and Rules. [Diagram: aims of science → norms for scientific inquiry → rules, policies, and procedures → scientific judgments and decisions.]

While many of these norms, rules, policies, and procedures directly promote the attainment of fundamental aims such as empirical adequacy, others promote this goal indirectly. Prohibitions on data fabrication and falsification promote the attainment of empirical adequacy in a fairly direct way by discouraging researchers from asserting or publishing statements known to be false (de Winter 2016). Policies and procedures concerning peer review do not promote the attainment of empirical adequacy as directly, because statements can be subjected to peer review without turning out to be empirically adequate, and it is possible to arrive at empirically adequate beliefs without subjecting them to peer review. However, since one is more likely to arrive at empirically adequate conclusions by submitting one's work for peer review, policies and procedures for peer review promote the attainment of empirical adequacy indirectly. Norms, policies, and rules concerning authorship, mutual respect, and intellectual property promote the attainment of the epistemic aims of science by fostering cooperation and trust among researchers, which is necessary for conducting research and ultimately arriving at empirically adequate conclusions. Thus, many of science's norms, rules, policies, and procedures are justified because they help to support a social structure that tends to advance the aims of science. Scientific norms are part of the social epistemology of science (Longino 1990; Resnik 1996; Solomon 2001).11

Most of these norms, rules, policies, and procedures have an ethical as well as an epistemological dimension. For example, prohibitions on data fabrication and falsification are epistemological rules because they promote the attainment of truth, but they are also ethical rules because lying is morally wrong and fraudulent research can cause considerable harm to society.12 Authorship policies have an epistemological dimension because they promote cooperative activities that yield truth, but they are also ethical rules because they help to ensure fair distribution of credit (Resnik 1996, 1998). However, some of the norms of science have an ethical but not epistemological basis. For example, rules concerning research on human or animal subjects are justified for moral reasons that have little to do with the production of truth or empirically adequate results. Indeed, scientists could probably obtain more empirically adequate results by violating these rules, but we would regard such research as unethical.

Although de Winter (2016) distinguishes between epistemological and ethical integrity in science, we do not think this distinction is particularly useful because, as we have seen, epistemological and ethical concerns often go hand in hand. We prefer to use the term "scientific integrity", while noting that norms of science may have both epistemological and ethical dimensions.

11 There is some overlap here with Robert Merton's (1973) norms, i.e. communism, universalism, disinterestedness, and organized skepticism. Indeed, one of the authors of this paper, David Resnik, received a letter from Merton congratulating him on the publication of a book on the ethics of science and endorsing its views on scientific norms. However, it is also worth noting that intellectual property rules may deviate from the Mertonian ideal, since they run counter to the norm of communism (i.e. sharing the products of research). Nevertheless, one might argue that intellectual property rules promote sharing in the long run even if they deter it in the short term (Shamoo and Resnik 2015).

12 For example, British surgeon Andrew Wakefield published a paper in 1998 in the Lancet claiming that the measles, mumps, and rubella vaccine is a possible cause of autism. The anti-vaccine community seized upon the study as proof that vaccines are dangerous to children, and vaccination rates declined significantly in the U.K. and other countries. An investigation by journalist Brian Deer found that most of the data in the paper had been fabricated or falsified (Shamoo and Resnik 2015).

Table 3: Rules, Policies, and Procedures for Promoting Scientific Integrity (based on Elliott and Resnik 2015)

• Policies that define and prohibit research misconduct, such as fabrication or falsification of data or plagiarism
• Procedures for reporting, investigating, and adjudicating research misconduct
• Policies and procedures for keeping good research records
• Policies and procedures for auditing data and other research records
• Rules for designing experiments
• Rules concerning standards of evidence for accepting or rejecting hypotheses
• Rules concerning good statistical practice
• Rules of deductive logic (e.g. the rule of non-contradiction, modus ponens)
• Policies for disclosing and describing one's materials and methods in publications and grant proposals
• Policies for sharing data, research materials (e.g. biological samples and chemical reagents), and computer code used in statistical programs for analyzing data
• Policies and procedures for registering clinical trials in publicly available databases
• Policies for disclosing and managing financial and other interests related to research
• Policies and procedures pertaining to the confidentiality, rigor, and fairness of peer review for scientific publications
• Policies and procedures pertaining to the confidentiality, rigor, and fairness of peer review for grants or contracts
• Policies for assigning authorship on publications
• Policies and procedures for correcting or retracting publications
• Rules and procedures for promoting the reproducibility of research, such as disclosure of materials and methods (discussed above), sharing and auditing data (discussed above), and standards for organizing and analyzing data and designing experiments
• Rules pertaining to the ownership and use of intellectual property
• Policies on academic freedom
• Policies pertaining to hiring, tenure, and promotion
• Rules, policies, and procedures pertaining to research with animal or human subjects
• Rules and policies related to social responsibility in research
• Policies pertaining to education and training in the responsible conduct of research (i.e. research ethics, good scientific practice)

As one can see from the list in Table 3, many different rules, policies, and procedures operating at various levels of organization serve to promote integrity in research. Some, such as policies concerning hiring, tenure, and promotion and policies on academic freedom, are promulgated primarily by research institutions; others, such as authorship policies, are promulgated primarily by journals; and still others, such as rules concerning review of research grants, are promulgated primarily by government agencies. There is also considerable overlap, because similar rules, procedures, and policies operate at different organizational levels. For example, institutions, government research sponsors, journals, and professional organizations all have policies pertaining to conflicts of interest, data fabrication or falsification, plagiarism, data sharing, and research involving animal or human subjects (Shamoo and Resnik 2015). Scientific integrity is thus a shared responsibility (National Academy of Sciences 2002).

It is also important to recognize that these rules, policies, procedures, and norms do not completely eliminate the need for judgment in science. Thus, philosophers of science who work on the topic of science and values can make an important contribution by reflecting on how to justify, assess, prioritize, and elaborate on them. For example, sometimes the existing rules, policies, and procedures will not be sufficient to determine how particular judgments (about, for example, research design, interpretation, or publication) should be made. In other cases, two or more norms might come into conflict, in which case decisions would need to be made about how to prioritize them. Moreover, previous rules, policies, and procedures may need to be revisited or extended as scientists become aware of new challenges to scientific aims and the norms that support them. For example, in recent years there have been extensive efforts to develop more effective rules, policies, and procedures for addressing financial conflicts of interest in science and for promoting greater reproducibility of research results (Elliott and Resnik 2015; Holman and Elliott 2018). Finally, if one were to adopt an account of scientific integrity that incorporated a wider variety of aims than solely epistemic ones, additional decisions would need to be made about how to prioritize those aims (Brown 2013, 2017; Elliott and McKaughan 2014).

Nevertheless, by appealing to the extensive infrastructure provided by institutional rules, policies, and procedures, the account of scientific integrity developed here provides much more detail than most previous accounts offered by philosophers of science (see Table 3). Moreover, by showing how the rules, policies, and procedures are justified by virtue of their relationships to scientific norms, which are in turn justified in relationship to scientific aims, this account shows how difficult judgments can be adjudicated and addressed (see Figure 2).

5. Case Illustration: Evaluating Toxicology Studies

Let us consider a specific case to illustrate how our proposal to integrate the literatures on research ethics and on science and values can provide the foundation for a more sophisticated and detailed account of scientific integrity. The field of toxicology has been a popular area of study for those reflecting on the role of values in science because it raises many important epistemological and ethical questions.13 For example, Douglas's (2000) classic paper on the topic of inductive risk explored the role of values in toxicology studies, as did Elliott's (2011b) book, Is a Little Pollution Good for You? These authors have shown that toxicologists face a wide variety of value-laden decisions about which chemicals to study, how to design their studies, how to interpret ambiguous data, how to extrapolate beyond the dose levels and animals investigated in their studies, how to integrate information from different sorts of studies (in silico, in vitro, in vivo, and epidemiological), and how to frame and communicate their findings.

13 The arguments we make in this section extend beyond toxicology and apply to various scientific disciplines from the natural and social sciences. For other case studies that draw connections between the science/values and research ethics literatures, see Weed 1997; Resnik 2007, 2009.

As we have seen, however, the accounts of scientific integrity provided by Douglas and Elliott are still lacking in specificity, and we have suggested that their work on scientific integrity could be strengthened by drawing upon work by research ethicists on the norms, rules, policies, and procedures that apply in fields like toxicology. For example, in response to scandals in which toxicology studies were shown to be falsified, regulatory bodies created the Good Laboratory Practices (GLP) guidelines as a way to ensure that laboratories actually performed the procedures they claimed to be performing (Elliott 2016). In addition, international bodies like the Organization for Economic Cooperation and Development (OECD) have developed standardized study guidelines to specify how toxicity studies should be performed in order to provide usable information for regulators around the world (Elliott 2016), and researchers have proposed criteria for evaluating studies to determine which ones are most reliable (e.g. Klimisch et al. 1997).

While these rules and procedures are very helpful for guiding toxicologists in concrete ways, they also illustrate our point that those working on the topic of science and values still have work to do in assessing, justifying, and prioritizing them. For example, both philosophers of science and scientists have critiqued the GLP system and have argued that it is seriously limited in its ability to regulate value judgments in science; unfortunately, it merely ensures that a particular study protocol is followed, not that the study design is appropriate for answering the question under investigation (Myers et al. 2009; Elliott 2016). Standardized study guidelines are better for ensuring that appropriate study designs are actually employed, but critics have pointed out that they leave enough ambiguities and choices that those interested in "gaming" the system can still choose study designs that serve their interests (Elliott 2016; Elliott and Volz 2012). In addition, it appears that many of the standardized studies currently being performed with animals may not be as predictive of human-health effects as previously thought (Birnbaum et al. 2016). Finally, with respect to the criteria for evaluating studies, one of the most influential sets of criteria has recently been criticized because it leaves too much room for interpretation (Moermond et al. 2016). And, of course, these sorts of philosophical and scientific critiques of rules and procedures are not limited to the field of toxicology; for example, similar analyses of standardized study guidelines have been provided in the field of agricultural biotechnology (Wickson and Wynne 2012).

In response to these criticisms, one might be tempted to conclude that these rules and procedures ultimately have little significance for those working on science and values who are trying to formulate an account of scientific integrity; one might insist that all the interesting philosophical issues involve decisions about how to handle choices that are left underdetermined by the rules and procedures. Nevertheless, while they are clearly not a complete solution, they do still have significant value. Not only do they provide minimal standards in many cases, but philosophical criticisms of the rules and procedures can provide opportunities for improving them. For example, a recent paper (Moermond et al. 2016) proposed a new set of criteria for evaluating the quality of toxicity studies as a way of improving on previous approaches. It includes 20 criteria for deciding whether a study is reliable and 13 criteria for determining whether it is relevant to the question under investigation. Furthermore, it provides detailed recommendations for reporting the results of ecotoxicity studies (including 50 criteria divided into 6 categories) to promote more effective peer review of the studies. In addition to this work by scientists, philosophers working on the topic of science and values have also been proposing strategies for augmenting current rules and procedures with better ones (Elliott 2018a; Holman and Elliott 2018). For example, even though it is focused on the evaluation of pharmaceutical studies rather than toxicology studies, Justin Biddle's (2013) proposal for a "science court" is a particularly intriguing example of a concrete procedural approach that could advance scientific integrity.

Thus, we can see that the norms, rules, policies, and procedures operative in a field like toxicology offer helpful and specific guidance that can provide a foundation for building an account of scientific integrity. This does not mean that they are completely sufficient on their own or that they are immune from being questioned or challenged; on the contrary, philosophers working in the area of science and values can play a valuable role by assessing their strengths and weaknesses and augmenting them with additional guidance. There are numerous benefits of having philosophers of science engage with the literature on research ethics in this way. First, it strengthens the rules and policies by subjecting them to critical scrutiny. Second, it enables work on the topic of science and values, which has the potential to remain at a fairly abstract level, to have a more concrete influence on the practice of science. Third, it provides a way for those working on science and values to help scientists make good value judgments without overburdening them. Whereas it might be difficult for scientists to weigh all the value considerations involved in making particular judgments or choices, it is relatively easy for them to follow rules or policies that have been designed to help them make good choices. Of course, the fact that these rules and procedures generally yield good results does not mean that scientists must follow them slavishly. Scientists still need to use their judgment in deciding when they should deviate from the rules and when the rules provide inadequate guidance. But even in these cases, it is important for scientists to clarify how their approach differs from or relates to standard practices (de Winter 2016; Wilholt 2009).

6. Objections and Replies

One might object to our view on the grounds that it is no more than a hodge-podge of norms, rules, policies, and procedures for promoting scientific integrity that lacks a coherent foundation in philosophical theory. For example, our view does not hinge on fundamental distinctions between direct vs. indirect value influences or epistemic vs. non-epistemic values. Our first reply to this objection is that the norms, rules, policies, and procedures are ultimately justified by the ways in which they support the aims of scientific inquiry, and thus we do provide a basis for justifying them and addressing conflicts between them. We would also reply that our account is founded on a social approach to scientific inquiry (Longino 1990; Resnik 1996; Solomon 2001). While we recognize that it is important to consider how individual researchers should make judgments and decisions related to inquiry, we hold that promoting the aims of science is largely a function of science's social structure. Individuals are prone to various errors, biases, and misbehaviors, but scientific communities, organizations, and institutions can adopt norms, rules, policies, and procedures that help to control, minimize, or compensate for these shortcomings (Elliott and Resnik 2015). For example, norms pertaining to openness and transparency help to ensure that the scientific community can critically evaluate scientific research and detect biases, errors, or other problems with research (Elliott and Resnik 2014). Policies and procedures for reporting, investigating, and adjudicating misconduct can deter researchers from fabricating or falsifying data (Shamoo and Resnik 2015). Policies and procedures pertaining to journal peer review help ensure that published research meets methodological and ethical standards (Shamoo and Resnik 2015).

A second objection is that our approach does not provide straightforward and explicit answers to important philosophical questions that have been central to recent work in the philosophical literature on science and values. For example, our account does not directly attempt to answer the question of when it is acceptable to allow non-epistemic considerations to guide scientific judgment and decision-making. Our reply to this objection is that we are encouraging a somewhat different approach to addressing these questions. Rather than providing a general answer based on a single philosophical distinction or set of conditions, we are pointing to a multi-faceted framework that can provide more detailed guidance to researchers about how to handle these questions in particular cases. The previous distinctions and conditions discussed in Section 3 can still serve as valuable rules of thumb that are applicable in many cases, but we suggest that they can be extended and specified using the framework that we have proposed here.

Clearly, there are many situations, such as research involving animal or human subjects, in which research integrity requires non-epistemic considerations to guide many different aspects of scientific judgment and decision-making. There are other situations, however, in which non-epistemic values should have no impact on research. For example, it is never acceptable to fabricate or falsify data, even for a noble cause, such as promoting public health. Non-epistemic considerations should also typically have little impact on experimental design or data analysis in sciences, such as physics or chemistry, which do not use human or animal research subjects and have little direct impact on society. However, even in these sciences, there could be cases where non-epistemic values would be relevant to drawing conclusions based on the available data (Staley 2017). In other situations, there may be good reasons for allowing or not allowing non-epistemic values to influence scientific judgments and decisions. For example, while the norm of openness requires scientists to share data, scientists may decide not to share data in order to protect their interests related to priority or intellectual property; to prevent other scientists from being misled by data that have not been properly audited, cleaned, or validated; or to avoid causing harm to human subjects or society (Shamoo and Resnik 2015).

7. Conclusion

In this paper we have examined and critiqued three different accounts of how one can protect and promote the integrity of scientific research while rejecting the value-free ideal. We have argued that none of these approaches adequately safeguards scientific integrity, and we have suggested that the literature on research ethics can provide the foundation for a more sophisticated and detailed account of scientific integrity. On this view, integrity starts with adherence to a system of rules, policies, and procedures that guide scientists in following a system of norms that together promote the aims of inquiry. According to this account, scientific norms are embodied in a research environment that includes research institutions, research sponsors, regulatory agencies, professional associations, and journals that support the norms and that provide the material and human resources needed for good science. Though integrity ultimately comes down to judgments and decisions made by individual researchers, scientific organizations play a fundamental role in protecting and promoting integrity. Thus, this account incorporates both the insights of those who have focused primary attention on the responsibilities of individual scientists to preserve scientific integrity (e.g., Douglas 2009; Steel 2010) and those who have focused on the role of the scientific community (e.g., Douglas 2018; Longino 1990, 2002). Scientific integrity is a function of the social epistemology of science.

One of the advantages of this view is that it integrates philosophical theorizing about science and values with empirical and conceptual scholarship on the ethical conduct of science. We have emphasized that our proposed account of scientific integrity still requires philosophical work to prioritize, evaluate, and extend the norms, rules, policies, and procedures provided by the literature on research ethics. To date, only a few philosophers of science have published papers or books on research ethics, aside from analyses of ethical and policy issues related to research involving animal or human subjects.14 Most of the literature on research ethics is dominated by scientists, health care professionals, policy analysts, and attorneys. Yet the research ethics literature includes many topics ripe for philosophical analysis, such as how to define key terms such as misconduct, fabrication, falsification, plagiarism, authorship, conflict of interest, objectivity, and reproducibility, and how best to promote integrity related to research design, peer review, publication, funding, and data sharing. We urge our philosophy-of-science colleagues to consider how they can bring their skills and perspectives to bear on issues related to the integrity of scientific research. Connecting the science and values debate more closely to research ethics offers opportunities for both communities to gain new insights while directly contributing to ongoing discussions concerning the practice of science.

14 See, for example, Shrader-Frechette (1996, 2016); Resnik (1998); and Elliott (2011b).

Acknowledgments

This research was supported, in part, by the Intramural Program of the National Institute of Environmental Health Sciences (NIEHS), National Institutes of Health (NIH). It does not represent the views of the NIEHS, NIH, or US government.

References

Ågerstrand, M., M. Breitholtz, and C. Rudén. (2011). Comparison of four different methods for reliability evaluation of ecotoxicity data: A case study of non-standard test data used in environmental risk assessments of pharmaceutical substances. Environmental Sciences Europe 23, 17.

Anderson, E. (2004). Uses of value judgments in science: A general argument, with lessons from a case study of feminist research on divorce. Hypatia 19, 1-24.

Betz, G. (2013). In defence of the value free ideal. European Journal for Philosophy of Science 3(2), 207-220.

Betz, G. (2017). Why the argument from inductive risk doesn't justify incorporating non-epistemic values in scientific reasoning. In: K. Elliott and D. Steel (eds.), Current Controversies in Values and Science (pp. 94-110). New York, NY: Routledge.

Biddle, J. (2013). State of the field: Transient underdetermination and values in science. Studies in History and Philosophy of Science 44, 124-133.

Birnbaum, L., T. Burke, and J. Jones. (2016). Informing 21st century risk assessments with 21st century science. Environmental Health Perspectives 124, A60-A63.

Brown, M. (2013). Values in science beyond underdetermination and inductive risk. Philosophy of Science 80, 829-839.

Brown, M. (2017). Values in science: Against epistemic priority. In: K. Elliott and D. Steel (eds.), Current Controversies in Values and Science (pp. 64-78). New York, NY: Routledge.

Cox, D., M. La Caze, and M. Levine. (2017). Integrity. Stanford Encyclopedia of Philosophy. Available at: https://plato.stanford.edu/entries/integrity/#TypeInte. Accessed: January 23, 2018.

de Melo-Martín, I. and K. Intemann. (2016). The risk of using inductive risk to challenge the value-free ideal. Philosophy of Science 83, 500-520.

De Winter, J. (2016). Interests and Epistemic Integrity in Science: A New Framework to Assess Interest Influences in Scientific Research Processes. London: Lexington Books.

De Winter, J. and L. Kosolosky. (2013). The epistemic integrity of scientific research. Science and Engineering Ethics 19(3), 757-774.

Diamond, C. (2001). Integrity. In: L. Becker and C. Becker (eds.), Encyclopedia of Ethics, 2nd ed. (pp. 863-866). New York, NY: Routledge.

Douglas, H. (2000). Inductive risk and values in science. Philosophy of Science 67, 559-579.

Douglas, H. (2009). Science, Policy, and the Value-Free Ideal. Pittsburgh, PA: University of Pittsburgh Press.

Douglas, H. (2013). The value of cognitive values. Philosophy of Science 80, 796-806.

Douglas, H. (2014). Scientific integrity in a politicized world. In: P. Schroeder-Heister, G. Heinzmann, W. Hodges, and P.E. Bour (eds.), Logic, Methodology, and Philosophy of Science: Proceedings of the Fourteenth International Congress (pp. 253-268). London, UK: College Publications.

Douglas, H. (2016). Values in science. In: P. Humphreys (ed.), The Oxford Handbook of the Philosophy of Science (pp. 609-630). New York, NY: Oxford University Press.

Douglas, H. (2017). Why inductive risk requires values in science. In: K. Elliott and D. Steel (eds.), Current Controversies in Values and Science (pp. 81-93). New York, NY: Routledge.

Douglas, H. (2018). From tapestry to loom: Broadening the perspective on values in science. Philosophy, Theory, and Practice in Biology 10(8), 1-8.

Douglas, H. (Forthcoming). Science and values: The pervasive entanglement. In: G. Zachary and T. Richards (eds.), The Rightful Place of Science: Values, Science and Democracy (The Descartes Lectures). Tempe, AZ: Consortium for Science, Policy & Outcomes.

Elliott, K. (2011a). Direct and indirect roles for values in science. Philosophy of Science 78, 303-324.

Elliott, K. (2011b). Is a Little Pollution Good for You? Incorporating Societal Values in Environmental Research. New York, NY: Oxford University Press.

Elliott, K. (2016). Standardized study designs, value judgments, and financial conflicts of interest in research. Perspectives on Science 24(5), 529-551.

Elliott, K. (2017). A Tapestry of Values: An Introduction to Values in Science. New York, NY: Oxford University Press.

Elliott, K. (2018a). Addressing industry-funded research with criteria for objectivity. Philosophy of Science 85, 857-868.

Elliott, K. (2018b). A tapestry of values: Response to my critics. Philosophy, Theory, and Practice in Biology 10(11), 1-10.

Elliott, K. and D. McKaughan. (2009). How values in discovery and pursuit alter theory appraisal. Philosophy of Science 76, 598-611.

Elliott, K. and D. McKaughan. (2014). Non-epistemic values and the multiple goals of science. Philosophy of Science 81, 1-21.

Elliott, K. and D. Resnik. (2014). Science, policy, and the transparency of values. Environmental Health Perspectives 122, 647-650.

Elliott, K. and D. Resnik. (2015). Scientific reproducibility, human error, and public policy. BioScience 65(1), 5-6.

Elliott, K. and T. Richards (eds.). (2017). Exploring Inductive Risk: Case Studies of Values in Science. New York, NY: Oxford University Press.

Elliott, K. and D. Volz. (2012). Addressing conflicts of interest in nanotechnology oversight: Lessons learned from drug and pesticide safety testing. Journal of Nanoparticle Research 14, 664-668.

Giere, R. (1988). Explaining Science: A Cognitive Approach. Chicago, IL: University of Chicago Press.

Gupta, S. (2011). Intention-to-treat concept: A review. Perspectives in Clinical Research 2, 109-112.

Haack, S. (1998). Manifesto of a Passionate Moderate: Unfashionable Essays. Chicago, IL: University of Chicago Press.

Haack, S. (2003). Defending Science within Reason. New York, NY: Prometheus Books.

Heil, J. (1983). Believing where one ought. The Journal of Philosophy 80(11), 752-763.

Hempel, C. (1965). Science and human values. In: Aspects of Scientific Explanation and Other Essays in the Philosophy of Science (pp. 81-96). New York, NY: The Free Press.

Hicks, D. (2014). A new direction for science and values. Synthese 191, 3271-3295.

Holman, B. and J. Bruner. (2017). Experimentation by industrial selection. Philosophy of Science 84, 1008-1019.

Holman, B. and K. Elliott. (2018). The promise and perils of industry-funded science. Philosophy Compass 13, e12544.

Hudson, R. (2016). Why we should not reject the value-free ideal for science. Perspectives on Science 24, 167-191.

Jeffrey, R.C. (1956). Valuation and acceptance of scientific hypotheses. Philosophy of Science 23(3), 237-246.

Kincaid, H., J. Dupré, and A. Wylie (eds.). (2007). Value-Free Science? Ideals and Illusions. Oxford: Oxford University Press.

Kitcher, P. (2001). Science, Truth, and Democracy. New York, NY: Oxford University Press.

Kitcher, P. (2011). Science in a Democratic Society. Amherst, NY: Prometheus Books.

Klimisch, H.-J., M. Andreae, and U. Tillmann. (1997). A systematic approach for evaluating the quality of experimental toxicological and ecotoxicological data. Regulatory Toxicology and Pharmacology 25, 1-5.

Kourany, J. (2010). Philosophy of Science after Feminism. Oxford: Oxford University Press.

Kourany, J. (2018). Adding to the tapestry. Philosophy, Theory, and Practice in Biology 10: forthcoming.

Krimsky, S. (2003). Science in the Private Interest: Has the Lure of Profits Corrupted Biomedical Research? Lanham, MD: Rowman and Littlefield.

Kuhn, T. (1978). The Essential Tension. Chicago, IL: University of Chicago Press.

Lacey, H. (1999). Is Science Value-Free? Values and Scientific Understanding. New York, NY: Routledge.

Laudan, L. (1984). Science and Values: The Aims of Science and Their Role in Scientific Debate. Berkeley, CA: University of California Press.

Longino, H. (1990). Science as Social Knowledge. Princeton, NJ: Princeton University Press.

Longino, H. (1996). Cognitive and non-cognitive values in science: Rethinking the dichotomy. In: L.H. Nelson and J. Nelson (eds.), Feminism, Science, and the Philosophy of Science (pp. 39-58). Dordrecht, Netherlands: Kluwer.

Longino, H. (2002). The Fate of Knowledge. Princeton, NJ: Princeton University Press.

McMullin, E. (1982). Values in science. In: P.D. Asquith and T. Nickles (eds.), PSA 1982: Proceedings of the 1982 Biennial Meeting of the Philosophy of Science Association, vol. 1 (pp. 3-28). East Lansing, MI: Philosophy of Science Association.

Merton, R. (1973). The Sociology of Science: Theoretical and Empirical Investigations. Chicago, IL: University of Chicago Press.

Michaels, D. (2008). Doubt is Their Product: How Industry's Assault on Science Threatens Your Health. New York, NY: Oxford University Press.

Moermond, C., R. Kase, M. Korkaric, and M. Ågerstrand. (2016). CRED: Criteria for reporting and evaluating ecotoxicity data. Environmental Toxicology and Chemistry 35, 1297-1309.

Myers, J.P., F. vom Saal, B. Akingbemi, K. Arizono, S. Belcher, T. Colborn, et al. (2009). Why public health agencies cannot depend upon good laboratory practices as a criterion for selecting data: The case of bisphenol A. Environmental Health Perspectives 117, 309-315.

National Academy of Sciences. (2002). Integrity in Research: Creating an Environment that Promotes Responsible Conduct. Washington, DC: National Academies Press.

Pielke, R. (2007). The Honest Broker: Making Sense of Science in Policy and Politics. Cambridge, UK: Cambridge University Press.

Resnik, D. (1996). Social epistemology and the ethics of research. Studies in History and Philosophy of Science 27, 566-586.

Resnik, D. (1998). The Ethics of Science: An Introduction. New York, NY: Routledge.

Resnik, D. (2007). The Price of Truth: How Money Affects the Norms of Science. New York, NY: Oxford University Press.

Resnik, D. (2009). Playing Politics with Science: Balancing Scientific Independence and Government Oversight. New York, NY: Oxford University Press.

Resnik, D. (2015). Retracting inconclusive research: Lessons from the Séralini GM maize feeding study. Journal of Agricultural and Environmental Ethics 28, 621-633.

Rooney, P. (1992). On values in science: Is the epistemic/non-epistemic distinction useful? In: K. Okruhlik, D. Hull, and M. Forbes (eds.), Proceedings of the 1992 Biennial Meeting of the Philosophy of Science Association, vol. 1 (pp. 13-22). East Lansing, MI: Philosophy of Science Association.

Rooney, P. (2017). The borderlands between epistemic and non-epistemic values. In: K. Elliott and D. Steel (eds.), Current Controversies in Values and Science (pp. 31-45). New York, NY: Routledge.

Rudner, R. (1953). The scientist qua scientist makes value judgments. Philosophy of Science 20(1), 1-6.

Schroeder, A. (2017). Using democratic values in science: An objection and partial response. Philosophy of Science 84, 1044-1054.

Shamoo, A. and D. Resnik. (2015). Responsible Conduct of Research, 3rd ed. New York, NY: Oxford University Press.

Shrader-Frechette, K. (1996). The Ethics of Scientific Research. Lanham, MD: Rowman and Littlefield.

Shrader-Frechette, K. (2007). Taking Action, Saving Lives: Our Duties to Protect Environmental and Public Health. New York, NY: Oxford University Press.

Shrader-Frechette, K. (2016). Tainted: How Philosophy of Science Can Expose Bad Science. New York, NY: Oxford University Press.

Solomon, M. (2001). Social Empiricism. Cambridge, MA: MIT Press.

Staley, K. (2017). Decisions, decisions: Inductive risk and the Higgs boson. In: K. Elliott and T. Richards (eds.), Exploring Inductive Risk: Case Studies of Values in Science. New York, NY: Oxford University Press.

Steel, D. (2010). Epistemic values and the argument from inductive risk. Philosophy of Science 77, 14-34.

Steel, D. (2015). Philosophy and the Precautionary Principle: Science, Evidence, and Environmental Policy. Cambridge, UK: Cambridge University Press.

Steel, D. and K. Whyte. (2012). Environmental justice, values, and scientific expertise. Kennedy Institute of Ethics Journal 22, 163-182.

Steele, K. (2012). The scientist qua policy advisor makes value judgments. Philosophy of Science 79, 893-904.

Steneck, N. (2004). ORI Introduction to Responsible Conduct of Research. Washington, DC: Office of Research Integrity.

Weed, D.L. (1997). Underdetermination and incommensurability in contemporary epidemiology. Kennedy Institute of Ethics Journal 7(2), 107-127.

Wickson, F. and B. Wynne. (2012). Ethics of science for policy in the environmental governance of biotechnology: MON810 maize in Europe. Ethics, Policy & Environment 15, 321-340.

Wilholt, T. (2009). Bias and values in scientific research. Studies in History and Philosophy of Science 40, 92-101.

Bullet Points

• Rejecting the value-free ideal may undermine the integrity of scientific research.
• Protecting the integrity of scientific research involves promoting adherence to norms, rules, policies, and procedures that promote the aims of science.
• Academic institutions, funding agencies, journals, and professional associations play a key role in promoting scientific integrity.
• Scientific integrity is part of the social epistemology of science.