Studies in History and Philosophy of Science xxx (2017) 1–10
Defending the selective confirmation strategy

Yukinori Onishi

School of Advanced Sciences, The Graduate University for Advanced Studies, Hayama, Miura, Kanagawa, 240-0193, Japan
Article history: Received 22 June 2016; Received in revised form 6 May 2017; Available online xxx

Abstract
Most scientific realists today in one way or another confine the object of their commitment to certain components of a successful theory and thereby seek to make realism compatible with the history of theory change. Kyle Stanford calls this move by realists the strategy of selective confirmation and raises a challenge against its contemporary, reliable applicability. In this paper, I critically examine Stanford’s inductive argument, which is based on past scientists’ failures to identify the confirmed components of their contemporary theories. I argue that our ability to make such identifications should be evaluated based on the performance of the scientific community as a whole rather than that of individual scientists, and that Stanford’s challenge fails to raise a serious concern because it focuses solely on individual scientists’ judgments, which are made either before the scientific community has reached a consensus or about the value of a posit as a locus for further research rather than its confirmed status. © 2017 Elsevier Ltd. All rights reserved.
Keywords: Scientific realism; Selective realism; Pessimistic meta-induction; Social epistemology
1. Introduction

The so-called pessimistic meta-induction is one of the major challenges against scientific realism (Laudan, 1981). It presents a list of theoretical entities that were once posited in a successful theory but discarded in a later theory change, such as the celestial sphere, phlogiston, caloric, and the ether in the 19th-century theory of light and electromagnetism. Since those theoretical entities did not have referents, the argument goes, one can hardly say that the theories in which they were posited were even approximately true. These historical cases serve either as a basis for an inductive argument that a currently successful theory may also turn out to be false in the future, or as counter-evidence to the so-called no-miracles argument, which claims that the only way to explain a theory’s success without appealing to miracles is to infer its (approximate) truth. If one applies that inference to a currently successful theory (e.g., the currently accepted electromagnetic theory) and infers its truth, that will imply the falsity of its predecessor theory (the wave theory of light as conceived by Fresnel) despite its empirical success, since those theories disagree with each other in some respects (such as the posit of a mechanical medium for light propagation). Hence, one may conclude, there must be something wrong with inferring a theory’s truth from its success. Faced with this difficulty, many realists felt it necessary to refine their position. The first step toward refinement was to specify the kind of ‘success’ that should elicit a realist commitment. Thus, they
narrowed down the notion of ‘success’ to ‘novel predictive success’ (e.g., Leplin, 1997; Worrall, 1989). That way, they argue, they could considerably shorten Laudan’s list of ‘successful, but false theories’ because many of them were not successful in this stricter sense. However, some items in the list, such as the luminiferous ether, still count as successful even in this narrower sense and remain counter-evidence. Thus, many realists took the second step for refinement, which Kyle Stanford calls the strategy of selective confirmation (Stanford, 2006), and argued that those successful theories were not outright false but contained some true components that brought about the successes. According to Stanford’s characterization, the realists who take the selective confirmation strategy “defend only some parts or components of past theories as responsible for their success, while abandoning others as idle, merely presuppositional, or otherwise not involved in the empirical successes those theories managed to achieve, and therefore never genuinely confirmed by those successes in the first place” (2006, p. 164, original emphasis). Philip Kitcher (1993), for example, categorizes theoretical posits into working posits and presuppositional posits and endorses realistic commitment only to the former type of posits (p. 149). Stathis Psillos (1999), on the other hand, distinguishes the truth-like constituents of a theory, which fueled the theory’s empirical success, from the idle ones, which made no such contributions; then, he claims that scientists themselves routinely make such differentiation, and those parts that they regarded as having evidential support tend to be retained through theory change (pp. 108–114). Though Stanford mentions these two particular realists as typically
Please cite this article in press as: Onishi, Y., Defending the selective confirmation strategy, Studies in History and Philosophy of Science (2017), http://dx.doi.org/10.1016/j.shpsa.2017.07.001
employing the strategy, the description of the strategy seems to apply to other versions of realism (often collectively called selective realism) as well (e.g., Cartwright, 1983; Chakravartty, 2007; Egg, 2016; Giere, 1988; Hacking, 1983; Harker, 2013; Peters, 2014; Saatsi, 2005; Worrall, 1989). Stanford (2006) challenges the selective confirmation strategy, pointing to a few historical cases in which scientists were committed to a posit of their contemporary theory that is now regarded as false. He claims that these cases call into question whether we can reliably identify the true/confirmed components of contemporary successful theories, and thus that the selective confirmation strategy provides no refuge for scientific realists. The aim of this paper is to defend the strategy from this challenge, which I call the no refuge argument. I claim that, given the social nature of scientific inquiry, in which researchers pursue different approaches or hypotheses and subject their views to criticism, one needs to consider the reliability of community-level judgments rather than those of individual researchers in order to examine the reliable detectability of confirmed theoretical components, and that the no refuge argument fails to pay attention to those community-level judgments. In particular, I claim that the misjudgments Stanford cites in support of the argument were made either (a.) before the scientific community at the time had reached a consensus on the theory or the particular hypothesis in question, or (b.) about the usefulness of the component as a working hypothesis rather than about its confirmed status. I argue that one cannot show the unreliability of community-level judgments based on those cases, and thus that the no refuge argument fails to raise a serious challenge against the selective confirmation strategy.
This is not to say that selective realism is free from concerns.1 As we will see in the next section, Stanford himself raises two more arguments against it. Timothy Lyons (2006) develops another argument against Psillos’ and other versions of selective realism. Hasok Chang (2002) also objects to Psillos’ analysis of caloric theory and questions the plausibility of what Chang calls ‘preservative realism’ in general. Thus, the argument developed below is only intended to be a defense of selective realism from a particular challenge (i.e., the no refuge argument) on particular grounds. The rationale for focusing on the no refuge argument is that, unlike the other challenges against selective realism, it has not received much attention in the literature despite its possibly broad scope and its significance to the current scientific realism debate. In what follows, I first formulate Stanford’s no refuge argument and clarify its characteristics (Section 2). Then, I present my objection to the argument (Section 3), and finally, I consider some possible problems with my objection to the no refuge argument (Section 4).

2. The no refuge argument

2.1. Is reliable detection of the confirmed components possible?

The central tenet of selective realism may be summarized as follows: successful theories contain (approximately) true components that are responsible for their empirical success, and the belief in such components is less vulnerable to the challenge of the pessimistic meta-induction, for they are typically retained through
1 Nor do I mean that the defense of selective realism here can show, even if it is successful, the approximate truth of the source of a theory’s success in a way that is convincing to anti-realists. For even if realists can address the problem of the pessimistic meta-induction (old and new (Stanford, 2006)) and the no refuge argument, a disagreement between realists and anti-realists remains concerning the plausibility of underdetermination. The purpose of this paper is only to address the former challenges, not to resolve the latter disagreement.
theory changes. Given this response to the pessimistic meta-induction,2 Stanford now questions whether we can reliably identify such true components of our contemporary successful theories and argues that there are historical records that suggest our inability to make such judgments reliably. For example, Stanford argues, 19th-century physicists thought that the existence of some mechanical medium was essential for optical and electromagnetic propagation, even if they were not committed to a specific model of the ether present at the time. To support this claim, he cites a passage from James Clerk Maxwell’s A Treatise on Electricity and Magnetism, in which Maxwell writes: [W]henever energy is transmitted from one body to another in time, there must be a medium or substance in which the energy exists after it leaves one body and before it reaches the other, for energy, as Torricelli remarked, ‘is a quintessence of so subtle a nature that it cannot be contained in any vessel except the inmost substance of material things.’ (Maxwell, 1873, p. 438) From this passage, Stanford argues, Maxwell seems to have believed that the existence of some mechanical medium was required for the success of the wave theory of optics and electromagnetism. The second example of misjudgment that Stanford points to is August Weismann’s commitment to what Stanford calls the hypothesis of germinal specificity (i.e., a hypothesis that “the nuclei of different cells must contain different constituent elements of the organism’s hereditary material” (Stanford, 2006, p. 111; original emphasis)). Contrary to the current view, Weismann believed that the hypothesis was essential for the explanation of the ontogenetic differentiation of cells constituting different body parts of an organism. Finally, Stanford points to Antoine Lavoisier’s belief in ‘the matter of heat and fire’ as an essential part of the explanation of various thermal phenomena (Stanford, 2006, p. 154).
Based on these failures by Maxwell, Weismann, and Lavoisier to identify the true components of their contemporary theories, Stanford claims that it is questionable whether, as Psillos argues, scientists themselves can identify the genuinely confirmed parts of their successful theories, or, as Kitcher recommends, distinguish the working posits of those theories from presuppositional ones. Thus, Stanford argues:3 [T]he strategy of selective confirmation risks leaving us unable to trust our ability to determine, at the time a theory is a going concern, which parts, features, or aspects are actually required for the success of that theory. Accordingly, without some prospectively applicable and historically reliable criterion for distinguishing idle and/or genuinely confirmed parts of our theories from others, the strategy of selective confirmation offers no refuge for the scientific realist. (2006, p. 169; original emphasis) Let us call this inductive argument against the reliability of our judgment concerning the genuinely confirmed parts of our contemporary theories the no refuge argument (NRA, hereafter) and formulate it as follows:
2 Actually, as Chakravartty (2008) and Psillos (Psillos, Saatsi, Winther, & Stanford, 2009) note, and as Stanford himself (2006, pp. 159, 181) seems to be aware, the strategy of selective confirmation can address not only the pessimistic meta-induction but Stanford’s new induction as well (Stanford, 2006). Thus, the no refuge argument is meant to serve as a backup argument for his new induction.
3 In the following passage, Stanford’s claim on the necessity of a prospectively applicable criterion is based on another argument, which I call the Whiggish convergence argument. (I will discuss this later.)
(NRA) Given the failures of past scientists’ judgments concerning the source of success (the confirmed components) of their contemporary theories, one can inductively generalize that we are unable to reliably identify such components of our successful contemporary theories.
2.2. How far could the scope of the NRA reach?

Note that what the NRA questions is the reliable, contemporary applicability of the selective confirmation strategy, which is quite a different problem from the truth of the selective realists’ tenet (i.e., the claim that successful theories contain some true components that served for the theory’s success and that they are retained through theory changes). The tenet by itself is, if true, sufficient for addressing the pessimistic meta-induction; however, Stanford’s point is that addressing the pessimistic meta-induction in this way would result in an uninteresting position if we cannot trust our ability to detect such true components ‘at the time a theory is a going concern.’ One may think that the NRA is essentially the same as Stanford’s other argument, called the trust argument, which argues against a specific version of selective realism that appeals to a certain version of causal accounts of reference (Stanford, 2006, pp. 147–149). The version of realism that Stanford considers here holds that a referential relation between a theoretical term and its referent can be achieved even without requiring accuracy of the description associated with the theoretical term. Rather, it maintains that what is important in fixing a referent is the causal role the referent of the term is supposed to play in bringing about the phenomenon in question. Thus we may say, for example, that the ether referred to the electromagnetic field because they share the same causal role (i.e., as a bearer of electromagnetic phenomena), even though the theoretical descriptions attached to them are very different (Hardin & Rosenberg, 1982). Stanford criticizes this move, arguing that it would make realism a vacuous position because it leaves us unable to trust what our theories tell us about theoretical entities. Thus, it is true that the NRA and the trust argument are similar in that both of them question the substantiality of selective realism. However, they are different in that while the trust argument concerns a particular type of realism that gives up the commitment to the theoretical descriptions of posited entities once and for all, the NRA has a much broader target, and the two have different argumentative forms. As the arguments and the targets are different, they need to be addressed differently. The NRA is also different from Stanford’s other argument against the selective confirmation strategy, which I call the Whiggish convergence argument (2006, pp. 166–168). He claims that the convergence between the source of a past theory’s success and ‘the parts that the theory got right about the world’ is virtually guaranteed because both judgments are based on the currently accepted theory. Hence, he argues, selective realists cannot present such convergence as evidence for the claim that the sources of success are approximately true.4 Again, the target and the form of this argument are different from those of the NRA. The Whiggish convergence argument is not an inductive argument, and its target is the realists’ attempt to claim the truth of the sources of a theory’s success in the manner described above (which I do not think is essential for selective realists, for they can appeal to a modified version of the no-miracles argument). Hence, these arguments need to be addressed differently, and one cannot address one by responding to the other.
4 In a symposium paper on Stanford’s Exceeding Our Grasp, Psillos and Juha Saatsi independently respond to this challenge, claiming that such a convergence is not guaranteed (Psillos et al., 2009). Their claim that the identification of the sources of a past theory’s success and their truth do not automatically coincide may be further supported by the case of Ptolemaic astronomy. Today, we can understand how epicycles and the equant contributed to the prediction of the planets’ apparent motions under the theory, although we no longer employ those machineries.
One may think that the failures by Maxwell, Weismann, and Lavoisier suggest only that scientists are unable to specify the confirmed parts if the judgments are left solely to their intuition, and that selective realism equipped with some directions for such identification (a prospectively applicable criterion) is free from the challenge of the NRA. However, Stanford argues that a similar concern may also arise with John Worrall’s (1989) structural realism, which claims that the object of realists’ commitment should be the structure of successful theories. The argument takes the form of a dilemma (2006, pp. 180–183). If, on the one hand, ‘the structure of a theory’ means only its mathematical relations, structural realism seems to provide us with a prospectively applicable criterion; but such a weak claim, that certain abstract relations will appear “in some way, somewhere, somehow” in a future theory, would make realism a vacuous position. On the other hand, if ‘the structure of a theory’ includes the interpretation of those mathematical relations, the historical record discussed above calls into question whether we can reliably distinguish the structure of a theory from its content (cf. Psillos, 1999, pp. 153–157). Stanford gives the example of Francis Galton’s Ancestral Law of Inheritance, which describes the ancestral contribution to ‘the stirp’ (a germinal substance) of an organism, and argues that, though it is true that “the fractional relationships described by Galton’s Ancestral Law show up somewhere in the account of inheritance offered by contemporary genetics,”5 their interpretation is different today, involving no commitment to the claim that an organism inherits genetic substance from its ancestors at the rate the law states. Thus, Stanford claims, it is not clear whether we can identify or interpret the structures of a theory while avoiding commitment to its content. A similar concern may also arise with other versions of selective realism.
In articulating her position, Nancy Cartwright recognizes the difficulty of drawing a line between causal explanations of a phenomenon, which she recommends we take realistically, and theoretical treatments of it, which she recommends we not take realistically (Cartwright, 1983, p. 77).6 Ronald Giere’s version of selective realism, which recommends taking only certain aspects of models as representing certain aspects of the world, also seems to face the same difficulty as Psillos’ because he, too, leaves the identification of the realistic aspects of models to scientists’ own judgments (1988, p. 97).7 Stanford’s argument against structural realism may also apply to semirealism. Though it advises us to believe only the minimal interpretation of the relations of detection properties, the NRA may call into question whether we can make reliable judgments concerning ‘what is included in the minimal interpretation.’ For example, in the case of the so-called Fresnel equations, Chakravartty argues that “[i]n the very limited context of these specific equations, ethers and fields are auxiliary posits,” and that their minimal interpretation includes amplitudes (intensities) and angles (directions of propagation), but we do not have to interpret them
5 Stanford seems to mean the coefficient of relatedness.
6 “How are we to distinguish the explanatory laws, which I argue are not to be taken literally, from the causal claims and more pedestrian statements of fact, which are? The short answer is that there is no way” (Cartwright, 1983, p. 77).
7 “The question of which aspects [of a model represent the world], and why not others, is left to be resolved on a case-by-case basis by scientists themselves” (Giere, 1988, p. 97).
further as properties of the ether (Chakravartty, 2007, pp. 48–53). However, the passage by Maxwell cited above calls into question whether he could have made the differentiation Chakravartty suggests even if he had known the criterion of the minimal interpretation. In particular, the passage may call into question whether Maxwell could have regarded just ‘amplitudes,’ rather than ‘amplitudes of waves in a medium,’ as required for the equations’ minimal interpretation. More recent attempts by Dean Peters (2014) and David Harker (2013) to provide a prospective criterion for such differentiation do not seem to do better in this respect. Though they do provide a procedure to classify which theory components should be regarded as confirmed and which should not, they do not seem to provide one to split a theory into the components to which one should apply the procedure of classification. For example, Peters’ empirically successful subtheory account (ESSA) “assume[s] that a scientific theory consists of a set of propositions and their deductive closure” (Peters, 2014, p. 390) and regards the theory as consisting of subtheories. Then, ESSA recommends, we start with an initial subtheory that consists of the known confirmed empirical consequences of the theory and add further propositions to the subtheory only if the addition entails a new confirmed empirical consequence (i.e., only if the added component unifies the new phenomenon with the known ones in the initial subtheory). By repeating this procedure, he claims, we can derive a subtheory consisting of all the theoretical posits essential to the theory’s empirical success. Peters illustrates this procedure with the case of Fresnel’s wave theory of light. According to his analysis, the initial subtheory is the set of all the confirmed observational consequences of the theory.
Then, starting with this initial subtheory, he claims, “the proposition that light consists of ‘transverse waves’ is added, since this successfully unifies lower-level posits … [H]owever, the posit of a luminiferous ether is not added” because “the posit does not entail any additional verified content from the existing subtheory” (Peters, 2014, pp. 390–392). However, the NRA may question whether contemporary scientists could have distinguished the posit that light consists of transverse waves from the posit of a medium for them (rather than regarding ‘light consists of transverse ether waves’ as a single confirmed posit). Harker (2013), on the other hand, recommends focusing on the issue of theoretical progress and considering what empirical progress has been made by the revision/replacement of the older theory. He argues that the progress is not due to the theory components that the new theory shares with its predecessors, nor do all the revised components contribute to the progress. Thus, he claims that empirical progress is “evidence for the approximate truth of those constituents of the new theory that precipitate progress” (p. 92). Again, Harker illustrates his position with the case of Fresnel’s wave theory. Noting that the concept of light being transverse waves and the actual commitment to the ether can be conceptually distinguished, he claims that the empirical progress made by the theory over the corpuscular theory of light is completely attributable to Fresnel’s equations and the assumption that light is a transverse wave, while the posit of the ether did not bring about any additional empirical progress (Harker, 2013, p. 99). He claims that this analysis could have been made by Fresnel himself (Harker, 2013, p. 99), but no historical evidence is provided for this claim.8 Considering that Harker admits that not all the revised
8 He cites Buchwald’s (1989) remark that the ether’s role was not “generative in a direct sense” (p. 307). However, this is a retrospective analysis and not intended to show that Fresnel could have conceptually differentiated the wave hypothesis from the existence of some medium for it.
components serve to bring about empirical progress, the same concern as the one raised for Kitcher’s position seems to arise, this time regarding whether scientists can identify which parts of the revised/added theoretical components were responsible for the empirical progress. Harker differentiates his position from Kitcher’s and Psillos’ positions, arguing that while “they haven’t identified a suitable perspective from which to evaluate past theories,” his position suggests “a new perspective from which to conduct historical analyses … [i.e.] that of older theories” (2013, p. 95). However, since he does not provide a further procedure to identify which parts of the revising/replacing theory are responsible for the success, it is not clear whether this difference makes his position any less vulnerable to the challenge of the NRA. This concern becomes especially evident when the progress in question is a case of theory replacement rather than a gradual improvement of older versions of a theory. For example, Harker regards Fresnel’s wave theory of light as progress over the corpuscular theory (p. 99), and therefore the question he faces is no different from the one selective realists have been concerned with: ‘Which parts of Fresnel’s wave theory of light are responsible for its success?’ In fact, other than the case of Rutherford’s atomic model, the target of the analysis he illustrates with his criterion seems to be the theory as a whole (i.e., the wave theory of light and the phlogiston theory). Matthias Egg’s (2016) recent attempt to improve on semirealism and thereby provide a prospectively applicable criterion of confirmed theoretical components appears to leave less room for scientists’ judgment and hence is less vulnerable to the NRA (i.e., it may stand even if we cannot address the NRA).
In an argument similar to the one presented above, he argues that semirealists need to explicate the notions of ‘the minimal interpretation’ and ‘detection property,’ and he suggests doing so with the notion of causal warrant that he developed elsewhere (Egg, 2012) based on Suárez’s (2008) insight. Following Chakravartty, Egg regards properties as the primary (but not the only) object of commitment for realists. Then, he claims that detection properties can be explicated as “those for which we have causal warrant,” where ‘causal warrant’ is warrant brought about by an inference to the best explanation that satisfies the criteria of non-redundancy, material inference, and empirical adequacy. Especially important among these criteria for the purpose of explicating the notion of a detection property is material inference, which is defined as an inference “that results in ascribing to a concrete entity a property for which there is a well-defined notion of what it means to modify it” (Egg, 2012, p. 266). He claims that satisfaction of this criterion makes the inferred property (say P) a detectable property because, for such a property, we can tell “what would happen if P had not been present” (Egg, 2016, p. 126). If a detectable property also satisfies the other criteria (i.e., non-redundancy and empirical adequacy), it becomes a detection property. Having developed a prospectively applicable criterion this way, Egg illustrates its application with the case of the ether. The question he now poses for himself is whether one can categorize the amplitude of ether waves as a detection property and the substantiality of the supporting medium as auxiliary (pp. 127–128). He claims that this is possible because, while “there is a well-defined notion of what it means to modify” the amplitude of ether waves, there is none for the ether’s substantiality (p. 128). This means that the latter property does not satisfy the criterion of material inference and hence does not count as a detection property.
He further argues that the amplitude of ether waves also satisfies the other two criteria, and concludes that it counts as a detection property. However, this procedure still leaves some room where the problem of the NRA may arise because it does not specify which properties one should pay attention to. For example, considering
that ‘substantiality’ is not among the properties with which scientific theories are concerned (unlike ‘amplitude’), the above analysis is not necessarily straightforward: even if past scientists could have successfully categorized the amplitude of ether waves as a detection property, the NRA may question whether they would have stopped to ask whether the ether also has the property of substantiality, rather than simply coming to believe in the ether as well. Indeed, although Egg’s argument about substantiality seems to draw on the fact that substantiality is not the kind of property that allows for variation (either in quality or quantity), and thus the argument is applicable to the substantiality of anything, he does not consider the property of substantiality when he performs the analysis with atoms and concludes that causal realists can be committed to them (Egg, 2016, Section 4). Additionally, at one point in that analysis, he mentions that “[t]he assessment with respect to material inference is less straightforward because it is not immediately clear whether there is a sufficiently well-defined notion of what it means to modify the properties to which the atomic hypothesis refers” (Egg, 2016, p. 135). Though he performs this assessment by picking out the value of Avogadro’s number as such a property and by choosing an appropriate possible world in which to counterfactually consider what would happen if the value were different, the application of the criteria does not seem straightforward enough to make the position completely free from the problem of the NRA. It is not the purpose of this paper to decide the exact scope of the NRA, and the above arguments do not deny that the positions discussed point to plausible ways in which selective realism can be developed.
However, as long as they leave some room for scientists’ own judgments (especially concerning the way a theory is divided into components) in the identification of confirmed components, the concern of the NRA can be relevant to them as well. Considering its potentially broad scope, selective realists cannot leave the NRA unaddressed.9
3. The NRA examined

3.1. Imperfection does not imply incompetence

How serious is the concern raised by the NRA? The first point to note is that selective realists can admit a few failures by past scientists to identify the true parts of their contemporary successful theories.10 Though such examples point to our imperfection in making these judgments, a few failures do not by themselves imply that our judgments are unreliable, unless the failures are too frequent. Contrast this with the case of the pessimistic meta-induction, in which realists cannot allow even a few historical counter-examples (i.e., cases of successful but false theories11) to the no-miracles argument, for doing so would amount to appealing to miracles in explaining their success, which would undermine the intuitive appeal of the argument. The NRA, by contrast, concerns the reliable detectability of the true components of a successful theory. This is a different question from whether the theory contains some true components that served for its success, and reliability is a matter of degree. Thus, selective realists can admit the failures by Lavoisier, Weismann and Maxwell while maintaining the reliable
9 The NRA may threaten even a certain version of anti-realism as well, because the fluid theory of heat, the hypothesis of germinal specificity, and the hypothesis of some mechanical medium are not empirically adequate from our current perspective. Past scientists would have been mistaken if they had believed only in the empirical adequacy of those hypotheses.
10 A discussion with Takeshi Sakon and Tetsuji Iseda was very helpful in clarifying this point.
11 Hereafter, I use the term ‘success’ to mean ‘novel predictive success’.
applicability of the strategy as well as the intuition behind the no-miracles argument. Selective realists may even remove the cases of Lavoisier and Weismann from the list of failed applications of the selective confirmation strategy by arguing that, since their theories did not yield a novel predictive success, a selective realist would not have searched for true components in those theories in the first place. Thus, they may claim that it is inappropriate to count those cases as failed applications of the strategy. Though this line of argument is not without problems,12 selective realists may be able to handle those difficulties by refining their claim about the relation between novel predictive success and the existence of true theoretical components (e.g., Vickers, 2013).

However, these rejoinders work only as long as there are not too many other cases of failure; otherwise they will suggest our inability to perform such identification and will make selective realism an uninteresting position. Unfortunately for selective realists, judgments of the same type as those by Lavoisier, Weismann, and Maxwell can be found in abundance in the history of science, regardless of whether the theory in question had achieved a novel predictive success. In what follows, however, I argue that the track record of such judgments by past scientists has little relevance to the question of the contemporary, reliable applicability of the selective confirmation strategy. This is because the commitments made by Lavoisier, Weismann, and Maxwell were made either: (a.) before the scientific community at the time reached a consensus on the theory or the particular hypothesis in question, or (b.) about the usefulness of the component as a working hypothesis rather than about its confirmed status. Misjudgments of this kind are a routine part of scientific activity. 
If they were counted as counter-examples to the reliability of our judgment, the NRA would foreclose the future of selective realism. In what follows, I first show that Lavoisier’s, Weismann’s and Maxwell’s judgments had this character, and then explain why they have little relevance to the question of the reliable, contemporary applicability of the selective confirmation strategy.

3.2. Examination of the base cases

Lavoisier’s and Weismann’s judgments are typical examples of judgments made before the contemporary scientific community reached a consensus (i.e., case (a.) above). It is well known that when the caloric theory of heat flourished and Lavoisier was convinced of the existence of such a substance, there were still supporters of the dynamical theory of heat. Thus, as of 1783, Lavoisier and Laplace wrote, “Scientists are divided about the nature of heat. A number of them think of it as a fluid diffused throughout nature … Other scientists think that heat is only the result of the imperceptible motions of the constituent particles of matter” (Lavoisier and Laplace, 1994[1783], pp. 189–190). Though the dynamical theory of heat was certainly far less popular among scientists at the time and far less developed than the fluid theory, it was nonetheless recognized as a rival theory. Indeed, Stanford himself appeals to such rivalry between those
12 In a symposium paper on Exceeding Our Grasp, Stanford notes that Weismann’s theory actually achieved a certain kind of novel predictive success (Psillos et al., 2009, pp. 383–384). Peter Vickers points to another example of a false theory that achieved an unimpressive novel predictive success and suggests that selective realists should make a more nuanced claim concerning the relation between a novel predictive success and the warrant it confers on the realistic commitment to the theory in question (Vickers, 2013, pp. 195–196).
Please cite this article in press as: Onishi, Y., Defending the selective confirmation strategy, Studies in History and Philosophy of Science (2017), http://dx.doi.org/10.1016/j.shpsa.2017.07.001
Y. Onishi / Studies in History and Philosophy of Science xxx (2017) 1e10
theories when he explains Lavoisier and Laplace’s apparently agnostic attitude, expressed at one point in Mémoire sur la Chaleur, towards the material fluid (Stanford, 2006, p. 176). According to Stanford, the passage in which they express an agnostic attitude occurs where they introduce a new calorimetric method, and the agnosticism was due to their intention of presenting the method as available to both the fluid theorist and the dynamical theorist of heat. Furthermore, to show that Lavoisier was actually committed to the material fluid, Stanford cites other passages from his writings, in which Lavoisier defends the hypothesis of the material fluid against alternative conceptions of heat (Stanford, 2006, pp. 173–179).13 These points suggest that the situation in the scientific community at the time was still one of dissent with respect to the nature of heat.

Similarly, Stanford recognizes that “[Weismann] alone followed [Wilhelm] Roux in insisting on a qualitative nuclear division and germinal specificity (and these aspects of his account were widely criticized by his contemporaries)” (Stanford, 2006, p. 119). The criticism was not without reason. There was putative counter-evidence against the hypothesis of germinal specificity, such as budding and the famous experiments on sea urchins by Hans Driesch. In those experiments, Driesch separated an egg at its first cell division and found that a complete larva could develop from each half. This does not seem to agree with the hypothesis of germinal specificity, according to which each of the divided cells contains a different germinal substance. Thus, again, the situation of the scientific community at the time was one of dissent with respect to the hypothesis.

The case of the ether in the 19th-century theory of light and electromagnetism requires more subtle examination; namely, we need to evaluate the situation separately before and after 1888, the year in which electromagnetic waves were first detected by Heinrich Hertz. 
Before that year, I argue, the mechanical medium was widely accepted, but its epistemic status was still undecided (i.e., a state of opinion corresponding to (b.)); and soon after it, there appeared an alternative conception of the medium that lacks a mechanical character (i.e., the state of opinion was (a.)). In the following, I defend this reading of the history with some evidence.

Let us return to the passage that Stanford cites from Maxwell’s A Treatise on Electricity and Magnetism (1873). In the pages preceding that passage, Maxwell reviews several theories by Bernhard Riemann, Rudolf Clausius, Carl Neumann, and Enrico Betti on the manner in which electric action propagates from one body to another. Some of them treated the propagation as similar to that of light; others did not. All those scholars thought that the propagation takes place in time, but none of them made explicit mention of the medium in which the propagation occurs. Thus, Maxwell says, “There appears to be, in the minds of these eminent men, some prejudice, or à priori objection, against the hypothesis of a medium in which the phenomena of radiation of light and heat, and the electric actions at a distance take place,” and he suspects that this reluctance is due to the lesson they learned from past theorists’ “habit of accounting for each kind of action at a distance by means of a special aethereal fluid, … the properties of which were invented merely to save appearances” (Maxwell, 1873, p. 437, my emphasis). Then follows the part that Stanford cites (i.e., the idea that there should be some medium in which the energy is stored during the
13 To be fair, it should be noted that Stanford’s purpose here is to refute Psillos’ seemingly universally quantified claim that “scientists of this period were not committed to the truth of the hypothesis that the cause of heat was a material substance” (Psillos, 1999, p. 119). For this purpose, Stanford’s historical claim that more than a few contemporary scientists were committed to the hypothesis may be sufficient.
propagation). However, the conclusion Maxwell draws from this consideration is: “Hence all these theories lead to the conception of a medium in which the propagation takes place, and if we admit this medium as an hypothesis, I think it ought to occupy a prominent place in our investigations” (Maxwell, 1873, p. 438, my emphasis). From these passages, as well as from the fact that Maxwell defends the assumption here, it seems that what is expressed is not a conviction of the established status of the assumption of a mechanical medium, but the view that the assumption is plausible or natural enough to be employed as a working hypothesis on which further investigations should be based. This reading of the passage also aligns with the fact that, as cited above, Maxwell was fully aware of the naïvety of positing a medium ‘merely to save appearances.’ Given this awareness, it seems unlikely that he was satisfied with simply positing the ether and believed in its confirmed status without further investigation.

The same recognition of the epistemic status of the medium at that time can be seen in George Francis Fitzgerald’s address at the British Association for the Advancement of Science in 1888:

In a presidential address on the borderlands of the known, delivered from this chair, the great Clerk Maxwell spoke of as an undecided question whether electromagnetic phenomena are due to a direct action at a distance or are due to the action of an intervening medium. The year 1888 will be ever memorable as the year in which this great question has been experimentally decided by Heinrich Hertz in Germany, and, I hope, by others in England. It has been decided in favour of the hypothesis that these actions take place by means of an intervening medium. 
Although there is nothing new about the question, and although most workers at it have long been practically satisfied that electromagnetic actions are due to an intervening medium, I have thought it worthwhile to try and explain to others who may not have considered the problem, what the problem is and how it has been solved. (Fitzgerald, 1902[1888], p. 231, my emphasis)

Here again, one can see the hypothetical status of the medium up to that time, though it had been widely accepted as a working hypothesis. These passages by Maxwell and Fitzgerald suggest that although the conviction expressed by Maxwell of the existence of some mechanical medium may have been widely shared among contemporary scientists (at least in Britain), its epistemic status remained hypothetical until 1888.

What, then, was the situation after 1888? Hertz’s experiment had a major impact on continental scientists and made them accept the concept of the electromagnetic field (Harman, 1982, p. 109). Hirosige (1969) points to Hertz’s experiment, as well as to Henri Poincaré’s Sorbonne lectures on Maxwell’s theory, as the decisive factors in Hendrik Lorentz’s conversion to the conception of contiguous action (p. 186). However, the notion of the ether developed in Lorentz’s “La théorie électromagnétique de Maxwell et son application aux corps mouvants” (1892) was no longer a mechanical medium. Hirosige claims, “[Maxwell’s original conception] differs from our present concept of the electromagnetic field. It was Lorentz’ theory of electrons in 1892 that brought about that change. In Lorentz’ theory, … [t]he electromagnetic field is regarded as a dynamic state of the stationary ether, and it is deprived of all mechanical qualities” (Hirosige, 1969, p. 208). This conception of the medium by Lorentz gained followers by around 1900 (Harman, 1982, p. 119). 
Thus, I argue that although the mechanical medium may have been widely accepted by contemporary scientists, its epistemic status was still undecided before 1888 (i.e., the medium was accepted as a working hypothesis); and after that, there soon
appeared an alternative conception of the medium that lacks mechanical character (i.e., the consensus on the mechanical medium disappeared).14 Although the evidence presented here is limited, and although it is not clear how much evidence is required to show that the scientific community at the time was not falsely convinced of the mechanical medium, the above evidence suggests that nineteenth-century physicists had more subtle and varied attitudes even towards the existence of some mechanical medium than the picture Stanford draws from Maxwell’s remark.15

3.3. Are these cases relevant to the reliable applicability of the selective confirmation strategy?

If these analyses are correct, the false commitments by Lavoisier, Weismann, and Maxwell were made either: (a.) before the scientific community at the time reached a consensus on the theory or the particular hypothesis in question, or (b.) about the usefulness of the component as a working hypothesis rather than about its confirmed status. It is inappropriate to examine the reliability of contemporary application of the selective confirmation strategy based on these cases. First, as in case (b.), if the commitment was not about the confirmed status of the theory component, the scientist was not really committed to its truth, and therefore it is not a case of false commitment. Second, regarding case (a.), it is not appropriate to expect individual scientists to reliably identify the confirmed theory components. It has been noted that actual scientists are often affected by various (social as well as cognitive) biases and personal interests, and yet epistemic virtues such as rationality or objectivity can appear at the community level (e.g., Longino, 1990, 2002). 
Some social epistemologists also suggest that those putatively undesirable tendencies of individual scientists may actually play a positive role in bringing about the most effective distribution of cognitive labor in the scientific community, or that those tendencies may be the result of the rational credit-allocation system (Kitcher, 1990; Solomon, 2001; Strevens, 2003, 2006). These insights from social epistemology suggest that the reliability of community-level judgments and individual-level judgments could be different and that the former are more reliable than
14 This analysis suggests that scientists immediately after 1888 agreed on the confirmed status of the mechanical medium. I will discuss this point in Section 4.
15 One may think that we need to consider the luminiferous ether and the electromagnetic ether separately. As noted above, the passages cited, including that of Maxwell that Stanford cites, concern electromagnetic phenomena in general and not the medium for light propagation. Thus, one may think that, at least by the 1850s, the mechanical ether as a medium for light propagation had achieved a more established epistemic status, as the wave theory of light gained more support from the experiments by Airy, Foucault and Fizeau (Whittaker, 1958[1910], pp. 126–127). If we evaluate the track record of scientists’ judgments separately for the electromagnetic ether and the luminiferous ether, then even if I have successfully shown that scientists at the time had a subtle attitude towards the former, it does not follow that they had the same attitude towards the latter. In that case, the overall account of the scientists’ track record for the case of the ether becomes fifty-fifty. Worse still, one may say, was the track record of the scientists, because even Lorentz’s theory contained a postulate that was finally discarded in favour of special relativity, i.e., the assumption of a stationary ether. Psillos also refers to this property of the ether and excludes it from the core causal description of the ether, i.e., the descriptions associated with theoretical terms that are essential for fixing their referents (Psillos, 1999, p. 314 n.9). But, again, the question is whether contemporary scientists could have made this differentiation successfully. Though this is a difficult question that requires substantial historical research, it seems that Lorentz himself thought that the choice between the theory of relativity and his own theory was a matter of preference, and he took a cautious attitude toward realism (Hirosige, 1976, p. 70; Frisch, 2005, pp. 669–672).
the latter. Indeed, if we look at the cases of caloric, germinal specificity, and the mechanical medium at the community level, the analysis in the previous subsection suggests that the scientific community at the time properly avoided false commitments to those entities or hypotheses. Namely, there were supporters of alternative theories in the community, and the scientific community at the time did not reach a consensus on the confirmed status of caloric, germinal specificity, or the mechanical medium. Given these putative reasons to suspect a difference in reliability between individual-level and community-level judgments, we should consider the reliability of community-level judgments, rather than that of individual-level judgments, in examining the reliable applicability of the selective confirmation strategy. After all, misjudgments at the individual level would not be so problematic if the community-level judgment is satisfactorily reliable. This means that the target of the NRA should be the reliability of community-level judgments. But then the current basis of the NRA is inappropriate for questioning such reliability, because its base cases are failures of individual-level judgment. To question the reliability of community-level judgments, the basis of the NRA should be cases of failed community-level judgments.16

What, then, do the false commitments by Lavoisier and Weismann suggest? It is that individual scientists’ judgments concerning the true components of a successful theory are unreliable if they are made before the scientific community reaches a consensus on the theory or hypothesis in question. This should not be surprising. 
It seems quite natural that the proponents of a hypothesis express strong commitment towards it, especially when they are defending it while its epistemic status is still in dispute; and, since all but a few rival hypotheses are eventually discarded, it is no wonder that one can find many cases of false theoretical commitment in the history of science. The community’s consensus, on the other hand, appears only after various approaches have been pursued and various problems examined by the opposing parties. It would be inappropriate to assess the reliability of community-level judgments based on the record of misjudgments by individual scientists at a time of dissent.

4. Concerns, clarifications, and further research

So far, I have argued that the problem of the reliable, contemporary applicability of the selective confirmation strategy should be examined at the community level rather than at the individual level, and that the current basis of the NRA is insufficient for questioning the reliability of the former type of judgment. To question it, I argued, anti-realists should provide cases of failed community-level judgment. This objection to the NRA implies a requirement on the application of the selective confirmation strategy, namely, that selective realists should apply their criteria to a successful theory or hypothesis only after the scientific community has reached a
16 Patrick Forber (2008) and Peter Godfrey-Smith (2008) also appeal to community-level properties (such as the division of labor and the number of scientists involved) in arguing against Stanford’s (2006) new induction, a history-based suspicion that we are unable to exhaust the range of possible alternatives to our current theory and that there are always unknown alternatives (‘the problem of unconceived alternatives’). They argue that scientists as a community may be able to come up with more alternatives, so that the new induction looks less problematic at the community level. However, they are also aware of the limitations of this line of response, as well as of the fact that Stanford himself anticipates it and argues that while past scientists as a community came up with more alternatives, they still failed to conceive of all the alternative theories that appeared later (Stanford, 2006, p. 129). Hence, the community-level argument does not seem to work as a response to the new induction.
consensus on it. In what follows, I will discuss three concerns about my objection to the NRA and about this requirement.

The first concern may be that the requirement of consensus amounts to giving up the prospective identification of the confirmed components of a successful theory and thus leads to a very weak realism. It is true that my objection implies that we need a kind of hindsight to reliably identify the confirmed components of successful theories. However, this is different from giving up contemporary commitment once and for all. We do need hindsight (i.e., the perspective at the time of consensus), but scientists who have reached a consensus on a certain successful theory can make a commitment to certain components of their contemporary theory and to the stability of those components through future theory changes. Without such a moment at which scientists can trust their contemporary understanding of nature, retrospective realism would lead to a very weak version of realism that can never say which theory parts are really confirmed, for there would be a (probably infinite) regress (or, rather, progress) of perspectives from which retrospective judgments should be made. What I suggested above is not that sort of retrospective realism. My point is that we should refrain from applying the selective confirmation strategy to a theory whose epistemic status is still under dispute and assuring scientists of the truth of its components based on whatever criteria selective realists have. Instead of undertaking that task, or completely giving up contemporary commitments, selective realists can wait, letting scientists do their job until they settle on a certain theory. Stanford demands that “[A]ny convincing defense of realism by appeal to the strategy of selective confirmation will have to provide us with criteria that … 
can now be applied in advance of any future developments to identify those idle features or components of scientific theories that are not really confirmed by the empirical successes those theories enjoy” (Stanford, 2006, p. 168; original emphasis). However, if the ‘now’ in this quotation means at any given time, the requirement is too demanding. Selective realists do need to be able to make contemporary commitments, but not at any given time.

The second concern may be that the notion of consensus is too vague. First, the extent of consensus required is not clear. For example, the fluid theory of heat was far more popular and better developed than the dynamical theory in the late 18th and early 19th centuries. If, one may wonder, the situation of the fluid theory at the time does not count as consensus, as I argued above, what would? However, the notion of consensus with which we are concerned here is not a matter of the number of scientists who support a theory. What matters is how the scientists at the time saw the state of the scientific community. It is not because the fluid theory lacked enough followers that the situation does not count as consensus, but because there was an alternative theory recognized as such. The state of the scientific community can be seen from the writings of contemporary scientists; for example, if a certain hypothesis is explicitly defended, that suggests its unsettled status.

Another ambiguity in the requirement of consensus concerns the stability of consensus. My analysis of the ether in the 19th-century theory of light and electromagnetism seems to suggest that there was a period of 5–10 years during which scientists believed in the mechanical medium, and this seems to be a case of
17 For example, one may find a condition of ‘stability of consensus’ such that a hypothesis that satisfies it tends to be retained through theory changes. But, again, this allows for degrees, and the analysis should be something like ‘a hypothesis with such and such degree (or kind) of stability of consensus is more likely than not to be retained through theory change’.
failed community-level judgment. Perhaps such a consensus is too short-lived, but then the question is how stable is stable enough for a consensus to be a reliable guide for selective realists. This is surely a problem that selective realists need to address if they employ my line of objection to the NRA, but it is an empirical problem to be solved through historical inquiry17 and not the sort of problem that endangers the tenability of my objection.

The third concern one may have about my objection to the NRA is that it is not clear whether community-level judgments are in fact more reliable than individual-level judgments. The point of my objection to the NRA is, in short, that it fails to rule out the possibility that individual scientists, who are often mistaken in identifying the confirmed components of their contemporary theories, can still make reliable judgments as a community. Unless selective realists show that this is actually the case, one may argue, they cannot really claim the reliable applicability of the selective confirmation strategy. Of course, as I noted above, community-level judgments are more reliable in the sense that the scientific community as a whole is more cautious than its individual members and thus less vulnerable to hasty misjudgments. However, one may contend, this might be so simply because the scientific community never reaches consensus on anything, and, if that is the case, the requirement of consensus would lead to agnosticism about the confirmed components. The reliability of judgment that selective realists need concerns not just avoiding wrong commitments but also making right ones. 
To show that community-level judgments are reliable in both these senses, selective realists also need to point to cases in which theoretical components on which scientists agreed were not discarded in later theoretical development.18 Stanford makes a similar criticism of a possible interpretation of Psillos’ selective confirmation strategy. If Psillos’ position requires unanimous agreement among scientists regarding the confirmed components of their contemporary theories, Stanford argues, the resulting selective realism would be a very weak position, unable to tell us which theoretical components to believe “in the routine case of disagreement among contemporary scientists” (Stanford, 2006, p. 179, original emphasis). As I argued above, the notion of consensus I am concerned with does not require literally unanimous agreement among scientists, and it is too much to expect selective realists to identify the confirmed components of theories whose validity is still in dispute. However, a concern remains regarding how often the selective confirmation strategy is actually applicable. If applications are too rare, or if the verdicts they yield are unreliable, selective realism would be an uninteresting position. This is a genuine question that requires substantial historical study before any conclusion can be drawn, which is why I noted at the beginning that the aim of this paper is only to undermine the NRA, not to provide a positive argument for the reliable applicability of the selective confirmation strategy. On the other hand, it may not be so difficult to find such cases in the history of science. For instance, Miriam Solomon (2001) describes how consensus on
18 One may wonder whether working scientists care about which components of their theory are confirmed by evidence: is this not a problem that only philosophers of science care about? It may be true that scientists are not necessarily concerned with the scientific realism debate and do not talk about ‘confirmation’ in a realist sense (cf. Fine, 1984). However, the question of which parts are essential in explaining or predicting certain phenomena should be their concern as well, for their understanding of which parts allow for modification or which parts require further empirical support will guide their ensuing research. It is their ability to make such judgments that we are concerned with here, and scientists themselves do not necessarily have to be committed to the reality of the parts that they regard as confirmed by evidence.
plate tectonics emerged among scientists between the 1950s and 1970 (pp. 102–109). It was not, she stresses against some conventional descriptions of the case, that scientists accepted the theory at once as soon as certain crucial evidence was obtained; rather, the consensus formed gradually, and scientists working in different fields and in different parts of the world accepted the idea of drift at different times and for different reasons. She notes that “some paleomagnetists (most famously, Runcorn) espoused drift already in the 1950s, when the data was still uncertain” (2001, p. 103); oceanographers and seismologists accepted the theory in the mid-1960s with the emergence of new oceanographic data and theories on the mechanism of drift (seafloor spreading and plate tectonics); and it took a few more years before continental geologists (re)interpreted the data of their field in terms of drift and came to accept the theory.

Chemists in the early 20th century agreed on the hexagonal arrangement of carbon atoms in benzene, as well as on the number of carbon and hydrogen atoms in it, even though they strongly disagreed with each other on how the fourth valence of each carbon atom was directed (Brock, 1992; Brush, 1999, p. 264). This agreement on the hexagonal structure appeared when a hypothesis that did not share this component (Ladenburg’s prism model) was discarded in light of new evidence, and the agreed components of the theory are still retained today, despite later developments concerning the nature of the chemical bond.

These cases suggest that scientists can come to agreement on certain theories or theoretical components through collective research efforts. It is the reliability of such collective judgments that should be examined to assess our ability to identify the confirmed components. Note, however, that this is not to say that the scientific community has some kind of magical power. 
Certain conditions must be met for the community to embody epistemic virtues such as rationality and objectivity, which certainly bear on the reliability of community-level judgments. This means that not every case of failed community-level judgment is relevant to the kind of reliability we are concerned with: if a consensus was forced by a political authority, for example, it has no implications for the reliability of community-level judgments formed through mutual criticism within the community. What, then, are the properties of the scientific community that are relevant to the reliability of its judgments? This is something we should learn from social epistemology. Longino (1990) suggests four conditions for the objectivity of the scientific community: namely, the existence of (1) avenues for criticism, (2) shared standards of criticism, (3) responses to criticism, and (4) equality of intellectual authority in the community (pp. 76-79). Mutual criticism in such a community makes it less likely that the community's decision on a hypothesis is affected by the idiosyncratic assumptions or subjective preferences of its individual members. Solomon (2001) proposes a different requirement, concerning how what she calls decision vectors (i.e., factors that affect one's theory choice) are distributed in the community (2001, p. 54, pp. 117-118). Though these authors are not necessarily committed to scientific realism, and though one need not agree with their particular proposals, the properties of the scientific community discussed in social epistemology bear on the reliability of community-level judgments, and any examination of that reliability must take such properties into account.
It should also be noted that this is not to claim that scientists can reliably identify the confirmed components as long as the community satisfies those properties; hence, my response to the NRA does not defend only Psillos' or Giere's particular versions of selective realism, which leave the identification of confirmed components to scientists' own judgments (let us call these kinds of
selective realism freehand selective realism). Recall the way the NRA challenged the other positions, those that provide certain criteria for identifying the object of commitment, such as structural realism and the recent positions of Peters and Egg19 (let us call these strands criteria-selective realism). The concern was whether, before applying those criteria to classify theory pieces into confirmed and unconfirmed parts, scientists can split a theory into pieces in an appropriate way. This was called into question by the quotation from Maxwell, in which he seems to express his commitment to the medium as essential for the propagation of energy in time. As I argued, however, Maxwell's remark should be understood as defending the naturalness of positing the medium as a working hypothesis on which further investigation should be based, and he took a more careful attitude towards the medium itself. If this is correct, the concern about criteria-selective realism has also been addressed. When one wants to present evidence for the reliable applicability of one's favorite version of selective realism, the historical cases one needs to present differ depending on whether one wants to defend freehand selective realism or criteria-selective realism. Suppose a mini-theory espoused in the past consists of three theory components (from the current perspective), {A, a, b}: A is some mathematical formula, and a and b are entities that might be involved in its interpretation. Only {A, a} are considered confirmed today (either by scientists or by the application of certain criteria of selective realism). Suppose further that the scientific community at the time satisfied the criteria of objectivity mentioned above.
Now, for this case to serve as evidence for the reliability of scientists' freehand judgments (i.e., the reliable applicability of freehand selective realism), the scientists at the time must have been able to classify {A, a} as confirmed and {b} as unconfirmed. On the other hand, for this case to serve as evidence of the reliability of scientists' judgments aided by certain criteria of classification (i.e., the reliable applicability of criteria-selective realism), the scientists at the time only had to have been able to differentiate {A, a} and {b} as distinct theoretical components, not merely in the sense that they labeled them differently, but in the sense that they recognized the possibility that {A, a} is true without {b} being true, and hence that the warrant for {A, a} does not automatically warrant {b}.20 As long as they were treated as separable theoretical components, criteria-selective realists can claim that scientists at the time could have reached the 'right' conclusion by applying the criteria suggested by criteria-selective realism. Furthermore, if the past scientists distinguished {A, a} and {b} in the above sense but believed in both of them on the basis of some reasoning, the case will support criteria-selective realism against freehand selective realism, because it is a case in which the past scientists' freehand judgment failed but, counterfactually, would not have failed had they known the criteria of selective confirmation.21 Thus, the validity of each version of selective realism with regard to its reliable applicability should be decided on the basis of historical investigation, and, as I have argued, such investigation should pay attention to community-level judgments, rather than those of individual scientists, as well as to the community-level
19 As I argued, Harker's version leaves the final decision to scientists' own judgments and is similar to Psillos' and Kitcher's positions, especially when the case in question is a theory replacement rather than a minor modification of the preceding theory.
20 For, otherwise, {A, a, b} would be treated as one posit.
21 Which of the suggested criteria (if any) would have worked is an empirical question, depending on the case in question. If none of them would have worked, the case does not differentiate between criteria-selective realism and freehand selective realism.
Please cite this article in press as: Onishi, Y., Defending the selective confirmation strategy, Studies in History and Philosophy of Science (2017), http://dx.doi.org/10.1016/j.shpsa.2017.07.001
properties relevant to the reliability of community-level judgments.
5. Concluding remarks

The NRA introduces a new point of concern to the current scientific realism debate, i.e., the contemporary detectability of the confirmed theory components. While I have attempted to show that the challenge is undermined, the way I addressed it leads to new research questions, such as 'How reliable was the past scientific community in making such identifications?' and 'What properties of the community are relevant to its epistemic performance?' Answering these questions requires insights from social epistemology as well as from the history of science, for the question we now face is whether, and under what conditions, the community of actual scientists can perform a certain epistemic task. Thus, the response to the NRA developed in this paper directs our attention to the social nature of scientific activity, an aspect of science that has not been discussed very much in the scientific realism debate.

Acknowledgements

I am grateful to Otávio Bueno, Harvey Siegel, Peter Lewis, Dan Williams, and the anonymous reviewers of this journal for their insightful comments on earlier versions of this paper. I also thank Tetsuji Iseda, Takeshi Sakon, and the audiences of the Gesellschaft für Wissenschaftsphilosophie (GWP) Conference 2016 in Düsseldorf and the International Workshop on Scientific Realism in Kyoto for their helpful comments and discussions. This paper is based on a doctoral dissertation submitted to Kyoto University. This work was supported by a Fulbright Doctoral Dissertation Research Grant [Grant number: 15131828].

References

Brock, W. (1992). The Fontana history of chemistry. Fontana Press.
Brush, S. (1999). Dynamics of theory change in chemistry: Part 2. Benzene and molecular orbitals, 1945-1980. Studies in History and Philosophy of Science, 30(2), 263-302.
Buchwald, J. (1989). The rise of the wave theory of light. The University of Chicago Press.
Cartwright, N. (1983). How the laws of physics lie. Oxford University Press.
Chakravartty, A. (2007).
A metaphysics for scientific realism: Knowing the unobservable. Cambridge University Press.
Chakravartty, A. (2008). What you don't know can't hurt you: Realism and the unconceived. Philosophical Studies, 137, 149-158.
Chang, H. (2002). Preservative realism and its discontents: Revisiting caloric. Philosophy of Science, 70, 902-912.
Egg, M. (2012). Causal warrant for realism about particle physics. Journal for General Philosophy of Science, 43, 259-280.
Egg, M. (2016). Expanding our grasp: Causal knowledge and the problem of unconceived alternatives. British Journal for Philosophy of Science, 67, 115-141.
Fine, A. (1984). The natural ontological attitude. In J. Leplin (Ed.), Scientific realism (pp. 83-107). University of California Press.
Fitzgerald, G. F. (1902 [1888]). Address to the mathematical and physical section of the British Association. In J. Larmor (Ed.), The scientific writings of the late George Francis FitzGerald (pp. 229-240). Dublin University Press.
Forber, P. (2008). Forever beyond our grasp? Review of P. Kyle Stanford (2006), Exceeding our grasp: Science, history, and the problem of unconceived alternatives. Biology and Philosophy, 23(1), 135-141.
Frisch, M. (2005). Mechanisms, principles, and Lorentz's cautious realism. Studies in History and Philosophy of Modern Physics, 36, 659-679.
Giere, R. (1988). Explaining science: A cognitive approach. The University of Chicago Press.
Godfrey-Smith, P. (2008). Recurrent transient underdetermination and the glass half full. Philosophical Studies, 137, 141-148.
Hacking, I. (1983). Representing and intervening: Introductory topics in the philosophy of natural science. Cambridge University Press.
Hardin, C., & Rosenberg, A. (1982). In defence of convergent realism. Philosophy of Science, 49, 604-615.
Harker, D. (2013). How to split a theory: Defending selective realism and convergence without proximity. British Journal for Philosophy of Science, 64, 79-106.
Harman, P. M. (1982). Energy, force, and matter: The conceptual development of nineteenth-century physics. Cambridge University Press.
Hirosige, T. (1969). Origins of Lorentz' theory of electrons and the concept of the electromagnetic field. Historical Studies in the Physical Sciences, 1, 151-209.
Hirosige, T. (1976). The ether problem, the mechanistic worldview, and the origins of the theory of relativity. Historical Studies in the Physical Sciences, 7, 3-82.
Kitcher, P. (1990). The division of cognitive labor. The Journal of Philosophy, 87, 5-22.
Kitcher, P. (1993). The advancement of science. Oxford University Press.
Laudan, L. (1981). A confutation of convergent realism. Philosophy of Science, 48, 19-49.
Lavoisier, A., & Laplace, P. (1994 [1783]).
Memoir on heat: Read to the Royal Academy of Sciences (H. Guerlac, Trans.). Obesity Research, 2(2), 189-202. (Reprinted from Memoir on Heat: Read to the Royal Academy of Sciences, Neal Watson Academic Publications Inc., 1982)
Leplin, J. (1997). A novel defense of scientific realism. Oxford University Press.
Longino, H. (1990). Science as social knowledge: Values and objectivity in scientific inquiry. Princeton University Press.
Longino, H. (2002). The fate of knowledge. Princeton University Press.
Lyons, T. (2006). Scientific realism and the stratagema de divide et impera. British Journal for Philosophy of Science, 57, 537-560.
Maxwell, J. C. (1873). A treatise on electricity and magnetism (Vol. 2). Clarendon Press. https://archive.org/details/electricandmag02maxwrich (Accessed 8 November 2016).
Peters, D. (2014). What elements of successful scientific theories are the correct targets for 'selective' scientific realism? Philosophy of Science, 81, 377-397.
Psillos, S. (1999). Scientific realism: How science tracks truth. Routledge.
Psillos, S., Saatsi, J., Winther, R. G., & Stanford, P. K. (2009). Grasping at realist straws. Metascience, 18, 355-390.
Saatsi, J. (2005). Reconsidering the Fresnel-Maxwell theory shift: How the realist can have her cake and eat it too. Studies in History and Philosophy of Science, 36, 509-538.
Solomon, M. (2001). Social empiricism. The MIT Press.
Stanford, K. (2006). Exceeding our grasp: Science, history, and the problem of unconceived alternatives. Oxford University Press.
Strevens, M. (2003). The role of the priority rule in science. The Journal of Philosophy, 100, 55-79.
Strevens, M. (2006). The role of the Matthew effect in science. Studies in History and Philosophy of Science, 37, 159-170.
Suárez, M. (2008). Experimental realism reconsidered: How inference to the most likely cause might be sound. In S. Hartmann, C. Hoefer, & L. Bovens (Eds.), Nancy Cartwright's philosophy of science (pp. 137-163). Routledge.
Vickers, P. (2013).
A confrontation of convergent realism. Philosophy of Science, 80, 189-211.
Whittaker, E. (1958 [1910]). A history of the theories of aether and electricity. Thomas Nelson and Sons Ltd.
Worrall, J. (1989). Structural realism: The best of both worlds? Dialectica, 43, 99-124.