M. L. G. REDHEAD*

SOME PHILOSOPHICAL ASPECTS OF PARTICLE PHYSICS†

Abstract - The paper is concerned with explaining some of the principal theoretical developments in elementary particle physics and discussing the associated methodological problems both in respect of heuristics and appraisal. Particular reference is made to relativistic quantum field theory, renormalization, Feynman diagram techniques, the analytic S-matrix and the Chew-Frautschi bootstrap.
1. Introduction
The object of the present paper is to examine the development of modern elementary particle theory (EPT) from the perspective of the philosopher of science. The first question we may ask is why should philosophers be encouraged to take an interest in this particular branch of physics? After all the subject abounds in technical jargon and involves a very extensive literature. Furthermore theories in particle physics are notorious for the rather elaborate mathematical development required before any 'practical' calculations can be carried out. Nevertheless we may list the following points of recommendation: (1) EPT is a modern 'live' branch of physics, in the sense that it is still actively developing and evolving. Philosophy of science often deals with examples which are no longer of current research interest in science. For this reason physicists tend to regard philosophy of science as somewhat irrelevant, particularly since the character of theoretical physics as currently practised appears to be significantly different from what it was in many of the historical examples. The study of EPT affords an excellent opportunity for examining the truth of this claim.
*Department of History and Philosophy of Science, Chelsea College, University of London, Manresa Road, London SW3 6LX, U.K. †The present work is an expanded version of a paper presented at a meeting of the British Society for the Philosophy of Science at University College in February 1976. A later version was read to the Philosophy of Physics seminar in the Department of History and Philosophy of Science, University of Cambridge, in April 1978. I am grateful to the audiences on both these occasions for their lively comments. I would particularly like to thank Jon Darling, Donald Gillies, Allan Franklin, Mike Bloxham, Heinz Post and Jim Cushing for discussing various aspects of the work. The remaining errors and misapprehensions are entirely my own responsibility. Stud. Hist. Phil. Sci., Vol. 11 (1980), No. 4, pp. 279-304. Pergamon Press Ltd., Printed in Great Britain.
(2) It may be argued that EPT is in a state of Kuhnian crisis.1 If one regards normal science as philosophically rather dull, following Popper, then the study of a crisis situation that is currently ongoing, so to speak, should be of considerable interest to philosophers of science. (3) EPT thus provides an ideal testing ground for theories of how science develops. We may look at methodologies of heuristics from the point of view of correspondence relations between theories. This may be taken in the sense of investigating how successive theories are, as a matter of historical fact, logically related to one another. There is also the normative sense of specifying rules of discovery or heuristic stratagems which may themselves be derived from the descriptive historical analysis. Then there are methodologies of appraisal, how we appraise a theory once it has, for whatever reason and by whatever heuristic process, been proposed. This will typically involve considerations of simplicity and content internal to the theory and also the question of its empirical success. Here the idea of successful novel predictions is often thought to be of particular importance. In some cases novel predictions guide the experimental discovery, for example the Ω⁻ particle,2 while in other cases the novel predictions are verified 'accidentally', for example the positron.3 The importance attached to precise quantitative prediction can also be illustrated, as in the case of the famous Lamb shift anomaly in the spectrum of atomic hydrogen or the closely related anomaly in the magnetic moment of the electron. (4) In a reductionist view of science EPT is to be seen as a foundation for the whole hierarchical structure, which we might briefly schematize as Biology → Chemistry → Physics → EPT. But if the foundation turns out itself to be in a state of confusion this throws doubt on the whole programme. Also the reductionist programme depends essentially on explaining complex phenomena in terms of simple phenomena, but there are indications in the bootstrap philosophy (see section 8 below) that the putative simplest objects may involve for their understanding consideration of complex objects. So in a sense the reductionist programme may be seen as circular; schematically: complex → simple → complex. Another possibility is an open-ended infinite regress in which every elementary particle is itself resolved by more delicate probing into further constituents. At all events the idea of having reached with EPT a stable bedrock foundation for a reductionist programme may well be illusory. 1Cf., e.g., K. Schrader-Frechette, 'Atomism in Crisis: An Analysis of the Current High Energy Paradigm', Philosophy Sci. 44 (1977), 409. Against this view, however, most physicists would probably hold that the recent work on quarks and gauge fields referred to in section 2 below already constitutes a new paradigm. 2The background to the Ω⁻ discovery is given in some detail by P.T. Matthews, The Nuclear Apple (London: Chatto and Windus, 1971), pp. 100-104. 3Cf. N.R. Hanson, The Concept of the Positron: A Philosophical Analysis (Cambridge: Cambridge University Press, 1963).
(5) This leads us to pose the question: what light does EPT throw on the ultimate nature of matter? Is the atomistic programme still a valid one? Or should we subscribe to a bootstrap philosophy or to an Anaximanderian style of fundamentalism? We shall be looking at some of these issues, but before attempting to draw any philosophically interesting conclusions we shall want to outline some of the physical ideas on which to base these conclusions. A definite attempt has been made to make this paper relatively self-contained. Inevitably this means simplifying many of the arguments, but not, it is hoped, at the expense of distorting the essential physics involved in the various theories we shall be discussing. Another point we would like to stress is that we are primarily interested in the way the theories looked to physicists at the time they were being introduced, rather than from the more sophisticated point of view of later theoretical developments. This is of particular importance in assessing the historical-descriptive approach to methodologies of heuristics and appraisal. In order to keep the paper to a reasonable compass there are some topics we shall specifically not discuss. These include: (1) The role of symmetry in EPT. This topic has been treated in some detail in a recent paper by the author4 which may be regarded as complementing in an essential way the present discussion. (2) Basic philosophical problems associated with quantum mechanics, such as the theory of measurement, the interpretation of the uncertainty relations and so on. We shall adopt a naive 'fluctuation' view of the quantal aspects of observables, subject to the constraints of the relevant uncertainty relations. This view is not tenable in any simple sense but it will suffice for our purposes. (3) The ontological status of elementary particles. We shall not discuss for example the question of whether the 'ultimate' view of matter provided by EPT is more 'real' than the familiar objects of sense experience. We shall actually assume a realist approach to the interpretation of physical theories with due reservations arising from the notion of surplus mathematical structure (see section 4 below).
2. Schematic History of Theoretical Developments in Particle Physics
Around 1927 the putative elementary particles were the electron, whose existence had been established thirty years previously by the experiments of J.J. Thomson on cathode rays,5 the proton, whose role as constituent of the 4M.L.G. Redhead, 'Symmetry in Intertheory Relations', Synthese 32 (1975), 77. 5J.J. Thomson, 'Cathode Rays', Phil. Mag. 44 (1897), 293. This is really an oversimplification. There were of course other factors involved in the 'discovery' of the electron. For detailed discussion see for example J.L. Heilbron, 'Thomson, Joseph John', Dictionary of Scientific Biography, Volume XIII, C.C. Gillispie (ed.) (New York: Charles Scribner's Sons, 1976), p. 362.
atomic nucleus had emerged from Moseley's work on X-ray spectra in 1913,6 and finally the photon, proposed by Einstein in 1905 in connection with his explanation of the photoelectric effect.7 The quantum mechanics of Heisenberg (1925)8 and the wave mechanics of Schrödinger (1926)9 were really attempts to provide a theory of the electron (and, less interestingly, of the proton). But our story of characteristically elementary particle theories will start with attempts to incorporate the photon in the new theoretical framework. Since photons can readily be created (i.e. emitted) and annihilated (i.e. absorbed) a theory was required that allowed for a variable number of particles. Now an important feature of most of the many particles later to be discovered was their spontaneous instability, i.e. they disappear (decay) after a very short interval of time (even without interacting with an absorber); hence, clearly, a theory which can deal with the photon is likely to be able to accommodate a description of this essentially evanescent character of elementary particles. The appropriate vehicle for describing the appearance and disappearance of particles turned out to be relativistic quantum field theory (RQFT). The essential framework for this development was provided by Dirac in 1927.10 The subsequent history of theoretical developments can be schematically broken down into three main steps: (1) The successful introduction in 1947 of the technique of renormalization to deal with the problem of high-frequency divergencies in field theory and the closely associated Feynman diagram techniques. (2) The seminal work of Mandelstam in 1958 leading to the so-called analytic S-matrix programme as a rival to RQFT. This led in a rather direct way to the bootstrap approach to understanding the elementary particles. (3) The revival of interest in field theories in the 1960s due to the remarkable properties of a class of such theories, known as gauge theories. This dates particularly from the proposal of a unified gauge theory of weak and electromagnetic interactions by Weinberg11 and Salam,12 was greatly stimulated by the discovery by 't Hooft that such theories are
6H.G.J. Moseley, 'The High-Frequency Spectra of the Elements', Phil. Mag. 26 (1913), 1024 and 27 (1914), 703. 7A. Einstein, 'Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt', Annln Phys. 17 (1905), 132. 8For the early papers on quantum mechanics see B.L. Van der Waerden, Sources of Quantum Mechanics (Amsterdam: North-Holland, 1967). 9See E. Schrödinger, Collected Papers on Wave Mechanics (London: Blackie, 1928). 10P.A.M. Dirac, 'The Quantum Theory of Emission and Absorption of Radiation', Proc. R. Soc. Series A, 114 (1927), 243. The historical and conceptual background to this seminal paper is discussed in M.L.G. Redhead, 'Wave-Particle Duality', Br. J. Phil. Sci. 28 (1977), 65. 11S. Weinberg, 'A Model of Leptons', Phys. Rev. Lett. 19 (1967), 1264. 12A. Salam, 'Weak and Electromagnetic Interactions', Elementary Particle Theory, N. Svartholm (ed.) (Stockholm: Almqvist and Wiksell Förlag AB, 1968), p. 367.
renormalizable,13 and led on to the most recent gauge theory of quark interactions, the so-called chromodynamics.14 A parallel feature of work during the last ten years has been the emergence of the quark model of strongly interacting particles (hadrons) in opposition to the bootstrap approach which had been so prominent in the previous decade. We turn now to consider some of these ideas in more specific detail.
3. Relativistic Quantum Field Theory

RQFT comprises the application of ideas from relativity and quantum mechanics to the dynamics of fields, i.e. systems with infinitely many degrees of freedom. Relativity had demonstrated the equivalence of mass and energy expressed via the familiar Einstein equation E = mc². This suggests that in a relativistic theory we can expect the possibility of creating particles of rest mass m₀ by suitable input of energy with a threshold m₀c²; similarly annihilation of a particle may be possible with release of this amount of energy. Now in a quantum-mechanical theory energy fluctuations ΔE in a system are related to the lifetime Δt of the quantum state by the uncertainty relation ΔE·Δt ≳ ℏ, so creation of a particle is possible provided it annihilates within a time of order ℏ/m₀c² (i.e. after travelling a distance ℏ/m₀c at most since c is the limiting velocity of its motion). Particles produced in this way by spontaneous quantum fluctuations are known as virtual particles.15 An immediate consequence of this situation is that in RQFT every problem becomes a many-body problem which is the basic reason why the mathematical solution of problems posed in the framework of RQFT presents such formidable difficulties. We shall return to this point later. We want now to explain in as simple a way as possible how quantum mechanics can be adapted to deal with a variable number of particles and how this is related to the quantization of fields. The essential ideas are all contained in Dirac's 1927 paper,16 although the explicit spelling out of what was involved was not finally achieved until the work of Fock in 1932.17 13G. 't Hooft, 'Renormalizable Lagrangians for Massive Yang-Mills Fields', Nucl. Phys. B 35 (1971), 167. 14For a detailed discussion of chromodynamics with a comprehensive bibliography see W. Marciano and H. Pagels, 'Quantum Chromodynamics', Physics Reports 36C (1978), 137. 15This is equivalently a picturesque way of describing the intermediate states generated in the mathematical development of perturbation theory. In a sense virtual particles express a violation of conservation of energy to an extent permitted by the time-energy uncertainty relation of quantum mechanics. 'Virtual' particles are thus to be contrasted with 'real' particles for which energy is conserved in their creation and annihilation. 16Loc. cit., note 10. 17V. Fock, 'Konfigurationsraum und Zweite Quantelung', Z. Phys. 75 (1932), 622.
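As a rough orders-of-magnitude illustration of the scales involved (the sketch below is an addition, not part of the original paper; the masses are standard approximate values), the uncertainty relation allows a virtual particle of rest mass m₀ to persist for a time of order ℏ/m₀c² and hence to travel a distance of at most about ℏ/m₀c:

```python
# Illustrative estimate: dt ~ hbar/(m0*c^2), range ~ hbar/(m0*c) for virtual particles.
HBAR_MEV_S = 6.582e-22      # hbar in MeV s (approximate)
HBAR_C_MEV_FM = 197.33      # hbar*c in MeV fm (approximate)

for name, mass_mev in [("electron", 0.511), ("pion", 139.6), ("proton", 938.3)]:
    dt = HBAR_MEV_S / mass_mev        # how long the fluctuation can last, in s
    rng = HBAR_C_MEV_FM / mass_mev    # maximum distance travelled, in fm
    print(f"{name:8s}  dt ~ {dt:.1e} s   range ~ {rng:6.2f} fm")
```

The figure of roughly 1 fm for the pion is, on this picture, about the range usually associated with the strong nuclear force.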
Consider the N-particle Schrödinger equation for a system of identical particles obeying Bose statistics, i.e. described by symmetrized wave functions. We denote the Hilbert space of symmetric states by ℋ_N. An arbitrary state is built up as a superposition of states corresponding to assignments of definite numbers of particles to one-particle states. The effect of an operator Q in ℋ_N (associated with some observable according to the rules of quantum mechanics) can thus be represented in terms of switching particles between particular one-particle states. In the diagram the boxes represent the one-particle states, and we illustrate a state with three particles, one in each of the boxes labelled 1, 3 and 4.

[Figure: a row of numbered boxes representing one-particle states, with one particle in each of boxes 1, 3 and 4, and an arrow indicating the transfer of the particle in box 3 to box 2.]

Suppose the action of Q on this state just has the effect of switching a particle from box 3 to box 2 as indicated by the arrow.18 Now this can be thought of as a two-stage process - a particle comes out of box 3 leaving two particles only, then a particle is put into box 2, so we again have three particles in all. Schematically we introduce an operator η which takes a particle out of a box and another operator ξ which puts it back (in general) in another box. So we factorise the whole operation denoted by Q as Q = ξη. But to describe the action of η we must introduce an 'auxiliary' Hilbert space ℋ_{N-1} to accommodate the result of acting with η on the initial state in ℋ_N. The idea is illustrated below:
18More generally Q might produce a linear combination (superposition) of appropriately switched states. Each term can then be dealt with as described in the text.
[Figure: the two-stage action of Q: η takes the three-particle state in ℋ_N to a two-particle state in the auxiliary space ℋ_{N-1}, and ξ then returns it to a three-particle state in ℋ_N.]

So long as we restrict ourselves to operators like ξη we have done no more than reformulate the N-particle Schrödinger equation. But now the formalism can be extended very easily to describe creation or annihilation of particles. For example we might introduce an operator ξ + η which would admit both processes. Appropriate linear combinations of 'square root' operators like ξ and η are known as quantized fields, since they can be shown to correspond precisely to quantities obtained by regarding the Schrödinger field as 'classical' and subjecting it to a second quantization. All this can be extended to the context of relativistic wave equations and to systems of particles obeying Fermi statistics. But our immediate concern is to discuss the methodological impact of the relationship we have described between the first and second-quantized versions of quantum mechanics.
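To make the two-stage factorisation concrete, here is a small illustrative sketch (an addition to the text, with 'boxes' and amplitudes chosen arbitrarily): it represents a symmetrized Bose state by its occupation numbers, implements the 'take a particle out' operator η and the 'put a particle in' operator ξ, and checks that Q = ξη switches a particle from box 3 to box 2 exactly as in the example above, while an operator such as ξ or η on its own changes the total particle number.

```python
import math

# A state with a definite number of bosons in each 'box' (one-particle state)
# is an occupation tuple; a general state is a dict {occupation: amplitude}.
def annihilate(state, box):
    """The 'take a particle out of box' operator (eta in the text)."""
    out = {}
    for occ, amp in state.items():
        n = occ[box]
        if n > 0:
            new = list(occ); new[box] = n - 1
            out[tuple(new)] = out.get(tuple(new), 0.0) + amp * math.sqrt(n)
    return out

def create(state, box):
    """The 'put a particle into box' operator (xi in the text)."""
    out = {}
    for occ, amp in state.items():
        n = occ[box]
        new = list(occ); new[box] = n + 1
        out[tuple(new)] = out.get(tuple(new), 0.0) + amp * math.sqrt(n + 1)
    return out

# Three particles, one each in boxes 1, 3 and 4 (boxes indexed 0..3 here).
psi = {(1, 0, 1, 1): 1.0}

# Q = xi . eta : switch a particle from box 3 to box 2 in two stages,
# passing through an intermediate two-particle state.
intermediate = annihilate(psi, 2)       # two particles left
switched = create(intermediate, 1)      # back to three particles
print(switched)                         # {(1, 1, 0, 1): 1.0}

# xi or eta alone changes the particle number and so cannot be represented
# within the fixed-N theory: this is the step towards 'quantized fields'.
print(create(psi, 1))                   # four-particle component
print(annihilate(psi, 0))               # two-particle component
```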
4. Surplus Structure, Reformulation and Stretching of Theories
The notion of surplus structure in the mathematical formulation of a theory was introduced in the author's 1975 paper referred to in section one.19 We begin by adopting a realist approach to an arbitrary theory T. We think of T as consisting of axioms and deductive chains flowing from these axioms to produce theorems and finally empirical generalizations to be confronted with experiment in the form of singular observation statements. We now envisage the possibility of embedding T in a mathematical structure M' in the sense that there exists an isomorphism (a one-to-one structure-preserving correspondence) between T and a sub-structure M of M'. Identifying a structure with the collection of all true statements that can be made about the structure in an appropriate language ℒ, we define the relative complement of M in M' as the surplus structure in the mathematical representation of the theory T by means of M' (relative to the language ℒ). In many cases the question of what constitutes the surplus structure in a given mathematical formulation is not at all clear. In classical mechanics we associate the coordinates of a particle with points on the real line. It is at least arguable whether the irrational points belong to the surplus structure. Indeed, is the ordering 'really' dense? Or again, for some coordinate q does q̇ or q̇² belong to surplus structure? The answer depends on one's attitude to the role of quantities such as velocity and energy in mechanics. Are they just auxiliary constructs useful in formulating and solving differential equations when all that we are given in reality is the 'fact' that particles occupy different places at different times? Or is energy just as 'real' as position? Clearly if any affirmative answers are forthcoming to such questions they will smack of 19See note 4.
essentialism. However in other cases the situation is much more clear-cut. When in reformulating the N-particle Schrödinger equation we introduce the Fock space ℋ = ℋ₀ ⊕ ℋ₁ ⊕ ... ⊕ ℋ_N ⊕ ... as the direct sum of ℋ_N and auxiliary spaces associated with different numbers of particles, these additional components are clearly just a mathematical contrivance. But having introduced this auxiliary machinery we may now allow the original theory to be stretched in the sense of according a realistic interpretation to some of this surplus structure20 together with a 'natural' adjustment of the axioms; in the example of Fock space the introduction of interaction operators which will connect component spaces with different numbers of particles, thus allowing one to describe the creation and annihilation of particles. In general such physical effects may be regarded as polarizing phenomena which dictate the direction in which the stretching occurs. In a sense the stretched theory supplies only an ad hoc explanation of the polarizing phenomena. In order to appraise the new theory in terms of predictive power we must look to novel phenomena other than the polarizing ones. In the case of RQFT the mere explanation of particle creation and annihilation is less striking than the successful prediction of quite unexpected effects such as the Lamb shift referred to in the next section.
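A minimal matrix sketch of this 'stretching' (an illustrative addition, using a single mode truncated at four quanta): operators built solely within the fixed-N formalism are block-diagonal in particle number on the Fock space, whereas an interaction term such as ξ + η connects the component spaces with different particle numbers.

```python
import numpy as np

# Truncated one-mode Fock space H0 ⊕ H1 ⊕ ... ⊕ H4 with number basis |0>,...,|4>.
nmax = 4
eta = np.diag(np.sqrt(np.arange(1, nmax + 1)), k=1)   # takes a quantum out
xi = eta.T                                             # puts a quantum in

# An observable of the first-quantized theory (here the number operator xi.eta)
# never connects components of different particle number: on this reading the
# extra components are pure surplus structure.
print(np.round(xi @ eta, 2))

# The 'stretched' theory adds an interaction like (xi + eta), whose off-diagonal
# entries link the N-particle component to the (N±1)-particle ones, i.e. it
# describes the creation and annihilation of quanta.
print(np.round(xi + eta, 2))
```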
5. Renormalization and Feynman Diagrams
Dirac's radiation theory was in a sense still-born. Ehrenfest was apparently the first21 to point out that divergencies (infinite results) would arise if the theory was used to calculate radiative reaction effects in which virtual photons are reabsorbed by the same particle as emits them. Such infinite reaction effects occur already in the classical theory with point electrons. In 1930 Waller22 worked out the self-energy of the electron in accordance with the Dirac theory and found a quadratic divergence, so the situation is actually more serious in quantum theory than in classical theory where the self-energy 20We may contrast the relationship between the first and second-quantized version of the Schrödinger equation in which there is an addition to ontology with the more revolutionary metaphysical paradigm shifts in which the ontology is changed, not just added to. See J.W.N. Watkins, 'Metaphysics and the Advancement of Science', Br. J. Phil. Sci. 26 (1975), 91, for further discussion of this point. The phrase metaphysical paradigm shift is due to M. Masterman, 'The Nature of a Paradigm', Criticism and the Growth of Knowledge, I. Lakatos and A. Musgrave (eds.) (Cambridge: Cambridge University Press, 1970), p. 59. The heuristic role of mathematics in physics is also referred to by E.G. Zahar, 'Why did Einstein's Programme Supersede Lorentz's?', Br. J. Phil. Sci. 24 (1973), 95, p. 109 ff. 21See W. Pauli, 'Paul Ehrenfest', Naturwissenschaften 21 (1933), 841. 22I. Waller, 'Bemerkung über die Rolle der Eigenenergie des Elektrons in der Quantentheorie der Strahlung', Z. Phys. 62 (1930), 673. Cf. also J.R. Oppenheimer, 'Note on the Theory of the Interaction of Field and Matter', Phys. Rev. 35 (1930), 461.
is linearly divergent. Indeed if one takes RQFT seriously and calculates any quantity beyond the first non-vanishing order of perturbation theory one obtains infinite results. The situation is rather like set theory and the paradoxes. One can use naive set theory whilst knowing that the whole apparatus is actually inconsistent. The patching up operation of say Russell's theory of types can be compared with the renormalization programme described below for dealing with the divergencies in RQFT. The extra complication introduced by quantum theory is the effect of forced oscillations of the electron under the influence of the vacuum fluctuations of the electromagnetic field. This contributes to the self-energy of the electron over and above the classical effect arising from the interaction of the electron with its own electric and magnetic field. The self-energy problem is ameliorated but not eliminated in hole theory,23 the self-energy divergence being only logarithmic.24 The effect of hole theory is to 'smear' the charge distribution of the electron over a distance of the order of the Compton wavelength (ℏ/mc) due to the effect of Pauli statistical repulsion on the virtual electron-positron pairs produced by vacuum fluctuations in the charge and current density of the electron field. The vacuum fluctuations of the electromagnetic field now interact with this extended charge distribution. The technique of renormalization for dealing with these infinities consists in recognizing the possibility of absorbing the infinite quantities as contributions to the mass and charge of the electron, the resulting 'renormalized' values being equated with the experimental mass and charge. To see how this works in classical theory consider the equation of motion for an electron of radius ε₀ and mass m under the action of its own field. Lorentz25 showed one could write m·ẍ = K⁽⁰⁾ + K⁽¹⁾ + . . . , with K⁽⁰⁾ = -β·e²/(ε₀c²)·ẍ, K⁽¹⁾ = (2/3)·e²/c³·x⃛, etc., where e is the electron's charge and β is a numerical constant approximately equal to unity, so m'·ẍ = K⁽¹⁾ + O(ε₀) where m' = m + β·e²/(ε₀c²).
23By hole theory we refer to Dirac's well-known treatment of the negative energy states of the electron. 24V. Weisskopf, 'Über die Selbstenergie des Elektrons', Z. Phys. 89 (1934), 27, 90 (1934), 817. 25H.A. Lorentz, The Theory of Electrons and its Application to the Phenomena of Light and Radiant Heat (Leipzig: Teubner, 1909), pp. 251-254. We work in a nonrelativistic approximation.
If we now identify m' with the experimental mass and then let ε₀ → 0 we obtain the finite equation m'·ẍ = (2/3)·e²/c³·x⃛. Of course if we keep ε₀ small but non-vanishing there will be small correction terms of order ε₀ on the right-hand side of this equation. But we have succeeded in identifying a finite theory even for ε₀ = 0. The idea of charge renormalization was already introduced by Dirac in 193326 who showed how the polarization of the vacuum by the electron's field produced an infinite 'unobservable' contribution to the electron's charge. A residual finite effect modifying the Coulomb law of interaction between two charges was evaluated by Uehling in 1935.27 The technique of mass renormalization was used by Bethe in 194728 to provide an explicit calculation for the Lamb shift anomaly in the hydrogen spectrum referred to in more detail below. The problem at this point in time was to show that an unambiguous subtraction procedure could be defined in which infinite contributions from all orders of perturbation theory could be consistently absorbed into the renormalization of the mass and charge of the electron, the residual finite quantities being used to calculate and predict observable effects. Of course, in general the subtraction of infinite quantities is entirely ambiguous. In order to obtain a unique result agreeing in the limit with what we would expect from a 'finite' theory it is necessary to formulate the whole subtraction procedure in a manifestly Lorentz (and gauge) invariant manner. To see how this helps in the subtraction problem consider as a simple example evaluating I = ∫ₐᵇ x dx = ½(b² - a²). As a, b → ∞ this is quite ambiguous, e.g. with a = b, I → 0 and with a = b - 1/b, I → 1, and so on. I is only conditionally convergent, but for a 'finite' theory the integrand would behave properly at infinity and the value of I would then be zero, assuming the symmetry properties of the integrand are the same in the two theories, in particular the behaviour under the inversion x → -x (compare for example ∫₋ₐᵇ x·e^(-x²) dx = ½[e^(-a²) - e^(-b²)] → 0 as a, b → ∞). But we can impose the correct value I = 0 by specifying that the region of integration is invariant with respect to the symmetry operation x → -x which clearly enforces the value I = 0. This is the kind of argument used to resolve ambiguities in subtracting infinite quantities in the renormalization programme. The requisite manifestly covariant formulation of quantum electrodynamics (QED) was first provided independently by Tomonaga29 and Schwinger.30 26P.A.M. Dirac, 'Théorie du Positron', Septième Conseil de Physique Solvay - Structure et Propriétés des Noyaux Atomiques (Paris: Gauthier-Villars, 1934), p. 203. 27E.A. Uehling, 'Polarization Effects in the Positron Theory', Phys. Rev. 48 (1935), 55. 28H.A. Bethe, 'The Electromagnetic Shift of Energy Levels', Phys. Rev. 72 (1947), 339. 29S. Tomonaga, 'On a Relativistically Invariant Formulation of the Quantum Theory of Wave Fields', Prog. Theor. Phys. Osaka, 1 (1946), 27, 42.
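As a small numerical illustration of the subtraction ambiguity described above (an addition to the text; the numbers are arbitrary), the divergent toy integral depends entirely on how its limits are pushed to infinity, whereas a 'finite' integrand, or a cutoff region constrained by the x → -x symmetry, gives an unambiguous answer:

```python
import math

# I = ∫_a^b x dx = (b² - a²)/2: the answer depends on how a, b are sent to infinity.
b = 1.0e3
for label, a in [("a = b      ", b), ("a = b - 1/b", b - 1.0 / b)]:
    print(f"{label}  I = {(b * b - a * a) / 2.0:+.6f}")

# A 'finite' theory: ∫_{-a}^{b} x·exp(-x²) dx = (exp(-a²) - exp(-b²))/2 tends to 0
# whatever the (large) limits, because the integrand dies off at infinity.
for a, b in [(3.0, 5.0), (6.0, 9.0)]:
    print("damped integrand:", 0.5 * (math.exp(-a * a) - math.exp(-b * b)))

# Demanding that the cutoff region respect the symmetry x → -x fixes the
# divergent case to the same value, I = 0, for every symmetric cutoff L.
for L in (10.0, 1.0e3, 1.0e6):
    print("symmetric cutoff:", (L**2 - (-L)**2) / 2.0)
```

This crude symmetric cutoff plays the role that a manifestly Lorentz and gauge invariant subtraction plays in the renormalization programme proper.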
Their method was soon superseded by the formulation of Feynman with his space-time approach to QED.31 Once again an important theoretical advance depended on a suitable reformulation of an already existing theory. Feynman contrasts his approach with the traditional Hamiltonian one which considers a scattering process for example in terms of successive time slices of the total space-time history of a particle.32 Consider a process of pair creation and annihilation in a fixed potential field described in second-order perturbation theory by the conventional formalism.
[Figure: space-time diagram (time vertical, space horizontal) of pair creation and annihilation in a fixed potential: an electron line enters, an electron-positron pair is created at X₂, and one member of the pair annihilates the incoming electron at X₁; three time slices t_A, t_B and t_C are marked.]
Take three time slices at t_A, t_B and t_C as shown in the illustration. At time t_A there is one particle, an electron, at time t_B there are three particles (two electrons and a positron), at time t_C a single particle again. This is described by saying that a pair of particles is created at X₂ and one member of the pair thus created annihilates the incoming particle at X₁. Feynman draws the diagram thus
[Figure: the same process drawn as a single Feynman diagram: one continuous electron line zigzagging through X₁ and X₂ in space-time, the segment between X₁ and X₂ running backwards in time; axes: time (vertical) and space (horizontal).]

30J. Schwinger, 'Quantum Electrodynamics. I. A Covariant Formulation', Phys. Rev. 74 (1948), 1439. 31R.P. Feynman, 'Theory of Positrons', Phys. Rev. 76 (1949), 749, and R.P. Feynman, 'Space-Time Approach to Quantum Electrodynamics', Phys. Rev. 76 (1949), 769. These papers were derived from his earlier reformulation of nonrelativistic quantum mechanics in terms of path integrals; see R.P. Feynman, 'Space-Time Approach to Non-Relativistic Quantum Mechanics', Rev. Mod. Phys. 20 (1948), 367. This work was anticipated essentially in P.A.M. Dirac, 'The Lagrangian in Quantum Mechanics', Phys. Z. Sowj. Un. 3 (1933), 64. The equivalence of Feynman's methods and the Schwinger-Tomonaga approach was demonstrated by F.J. Dyson, 'The Radiation Theories of Tomonaga, Schwinger, and Feynman', Phys. Rev. 75 (1949), 486. See also R.P. Feynman, 'Mathematical Formulation of the Quantum Theory of Electromagnetic Interaction', Phys. Rev. 80 (1950), 440. 32See p. 749 of the first of Feynman's 1949 papers referred to in note 31.
A single electron moves along a continuous trajectory in space-time. Between X₁ and X₂ it propagates backwards in time with negative energy. This 'description' is not to be taken seriously. Negative energies belong to the realm of surplus structure referred to in the preceding section. Indeed there is no new physics in the Feynman diagrams - they are exactly equivalent to the older less picturesque formalism.33 However, there are real advantages in Feynman's method. To obtain the total effect of the scattering potential Feynman integrates over all values of X₁ and X₂, thus placing processes like

[Figure: Feynman diagram of pair creation and annihilation]

and

[Figure: Feynman diagram of double scattering]
on an equal footing. The four-dimensional character of the integrations demonstrates the manifest Lorentz covariance of the formulation. This enables one to deal with the renormalization programme. Furthermore, as shown in the above illustrations, processes such as pair creation and double scattering which are apparently unrelated in the old formulation are now combined together in a single calculation. So there is an enormous practical contribution towards closing the computation gap which separates the underlying theory from detailed calculations of specific theoretical predictions for comparison with experiment. 33Misunderstanding of the role of the Feynman diagram is widespread among philosophers who believe it has something to do with backward causation. Cf. H. Putnam, 'It Ain't Necessarily So', J. Phil. 59 (1962), 658, and J.C. Graves and J.F. Roper, 'Measuring Measuring Rods', Philosophy Sci. 32 (1965), 39. More reasoned comment is to be found in J. Earman, 'On Going Backwards in Time', Philosophy Sci. 34 (1967), 211, and R. Weingard, 'On Travelling Backwards in Time', Synthese 24 (1972), 117.
As a direct result of Feynman's approach Dyson34 was able to handle the very complicated proof of the renormalizability of QED to all orders of perturbation theory. Having achieved the theoretical advantage of a theory which gives finite results35 the practical advantage of carrying out calculations with the diagram techniques became rapidly apparent. In 1952 Brown and Feynman obtained the fourth-order correction to the scattering of a photon by an electron (Compton scattering)36 while in 1953 Redhead solved the same problem for the scattering of an electron by an electron (Møller scattering) and of a positron by an electron (Bhabha scattering).37 But the most spectacular success of the renormalization and diagram techniques was in the detailed calculation of the Lamb shift anomaly in the atomic spectrum of hydrogen and the anomalous magnetic moment of the electron. Over the last thirty years a continuing story of refinements in experimental techniques and increasingly sophisticated theoretical calculations38 has culminated in remarkable quantitative agreement between theory and experiment extending to seven significant figures in the case of the magnetic moment anomaly.39 The significance of these results from the point of view of a Bayesian logic of appraisal has been considered elsewhere by the present author.40 34See F.J. Dyson, 'The S Matrix in Quantum Electrodynamics', Phys. Rev. 75 (1949), 1736. Gaps in the proof were filled in in different ways by J.C. Ward, 'Renormalization Theory of the Interactions of Nucleons, Mesons, and Photons', Phys. Rev. 84 (1951), 897, and A. Salam, 'Overlapping Divergencies and the S-Matrix', Phys. Rev. 82 (1951), 217. 35Of course Dyson only demonstrated that individual terms in the perturbation expansion in powers of the fine structure constant e²/ℏc were finite. The question of whether the resulting series was convergent is quite a separate issue. Calculations with simplified models by C.A. Hurst, 'The Enumeration of Graphs in the Feynman-Dyson Technique', Proc. R. Soc. Series A, 214 (1952), 44, and by W. Thirring, 'On the Divergence of Perturbation Theory for Quantized Fields', Helv. Phys. Acta 26 (1953), 33, suggested that the renormalized series is at best asymptotic. See also F.J. Dyson, 'Divergence of Perturbation Theory in Quantum Electrodynamics', Phys. Rev. 85 (1952), 631, and R.J. Riddell, 'The Number of Feynman Diagrams', Phys. Rev. 91 (1953), 1243. Perhaps the main argument in favour of this view is the excellent agreement between experimental results and theoretical calculation referred to below. A related question is whether the infinite 'renormalization constants' may actually be finite and only appear infinite due to illegitimate mathematical expansions. Recent work by Glimm and Jaffe with exactly soluble models suggests this is not the case. See the comments by Salam in A. Salam, 'Progress in Renormalization Theory since 1949', The Physicist's Conception of Nature, J. Mehra (ed.) (Dordrecht: D. Reidel, 1973), p. 430. 36L.M. Brown and R.P. Feynman, 'Radiative Corrections to Compton Scattering', Phys. Rev. 85 (1952), 231. 37M.L.G. Redhead, 'Radiative Corrections to the Scattering of Electrons and Positrons by Electrons', Proc. R. Soc. Series A, 220 (1953), 219. The results were independently confirmed by R.V. Polovin, 'Radiative Corrections to the Scattering of Electrons by Electrons and Positrons', J. Exp. Theor. Phys. (USSR) 31 (1956), 449; trans. Soviet Physics - JETP 4 (1957), 385. 38A detailed bibliography is given in B.E. Lautrup, A.
Peterman and E. de Rafael, 'Recent Developments in the Comparison between Theory and Experiments in Quantum Electrodynamics', Phys. Reports 3C (1972), 193. 39See R.S. Van Dyck Jr, P.B. Schwinberg and H.G. Dehmelt, 'Precise Measurements of Axial, Magnetron, Cyclotron, and Spin-Cyclotron-Beat Frequencies on an Isolated 1-meV Electron', Phys. Rev. Lett. 38 (1977), 310. 40M.L.G. Redhead, 'Ad Hocness and the Appraisal of Theories', Br. J. Phil. Sci. 29 (1978), 355.
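As a rough indication of the kind of agreement being referred to (this numerical sketch is an addition; the series coefficients are quoted from the standard QED literature, not from this paper, and the experimental figure is an approximate representative value), the electron's magnetic moment anomaly is already given to three figures by Schwinger's leading term α/2π, and the next two terms bring theory and experiment together at the level mentioned in the text:

```python
import math

alpha = 1 / 137.035999          # fine structure constant (approximate)
x = alpha / math.pi

# QED perturbation series for the anomaly a = (g-2)/2; coefficients assumed
# from the general literature, not taken from the paper itself.
a_leading = 0.5 * x
a_series = 0.5 * x - 0.328478965 * x**2 + 1.181241456 * x**3
a_experiment = 0.00115965218    # representative measured value (approximate)

print(f"alpha/2pi only        : {a_leading:.9f}")
print(f"through (alpha/pi)^3  : {a_series:.9f}")
print(f"experiment (approx.)  : {a_experiment:.9f}")
```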
6. Models and Approximations
The success of the Feynman diagram techniques in overcoming the computation gap in QED is due to the applicability of perturbation theory which is related to the small value of the fine structure constant (e²/ℏc ≈ 1/137), i.e. to the weakness of the electromagnetic interaction. In the case of hadron41 physics however the particles interact via strong forces which invalidate the perturbation approach. In the early 1950s considerable effort was devoted to developing new schemes of approximation to deal with this situation. For example there was the Tamm-Dancoff42 scheme in which a limited number of virtual particles was considered in an arbitrary number of states and the contrasting Tomonaga43 approximation involving an arbitrary number of virtual particles in a limited number of possible states. The present author has considered elsewhere44 the connection between approximations and theoretical models.45 For our purposes a theoretical model is an inadequate theory in the sense of making simplifying assumptions which are known to misrepresent some aspects of the phenomena which the theory is being used to explain.46 We may hope to obtain exact solutions for this approximate theory which is another way of expressing approximate solutions to the exact theory. While every model is equivalently an approximation, the converse is not true. What is true is that an approximation scheme can always be thought of as the exact solution to an approximating theory but the latter is not to be regarded as a model insofar as it lacks both mathematical simplification in its formulation, and also physical interpretation in respect of what features of the exact theory are being ignored by the approximating theory. In this sense perturbation theory is not equivalent to a model while the Tamm-Dancoff or Tomonaga approximations referred to above are equivalent to models in our sense. An even more direct sense of model in hadron physics is to introduce simplified field equations and interaction terms which allow the model theory
41Hadron is a generic term for all particles such as pions and nucleons which are subject to strong interactions. 42See I. Tamm, 'Relativistic Interaction of Elementary Particles', J. Phys. (USSR) 9 (1945), 449, and S.M. Dancoff, 'Non-Adiabatic Meson Theory of Nuclear Forces', Phys. Rev. 78 (1950), 382. 43S. Tomonaga, 'On the Effect of the Field Reactions on the Interaction of Mesotrons and Nuclear Particles. III', Prog. Theor. Phys. Osaka 2 (1947), 6. 44M.L.G. Redhead, 'Models in Physics', Br. J. Phil. Sci. 31 (1980), 145. 45The term is due to M. Black, Models and Metaphors: Studies in Language and Philosophy (Ithaca, New York: Cornell University Press, 1962), p. 226. See also the discussion by P. Achinstein, Concepts of Science: A Philosophical Analysis (Baltimore: The Johns Hopkins Press, 1968), pp. 212-218. 46Redhead, loc. cit., note 44, distinguishes such impoverishment models from enrichment models which complete a theoretical framework, that is not fully specified, by filling in the missing detail.
to be solved exactly. These exact model solutions can then be compared with the results obtained by applying approximation schemes of various sorts to the model theory and hence one can explore hopefully the shortcomings of such schemes when applied to the exact theory. A good example of such a device was the Lee model of the meson-nucleon system.47 Of course such methods of exploring or probing exact theories may prove entirely misleading - at best they can suggest possible ways in which the exact theory might behave. A fundamental difficulty always arises in using an approximate solution in connection with an empirical test of a theory. If experiment disagrees with an approximate prediction we do not know whether to direct the modus tollens at the underlying theory or at the approximation scheme. This is the basic methodological difficulty with the empirical appraisal of theories in hadron physics. For more detailed discussion of these issues reference may be made to the present author's paper referred to above.48 The first model in hadron physics to achieve some success in discussing simple systems such as low-energy pion-nucleon scattering was the Chew-Low-Wick model.49 This arose from an approximation scheme which involved expressing the scattering amplitude for a real (renormalized) physical process in terms of scattering amplitudes for all real processes which could connect with both the initial and final states. It was soon demonstrated50 that this model was an example of a dispersion relation and was connected with analytic properties of the scattering amplitude. This topic will be taken up in the next section.

7. The Analytic S-Matrix

The scattering matrix or S-matrix was introduced as the basic object of study in EPT by Heisenberg in 1943.51 The S-matrix is simply a representation of the operator which transforms an initial or 'incoming' asymptotic state into the associated final or 'outgoing' asymptotic state in an arbitrary scattering phenomenon. The S-matrix comprises two sorts of information: (a) Scattering or reaction cross-sections are calculated directly from the transition amplitudes which are simply proportional to the appropriate 47T.D. Lee, 'Some Special Examples in Renormalizable Field Theory', Phys. Rev. 95 (1954), 1329. 48See note 44. 49See G.C. Wick, 'Introduction to Some Recent Work in Meson Theory', Rev. Mod. Phys. 27 (1955), 339. 50R. Oehme, 'Dispersion Relations for Pion-Nucleon Scattering. I. The Spin-Flip Amplitude', Phys. Rev. 100 (1955), 1503, and 'Dispersion Relations for Pion-Nucleon Scattering: No-Spin-Flip Amplitude', Phys. Rev. 102 (1956), 1174. 51See W. Heisenberg, 'Die "beobachtbaren Grössen" in der Theorie der Elementarteilchen', Z. Phys. 120 (1943), 513, and 'Die beobachtbaren Grössen in der Theorie der Elementarteilchen. II', Z. Phys. 120 (1943), 673. The notion was actually first used in the context of nuclear physics by Wheeler. See J.A. Wheeler, 'On the Mathematical Description of Light Nuclei by the Method of Resonating Group Structure', Phys. Rev. 52 (1937), 1107.
S-matrix element. (b) Bound states and resonances (unstable particle states) are related to singularities in the S-matrix at 'unphysical' values of its arguments.52 Loosely speaking the idea here is that anomalies or 'bumps' in scattering cross-sections are derived from the formation of 'complexes' composed of the incoming particle with associated particles. Such complexes are classified as bound states or resonances according to the location of the singularities. In his original work Heisenberg wanted to identify an object for theoretical
study from which the observable quantities characteristic of elementary particles such as scattering cross-sections, lifetimes and energies of resonance states, and energies of bound states could be calculated, and which could accommodate extensions of quantum mechanics involving the introduction of a fundamental length in order to deal with divergence difficulties. The advent of the renormalization programme in a sense removed the necessity for the second objective, but the idea of using the S-matrix rather than the Hamiltonian as the fundamental object of study was continued in the work of Feynman which was discussed in section 5 above. Of course if we have a Hamiltonian formulation a corresponding S-matrix can be calculated - this is essentially what is achieved in Dyson’s approach to the Feynman theory.53 Having calculated an S-matrix we can look at its singularities, but could we not reverse this procedure and by specifying the singularities attempt to calculate the S-matrix? This is the basic idea we shall be discussing in the rest of this section. By the singularities in the S-matrix we mean singularities in the sense of complex variable function theory, i.e. the arguments of the S-matrix are to be regarded as complex variables and we are going to look for singularities in the S-matrix regarded as a function of these complex arguments. Since some of these singularities will be branch points, the whole theory is complicated by having to work with many-sheeted Riemann surfaces to deal in the usual way with the many-valuedness of the resulting functions; in particular, care must be exercised as to which sheet of the Riemann surface various singularities occur on. In what follows
we shall largely ignore these aspects in setting out
the main lines of the argument. For historical reasons the approach we are describing is often referred to as the dispersion relations approach, since analyticity arguments were first used by Kramers54 in formulating the relationship between the real and imaginary 52The arguments are those parameters on which the S-matrix elements depend. They comprise quantities related to the energies and momentum transfers which specify the kinematics of the collision. 53See the paper by Dyson referred to in note 31. 54H.A. Kramers, 'La Diffusion de la Lumière par les Atomes', Estratto dagli Atti del Congresso Internazionale dei Fisici, Como (Bologna: Nicola Zanichelli, 1927), Vol. 2, p. 545. Kramers' result was independently obtained by R. de L. Kronig, 'On the Theory of Dispersion of X-Rays', J. Opt. Soc. Am. 12 (1926), 547, who did not however note the connection with analyticity.
parts of the complex refractive index in the theory of optical dispersion.55 It was Kronig in 1946 who first suggested applying similar methods in S-matrix theory.56 The dispersion relations approach involves in summary the following sequence of ideas: (a) We consider the S-matrix as a function of Lorentz-invariant energy and momentum transfer variables which we now allow to assume complex values. (b) We assume that the S-matrix is an analytic function of its arguments except for certain singularities we shall specify in a moment.57 (c) We use Cauchy's theorem to relate scattering amplitudes to the singularity structure, i.e. to the location of the singularities and the behaviour of the S-matrix in the neighbourhood of the singularities comprising residues at poles and discontinuities across branch cuts. (d) We use certain physical principles referred to as unitarity and crossing symmetry to specify part of the singularity structure. Unitarity expresses essentially the conservation of probability while crossing symmetry identifies the scattering amplitudes for certain related processes as being connected by analytic continuation. (e) We assume there are no other singularities than those demanded by unitarity and crossing. This is known as the Mandelstam conjecture.58 We will consider its justification in a moment. (f) We now have a feed-back situation which we can represent schematically thus

[Schematic diagram: singularity structure → (Cauchy's theorem) → scattering amplitudes → (unitarity and crossing) → singularity structure, and so round again.]
The possibility presents itself that this scheme would allow a self-consistent determination of scattering amplitudes and singularities, i.e. a complete determination of the S-matrix. In fact there are three possibilities concerning the solution of the dispersion relation equations: 55For an excellent survey of the whole subject of dispersion relations see J. Hamilton, 'Dispersion Relations for Elementary Particles', Prog. Nucl. Phys. 8 (1960), 143. 56R. de L. Kronig, 'A Supplementary Condition in Heisenberg's Theory of Elementary Particles', Physica 12 (1946), 543. 57By Liouville's theorem a bounded analytic function with no singularities is uninterestingly a constant. 58Strictly this applies to the singularities on the so-called physical sheet of the appropriate Riemann surface. We shall ignore this refinement in our simplified presentation.
(1) There are many solutions. The equations are just constraints on the S-matrix but do not constrain it uniquely. (2) There is one unique solution. (3) There are no solutions - the whole system is inconsistent. If we disregard the third possibility then at an early stage it was recognized that (1) is almost certainly the case. This was due to the fact that approximation schemes related to the dispersion relations, comprising the Chew-Low-Wick equations, which we noted at the end of section 6, admitted of a certain ambiguity in solution known after its discoverers as the CDD59 ambiguity. Methods for eliminating the CDD ambiguity led essentially to the idea of the Chew-Frautschi bootstrap to be described in section 8 below. For the moment we return to discussing the status of the Mandelstam conjecture on which clearly the whole programme of the dispersion relations approach depends. Now in the case of optical dispersion relations the analyticity properties of the scattering amplitude could be related to causality conditions imposed on the scattering process, essentially requiring that no scattered wave should be emitted prior to the arrival of the incident wave.60 A great deal of work has been done in trying to derive analyticity properties of the S-matrix from microcausality conditions imposed on an underlying field theory expressed via the commutativity of operators representing local observable quantities at space-like separations. Only rather limited results have been obtained.61 A quite different approach was initiated by Mandelstam in 1958.62 Instead of trying to investigate the analytic properties of the full S-matrix Mandelstam has considered the properties of perturbation approximations to the S-matrix. For a two-body scattering amplitude he investigated the analytic properties of appropriate fourth-order Feynman diagrams and showed that a particular representation (a double dispersion relation in the relevant arguments) was possible. Mandelstam now introduced the assumption that this representation was appropriate also for the full unapproximated amplitude. This so-called Mandelstam representation is really a specialisation of what we referred to above as the Mandelstam conjecture since it includes both the general assumption that the singularities in the full amplitude are only those demanded by unitarity and crossing together with additional assumptions as to what these latter singularities would be like. 59L. Castillejo, R.H. Dalitz and F.J. Dyson, 'Low's Scattering Equation for the Charged and Neutral Scalar Theories', Phys. Rev. 101 (1956), 453. 60Cf. in particular J.S. Toll, 'Causality and the Dispersion Relation: Logical Foundations', Phys. Rev. 104 (1956), 1760, and earlier references quoted therein. 61For a discussion of rigorous analyticity results derived from axiomatic quantum field theory the review article by Sommer may be usefully consulted. See G. Sommer, 'Present State of Rigorous Analytic Properties of Scattering Amplitudes', Fortschr. Phys. 18 (1970), 557. 62See S. Mandelstam, 'Determination of the Pion-Nucleon Scattering Amplitude from Dispersion Relations. General Theory', Phys. Rev. 112 (1958), 1344.
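A minimal numerical illustration of the kind of relation at stake (an addition to the text, using an invented model amplitude rather than any physical one): for a function whose only singularities lie in the lower half of the complex frequency plane, a damped-oscillator form here, the real part can be reconstructed from the imaginary part by a principal-value Cauchy integral, in the manner of the optical dispersion relations mentioned above.

```python
import numpy as np

# Illustrative 'amplitude' analytic in the upper half plane (poles at Im w < 0).
def f(w, w0=1.0, gamma=0.3):
    return 1.0 / (w0**2 - w**2 - 1j * gamma * w)

W, N = 60.0, 200_000                      # cutoff and number of integration nodes
dw = 2.0 * W / N
nodes = -W + (np.arange(N) + 0.5) * dw    # midpoint grid, never lands on the pole
im_f = f(nodes).imag

def re_from_im(w):
    """Dispersion relation Re f(w) = (1/pi) P∫ Im f(w')/(w' - w) dw',
    the principal value handled by subtracting the singular part."""
    g = f(w).imag
    integral = np.sum((im_f - g) / (nodes - w)) * dw
    integral += g * np.log((W - w) / (W + w))
    return integral / np.pi

for w in (0.25, 0.8, 1.5, 3.0):
    print(f"w = {w:4.2f}   dispersion: {re_from_im(w):+.5f}   exact: {f(w).real:+.5f}")
```

The same logic run in reverse, from assumed singularity structure to amplitudes, is what the feed-back scheme of the previous section envisages.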
In fact the Mandelstam representation as such, while true in special cases such as non-relativistic Schrödinger scattering for a wide class of potentials,63 is not generally true even of Feynman diagram approximations.64 But the less restrictive conjecture remained as a guideline for all the further developments. There are two points of view that can be adopted at this stage. Mandelstam himself regarded the conjecture as referring to the mathematical properties of field theories,65 but a number of other people, Chew, Landau, Frautschi and Stapp in particular,66 saw the conjecture as the basis for an analyticity postulate which would lead to a new theoretical programme which would be a rival to and indeed quite independent of any underlying field-theoretic approach. The question of whether field theories satisfied the analyticity postulate was no longer of interest from the point of view of the new analytic S-matrix programme. In the course of developing the new axiomatic S-matrix theory a further divergence of view concerning the status of the analyticity postulate emerged. For Stapp, for example, there remained the hope of basing the postulate on some suitably formulated causality condition, although not one based on field-theoretic considerations. For a review of progress in this direction the recent work by Iagolnitzer may be consulted.67 However no completely general link between analyticity and 'macrocausality' conditions has in fact been established. By contrast Chew saw no need to relate the analyticity postulate to an underlying physical principle. In 1962 he wrote
The fundamental principle . . . is of maximum smoothness . . . and a natural mathematical definition of smoothness lies in the concept of analyticity.68

The postulate is concerned essentially with what we termed in section 4 surplus structure but Chew does not regard this as detrimental. In his 1966 book he remarks

In a deep sense physics is based on analytic functions. It is pointless to seek a logical origin for this circumstance. Physical theory cannot be based on logic . . . This simply is the scheme that works.69

63See R. Blankenbecler, M.L. Goldberger, N.N. Khuri and S.B. Treiman, 'Mandelstam Representation for Potential Scattering', Ann. Phys. 10 (1960), 62. 64See S. Mandelstam, 'Analytic Properties of Transition Amplitudes in Perturbation Theory', Phys. Rev. 115 (1959), 1741. 65Cf. S. Mandelstam, 'Dispersion Relations in Strong-Coupling Physics', Rep. Prog. Phys. 25 (1962), 99. 66The original suggestion of using dispersion relations to replace field theory appears to be due to Gell-Mann at the Rochester Conference 1956. 67D. Iagolnitzer, The S Matrix (Amsterdam: North-Holland, 1978). 68G.F. Chew, 'S-Matrix Theory of Strong Interactions without Elementary Particles', Rev. Mod. Phys. 34 (1962), 394. I have rearranged the order of the sections in the quotation so as to make the sense clearer. 69G.F. Chew, The Analytic S Matrix (New York: Benjamin, 1966), p. 2.
From the methodological point of view we may understand the original heuristic stratagem employed by Mandelstam in the following terms. An approximation A (perturbation series) is applied to the original theory T (field theory). A property P (analyticity) is recognized for the approximate theory which we denote symbolically as T + A. It is not known whether P is also a property of T. But now we start afresh by incorporating P as an axiom in a new theory T' (S-matrix theory) which may or may not be equivalent to the original T. Although it may be conjectured à la Mandelstam that P is a property both of T + A and of T this conjecture may be left on one side when with Chew and his collaborators the new theoretical programme of S-matrix theory is embarked on.
8. The Chew-Frautschi Bootstrap
In the preceding section we referred briefly to the CDD ambiguity in solving the dispersion relation equations (in a particular approximation). What this amounts to is that there remains an arbitrariness in determining part of the singularity structure of the S-matrix, viz. the poles associated with particle states of low spin quantum number.70 In 1961 Chew and Frautschi suggested eliminating this remaining arbitrariness by extending the analyticity postulate to include angular as well as linear momentum.71 The details need not concern us. The objective was to obtain an S-matrix theory in which all the hadrons would have their masses, coupling constants and internal quantum numbers determined by a unique solution of self-consistent equations.72 Each hadronic particle would sustain its own and every other hadron's existence in a mutually supportive bootstrap73 situation in which every particle would play an equally fundamental role. The resulting approach to the hadrons is usually referred to as nuclear democracy, as opposed to the more familiar idea of some of the particles being truly fundamental or elementary, the rest of the particles being regarded in some sense as composite entities built up from the fundamental ones. The essential contrast between the fundamentalist and the bootstrap 70The CDD ambiguity is restricted to particle poles of spin ≤ 1 on account of the so-called Froissart bound on total scattering cross-sections. See for example A.D. Martin and T.D. Spearman, Elementary Particle Theory (Amsterdam: North-Holland, 1970), pp. 413-414. 71G.F. Chew and S.C. Frautschi, 'Principle of Equivalence for All Strongly Interacting Particles within the S-Matrix Framework', Phys. Rev. Lett. 7 (1961), 394. 72An undetermined dimensional parameter would still be required to set the scale of the hadron masses. An additional principle of 'maximal strength' was also invoked by G.F. Chew and S.C. Frautschi, 'Regge Trajectories and the Principle of Maximum Strength for Strong Interactions', Phys. Rev. Lett. 8 (1962), 41, to fix the overall strength of the interactions. This may not in fact be an independent assumption. 73The term 'bootstrap' is a reference to the colloquial phrase 'to pull oneself up by one's own bootstraps'.
The essential contrast between the fundamentalist and the bootstrap approach is whether or not the theory should itself determine all the basic parameters entering into the theory. In the fundamentalist approach the properties of the elementary particles are assigned arbitrarily so far as the theory is concerned; the object of the theory is then to determine the properties of the composite particles. In the bootstrap approach the theory is designed to determine the properties of all the particles on a uniform basis. We can represent the Chew-Frautschi bootstrap in the following way. Suppose we have constituent particles denoted by X1, X2, ... which can bind together by 'exchanging'74 particles Q1, Q2, ... so as to form composite particles P1, P2, ... So we write schematically

P1, P2, ... = X1 + X2 + ... (Q1, Q2, ...).
There will be many such 'equations' for different choices of the set of constituents X1, X2, ... The essential point of the bootstrap is that the sets of all possible composites, constituents and 'exchange' particles are all one and the same set, i.e. any of the particles can potentially play any of the three roles. At first sight we seem to be involved in some immediate anomalies. For example A can be a constituent of B and also B a constituent of A, or A can be a constituent of itself. All that this demonstrates is the inadequacy of a naive 'containment' model of the constituents of a composite entity. If we break up a composite A into constituents B and C by firing another particle X at A so as to supply a suitable input of energy, we must regard the reaction X + A → B + C + X as expressing the creation of B and C out of the available energy of the colliding system, not as the release of particles B and C already present as 'parts' of A. On this interpretation the so-called containment paradoxes simply disappear. For example there is no inconsistency in the two reactions

X + A → B + C + X   and   X + B → A + C + X

which demonstrate B as a constituent of A and A as a constituent of B.
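The logical structure of this self-consistency requirement can be displayed in a small numerical toy. The sketch below is not a model of any real hadron dynamics: the two-particle 'spectrum', the binding rule m1 + m2 - g/m_exchange and the coupling g are invented purely for illustration. It merely solves the condition that the masses computed for the composites agree with the masses assumed for the constituents and exchanged particles.

# Toy illustration of bootstrap-style self-consistency: the same two "particles"
# play the roles of composite, constituent and exchanged particle, and their
# masses are fixed by demanding that the scheme reproduce itself.
from scipy.optimize import fsolve

g = 1.0  # toy coupling; an overall dimensional scale still has to be put in by hand


def composite_mass(m1, m2, m_exchange):
    """Invented binding rule: constituents m1, m2 bound by exchanging a particle of mass m_exchange."""
    return m1 + m2 - g / m_exchange


def bootstrap_conditions(masses):
    """Each particle, regarded as a composite of the others, must come out with its own assumed mass."""
    m_a, m_b = masses
    return [
        composite_mass(m_a, m_b, m_b) - m_a,  # a built from a and b, exchanging b
        composite_mass(m_a, m_a, m_a) - m_b,  # b built from a and a, exchanging a
    ]


solution = fsolve(bootstrap_conditions, x0=[0.5, 2.0])
print("self-consistent masses:", solution)            # both come out at sqrt(g)
print("residuals:", bootstrap_conditions(solution))   # ~0: the toy 'bootstrap' closes

No dynamics is explained by such a toy; its point is only that no individual mass is put in by hand, yet the requirement of mutual consistency singles out a definite spectrum, up to the overall scale set by the dimensional parameter g (compare note 72).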
74. The exchange of virtual particles as the mechanism by means of which other particles interact with one another is a picturesque way of describing the physics of interacting fields. In S-matrix theory the role of virtual particle exchange is actually taken over by crossed channel particles and resonances. For purposes of exposition we have adopted the picturesque field theory language.
It has become a commonplace exercise among philosophically minded physicists and scientifically minded philosophers to trace links between the bootstrap approach and a wide variety of other philosophical writings. Favourite candidates are Leibniz, Anaxagoras, Whitehead, oriental mysticism and even the logical atomism of Wittgenstein's Tractatus.75 In my view most of this work is largely misdirected. For example, the absence of arbitrary parameters in the bootstrap is linked to Leibniz's Principle of Sufficient Reason, and the involvement of one particle with another is related to the mirroring relationship between the monads. Of course, as Miller remarks,76 such parallels demonstrate in a striking way the non-transitivity of similarity! But striking disanalogies are also apparent. For Leibniz there are many possible worlds, for the bootstrappers only one self-consistent world,77 while the essence of the bootstrap is the interaction between particles which is the exact opposite of the independent existence of Leibniz's monads. In the case of Anaxagoras we meet the claim that every substance contains every other substance.78 But in the case of the bootstrap the mutual involvement of the particles is limited by conservation laws related to internal quantum numbers. Also Anaxagoras really has a containment model in which the containment paradoxes referred to above are avoided by a doctrine of seeds and portions.79 Furthermore Anaxagoras's system is really a degenerate form of atomism which is quite contrary to the evanescent nature of the particles involved in the bootstrap picture.80 In general the problems and perspectives which gave rise to the specific scientific theory of the bootstrap are quite different from those informing the putatively parallel metaphysical systems.

Of course the Chew-Frautschi bootstrap in its original formulation applies only to the hadrons. The photon and the leptons (electrons, muons and neutrinos) are not incorporated in the overall scheme. Chew looks forward speculatively to a complete bootstrap which would demand for self-consistency even that of 'confronting the elusive concept of observation and, possibly, of consciousness'.81 But at this point Chew concedes that the programme would lead us 'outside physics'.

75. See in particular G. F. Chew, '"Bootstrap": A Scientific Idea?', Science, N.Y. 161 (1968), 762; G. Gale, 'Chew's Monadology', J. Hist. Ideas 35 (1974), 339; H. P. Stapp, 'S-Matrix Interpretation of Quantum Theory', Phys. Rev. D3 (1971), 1303; F. Capra, The Tao of Physics (London: Wildwood House, 1975); and D. Miller, 'The Uniqueness of Atomic Facts in Wittgenstein's Tractatus', Theoria 43 (1977), 174.
76. Loc. cit., note 75.
77. We must not however overlook the contingency of the fundamental principles such as analyticity on which the bootstrap is built.
78. See in particular D. E. Gershenson and D. A. Greenberg, Anaxagoras and the Birth of Physics (New York: Blaisdell, 1964) for a happy collaboration between a physicist and a classicist in commenting on the writings of Anaxagoras.
79. Thus 'Every seed contains portions of every other seed' replaces 'Every seed contains every other seed'.
80. If we are looking for parallels the bootstrap is much closer to the Heraclitan than the Parmenidean tradition in Greek philosophy.
81. Loc. cit., note 75, p. 765.
As we remarked in section 2 the bootstrap approach has been largely replaced in the last ten years or so by a revival of fundamentalism, namely that the structure of the hadrons is to be understood in terms of the fundamental entities named quarks. We shall make some remarks on this development in the concluding section. At this point we just want to emphasize the distinction from the bootstrap. Matthews has argued that there is no direct inconsistency between the fundamentalist and bootstrap approaches. As he puts it, 'If hadrons are made of quarks, then hadrons made of hadrons are also made of quarks'.82 However the quarks are now part of an arbitrary input for the bootstrap equations - they are not themselves determined by the bootstrap. Furthermore the bootstrap equations are no longer the most perspicuous way of approaching the structure of the hadrons - we can go directly to an analysis in terms of quarks. In other words the bootstrap equations are now to be regarded as expressing correct but very complicated relationships between the parameters specifying the hadrons, which can be understood or 'solved' in a much more direct way by the fundamentalist approach.83
9. Conclusion

One feature that emerges from a study of the historical development of EPT over the last fifty years is the element of continuity, the very definite links between successive theories. The way such links operate is, as we have seen, by the device of reformulating the old theory, either directly as in the case of the Feynman diagrams or by introducing new elements of 'surplus structure' as in the case of second quantization. The old theory may now be stretched by giving a realistic interpretation to this additional structure and introducing new axioms involving the newly interpreted elements of the surplus structure. This is what happened in the case of RQFT. Alternatively the new axioms may be incorporated in the realm of surplus structure without giving them any realistic interpretation. This, we explained, was the status of the analyticity postulate in S-matrix theory. The importance for heuristics of the process of reformulating an old theory is clearly expressed by Feynman in his 1965 Nobel Lecture:
82. Op. cit., note 2, p. 118. Chew has made the rather different point that quarks may not in fact be fundamental at all but are themselves part of the hadron bootstrap. See for example the papers by Chew cited in notes 75 and 94.
83. Empirical differences between the bootstrap and quark approaches are sometimes referred to in the literature. See in particular J. Harte, 'Bootstraps and Partons - a Field Guide to Composite Hadrons', Nucl. Phys. B50 (1972), 301. These depend on the properties of specific bootstrap and quark models. There is no necessary incompatibility. In passing we may note that recent work on the so-called ordered S-matrix has demonstrated quark-like features of a pure S-matrix approach. See the review article by G. F. Chew and C. Rosenzweig, 'Dual Topological Unitarization: An Ordered Approach to Hadron Theory', Physics Reports 41C (1978), 263.
Theories of the known, which are described by different physical ideas, may be equivalent in all their predictions and hence are scientifically indistinguishable. However, they are not psychologically identical when trying to move from that base into the unknown. For different views suggest different kinds of modifications which might be made and hence are not equivalent in the hypotheses one generates from them in one's attempt to understand what is not yet understood.84

In this quotation the emphasis is on what Feynman refers to as 'physical ideas'. But equally important in EPT is the emphasis on the role of 'surplus structure', i.e. purely mathematical considerations.85 We have stressed this aspect particularly in regard to the analytic S-matrix, although we have been at pains to point out that divergent views are generally held as to whether the analyticity postulate should simply be justified post hoc by the pragmatic success of the resulting theory or whether it should be based on or linked to some more directly physical principle of macrocausality.

Turning from heuristics to appraisal, the major difficulty in testing theories of the strongly interacting particles lies, as we have stressed, in the computation gap. It is as true in field theory, via the role of virtual particles, as it is in S-matrix theory, via the interlocking of real as opposed to virtual processes, that there are no elementary isolable problems which can be solved exactly to test the underlying theory. This is in marked contrast to the case of atomic or molecular physics where we can isolate simple systems such as the hydrogen atom for exact study in terms of the Schrödinger or Dirac equations.86 For more complicated systems we can then use agreement with experiment as a reasonable test of the adequacy of the approximations that have been made. But in hadron physics we never know whether we are testing the theory or the approximation since there are no 'simple' phenomena to be solved exactly.
In a sense we see here the breakdown of the traditional scientific method as it has been practised since the time of Galileo. The progress of science depends on being able to isolate simple phenomena and on being able to disregard, in certain circumstances and for practical purposes, the enormous complexity of every real-life situation. In the realm of hadron dynamics this is no longer possible. As a parallel we can speculate as to how celestial mechanics would have developed (or failed to develop!) if the planetary system was a strongly interacting one and not susceptible to treatment by means of a suitable perturbation theory.
84. R. P. Feynman, 'The Development of the Space-Time View of Quantum Electrodynamics', Nobel Lectures: Physics 1963 - 1970 (Amsterdam: Elsevier, 1972), p. 155. See p. 177.
85. Discussion of the importance in EPT of symmetry principles which operate in the realm of surplus structure is given by Redhead, loc. cit., note 4, p. 106.
86. Of course these equations are themselves approximate in the sense of not allowing for the specifically quantum electrodynamical effects such as the Lamb shift we referred to in Section 5. By 'exact study' we mean relative to appropriate purposes.
As a result of this situation the importance of theoretical models in hadron physics is apparent, but their status vis-à-vis the underlying theory remains always the major issue. In the case of QED, where the computation gap was effectively closed by means of the Feynman diagram techniques, the predictions of the theory are extraordinarily accurate. In view of the mathematical status of renormalization theory this circumstance is all the more remarkable.

Let me add, finally, a few remarks concerning the ultimate nature of matter and the argument between the bootstrappers and the fundamentalists. We can distinguish two different traditions in fundamentalism which we may term Thalean and Anaximanderian. The former sees the fundamental 'stuff' as a particular sort of matter of which all other forms of matter are constituted. The latter regards the fundamental 'stuff' not as a form of matter at all but as a substratum of which what we call matter is some form of manifestation. The currently popular quark approach to fundamentalism has shown an interesting transition from a Thalean to an effectively Anaximanderian version. When the quark theory was originally proposed in the 1960s it was confidently expected that free quarks would appear among the products of particle reactions at sufficiently high energies. In fact free quarks have not so far been observed. Two points of view are possible. On the one hand we may envisage that even the current reaction energies are not sufficiently high to materialize the quarks, their rest masses being too large. Alternatively, the more popular approach is the quark confinement one, in which it is argued that free quarks can never be produced even in principle. An attractive mechanism for this state of affairs is provided by the current non-Abelian gauge theory of quark interactions, the so-called chromodynamics, in which the interaction energy between quarks increases effectively in a linear manner with distance, so that the energy required to separate a free quark would be infinitely large.
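The linear rise referred to here can be made concrete with the kind of phenomenological quark-antiquark potential commonly used to model confinement (the particular form below is our illustrative sketch, not a formula taken from the literature under discussion):

\[
V(r) \;\simeq\; -\,\frac{4}{3}\,\frac{\alpha_{s}}{r} \;+\; \sigma r,
\qquad V(r) \;\longrightarrow\; \infty \quad \text{as } r \to \infty,
\]

with σ, the so-called string tension, of the order of 1 GeV per fermi. On this picture the work needed to drag a quark off to infinite separation is unbounded, which is just the statement in the text that a free quark can never be produced.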
Feinberg has distinguished manifest entities, which can be directly 'observed' as free particles, from existent entities, which include in addition anything which can affect or influence the former set.87 In particular in RQFT the virtual particles are for Feinberg existent but not manifest. However Feinberg remarks that all the existent particles can be made manifest in appropriate circumstances; in other words one can 'explore the existent through the manifest'.88 The quark confinement theories challenge this possibility. In their recent review article Marciano and Pagels express the situation in these terms:

Confinement implies that the colour89 degrees of freedom are in principle unobservable. . . . It would be an irony if quantum mechanics, founded with an insistence on operationalism,90 produced a theory in which the fundamental entities were non-empirical, mathematical constructs.91

87. G. Feinberg, 'Philosophical Implications of Contemporary Particle Physics', Paradigms and Paradoxes, R. G. Colodny (ed.) (Pittsburgh: University of Pittsburgh Press, 1972), p. 33. See pp. 43 - 44.
88. Ibid., p. 44.
89. In chromodynamics the quarks possess one of three colours. Full details are given in the review article by Marciano and Pagels from which the quotation in the text is taken. See note 14.
Of course the whole question of fundamentalism raises the spectre of an infinite regress. We quote Kogut and Susskind:

It seems very unlikely to us that partons92 are really point particles . . . We can speculate that a parton . . . will be resolved even further into yet another hierarchy of constituents.93

Perhaps we should leave the last word with Chew:
I would find it a crushing disappointment if in 1980 all of hadron physics could be explained in terms of a few arbitrary entities. We should then be in essentially the same posture as in 1930 . . . To have learned so little in half a century would to me be the ultimate frustration.94

In conclusion, then, we would like to affirm that both in respect of heuristics and of appraising theories for empirical adequacy, and in relation to their ultimate ontological commitments, particle physics has exposed or at any rate emphasised methodological problems which deserve the close attention of philosophers of science.
90. It is interesting how physicists cling to the view that quantum mechanics in Heisenberg's original version expresses an operational formulation.
91. Op. cit., note 14, p. 142.
92. Partons are phenomenological constituents of hadrons which would currently be identified with quarks.
93. J. Kogut and L. Susskind, 'The Parton Picture of Elementary Particles', Physics Reports 8C (1973), 75, p. 167.
94. G. F. Chew, 'Hadron Bootstrap: Triumph or Frustration?', Physics Today 23 (10) (1970), 23, p. 125.