High-precision nonperturbative QCD


Annals of Physics 315 (2005) 193–212 www.elsevier.com/locate/aop

G. Peter Lepage
Physics Department and Laboratory for Elementary-Particle Physics, Cornell University, Ithaca, NY 14853, USA
Fax: +1 607 255 8463. E-mail: [email protected]
Received 29 September 2004; accepted 29 September 2004. Available online 22 December 2004.
doi:10.1016/j.aop.2004.09.018

Abstract

Recent advances in lattice QCD have resulted in the first simulations with realistic quark vacuum polarization. Consequently a wide variety of high-precision (few-percent) nonperturbative calculations are now possible. This paper reviews the recent developments that make this possible, and presents early evidence that the era of high-precision nonperturbative QCD is at hand. It also discusses the future impact of lattice QCD on experiments, particularly for heavy-quark physics. © 2004 Elsevier Inc. All rights reserved.

PACS: 11.15.Ha; 12.38.Aw; 12.38.Gc

Keywords: Quantum chromodynamics; Lattice QCD; B physics

1. Introduction

Quantum chromodynamics (QCD) was invented 30 years ago to account for the internal structure of protons, neutrons, and other hadrons. It is modeled after quantum electrodynamics (QED): hadrons are built out of quarks, which are analogous to electrons, and these are bound by gluons, which are vector gauge particles analogous to photons. QCD differs from QED, however, in that the quark's QCD charge is large, and therefore perturbation theory, the only generic tool we have for solving


quantum field theories, is useless. This greatly limits our ability to analyze QCD. The problem is somewhat ameliorated by asymptotic freedom, which implies that a quark's effective charge g_eff in an interaction depends upon the momentum q transferred by that interaction, and that g_eff ≡ g(q) vanishes as q → ∞. This property allows us to solve QCD for high-energy, short-distance processes by expanding in powers of the QCD fine structure constant

$$\alpha_s(q) \equiv \frac{g^2(q)}{4\pi}. \qquad (1)$$

Such perturbative expansions allowed for the detailed experimental verification of QCD at high-energy accelerators in the 1970s and 1980s. Unfortunately α_s(q) is O(1) for the momentum transfers of order 1 GeV or less that are typical inside hadrons. Consequently perturbation theory is useless for analyzing hadronic structure.

An alternative approach, developed shortly after QCD, is lattice QCD, which employs large computers to obtain nonperturbative numerical solutions of QCD reformulated on a discrete space–time lattice. For most of its 30-year history, however, lattice QCD has been stymied by its inability to include realistic effects from quark vacuum polarization. As a result most lattice QCD calculations from this period suffer from uncontrolled systematic errors that are typically of order 10–30%, and sometimes much larger. This seemingly insurmountable problem has apparently now been solved, just in the past few years, and high-precision (few-percent) nonperturbative QCD calculations are now possible for the first time in the history of strong-interaction physics. For the first time we can move beyond the severe limitations of perturbative QCD.

High-precision, nonperturbative QCD is important for several reasons. It is essential for Standard Model physics, and, in particular, for current experimental studies of the weak interactions of heavy quarks. Failures of the Standard Model (and therefore evidence of new physics) are most likely to be found in the weak interactions of heavy quarks, and consequently there is a large experimental program to measure these interactions with great precision (a few percent). Typical experimental studies attempt to extract information about the weak interactions from such processes as B–B̄ mixing, B → πℓν, D → ℓν, and K–K̄ mixing. Unfortunately the QCD interactions of the heavy quarks in these processes are much larger than their weak interactions. Amplitudes for such processes have a weak-interaction part, which is what we want, multiplied by a nonperturbative QCD part, which must somehow be computed and removed. The biggest obstacle to high-precision work currently lies in the 10–30% errors coming from the QCD part. High-precision, nonperturbative QCD is essential for B and D physics experiments to reach their full potential.

The techniques of nonperturbative QCD are important for physics beyond the Standard Model as well. Strongly coupled (nonperturbative) field theories are an outstanding challenge for all of theoretical physics: while field theory is quite generic in physics, weak coupling is not. Indeed two of the three known interactions in particle physics, QCD and gravity, are strongly coupled, and we are not yet sure whether or not symmetry breaking in electroweak physics is nonperturbative. New strong-coupling physics is possible, perhaps even likely, at the LHC and/or beyond.


It is generic at low energies in almost any nonabelian gauge theory—unless the gauge symmetry is spontaneously broken, and even then the most popular schemes involve dynamical symmetry breaking, which again relies upon strongly coupled dynamics. Asymptotic freedom, which leads to strong coupling at low energies, provides an attractive mechanism for explaining large mass hierarchies such as are apparent in nature, and therefore may be a feature of new physics. In QCD, for example, the value of the coupling constant is determined, or at least strongly affected, by physics at the Planck scale, ~10¹⁹ GeV, where the coupling constant is α_s(M_Planck) ≈ 0.02. Hadrons can only form at scales m_hadron where α_s(m_hadron) ~ 1. Since α_s evolves very slowly (logarithmically), the ratio of these two scales is tiny:

$$m_{\rm hadron}/M_{\rm Planck} \sim 10^{-19}. \qquad (2)$$

Asymptotic freedom and strong coupling provide a natural explanation for such large scale hierarchies; the one-loop arithmetic behind Eq. (2) is sketched below.
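The estimate in Eq. (2) follows from simple one-loop running of the coupling. The following minimal sketch makes assumptions beyond anything in the text: one-loop running only, n_f = 6 flavors at all scales, and no threshold effects; the value 0.02 and the criterion α_s ~ 1 are the numbers quoted above.

```python
import numpy as np

# One-loop estimate of the hierarchy in Eq. (2).  Assumptions: one-loop
# running only, nf = 6 at all scales, no threshold effects.
b0 = 11 - 2 * 6 / 3                 # one-loop beta-function coefficient, = 7
inv_alpha_planck = 1 / 0.02         # 1/alpha_s at the Planck scale (from the text)
inv_alpha_hadron = 1 / 1.0          # alpha_s ~ 1 marks hadron formation

# 1/alpha_s(mu) = 1/alpha_s(M) + (b0 / 2 pi) ln(mu / M)
log10_ratio = (inv_alpha_hadron - inv_alpha_planck) * 2 * np.pi / b0 / np.log(10)
print(f"m_hadron / M_Planck ~ 10^{log10_ratio:.0f}")   # prints ~ 10^-19
```

The tiny ratio comes entirely from exponentiating the slow, logarithmic evolution of α_s.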

There is a pressing short-term need for reliable, generic techniques for analyzing strongly coupled field theories. Such techniques are essential to complete our understanding of the Standard Model, and may be critical for disentangling the new physics we find at future high-energy accelerators like the LHC—it took half a century to figure out low-energy QCD, because it is strongly coupled. Lattice QCD is in the midst of a major breakthrough on this front.

In this paper, we review key ideas behind lattice QCD and describe the recent breakthroughs. We then survey current efforts to demonstrate that today's lattice QCD is real QCD, with detailed comparisons between lattice QCD calculations and precise experimental measurements. Finally we discuss the prospects for high-precision, nonperturbative QCD calculations in the near future, and their likely impact on Standard Model physics.

2. What is lattice QCD?

Lattice QCD is the most fundamental theory of strong-interaction physics, including both perturbative and nonperturbative aspects of the theory. In principle, lattice QCD should tell us everything we want to know about hadronic spectra and structure, including such phenomenologically useful things as weak-interaction form factors, decay rates, and deep-inelastic structure functions.

The basic approximation in lattice QCD is the replacement of continuous space and time by a discrete grid. The nodes or "sites" of the grid are separated by lattice spacing a, and the length of a side of the grid is L (see Fig. 1).

Fig. 1. The lattice approximation.


The quark and gluon fields from which the theory is built are specified only on the sites of the grid, or on the "links" joining adjacent sites; interpolation is used to find the fields between the sites. In this lattice approximation, the path integral, from which all quantum-mechanical properties of the theory can be extracted, becomes an ordinary multidimensional integral, where the integration variables are the values of the fields at each of the grid sites:

$$\int \mathcal{D}A_\mu \,\cdots\, \exp\Big({-}\!\int L\,dt\Big) \;\to\; \prod_{x_j \in \,{\rm grid}} \int dA_\mu(x_j)\,\cdots\,\exp\Big({-}a\sum_j L_j\Big). \qquad (3)$$
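To make Eq. (3) concrete, here is a minimal Metropolis Monte Carlo evaluation of a discretized path integral. The toy theory is one-dimensional quantum mechanics (a harmonic oscillator) rather than QCD, with the path values x(t_j) playing the role of the field variables A_μ(x_j); the parameter values and update scheme are illustrative assumptions, not taken from any production lattice code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy version of Eq. (3): Euclidean path integral for a 1-D harmonic
# oscillator (m = omega = 1), sampled with the Metropolis algorithm.
N, a, eps = 64, 0.5, 1.4            # time slices, lattice spacing, step size
x = np.zeros(N)                     # the "path": one variable per site

def sweep(x):
    for j in range(N):
        jp, jm = (j + 1) % N, (j - 1) % N
        new = x[j] + rng.uniform(-eps, eps)
        # change in S = sum_j [ (x_{j+1}-x_j)^2/(2a) + a x_j^2/2 ] from x_j alone
        dS = ((x[jp] - new) ** 2 + (new - x[jm]) ** 2
              - (x[jp] - x[j]) ** 2 - (x[j] - x[jm]) ** 2) / (2 * a) \
             + a * (new ** 2 - x[j] ** 2) / 2
        if rng.random() < np.exp(-dS):
            x[j] = new              # accept; otherwise keep the old value

for _ in range(500):                # thermalize
    sweep(x)
meas = []
for _ in range(4000):               # measure <x^2>
    sweep(x)
    meas.append(np.mean(x * x))
print(f"<x^2> = {np.mean(meas):.3f} (continuum value 0.500, up to O(a^2) errors)")
```

The QCD path integral is evaluated in exactly this spirit, but with vastly more integration variables and with gauge-field link variables in place of x(t_j).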

Thus the problem of nonperturbative relativistic quantum field theory is reduced to one of numerical integration. The integral is over a large number of variables, and so Monte Carlo methods are generally used in its evaluation. Monte Carlo methods are computationally expensive, and the cost grows rapidly as the lattice spacing decreases:

$$\mathrm{cost} \propto (1/a)^x, \qquad (4)$$

where x ≥ 6. Obviously it is essential to keep a as large as possible: decreasing a by only a factor of two requires computers almost a hundred times larger.

Lattice QCD was invented by Ken Wilson in 1974, shortly after QCD itself. Its first and greatest triumph was to explain quark confinement, which is readily understood in a strong-coupling expansion of QCD [1]. Early enthusiasm for the lattice approach to nonperturbative QCD gradually gave way to the sobering realization that very large computers would be needed for the numerical integration of the path integral—computers much larger than those that existed in the mid 1970s. Much of a lattice theorist's effort in the first 20 years of lattice QCD was spent in accumulating computing time on the world's largest supercomputers, or in designing and building computers far larger than the largest commercially available machines.

While advances in computer hardware have played an important role in the development of lattice QCD, theoretical and algorithmic developments have been far more important. Lattice simulation errors were of order 100% in 1990; by 2000, errors of order 10–30% had been achieved in a wide range of nonperturbative QCD calculations. The most recent developments mean that errors of order a few percent are possible right now. In the next section we describe several of the developments from the 1990s that make high-precision lattice QCD possible today.

3. Quantum field theory on a lattice

Critical theoretical technologies developed in the last decade for lattice QCD include lattice perturbation theory [2], effective field theories for b and c quarks [3,4], improved discretizations that allow larger lattice spacings [5], and, very recently, a practical way of including light-quark vacuum polarization [6–8]. We now examine each of these.


3.1. Approximate derivatives

In the lattice approximation, field values are known only at the sites on the lattice. Consequently we approximate derivatives in the field equations by finite differences that use only field values at the sites. This is completely conventional, and very familiar, numerical analysis. For example, the derivative of a field ψ evaluated at lattice site x_j is approximated by

$$\frac{\partial\psi(x_j)}{\partial x} \approx \Delta_x\,\psi(x_j), \qquad (5)$$

where

$$\Delta_x\,\psi(x) \equiv \frac{\psi(x+a) - \psi(x-a)}{2a}. \qquad (6)$$

It is easy to analyze the error associated with this approximation. Taylor's theorem implies that

$$2a\,\Delta_x\,\psi(x) \equiv \psi(x+a) - \psi(x-a) = \big(e^{a\partial_x} - e^{-a\partial_x}\big)\,\psi(x),$$

and therefore

$$\Delta_x\,\psi = \Big(\partial_x + \frac{a^2}{6}\,\partial_x^3 + \mathcal{O}(a^4)\Big)\,\psi. \qquad (7)$$

Thus the relative error in Δ_x ψ is of order (a/λ)², where λ is the typical length scale in ψ(x). On coarse lattices we generally need more accurate discretizations than this one. These are easily constructed. For example, from Eq. (7) it is obvious that

$$\frac{\partial\psi}{\partial x} = \Delta_x\,\psi - \frac{a^2}{6}\,\Delta_x^3\,\psi + \mathcal{O}(a^4), \qquad (8)$$

which is a more accurate discretization. When one wishes to reduce the finite-a errors in a simulation, it is usually far more efficient to improve the discretization of the derivatives than to reduce the lattice spacing. For example, with just the first term in the approximation to ∂_x ψ, cutting the lattice spacing in half would reduce a 20% error to 5%; but the cost would increase by a factor of 2⁶ = 64 in a simulation where cost goes like 1/a⁶. On the other hand, including the a² correction to the derivative, while working at the larger lattice spacing, achieves the same reduction in error but with a cost increase of only a factor of 2.

Eq. (8) shows the first two terms of a systematic expansion of the continuum derivative in powers of a². In principle, higher-order terms can be included to obtain greater accuracy, but in practice the first couple of terms are sufficiently accurate for most purposes. Simple numerical experiments with a²-accurate discretizations like this one show that only three or four lattice sites per "bump" in ψ are needed to achieve accuracies of a few percent or less. Since ordinary hadrons are approximately 1.8 fm in diameter, these experiments suggest that a lattice spacing of 0.4 fm would suffice for simulating these hadrons. However, QCD is a quantum theory, and, as we discuss in the next section, quantum effects can change everything.
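A numerical experiment of the kind just mentioned is easy to reproduce. The sketch below compares the errors of the discretizations in Eqs. (6) and (8) on a smooth test function; the choice of sin x on a periodic grid is an arbitrary illustration.

```python
import numpy as np

# Compare the centered difference, Eq. (6), with the improved difference,
# Eq. (8), on psi(x) = sin(x) over one period.
def D(psi, a):
    return (np.roll(psi, -1) - np.roll(psi, 1)) / (2 * a)    # Eq. (6)

def D_improved(psi, a):
    return D(psi, a) - a**2 / 6 * D(D(D(psi, a), a), a)      # Eq. (8)

for n in (8, 16, 32):
    a = 2 * np.pi / n                        # lattice spacing
    x = a * np.arange(n)
    psi, dpsi = np.sin(x), np.cos(x)         # field and its exact derivative
    err = np.max(np.abs(D(psi, a) - dpsi))
    err_imp = np.max(np.abs(D_improved(psi, a) - dpsi))
    print(f"a = {a:.3f}  naive error = {err:.1e}  improved error = {err_imp:.1e}")
# halving a cuts the first error by ~4 (order a^2), the second by ~16 (order a^4)
```

Even at only a few sites per "bump" the improved derivative is accurate to a fraction of a percent, which is the arithmetic behind the cost comparison above.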


3.2. Ultraviolet cutoff

The shortest-wavelength oscillation that can be modeled on a lattice is one with wavelength λ_min = 2a; for example, the function ψ(x) = +1, −1, +1, … for x = 0, a, 2a, … oscillates with this wavelength. Thus gluons and quarks with momenta p = 2π/λ larger than π/a are excluded from the theory by the lattice; that is, the lattice functions as an ultraviolet cutoff. In simple classical field theories this is often irrelevant: short-wavelength ultraviolet modes are either unexcited or decouple from the long-wavelength infrared modes of interest. However, in a noisy nonlinear theory, like an interacting quantum field theory, ultraviolet modes strongly affect infrared modes by renormalizing masses and interactions—consider ultraviolet divergences, for example. Thus we cannot simply discard all particles with momenta larger than π/a; we must somehow mimic their effects on infrared states.

Modern renormalization theory tells us how to do this [9,10]: we mimic the effects of states excluded by the cutoff by adding extra a-dependent local interactions to our lattice lagrangian, currents, and other lattice operators. These extra terms are similar in form to the terms that we add to correct aⁿ errors in derivatives, and so we must modify the analysis of such errors given in the previous section. When we discretize the derivative in the QCD quark action, for example, we replace

$$\bar\psi\,\partial\cdot\gamma\,\psi \;\to\; Z(a)\,\big(\bar\psi\,\Delta\cdot\gamma\,\psi + a^2\,c(a)\,\bar\psi\,\Delta^3\cdot\gamma\,\psi + \cdots\big) \qquad (9)$$

as before, but now there is an overall renormalization Z(a), and c(a) has two pieces: a part, 1/6, from numerical analysis (Eq. (8)), plus a contribution that mimics the p > π/a parts of the quark self-energy. The quantum field theory renormalizes the numerical-analysis result. These renormalizations are theory and context specific; they are not universal.

This poses a problem. We want to include O(a²) corrections in our simulations, so we can use larger lattice spacings, but traditional numerical analysis does not tell us what values to use for parameters such as c(a). Even forgetting a² corrections, there are overall renormalizations such as Z(a) that we must somehow determine. The good news is that p > π/a QCD is perturbative if a is small enough, because of asymptotic freedom. This means that we can compute the extra pieces in c(a), etc., using perturbation theory:

$$c(a) = \tfrac{1}{6} + c_1\,\alpha_s(\pi/a) + \cdots, \qquad Z(a) = 1 + z_1\,\alpha_s(\pi/a) + \cdots. \qquad (10)$$

Perturbation theory fills in the gaps in the lattice, allowing us to obtain continuum results without the enormous expense of taking a → 0.

Asymptotic freedom in QCD implies that short-distance QCD physics is perturbative and therefore simple, while long-distance physics is nonperturbative and difficult. The lattice separates "short" from "long":

- p > π/a QCD is incorporated by including renormalizations and local correction terms, computed using perturbation theory, in the lattice lagrangian and other operators;
- p < π/a QCD is nonperturbative and is solved using numerical Monte Carlo integration.


The cost of a QCD calculation depends critically upon where we place the boundary between these two regimes—that is, on the choice of lattice spacing a. The convergence of perturbation theory, for the c(a)'s and Z(a)'s, is the decisive factor. This is the topic of the next section.

3.3. Perturbation theory

Improved discretizations and large lattice spacings are old ideas, pioneered by Wilson, Symanzik, and others [11], but their promise was not realized until the mid 1990s. The reason is that they rely critically on perturbation theory, and perturbation theory, when applied to lattice QCD problems, did not seem to work at lattice spacings that could be afforded. The perturbative expansion parameter for quantities like c(a) is α_s(π/a). This increases with increasing a, and the expansions for the c(a)'s degrade. The critical issue, therefore, is how large we can make a before perturbation theory fails. Before 1992, it seemed that perturbation theory failed unless a < 0.05 fm. Today we know that it works even out to a = 0.4 fm, with most simulations using a's between 0.1 and 0.25 fm. Given that Monte Carlo integration costs vary as (1/a)⁶, an increase in lattice spacing from 0.05 fm to, say, 0.15 fm reduces the cost by almost three orders of magnitude!

Testing perturbation theory is straightforward. One designs short-distance quantities that can be computed easily in a simulation (i.e., in a Monte Carlo evaluation of the lattice path integral). The Monte Carlo gives the exact value, which can then be compared with the perturbative expansion for the same quantity. An example of such a quantity is the expectation value of the Wilson loop operator

$$W(C) \equiv \langle 0|\,\tfrac{1}{3}\,{\rm Re}\,{\rm Tr}\,P\exp\Big(ig\oint_C A\cdot dx\Big)\,|0\rangle, \qquad (11)$$

where A is the QCD vector potential, P denotes path ordering, and C is any small, closed path or loop on the lattice. W(C) is perturbative for sufficiently small loops C. We can test the utility of perturbation theory over any range of distances by varying the loop size while comparing numerical Monte Carlo results for W(C) with perturbation theory.

Fig. 2 illustrates the highly unsatisfactory state of traditional lattice-QCD perturbation theory. It shows the "Creutz ratio" of 2a×2a, 2a×a, and a×a Wilson loops,

$$\chi_{2,2} \equiv -\ln\left[\frac{W(2a\times 2a)\,W(a\times a)}{W^2(2a\times a)}\right], \qquad (12)$$

plotted versus the size 2a of the largest loop. Traditional perturbation theory (dotted lines) underestimates the exact result by factors of three or four for loops of order 1/2 fm; only when the loops are smaller than 1/20 fm does perturbation theory begin to give accurate results.

A technical error in lattice perturbation theory was identified and corrected in the early 1990s [2].


Fig. 2. The χ₂,₂ Creutz ratio of Wilson loops versus loop size. Results are shown from Monte Carlo simulations (exact), from new (tadpole-improved) lattice perturbation theory, and from traditional (old) lattice perturbation theory.

As shown in the figure, the corrected perturbative expansions converge rapidly to the (exact) Monte Carlo results for loops as large as 1/2 fm. This result has been confirmed in dozens of other tree-level, one-loop, two-loop, and three-loop calculations.

3.4. Improved discretizations

The previous sections suggest that errors of order a few percent are possible from lattice calculations with lattice spacings of order 0.1–0.4 fm.


Improved discretizations are essential for speed and for high precision. The power of improved discretizations is easily illustrated. Wilson's original discretization of the gluon action has order-a² errors:

$$\mathcal{L}_{\rm Wilson} \simeq \sum_{\mu,\nu}\Big(\frac{1}{2}\,{\rm Tr}\,F_{\mu\nu}^2 + \frac{a^2}{24}\,{\rm Tr}\,F_{\mu\nu}\big(D_\mu^2 + D_\nu^2\big)F_{\mu\nu} + \cdots\Big). \qquad (13)$$

Note that the a² term violates rotation/Poincaré invariance; it is an artifact of our rectangular lattice. We remove such errors by adding correction terms that cancel their effects. The results are shown in Fig. 3, which compares calculations of the static-quark potential, V(r), at a = 0.4 fm using Wilson's action and using a lattice action designed to remove the O(a²) errors [5]. The dominant errors in the uncorrected calculation of V(r) reflect a failure of rotational invariance, which is expected since the a² error in the Wilson action, Eq. (13), is

Fig. 3. Static-quark potential computed on 6⁴ lattices with a ≈ 0.4 fm using the Wilson action (A) and the improved action (B). The dotted line is the standard infrared parameterization for the continuum potential, V(r) = K r − π/(12 r) + c, adjusted to fit the on-axis values of the potential.


neither Lorentz nor rotationally invariant. The points at r = a, 2a, 3a, … are for separations that are parallel to an axis of the lattice, while the points at r = √2 a, √3 a, … are for separations between the static quark and antiquark that are along lattice diagonals. These errors are dramatically reduced for the improved action. Similar results have been obtained for a large variety of lattice operators.

Another example is the light-quark action, where again the traditional discretization has Lorentz-noninvariant O(a²) errors:

$$\mathcal{L}_{\rm lat} \simeq \bar\psi(\Delta\cdot\gamma + m)\psi + \frac{a^2}{6}\sum_\mu \bar\psi\,\Delta_\mu^3\gamma_\mu\,\psi + \cdots. \qquad (14)$$

To test for these errors we can compute [12]

$$c^2(p) \equiv \frac{E^2(p) - m^2}{p^2}, \qquad (15)$$

which should equal one for all p in a Lorentz-invariant theory. Plots of c²(p) with and without correction terms, for a = 0.25 fm, are shown in Fig. 4. The improved discretization again gives dramatically better results. A free-field version of this dispersion test is sketched below.
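For free quarks the test in Eq. (15) can be carried out without any simulation, since the lattice dispersion relations are known in closed form. The sketch below is a free-field illustration only (the interacting test of [12] requires Monte Carlo simulation); the bare mass am = 0.2 is an arbitrary assumed value, and m in Eq. (15) is replaced by the measured rest energy E(0), as in actual analyses.

```python
import numpy as np

# Free-field dispersion test, Eq. (15), in lattice units (a = 1).
# Naive action:    sinh^2 E = sin^2 p + m^2.
# a^2-improved:    sin p  -> sin p (1 + sin^2 p / 6)  and
#                  sinh E -> sinh E (1 - sinh^2 E / 6).
am = 0.2

def bisect(f, target, lo=0.0, hi=1.0):
    for _ in range(60):                   # f is increasing on [lo, hi]
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

naive = lambda E: np.sinh(E) ** 2
improved = lambda E: (np.sinh(E) * (1 - np.sinh(E) ** 2 / 6)) ** 2
k = lambda p: np.sin(p) * (1 + np.sin(p) ** 2 / 6)

E0n = bisect(naive, am ** 2)              # measured "rest mass" E(0)
E0i = bisect(improved, am ** 2)
for ap in (0.2, 0.4, 0.6):
    En = bisect(naive, np.sin(ap) ** 2 + am ** 2)
    Ei = bisect(improved, k(ap) ** 2 + am ** 2)
    c2n = (En ** 2 - E0n ** 2) / ap ** 2  # Eq. (15) with m -> E(0)
    c2i = (Ei ** 2 - E0i ** 2) / ap ** 2
    print(f"ap = {ap:.1f}  naive c^2 = {c2n:.3f}  improved c^2 = {c2i:.3f}")
```

The naive c²(p) falls well below one as ap grows, while the improved action stays near one, mirroring the behavior seen in the simulations of Fig. 4.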

Fig. 4. Plots of c²(p) versus p (in lattice-spacing units) for the traditional and improved quark actions.

3.5. Heavy quarks

Heavy quarks, such as the c and b quarks, pose a special problem for lattice QCD. Lattice errors are proportional to (aE)ⁿ and (ap)ⁿ, where E and p are typical energies and momenta in the problem of interest. This means that lattice spacings with a ~ 1/M are necessary for simulating hadrons with mass M. Consequently lattice spacings must be 10× smaller, at an added cost of 10⁶, when one analyzes a B or Υ meson rather than an ordinary hadron. This is impossible. The situation is saved by the fact that the b quark is nonrelativistic in these mesons, with v²/c² ≤ 0.1. Consequently one can replace the Dirac theory for the heavy quark with a nonrelativistic effective field theory, based upon the Schrödinger equation [3,13],


$$H_0 \equiv -\frac{\Delta^2}{2M_0} + igA_0. \qquad (16)$$

Correction terms are added to correct O(vⁿ/cⁿ) errors as well as O(aⁿ) errors:

$$\delta H = -c_1\,\frac{(\Delta^2)^2}{8M_0^3}\Big(1 + \frac{aM_0}{2n}\Big) + c_2\,\frac{a^2\sum_i \Delta_i^4}{24M_0} - c_3\,\frac{g}{2M_0}\,\sigma\cdot B + c_4\,\frac{ig}{8M_0^2}\,(\Delta\cdot E - E\cdot\Delta) - c_5\,\frac{g}{8M_0^2}\,\sigma\cdot(\Delta\times E - E\times\Delta), \qquad (17)$$

where perturbation theory gives us the coupling constants

$$c_i = 1 + c_{i1}\,\alpha_s(\pi/a) + \cdots.$$

Note that there are only two parameters in this theory: the b quark's mass and the QCD coupling. These are typically tuned so that the lattice calculation gives correct results for the Υ mass and for the mass splitting between the Υ and the Υ′. The effective field theory works very well. This is illustrated by the results in Figs. 5 and 6 [14,15]. These results come from only two inputs, two numbers. There is no quark model, or phenomenological potential. The energies come directly from the QCD path integral.
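The structure of such a calculation can be seen in a free-field toy. The sketch below is an assumption-laden illustration (gauge field switched off, one spatial dimension, lattice units, bare mass aM₀ = 2 chosen arbitrarily; real NRQCD evolves with gauge-covariant differences and the corrections of Eq. (17)): it evolves a point source with G(t+a) = (1 − aH₀) G(t) and checks that the extracted energies follow the nonrelativistic dispersion E(p) ≈ p²/(2M₀).

```python
import numpy as np

# Free-field toy of NRQCD-style evolution built on Eq. (16), gauge field off:
# G(t+a) = (1 - a H0) G(t),  H0 = -Delta^2 / (2 M0),  lattice units a = 1.
N, aM0, nt = 32, 2.0, 20
G = np.zeros(N)
G[0] = 1.0                                  # point source at the origin

for _ in range(nt):
    lap = np.roll(G, -1) - 2 * G + np.roll(G, 1)   # lattice Laplacian
    G = G + lap / (2 * aM0)                        # one Euclidean time step

# each momentum mode decays geometrically; extract its effective energy and
# compare with the nonrelativistic dispersion p^2 / (2 M0)
Gp = np.fft.fft(G).real
p = 2 * np.pi * np.arange(N) / N
for k in range(1, 5):
    aE = -np.log(Gp[k]) / nt
    print(f"p = {p[k]:.3f}  E(p) = {aE:.4f}  p^2/2M0 = {p[k]**2 / (2 * aM0):.4f}")
```

In the real calculation the same evolution is driven by gauge links in a Monte Carlo background, and the meson energies are extracted from correlators in exactly this exponential-decay fashion.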

4. "Unquenching" and vacuum polarization

Quark vacuum polarization occurs when gluons create a virtual quark–antiquark pair out of the vacuum, creating a quark loop as shown in Fig. 7.

Fig. 5. The Υ spectrum as computed in LQCD. The open circles are for a simulation without quark vacuum polarization; the closed circles include vacuum polarization. The dashed lines show the experimental values, except for the P states, where the spin average of the triplet P states is shown (which should agree with the ¹P₁ masses to within errors).


Fig. 6. Υ fine structure as computed in LQCD. The open circles are for a simulation without quark vacuum polarization; the closed circles include vacuum polarization. The dashed lines show the experimental values. Systematic errors, due to radiative corrections to the couplings, are of order the statistical error bars shown here.

Fig. 7. An example of quark vacuum polarization (or quark loops).

Vacuum polarization is very difficult to simulate, particularly for very light quarks like the u and d. Consequently most QCD simulations in the past have omitted vacuum polarization, or used quark masses that were 10× too large or more. QCD without vacuum polarization is called "quenched QCD." This severe approximation introduces uncontrolled systematic errors of order 10–30% in most calculations. It was the major limitation in lattice QCD until 2000.

In the late 1990s, a new light-quark discretization was introduced. This was an improved version of an old discretization referred to as "staggered quarks." Calculations with the improved staggered-quark discretization are 50–1000 times faster and 2–3 times more accurate than any competing discretization. They are also very efficient for small quark masses. Using this discretization, it is now possible to include realistic vacuum polarization with lattices that have a useful volume (2.5–3 fm on a side) and lattice spacing (0.1–0.25 fm). The u and d quark masses are still too large, but they are now small enough that we can use chiral perturbation theory to reliably extrapolate to the correct values. In the following sections, we discuss several unusual features of staggered quarks, and then we review some recent simulation results that strongly suggest that the unquenching problem has been solved.


4.1. Naive/staggered quarks

Ignoring gluons for the moment, the simplest discretization of the light-quark lagrangian is

$$\mathcal{L} = \bar\psi(x)\,(\Delta\cdot\gamma + m)\,\psi(x). \qquad (18)$$

It is simple to show that this has an exact "doubling" symmetry:

$$\psi(x) \to \tilde\psi(x) \equiv i\gamma_5\gamma_\rho\,(-1)^{x_\rho/a}\,\psi(x) = i\gamma_5\gamma_\rho\,\exp(ix_\rho\pi/a)\,\psi(x).$$

This means that given any low-momentum quark mode ψ(x) on the lattice, there is another exactly equivalent mode ψ̃(x) with p_ρ ≈ π/a—the largest momentum possible on the lattice. This new mode is one of the "doublers" of the naive quark action. The doubling transformation can be applied successively in two or more directions; the general transformation is

$$\psi(x) \to \tilde\psi(x) \equiv \prod_\rho \big(i\gamma_5\gamma_\rho\big)^{\zeta_\rho}\,\exp(ix\cdot\zeta\,\pi/a)\,\psi(x), \qquad (19)$$

where ζ is a vector with one or more components equal to 1 and all the others 0. Consequently there are 15 doublers in all (in four dimensions), which we label with the 15 different ζ's. As a consequence of the doubling symmetry, the standard low-energy mode and the 15 doubler modes must be interpreted as 16 equivalent flavors or "tastes" of quark. We use the word taste here to distinguish this property from true quark flavor. (The 16 tastes are reduced to four by staggering the quark field.) A quick numerical check of the doubling symmetry is sketched below.
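The doubling symmetry is easy to verify for the free naive Dirac operator D(p) = i Σ_μ γ_μ sin(ap_μ) + m: shifting any one momentum component by π/a gives a unitarily equivalent operator. A minimal check follows; the explicit Euclidean gamma-matrix representation and the momentum and mass values are arbitrary choices of this illustration.

```python
import numpy as np

# Check of the doubling symmetry, Eq. (19), for the free naive Dirac
# operator D(p) = i sum_mu gamma_mu sin(p_mu) + m (Euclidean, a = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g = [np.kron(sy, s) for s in (sx, sy, sz)]   # gamma_1, gamma_2, gamma_3
g.append(np.kron(sz, np.eye(2)))             # gamma_4
g5 = g[0] @ g[1] @ g[2] @ g[3]

def D(p, m=0.1):
    return sum(1j * g[mu] * np.sin(p[mu]) for mu in range(4)) + m * np.eye(4)

p = np.array([0.3, -0.7, 0.2, 0.5])          # arbitrary low momentum
for rho in range(4):
    q = p.copy()
    q[rho] += np.pi                          # shift one component by pi/a
    Om = 1j * g5 @ g[rho]                    # the transformation of Eq. (19)
    assert np.allclose(D(q), Om @ D(p) @ Om.conj().T)
print("D(p + (pi/a) e_rho) is unitarily equivalent to D(p) for every rho")
```

Applying the shift in several directions at once gives the remaining doublers, for 16 equivalent tastes in all.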

Sixteen tastes is bad! Traditionally there were two options for dealing with this problem:

(1) (Wilson…) Break the doubling symmetry by adding a ψ̄ (Δ·γ)² ψ / 2 to the lagrangian. This removes the 15 extra doublers, but also destroys all chiral symmetry, making calculations with small m_{u,d} very difficult.
(2) (Kogut–Susskind…) Live with the 16 tastes by inserting factors of 1/16 in strategic places. This preserves a chiral symmetry, making calculations at small masses relatively efficient.

Most work in the 1990s built on the first option. Improved staggered quarks rely upon the second. Recently, a third solution—overlap or domain-wall quarks—has emerged, but the resulting algorithms are far too slow to compete with improved staggered quarks. (These last algorithms may prove important for extending simulation techniques beyond QCD, and in particular for chiral gauge theories.)

The procedure for reducing 16 tastes to one is simple for heavy-quark mesons like the B. Sixteen exactly equivalent B mesons can be constructed from a b quark, which might be described by NRQCD on the lattice, and the 16 tastes of u antiquark. Two of these, for example, involve the quark momenta shown in Fig. 8. We can ignore all but the first of these by limiting the total B momentum so that all components of


Fig. 8. Momentum configurations for 2 of the 16 different, equivalent B mesons when naive/staggered quarks are used to describe the light quark.

Fig. 9. Flavor-changing interactions between light quarks when using the naive/staggered quark discretization. Initial and final state quarks are all on shell; the gluon is highly virtual.

$$P_B - (M_b, 0, 0, 0) \equiv p_b + p_{\bar u} - (M_b, 0, 0, 0) \qquad (20)$$

are less than π/(2a); P_B distinguishes between tastes.

Light hadrons are harder. This is because there are taste-changing strong interactions: radiating a gluon whose momentum is near π/a in one or more directions changes a quark's taste. The initial and final quarks in Fig. 9 are all on shell when one uses a naive/staggered discretization, but they have different tastes. Taste-changing strong interactions are bad.

The gluons in such processes, however, carry the largest momentum on the lattice, π/a, and so are highly virtual and highly perturbative. This means that we can remove such taste-changing interactions to any order in α_s(π/a) by local, perturbative modifications to the quark's lagrangian. Indeed such corrections are part of the standard improvement procedure for lattice actions; taste changing is an O(a²) error in the naive quark action [7]. This cure for taste-changing interactions was the new development, at the end of the 1990s, that brought naive/staggered quarks back to the forefront in lattice QCD. Combining this correction with the more conventional O(a²) corrections discussed above gives the most accurate quark action in current use [6–8]. Once taste-changing interactions are sufficiently suppressed, one is free to select any one of the multiple copies of each light hadron that arise in such a formalism.

The discussion so far has focused on valence quarks. The 16 tastes must be dealt with for sea quarks as well. This is easy. The quarks enter the gluon path integral as a factor det(Δ·γ + m). This factor incorporates 16 identical tastes of quark. To reduce this to one taste, we replace the determinant by its 16th root. This in effect multiplies each quark loop by 1/16, which is obviously correct provided taste-changing interactions are negligible. One might worry about formal complications arising from the 16th root of the quark determinant. Much is known, however, that is reassuring:

- no problems result from fractional roots of the fermion determinant in any order of continuum QCD perturbation theory [16];
- phenomena, such as π⁰ → 2γ, connected with chiral anomalies are correctly handled (because the relevant (taste-singlet) currents are only approximately conserved [17]);


- instanton mass renormalizations are properly handled (for physical quark masses);
- the CP-violating phase transition that occurs when m_u + m_d < 0 does not occur in this formalism, but the real world is neither in this phase nor near it;
- the nonperturbative quark-loop structure is correct except for taste-changing interactions. Taste-changing interactions are short distance, so they can be removed with perturbation theory [18,19]—at present to order a²α_s. They may also be removed after the simulation with modified chiral perturbation theory [20].

To press further requires nonperturbative studies. The tests we present later are among the most stringent nonperturbative tests ever of staggered quarks (and indeed of QCD).

The doubling symmetry leads to a surprising result for the quark propagator, computed in an arbitrary gauge field. The propagator has the remarkable structure

$$S_F(x, y; A_\mu) = g(x, y; A_\mu)\,\Omega(x)\,\Omega^\dagger(y), \qquad (21)$$

where g(x, y; A_μ) is a Dirac scalar (i.e., it has no spinor indices), and the spinor matrix Ω(x) is completely independent of the gauge field A_μ:

$$\Omega(x) \equiv \prod_\mu \big(\gamma_\mu\big)^{x_\mu/a}. \qquad (22)$$

The spinor structure of the quark propagator is uniquely specified by the doubling symmetry. Consequently the quark propagator for naive/staggered quarks has 16× fewer components to compute than are required for other discretizations. This property, together with its residual chiral symmetry, makes naive/staggered quark discretizations far more efficient than all other alternatives.

4.2. Lattice QCD confronts experiment

The HPQCD and MILC collaborations have been working together to explore the extent to which realistic unquenching can be achieved using the improved staggered-quark discretization and current computing resources. The first results were recently published in [15]. These lattice simulations had only five parameters: the bare QCD coupling constant α_s, the bare u and d quark masses, taken equal, and the bare s, c, and b masses. (The approximation m_u = m_d ignores isospin-breaking effects, which are typically of order 1%.) These parameters were tuned to reproduce the experimentally measured values of m_{Υ′} − m_Υ, m_π², 2m_K² − m_π², m_{D_s}, and m_Υ, respectively. The quantity chosen to tune each quark mass is approximately proportional to that mass, and fairly independent of the other masses. For example, chiral perturbation theory implies that 2m_K² − m_π² is roughly proportional to m_s, but depends only weakly on m_{u,d}, m_c, etc. The logarithm of m_{Υ′} − m_Υ depends linearly on 1/α_s, but is approximately independent of the quark masses, including m_b. This makes it ideal for tuning the QCD coupling. Once tuned, we can readily convert the lattice coupling into the more standard MS-bar coupling. These lattice simulations give [21]


$$\alpha_{\overline{\rm MS}}(M_Z) = 0.1180(15), \qquad (23)$$

where the error is completely dominated by uncertainties in the perturbation theory that connects the lattice coupling to the MS-bar coupling. This result is in excellent agreement with the current world average, 0.1187(20), which is based largely upon comparisons of results from perturbative QCD with those from high-energy experiments on jet production, deep-inelastic scattering, etc. [22]. This agreement is striking quantitative evidence that the QCD of confinement and lattices is the same theory as the QCD of asymptotic freedom and jets!

Having tuned the five parameters of QCD, using five pieces of experimental data, there are no other parameters to tune. Any further results from the QCD simulation must agree with experiment—or there is a serious problem for experiment, lattice QCD, or the Standard Model. In Fig. 10 (right panel) are lattice QCD results for various masses and decay constants, each divided by the corresponding experimental result. The error bars shown are dominated by lattice QCD errors, both statistical and systematic. Theory and experiment are in excellent agreement to within the 1.5–3% errors shown in the figure. This is one of the most accurate calculations in the history of strong-interaction physics, and certainly the most accurate and comprehensive nonperturbative analysis.

The agreement shown in this figure is remarkable because it involves a wide variety of hadrons, ranging from pions to upsilons. In simulations without vacuum polarization, it is impossible to find a single tuning of the QCD parameters that gives correct results for everything. This is illustrated by the left panel in Fig. 10, which shows the same ratios, but without vacuum polarization in the simulation. These quenched lattice QCD results differ by as much as 25% from experiment.

Fig. 10. Lattice QCD results, divided by the corresponding numbers from experiment, for the π and K decay constants, for the B_s mass, and for various mass splittings in the ψ and Υ families of heavy-quark mesons. The right panel shows simulation results that include quark vacuum polarization; the left panel shows the same results but without the effects of quark vacuum polarization.


The u and d masses used in these simulations range from m_s/6 to m_s/2. These are larger than the correct mass (≈ m_s/25), but small enough that chiral perturbation theory can be used to extrapolate to the correct results. Previous work was restricted (by the cost) to masses larger than m_s/2, where low-order chiral perturbation theory is no longer reliable. The error bars for the pion and kaon decay constants in Fig. 10 are dominated by uncertainties in the chiral extrapolation; the entire extrapolation is less than 10% in each case. These simulations used results from two lattice spacings, 1/8 and 1/11 fm, to verify that the remaining O(a²α_s) errors are under control.

This analysis illustrates the difference between lattice QCD and various models, like the quark model, or approximations, like HQET. Quark models, for example, are quantitatively successful in describing Υ's but have little to say about B mesons. Conversely, HQET is a very useful tool for B physics, but has very little application to Υ's. Lattice QCD is QCD; it is not a model. Consequently it connects the physics of Υ's to that of B's: the B_s mass in the figure, which is accurate to within 20 MeV, results from tuning to Υ and K physics data.

The particular quantities plotted in Fig. 10 were selected because they are very well measured experimentally, and because they are particularly easy to analyze in lattice QCD, and therefore have the smallest systematic uncertainties. "Gold-plated" calculations in lattice QCD today are restricted to hadrons that are well below decay threshold (by 100 MeV or more) or have negligible decay widths. We can readily compute masses, and matrix elements that involve at most one hadron in the initial and/or final state; multihadron states are difficult to analyze with current techniques. This may seem overly restrictive, but there are literally dozens of interesting nonperturbative quantities that can be studied with today's lattice technology. These include:

- masses, decay constants, semileptonic form factors, and mixing amplitudes for D, D_s, D*, D_s*, B, B_s, B*, B_s*, and baryons;
- masses, leptonic widths, electromagnetic form factors, and mixing amplitudes for any meson in the ψ/Υ families below D/B threshold;
- masses, decay constants, electroweak form factors, charge radii, magnetic moments, and mixing for low-lying light-quark hadrons.

Since the developments here are primarily algorithmic, progress will be much faster than the pace of computer hardware evolution.

5. The (Near) future

The tests discussed in the previous section suggest that a major effort on high-precision lattice QCD is warranted, and that it could have a significant impact in the near future. Of particular interest to many lattice theorists are analyses related to heavy-quark physics. For example, there are gold-plated lattice QCD processes for every CKM matrix element but V_tb (see Fig. 11). These and other quantities are being actively studied by lattice theorists with the goal of reducing theoretical


Fig. 11. Gold-plated lattice calculations for most CKM matrix elements.

QCD errors to a few percent, down from current errors that are typically 15–20%, in order to take maximum advantage of the high-precision data coming from B factories and from colliders. The theoretical errors assumed in the right panel of Fig. 10 are within reach!

A critical challenge for lattice QCD is to demonstrate its reliability at the level of 1–3% errors, given its past history of 10–30% errors. This will be particularly important should lattice QCD results, when combined with accurate experimental data, indicate a failure of the Standard Model. Will we believe it? We can only establish credibility for lattice QCD, and its error estimates, by comparing its predictions with a wide variety of highly accurate data. A wide variety of tests is required to test all the different components of lattice QCD; we must test:

- heavy-quark lattice actions (NRQCD, Fermilab, etc.);
- light-quark lattice actions (improved staggered quarks);
- the gluon lattice action;
- high-order perturbation theory, for computing renormalization constants (Z's and c's);
- simulation techniques for computing spectra, form factors, mixing, …

And the tests must be accurate, at the level of a few percent. While there are relatively few accurate measurements in existence that are relevant to such tests, the new CLEO-c experiment will shortly provide high-precision results that are highly relevant. These will include detailed ψ measurements that will both test and overconstrain the b-quark action, which will then be ready for use in B analyses. CLEO-c will then study D and D_s mesons, including new measurements, to within a few percent, of their leptonic widths, and their semileptonic widths and form factors. These are all gold-plated quantities in lattice QCD. These processes are direct analogues of some of the most important B processes. They will also provide few-percent-accurate determinations of V_cd and V_cs, leading to new tests of the Standard Model.


Lattice QCD is currently in a race to predict CLEO-c's results before they are measured. With success on this front, lattice QCD will be prepared to engage the accurate B physics coming from B factories and colliders over the next several years.

6. Conclusion

For the first time in its history, lattice QCD has a superb opportunity to have a broad impact on mainstream particle physics. Despite its 30-year history, lattice QCD is just now emerging from its infancy, thanks to a series of algorithmic and theoretical developments during the past decade, culminating in an efficient approach to vacuum polarization. Lattice QCD is essential to high-precision B/D physics, and it is now ready to meet that challenge. The verification of lattice methods at the level of a few percent, in a broad range of calculations, will be a landmark in the history of quantum field theory—the quantitative verification of powerful, nonperturbative techniques. Recent work on discretizations for chiral fermions may dramatically extend the applicability of these techniques, so that we may soon be ready to engage nonperturbative physics beyond the Standard Model.

Acknowledgments Many of the ideas presented in this paper were developed in close collaboration with Christine Davies, Aida El-Khadra, Andreas Kronfeld, Paul Mackenzie, Quentin Mason, Junko Shigemitsu, Howard Trottier, and our many colleagues in the HPQCD collaboration. We have also benefitted from close interaction with Claude Bernard and Doug Toussaint, and their colleagues in the MILC collaboration. This work was supported in part by the NSF.

References

[1] K.G. Wilson, Phys. Rev. D 10 (1974) 2445.
[2] G.P. Lepage, P.B. Mackenzie, Phys. Rev. D 48 (1993) 2250–2264.
[3] G.P. Lepage, L. Magnea, C. Nakhleh, U. Magnea, K. Hornbostel, Phys. Rev. D 46 (1992) 4052–4067.
[4] A.X. El-Khadra, A.S. Kronfeld, P.B. Mackenzie, Phys. Rev. D 55 (1997) 3933–3957.
[5] M.G. Alford, W. Dimm, G.P. Lepage, G. Hockney, P.B. Mackenzie, Phys. Lett. B 361 (1995) 87–94.
[6] K. Orginos, D. Toussaint, R.L. Sugar, Phys. Rev. D 60 (1999) 054503.
[7] G.P. Lepage, Phys. Rev. D 59 (1999) 074502.
[8] K. Orginos, D. Toussaint, Phys. Rev. D 59 (1999) 014501.
[9] G.P. Lepage, in: T. DeGrand, D. Toussaint (Eds.), From Actions to Answers, World Scientific Press, Singapore, 1990.
[10] G.P. Lepage, How to renormalize the Schrödinger equation, 1997.
[11] K.G. Wilson, Rev. Mod. Phys. 55 (1983) 583.


[12] M.G. Alford, T.R. Klassen, G.P. Lepage, Phys. Rev. D 58 (1998) 034503.
[13] B.A. Thacker, G.P. Lepage, Phys. Rev. D 43 (1991) 196–208.
[14] A. Gray, et al., Nucl. Phys. Proc. Suppl. 119 (2003) 592–594.
[15] C.T.H. Davies, et al., Phys. Rev. Lett. 92 (2004) 022001.
[16] G.G. Batrouni, et al., Phys. Rev. D 32 (1985) 2736.
[17] H.S. Sharatchandra, H.J. Thun, P. Weisz, Nucl. Phys. B 192 (1981) 205.
[18] Q. Mason, et al., Nucl. Phys. Proc. Suppl. 119 (2003) 446–448.
[19] E. Follana, et al., Nucl. Phys. Proc. Suppl. 129 (2004) 447–449.
[20] C. Bernard, Phys. Rev. D 65 (2002) 054031.
[21] HPQCD Collaboration, preliminary result (unpublished).
[22] S. Eidelman, et al., Phys. Lett. B 592 (2004) 1.