Monte-Carlo Solution to the Many-Body Schrödinger Equation

John G. Zabolitzky
Institut für Theoretische Physik, Universität zu Köln, D-5000 Köln 41, FRG

Abstract. The Green's function Monte-Carlo method is reviewed. This method is used to obtain solutions to the many-body Schrödinger equation to arbitrary accuracy. Applications to many-Boson systems as well as the complications arising in the case of many-Fermion systems are discussed. For few-Fermion systems, in particular atomic nuclei, existing calculations are reviewed and possible future work is outlined.

CONTENTS

1. INTRODUCTION
2. VARIATIONAL MONTE-CARLO
3. GREEN'S FUNCTION MONTE-CARLO FOR MANY-BOSON SYSTEMS
4. GREEN'S FUNCTION MONTE-CARLO FOR FERMIONS
5. OTHER APPROACHES
6. SUMMARY AND OUTLOOK
ACKNOWLEDGEMENTS
APPENDIX 1. THE ALGORITHM OF METROPOLIS, ROSENBLUTH, ROSENBLUTH, TELLER AND TELLER
APPENDIX 2. SAMPLING OF THE UNPERTURBED GREEN'S FUNCTION G_U
REFERENCES

1. INTRODUCTION

The solution of the many-body Schrödinger equation is a central problem in various areas of theoretical physics and chemistry. Unfortunately, while a variety of methods exists for one- and two-body problems, the many-body problem in general is rather difficult to solve. There are only rare and mostly trivial cases where an analytic solution is possible. There are a few more cases where numerical solutions have been found to the accuracy required to answer physical questions. The latter situation in particular is found in the nuclear three-body problem. General many-body problems like medium or heavy atomic nuclei, however, have not been found susceptible to sufficiently accurate solutions. Therefore, a large part of nuclear physics has been devoted to the construction of approximation schemes thought to provide some description of one or a selected set of nuclear states. In many instances, because of the complexity of the problem, rather crude models need be used; in other cases, in particular when considering nuclear ground-states, more refined approximations have been found possible. In any case one usually has a very hard time verifying the accuracy of the approximation scheme employed, and in some cases one never succeeds in doing so, i.e. one is forced to live with approximations of unknown quality. An additional complication arises in nuclear physics since atomic nuclei cannot exactly be considered as ensembles of nucleons interacting via some suitable known potential.


Subnucleonic degrees of freedom are present in nuclei. For some questions these may safely be neglected; for other questions they are of paramount importance. Within the present discussion, however, I wish to assume from the outset problems with non-relativistic inert particles interacting via known potential functions. This model problem certainly has its shortcomings and is unable to describe a variety of nuclear phenomena accurately. However, since approximation-free treatments of systems with subnucleonic degrees of freedom are even more scarce than if just considering nucleonic degrees of freedom, it is necessary to exploit this simpler model to its limit before the more complicated problem involving subnucleonic degrees of freedom can be attacked seriously.

From both the points of view given above it is desirable to obtain solutions to the many-nucleon Schrödinger equation free of approximations, even if only a few restricted but nontrivial problems can be studied. It is the purpose of the present paper to outline the only route found so far capable of producing such solutions. At the present point in time the essential restrictions are simple interactions, i.e. purely central and local ones, and not too many degrees of freedom. The origin of these restrictions will be made clear below. The main purpose of studying problems of this type is the verification of more widely applicable approximation schemes. After proving the accuracy of suggested approximate ways to obtain information about nuclear states on those cases where the approximation-free treatment is possible, these methods may be applied with much more confidence to more complicated problems where an exact treatment is not possible to date.

Approximation-free solution to the many-body Schrödinger equation rests essentially on the equivalence between Schrödinger's equation and the diffusion equation. It seems that Fermi was the first to notice this formal equivalence.(45) Also at that time it was conceived that a simulation by Monte-Carlo methods should therefore be possible. However, computational devices available at that time were clearly inadequate to attack the problem. It was only after the advent of more powerful electronic computers that implementations were provided.(29-42) The intervening 20 years have seen a large number of very successful applications to ground-states of many-Boson systems, several of which will be discussed below. Excited states and many-Fermion systems have posed a much more severe problem.

One essential first step in making use of the diffusion analogy is interpreting the quantum-mechanical wavefunction, physically a probability amplitude, as a probability density for simulation purposes. Probability densities necessarily are non-negative. The probability interpretation of the wavefunction is therefore possible in a simple way only if the wavefunction itself can be chosen non-negative everywhere, as is true for many-Boson ground-state wavefunctions. Excited-state and many-Fermion wavefunctions cannot be of uniform sign because they have to be orthogonal to the many-Boson ground-state, which may be chosen to be non-negative everywhere. Also the required antisymmetry of many-Fermion wavefunctions is incompatible with uniform sign for these wavefunctions.
The simulation of the Schrödinger equation thereby is significantly more involved in the case of many-Fermion systems, since the probability interpretation of the wavefunction needs to be introduced in a more involved manner which will be discussed below. For this reason ground-states of many-Fermion systems have been obtained free of approximations only recently.(16,38,47,25) These calculations require significantly larger computational resources than the many-Boson calculations and typically are performed on the most powerful vector computers or dedicated smaller machines. Future calculations for larger systems or more complicated Hamiltonians involving, e.g., exchange and tensor forces will presumably require still more computer power and do not seem feasible with present-day technology.

In order to understand the Monte-Carlo solution method for the Schrödinger equation a few general properties of Monte-Carlo methods have to be discussed. This is done in the


context of variational Monte-Carlo methods in Section 2. Variational methods are useful and interesting in their own right since they are applicable to a much larger class of problems with good accuracy, though not being exact. The Green's function Monte-Carlo method for many-Boson systems is presented in Section 3. This method is the basis for the many-Fermion methods discussed in Section 4. Furthermore, several nuclear problems may be mapped onto many-Boson problems and therefore be treated successfully with this simpler method. For many-Fermion problems there exist approximate and exact Monte-Carlo algorithms. The approximate versions are still more accurate than most numerical approximation schemes and therefore warrant a discussion in their own right. These as well as exact methods and applications to nuclear problems will be discussed in Section 4, where present results are also summarized. Section 5 is devoted to a discussion of various alternative approaches. In Section 6 the present status of the Monte-Carlo method as applied to many-Fermion problems is summarized and possible future developments are discussed.

A final remark seems in order clarifying the meaning of the words "exact" and "free of approximations" as used here. These words are used in the same sense as 2πr is the "exact" circumference of a circle with radius r. Since the numerical value of π is not known but may be calculated to arbitrary precision given sufficient computational resources, the same holds true of the circumference for given radius. A calculational method is called "free of approximations" or "exact" if there exist finite amounts of computer time and memory such that the desired answers may be calculated to any arbitrary finite prescribed precision. The adjective "numerical" is used for calculations not employing stochastic, i.e. Monte-Carlo, methods. Typically, calculations of this type will involve solution of equations on meshes as opposed to Monte-Carlo integrations and random walks.
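To make this statistical meaning of "exact" concrete, the short Python fragment below (an illustration added here, not part of the original article) estimates π by Monte-Carlo sampling; the statistical error shrinks like M^{-1/2} with the number of samples M, so any prescribed precision can be reached with a finite amount of computation.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_pi(m):
    """Estimate pi from the fraction of m random points in the unit square
    that fall inside the quarter circle of radius one."""
    x, y = rng.random(m), rng.random(m)
    return 4.0 * np.mean(x**2 + y**2 < 1.0)

for m in (10**3, 10**5, 10**7):
    est = estimate_pi(m)
    print(f"M = {m:8d}   pi estimate = {est:.5f}   error = {abs(est - np.pi):.1e}")
```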

2. VARIATIONAL MONTE-CARLO

For most quantum-mechanical many-body systems there does not exist any calculational method which will find the unconditionally lowest energy state. It is usually necessary to specify at least some symmetry property. One may then find the lowest energy state having the desired symmetry. An example of an approximate method allowing one to estimate this lowest energy is the variational method. Assuming some trial wavefunction Ψ_var with the desired symmetry and a Hamiltonian H, a variational energy estimator is given by

E_var = ⟨Ψ_var|H|Ψ_var⟩ / ⟨Ψ_var|Ψ_var⟩.    (2.1)

The symmetries to be incorporated into the trial wavefunction typically include the exchange symmetry (symmetric for a many-Boson system, antisymmetric for a many-Fermion system), total angular momentum for a nuclear system, solid or fluid phases for an infinite system, and others. In general we may assume that the trial wavefunction depends on a few parameters. The variational principle assures us that eqn. (2.1) constitutes an upper bound to the lowest energy for any state of the symmetry used; all Schrödinger eigenstates with different symmetry will be exactly orthogonal to the trial wavefunction, so that we have independent variational principles for all possible symmetry types. The only computational task then is simple evaluation of eqn. (2.1) for a given set of parameters in the trial function. Taking this as a building block, the minimization of eqn. (2.1) with respect to the parameters in order to find the lowest energy may then be achieved by a variety of strategies which are not the topic of the present paper.
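As a standard textbook illustration of this upper-bound property (added here, not taken from the article), consider a single particle in the one-dimensional oscillator H = −d²/dx² + x² (units with ħ²/2m = 1) and the Gaussian trial function Ψ_var(x; α) = exp(−αx²/2). Evaluating eqn. (2.1) in closed form gives

E_var(α) = α/2 + 1/(2α) ≥ 1 for all α > 0,

with equality only at α = 1, where the trial function coincides with the exact ground state of energy E_0 = 1. Minimizing over the parameter can therefore at best reach, but never undercut, the true eigenvalue.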


Evaluation of eqn. (2.1) may be viewed as an integration problem in some suitable high-dimensional space. Let us summarize all degrees of freedom of the system under discussion in the variable R. For a system composed of N electrons, R will be the collection of 3N electron coordinates and N electron spins. For nuclear systems isospin degrees of freedom may be added; for systems of spin-zero Bosons just spatial coordinates will be present. In order to simplify the presentation let us consider this latter case. R then may be viewed as a vector in 3N-dimensional space, and eqn. (2.1) may be written

E_var = [∫ dR p(R) (HΨ_var(R))/Ψ_var(R)] / [∫ dR p(R)]    (2.2)

where p(R) dR is the probability density for finding the N-body system within dR of the configuration R,

p(R) = |Ψ_var(R)|².    (2.3)

Obviously, for any wavefunction (Fermi, Bose or whatever) p(R) is non-negative, representing a physical probability. The expression for the variational energy, eqn. (2.2), may be restated as the average of the local energy E(R) over all configurations R, where each configuration R of the N-body system is to be used with probability p(R) and the local energy is given by

E(R) = HΨ_var(R)/Ψ_var(R).    (2.4)

The Monte-Carlo variational method may now be stated very simply. Assume there exists some computer algorithm which will sample configurations R from the prescribed probability density p(R). It is then possible to obtain a large but finite number M of such configurations, R_i, i = 1...M, in sequence. The average over the local energy, i.e. the 3N-dimensional integral of eqn. (2.2), may then be approximated by the finite sum

E_var = ⟨E(R)⟩_p ≈ Σ_i E(R_i)/M    (2.5)

where the angular bracket with subscript p denotes the average with respect to the probability density p(R). Equation (2.5) will become a strict equality for infinitely large M. For finite but large M, as may be achieved in a computer simulation, the exact value of eqn. (2.1) is approximated with an error which vanishes like M^{-1/2} and thus may be made arbitrarily small. This latter behaviour is typical of all Monte-Carlo methods and is due to the statistical nature of the averaging process.

It is important to note the way in which functions of many variables like p(R) are represented in a Monte-Carlo scheme. Normally, when one considers a function f(r), say, one imagines function values for given values of r. In a computer f(r) might be represented on some mesh, i.e. via a table of pairs (r_i, f(r_i)). In a Monte-Carlo calculation, in contrast, a function is represented by a list of argument values. It is the frequency with which argument values occur in any given interval which represents the function value. The function sin(x), for example, may be represented in the interval zero to π by a list of real positive numbers all from the same interval. Numbers in the vicinity of π/2 will occur more frequently than will numbers in the vicinity of zero or π. Specifically, the probability to find a number in the list smaller than h vanishes linearly with h, representing the linear behaviour of sin(x) for small values of x. This representation of functions, which may seem rather unusual to the reader not acquainted with Monte-Carlo methods, has significant advantages for functions of many variables if one is interested in integrals over these functions only. Consider a 100-body system. The configuration space is 300-dimensional real space. Attempting to formulate a


mesh filling this space and computing function values on the same is totally ridiculous: with only two mesh points per dimension one would already require 2^300, or approximately 10^90, mesh points and function values, far beyond any computer capacity that can be envisaged. On the other hand, integrals like eqn. (2.2) may be evaluated to percent-level accuracy employing lists of the order of 10^5 configurations R_i only, which is very well manageable on existing machines.

The seemingly miraculous achievement of accurate high-dimensional integration has, of course, an easily understandable reason. The integral eqn. (2.1) is approximated by the finite sum eqn. (2.5). Obviously, if E(R) were the constant function, i.e. independent of R, the value of the sum eqn. (2.5) would not depend upon M, the number of terms in the sum. Therefore, M = 1 would be sufficient to obtain the exact answer. Less extreme, if the variance of E(R) is small, i.e. if E(R) is approximately a constant independent of R, the sum eqn. (2.5) may be computed accurately with only a few terms. Considering eqn. (2.4), however, it is obvious that if a good trial wavefunction was chosen, E(R) will in fact be approximately constant. If one was lucky enough to choose a genuine Schrödinger eigenstate as variational trial function, E(R) will be exactly constant, equal to the Schrödinger eigenvalue. The variance of E(R) will increase with the "distance" between the trial wavefunction and the true eigenstate and is a continuous function of this distance. Therefore, if the trial function has not been chosen in a ridiculous way, one may rest assured that Monte-Carlo evaluation of eqn. (2.1) will produce reasonable accuracy with relatively limited computer effort.

The important lesson to be learned here is an augmentation of the above statement about the convergence of a Monte-Carlo estimate. The deviation from equality in eqn. (2.5) is proportional to M^{-1/2}, the constant of proportionality being the square root of the variance of the local energy E(R). In fact, a numerical estimate of the Monte-Carlo statistical error of eqn. (2.5) is given by

ΔE_var = [(⟨E(R)²⟩_p − ⟨E(R)⟩_p²)/M]^{1/2}    (2.6)

where this is clearly seen. The recipe for success in Monte-Carlo therefore is obvious: while accurate results may be made a little more accurate by increasing the denominator in eqn. (2.6), this really is a very expensive procedure in terms of computational resources. The important and extremely useful suggestion is to reduce the variance of the quantity to be estimated as much as possible, that is, to reduce the constant of proportionality in the cM^{-1/2} error law. While M, at the current growth rate of computational resources, may not be increased by more than a factor of two per year, reductions of many orders of magnitude are possible in c in many cases. It is seen that any effort invested in inventing better variational trial wavefunctions pays off doubly in the variational Monte-Carlo method: not only does one achieve results closer to those corresponding to the desired Schrödinger eigenstate, but also the computer time to obtain results of constant statistical accuracy decreases.

The computational task of eqn. (2.5) may be broken into two parts. One task is to sample configurations R from the probability density p(R); the other task is mere evaluation of the local energy at those configurations. The first task is solved by means of the algorithm due to M(RT)²,(44) which I will discuss in Appendix 1. The algorithm is capable of sampling any arbitrary (non-negative) probability density p(R), the only requirement being that the numerical value of p(R) may be evaluated for given R. This requirement obviously is fulfilled by eqn. (2.3) since the trial wavefunction usually is prescribed in closed form, maybe containing as ingredients some functions of one variable which are solutions to ordinary differential equations. These functions may easily be tabulated and interpolated to arbitrary accuracy, so that p(R) in fact can be evaluated for arbitrary R.
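As a deliberately minimal sketch of these two tasks (added here for illustration; the one-dimensional oscillator and the Gaussian trial function are chosen purely for the example and are not a system treated in the article), the fragment below samples configurations from p ∝ |Ψ_var|² with the M(RT)² algorithm and averages the local energy:

```python
import numpy as np

rng = np.random.default_rng(1)

def local_energy(x, alpha):
    """E(R) = H psi / psi for H = -d^2/dx^2 + x^2 and psi(x) = exp(-alpha x^2 / 2)."""
    return alpha + (1.0 - alpha**2) * x**2

def metropolis_sample(alpha, m, step=1.0):
    """Draw m samples from p(x) proportional to |psi(x)|^2 = exp(-alpha x^2)
    with the M(RT)^2 random walk."""
    samples = np.empty(m)
    x = 0.0
    for i in range(m):
        trial = x + step * rng.uniform(-1.0, 1.0)
        # accept the move with probability min(1, p(trial)/p(x))
        if rng.random() < np.exp(-alpha * (trial**2 - x**2)):
            x = trial
        samples[i] = x
    return samples

for alpha in (0.6, 1.0, 1.4):
    e = local_energy(metropolis_sample(alpha, 200_000), alpha)
    err = e.std() / np.sqrt(e.size)      # eqn (2.6), neglecting autocorrelation
    print(f"alpha = {alpha:.1f}   E_var = {e.mean():.4f} +- {err:.4f}   variance = {e.var():.4f}")
```

At α = 1 the trial function is the exact eigenstate, the local energy is constant, and the variance vanishes, which is precisely the zero-variance property discussed above.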


The second task of evaluating the local energy E(R), eqn. (2.4), formally looks much easier than sampling p(R). This is not true, however. The Hamiltonian H contains a kinetic and a potential part. The differentiations implied by the kinetic energy term in eqn. (2.4) may be quite cumbersome. To be specific, let me assume the potential to be local and pairwise additive and all particles to have the same mass m. The Hamiltonian then reads

H = −(ħ²/2m) Σ_i ∇_i² + Σ_{i<j} v(r_ij)    (2.7)
where v(r) is the pair potential function and r_ij the pair distance. For most physical systems of interest here (electrons, nucleons, Helium atoms) the pair potential v(r) is strongly repulsive at small distances and may be weakly attractive at larger distances. For systems of this type the two determining ingredients of the trial wavefunctions are the overall shape of the system and short-range as well as long-range pair correlations, in addition to statistics (symmetry or antisymmetry of the wavefunction with respect to pair exchange). For many-Boson systems with symmetric wavefunctions a reasonably accurate trial wavefunction is given by the so-called Jastrow wavefunction(10,20,25)

Ψ_var(R) = Π_{i<j} f(r_ij) Π_i φ(r_i) = F P.    (2.8)

This wavefunction is manifestly symmetric with respect to pair interchange. The product of one-body factors P for finite systems, like droplets of 4He atoms considered to be elementary spin-0 Bosons, describes the surface of the droplet.(73,50) The one-body functions φ will be approximately constant in the interior of the droplet and fall off exponentially outside the droplet. The product of two-body factors F falls to small values or even zero whenever a pair of particles approaches small pair distances, i.e. moves into the region of strongly repulsive potential. The two-body factor f(r) therefore will become small for small r and approach a constant for large separations. The wavefunction eqn. (2.8) is not normalized. Other choices of the two factors are possible. For infinite fluids, no one-body factors will be present, P = 1. For infinite solids, P should be a symmetrized product of Gaussians centered at the lattice sites. For finite droplets again, one could have P = 1 and incorporate the droplet surface into the two-body factors f(r) by having them fall off exponentially at large distances.(57,39)

For finite systems it must be ensured that no spurious center-of-mass motion is introduced into the description. This is achieved best by employing translationally and rotationally invariant wavefunctions. Within eqn. (2.8) translational invariance implies that all coordinates are taken with respect to the center-of-mass of the system. Since the two-body factors are expressed in terms of coordinate differences, no modification occurs. If one-body factors are present, each coordinate r_i is to be replaced by r_i − (1/N) Σ_k r_k.

The differentiation of eqn. (2.8) implied by the kinetic energy term in eqn. (2.4) does not pose significant difficulties. However, several refinements of eqn. (2.8) need to be considered. For many-Boson systems, three-body correlations must be considered.(59) For many-Fermion systems, the symmetric product P of one-body factors must be replaced by an antisymmetric determinant D of one-body functions. This replacement suffices to render the overall wavefunction antisymmetric since the two-body factor F (and eventual three-body factors) are symmetric. In the case of spin degrees of freedom, a product of several determinants, one for each possible spin projection, must be used, and similarly for isospin.(17) The simplest reasonable many-Fermion wavefunction is the Jastrow-Slater one

Ψ_FT(R) = F Π_k Det_k |φ_i(r_j)|    (2.9)


where F is the product of two-body correlation factors from eqn. (2.8), k runs over the different species of particles (spin-up neutrons, spin-down neutrons, spin-up protons and spin-down protons in the nuclear case), and the particle number j runs over the different particles (nucleons) of each species. A ground-state wavefunction for 16O would have a product of four determinants, each one made up from one s- and three p-orbitals. The index FT denotes a Fermi trial wavefunction. Within the determinants the center-of-mass coordinate again must be subtracted from the one-body coordinates written in eqn. (2.9).

A function of the type (2.9) is of acceptable quality if the Hamiltonian does not act on the spin degrees of freedom. This assumption applies for many-electron systems, liquid 3He systems and for models of atomic nuclei where the interaction is approximated by a spin-independent Wigner force, i.e. a local central spin-independent potential. A more accurate description of many-Fermion systems in addition requires a state-dependent factor in the wavefunction, which usually is taken in the Feynman-Cohen backflow form.(21,60) The evaluation of the kinetic energy term in eqn. (2.4) for these refined wavefunctions is the most difficult part of the variational calculation.

In spite of not being a rigorous expression for an eigenvalue of the Schrödinger equation, the variational expression [eqns. (2.1) and (2.5)] is very useful for realistic nuclear problems where the more refined methods to be discussed below are much more difficult to apply. It will be seen below that for those cases where exact and variational methods may both be employed, very good agreement is found in the results. It is, therefore, generally believed that the application of the variational method to problems involving the full complexity of realistic nuclear forces, like spin-orbit and tensor components, which induce the same type of correlations in the variational wavefunction, also is quite accurate. The most elaborate calculations(12) furthermore show very good agreement with numerical Faddeev calculations in the case of the three-nucleon ground-state problem, and with Coupled-Cluster calculations(37) for the four-nucleon ground-state problem. The major advantage of the variational Monte-Carlo method is the ease with which many-body interactions may be incorporated without inducing any significant computational problems. By the same method excited (resonant) states of the 4He nucleus have been obtained.(13) Some selected results will be discussed below in connection with the more rigorous method.
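To make the structure of eqns. (2.8) and (2.9) explicit, the sketch below evaluates ln|Ψ_FT| for a given configuration as the sum of pairwise Jastrow terms and the log-determinants of per-species Slater matrices. The McMillan-type pair factor f(r) = exp[−(b/r)⁵] and the Gaussian orbitals are illustrative placeholders, not the forms used in the calculations reviewed here.

```python
import numpy as np
from itertools import combinations

def log_jastrow(coords, b=1.0):
    """ln F = sum over pairs of ln f(r_ij), here with f(r) = exp(-(b/r)^5)."""
    total = 0.0
    for i, j in combinations(range(len(coords)), 2):
        r = np.linalg.norm(coords[i] - coords[j])
        total += -(b / r) ** 5
    return total

def orbital(r, width):
    """Illustrative one-body orbital phi(r) = exp(-|r|^2 / (2 width^2))."""
    return np.exp(-np.dot(r, r) / (2.0 * width**2))

def log_abs_trial(coords, species, widths):
    """ln |Psi_FT| = ln F + sum over species k of ln |Det_k phi_i(r_j)|, eqn. (2.9).

    coords  -- (N, 3) array of positions, centre of mass already subtracted
    species -- list of index arrays, one per species (e.g. n-up, n-down, p-up, p-down)
    widths  -- one orbital width per orbital of the largest species
    """
    log_psi = log_jastrow(coords)
    for idx in species:
        slater = np.array([[orbital(coords[j], widths[i]) for j in idx]
                           for i in range(len(idx))])
        sign, logdet = np.linalg.slogdet(slater)
        log_psi += logdet        # the overall sign would be tracked separately
    return log_psi

rng = np.random.default_rng(2)
coords = rng.normal(size=(4, 3))
coords -= coords.mean(axis=0)    # remove spurious centre-of-mass motion
print(log_abs_trial(coords, [np.array([0, 1]), np.array([2, 3])], widths=[1.0, 1.5]))
```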

3. GREEN'S FUNCTION MONTE-CARLO FOR MANY-BOSON SYSTEMS

Consider the ground-state of a many-Boson system. The wavefunction may be chosen real and positive. Let me assume that the potential is bounded from below. The scale of energy then may be shifted by a positive constant V_0 such that the many-body potential V(R) > 0 everywhere. Under these assumptions the time-dependent Schrödinger equation reads

−iħ ∂Ψ(R)/∂t = (ħ²/2m) Δ_R Ψ(R) − V(R)Ψ(R)    (3.1)

which is to be compared with the diffusion equation under absorption V(R),

∂C(R)/∂t = D Δ_R C(R) − V(R)C(R).    (3.2)

Integrating the Schrödinger equation in imaginary time to large times will filter out the ground-state from any arbitrary initial state not orthogonal to the ground-state. It is seen that this imaginary-time integration is equivalent to the integration of the diffusion equation in real time.(45,3) There are two essential points to be observed. Firstly, the quantum-mechanical wavefunction, a probability amplitude, formally has to be interpreted as a


concentration or probability density, necessarily a non-negative quantity. Secondly, the potential has to be interpreted as an absorption.

The diffusion analogy has been used in very direct ways to obtain ground-states of many-Boson systems.(3,43,15,46) One of the problems in employing the diffusion analogy directly is the use of a finite time step in performing the time-integration. It is then required to perform calculations for various different time steps and extrapolate to zero time step. In order to avoid this procedure I prefer to discuss another method, the Green's function Monte-Carlo (GFMC) method,(29-32,35,70,13,34) where this time step does not occur explicitly but the time integration is performed implicitly by another Monte-Carlo process not requiring any kind of extrapolation. On the surface, this approach seems to differ appreciably from the diffusion method. However, the basic steps finally executed on the computer are essentially the same, the only exception being the treatment of time steps. Still another equivalent formulation is in terms of path integrals.(2,54) This formulation is closely related to the direct application of the diffusion analogy, and will not be discussed here.

Consider the time-independent Schrödinger equation written in 3N-dimensional coordinate space for an N-Boson system

(−Δ + V(R))Ψ(R) = EΨ(R)    (3.3)

where Δ, the 3N-dimensional Laplacian, is equal to the sum of N three-dimensional Laplacians and ħ²/2m = 1 for notational simplicity. In order to completely specify the states of the system, eqn. (3.3) must be supplemented with the appropriate boundary conditions. For the finite systems of interest in nuclear physics open boundary conditions are suitable, i.e. the wavefunction is supposed to vanish whenever the magnitude of the coordinate R approaches infinity. For other systems, in particular infinite fluids like nuclear matter or liquid Helium, periodic boundary conditions are used together with a finite number of particles in order to simulate the infinite system on a finite computer.

Let us assume the Green's function G for the differential operator in eqn. (3.3) were given, defined by

(−Δ_1 + V(R_1)) G(R, R_1) = δ(R − R_1).    (3.4)

Under the assumption V(R) > 0, and thereby all E > 0 in eqn. (3.3), the Green's function G is positive. It is a symmetric function of its arguments. It may now easily be seen that iteration of the integral equation

Φ^{n+1}(R) = E_t ∫ G(R, R_1) Φ^n(R_1) dR_1    (3.5)

with an arbitrary positive constant E_t, the trial energy, will converge to the ground-state solution of eqn. (3.3). This is because of the representation of the Green's function in terms of eigenfunctions,

G(R, R_1) = Σ_i ⟨R|Ψ_i⟩ (1/E_i) ⟨Ψ_i|R_1⟩.    (3.6)

If the initial guess for the ground-state wavefunction is expanded

Φ^0(R) = Σ_i c_i Ψ_i(R)    (3.7)

the nth iterate of eqn. (3.5) may be found from eqns. (3.4)-(3.7),

Φ^n(R) = Σ_i (E_t/E_i)^n c_i Ψ_i(R).    (3.8)


Since the trial energy E_t as well as all eigenenergies E_i are positive (because of the shift V_0 in the energy scale), the ratio E_t/E_i is largest for the smallest E_i, i.e. the ground-state. Iteration of eqn. (3.5) therefore geometrically (exponentially) amplifies the ground-state component in the iterate. After some number of iterations all other states will have been damped out of the iterate. In other words, the integral operator in eqn. (3.5), the Green's function, acts as a filter to purify the wavefunction iterate towards the ground-state. Obviously, this process is rather similar to the integration in imaginary time if continuous time is replaced by the integer iteration number. It should be noted that because of the presence of the coefficients c_i in eqn. (3.8) the initial iterate should not be orthogonal to the ground-state (which would be difficult to achieve anyhow). Also, only those excited states contribute to the sum in eqn. (3.8) which are non-orthogonal to the initial iterate.

The significance of eqn. (3.5) arises since the iteration process may easily be translated into a Monte-Carlo random walk process. Assume some initial set of configurations be given, representing the initial iterate. As stated in the previous section, the Monte-Carlo representation of non-negative functions is by lists of configurations, the density of which at any point represents the function value at that point. The next iterate may then be obtained in the same representation by taking in turn each configuration from the set representing the initial iterate, R_1 say, and sampling configurations R for the new iterate from the unnormalized probability distribution p(R) = E_t G(R, R_1). That is, the integral kernel in eqn. (3.5) is interpreted as a transition probability for a random walk in 3N-dimensional configuration space governed by eqn. (3.5). Sampling from an unnormalized probability distribution p(R) is to be interpreted as first forming the normalization

K = ∫ p(R) dR = ∫ E_t G(R, R_1) dR    (3.9)

and then sampling on the average K configurations from the accordingly normalized probability distribution. The trial energy E_t so far has been an arbitrary positive constant. Its significance lies in determining the number of configurations present in the wavefunction iterates. The normalization of the transition probability, eqn. (3.9), depends linearly upon E_t. The number of configurations generated from the nth iteration, called the nth generation, into the (n + 1)st generation therefore is linear in E_t. Assuming that the asymptotic state of the iteration has already been achieved, i.e. that all other states but the ground-state have already been damped out in eqn. (3.8), it is obvious from eqn. (3.8) that the function will remain constant in size, that is the number of configurations within a generation will remain constant, if and only if E_t = E_0, the ground-state energy. If E_t is chosen above E_0 the number of configurations, i.e. the generation size, will increase with iteration number. If E_t is chosen below E_0 the generation size will decrease with iteration number. Since it must be avoided in the course of a calculation that a generation dies out, that is that no configurations are left to continue the calculation, it is recommended to choose the trial energy E_t just above the best estimate for E_0, the ground-state energy. The generation size will then grow slowly with generation number. More generally, the trial energy E_t may be used to control the generation size for computational convenience.

It is obvious that this fact can be used to estimate the true ground-state energy. From eqn. (3.8) we have in the asymptotic regime, that is after some sufficient number of iterations, that

E_t ∫ Φ^n(R) dR / ∫ Φ^{n+1}(R) dR    (3.10)

should become equal to the ground-state energy. Equation (3.10) is called the growth estimator for the ground-state energy.
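A deterministic matrix analogue (added for illustration; it is not a Monte-Carlo calculation and not taken from the article) makes the mechanics of eqns. (3.5)-(3.10) easy to check: discretize a one-dimensional Hamiltonian, invert it to obtain the Green's function, iterate, and watch the population grow for E_t > E_0 while the growth estimator converges to E_0. In the actual algorithm the matrix-vector product is replaced by the random walk described next.

```python
import numpy as np

# Discretized 1-D model: H = -d^2/dx^2 + x^2 (hbar^2/2m = 1), exact ground-state E0 = 1.
n, L = 400, 8.0
x = np.linspace(-L, L, n)
h = x[1] - x[0]
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / h**2
H = -lap + np.diag(x**2)          # V(x) = x^2 >= 0, all eigenvalues positive

G = np.linalg.inv(H)              # matrix version of the Green's function, eqn (3.4)
E_t = 1.2                         # trial energy chosen a little above E0
phi = np.ones(n)                  # arbitrary positive initial iterate

for it in range(1, 31):
    new = E_t * G @ phi                           # one iteration of eqn (3.5)
    growth = E_t * phi.sum() / new.sum()          # growth estimator, eqn (3.10)
    if it % 5 == 0:
        print(f"iteration {it:2d}   size ratio {new.sum()/phi.sum():.3f}   growth estimate {growth:.5f}")
    phi = new
```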


The above algorithm will generate a population of configurations representing the ground-state wavefunction [eqn. (3.5)] and an estimate for the ground-state energy [eqn. (3.10)]. If the algorithm can be implemented in a practical way the many-Boson ground-state problem can be considered solved. The major problem with the above algorithm seems to arise because the Green's function G(R, R') has been assumed known. Calculating this function from eqn. (3.4) seems to be an even more involved task than merely solving eqn. (3.3) for the ground-state. However, this is not true. In order to iterate eqn. (3.5) it is not necessary to know values of the Green's function for arbitrary pairs of configurations R and R'. Much less is required: one needs only to have an algorithm capable of sampling from the probability distribution in eqn. (3.9). It is possible to construct a second random-walk process which will sample this function.

For a given configuration R_0 a finite domain D(R_0) around R_0 may be chosen so that the potential function V(R) is bounded from above throughout the domain D(R_0) by a constant U(R_0). A Green's function G_U is defined by

(−Δ_1 + U) G_U(R_1, R_0) = δ(R_1 − R_0)    (3.11)

subject to the boundary condition that G_U(R_1, R_0) = 0 whenever R_1 is outside the domain D(R_0). The function G_U within this (small) domain D serves as a zero-order expression for the full Green's function. The complete Green's function is developed from G_U by means of a random walk process. For a suitable choice of the domains D(R_0) the function G_U may be obtained in closed form. The most commonly used choice is a cartesian product of N three-dimensional spheres for an N-body system.(33,36) In this case there exists an algorithm to sample the function G_U. Multiplying eqn. (3.4) by G_U(R_1, R_0), eqn. (3.11) by G(R, R_1) and integrating both equations over R_1 yields, upon subtraction of the two equations,

G(R, R_0) = G_U(R, R_0) + ∫_{∂D(R_0)} [−∂G_U(R_1, R_0)/∂n] G(R, R_1) dR_1 + ∫_{D(R_0)} [U − V(R_1)] G_U(R_1, R_0) G(R, R_1) dR_1.    (3.12)

In the second term on the r.h.s. the volume integral has been converted into an integral over the surface ∂D(R_0), introducing the normal derivative ∂/∂n, and use has been made of the boundary condition for G_U. Equation (3.12) is a linear, inhomogeneous integral equation for the complete Green's function G. Because of the various definitions and provisions made it may easily be seen that all terms in eqn. (3.12) are non-negative. In shorthand form, eqn. (3.12) may be written

G = G_U + ∫ K G.    (3.13)

The norm of the kernel K in this integral equation is smaller than one. It is, therefore, possible to solve this integral equation by iteration, i.e. by summing the von Neumann series. This mathematical process is equivalent to a random walk process with initial density G_U and transition probability K. The random walk will terminate since the norm of K is smaller than one, that is, at each transition within the random walk there is a finite probability for the random walk to terminate. Equation (3.12) therefore constitutes a Monte-Carlo algorithm to sample the Green's function G. This algorithm is employed to iterate eqn. (3.5). In this way, by executing these iterations for a sufficiently long time on a computer, the ground-state of a many-Boson system may be approximated arbitrarily well, i.e. be obtained to any desired accuracy.
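How a terminating random walk sums the von Neumann series can be checked on a small discrete analogue (a sketch of the general idea, not the actual G_U sampler of the GFMC method): for a non-negative kernel whose row sums are below one, the kernel entries themselves serve as transition probabilities, the leftover probability terminates the walk, and adding up the inhomogeneous term over all visited states gives an unbiased estimate of the solution.

```python
import numpy as np

rng = np.random.default_rng(3)

# Discrete analogue of eqn (3.13): solve x = b + K x with non-negative K, row sums < 1.
n = 6
K = rng.random((n, n))
K *= 0.8 / K.sum(axis=1, keepdims=True)          # every row sums to 0.8
b = rng.random(n)
x_exact = np.linalg.solve(np.eye(n) - K, b)

def sample_component(i, walks=50_000):
    """Estimate x_i: from state s move to j with probability K[s, j], terminate with
    the leftover probability 1 - sum_j K[s, j]; score b at every visited state."""
    total = 0.0
    for _ in range(walks):
        s = i
        while True:
            total += b[s]
            u, cum, nxt = rng.random(), 0.0, -1
            for j in range(n):
                cum += K[s, j]
                if u < cum:
                    nxt = j
                    break
            if nxt < 0:                           # walk terminated
                break
            s = nxt
    return total / walks

print("exact x_0  :", round(x_exact[0], 4))
print("sampled x_0:", round(sample_component(0), 4))
```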


As discussed in the preceding section, the practicality of a Monte-Carlo algorithm rests essentially on the magnitude of the variance of the estimator. In order to achieve statistically more accurate results with available computational resources the variance should be reduced as much as possible. In the present context this goal has been achieved to a very satisfactory degree by the introduction of importance sampling.(35) In principle the idea is rather simple. Assume a variational calculation has been performed for the system under study. One then knows in closed form a more or less accurate approximation to the ground-state wavefunction, the variational trial function at the optimal energy. The magnitude of this wavefunction can be used to guide the random walk process. Those regions where the importance function Ψ_I = Ψ_var is large are important, and the random walk should spend most of its time there. Those regions in 3N-dimensional configuration space where Ψ_I is small are unimportant, and the random walk should not spend much time there. The importance function is used to guide the random walk away from the unimportant regions, which actually make up the overwhelming part of configuration space, towards the relevant regions. This is accomplished by modifying eqn. (3.5) to read

X^{n+1}(R) = Φ^{n+1}(R) Ψ_I(R) = E_t ∫ [Ψ_I(R) G(R, R')/Ψ_I(R')] Φ^n(R') Ψ_I(R') dR'
           = E_t ∫ K(R, R') X^n(R') dR'.    (3.14)

In eqn. (3.14) one considers the product of the old iterate and the importance function as the function to be solved for. It is this product which is represented by a set of configurations. The random walk transforms this nth iterate X^n into the next generation X^{n+1} by employing the modified transition probability K(R, R'). The same modifications may be introduced into eqn. (3.12) so that K may be sampled as easily as G. Equation (3.14) opens up another way to compute an estimator for the ground-state energy, the variational estimator

E_0^{var} = ⟨Ψ_I|H|Φ^n⟩ / ⟨Ψ_I|Φ^n⟩ = ∫ [(HΨ_I(R))/Ψ_I(R)] X^n(R) dR / ∫ X^n(R) dR ≈ (1/M) Σ_i (HΨ_I(R_i))/Ψ_I(R_i),    (3.15)

with R_i, i = 1...M, sampled from p(R) = X^n(R) = Ψ_I(R)Φ^n(R). The iterate Φ^n tends towards the ground-state wavefunction. The first equality in eqn. (3.15) holds since H applied to the right yields the ground-state energy times the overlap, which cancels. The second equality holds since one may multiply and divide by Ψ_I and use the fact that X is the product of Φ and the importance function. The third equality expresses the integral via the Monte-Carlo approximation by M samples from X. It is seen clearly that the property of the variational energy expression, eqn. (2.5), is retained: if the importance function, i.e. the variational trial function, tends towards the Schrödinger eigenstate, the variance tends to zero. The introduction of importance sampling thus allows for a dramatic reduction in computer time for many problems, and provides the chance to treat larger problems than otherwise possible.

Another advantage of the method with importance sampling is an accurate and easy-to-calculate approximation to expectation values of operators. Without importance sampling


the computation of expectation values is rather cumbersome.(15,70) Without importance sampling the wavefunction itself is represented as a probability. In order to evaluate quantum-mechanical expectations, i.e. squares of the wavefunction, one must square the Monte-Carlo probabilities. This is possible in principle and has been used in actual calculations. With importance sampling the quantity represented as a Monte-Carlo probability is the product ΨΨ_I, where Ψ denotes the Schrödinger eigenstate. Another product easy to evaluate is the variational probability, Ψ_IΨ_I. One may then approximate the desired Schrödinger probability ΨΨ by

ΨΨ ≈ 2ΨΨ_I − Ψ_IΨ_I = 2Ψ[Ψ + Δ] − [Ψ + Δ]² = ΨΨ − ΔΔ    (3.16)

where the small difference Δ between the Schrödinger eigenstate and the importance function has been introduced, Ψ_I = Ψ + Δ. Equation (3.16) is an approximation with an error of second order in Δ. The accuracy of eqn. (3.16) has been tested several times(70,73) and found to be within statistical error bars.

Throughout the discussion use was made of the positivity of the ground-state wavefunction. Applications of the method therefore primarily are to many-Boson systems, and to liquid and solid 4He in particular.(70,34) If a good description of the He-He interaction is used to define the Hamiltonian,(9) agreement with experiment is achieved on essentially every experimental datum within the experimental and statistical error bars. In contrast to most nuclear physics problems, the Bose-Helium problem is perfectly well understood.

Another application of more interest in the nuclear context is the calculation of droplets of 4He atoms.(50,42,23) These droplets are the theoretically simplest possible approximation to atomic nuclei having a Hamiltonian of realistic form, kinetic energy plus pairwise additive interactions. By scaling in a suitable way from Fermis to Angstroms for the distances and from MeV to Kelvins for the energies it is seen that the Helium and nuclear problems are rather similar, owing to the nature of the He-He interaction. This interaction is strongly repulsive at small distances and weakly attractive at larger distances. The most significant qualitative difference is the Bose vs Fermi statistics, of course.

In order to illustrate the typical behaviour of a GFMC iteration I present in Fig. 1 the variational estimator for the ground-state energy together with its kinetic and potential energy parts, and the growth estimator. The initial values for generation zero correspond to the initial choice for the wavefunction in the zeroth generation, a variational Jastrow-type function. It is seen that after some initial drop of the energy, corresponding to the relaxation from the trial function to the true Schrödinger ground-state, there remain only statistical fluctuations around the long-time average. It is obvious that the first generations have to be discarded in forming averages. It may also be seen that consecutive estimates for the energy are not statistically independent. There are correlations over a number of generations corresponding to the number of generations in the initial fall-off. The anticorrelation between potential and kinetic energy is beautifully exhibited. This is to be expected since Schrödinger's equation tells us that the sum of both is constant while neither kinetic nor potential energy is individually.

In Fig. 2 I present the ground-state energy as a function of droplet size. The crosses represent GFMC solutions to the Schrödinger equation and circles variational estimates with two- and three-body correlated wavefunctions. It is seen that the variational method misses only a few % of the binding energy with these wavefunctions. The solid line is an empirical fit through the data points. In calculating the fit the result for the infinite system, liquid 4He, has not been taken into account.


Fig. 1. Energy estimators as function of generation number (= iteration number) for a droplet of N = 70 4He atoms, in arbitrary units. Generation zero corresponds to a variational estimate. E-GROWTH, growth estimator [eqn. (3.10)]. E-VAR, variational estimator [eqn. (3.15)]. T and V denote the kinetic and potential contributions to the variational estimator. Horizontal lines indicate the long-time average. The scales of all four graphs are arbitrary and different from each other.
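Because consecutive generations are correlated, as Fig. 1 shows, the naive error formula, eqn. (2.6), underestimates the true statistical uncertainty. A common remedy, sketched below with synthetic data (standard practice added here, not a procedure described in the article), is to discard the equilibration generations and then average the remaining estimates in blocks long enough that the block means are effectively independent.

```python
import numpy as np

def blocked_error(samples, n_discard, block_len):
    """Discard the first n_discard equilibration values, then estimate the error of
    the mean from block averages, which decorrelate once block_len exceeds the
    correlation length visible in plots like Fig. 1."""
    kept = np.asarray(samples)[n_discard:]
    n_blocks = len(kept) // block_len
    blocks = kept[:n_blocks * block_len].reshape(n_blocks, block_len).mean(axis=1)
    return blocks.mean(), blocks.std(ddof=1) / np.sqrt(n_blocks)

# synthetic correlated series standing in for per-generation energy estimates
rng = np.random.default_rng(4)
e = np.empty(3000)
e[0] = 0.5                                   # initial relaxation from the trial function
for i in range(1, e.size):
    e[i] = 0.95 * e[i - 1] + 0.05 * rng.normal()
mean, err = blocked_error(e, n_discard=500, block_len=100)
print(f"energy estimate {mean:.4f} +- {err:.4f}")
```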

From the calculation of droplets in the three to a hundred atoms range a verification of nuclear mass formula extrapolations may be deduced. It is found that extrapolating the mass formula empirically fitted to the results for droplets in the mass range of existing atomic nuclei predicts the saturation properties of the infinite fluid accurately to 1%. This same procedure is commonly employed to estimate the nuclear matter saturation energy and density from experimental information about finite nuclei.

In Fig. 3 the radial density profile is shown for a number of droplets. Because of the positivity of the wavefunction all these droplets are spherically symmetric. It is seen that for the extremely loosely bound light systems the density is rather low.


Fig. 2. Ground-state energy per atom for droplets of N 4He atoms as function of N. Crosses, GFMC results; circles, variational results. The solid line is a three-term liquid-drop formula fitted to the N = 20 to N = 112 mass range.

Fig. 3. Point atom density profile (vs. radius r in Å) for droplets of N 4He atoms. Solid lines, GFMC calculation; dashed lines, variational calculation. The arrow indicates the infinite liquid saturation density.

With increasing size of the droplet the binding and the central density increase until the saturation density is reached. This saturation density coincides with the one calculated independently for the infinite fluid. After about N ≈ 40 the central density remains constant at that value and the droplets grow outward. The comparison between the solid (GFMC) and dashed (variational) lines at N = 112 indicates the reliability of the variational wavefunction used. The existence of exact results opens up the possibility to test the accuracy of a commonly employed approximation scheme, the local density approximation.(53) It is found that rather accurate energy density functionals may be constructed which yield results in surprisingly good agreement with the GFMC calculation.

Some experimental information exists about clusters of Helium atoms.(62) Unfortunately it is necessary in the experiment to ionize the Helium clusters. Considering binding energies in the Kelvin range and ionization energies in the eV range (1 eV ≈ 13 000 K), it is obvious that ionizing the droplet is not a small perturbation. In fact the ionization transforms the weakly van der Waals bound system into a chemically bound system, the properties of which are quite different from the neutral system. The so-called "magic numbers" observed in these experiments are due to the different structure of the chemically bound droplets and cannot be present in neutral droplets.(42) Further experiments on Helium droplets(22) employ the same method and furthermore consider much larger droplets.

Another nuclear application of the Bose GFMC method arises since there exist four species of nucleons, protons and neutrons with spin projection up and down. If a Hamiltonian is assumed which does not act on these spin/isospin degrees of freedom,(40) a system of up to four nucleons may be treated as a system of distinguishable particles. In that case the coordinate-space wavefunction may be calculated in exactly the same way as for a many-Boson system. This mapping of three- and four-body nuclei onto Bose systems has been exploited several times.(29,73,74,72) The simple Hamiltonians to be used allow an easy comparison of various approximate and exact few-body methods. Typical results are presented in Table 1. It is seen that variational and coupled-cluster results agree very well with the GFMC result, verifying the accuracy of the former methods. Faddeev solutions for the three-body problem also may be obtained with good numerical accuracy. The Faddeev-Yakubovsky integral equation approach to the four-body problem is exact in principle. In the numerical execution, however, some approximations have been introduced: separable expansions of integral kernels and partial-wave expansions. It is seen that in the


Table 1. Ground-state energies for A = 3 and A = 4 nuclei with a pure pairwise additive Wigner force,(40) total energies in MeV.

System   Method               Energy (MeV)     Ref.
A = 3    GFMC                 -8.26 ± 0.01     74
         Variational          -8.22 ± 0.02     12
         Faddeev              -8.253           51
A = 4    GFMC                 -31.3 ± 0.2      73
         Variational          -31.19 ± 0.05    12
         Faddeev-Yakubovsky   -28              67
         Coupled-Cluster      -31.24           71
         Variational          -31.3            48

calculation of Tjon(67) it was not possible to exhaust these expansions sufficiently well. Similar care must be exercised within the Faddeev calculations for the three-body problem. A sufficient number of partial waves must be carried along in order to achieve numerically accurate results. The theoretical soundness of a method is certainly helpful but of no significance if the numerical expense is so great that expansions cannot be carried to the required accuracy. The theoretically less rigorous, i.e. approximate, variational method in many cases is superior to "exact" methods because of greater ease of numerical implementation.

One great advantage of the GFMC method is the ease with which forces more complicated than pairwise additive ones may be implemented. Three-body forces pose severe problems in Faddeev calculations and most other nuclear methods. The variational Monte-Carlo and GFMC methods are just slightly more involved with than without three-body forces. This is because the only requirement is to evaluate the numerical value of the potential for a given configuration of the system. In the variational method, in the case of strong three-body forces, the induced two- and three-body correlations must be taken into account, i.e. one must invent a sufficiently flexible trial wavefunction. It is shown in Table 2 that in the presence of a strong three-body force the accuracy of the variational method is somewhat reduced compared to its accuracy with two-body forces only. The same trend is seen in the density profiles.(74)

Table 2. Ground-state energies for A = 3 and A = 4 nuclei with inclusion of a strong three-body force.

System   Method        Energy (MeV)     E3 (MeV)   Ref.
A = 3    GFMC1         -9.05 ± 0.05     -0.79      74
         GFMC2         -9.00 ± 0.08     -0.74      74
         Variational   -8.79 ± 0.07     -0.57      12
A = 4    GFMC1         -37.09 ± 0.06    -5.8       74
         GFMC2         -37.2 ± 0.2      -5.9       74
         Variational   -36.2 ± 0.25     -5.0       12
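Since the Monte-Carlo methods only ever need the numerical value of the potential at a given configuration, adding a three-body term amounts to one more loop over particle triples, as in the sketch below; the pair and triplet forms shown are illustrative placeholders, not the forces used for Table 2.

```python
import numpy as np
from itertools import combinations

def potential(coords, v2, v3=None):
    """V(R) for a configuration: sum a pair term over all pairs and, if supplied,
    a three-body term over all triples."""
    total = 0.0
    for i, j in combinations(range(len(coords)), 2):
        total += v2(np.linalg.norm(coords[i] - coords[j]))
    if v3 is not None:
        for i, j, k in combinations(range(len(coords)), 3):
            total += v3(coords[i], coords[j], coords[k])
    return total

# illustrative forms only: short-range repulsion with intermediate-range attraction,
# plus a weak repulsive triplet term depending on the perimeter of each triangle
v2 = lambda r: 1000.0 * np.exp(-4.0 * r) - 50.0 * np.exp(-1.5 * r)
v3 = lambda a, b, c: 5.0 * np.exp(-(np.linalg.norm(a - b)
                                    + np.linalg.norm(b - c)
                                    + np.linalg.norm(a - c)))

coords = np.random.default_rng(5).normal(scale=1.5, size=(4, 3))
print("two-body only       :", potential(coords, v2))
print("with three-body term:", potential(coords, v2, v3))
```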

4. GREEN'S FUNCTION MONTE-CARLO FOR FERMIONS

Throughout the last section the quantum-mechanical wavefunction has been represented by a probability density for the purpose of constructing a Monte-Carlo algorithm, or by a factor in a probability density. A many-Fermion wavefunction necessarily changes sign. The problem of constructing a Monte-Carlo algorithm for this case arises because of the


impossibility to define negative probabilities. In more physical terms, it is the destructive interference capability of quantum-mechanical amplitudes which cannot be represented by probabilities. The problem has been given much thought but no completely satisfactory solution has been found to date. However, some useful results have nevertheless been obtained: the so-called fixnode approximation, and the method of transient estimators, which allows one to obtain results free of approximations at enormous expense in computational resources.

The fixnode approximation has first been used by Anderson(6) for problems in quantum chemistry. In order to apply this approximation, knowledge is required about some reasonable structure of the nodal hypersurface, i.e. those points in 3N-dimensional configuration space where the wavefunction changes sign. This knowledge in most cases has been embodied in a Fermi trial wavefunction. In some few-Fermion problems geometrical symmetries may be used.(34) As a first step one therefore has to perform some calculation to obtain the best estimate for the nodal hypersurface, usually a variational calculation of some kind, e.g. Hartree-Fock or variational Monte-Carlo. The structure used for the Fermi trial function usually is the one of eqn. (2.9). Antisymmetry, and thereby the nodal hypersurface, is implemented via a determinant. Equation (2.9) has been used first in a calculation for the electron gas.(16) A superposition of determinants is more flexible in specifying nodal surfaces.(47) For quantum-chemical purposes (and, with augmentation by a correlation factor as in eqn. (2.9), also for nuclear problems) a multi-configuration self-consistent-field (MCSCF) calculation would probably yield the best results.

The fixnode approximation consists of assuming the structure of the nodal surface from the first step and solving the Schrödinger equation exactly within that part of 3N-dimensional configuration space where the Fermi trial function is uniformly of one sign, positive say. The boundary condition is that the wavefunction should vanish wherever the Fermi trial function is of the opposite sign. This calculation may be performed by the algorithm used for Bosons described in the previous section. Once the ground-state subject to the artificial boundary conditions has been obtained, the complete wavefunction may be reconstructed by using antisymmetry, i.e. permutation of particles, to generate the negative part.

Since the nodal structure is prescribed by some approximate type of calculation, it is obvious that one will not obtain the true Schrödinger ground-state this way. It may be shown that the energy obtained is an upper bound.(47) For systems of interacting electrons like the electron gas(16) or atoms and molecules(47,55) it is found that the fixnode approximation is an extremely good upper bound, however. The resulting correlation energy for these systems is accurate to about 1%. For nuclear or liquid Helium problems it is expected that the nodal structure may be affected more severely due to the much stronger correlations in these systems. No fixnode calculations have been reported in print. A calculation by Ceperley(15) using the nodal surface from a Slater determinant of plane-wave single-particle states results in a ground-state energy for 3He at equilibrium density of −2.06 ± 0.05 K per particle, to be compared with the experimental −2.47 ± 0.01 K.(56) The discrepancy of 0.4 K can be attributed to the approximate choice of nodes, and to the finite simulation volume used to approximate the infinite system. The very same choice of nodes gives more than one order of magnitude better accuracy for the electron gas.(16)

The only modification of the algorithm for Bosons from the last section is the introduction of the artificial boundary condition due to the nodal surface. Within the context of the Green's function Monte-Carlo method there arises only a slight technical problem. The boundary condition is treated by choosing the domains D in eqn. (3.12) so that they never overlap with a nodal surface, i.e. they either are remote from all nodal surfaces or tangential to one


or more nodal surfaces. This is done in the very same way as treating a hard-core pair potential, where the hard core also is implemented as a boundary condition.(32) It is found that the computer time required is comparable to other methods of comparable accuracy where these exist. The final antisymmetrization to generate the negative part of the wavefunction need not be done explicitly. The energy, for example, may be evaluated from the mixed estimator

E^{mixed}(n) = [∫ dR {HΨ_FT(R)} Φ_FN(R)] / [∫ dR Ψ_FT(R) Φ_FN(R)]    (4.1)

where the index FT denotes the Fermi trial function and FN denotes the fixnode iterate from the GFMC algorithm. Since H is symmetric, the projection onto an antisymmetric function in eqn. (4.1) suffices to get rid of all symmetric components within the iterate. Nevertheless, the integrand in eqn. (4.1) is always non-negative. This is because H is positive due to the shift in energy scale and the iterate is zero where the trial function is negative. The evaluation of eqn. (4.1) therefore does not lead to any potentially dangerous cancellation of positive and negative terms.

The only nuclear application of the fixnode method I am aware of is a calculation of a degenerate triplet of states in 6Li.(2) The presentation of this work is cast in the language of path integrals, but there exists no essential difference to the formalism employed here. The problem of having to extrapolate finite time steps to zero time step for evaluating the path integrals can be considered a technical matter well under control. By the arguments given in Section 2 the fixnode approximation may be, and has been,(4) used to calculate excited states of symmetry different from the ground-state. The nodal surface suffices to prescribe certain types of symmetry. The GFMC iteration within the regions delineated by the nodal surface then finds the lowest energy state of that symmetry.

The problem of N Fermions in one dimension is solved exactly by the fixnode method because the nodal surface is known exactly in one dimension. The nodes occur precisely wherever two identical Fermions occupy the same position, x_i = x_j, because the only way to interchange particles in one dimension is to pass them through each other. In one dimension, therefore, the Fermion problem is totally equivalent to the Boson problem within the space of a specific ordering of the particles, x_1 < x_2 < ... < x_N, with the boundary condition of zero wavefunction whenever two coordinates coincide, i.e. on the boundary of the space. One-dimensional many-Fermion problems therefore may be considered solved. The Bose GFMC algorithm will always produce stable results free of approximations. The less rigorous but otherwise equivalent path-integral algorithm making use of finite time steps(26) has been used in actual calculations for model systems. Another calculation applied a similar algorithm to one- and two-channel scattering problems in one dimension.(1)

Unfortunately this beautiful property of one-dimensional space does not have any generalization in more than one dimension. Particles may easily be interchanged by moving around each other at a finite distance. There does not exist any a priori knowledge about the nodal surfaces in general. The fixnode calculation will produce upper bounds to ground-state energies but not genuine Schrödinger eigenvalues.

The method of transient estimation aims at obtaining true Schrödinger eigenvalues free of approximation for many-Fermion systems. At first sight the generalization of the Bose GFMC method does not seem to be difficult. As stressed above, the Green's function G(R,R'), eqn. (3.4), is symmetric with respect to pair interchange. Iteration of eqn. (3.5), Φ^{n+1} = E_t ∫ G Φ^n, then should preserve the symmetric or antisymmetric nature of the zeroth iterate. By taking some antisymmetric trial function as zeroth iterate one therefore should remain within the

120

J.G. Zabolitzky

space of antisymmetric functions, i.e. find the Fermion ground-state. Unfortunately this statement holds true only if the integrations were performed analytically. Any machine integration in general and Monte-Carlo integration in particular will admix some symmetric noise at each step of the iteration. From eqn. (3.8) it is easily seen that the reproduction rate for the Fermion ground-state component within the iterate is where EF is the Fermi ground-state energy. The reproduction rate for the symmetric Boson ground-state is where Es is the Bose ground-state energy, and Et is the trial energy to be chosen at will. Note that all energies are positive due to the shift V0 of the energy scale. The ratio of Fermi/Bose reproduction rates then is < 1 since the symmetric Bose state in all realistic problems is below the antisymmetric Fermi state costing much kinetic energy due to the nodes. In other words, at each iteration the Fermi component grows less than the symmetric Bose noise component. The Fermi component consequently vanishes geometrically (exponentially) with iteration number relative to the Bose component. After some number of iterations the desired Fermi ground-state will be completely lost within the overwhelming amount of Bose ground-state admixed. Another way to phrase the same problem is the formal representation of an antisymmetric state via positive probabilities. A straightforward implementation is given by taking the positive and negative parts of the Fermion wavefunction separately, ~FF = ~F+ - q'_ (4.2) W+ = 1/2[Wr + I~FI] t> 0 (4.3) W_ = - 1/2 [Wr - IWrl] >1 0. (4.4) This decomposition is not unique. An arbitrary positive function may be added to positive and negative parts without affecting the Fermion wavefunction, eqn. (4.2). The two positive components W+ and W_ may be interpreted as probability densities, and the minus sign in eqn. (4.2) may be carried along as a weight factor minus one with the configurations representing this component. This weight factor of minus one will then be used in all evaluations to weight the contribution from these configurations. In a random walk all configurations descending from one with negative weight will receive negative weight as well. This discussion immediately reveals that the iteration of the antisymmetric state (4.2) with the integral eqn. (3.5) actually iterates independently two otherwise unrelated populations, eqns. (4.3) and (4.4). This is because there is no way to perform a "Monte-Carlo subtraction". The positive and negative contributions will both remain in the calculation. It is only at the very end, when forming energy expectation values for example, that adding up numbers with positive and negative weights will affect the cancellation [eqn. (3.15)]. Both populations to be iterated, eqn. (4.3) and eqn. (4.4), have significant overlap with the Bose ground-state. Both these populations are uniformly non-negative and therefore cannot be orthogonal to the non-negatively chosen Bose ground-state. It is clear then that iteration of eqn. (3.5), the GFMC iteration, will develop two Bose ground-state wavefunctions independently from both these populations, one with negative weight and one with positive weight. It is only in the final energy evaluation, eqn. (3.15), that these will cancel out against each other and the final Fermi contribution will remain. Unfortunately it is obvious that eqn. (3.15) now will involve the cancellation of a large positive and a large negative contribution to produce a small final result. 
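To make the geometric loss of the Fermi signal concrete, the following minimal Python sketch iterates the two reproduction rates quoted above; the energies used are arbitrary illustrative values (all positive, as after the shift V_0), not numbers from any actual calculation.

```python
# Minimal illustration of the Fermi-signal decay discussed above.
# E_B, E_F and E_t below are assumed, illustrative values only.

E_B = 1.0   # Bose ground-state energy (assumed)
E_F = 1.5   # Fermi ground-state energy (assumed), E_F > E_B
E_t = 1.0   # trial energy, here chosen close to E_B

fermi, bose = 1.0, 1.0          # relative amplitudes of the two components
for n in range(1, 21):
    fermi *= E_t / E_F          # reproduction rate of the Fermi component
    bose  *= E_t / E_B          # reproduction rate of the Bose noise
    if n % 5 == 0:
        print(f"iteration {n:2d}: Fermi/Bose ratio = {fermi / bose:.3e}")

# The ratio decays as (E_B / E_F)**n, i.e. geometrically with iteration number.
```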
Specifically, the transient estimator for the Fermi ground-state energy is given by

E^trans(n) = [∫ dR {H Ψ_FT(R)} Ψ^(n)(R)] / [∫ dR Ψ_FT(R) Ψ^(n)(R)]
           = [Σ_i w_i {H Ψ_FT(R_i)}] / [Σ_i w_i Ψ_FT(R_i)],   R_i from Ψ^(n),   (4.5)

where Ψ^(n) is the current nth iterate of eqn. (3.5), represented by the ensemble of configurations R_i with weight factors w_i = ±1, and the index FT again denotes a suitable Fermi trial wavefunction. Equation (4.5) projects the current iterate onto the antisymmetric trial function. It is obvious therefore that no symmetric component will contribute to the resulting value of the integral, the transient energy estimate. The admixture of the symmetric Bose ground-state will not invalidate the value of the integral eqn. (4.5). However, the variance of the integral eqn. (4.5) involves one term containing the square of the integrand. The square of an antisymmetric function is symmetric. The Bose admixture therefore does contribute to the variance. The variance will grow geometrically in the course of the calculation.

It is crucial at this point to discuss the actual choice of trial energy, E_t. There are two variants to be discussed. One possible choice is E_t ≈ E_B, the Bose ground-state energy. In this case the size of the Bose (symmetric) population will stay approximately constant in the course of the iteration, while the Fermi (antisymmetric) population will vanish geometrically with the factor E_B/E_F < 1 (see above discussion). The total amount of work to be performed in order to execute one iteration remains approximately constant, since the total number of configurations remains approximately constant and is given by the Bose population. After some finite number of iterations the Fermi information will have been lost completely. At some intermediate time, however, there may exist a regime where the calculation has already converged to the Fermi ground-state sufficiently well and the Bose noise is still sufficiently small to give a bearable variance, i.e. it may happen that the Fermi component decreases sufficiently slowly so that it remains sufficiently dominant in the energy evaluation before the calculation is terminated anyway.

The other extreme choice is to set E_t ≈ E_F, the Fermi ground-state energy. In this case the size of the antisymmetric Fermi population will remain approximately constant and the size of the symmetric Bose population will grow geometrically with the factor E_F/E_B. The total amount of work for one iteration will grow geometrically with approximately the same factor [see discussion above eqn. (4.2)]. In this case the Fermi population will converge to the Fermi ground-state in some number of iterations. The Fermi population will then remain stable for an infinite number of further iterations. The Bose component will converge to the Bose ground-state within some number of iterations. However, the number of configurations representing this symmetric Bose component will grow geometrically. Because of the ensuing geometrical growth of computer time the calculations are significantly limited by the amount of computational resources available. However, there is a fortunate aspect in this type of calculation. Since the Bose component is represented by a geometrically growing number of configurations, the cancellation in eqn. (4.5) is computed with constant accuracy, i.e. constant variance. It is therefore possible to continue the calculation for arbitrarily long times without running into Monte-Carlo instabilities, i.e. increasing variance. The Fermi ground-state energy may be estimated by eqn. (4.5) with constant statistical error for any number of iterations. The error may be made arbitrarily small by choosing a sufficiently large sample size, i.e.
sufficiently many configurations R_i in the zeroth iterate, or by averaging over a sufficient number of iterations after convergence has been achieved, or some combination of both. The method would be quite satisfactory were it not for the enormous price to be paid in order to achieve this: geometrical growth in computer time with iteration number.

In practice any choice of trial energy between the two extremes discussed above could be useful. Unfortunately the first alternative does not seem to be realized in any physical system of interest: usually the variance deteriorates before convergence to the Fermi ground-state is achieved, if one does not allow the generation size to increase. Some growth in generation size therefore usually has been allowed, or even the full rate E_F/E_B was taken. The choice of trial energy E_t is a flexible means to adjust the growth rate.

One attractive feature of the method of transient estimation is that under certain circumstances the estimator will be an upper bound to the Fermi ground-state energy.(58) In order to have eqn. (4.5) as an upper bound at each step of the iteration, the zeroth iterate must be chosen to be the same Fermi trial wavefunction as is used in the transient estimator, eqn. (4.5). The estimator E^trans(n) will then be a monotonically decreasing function of n, with E^trans(0) equal to the variational upper bound corresponding to the chosen Fermi trial function, and E^trans(n) approaching the Fermi ground-state energy exponentially from above for large values of n, the iteration (= generation) number.

The first calculation reported employing the method of transient estimation was a treatment of the electron gas.(16) In this calculation a fixnode approximation was computed in a first step, employing the nodal surface from a Slater determinant of plane-wave states. In a second step the nodal restriction was removed, i.e. the transient estimation procedure was used. In that case the energy estimate is not an upper bound. However, because of the extremely good quality of the fixnode approximation for electronic systems, the release of the nodes led only to rather insignificant changes in the Fermi ground-state energy.

The second calculation considered liquid 3He approximated by a system of 38 3He atoms with periodic boundary conditions.(38) These calculations have been performed using the same Fermi trial function as zeroth iterate and in the transient energy estimator eqn. (4.5), i.e. they evaluate upper bounds to the Fermi ground-state energy. Unfortunately the calculations could not be carried to a sufficient number of iterations to establish convergence definitely. It does not seem probable that further iterations could lower the energy very much. The best (lowest) upper bound obtained for the ground-state energy at experimental equilibrium density is −2.20 ± 0.05 K per particle, to be compared with the experimental value of −2.47 ± 0.01 K. The most probable explanation for the difference is the smallness of the system used to approximate the infinite fluid. Within a system of 38 particles with periodic boundary conditions, long-range correlations present in the infinite system cannot be described. In the finite system the maximum size of a correlation is 1/2 the simulation cube, equivalent to about two atoms in the 38-atom calculation. Long-range correlations have been found to account for a significant amount of binding energy.(41) It appears extremely difficult to account for this type of correlation in a Monte-Carlo calculation. Monte-Carlo simulation always has to work with finite systems and periodic boundary conditions to approximate infinite systems. The amount of computational work grows with some power of the number of particles between 1 and 4 (3 to 4 for Fermion systems), depending on the type of system, type of interaction, algorithm or approximations used, etc. Monte-Carlo treatment of long-range correlations requires simulating large systems and extrapolating numerical results to infinite system size. In conjunction with the problematics of transient estimation, the computational requirements are so large that it is not clear at present whether the calculation is feasible or not.
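As a concrete illustration of how the signed-weight average of eqn. (4.5) is formed, the minimal Python sketch below assumes an ensemble of configurations with weights ±1 together with user-supplied routines for the Fermi trial function and for H acting on it; the ensemble and both routines are placeholders chosen only so that the snippet runs.

```python
import numpy as np

def transient_energy(configs, weights, psi_T, H_psi_T):
    """Signed-weight estimator of eqn. (4.5).

    configs : array (M, d) of configurations R_i drawn from the current iterate
    weights : array (M,) with entries +1 or -1
    psi_T   : callable, Fermi trial wavefunction Psi_FT(R)
    H_psi_T : callable, H acting on Psi_FT, evaluated at R
    """
    num = sum(w * H_psi_T(R) for R, w in zip(configs, weights))
    den = sum(w * psi_T(R) for R, w in zip(configs, weights))
    return num / den   # cancellation of the +/- weights happens only here

# Toy usage: one particle in 3D with a harmonic-oscillator ground state,
# for which H psi = (3/2) psi, so the estimator must return 1.5.
rng = np.random.default_rng(0)
configs = rng.normal(size=(1000, 3))
weights = rng.choice([+1.0, -1.0], size=1000, p=[0.6, 0.4])
psi = lambda R: np.exp(-0.5 * np.dot(R, R))
Hpsi = lambda R: 1.5 * psi(R)
print(transient_energy(configs, weights, psi, Hpsi))
```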
The Monte-Carlo treatment of finite systems, where "finite" is of the order of 10 to 20 particles, is not plagued by this problem. There exists a calculation for a model of a 16O nucleus and for a model of a 3He droplet.(25) Calculations have again been restricted to Hamiltonians with pairwise additive pure Wigner forces. The Fermi trial function used is of the Slater-Jastrow form, eqn. (2.9). The two-body correlation factors are obtained from a differential equation.(49) The single-particle wavefunctions are obtained as solutions to a one-body Woods-Saxon Hamiltonian. For 16O a product of four determinants for the four species of nucleons is used. Each determinant is made up from one s-function and the three p-functions. The parameters of the Woods-Saxon potential are used as variational parameters in an

Fig. 4. Transient energy estimator [eqn. (4.5)] for a model of the 16O nucleus with a pairwise additive pure Wigner force,(40) as a function of the iteration number (energy in MeV versus generation number). The solid curve is the best fit by a single exponential plus constant. The horizontal line with error bar gives the extrapolated energy for infinite generation number.

initial variational energy minimization. In Fig. 4 I present the transient energy estimator [eqn. (4.5)] for the 16O model. The Hamiltonian is the same as the one used in the few-body calculations.(40) This Hamiltonian does not saturate, i.e. it will collapse nuclear matter to infinite energy and infinite density. This asymptotic property is reflected in a huge overbinding of the 16O nucleus. Generation number zero corresponds again to a variational calculation with a Jastrow-Slater wavefunction [eqn. (2.9)]. The energy then decreases monotonically with generation number and for large generation numbers exponentially approaches the Schrödinger eigenvalue. The solid curve is an empirical fit of a single exponential through the data points given. Within statistical error there is no second exponential of different time constant discernible. Each individual energy calculation within Fig. 4 is an upper bound to the Schrödinger ground-state eigenvalue. The value of the ground-state energy extrapolated from the single-exponential fit is −1194 ± 20 MeV. This number again is an upper bound even if one claimed the existence of a second exponential with much slower fall-off than the single one obtained from the data points. However, since light, tightly bound systems will not have low-energy excited states corresponding to slowly damped-out admixtures within the GFMC iterate, the above number can be expected to be a faithful estimate of the Schrödinger eigenvalue. The magnitude of the error bar arises mostly from the uncertainty in the extrapolation, i.e. the variance of the fit parameters. It would definitely be desirable to continue the calculation for a few more generations. However, it has to be borne in mind that the computer time required grows exponentially with generation number, see Fig. 6 below. The calculations for Fig. 4 used about 70 hours on a Control Data Cyber 205 vector computer. While it would have been possible to continue for a few more generations, no significantly larger number of generations could have been calculated. Similarly, extensions to more complicated Hamiltonians, i.e. nucleon-nucleon interactions, would have rendered the calculation either not feasible or not meaningful. The point-nucleon density exhibits the central depression typically found in calculations carried out for more realistic models of 16O,(37) those being performed at the expense of only approximating the Schrödinger solution.
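The extrapolation just described, a single exponential plus constant fitted to E^trans(n), can be sketched as follows with scipy; the generation energies below are synthetic stand-ins centred on the quoted −1194 MeV purely for flavour, not the published 16O data.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(n, E_inf, a, tau):
    # single exponential approaching a constant from above
    return E_inf + a * np.exp(-n / tau)

# Synthetic "transient estimates" for illustration only (MeV-like numbers).
rng = np.random.default_rng(1)
n = np.arange(0, 50)
E = model(n, -1194.0, 150.0, 12.0) + rng.normal(0.0, 5.0, size=n.size)

popt, pcov = curve_fit(model, n, E, p0=(-1100.0, 100.0, 10.0))
E_inf, a, tau = popt
err = np.sqrt(np.diag(pcov))
print(f"extrapolated energy E(inf) = {E_inf:.1f} +/- {err[0]:.1f}")
```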


Droplets of 3He atoms should exhibit striking similarities to atomic nuclei. The interaction being of similar nature, as well as the statistics, only quantitative differences should occur. The two most significant of such differences are the single-particle level degeneracy of two for 3He atoms as opposed to four for nucleons, and the relative weakness of the He-He interaction. Both these factors make it more difficult to bind light droplets. Variational calculations(52) indicate that at least 40 atoms are required to form a bound system, i.e. no electrically neutral droplet can exist with less than 40 atoms. An experiment on charged clusters,(62) where clusters as light as four atoms have been found, does not contradict this finding(42) since ionizing one of the He atoms is an extremely dramatic modification of the droplet.

Unfortunately it appears to be too difficult to perform a transient-estimator calculation for a just barely bound droplet of 40 3He atoms. In order to verify the accuracy of the variational wavefunctions used, a model of these droplets was invented in order to make the calculation feasible. The model consists of giving the 3He atoms the mass of 4He atoms. No change of statistics is implied, i.e. we consider Fermi 4He atoms as a computational exercise. The kinetic energy thereby being reduced by a factor of 3/4, the eight-atom droplet becomes bound and can be studied by the method of transient estimation. The Fermi trial function used is the analog of the one used in the 16O calculation, a product of two-body factors and two determinants for the spin-up/-down atoms.

In Fig. 5 I present the transient energy estimator as a function of generation number. Similarly to the 16O calculation there is a monotonic decrease in energy due to the excited states being damped out. The first few generations starting from the variational estimate show a more rapid fall-off than the remainder. This behaviour is expected if some high-lying excited state(s) being damped out very fast contribute(s) initially. It is to be noted that because of the projection onto the variational Fermi trial wavefunction [eqns. (4.5) and (2.9)] only states of the same symmetry as the assumed trial function contribute. The eight-atom two-species Fermi system is a closed-shell system in nuclear terminology, having two atoms in an s-state and six atoms in a p-state with respect to the center-of-mass. A determinant composed of these single-particle states has total angular momentum L = 0. Only excited states of this symmetry contribute to the energy estimator. Of course, if the ground-state of the system were of different total angular momentum, the present calculation would obtain the lowest-energy L = 0 excited state.

Fig. 5. Transient energy estimator [eqn. (4.5)] for a model of an eight-atom 3He droplet (energy in K versus generation number). The solid curve is the best fit by a single exponential plus constant, where the first 20 generations have been discarded. The horizontal line with error bar gives the extrapolated energy for infinite generation number.
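The Slater-Jastrow form used as trial function above, a product of two-body correlation factors and one determinant per spin species, can be written down generically as in the sketch below; the pair factor and the single-particle orbitals are illustrative placeholders, not the functions actually entering eqn. (2.9).

```python
import numpy as np

def slater_jastrow(R_up, R_dn, orbitals, pair_f):
    """Generic Slater-Jastrow trial function for two spin species.

    R_up, R_dn : arrays of shape (N_up, 3) and (N_dn, 3)
    orbitals   : list of single-particle functions phi_a(r)
    pair_f     : two-body correlation factor f(r), scalar -> scalar
    """
    def determinant(R):
        # Slater matrix: rows = particles, columns = orbitals
        M = np.array([[orbitals[a](r) for a in range(len(R))] for r in R])
        return np.linalg.det(M)

    R_all = np.vstack([R_up, R_dn])
    jastrow = 1.0
    for i in range(len(R_all)):
        for j in range(i + 1, len(R_all)):
            jastrow *= pair_f(np.linalg.norm(R_all[i] - R_all[j]))
    return jastrow * determinant(R_up) * determinant(R_dn)

# Placeholder ingredients: one Gaussian s-type and three p-type orbitals,
# and a McMillan-like pair factor (assumed forms, for illustration only).
orbs = [lambda r: np.exp(-0.5 * np.dot(r, r)),
        lambda r: r[0] * np.exp(-0.5 * np.dot(r, r)),
        lambda r: r[1] * np.exp(-0.5 * np.dot(r, r)),
        lambda r: r[2] * np.exp(-0.5 * np.dot(r, r))]
f = lambda r: np.exp(-(1.0 / max(r, 1e-6)) ** 5)
rng = np.random.default_rng(2)
print(slater_jastrow(rng.normal(size=(4, 3)), rng.normal(size=(4, 3)), orbs, f))
```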

As in the 16O case, every single data point in Fig. 5 represents an upper bound to the ground-state energy. The extrapolated value corresponding to infinite generation number is −1.64 ± 0.06 K total, in comparison with the currently best preliminary variational estimate of −1.39 ± 0.01 K.(52) At first glance this discrepancy seems to indicate rather poor quality of the variational wavefunction. However, the small binding arises from a cancellation of about 13 K potential against about 12 K kinetic energy. In terms of accuracy in either of these, the error in the variational energy is about the same as in the many-Boson droplet calculation discussed above, of the order of 3%.

In Fig. 6 the number of configurations used to represent the many-body wavefunction is given. The exponential increase with generation number, implying a corresponding increase in computational effort, is obvious. The almost complete cancellation between positive and negative configurations necessitating this increase may be seen by subtracting twice the bottom curve from the top curve. In Fig. 7 the point atom density is given. This density is calculated from the last 42 generations, i.e. the first 60 generations are discarded since they cannot be assumed to represent the ground-state well enough. Fortunately, however, the calculated point atom density depends only very weakly on the number of generations discarded. It appears that the admixed component of excited states does not contribute significantly to this density. The dashed line gives the point atom density from the preliminary variational calculation.(52) For larger distances from the center-of-mass no statistically significant difference exists. At small distances, however, there seems to exist room for improvement in the variational wavefunction. The function currently in use is a Slater-Jastrow function [eqn. (2.9)] with inclusion of backflow.(21) Backflow is of significant importance to achieve the acceptable agreement with the transient-estimator GFMC calculation. The Fermi trial function used as generation number zero, as well as the projecting function in eqn. (4.5), is plain Slater-Jastrow [eqn. (2.9)].

As in the 16O calculation one would have liked to perform about twice as many iterations as it was possible to do. The data in Figs 5 to 7 required about 80 hours on the Control Data Cyber 205 vector computer.

Fig. 6. Number of configurations used in the model 3He droplet calculation, as a function of generation number. Top curve, total number of configurations with either positive or negative weight. Bottom curve, configurations with negative weight. The net number of configurations describing the many-Fermion wavefunction is given by subtracting two times the bottom curve from the top curve. The amount of computational work performed per iteration is proportional to the top curve, the total number of configurations.

Fig. 7. Point atom density for the 3He model droplet (density in Å⁻³ versus distance from the center-of-mass in Å). Error bars, transient estimator GFMC calculation; dashed line, preliminary variational result.(52)
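A point density like the one in Fig. 7 can be accumulated from the walker ensemble as a signed-weight radial histogram. The sketch below assumes hypothetical arrays of configurations and ±1 weights, bins the distances from the center-of-mass, and normalizes by the spherical shell volumes; all numbers are illustrative.

```python
import numpy as np

def radial_density(configs, weights, r_max=12.0, nbins=30):
    """Signed-weight estimate of the point density rho(r).

    configs : array (M, N, 3) of particle coordinates for M configurations
    weights : array (M,) of +1/-1 weights
    """
    edges = np.linspace(0.0, r_max, nbins + 1)
    hist = np.zeros(nbins)
    for R, w in zip(configs, weights):
        r = np.linalg.norm(R - R.mean(axis=0), axis=1)   # distances from c.o.m.
        counts, _ = np.histogram(r, bins=edges)
        hist += w * counts
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    norm = weights.sum() if weights.sum() != 0 else 1.0
    return 0.5 * (edges[1:] + edges[:-1]), hist / (shell_vol * norm)

# Toy ensemble of 8-particle configurations with made-up weights:
rng = np.random.default_rng(3)
configs = rng.normal(scale=3.0, size=(500, 8, 3))
weights = rng.choice([+1.0, -1.0], size=500, p=[0.7, 0.3])
r, rho = radial_density(configs, weights)
print(rho[:5])
```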

It is clear from Fig. 6 that doubling the number of iterations would be rather difficult. Similarly, a treatment of heavier systems faces quantitative problems. While further generations of computing machinery may alleviate this problem, it cannot be denied that from an aesthetic point of view the transient-estimator technique is not appealing. It is seen from Fig. 6 that most of the computer time is spent in calculating very accurately two populations for the Bose ground-state, one with positive weight and one with negative weight, just in order to cancel them against each other to good precision so that the remaining difference, the Fermi ground-state, can be extracted meaningfully. As far as existing calculations are concerned, this seems to be the price to be paid for obtaining approximation-free solutions for many-Fermion ground-states.

In order to apply the method of transient estimation to more realistic nuclear problems involving at least spin-dependence and tensor forces, it must be noted that the wavefunction and the tensor operator in the same representation cannot both be real. The problem of having to cancel positive and negative configurations therefore is extended to having to cancel positive imaginary and negative imaginary contributions as well. Furthermore, the Fermi trial function should now contain tensor correlations as well. Optimally, the Fermi trial function should be of the type used in the few-body calculations.(12) In principle this type of calculation is quite tedious to implement but nevertheless possible. However, in practice the computational work to be carried out would be quite unacceptable. Because of the necessity to treat the spin/isospin degrees of freedom explicitly, and the complicated structure of the Fermi trial function, the computational effort for this type of calculation grows essentially as the factorial of the number of particles. The reason for this enormous growth rate is the requirement to symmetrize the product of tensor operators occurring in the Fermi trial function [eqn. (2.9)] if each pair operator f may contain a tensor operator S_ij. Because of this feature, which is already present in the purely variational calculation,(12) even this simpler type of calculation cannot be carried up to the mass-16 range. Combining all these quantitative problems with the geometric growth in computer time with generation number for the method of transient estimation, it seems quite hopeless to attempt a meaningful calculation on a more realistic model of the 16O nucleus. This statement will not be invalidated by one or two new generations of computing devices, since there is a discrepancy of many orders of magnitude between requirements and current resources. In order to treat systems of this complexity new methods have to be invented.


5. OTHER APPROACHES

The two methods discussed in the previous section, the fixnode approximation and the method of transient estimation, are capable of producing relevant results and have been used to do so. However, it cannot be said that the many-Fermion ground-state problem is solved as satisfactorily as the many-Boson one is by the GFMC algorithm. The exponential growth of computer time with iteration number, as opposed to linear growth for Bosons, imposes severe restrictions on the types of systems that can be treated. A large number of different ideas has been conceived in the course of the past years to find a more appealing algorithm. The more promising of these ideas have been tried out in actual implementations. Some of the results have been published, many have not because of lack of success. None have provided a general and satisfactory solution to the many-Fermion problem. This section is devoted to a discussion of some of the attempts, and possible future routes.

The central problem involved in a Fermion calculation should be clear by now from the discussion in the preceding section: it is required to find some way to do a Monte-Carlo subtraction, i.e. to cancel against each other configurations of opposite weight. In a numerical calculation on a mesh this problem does not exist at all: function values on mesh points are real numbers and may easily be subtracted. In a Monte-Carlo scheme, however, a positive function is represented by some ensemble of configurations R_i with positive weight +1. A negative function is represented by some different ensemble of configurations, R_k, with negative weight −1. It will of course never happen that one of the R_i coincides with one of the R_k. If that were the case, both configurations would contribute to any integral with equal magnitude and opposite sign, and it would be correct to cancel one configuration against the other in the very same way as cancellations are effected on a mesh. It is at the very heart of the Monte-Carlo method in general that functions of many variables are represented by extremely sparse populations of configurations R_i. The extreme sparsity is obvious from the example in the Introduction: any mesh in 300-dimensional space would use more than 10^90 mesh points, whereas a useful Monte-Carlo representation might be as few as 10,000 configurations. Spreading these few configurations representing one function over all space, and spreading the same number of configurations representing another function over all space, it is intuitively clear that it will just never occur that one of the R_i is even remotely close to one of the R_k. If it were required that configurations be dense in space in order to effect some cancellation between them, all the advantages of Monte-Carlo would be lost and one could equally well, or equally impossibly, use some space-filling mesh.

In order to construct a Fermion Monte-Carlo algorithm where the cost grows linearly with iteration number, some way has to be found to cancel remote configurations against each other. This cancellation cannot possibly be a simple one-to-one cancellation, since remote configurations will in general contribute to any integral with radically different contributions. One therefore needs some integral relationship connecting the two populations to be cancelled,

f(R) = ∫ K(R, R') [f_+(R') − f_−(R')] dR'.   (5.1)

Here the function f(R) is represented by two sets of configurations, those with weight +1 denoted by f_+ and those with weight −1 denoted by f_−. It is crucially important that the integral not be split into the sum of the two independent integrals over the two populations + and −. More precisely, given some sets of configurations R_i and R_k it is required to sample from the probability distribution

p(R) = Σ_i K(R, R_i) − Σ_k K(R, R_k)   (5.2)


where the subtraction has to be done analytically within the sampling process and not by sampling the two terms independently and then attaching weight factors. The meaning of eqn. (5.2) where the resultant p(R) < 0 is to sample from |p(R)| and attach weight factors of −1 to those configurations for which p(R_i) < 0.

The problem incurred by eqn. (5.2) is represented graphically in Fig. 8. Assume the closest pair of configurations present within the ensembles are R_i and R_k. In order to have an analytic cancellation within eqn. (5.2), the range of the integral kernel K must be larger than the distance between the two configurations (Fig. 8a). If the kernel K is zero or negligibly small at this distance (Fig. 8b), no cancellation will take place but two independent sampling procedures will in effect be executed. It is only within the cross-hatched area in Fig. 8a, where the two kernels centered at the pair of configurations overlap, that cancellation is effective. The density of configurations in high-dimensional spaces cannot be enlarged arbitrarily. The problem therefore is to construct sufficiently long-ranged kernels K. I will discuss three different approaches to this problem. In addition there is the problem of pairing up the configurations of opposite sign, i.e. of deciding which one is best cancelled against which one of opposite sign. This problem of minimizing pair distances by assigning an optimal pairing is involved in itself but will not be discussed here in detail.

The first approach(7,8) is a generalization of the GFMC method as described above. In this approach the Green's function G, eqn. (3.12), is itself used in place of the kernel K, eqns. (5.1) and (5.2). This is quite natural since eqns. (5.1) and (5.2) are of the same form as the GFMC iteration prescription itself, eqn. (3.5). The calculation required to obtain a new generation from the previous one then consists of first pairing up positive and negative configurations into pairs as close as possible, a minimization problem. For each pair it is then required to sample from the difference of Green's functions [eqn. (5.2)]. One must not sample each Green's function individually. Unfortunately the Green's function is not given in closed form; in that case an analytic subtraction would be possible. Since the Green's function is itself given by a random walk process only [eqn. (3.12)], no subtraction is possible for the full Green's function. However, the iterations (3.12) may be performed for both initial configurations simultaneously, and the zeroth-order Green's function G_U may be used to achieve some cancellation at each step. In effect, it is this zeroth-order Green's function which effects the cancellation, eqn. (5.1). For this algorithm to work it is required that configurations are dense with respect to the range of the function G_U, i.e. the situation of Fig. 8a must occur for the dominant number of pairs. Therefore, the function G_U must be long-ranged and also the configurations must be

Fig. 8. Cancellation of configurations with opposite weight, R_i and R_k. (a) Density of configurations or range of integral kernel sufficiently large to produce cancellation within the cross-hatched area. Left-hatched and right-hatched areas indicate the ranges of the kernels for the two configurations to be cancelled. (b) Insufficient density or range of kernels. No overlap occurs and cancellation is not effective.


relatively dense in space. The latter requirement limits applications of the algorithm to low-dimensional few-body systems. The only application reported(8) treats several three-body systems. The former requirement implies that G_U not be limited to finite-sized domains as it was in Sections 3 and 4. The choice of finite-sized domains where G_U is non-zero was motivated by eqn. (3.12), where it is necessary to keep the difference U − V(R) > 0 in order to hold up the transition-probability interpretation of the integral kernel. If G_U should be long-ranged, i.e. be non-zero everywhere, one must require U > V(R) for all R, i.e. V(R) must be bounded from above. This requirement excludes Yukawa-type or Coulomb forces. For the algorithm to be practical, the constant U must not be too big. The algorithm is therefore limited to rather weak forces without very strong repulsion at small particle distances. Consequently, attention was focussed on model three-body problems in the existing calculation. More realistic nuclear forces, even the simple Wigner force used in previous sections, pose a more difficult problem for the cancellation scheme.

The problems studied numerically behave rather well under the algorithm discussed so far. The size of the generations remains perfectly stable, i.e. the disease of the transient estimator method is totally absent. The algorithm does constitute a stable Fermion method, where the computational expense is linear in the number of generations calculated, just as in the Bose case. The goal of devising a Monte-Carlo subtraction has been achieved under the special provisions made above. Unfortunately most problems of interest involve harder interactions and/or higher dimensionality. The range of applicability of this algorithm is rather limited because of the necessity to find pairs within the range of the function G_U.

Interactions of interest in nuclear physics usually involve a strong short-range repulsion, mostly of Yukawa type. A potential of this type has rather a long "range" in momentum space, where a short-range repulsion reads

v(q) = c/(q² + m²)   (5.3)

where q is the momentum transfer and m the mass of the meson exchanged. Because of the short-range repulsion m is large, and v(q) therefore decays slowly at large values of q. If an integral equation could be constructed from the many-body Schrödinger equation with v(q) as kernel, the pairing of configurations would be rather easy even for relatively remote configurations, and cancellation would be effective between configurations of opposite sign. For the case of a two-body problem it is in fact possible to rewrite the Schrödinger equation in momentum space with v itself as integral kernel,(2)

(T + A − E) Ψ(p) = ∫ dp' [A − v(|p − p'|)] Ψ(p')   (5.4)

where T = p²/M is the kinetic energy and A a suitably chosen positive constant. Equation (5.4) may immediately be transformed into a Monte-Carlo algorithm, since the parenthesis on the left-hand side is just a number and may be divided through. The constant A is easily adjusted so that iteration of eqn. (5.4) has the ground-state wavefunction as its fixed point if the energy E is chosen as the ground-state energy. The latter may be found by performing iterations with E above and below some estimated ground-state energy, leading to increasing or decreasing populations, respectively. Since the integral kernel is known analytically and is of long range in the momentum integration variable, eqn. (5.4) is ideally suited for a cancellation method to treat the interaction of positive and negative contributions. The solution Ψ changes sign even for a symmetric (Bosonic) state, and already this problem requires a scheme capable of cancelling positive and negative contributions against each other. Because of the non-uniform sign of the kernel of eqn. (5.4) it is clear that both signs will occur in the course of the iteration.
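The energy search just described, adjusting E until the population neither grows nor shrinks, can be organized as a bisection on the per-iteration growth factor. In the sketch below the application of the kernel is modelled by a fixed positive matrix divided row-wise by (T + A − E); this matrix, the grid and all numbers are stand-ins chosen only to exhibit the procedure, not the actual nuclear kernel.

```python
import numpy as np

rng = np.random.default_rng(4)
npts = 60
T = np.linspace(0.1, 10.0, npts) ** 2          # toy "kinetic energies" T(p)
A = 12.0                                        # positive shift constant (assumed)
K = np.abs(rng.normal(size=(npts, npts))) * 0.05   # toy positive kernel matrix

def growth_factor(E, n_iter=200):
    """Per-iteration population growth for trial energy E (power iteration)."""
    M = K / (T + A - E)[:, None]
    psi = np.ones(npts)
    g = 1.0
    for _ in range(n_iter):
        psi = M @ psi
        g = np.linalg.norm(psi)
        psi /= g
    return g

# Bisection: populations shrink (g < 1) for E too low, grow (g > 1) for E too high.
lo, hi = -5.0, min(T) + A - 1e-3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if growth_factor(mid) < 1.0 else (lo, mid)
print("estimated eigen-energy of the toy kernel:", 0.5 * (lo + hi))
```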


The Fermion problem appears in the two-body case as the task of calculating an antisymmetric state, i.e. a p-state. Since the kernel is known analytically it may be antisymmetrized, and the antisymmetrized kernel can be sampled by a rejection technique. The ensuing iteration is perfectly stable with a rather small number of configurations (200) and converges to the desired lowest-energy p-state. Of course this state can also be obtained to arbitrary accuracy by numerical methods. Equation (5.4) constitutes another form of the Schrödinger equation for which a way to perform Monte-Carlo subtraction has been achieved. Another advantage is the ease with which spin-dependent forces are introduced. Formally, the use of antisymmetrized kernels is equivalent to working in the space of antisymmetric wavefunctions, i.e. determinants.

Again it is problematic to continue this approach to heavier systems. In a three-body system it is well known that the integral kernel is not connected, i.e. it consists of a sum of three terms each affecting only a pair of particles. This kernel therefore connects two configurations -- in the sense of Fig. 8 -- only if they happen to coincide in one of the particle momenta, which is a negligible fraction of phase space. By formally iterating the kernel of the integral equation once, one may obtain a connected kernel which is capable of cancelling arbitrary configurations, eqn. (5.2). In general, for an N-body system, the appropriate generalization of the integral equation (5.4) must be iterated (N − 2) times to generate a connected kernel. Since the kernel (the interaction potential) is known analytically this may be done. The ensuing many-dimensional integral kernel must then be sampled, i.e. one must devise some algorithm capable of producing configurations according to the prescribed many-dimensional probability density. For Yukawa-type interactions it has not been possible to devise an efficient algorithm to do so. Already for the three-body problem the sampling efficiency is so small that computer times appear unreasonably large. This observation may point to perhaps the most remarkable though greatly unrecognized feature of the GFMC algorithm of Section 3: there exists a highly efficient algorithm to sample the many-dimensional transition probabilities in eqn. (3.12).(33,36) (See Appendix 2.) Another problem with the momentum-space scheme arises since it is not quite clear how to employ importance sampling in order to reduce the variance. While it is rather well known how a reasonable trial function should look in coordinate space [see eqns. (2.8) and (2.9)], no such simple form is known in momentum space. It seems doubtful, therefore, that this scheme may be useful for a larger number of interacting bodies.

The third and probably most promising approach to the many-Fermion problem is the mirror potential method.(14) In this method attention is not focussed on cancelling positive and negative contributions against each other as it was in the previous two. Instead one tries to modify the fixnode approximation in a way which will allow non-approximate calculations to be performed. The fixnode calculation puts the wavefunction equal to zero in those regions of 3N-dimensional configuration space where some Fermi trial function is negative. This is equivalent to employing an infinitely repulsive potential in this region. If the region could be chosen properly, i.e. could be chosen to be the region where the Schrödinger solution is negative and not some trial function, the method would be exact. The mirror potential method attempts to replace the Schrödinger solution by the current iterate, and the infinite repulsion by some suitably large but finite repulsion. If we write separate Schrödinger equations for the positive and the negative populations,

[H(R) + C(R) Ψ_−(R)] Ψ_+(R) = E Ψ_+(R)   (5.5)

[H(R) + C(R) Ψ_+(R)] Ψ_−(R) = E Ψ_−(R)   (5.6)

it is seen that by subtracting the two equations from each other the mirror potential term cancels out and the difference of the two populations fulfills the desired original equation,

H(R) {Ψ_+(R) − Ψ_−(R)} = E {Ψ_+(R) − Ψ_−(R)}.   (5.7)

The function C(R) should be chosen positive, as are the populations Ψ_+ and Ψ_−. It is seen that the mirror potential term within eqn. (5.5) acts as a repulsive (many-body) potential which drives positive configurations away from those regions of 3N-dimensional configuration space where Ψ_− is large, and vice versa in eqn. (5.6). If C(R) is chosen sufficiently large, these repulsive potentials will suffice to suppress most configurations of negative sign in positive regions and vice versa. The necessity to perform a Monte-Carlo subtraction thereby never arises; the two populations stay out of each other's way. On the other hand, close to the nodes the functions will be small, so that the repulsive mirror potential is small. Consequently, in the vicinity of the nodes a mixture of positive and negative configurations will form, capable of correcting the initial nodes used to begin the calculation, i.e. the fixnode approximation. In this sense the mirror potential method constitutes an improvement upon the fixnode approximation in that it allows the nodes to move without introducing the instabilities inherent in the method of transient estimation.

Actual implementation is hindered because the mirror potential cannot readily be computed for arbitrary R. This is because the solutions Ψ used in the potential are given only as ensembles of configurations in the usual Monte-Carlo way. In essence, the mirror potential of eqns (5.5) and (5.6), if taken seriously, is just a sum of delta functions centered at the locations of the configurations representing the current iterates for Ψ_+ and Ψ_−. The problem encountered previously seems to be present again: it will just never occur that a positive configuration is close to a negative one, so that in effect the mirror potential is equivalent to zero. In other words, it is required to "spread out" the configurations in order to produce a continuous mirror potential within the desired parts of configuration space. Two methods have been proposed(14) to do so. In the first method, use is again made of the expression for the wavefunction in terms of the Green's function, Ψ = ∫ G Ψ. If G can be calculated pointwise, a representation by an ensemble of configurations may be used to evaluate the integral and obtain function values at the desired points for the mirror potential. The more promising approach is the use of non-local mirror potentials,

H Ψ_+(R) + ∫ S(R, R', R'') Ψ_−(R') Ψ_+(R'') dR' dR'' = E Ψ_+(R)   (5.8)

H Ψ_−(R) + ∫ S(R, R', R'') Ψ_+(R') Ψ_−(R'') dR' dR'' = E Ψ_−(R)   (5.9)

where S(R, R', R'') = S(R, R'', R') > 0 but is otherwise arbitrary. With this provision on S, the mirror potential terms cancel out as before. Since S may be freely chosen, the mirror potential may be computed easily for arbitrary configurations R_i and R_k representing the positive and negative populations,

V_mirror(R, R_i) = Σ_k S(R, R_k, R_i)   (5.10)

for the potential in eqn. (5.8), and equivalently for eqn. (5.9), where the sum runs over the R_i. Equation (5.10) has the advantage that, by means of the function S, configurations may be spread out in a controlled way over a prescribed range, namely the range of S. In this manner the repulsive action from some configuration can be made felt by the population of opposite sign over some distance controlled at will.
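A minimal sketch of evaluating the non-local mirror potential of eqn. (5.10) is given below. For simplicity the spreading function S is taken to depend only on the distance between R and an opposite-sign configuration, and to be Gaussian; both the form and the range of S are illustrative assumptions, not choices taken from ref. (14).

```python
import numpy as np

def mirror_potential(R, opposite_configs, width=1.0, strength=5.0):
    """Eqn. (5.10) with an assumed Gaussian spreading function S.

    R                : flattened 3N-dimensional configuration, shape (3N,)
    opposite_configs : array (M, 3N) of configurations of the opposite-sign
                       population (the R_k in eqn. (5.10))
    width, strength  : illustrative range and magnitude of S
    """
    d2 = np.sum((opposite_configs - R) ** 2, axis=1)
    return strength * np.exp(-d2 / (2.0 * width ** 2)).sum()

# Toy populations: positive walkers feel repulsion from negative walkers.
rng = np.random.default_rng(5)
plus = rng.normal(size=(200, 9))        # e.g. 3 particles -> 9 coordinates
minus = rng.normal(loc=0.5, size=(150, 9))
V = np.array([mirror_potential(R, minus) for R in plus])
print("mean mirror potential felt by the + population:", V.mean())
```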


The mirror potential method has been tested only on a small number of low-dimensional problems.(14) Experience with other attempts teaches that many methods can be brought to work in low-dimensional spaces and nevertheless are not very useful in many-dimensional spaces, i.e. for the many-body problem as opposed to the few-body problem. At present no fault with the mirror potential method has been pointed out. It may well be that further experiments with this method will finally lead to a general and stable approximation-free many-Fermion scheme.

So far all discussion has concerned attempts to obtain lowest-energy states of N-body systems with given symmetry. Quantum Monte-Carlo methods have been used to study two more general problems: the calculation of properties of quantum N-body systems at finite temperature, and meson-nucleon field theories. Both will be discussed briefly.

A generalization of the GFMC method(69) has been found capable of obtaining solutions to the finite-temperature Bloch equation for the quantum-statistical density matrix. More recently, the path-integral formulation has been used to the same end.(27,54) Much as in the zero-temperature (ground-state) case, attention has mainly been focussed upon Boltzmann or Bose particles. The antisymmetry required for Fermions poses quite the same kind of problem as it does for the ground-state.

It has already been stated in the Introduction that nuclei cannot rigorously be considered as ensembles of inert nucleons. Much work has been and still is devoted to the study of subnucleonic degrees of freedom in nuclei. Monte-Carlo solutions exist for one special model problem, N nucleons interacting via a scalar Bose field in one dimension(61,63,64) and in three dimensions.(65,66) This model is similar in spirit to the models utilized by Walecka and coworkers.(68,69) Nucleons are assumed to act as sources and sinks for a scalar neutral Bose field. If not more than four nucleons are assumed to be present, the problem may be totally mapped onto a Bose problem, as was the case for the potential models of Section 3. It is found that only small corrections arise in an exact treatment as compared to an approximate treatment where nucleon interaction potentials are computed from the static limit and the purely nucleonic degrees of freedom are treated exactly. In this special model, the corrections arising due to subnucleonic degrees of freedom are small.

6. SUMMARY AND OUTLOOK

The many-body Schrödinger equation is susceptible to solution by stochastic methods free of approximation. There exist at least three different formulations of the method. One formulation is in terms of path integrals.(2) The second, very closely related, formulation is in terms of a direct analogy between Schrödinger's equation and the diffusion equation in 3N-dimensional space for an N-body problem.(3-6) Both these formulations require the introduction of a finite time step. In order to have a method free of approximations, one must extrapolate from the finite time steps used in the stochastic simulation to zero time step. The third formulation avoids finite time steps from the outset.(35,15) This is the Green's function Monte-Carlo (GFMC) method used in the present paper. Finite time steps can be avoided in this method because the time variable is integrated over by another Monte-Carlo process. On the surface the equivalence of the different formulations may be obscure, but in essence all three formulations exploit the same kind of analogy with the classical diffusion problem, and differ only in technical details and ways of presentation.

The ground-state of many-Boson systems may be chosen real and positive. This quantum-mechanical amplitude may therefore be interpreted as a probability density for the sake of constructing an analog stochastic process. As a consequence, stochastic simulation for many-


Boson systems can be carried out in a completely satisfactory, stable way. For a number of physical systems accurate results have been obtained (see Section 3). Where experiments are available, agreement is almost perfect and remaining discrepancies can be attributed to insufficient knowledge about the Hamiltonian operator. Treatment of many-body forces is formally trivial and computationally not very cumbersome. Several simulations for Hamiltonians which include three-body forces have been reported (Section 3).

Many-Fermion systems pose more severe difficulties. There are a few fortunate cases where many-Fermion problems can be mapped onto equivalent many-Boson problems. Notably these cases include some few-body problems with simple Hamiltonians (Section 3) and all problems in one dimension (Section 4). For those cases the proven GFMC algorithm for Bosons may be used, and all remarks of the preceding paragraph carry over. A number of calculations have been performed for these classes of problems. One important result is the verification of the accuracy of other many-Fermion methods. It could be shown that variational and Coupled-Cluster methods are of very satisfying accuracy (see Tables 1 and 2). These more approximate methods may now confidently be used for more complicated problems where solutions free of approximation are not feasible to date.

The many-Fermion problem in general--as opposed to the many-Boson problem--has not yet received a completely satisfactory solution. There exists a small number of calculations (see Section 4) with the rigorous method of transient estimators. This method would be a satisfactory and general solution to the many-Fermion problem were it not for the exorbitant amount of computational effort to be spent. While the accuracy of a Monte-Carlo process usually is proportional to the inverse square root of the computational effort invested, the statistical error in the method of transient estimators is only proportional to the inverse logarithm of the computational effort. This feature arises because of an instability in the Monte-Carlo algorithm requiring an exponential increase in computing effort with the number of steps performed. The instability in turn is due to the varying sign of many-Fermion wavefunctions (Sections 4 and 5). The occurrence of non-uniform sign renders impossible an interpretation of the many-Fermion wavefunction in terms of probabilities in a simple, straightforward manner. The mapping of destructive interference of quantum-mechanical amplitudes onto (classical) probabilities has not yet been achieved in a satisfactory way. The few existing transient-estimator calculations for many-Fermion systems have provided useful results, discussed in Section 4. For nuclear physics problems it appears extremely doubtful that applications to realistically complicated Hamiltonians with inclusion of spin-dependence and tensor forces will be feasible within the next few years, with the possible exception of the four-body problem.

The fixnode approximation (Section 4) provides a stable algorithm which essentially is another mapping of many-Fermion onto many-Boson problems. This approximate method, where the nodal hypersurface of the wavefunction is prescribed and everything else is solved for, turns out to be extremely accurate for a variety of many-Fermion problems, in particular those involving electrons.

Many attempts have been made to overcome the Fermion disease outlined above. A selection of these attempts is reported in Section 5.
In low-dimensional problems many ideas work out all right, but carrying them over to higher dimensionality, i.e. genuine many-Fermion problems, has not been successful up to the present point in time. In view of the large number of unreported and unsuccessful ideas to overcome the problem, it remains to be seen whether a stable, generally applicable algorithm with reasonable computational cost will ever be found. At present the only candidate in sight is the mirror-potential method(14) sketched in Section 5. However, since no application of this method to realistic many-Fermion problems has been carried out so far, no definite conclusion can be reached at this time.


Acknowledgements--I am much indebted to Malvin H. Kalos for continuing discussions which provided a constant source of new vantage points and helped me significantly in shaping my ideas about Monte-Carlo methods in general and GFMC in particular.

APPENDIX 1

THE ALGORITHM OF METROPOLIS, ROSENBLUTH, ROSENBLUTH, TELLER AND TELLER

An important step in a variational Monte-Carlo calculation is the sampling of the probability density, eqn. (2.3), for an arbitrary prescribed wavefunction. This task is accomplished by means of the algorithm of Metropolis, Rosenbluth, Rosenbluth, Teller and Teller.(48) Since the original algorithm is directed towards a sampling task slightly different from the present one, and since the reference may not be easily accessible, in the present Appendix I present the algorithm as used in the present type of calculation. No proof will be given, just a statement of the computational steps required.

The sampling algorithm consists of a random walk process in 3N-dimensional configuration space, the individual steps of which constitute the samplings of the desired probability density. It is assumed that an (arbitrary) initial configuration R_0 is given. This configuration specifies the initial positions of all particles. One then considers each particle i, i = 1 ... N, in turn for a "move". While all other particles are kept at fixed positions, a new position is sampled for particle i uniformly within a cube centered at the previous position of particle i. The side of this cube is chosen smaller than the mean interparticle distance. At the new position the value of the variational wavefunction is evaluated and squared to form the probability for the new position. This probability is compared to the probability at the previous position before the attempted move. If the new probability is bigger than the old one, the move is accepted, i.e. particle i assumes the new position. If the new probability is smaller than the old one, the move is accepted with probability P_new/P_old < 1. If the move is not accepted, particle i remains at its previous position. In either case one proceeds to the next particle. From particle N one returns to particle 1, and so on. We thereby have defined an infinite random walk producing an arbitrarily large number of probability density samples.

There is one advantage and one problem with this algorithm. The particular advantage for the present type of calculation arises since the variational wavefunction usually is of product form, see, e.g., eqn. (2.8). Therefore, in the above evaluation of the ratio of probabilities almost all factors cancel out, since only one particle was moved. The method is, therefore, computationally highly efficient. On the other hand, it is obvious from the above prescription that consecutive samples, i.e. configurations R_i, cannot be statistically independent since they are computed from each other and differ in at most one particle position. One therefore has to execute the above algorithm many times before a statistically independent new sample has been achieved, usually of the order of 5 to 10 N attempted single-particle moves. It can be rigorously proven that this algorithm asymptotically samples the desired probability density, that is, for a sufficient number of attempted moves. It is a matter of experience to find a practical estimate for this number. The above-stated 5 to 10 moves per particle has generally been found sufficient.
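A minimal Python sketch of the single-particle-move algorithm just described is given below. The trial wavefunction, the cube side and the particle number are illustrative placeholders, and for clarity the full wavefunction is recomputed at every move instead of exploiting the product form.

```python
import numpy as np

def metropolis_sweep(R, psi_T, step, rng):
    """One sweep of single-particle moves sampling |psi_T|^2.

    R     : array (N, 3) of particle positions, updated in place
    psi_T : callable returning the value of the trial wavefunction at R
    step  : side of the sampling cube (smaller than the mean spacing)
    """
    p_old = psi_T(R) ** 2
    for i in range(len(R)):
        trial = R.copy()
        trial[i] += rng.uniform(-0.5 * step, 0.5 * step, size=3)
        p_new = psi_T(trial) ** 2
        # accept if more probable, else accept with probability p_new / p_old
        if p_new >= p_old or rng.random() < p_new / p_old:
            R[:] = trial
            p_old = p_new
    return R

# Toy usage: 8 particles in a Gaussian trial function (placeholder).
rng = np.random.default_rng(6)
psi = lambda R: np.exp(-0.5 * np.sum(R ** 2))
R = rng.normal(size=(8, 3))
for sweep in range(100):        # several sweeps between independent samples
    metropolis_sweep(R, psi, step=0.5, rng=rng)
print(R[0])
```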

APPENDIX 2

SAMPLING OF THE UNPERTURBED GREEN'S FUNCTION G_U

At the heart of the GFMC method as described in Section 3 there arises the need to sample the unperturbed Green's function G_U and its normal derivative at the surface of the domain D, see eqns. (3.11) to (3.14). The methods used to sample these objects are described by Kalos and Hansen.(33) Since these notes have been circulated only privately, I reproduce the essential steps here.

The domain D(R') is chosen as the cartesian product of three-dimensional spheres centered at the individual particles,

D(R') = d_1 ⊗ d_2 ⊗ ... ⊗ d_N.   (A2.1)

Within the subspace d_k for particle k consider the one-body time-dependent Green's function g_k(r_k, r'_k, t) defined by

(−∇²_k + ∂/∂t) g_k(r_k, r'_k, t) = 0,
g_k(r_k, r'_k, 0) = δ(r_k − r'_k),
g_k(r_k, r'_k, t) = 0 for r_k on ∂d_k and outside d_k.   (A2.2)


For symmetry and dimensionality reasons, g is a universal function of two dimensionless variables \rho = |r_k - r'_k|/a_k and \tau = t/a_k^2, where a_k is the radius of the sphere d_k. An explicit expression for this universal function may be given as two infinite series, one for small times and one for large times. Each of these series is quickly convergent, with about five terms sufficient for 12-digit accuracy. For small times, where the finite size of the domain may be neglected, g tends towards a Gaussian. For large times, g becomes a linear function. In both time domains, small and large, it is therefore rather straightforward to devise a rejection technique for sampling g (a sketch is given after eqn. (A2.3)).

In a second step let me define an N-body Green's function for zero potential by

(-\nabla^2 + \partial/\partial t)\, G_0(R, R', t) = 0,
G_0(R, R', 0) = \delta(R - R'),
G_0(R, R', t) = 0 \quad \text{for } R \text{ on } \partial D \text{ and outside } D.   (A2.3)
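One possible rejection scheme for the small-time regime, not necessarily the one used by Kalos and Hansen, is sketched below: since the Dirichlet kernel g is bounded from above by the free-space Gaussian kernel (maximum principle), one may propose a displacement from the Gaussian and accept it with probability g/g_free, which yields samples of the normalized density g/H(t) used later in eqn. (A2.20). The evaluator `g_dirichlet(rho, tau)` for the rapidly convergent series is an assumed input and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def g_free(rho, tau):
    """Free-space diffusion kernel in the dimensionless variables (3D, unit sphere radius)."""
    return (4.0 * np.pi * tau) ** -1.5 * np.exp(-rho**2 / (4.0 * tau))

def sample_g(g_dirichlet, tau, max_tries=100_000):
    """Return a displacement inside the unit sphere distributed as g(., tau) / H(tau)."""
    for _ in range(max_tries):
        x = rng.normal(scale=np.sqrt(2.0 * tau), size=3)   # proposal drawn from g_free
        rho = np.linalg.norm(x)
        # points outside the sphere have g = 0 and are rejected automatically
        if rho < 1.0 and rng.random() < g_dirichlet(rho, tau) / g_free(rho, tau):
            return x
    raise RuntimeError("acceptance too low; switch to the large-time series")
```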

Obviously, we have

G_0(R, R', t) = \prod_k g_k(r_k, r'_k, t)   (A2.4)

since application of the operator (-\nabla^2 + \partial/\partial t) yields

(-\nabla^2 + \partial/\partial t)\, G_0(R, R', t) = \sum_k \left[ (-\nabla_k^2 + \partial/\partial t)\, g_k(r_k, r'_k, t) \right] / g_k(r_k, r'_k, t) \times \prod_i g_i(r_i, r'_i, t) = 0.   (A2.5)

It is now easy to see that a time-dependent Green's function for constant potential U, defined to be the solution of

(-\nabla^2 + U + \partial/\partial t)\, G_U(R, R', t) = 0,
G_U(R, R', 0) = \delta(R - R'),
G_U(R, R', t) = 0 \quad \text{for } R \text{ on } \partial D(R') \text{ or outside},   (A2.6)

is given by

G_U(R, R', t) = \exp(-Ut)\, G_0(R, R', t).   (A2.7)

In the next step we can show that the desired time-independent G_U(R, R') is given by

G_U(R, R') = \int_0^\infty G_U(R, R', t)\, dt.   (A2.8)
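As a quick plausibility check of the time integration in eqns. (A2.7) and (A2.8), the snippet below verifies numerically, in free one-dimensional space and ignoring the Dirichlet boundary of D, that the Laplace-type integral of exp(-Ut) times the diffusion kernel reproduces the Green's function of (-d^2/dx^2 + U), namely exp(-\sqrt{U}|x - x'|)/(2\sqrt{U}). The numbers U = 2 and x - x' = 0.7 are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

U, dx = 2.0, 0.7                      # arbitrary constant potential and separation

def integrand(t):
    if t <= 0.0:
        return 0.0
    g0 = (4.0 * np.pi * t) ** -0.5 * np.exp(-dx**2 / (4.0 * t))   # free 1D diffusion kernel
    return np.exp(-U * t) * g0                                     # integrand of eqn. (A2.8)

numeric, _ = quad(integrand, 0.0, np.inf)
closed_form = np.exp(-np.sqrt(U) * abs(dx)) / (2.0 * np.sqrt(U))   # Green's fn of (-d^2/dx^2 + U)
print(numeric, closed_form)           # both are about 0.131; they agree to quadrature accuracy
```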

Application of (-\nabla^2 + U) on both sides of eqn. (A2.8) yields

(-\nabla^2 + U)\, G_U(R, R') = \int_0^\infty (-\nabla^2 + U)\, G_U(R, R', t)\, dt
 = \int_0^\infty (-\partial/\partial t)\, G_U(R, R', t)\, dt
 = G_U(R, R', 0)
 = \delta(R - R').   (A2.9)

Collecting together all the previous formulae we have

U G_U(R, R') = \int_0^\infty dt\; U \exp(-Ut) \prod_k H_k(t) \prod_k \left[ g_k(r_k, r'_k, t)/H_k(t) \right],   (A2.10)

where some so far arbitrary functions H_k(t) have been introduced, and G_U has been multiplied by U. In addition to sampling from G_U itself, we will need to sample R for fixed R' from the normal derivative \nabla_n G_U,

-\nabla_n G_U(R, R') = \int_0^\infty dt\, \exp(-Ut) \sum_k \left( -\nabla_n g_k(r_k, r'_k, t) \right) \prod_{i \neq k} g_i(r_i, r'_i, t)
 = \sum_k \int_0^\infty dt\, \exp(-Ut) \left[ -H'_k(t) \prod_{i \neq k} H_i(t) \right] \left[ -\nabla_n g_k(r_k, r'_k, t)/(-H'_k(t)) \right] \left[ \prod_{i \neq k} g_i(r_i, r'_i, t)/H_i(t) \right]   (A2.11)

for arbitrary functions H_k(t). Selecting specifically

H_k(t) = \int_{d_k} g_k(r_k, r'_k, t)\, dr_k   (A2.12)

with

H'_k(t) = \partial H_k(t)/\partial t   (A2.13)

we have

H_k(0) = 1, \qquad H_k(\infty) = 0,   (A2.14)

and

-H'_k(t) = -\partial H_k(t)/\partial t = \int_{\partial d_k} \left[ -\nabla_n g_k(r_k, r'_k, t) \right] dr_k.   (A2.15)

The quantity -H'_k(t), eqn. (A2.15), is a normalized probability density function because of eqn. (A2.14). The last equality in eqn. (A2.15) follows from eqn. (A2.2) by application of Green's theorem. With these definitions, the last two factors in brackets in eqn. (A2.11) may be interpreted as normalized probability density functions. At this point we have reduced all desired expressions to analytically known formulae. It remains to be seen how these can be sampled in practice, i.e. how an efficient algorithm can be constructed. In order to construct such an algorithm we require the following

Theorem: Let

P_k(T) = \int_T^\infty p_k(t)\, dt, \qquad k = 1 \ldots N.

Let t_k be drawn from p_k(t). Then

\text{Prob}\{ \min(t_1 \ldots t_N) \geq T \} = \prod_k P_k(T)

and

\text{Prob}\{ \min(t_1 \ldots t_N) = t_k \text{ and is in } dt \text{ at } t \} = p_k(t) \prod_{i \neq k} P_i(t).   (A2.16)

Proof by inspection. If we identify

p_k(t) = -H'_k(t)   (A2.17)

it follows that

P_k(t) = \int_t^\infty \left[ -H'_k(t') \right] dt' = H_k(t).   (A2.18)
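The theorem is easily checked numerically. The sketch below takes exponential densities p_k(t) = \lambda_k \exp(-\lambda_k t), so that P_k(T) = \exp(-\lambda_k T), as a concrete assumed example (the rates \lambda_k are arbitrary); the empirical distributions of min(t_1 ... t_N) and of the index attaining the minimum reproduce eqn. (A2.16).

```python
import numpy as np

rng = np.random.default_rng(2)
lam = np.array([0.5, 1.0, 2.5])                      # arbitrary rates, N = 3
t = rng.exponential(1.0 / lam, size=(200_000, 3))    # t_k drawn from p_k(t)

T = 0.4
print((t.min(axis=1) >= T).mean(), np.exp(-lam.sum() * T))   # Prob{min >= T} vs prod_k P_k(T)

freq = np.bincount(t.argmin(axis=1), minlength=3) / len(t)
print(freq, lam / lam.sum())   # frequency of k attaining the minimum vs the
                               # integral over t of p_k(t) prod_{i != k} P_i(t)
```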

Let us further define formally (in order to unify and simplify the notation)

H_0(t) = \exp(-Ut), \quad -H'_0(t) = U \exp(-Ut), \quad g_0(t) = H_0(t), \quad -\nabla_n g_0(t) = -H'_0(t).   (A2.19)

Then we may write

U G_U(R, R') - \nabla_n G_U(R, R') = \int_0^\infty dt \sum_{k = 0 \ldots N} \left[ -H'_k(t) \prod_{i \neq k} H_i(t) \right] \left[ -\nabla_n g_k(r_k, r'_k, t)/(-H'_k(t)) \right] \left[ \prod_{i \neq k} g_i(r_i, r'_i, t)/H_i(t) \right].   (A2.20)

We have now collected all the formulae required to describe the algorithm to sample a configuration R for given R' from the probability density eqn. (A2.20). Equation (A2.20) will always produce exactly one new configuration per sampling, i.e. it is normalized to unity, since we have from eqn. (3.11)

\int \left[ U G_U(R, R') - \nabla_n G_U(R, R') \right] dR = 1.   (A2.21)

A time t and an index k, k = 0, 1 ... N, are sampled from the distribution given by the first factor within the sum in eqn. (A2.20). This sampling is achieved by drawing a time t_k from each -H'_k(t), k = 0, 1 ... N, and selecting the smallest of these, see the above theorem. If the smallest t was t_0, we have sampled from U G_U(R, R'). We call this an "interior move". Then r_k is sampled for all k = 1 ... N from the last factor within the sum in eqn. (A2.20); the second factor is unity in this case according to the above definitions. Collecting these r_k together constitutes the new configuration R. With probability E_T/U (or weight, in particular if > 1) this configuration contributes to the next generation, since (E_T/U) U G_U(R, R') is the contribution from R' to E_T G(R, R') [see eqn. (3.5)]. Consider next the same configuration R as the next step in the random walk to build up the full Green's function G, eqn. (3.12). Because of the normalization eqn. (A2.21), the probability to terminate the random walk, i.e. not to continue it via either of the two terms within eqn. (3.12), is given by V(R)/U. This number (which is between 0 and 1) is easily calculated for given R. With this probability the random walk is terminated [i.e. V(R) is subtracted from U in the normalization, see eqn. (A2.21)]. If the random walk is not terminated, the interior move is accepted as the next configuration in the random walk.

Next let us treat the case where the smallest t found was a t_k, k = 1 ... N. In this case we have sampled from -\nabla_n G_U(R, R') (a "surface move" to \partial d_k). We sample a point on the surface \partial d_k from the second factor in eqn. (A2.20) and points r_i, i \neq k, from the last factor in eqn. (A2.20). Collecting these coordinates we obtain a new configuration R for the surface move. This configuration is the next configuration in the random walk to build up the full Green's function G, eqn. (3.12); obviously, the surface move corresponds to the first integral in this equation.

More concisely, the GFMC algorithm to iterate the integral equation (3.12) is given by the following steps (a schematic code sketch follows below):
1. sample a t_k for k = 1 ... N from each -H'_k(t);
2. sample a t_0 from U exp(-Ut);
3. find the smallest of all t: t := min(t_0, t_k);
4. if the smallest t is t_0: interior move. Sample the r_k from the last factor in eqn. (A2.20) to obtain R. With probability (weight) E_T/U, configuration R contributes to the next generation. The random walk terminates with probability V(R)/U; otherwise R is the next configuration in the random walk for eqn. (3.12);
5. if the smallest t is a t_k, k > 0, i.e. it came from -H'_k(t): surface move to \partial d_k. Sample a point on the surface \partial d_k from the second factor in eqn. (A2.20) and points r_i, i \neq k, from the last factor in eqn. (A2.20); collecting these coordinates for the N particles gives the next configuration R in the random walk for eqn. (3.12).

This algorithm makes rather explicit the relationship between GFMC and diffusion-type or path-integral solutions to the Schrödinger equation: whereas the latter schemes involve an explicit time integration, within the GFMC scheme the time integration is performed as another Monte-Carlo integration, implicit in the above sampling procedure for a time t.
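For concreteness, a schematic transcription of steps 1 to 5 into code is given below. The samplers `sample_time`, `sample_interior` and `sample_surface` stand for the one-body samplers (e.g. rejection samplers of g_k and of its normal derivative) discussed in this appendix, and the potential V and trial energy E_T are assumed inputs; none of these are defined here, so this is a structural sketch under those assumptions rather than a working implementation of the full method.

```python
import numpy as np

rng = np.random.default_rng(3)

def gfmc_step(R_prime, U, E_T, V, sample_time, sample_interior, sample_surface):
    """One move of the random walk for eqn. (3.12).

    Returns (weight, R_contrib, R_next): the weighted contribution to the next
    generation (None for a surface move) and the next configuration of the walk
    (None if the walk terminates)."""
    N = len(R_prime)
    t0 = rng.exponential(1.0 / U)                       # step 2: t_0 from U exp(-U t)
    tk = [sample_time(k, R_prime) for k in range(N)]    # step 1: t_k from -H'_k(t)
    times = [t0] + tk
    k = int(np.argmin(times))                           # step 3: index of the smallest time
    t = times[k]

    if k == 0:                                          # step 4: interior move
        R = np.array([sample_interior(i, R_prime, t) for i in range(N)])
        if rng.random() < V(R) / U:                     # terminate with probability V(R)/U
            return E_T / U, R, None
        return E_T / U, R, R                            # weight E_T/U feeds the next generation

    # step 5: surface move, particle k-1 lands on the boundary of its sphere
    R = np.array([sample_surface(k - 1, R_prime, t) if i == k - 1
                  else sample_interior(i, R_prime, t) for i in range(N)])
    return None, None, R                                # walk continues, no interior contribution
```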

REFERENCES
1. Y. ALHASSID and S. E. KOONIN, Ann. Phys. 155, 108 (1984).
2. Y. ALHASSID, G. MADDISON, K. LANGANKE, K. CHOW and S. E. KOONIN, Phys. Rev. C, in press (1985).
3. J. B. ANDERSON, J. Chem. Phys. 63, 1499 (1975).
4. J. B. ANDERSON, J. Chem. Phys. 65, 4121 (1976).
5. J. B. ANDERSON, J. Quantum Chem. 15, 109 (1979).
6. J. B. ANDERSON, J. Chem. Phys. 73, 3897 (1980).
7. D. M. ARNOW, Thesis, Courant Institute of Mathematical Sciences, New York University, Report DOE-23 (1982).
8. D. M. ARNOW, M. H. KALOS, M. A. LEE and K. E. SCHMIDT, J. Chem. Phys. 77, 1 (1982).
9. R. A. AZIZ, V. P. S. NAIN, J. S. CARLEY, W. J. TAYLOR and G. T. McCONVILLE, J. Chem. Phys. 70, 4330 (1979).
10. A. BIJL, Physica 7, 869 (1940).
11. F. BLOCH, Z. Phys. 74, 295 (1932).
12. J. CARLSON, V. R. PANDHARIPANDE and R. B. WIRINGA, Nucl. Phys. A401, 59 (1983a).
13. J. CARLSON, V. R. PANDHARIPANDE and R. B. WIRINGA, preprint (1983b).
14. J. CARLSON and M. H. KALOS, private communication and unpublished notes (1984).
15. D. M. CEPERLEY, private communication (1982).
16. D. M. CEPERLEY and B. ALDER, Phys. Rev. Lett. 45, 566 (1980).
17. D. M. CEPERLEY, G. V. CHESTER and M. H. KALOS, Phys. Rev. B16, 3081 (1977).
18. D. M. CEPERLEY and M. H. KALOS, In: Monte-Carlo Methods in Statistical Physics (ed. K. BINDER), Topics in Current Physics, Vol. 7, Springer (1979).
19. S. A. CHIN, Ann. Phys. 16, 403 (1977).
20. R. B. DINGLE, Philos. Mag. 40, 573 (1949).
21. R. P. FEYNMAN and M. COHEN, Phys. Rev. 102, 1189 (1956).
22. J. GSPANN, Physica 108B, 1309 (1981).
23. U. HELMBRECHT and J. G. ZABOLITZKY, In: Monte-Carlo Methods in Quantum Problems (ed. M. H. KALOS), NATO Advanced Science Institutes, Series C, Vol. 125, Reidel, Dordrecht (1984).
24. U. HELMBRECHT and J. G. ZABOLITZKY, Nucl. Phys. A442, 95 (1985a).
25. U. HELMBRECHT and J. G. ZABOLITZKY, Nucl. Phys. A442, 109 (1985b).
26. J. E. HIRSCH, D. J. SCALAPINO, R. L. SUGAR and R. BLANKENBECLER, Phys. Rev. Lett. 47, 1628 (1981).
27. G. JACUCCI, preprint (1984).
28. R. JASTROW, Phys. Rev. 98, 1479 (1955).
29. M. H. KALOS, Phys. Rev. 128, 1891 (1962).
30. M. H. KALOS, J. Comput. Phys. 1, 127 (1966).
31. M. H. KALOS, Nucl. Phys. A126, 609 (1969).
32. M. H. KALOS, Phys. Rev. A2, 250 (1970).
33. M. H. KALOS and J. P. HANSEN, unpublished (1979).
34. M. H. KALOS, M. A. LEE, P. A. WHITLOCK and G. V. CHESTER, Phys. Rev. B24, 115 (1981).
35. M. H. KALOS, D. LEVESQUE and L. VERLET, Phys. Rev. A9, 2178 (1974).
36. M. H. KALOS and P. A. WHITLOCK, to be published (1985).
37. H. KÜMMEL, K. H. LÜHRMANN and J. G. ZABOLITZKY, Phys. Rep. 36C, 1 (1978).
38. M. A. LEE, K. E. SCHMIDT, M. H. KALOS and G. V. CHESTER, Phys. Rev. Lett. 46, 728 (1981).
39. J. LOMNITZ-ADLER, V. R. PANDHARIPANDE and R. A. SMITH, Nucl. Phys. A301, 399 (1981).
40. R. A. MALFLIET and J. A. TJON, Nucl. Phys. A127, 161 (1969).
41. E. MANOUSAKIS, S. FANTONI, V. R. PANDHARIPANDE and Q. N. USMANI, Phys. Rev. B28, 3770 (1983).
42. R. MELZER and J. G. ZABOLITZKY, J. Phys. A17, L565 (1984).
43. F. MENTEL and J. B. ANDERSON, J. Chem. Phys. 74, 6307 (1981).
44. N. METROPOLIS, A. W. ROSENBLUTH, M. N. ROSENBLUTH, A. H. TELLER and E. TELLER, J. Chem. Phys. 21, 1087 (1953).
45. N. METROPOLIS and S. ULAM, J. Am. Stat. Assoc. 44, 335 (1949).
46. J. W. MOSKOWITZ, K. E. SCHMIDT, M. A. LEE and M. H. KALOS, J. Chem. Phys. 76, 1064 (1982a).
47. J. W. MOSKOWITZ, K. E. SCHMIDT, M. A. LEE and M. H. KALOS, J. Chem. Phys. 77, 349 (1982b).
48. S. NAKAICHI, Y. AKAISHI and H. TANAKA, Prog. Theor. Phys. 64, 1315 (1980).
49. V. R. PANDHARIPANDE and H. A. BETHE, Phys. Rev. C7, 1312 (1973).
50. V. R. PANDHARIPANDE, J. G. ZABOLITZKY, S. C. PIEPER, R. B. WIRINGA and U. HELMBRECHT, Phys. Rev. Lett. 50, 1676 (1983).
51. G. L. PAYNE, J. L. FRIAR, B. F. GIBSON and I. R. AFNAN, Phys. Rev. C22, 823 (1980).
52. S. C. PIEPER, R. B. WIRINGA and V. R. PANDHARIPANDE, private communication (1984).
53. S. C. PIEPER, R. B. WIRINGA and V. R. PANDHARIPANDE, Phys. Rev. Lett., in press (1985).
54. E. L. POLLOCK and D. M. CEPERLEY, Phys. Rev., in press (1985).
55. P. J. REYNOLDS, D. M. CEPERLEY, B. J. ALDER and W. A. LESTER, J. Chem. Phys. 77, 5593 (1982).
56. T. R. ROBERTS, R. H. SHERMAN and S. G. SYDORIAK, J. Res. Nat. Bur. Stand. Sect. A68, 567 (1964).
57. E. W. SCHMID, J. SCHWAGER, Y. C. TANG and R. C. HERNDON, Physica 31, 1143 (1965).
58. K. E. SCHMIDT and M. H. KALOS, In: Applications of the Monte-Carlo Method in Statistical Physics (ed. K. BINDER), Topics in Current Physics, Vol. 36, Springer (1984).
59. K. E. SCHMIDT, M. H. KALOS, M. A. LEE and G. V. CHESTER, Phys. Rev. Lett. 45, 573 (1980).
60. K. E. SCHMIDT, M. A. LEE, M. H. KALOS and G. V. CHESTER, Phys. Rev. Lett. 47, 807 (1981).
61. B. D. SEROT, S. E. KOONIN and J. W. NEGELE, Phys. Rev. C28, 1679 (1983).
62. P. W. STEPHENS and J. G. KING, Phys. Rev. Lett. 51, 1538 (1983).
63. L. SZYBISZ and J. G. ZABOLITZKY, In: Recent Progress in Many-Body Theories (eds. H. KÜMMEL and M. RISTIG), Lecture Notes in Physics, Vol. 198, Springer (1983).
64. L. SZYBISZ and J. G. ZABOLITZKY, In: Monte-Carlo Methods in Quantum Problems (ed. M. H. KALOS), NATO Advanced Science Institutes, Series C, Vol. 125, Reidel, Dordrecht (1984a).
65. L. SZYBISZ and J. G. ZABOLITZKY, Nucl. Phys. A245, 423 (1984b).
66. L. SZYBISZ and J. G. ZABOLITZKY, Nucl. Phys. A436, 639 (1985).
67. J. A. TJON, Phys. Lett. 56B, 217 (1975).
68. J. D. WALECKA, Ann. Phys. 83, 491 (1971).
69. P. A. WHITLOCK and M. H. KALOS, J. Comput. Phys. 30, 361 (1979).
70. P. A. WHITLOCK, D. M. CEPERLEY, G. V. CHESTER and M. H. KALOS, Phys. Rev. B19, 5598 (1979).
71. J. G. ZABOLITZKY, Phys. Lett. 100B, 5 (1981).
72. J. G. ZABOLITZKY, In: Few-Body Problems in Physics, Vol. I (ed. B. ZEITNITZ), North Holland (1984).
73. J. G. ZABOLITZKY and M. H. KALOS, Nucl. Phys. A356, 114 (1981).
74. J. G. ZABOLITZKY, K. E. SCHMIDT and M. H. KALOS, Phys. Rev. C25, 1111 (1982).