A numerical approach to nuclear shell-model calculations


Nuclear Physics A182 (1972) 290-300; © North-Holland Publishing Co., Amsterdam. Not to be reproduced by photoprint or microfilm without written permission from the publisher.

A NUMERICAL APPROACH TO NUCLEAR SHELL-MODEL CALCULATIONS

R. R. WHITEHEAD
Department of Natural Philosophy, University of Glasgow

Received 8 October 1971

Abstract: An application of the Lanczos algorithm for the matrix eigenproblem to nuclear shell-model calculations is described. The method has many advantages over conventional shell-model techniques, one of the most important being a very much slower growth of computation time with increasing dimensionality of the shell-model basis.

1. Introduction

Despite the advances of the last decade in the understanding of nuclear structure and the development of highly sophisticated models like the RPA and pairing, the need for shell-model calculations of the traditional kind is increasing rather than decreasing. One reason for this is that many states, in light nuclei especially, are "shell-model" in nature and apparently not strongly mixed with neighbouring collective states. Another is that we are now at the stage where detailed information about the "true" nucleon-nucleon interaction is becoming available in the form of effective interactions for use with particular model spaces, and the parameter-free shell model is the appropriate means of testing the validity of these interactions. The only alternative to the shell-model approach, the projected Hartree-Fock method, does not yet appear to be highly developed enough for this task. The main trouble with the shell model is of course that, except in particularly simple cases, the basis space is so large that the computation becomes all but prohibitive. It is all the more frustrating that the hold-up is in the quite elementary task of diagonalizing matrices. For example, in a straightforward calculation of the ²⁸Si spectrum to lowest order of perturbation theory with 12 active nucleons in the 2s-1d shell, there are about 93000 distinct states. Even the reducibility of this set into smaller spaces, each with different angular momentum and isospin, which can be handled separately, gives matrices which are too large to be handled by existing shell-model programs without further truncation of the basis. Furthermore, only 20 or 30 of the eigenstates produced may be compared with experiment, and to calculate thousands more is simply a waste of time.
What is required for further progress in this field is not just bigger computers to extend the range of existing programs but new computational methods which produce as few or as many eigenstates as required with a minimum of additional useless information. One line of attack is to break away from the traditional shell-model formalism of


group theory, fractional parentage and even angular momentum theory itself and replace these with more elementary operations which lend themselves better to computation. Once free of these beautiful but computationally inefficient trappings we have a wider choice of numerical methods available for the heavy task of matrix diagonalization. Since only a few eigenstates are required it should not be necessary to fully diagonalize, or even fully tri-diagonalize, an enormous matrix. This immediately suggests the use of some iterative method of solving the matrix eigenproblem. This paper describes a computational procedure that follows this line. The central feature of the method is the use of the Lanczos algorithm for the iterative tri-diagonalization of real symmetric matrices. The use of the Lanczos method was first proposed in this connection by Sebe and Nachamkin 1) and by the present author 2), but neither of these earlier attempts was of much practical value because they both made use of states coupled to good J and T, which limits the full power of the method. While there is still considerable room for improvement, the method to be outlined below has been programmed for the IBM 360/44 (65 k words core, three tape drives, two 2311 disk drives) in this department and has successfully performed calculations, notably of the high angular momentum states of ²⁴Mg and ²⁸Si, which would have been unthinkable by more conventional methods on such a small machine.

2. The Lanczos algorithm

The problem of obtaining eigenvalues and eigenvectors of a real symmetric matrix is usually handled by the well-known Householder 3) tri-diagonalization method whenever the size of the matrix allows. This method is very efficient for matrices that can be stored comfortably in the fast core of the computer, and even more so, compared to the Jacobi 3) or Givens 3) methods, where the matrix is very large and has to be stored on tapes or disks. Its one drawback is that the whole of the tri-diagonalization step must be completed before any eigen-information becomes available. The Lanczos 3-5) method is an older and possibly less well-known tri-diagonalization algorithm which for most purposes has been superseded by Householder. The two are equivalent in that with exact arithmetic and the appropriate starting conditions they give the same tri-diagonal matrix. It is more appropriate to regard the Lanczos method as a finite iterative scheme because, quite unlike Householder, it shares many of the characteristics of iterative eigenvalue methods; for example, some eigenvalues may be obtained to full accuracy after a very few iterations. It has furthermore the advantage over the various open iterative methods 3,5) that it produces sub-dominant eigenvalues automatically without restarting. For the present application the most important feature of the direct iterative methods, including Lanczos, is that they can be applied directly to operators without the intermediate step of constructing a matrix representative of the operator. We can thus eliminate at once all storage problems associated with large shell-model matrices. Full details of the Lanczos algorithm are given in refs. 3-5) and much that is relevant


to the present work may be found in ref. 2); only a bare outline will be given here. Let A be a real, symmetric, n × n matrix, and v₁ an arbitrary n × 1 vector normalized as usual so that v₁ᵀv₁ = 1 (ᵀ denotes transpose). New vectors v₂, v₃ etc. are formed by iteration thus

    Av₁ = α₁v₁ + β₁v₂,
    Av₂ = β₁v₁ + α₂v₂ + β₂v₃,
    Av₃ = β₂v₂ + α₃v₃ + β₃v₄,
        ⋮
    Avₙ = βₙ₋₁vₙ₋₁ + αₙvₙ,                                        (1)

where, at the i-th step, vᵢ₊₁ is the normalized remainder after orthogonalizing Avᵢ with respect to vᵢ and vᵢ₋₁. It is clear from eqs. (1) that the v provide an orthogonal basis in which the matrix A takes the required tri-diagonal form:

    vᵢᵀAvᵢ = αᵢ,    vᵢ₊₁ᵀAvᵢ = βᵢ,    vᵢ₊ₖᵀAvᵢ = 0  (k ≥ 2).      (2)
The iterations terminate automatically after n steps. In the nuclear problem the case of degenerate eigenvalues never arises and we will not discuss the rather interesting behaviour of the Lanczos vectors that follows degeneracy 3). Close, rather than degenerate, eigenvalues may occasionally occur, but these cause no difficulty if standard precautions 3) are observed. The method relies on the strict orthogonality of the Lanczos vectors v. Since these are produced by an orthogonalization process severe loss of significance can occur due to cancellation 3) and this necessitates a re-orthogonalization of each v to all previous vectors. Were it not for this re-orthogonalization the Lanczos algorithm would be superior in nearly all respects to the Householder one. There are two features of the method as outlined above that are vitally important for the application to shell-model calculations. First, A need not be a matrix; it can be an abstract operator (the Hamiltonian) and the v can be linear combinations of abstract vectors (shell-model states) with real coefficients. To form Av₁ we use the rules for operating with A on v₁ instead of ordinary matrix multiplication. The rules depend on the representation chosen for the shell-model states. Only the direct iteration methods can be interpreted in this way. The second feature is that the eigenvalues of the r × r tri-diagonal matrix obtained after r iterations converge monotonically to the eigenvalues of the full n × n tri-diagonal matrix. The convergence itself is a consequence of the eigenvalue separation theorem 4), but the rate of convergence is remarkable, though somewhat dependent on the eigenvalue distribution. It is therefore not necessary to complete the full n iterations, and for large spaces only a tiny fraction of the total computation need be done to obtain the interesting eigenstates.
Having computed the tri-diagonal matrix the eigenvalues and eigenvectors may be obtained by the standard methods of bisection and inverse iteration 3).
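In modern terms the scheme of eqs. (1) and (2), including the full re-orthogonalization and the matrix-free use of the operator, can be sketched in a few lines. The following NumPy illustration is mine, not the author's program; `apply_A` stands for whatever rule applies A to a vector:

```python
import numpy as np

def lanczos(apply_A, n, m, rng=np.random.default_rng(0)):
    """Run m Lanczos steps on a real symmetric operator supplied only as
    a matrix-vector product apply_A(v).  Returns the tri-diagonal
    coefficients (alpha, beta) and the Lanczos vectors (rows of V)."""
    V = np.zeros((m, n))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = rng.standard_normal(n)
    V[0] = v / np.linalg.norm(v)
    for i in range(m):
        w = apply_A(V[i])
        alpha[i] = V[i] @ w
        w = w - alpha[i] * V[i]
        if i > 0:
            w = w - beta[i - 1] * V[i - 1]
        # re-orthogonalize against all previous Lanczos vectors: rounding
        # errors destroy the strict orthogonality the method relies on
        w = w - V[: i + 1].T @ (V[: i + 1] @ w)
        if i < m - 1:
            beta[i] = np.linalg.norm(w)
            V[i + 1] = w / beta[i]
    return alpha, beta, V

# example: a random symmetric matrix shifted as in fig. 1
rng = np.random.default_rng(1)
A = rng.uniform(-0.5, 0.5, (100, 100))
A = (A + A.T) / 2 + 10.0 * np.eye(100)
alpha, beta, V = lanczos(lambda x: A @ x, 100, 40)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
# after 40 of the 100 possible iterations the extreme eigenvalues of the
# small tri-diagonal matrix already approximate those of A very closely
print(np.linalg.eigvalsh(T)[-1], np.linalg.eigvalsh(A)[-1])
```

Because `apply_A` is only a callable, nothing in the routine requires the matrix itself to be stored, which is the point made above about applying the method directly to operators.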


Fig. 1 shows the convergence of the first few (highest) eigenvalues of a 100 × 100 random matrix. Fig. 2 shows the eigenvectors corresponding to the eigenvalues of fig. 1; the plotted points represent the overlap of the eigenvector with the successive Lanczos vectors v₁, v₂, etc. The behaviour of these eigenvector curves is exceptionally interesting, but it is only the indication of the rapidity of convergence that they provide that is of importance here.

Fig. 1. Convergence curves for the 10 highest eigenvalues of a 100 × 100 random matrix. The matrix elements were chosen from a (−0.5, 0.5) uniform distribution and 10.0 was added to the diagonal elements. The heavy dots indicate the point at which the eigenvalue attains its final value. The 44 highest eigenvalues are plotted at the right to show the general character of the spectrum, which is not unlike a typical nuclear spectrum.

3. Application to shell-model calculations

3.1. REPRESENTATIONS

In their application of the Lanczos algorithm Sebe and Nachamkin 1) simply calculated the full energy matrix and then used a v₁ taken from a nuclear model (e.g., SU₃) to ensure rapid convergence. The author 2) used the operator interpretation


mentioned above, but the states were coupled to good J and T in a kind of fractional parentage representation. This was a bad choice because operating with the Hamiltonian on such states destroys the antisymmetry, which then has to be recovered at each step by projection with an antisymmetrizer. It is better to use an occupation number representation for the Hamiltonian:

    ℋ = ¼ Σ_{αβγδ} ⟨αβ|H|γδ⟩ a⁺_α a⁺_β a_δ a_γ,                   (3)

where the two-body matrix element ⟨αβ|H|γδ⟩ includes all one-body terms for convenience, and the a and a⁺ are the normal destruction and creation operators 6); and also for the basis states

    |(n)⟩ = a⁺_α a⁺_β ⋯ a⁺_ν |0⟩.                                 (4)

Fig. 2. Eigenvectors corresponding to the first four eigenvalues of fig. 1. The ordinates are the overlap between the eigenvector and the successive Lanczos vectors. The points are plotted until the overlap falls to less than 0.0005. This value is normally reached about three iterations after the corresponding eigenvalue has fully converged.
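Eqs. (3) and (4) translate directly into computer terms: a basis state becomes a machine word with one bit per single-particle orbit. As an illustrative modern sketch (Python integers in place of the paper's machine-code words, and plain integer orbit labels in place of the real quantum numbers), the routine below applies a single term a⁺_a a⁺_b a_d a_c of eq. (3) to such a bit pattern, with the anticommutation phase obtained by counting occupied orbits below the one acted on:

```python
def apply_pair(state, a, b, c, d):
    """Apply the two-body term  a+_a a+_b a_d a_c  of eq. (3) to a basis
    state stored as a bit pattern (bit i set = orbit i occupied).
    Returns (phase, new_state), or None when the result vanishes by the
    Pauli principle.  Orbit labels are illustrative integers."""
    phase = 1
    s = state
    # operators act right to left: annihilate c, then d; create b, then a
    for orb, create in ((c, False), (d, False), (b, True), (a, True)):
        bit = 1 << orb
        if create == bool(s & bit):      # a|0> = 0 and a+|1> = 0
            return None
        if bin(s & (bit - 1)).count("1") % 2:
            phase = -phase               # anticommute past occupied orbits
        s ^= bit                         # flip the occupation of orb
    return phase, s

# a+_2 a+_3 a_1 a_0 turns the state {0,1} into {2,3} with phase +1
print(apply_pair(0b0011, 2, 3, 0, 1))
```

A real program would sum such contributions, weighted by the matrix elements, over all terms of eq. (3); locating the resulting states in the stored basis is the rapid-searching problem mentioned below.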

The initial Lanczos vector v₁ is an arbitrary linear combination of the states [eq. (4)], and the multiplication of v₁ by ℋ is done by means of the commutation rules for the a and a⁺. This representation ensures that the v are antisymmetric as well as orthogonal, though not necessarily having definite angular momentum or isospin. The basis states may be stored in the occupation number representation in the computer by assigning a single-particle orbit to each bit position in the word, a 1-bit representing an occupied orbit and a 0-bit an unoccupied one. All operations may be performed very efficiently using normal computer logic and bit-handling techniques. This requires a certain amount of machine code programming, but this is not excessive and is limited to a few often-used subroutines. In addition, owing to the large number of basis states, special rapid searching techniques are needed to locate the correct storage locations for the terms arising when ℋ operates on a v. The main drawback of the occupation number representation is that, since the basis states are uncoupled, the spaces are much larger than in conventional shell-model calculations. This is not as serious as it might at first seem thanks to the rapidity of convergence discussed in subsect. 3.2. On the other hand, there is considerable flexibility in the choice of the single-particle orbits, which may be either spherical or deformed, although the present program works only with spherical orbits.

3.2. CONVERGENCE

Convergence curves for various nuclear spectra are shown in figs. 3, 4 and 5 for calculations with from 600 to 6000 basis states. The most striking feature is that the ground state in each case has converged to its final value in fewer than 25 iterations despite the large variation in dimensionality. This insensitivity to the size of the space was one of the hopes raised earlier 1,2) and is fully borne out in these calculations. No calculation performed to date has required more than 50 iterations to produce all the physically significant information to sufficient accuracy.

Fig. 3. Convergence curves for the first 10 states in ²⁰Ne. All 640 shell-model states were included. The two-body matrix elements of Kuo 8) were used.

In all of the cases illustrated here the convergence has been improved slightly by the use of two simple tricks. The most important of these is to ensure that the initial Lanczos vector, v₁, contains relatively large components of the lowest eigenvectors. This is done by operating a few times with ℋ on an arbitrary vector v₀ to give

    v₁ ∝ ℋᵖ v₀.

The optimum number, p, of pre-iterations depends on both v₀ and the eigenvalue distribution. In the absence of any real information about the latter p = 5 turns out to be about right for the cases studied so far. Too many pre-iterations reduce the rate of convergence of the higher eigenvalues by weighting v₁ too heavily in favour of the ground state. The other precaution is to ensure that the low-lying states have eigenvalues larger in modulus than the highest states. This is usually true anyway in nuclear matrices but it can be guaranteed by an origin shift, ℋ → ℋ − E. The magnitude of the shift E is not critical because in the Lanczos method, unlike other iterative algorithms, both ends of the spectrum converge at about the same rate and the shift merely


where in the latter case even the ground state is affected and its convergence to the right value is delayed by perhaps five iterations. I have been unable to find either a completely satisfactory explanation of this behaviour or a cure for it. Choice of a different shift often improves matters but there is no systematic way of making the choice.

Fig. 4. Convergence curves for the lowest four M_J = 10 states in ²⁸Si. The dimensionality was 1439. Kuo matrix elements.
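The pre-iteration trick for the starting vector can be sketched as follows; a minimal NumPy illustration of my own, with the test matrix arranged so that the low-lying states already dominate in modulus (i.e. the origin shift is taken as E = 0):

```python
import numpy as np

def starting_vector(apply_H, v0, p=5):
    """Build v1 by p pre-iterations: v1 proportional to H^p v0.
    Each application of H enhances the components whose eigenvalues are
    largest in modulus, i.e. the low-lying states after a suitable
    origin shift H -> H - E."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(p):
        v = apply_H(v)
        v = v / np.linalg.norm(v)
    return v

# a nuclear-like symmetric matrix: all eigenvalues negative, so the
# ground state is automatically the largest in modulus (shift E = 0)
rng = np.random.default_rng(2)
H = rng.uniform(-0.5, 0.5, (50, 50))
H = (H + H.T) / 2 - 5.0 * np.eye(50)
v0 = rng.standard_normal(50)
v1 = starting_vector(lambda x: H @ x, v0)
ground = np.linalg.eigh(H)[1][:, 0]          # lowest eigenvector
before = abs(ground @ (v0 / np.linalg.norm(v0)))
after = abs(ground @ v1)
print(before, after)   # the ground-state component grows under pre-iteration
```

Too large a p would, as noted above, bias the starting vector so strongly towards the ground state that the higher states converge more slowly.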

On the whole, however, the convergence appears to be quite satisfactory, particularly in view of the extreme insensitivity to the size of the basis space.

3.3. REDUCTION OF BASIS

The prototype program has been written with provision for a maximum of 10000 basis states, the limit being imposed by the size of the machine available. Rather than setting up the maximal basis set for a given nucleus and trying to do enough iterations to get all the required eigenstates, it is better to reduce the basis into sets with given z-components of spin and isospin and calculate for each set separately. This gives the


lowest states for each M_J, M_T. For example in ²⁸Si we can calculate separately for M_J = 10, M_T = 0 (fig. 4) and obtain the lowest J = 10 states after 25 iterations, then start again with M_J = 9 to get the lowest J = 9 states in a similar number of iterations, the total work being less than if both J = 9 and J = 10 states were required from the same calculation. The Hamiltonian has spin and isospin symmetries built into it and the final eigenstates must therefore have good J and T. The J- and T-values may be calculated by applying the operators J² and T² to the final eigenvectors. This also provides a quite sensitive test for complete convergence in doubtful cases, because departure from integral or half-integral values is severe even if quite small components are missing from an "eigenvector".

Fig. 5. Convergence curves for the three lowest M_J = 6 states in ²⁴Mg. The third state, which has J = 7, has not quite converged to full working accuracy within the 31 iterations shown, but it is much better than it need be for shell-model purposes. Kuo matrix elements. Dimensionality 6183.

TABLE 1
Typical iteration times

    Case              Dimensionality    Time/iteration (min)
    ²⁴Mg  M_J = 10          225                 0.5
    ²⁴Mg  M_J = 8          1638                 3.2
    ²⁴Mg  M_J = 7          3384                 7.3
    ²⁴Mg  M_J = 6          6183                14.8
    ²⁸Si  M_J = 10         1439                 4.8

3.4. TIMING

For completeness some representative timing figures are quoted in table 1 as a guide to the performance of the method. The figures in column 3 refer to the first version of the prototype program running on the 360/44. An improved version which is faster by a factor of between two and three, depending on the size of the calculation, is now being used but no comparable timings are yet available. This improvement is mainly the result of making better use of the symmetry of ℋ. There is still much room for improvement by means of better programming techniques. Running the program on a more appropriate computer should produce an improvement of at least a factor of 10, and so the times shown in the table should be divided by about 25 to obtain a reasonable impression. It should be noted that, for a given nucleus, the time per iteration is nearly linear in the size of the basis set. Because the number of iterations required is independent of the basis size, the total time therefore depends linearly on the number of basis states rather than the typical cubic dependence of the conventional approach. This is one of the most remarkable features of the Lanczos method.
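The near-linearity can be checked directly against the ²⁴Mg rows of table 1; a quick sketch:

```python
# (dimensionality, minutes per iteration) for 24Mg, from table 1
timings = [(225, 0.5), (1638, 3.2), (3384, 7.3), (6183, 14.8)]
slopes = [t / (d / 1000.0) for d, t in timings]
for (d, _), s in zip(timings, slopes):
    print(f"{d:5d} states: {s:.2f} min/iteration per 1000 states")
```

Each row comes out at roughly 2 min/iteration per 1000 basis states, the slope quoted again in the conclusions.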

4. Conclusions

The approach to shell-model calculations described in the previous sections has lived up to expectation and has the following advantages over conventional techniques:

(i) It can handle really huge calculations.
(ii) Computing time is linear (nearly) in the dimensionality of the space, the slope for ²⁴Mg being about 2 min/iteration/1000 basis states with the timings of table 1.
(iii) More efficient use is made of the computer store. This will allow calculations to keep pace with available computers and avoid the onset of the law of diminishing returns arising from quadratic growth of storage requirements and the much slower growth of actual computer stores.

The most serious disadvantage of the method is that by replacing virtually everything by elementary computer operations the calculations simply produce numbers at the end and leave little room for gaining physical insight along the way. The only serious calculations done so far using this method have been those in ²⁴Mg and ²⁸Si already mentioned, which were undertaken for comparison with the

results of projected Hartree-Fock calculations. The results of these have been published elsewhere 7).

I would like to thank Drs. N. MacDonald and A. Watt for continuous encouragement and discussion.

References

1) T. Sebe and J. Nachamkin, Ann. of Phys. 51 (1969) 100
2) R. R. Whitehead, Glasgow University preprint, 1969, unpublished
3) J. H. Wilkinson, The algebraic eigenvalue problem (Oxford, 1965) p. 388
4) A. S. Householder, The theory of matrices in numerical analysis (Blaisdell, New York, 1964) p. 254
5) R. A. Brooker and F. H. Sumner, Proc. IEE convention on digital computer techniques, 1956, p. 114, and the discussion following that paper
6) G. E. Brown, Unified theory of nuclear models and forces (North-Holland, Amsterdam, 1967) p. 3
7) R. R. Whitehead and A. Watt, Phys. Lett. 35B (1971) 189
8) T. T. S. Kuo, Nucl. Phys. A103 (1967) 71