Computer Physics Communications 54 (1989) 273—294 North-Holland, Amsterdam
A VECTOR CODE FOR THE NUMERICAL SIMULATION OF COSMIC STRINGS AND FLUX VORTICES IN SUPERCONDUCTORS ON THE ETA-10

K.J.M. MORIARTY
Institute for Advanced Study, Princeton, NJ 08540, USA and John von Neumann National Supercomputer Center, 665 College Road East, Princeton, NJ 08540, USA

Eric MYERS
Physics Department, Boston University, Boston, MA 02215, USA and Department of Mathematics, Statistics and Computing Science, Dalhousie University, Halifax, Nova Scotia B3H 3J5, Canada

and Claudio REBBI
Physics Department, Boston University, Boston, MA 02215, USA

Received 4 January 1989
We present a FORTRAN-77 code for the simulation of the dynamical interactions of cosmic strings. This code also simulates the interactions of tubes of magnetic flux trapped in a superconducting material. The code has been written to take advantage of the vector architecture and data motion features of the ETA-10 supercomputer; however, it is written entirely in standard FORTRAN-77 so that it is quite portable and may also be run on scalar machines for testing and development.
PROGRAM SUMMARY

Title of program: VX

Catalogue number: ABJD

Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland (see application form in this issue)

Computer: ETA-10 or Cyber 205 (code is portable); Installation: John von Neumann National Supercomputer Center, Princeton, New Jersey

Operating system: CDC Cyber 200 VSOS

Programming language used: FORTRAN-77

High speed storage required: size dependent (39 Megawords for a 100^3 lattice)

No. of bits in a word: 64

No. of lines in combined program and test deck: 11325

Keywords: Cosmic strings, superconductivity, flux tubes, vortices, Abelian Higgs model

Nature of physical problem: Simulation of the motion of topological defects in the early universe or tubes of magnetic flux in a superconductor.
Method of solution: Create an initial configuration representing two isolated vortices or vortex lines moving toward each other and then propagate the system forward in time using the equations of motion.

Restrictions on the complexity of the problem: The size of the lattice to be studied is limited by the amount of high speed memory available.

Typical running time: Approximately 14 minutes.
LONG WRITE-UP

1. Introduction

The currently accepted standard model of cosmology allows us to trace back the evolution of the universe to the first fraction of a second after the big bang. At that early time we know that the universe was quite dense and incredibly hot. The matter that filled the universe was a gas of interacting elementary particles and radiation. At high temperatures the interactions between these particles were at very high energies, greater than 10^15 or 10^16 GeV. Current theories of the interactions of elementary particles suggest that at such high energies all of the fundamental forces of nature (except for gravity) have the same strength. As the universe expanded from the primordial fireball it cooled, and the interactions between the particles became less energetic. Eventually a phase transition (actually several distinct transitions) occurred and the symmetry between the various interactions was broken. The strong, weak and electromagnetic forces then, in their turn, revealed their separate distinguishing properties.

The symmetry breaking transitions in the early universe are in many ways similar to phase transitions in a more familiar setting, such as the freezing of liquid water into solid ice. When a crystal forms in this way it will very often have defects. Similarly, a cosmological phase transition can form topological defects in the fields that represent matter coupled to radiation [1]. Point-like defects are called "monopoles", surface-like defects are called "domain walls", and line-like defects are called "strings". (These cosmic strings should not be confused with "superstrings", which are also currently of great interest in high energy particle physics.) Observations put strict upper limits on the number of monopoles in the visible universe and rule out domain walls almost entirely. The observational limits on strings are much less strict, and even if strings are not seen directly they would have left evidence of their existence in the present structure of the universe.

Cosmic strings could have played an important role in the formation of galaxies and clusters of galaxies. Observations show that the distribution of galaxies, as measured by the galaxy-galaxy correlation function, is roughly scale invariant [2]. Previous work on galaxy formation showed that this distribution would have evolved from a scale invariant spectrum of initial density fluctuations [3]. If cosmic strings form loops, the loops would exert a gravitational attraction on nearby matter, which would have caused these initial density fluctuations. An important property of strings is that they have no inherent length scale, so that loops of all sizes would form, leading to the necessary scale invariant spectrum of fluctuations [4]. If cosmic strings do not form loops, however, it is difficult to explain the observed scaling of the distribution of galaxies in the universe.

Loops of cosmic string lose energy by giving off gravitational radiation. The loops that would have formed our galaxy and local cluster of galaxies would have decayed long ago, but larger loops or infinitely long strings might still be observable. The string scenario of galaxy formation depends crucially on the formation of string loops. Simple simulations [5] indicate that most strings formed in a cosmological phase transition would be infinitely long, with only about 20% of the string forming loops.
More loops may be created, however, if strings intercommute (that is, trade ends) when they collide, but determining whether or not strings intercommute is not easy. The topological defect that forms the string is described by a complicated system of non-linear differential equations for a Yang—Mills gauge field coupled to a symmetry breaking Higgs field. Because
the equations cannot be solved analytically, we have performed a numerical simulation of the collisions of cosmic strings [6] to determine whether or not they intercommute.

This paper describes our computer code for the simulation of the collisions of cosmic strings. In the sections that follow we describe the mathematical background of the system of equations that model cosmic strings, and then give an overview of the computational techniques used. We then present a more detailed analysis of the code, describing how we obtain the initial data for our simulation, how we perform the update step for integrating the equations of motion, and how we impose the boundary conditions after each update step. Finally we present a sample run of the code.

The target machines for this code are the Cyber 205s and the ETA-10 at the John von Neumann Supercomputer Center at Princeton, New Jersey, but the code has been developed in a highly machine independent fashion by using a standardized library of FORTRAN-77 "vector" subroutines. These subroutines allow vector code to be written and tested on a scalar machine, but the routines will all be vectorized by any good vectorizing compiler and can be optimized for improved speed on any particular machine (e.g. by using "Q8" special calls on the Cyber 205). This library of vector subroutines is also useful for teaching the concepts of vector programming, both by giving students a "Fortran equivalent" to machine vector instructions and by allowing them to run "vectorized" programs on a scalar machine when a vector machine is unavailable.

Simulations using this code in two dimensions have yielded results that are of interest in the field of condensed matter physics as well. This is because the field equations that describe cosmic strings also describe tubes of magnetic flux (called "flux vortices") trapped in a superconducting material. In the Ginzburg-Landau theory of superconductivity [7,8], when the self-coupling of the scalar field is below a critical value (corresponding to a Type I superconductor) flux vortices attract each other, while when the coupling is above the critical value (giving a Type II superconductor) they repel. At the critical coupling the interaction energy between isolated vortices is zero. This led to the speculation that in a collision critically coupled flux vortices would behave like solitons and simply pass through each other. Our simulations show [6] that critically coupled flux vortices interact non-trivially in a collision by scattering at 90° and are therefore not solitons. Results similar to ours for cosmic strings and vortices, albeit obtained by different computational methods, have been obtained recently for global strings by Shellard [9] and for local strings by Matzner [10].
2. Mathematical background

The system we consider is described by the action (sometimes referred to as the Abelian Higgs model or the Ginzburg-Landau theory) in d = 2 or 3 spatial dimensions:

S = \int d^{d+1}x \left\{ -(D_\mu\phi)^*(D^\mu\phi) - \frac{\lambda}{4}\left(\phi^*\phi - a^2\right)^2 - \tfrac{1}{4} F_{\mu\nu}F^{\mu\nu} \right\}.   (2.1)
Here \phi(x) is a complex scalar field representing the Higgs field in a unified field theory of particle interactions or the quasiparticle density in a superconductor. The gauge covariant derivative of \phi(x) is

D_\mu \phi(x) = \partial_\mu \phi(x) - i e A_\mu(x)\, \phi(x),   (2.2)
where A_\mu(x) is the electromagnetic vector potential, and F_{\mu\nu} = \partial_\mu A_\nu(x) - \partial_\nu A_\mu(x) is the electromagnetic field strength tensor. We use units where c = 1 throughout. The action is invariant under the gauge transformation

\phi(x) \to \exp\{i\chi(x)\}\,\phi(x),   A_\mu(x) \to A_\mu(x) + \frac{1}{e}\,\partial_\mu\chi(x),   (2.3)
for any function \chi(x). The potential for the \phi(x) field, known in quantum field theory as the Higgs potential (or "Mexican hat" potential), is minimized by |\phi| = a \neq 0. As a result the vacuum (i.e. lowest energy) state of this system is not invariant under a gauge transformation. The gauge symmetry is then said to be spontaneously broken, and the gauge field particles acquire an effective mass proportional to a. This is the way in which the W^\pm and Z^0 particles acquire their mass in the standard model of the electroweak interactions. In the context of condensed matter physics electrons traveling in the material now move unimpeded, because the electromagnetic interaction is exponentially damped by the photon mass, and the material is therefore superconducting. Seen in this way superconductivity is a general feature of any system in which the electromagnetic gauge symmetry is broken spontaneously.

Another general feature of this model is that the vacuum is degenerate and topologically non-trivial, so it is possible to create topological defects, which are the vortices. The vacua are the set of states where \phi(x) = a \exp(i\theta_0) for any fixed \theta_0, which form a circle in the complex plane. In the vortex configuration the \phi(x) field "winds" around the central maximum of the potential like

\phi(r, \theta) = a\, e^{i n \theta}.   (2.4)
Although the phase of \phi is not constant in eq. (2.4), a suitable gauge potential can be introduced to make \phi covariantly constant. Locally then the field is gauge equivalent to the vacuum, but globally there is a twist in the phase of \phi(x) which cannot be removed by any gauge transformation. Continuity of \phi(x) requires both that \phi(0) = 0, and that n, the "winding number", be an integer. Outside the vortex |\phi| = a, so the gauge symmetry is spontaneously broken (the material is superconducting). At the centre of the vortex, however, |\phi| = 0, so the gauge symmetry is restored (the material is non-superconducting). In fact, inside the vortex there is a magnetic field in the z direction. The total magnetic flux through the vortex is quantized and is given by

\Phi_B = 2\pi n / e.   (2.5)
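The value of the flux in eq. (2.5) follows from the requirement of finite energy: at large r the covariant derivative of \phi must vanish, so the gauge potential is pure gauge there and

\Phi_B = \oint_{r\to\infty} A_i\, dx^i = \frac{1}{e} \oint \partial_i(\arg\phi)\, dx^i = \frac{2\pi n}{e},

since by eq. (2.4) the phase of \phi winds n times around the circle at infinity.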
Vortices in the Abelian Higgs model therefore represent both cosmic strings and quantized tubes of magnetic flux trapped in a superconducting medium.
3. Computational techniques

The computer simulation of any continuous dynamical system presupposes a discretization of its degrees of freedom. In order to maintain as much as possible the symmetries of the continuous system in its discretized form we have resorted to techniques from lattice gauge field theory [11]. The points of space are replaced with vertices x of a cubic lattice with lattice spacing a. The field \phi(x) is represented by the variables \phi_x, which live on the sites x of the lattice, while the gauge fields are represented by the variables \theta_x^\mu, which live on the links of the lattice. To transport \phi_x covariantly across a link in the forward \mu direction one must multiply by \exp\{i\theta_x^\mu\}, so the lattice version of the covariant derivative is

\nabla_\mu \phi_x = \frac{1}{a}\left( \phi_{x+\hat\mu} - e^{i\theta_x^\mu}\,\phi_x \right),   (3.1)

where \hat\mu represents a unit vector in the \mu direction. Time is left continuous (eventually also time will be discretized, but with a time step \Delta t \ll a) and the dynamical evolution is described by a Hamiltonian formalism. With \pi_x and E_x^\mu the momenta conjugate to the fields \phi_x and \theta_x^\mu, and A_x^0 representing
A_0(x), the Hamiltonian for the discretized system is

H = a^d \sum_x \Big\{ \pi_x^*\pi_x + \sum_\mu |\nabla_\mu\phi_x|^2 + \frac{\lambda}{4}\big(\phi_x^*\phi_x - a^2\big)^2 + \frac{1}{2a^2}\sum_\mu \big(E_x^\mu\big)^2 + \frac{1}{2e^2a^4}\sum_{\mu<\nu} \big(\Theta_x^{\mu\nu}\big)^2 \Big\},   (3.2)

where \Theta_x^{\mu\nu} = \theta_x^\mu + \theta_{x+\hat\mu}^\nu - \theta_{x+\hat\nu}^\mu - \theta_x^\nu is the plaquette angle. The virtue of using the lattice gauge field theory formalism is that it preserves the local gauge symmetry of the system. The Hamiltonian above is invariant under the transformation

\phi_x \to \exp(i\chi_x)\,\phi_x,   (3.3a)
\pi_x \to \exp(i\chi_x)\,\pi_x,   (3.3b)
\theta_x^\mu \to \theta_x^\mu + \chi_{x+\hat\mu} - \chi_x,   (3.3c)

with \chi_x an arbitrary function of position. This is the discrete version of the transformation in eq. (2.3). As a consequence of this symmetry the system obeys the constraint

\sum_\mu \big(E_x^\mu - E_{x-\hat\mu}^\mu\big) = i e \big(\pi_x^*\phi_x - \pi_x\phi_x^*\big),   (3.4)
which is the lattice version of Gauss' law.

3.1. Dynamics

The dynamics of the system is described by Hamilton's equations for the H of eq. (3.2). The equations of motion are

\dot\phi_x = \pi_x,   (3.5a)
\dot\pi_x = \frac{1}{a^2}\sum_\mu \left( e^{-i\theta_x^\mu}\phi_{x+\hat\mu} + e^{i\theta_{x-\hat\mu}^\mu}\phi_{x-\hat\mu} - 2\phi_x \right) - \frac{\lambda}{2}\big(\phi_x^*\phi_x - a^2\big)\phi_x,   (3.5b)
\dot\theta_x^\mu = \frac{1}{a^2}\, E_x^\mu,   (3.5c)
\dot E_x^\mu = -\frac{1}{e^2 a^2}\sum_{\nu\neq\mu}\left( \Theta_x^{\mu\nu} - \Theta_{x-\hat\nu}^{\mu\nu} \right) + 2\, \mathrm{Im}\!\left[ \phi_x^*\, e^{-i\theta_x^\mu}\phi_{x+\hat\mu} \right].   (3.5d)

In obtaining these equations we have used the gauge symmetry of the system to impose the condition A_x^0 = 0 (temporal gauge). This choice of gauge both simplifies the calculations and makes the Hamiltonian in eq. (3.2) manifestly positive-definite. For convenience we will measure the \phi fields in units of a, i.e. we shall set a = 1.

We want to emphasize the importance of our discretization procedure. We have discretized the action (or, equivalently, the Hamiltonian) first and then derived the exact equations of motion for the discrete system. These equations of motion are therefore guaranteed to preserve the symmetries and conservation laws of the discrete system (such as the Gauss law and the conservation of energy), at least in the limit of an infinitely small time step. An alternative approach, which we have avoided, would be to derive the
equations of motion for the continuous system and then discretize. Since the process of discretization is not unique there is no guarantee that the symmetries of the original system will be preserved unless one takes additional special care to insure this.

3.2. Initial data

We begin the simulation with initial data describing two isolated vortices approaching each other from a distance. This configuration is obtained by boosting the continuum field configuration of a stationary vortex. Because the n = 1 vortex is radially symmetric, we may assume the fields of a static vortex to be of the form

\phi(x, y) = \frac{x + i y}{r}\, f(r),   (3.6a)
A_1(x, y) = -\frac{y}{r^2}\, b(r),   (3.6b)
A_2(x, y) = +\frac{x}{r^2}\, b(r),   (3.6c)
where the functions f(r) and b(r) describe the radial profile of the vortex. We have assumed that the flux tubes are translationally invariant in the z direction. Both f(r) and b(r) vanish at r = 0, and both go to unity in the limit of large r. Substituting eqs. (3.6) into the Hamiltonian in eq. (3.2) and assuming a static configuration, \pi(x) = 0 and E^i(x) = 0, the energy of the system is

E = \int dz \int 2\pi r\, dr \left\{ \left(\frac{df}{dr}\right)^2 + \frac{1}{2r^2}\left(\frac{db}{dr}\right)^2 + \frac{f^2}{r^2}\,(b - 1)^2 + \frac{\lambda}{4}\,(f^2 - 1)^2 \right\}.   (3.7)
The functions f(r) and b(r) are to be chosen such that E is minimized. These functions may be obtained numerically to any desired accuracy by integrating the "relaxation" equations

\frac{df}{d\tau} = -\frac{\delta E}{\delta f},   \frac{db}{d\tau} = -\frac{\delta E}{\delta b}.   (3.8)
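In discretized form each relaxation sweep is simply a gradient step on the tabulated profile values. A minimal sketch is given below; the routine DEDF is a stand-in for the evaluation of the derivative of the discretized energy with respect to each tabulated value and is not one of the actual program routines.

C     One relaxation sweep for a tabulated profile function: move each
C     tabulated value F(J) down the gradient G(J) of the discretized
C     energy by a pseudo-time step DTAU.
      SUBROUTINE RELAX(F, G, N, DTAU, DEDF)
      INTEGER N, J
      REAL F(N), G(N), DTAU
      EXTERNAL DEDF
C     evaluate G(J) = dE/dF(J) for the current table of values
      CALL DEDF(F, G, N)
      DO 10 J = 1, N
         F(J) = F(J) - DTAU*G(J)
   10 CONTINUE
      RETURN
      END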
To integrate these relaxation equations we have used a multigrid relaxation method, which is described in further detail in section 4. Briefly, we obtain values for f(r) and b(r) for several hundred non-uniformly spaced values of r, and can then obtain the values of the functions at any other value of r by interpolation. The result is that we may consider f(r), b(r), and their derivatives to be known functions.

One minor complication with boosting the field configuration of eqs. (3.6) is that this leads to a mixing between the x and t components of the vector potential, so that the temporal gauge condition A^0(x) = 0 is no longer respected. To compensate for this we use a remaining gauge freedom to make a time-independent gauge transformation such that A_1(x) = 0 at t = 0. The gauge transformation that accomplishes this is

\chi(x, y) = y\, I_2(x, y),   (3.9a)

where

I_2(x, y) = \int_0^x \frac{b(\tilde r)}{\tilde r^2}\, d\xi,   \tilde r^2 = \xi^2 + y^2,   (3.9b)

and the transformed y component of the vector potential becomes

A_2(x, y) = \frac{x}{r^2}\, b(r) + I_2(x, y) + y\, \frac{\partial I_2(x, y)}{\partial y}.   (3.9c)
The integral I_2 is performed numerically, so it and its derivative may also be considered to be known functions of x and y. Applying first the gauge transformation and then the Lorentz transformation gives the field configuration for a moving vortex in the temporal gauge. Two such vortex fields \phi_1 and \phi_2 are superimposed with the composition rule

\phi(x) = \phi_1(x)\,\phi_2(x),
\pi(x) = \pi_1(x)\,\phi_2(x) + \phi_1(x)\,\pi_2(x),
A^i(x) = A_1^i(x) + A_2^i(x),
E^i(x) = E_1^i(x) + E_2^i(x).   (3.10)
For convenience we have usually collided vortices by boosting them in the \pm x direction, but we have also run simulations which included a further rotation of the direction of motion of the vortices, so that they collide along no special lattice direction.

3.3. Boundary conditions

In this kind of simulation the proper treatment of the boundary conditions is important and requires special attention. We would like to simulate an infinitely large volume on a finite lattice, and to maintain energy conservation for the closed system. To these ends we developed a code where either periodic or free boundary conditions can be implemented in any direction. Of course the choice of a particular kind of boundary condition should not affect the end result of a simulation run on a sufficiently large volume, but in practice care must be taken so that the boundaries do not introduce spurious or reflected signals. We have specifically avoided any sort of "reflecting" or "driven" boundary conditions.

Because we are dealing with a gauge theory, there is more to imposing periodic boundary conditions than simply identifying boundaries: the fields which are to be identified on the boundaries may still differ by a time-independent gauge transformation. Indeed, periodicity only up to a gauge transformation must be imposed if the system is to contain a non-vanishing net number of vortices. We therefore map the fields from one boundary into another with an extra "patching" function to account for this gauge transformation. The transformation is

\phi(x_T) := \exp\{i\chi(x_F)\}\,\phi(x_F),
\theta^i(x_T) := \theta^i(x_F) + \Delta\theta^i(x_F),
\pi(x_T) := \exp\{i\chi(x_F)\}\,\pi(x_F),
E^i(x_T) := E^i(x_F),   (3.11)
where data from the x_F sites are carried to the x_T sites. The gauge transformation is determined by the phase of the Higgs field in the initial configuration, namely

\chi(x_F) = \arg\phi(x_T) - \arg\phi(x_F),   \Delta\theta^i(x_F) = \chi(x_F + \hat\imath) - \chi(x_F)   (3.12)
(c.f. eq. (3.3c)). The patching functions in eqs. (3.12) are computed once from the initial data and stored for subsequent use; they are used to impose the periodic boundary conditions according to eqs. (3.11) before each update step. For free boundary conditions one can imagine that the links that cross out of the lattice across the boundary simply do not exist. Actually, we set the values of the fields on the boundaries so that the derivatives across these links are zero. This corresponds to Neumann boundary conditions. The algorithm for imposing free boundary conditions is described in detail in section 4 below.
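As an illustration of eq. (3.11), the patched copy of the scalar field from the "from" boundary sites to the "to" boundary sites can be written as a single gather/scatter loop. The sketch below uses illustrative array and routine names, not those of the actual code (which is organized as described in section 4):

C     Copy the Higgs field from the "from" boundary sites to the "to"
C     boundary sites, multiplying by the stored gauge patch exp{i chi}.
C     NB is the number of boundary sites; F(.,1) and F(.,2) hold the
C     real and imaginary parts of phi for the NV lattice sites.
      SUBROUTINE PBCPHI(F, NV, IFROM, ITO, CHI, NB)
      INTEGER NV, NB, IFROM(NB), ITO(NB), I, J
      REAL F(NV,2), CHI(NB), FR, FI
      DO 10 I = 1, NB
         J  = IFROM(I)
         FR = F(J,1)
         FI = F(J,2)
C        apply the patching gauge transformation and scatter the result
         F(ITO(I),1) = COS(CHI(I))*FR - SIN(CHI(I))*FI
         F(ITO(I),2) = SIN(CHI(I))*FR + COS(CHI(I))*FI
   10 CONTINUE
      RETURN
      END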
4. Program structure Our code consists of three main programs called “vxmake”, “vxmake3d” and “vxrun”, a collection of
special purpose subroutines, and several libraries of general purpose functions and subroutines, including a
library of routines that perform standard vector operations. The program vxmake is used to create an initial two dimensional vortex configuration and store it in a disk file, while vxmake3d creates a three dimensional string configuration. The program vxrun reads a configuration from a disk file, performs a specified number of update steps and measurements, and then stores the updated configuration back in the disk file. Thus to perform a particular numerical experiment one must run the programs vxmake or vxmake3d once to create the initial configuration and then run vxrun as many times as needed to complete the experiment.

4.1. Creating a configuration

The dependence of the program vxmake on other functions and subroutines is described by the Unix "make" file shown in fig. 1. The first thing vxmake does is ask for a value of the "message level", which is stored in the integer variable MSGLVL. The program and many of its subroutines use this value to determine whether or not to print certain informational, warning, or error messages. The message level should be a number between 1 and 10, with 1 being relatively quiet and 10 quite verbose. We usually run the program with a message level of 5.
# makefile for Cosmic String program vxmake
.SUFFIXES: .o .f

MAIN = vxmake.o vxindx.o vxprof.o vxbint.o vxboost.o \
       vxffunc.o vxbfunc.o vxi2fn.o vxgetby.o vxbndry.o \
       vxenergy.o vxleap.o vxgauss.o vxtrak.o \
       vxzero.o vxstor.o \
       vxvmap.o ullib.o vectlib.o stringlib.o unixlib.o

vxmake: $(MAIN); f77 $(MAIN) -o vxmake

.f.o: ; f77 -u -c -C $*.f

(In addition, each object file depends on its own source file and on the common-block include files vxcom1.f and, for several of the routines, vxcom2.f.)

Fig. 1. Makefile used to construct the program vxmake.
However, if a problem or error occurs, one of the first things we can do is re-run the program with a higher message level and use the additional output to track down the error. This technique proved quite useful in the debugging phase of the development of our code.

The program next asks for the value of the coupling constant \lambda (stored in LAMBDA), the kind of boundary conditions to be used for the boundaries in each direction (stored in the array KINDBY), and the size of the lattice (stored as the number of sites in each direction in the array NSITE). It then calls the subroutine vxindx to initialize the indexing and shift vectors.

All of the field variables for a two or three dimensional lattice are actually stored as one dimensional vectors. Each lattice site is assigned an index (often called IDX) that corresponds to its position in the data vectors. Computing derivatives requires that we obtain data from nearest neighbor lattice sites. This is done by performing a gather instruction using the indices for the nearest neighbor sites in a given direction. We use a similar scheme for imposing the boundary conditions: we gather data from chosen sites, manipulate it appropriately and then scatter it to the boundary sites. The subroutine vxindx computes the indices for all these gather and scatter operations. Any scheme for indexing the lattice sites could be used, but we have chosen the following: those lattice sites not on a boundary make up the "fiducial" volume, and these sites appear first in the ordering, followed by the boundary sites. The advantage of this is that often we need only perform a calculation using the sites within the fiducial volume. We can do this simply by using a short vector length, which saves us the unnecessary work of applying the operation to the boundary sites.

The indexing and shift vectors created by vxindx are stored in the common block /SHIFTV/. The array ISHFW(IDX,MU) gives the index of the lattice site in the forward MU direction from the site IDX. The array ISHBK(IDX,MU) gives the index of the lattice site in the backward MU direction from the site IDX. The array IDXFN(IX,IY,IZ) gives the index IDX of the lattice site with coordinates (IX,IY,IZ), while IDXINV(IDX,MU) gives the MUth coordinate on the lattice of site IDX.

The subroutine vxindx stores the gather and scatter index arrays for the boundary conditions in the common block /BNDRY/. The array IDXFM(I,MU,JBKFW) is a list of the indices of sites from which boundary data are to be taken (gathered) for the boundary perpendicular to the MUth axis in the backward (JBKFW = 1) or forward (JBKFW = 2) direction. Similarly, IDXTO(I,MU,JBKFW) is a list of the indices of sites to which boundary data are to be put (with a scatter instruction). NVBY(MU) is the number of sites in the boundary perpendicular to the MUth axis. Sites at the corners of the lattice (or the edges in the case of 3 dimensions) are actually counted more than once because they appear in more than one boundary. This proves useful when free boundary conditions are imposed and results in a small and harmless redundancy when periodic boundary conditions are imposed.

After the index arrays have been defined the main program initializes the field variables to the vacuum configuration

\phi(x) = 1,   \pi(x) = 0,   \theta^\mu(x) = E^\mu(x) = 0.   (4.1)

The fields \phi(x) and \pi(x) are stored in the arrays F(IDX,J) and P(IDX,J); the fields are complex valued, so the second index runs over real and imaginary parts.
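For example, with the shift vectors and field arrays just described, the forward covariant difference of eq. (3.1) needed in the update can be formed with one gather per direction. The following routine is a schematic illustration of the gather style used throughout the code, not the literal text of any of its subroutines:

C     Gather the forward neighbours of phi in direction MU and form the
C     lattice covariant difference phi(x+mu) - exp(i theta)*phi(x) for
C     the first NFID (fiducial) sites.  F(.,1:2) holds phi, TH(.,MU)
C     the link angles, and ISHFW the shift indices built by vxindx.
      SUBROUTINE COVDIF(F, TH, ISHFW, NV, ND, MU, NFID, DR, DI)
      INTEGER NV, ND, MU, NFID, ISHFW(NV,ND), I, J
      REAL F(NV,2), TH(NV,ND), DR(NFID), DI(NFID), C, S
      DO 10 I = 1, NFID
         J = ISHFW(I,MU)
         C = COS(TH(I,MU))
         S = SIN(TH(I,MU))
         DR(I) = F(J,1) - (C*F(I,1) - S*F(I,2))
         DI(I) = F(J,2) - (S*F(I,1) + C*F(I,2))
   10 CONTINUE
      RETURN
      END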
Storing the real and imaginary parts explicitly, rather than using type COMPLEX vectors, allows for the possibility of generalizing the code to a non-Abelian gauge group in the future. The fields \theta_\mu(x) and E_\mu(x) are stored in the arrays TH(IDX,MU) and EH(IDX,MU); the second index runs over vector components. The arrays containing the field values are made available to all subroutines by storing them in the common block /FIELDS/.

The next task to be performed is to obtain the radial profile functions f(r) and b(r) of eq. (3.6) so that the initial data for a moving vortex can be computed. This is done by the subroutine vxprof, which computes these functions by a multigrid relaxation method and puts the results into the common block /RADPROF/ in the arrays FOFR(J), BOFR(J) and RVAL(J). The routine divides the interval 0 \le r \le RMAX into at least N = NMAX non-uniformly spaced points

r_j = -\frac{1}{c}\,\ln\!\left(1 - \frac{j}{N+1}\right),   (4.2)
where

c = \ln(N+1)/\mathrm{RMAX},   (4.3)

so that r_N = RMAX. This distribution of the r_j gives many more sampling points close to r = 0, where the functions f(r) and b(r) have more curvature. The values of r_j are stored in RVAL(J), while the corresponding values of f(r_j) and b(r_j) are stored in FOFR(J) and BOFR(J). Later these tables of values are used by the functions ffunc(r) and bfunc(r) to compute any value of f(r) or b(r) by four-point interpolation. The derivatives of f(r) and b(r) are also needed, and these too are computed numerically for any value of r, by the functions fderiv(r) and bderiv(r), using a weighted left-right difference scheme which has an error that is second order in the spacing between sampling points.

Using a multigrid technique to calculate f(r) and b(r) speeds the relaxation considerably. The routine first relaxes to the appropriate solution on a relatively small number of points. Then it doubles the number of points (by interpolation between existing points in the \xi variable) and repeats the process to get a better approximation. The division into finer and finer grids repeats NMGMAX - 1 times. The advantage of using this multigrid method is that on the coarser grid we can use a larger time step, so that the system relaxes to the approximate solution more quickly. On the finer grids we must use a smaller time step, but we are starting from a configuration that is already very close to the exact solution, so fewer steps are needed.

Once vxprof has computed this radial profile data it writes it to the disk file VXRADDAT. The next time the program is run, the first thing vxprof does is look for this file. If the file exists and the data are acceptable it reads the radial profile data from the file rather than recomputing it by the multigrid relaxation. This saves us the trouble of repeatedly re-computing the same data each time the program is run. The condition for the data to be acceptable is for the coupling constant LAMBDA to be exactly the same, and for the data on disk to extend at least to RMAX and contain at least NMAX data points.

After the radial profile data has been obtained the main program calls the routine vxbint to obtain a similar table of values for the integral I_2 in eqs. (3.9), which is used to make a gauge transformation of the boosted vortex such that \theta_1(x) is zero. The results are stored in the common block /I2I4/ in the array AINT2(I,J), and are later used by the subroutine vxi2fn to compute I_2(x, y) and \partial I_2(x, y)/\partial y for arbitrary x and y by two-dimensional interpolation. The routine vxbint computes I_2(x, y) at the vertices of a non-uniformly spaced grid in the first quadrant of the x-y plane. The spacing of the grid in both directions is given by the transformation of eqs. (4.2) and (4.3), with the Ith value of either coordinate stored in the array XYVAL(I). The routine steps through each value of y and performs the integration in x up to the maximum value XYMAX. The integration between each value of x on the grid is performed using Simpson's rule with NDXINT subintervals in the quadrature. NDXINT must be even because the quadrature is second order. We have used 60 subdivisions in the quadrature with good results.
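A minimal sketch of such a composite Simpson quadrature over one grid interval is given below; the integrand G here stands in for the integrand of I_2 in eq. (3.9b), and the routine name is illustrative:

C     Composite Simpson rule for the integral of G(xi,y) over XA..XB
C     with N (even) subintervals.  G is supplied by the caller and here
C     stands in for b(r)/r**2 with r**2 = xi**2 + y**2.
      REAL FUNCTION SIMPSN(G, XA, XB, Y, N)
      INTEGER N, I
      REAL G, XA, XB, Y, H, S, X
      EXTERNAL G
      H = (XB - XA)/REAL(N)
      S = G(XA,Y) + G(XB,Y)
      DO 10 I = 1, N-1
         X = XA + REAL(I)*H
         IF (MOD(I,2) .EQ. 1) THEN
            S = S + 4.0*G(X,Y)
         ELSE
            S = S + 2.0*G(X,Y)
         ENDIF
   10 CONTINUE
      SIMPSN = S*H/3.0
      RETURN
      END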
Once this "boost integral" has been evaluated, the values are also saved for future use in a file called I2I4DAT. The next time the program is run the information is read from this file rather than re-computed, provided the file exists and the data are acceptable. The conditions for the data to be acceptable are that the coupling constant must be the same, the number of grid points in each direction must be at least NXYMAX, the range of coordinate values in each direction must be at least XYMAX, and the number of subintervals used in the quadrature must be at least NDXINT.

The program vxmake is now ready to create the initial configuration of two moving vortices. It asks for input values for the lattice spacing, ALAT, whether the collision is to be between two vortices or a vortex and an antivortex, the separation of the two objects in the x direction, the separation in the y direction (which is the impact parameter b), and the velocity of the two colliding objects with respect to the lattice frame. The main program then calls the subroutine vxboost twice, once to create each moving vortex or antivortex and add it to the field configuration. vxboost takes as input the position on the lattice of the
center of the vortex, the velocity of the vortex and its direction of motion, and a flag that is greater than zero if it is to create a vortex and less than zero if it is to create an antivortex. The subroutine then loops over all lattice sites and computes the values of the field variables on all sites and links. On the sites the fields and their momenta are given by

\phi(x, y) = \frac{f(r')}{r'}\left[ (\gamma x \cos\chi - y \sin\chi) + i(\gamma x \sin\chi + y \cos\chi) \right],   (4.4)

\pi(x, y) = \left.\frac{\partial\phi}{\partial t}\right|_{t=0} = -\beta\, \frac{\partial\phi(x, y)}{\partial x},   (4.5)

where r' = (\gamma^2 x^2 + y^2)^{1/2} is the radial distance of the point (x, y) in the vortex rest frame, and the gauge transformation \chi(x, y) is given by

\chi(x, y) = y\, I_2(\gamma x, y).   (4.6)
Because of the gauge transformation all of the link variables \theta^1(x, y) are zero. The link variables \theta^2(x, y) represent covariant transport across the y-directed links by a finite distance a, so they are the sum of infinitesimal steps along the link,

\theta^2(x, y) = \int_y^{y+a} A_2(x, y')\, dy',   (4.7a)

where

A_2(x, y) = \frac{\gamma x}{r'^2}\, b(r') + I_2(\gamma x, y) + y\, \frac{\partial I_2(\gamma x, y)}{\partial y}.   (4.7b)

The routine computes the integral in eq. (4.7a) using a 10 point Gauss-Legendre quadrature. The momenta conjugate to the link variables (the electric fields) are computed at the midpoint of the link as

E^2(x, y) = -\beta\, \frac{\partial A_2(x, y + a/2)}{\partial x}.   (4.8)
The routine vxboost calls the functions ffunc(r) and bfunc(r) to obtain values of the radial profile functions f(r) and b(r) for any given r. Both of these routines take r and transform it to the \xi variable of eq. (4.2). The nearest tabulated value below \xi in the data table RVAL(K) is \xi_k, where

k = \mathrm{INT}\big( (N+1)(1 - e^{-cr}) \big).   (4.9)
The interpolated value of the function is then computed by four-point interpolation as

f(\xi(r)) = \sum_{j=-1}^{2} f(\xi_{k+j}) \prod_{\substack{i=-1 \\ i\neq j}}^{2} \frac{\xi - \xi_{k+i}}{\xi_{k+j} - \xi_{k+i}}.   (4.10)

The interpolation interval is shifted appropriately if \xi lies too close to the top or bottom of the interval in which the values of f(\xi_k) are known.
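In code, eq. (4.10) is a short double loop. The sketch below shows the interpolation step in isolation (the actual library routines ffunc and bfunc also perform the change of variable of eq. (4.2) and the end-of-table shifts just mentioned; the routine name here is illustrative):

C     Four-point Lagrange interpolation of a tabulated function.
C     XT holds the tabulated abscissae, FT the tabulated values, and
C     the caller guarantees 2 .LE. K .LE. NMAX-2 so that the four
C     points XT(K-1)..XT(K+2) bracket X.
      REAL FUNCTION FINT4(X, XT, FT, NMAX, K)
      INTEGER NMAX, K, I, J
      REAL X, XT(NMAX), FT(NMAX), SUM, P
      SUM = 0.0
      DO 20 J = -1, 2
         P = FT(K+J)
         DO 10 I = -1, 2
            IF (I .NE. J) P = P*(X - XT(K+I))/(XT(K+J) - XT(K+I))
   10    CONTINUE
         SUM = SUM + P
   20 CONTINUE
      FINT4 = SUM
      RETURN
      END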
The subroutine vxboost also calls the functions fderiv(r) and bderiv(r) to obtain the values of the derivatives of f(r) and b(r) for any given r. These routines find the appropriate interval r_k \le r < r_{k+1} (cf. eq. (4.9)) and then compute the derivative as

\frac{df}{dr} \approx \frac{h_2\Delta_1 + h_1\Delta_2}{h_1 + h_2},   (4.11)

where

h_1 = r - r_k,   h_2 = r_{k+1} - r,   (4.12)

and

\Delta_1 = \frac{f(r) - f(r_k)}{h_1},   (4.13a)
\Delta_2 = \frac{f(r_{k+1}) - f(r)}{h_2}.   (4.13b)

The advantage of this over a simple first order difference is that the leading term in the error is

\frac{1}{6}\, h_1 h_2\, \frac{d^3 f}{dr^3},   (4.14)
that is, the usual second-derivative, order-h term in the error has been cancelled.

The routine vxboost also obtains the values of the boost integral I_2(x, y) and its derivative \partial I_2/\partial y for a given (x, y) by calling the subroutine vxi2fn. This routine transforms x and y into \xi_x and \xi_y according to eq. (4.2) and then finds the square in the data grid such that \xi_{x1} \le \xi_x < \xi_{x2} and \xi_{y1} \le \xi_y < \xi_{y2}. Then with f_{11}, f_{21}, f_{12} and f_{22} the known values of the function at the corners (\xi_{xi}, \xi_{yj}), the interpolated value of I_2(x, y) is

I_2(x, y) \approx \frac{1}{A}\Big[ f_{11}(\xi_{x2}-\xi_x)(\xi_{y2}-\xi_y) + f_{21}(\xi_x-\xi_{x1})(\xi_{y2}-\xi_y) + f_{12}(\xi_{x2}-\xi_x)(\xi_y-\xi_{y1}) + f_{22}(\xi_x-\xi_{x1})(\xi_y-\xi_{y1}) \Big],   (4.15)
where

A = (\xi_{x2} - \xi_{x1})(\xi_{y2} - \xi_{y1}).

The routine actually computes the function for x \ge 0, y \ge 0 and then multiplies the result by the proper sign to get the value for any other quadrant. Similarly, the derivative is computed as

\frac{\partial I_2(x, y)}{\partial y} \approx \frac{1}{A}\Big[ (f_{12}-f_{11})(\xi_{x2}-\xi_x) + (f_{22}-f_{21})(\xi_x-\xi_{x1}) \Big]\, \frac{d\xi_y}{dy},   (4.16)

where

\frac{d\xi_y}{dy} = c\, e^{-cy} = c\left(1 - \frac{\xi_y}{N+1}\right).

This will be recognized as a linear interpolation between expressions for the derivative at two values of x.
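The bilinear interpolation of eq. (4.15) on a single grid cell can be sketched as follows (an illustrative routine, not the text of vxi2fn):

C     Bilinear interpolation of a tabulated function of two variables
C     on the cell (X1,Y1)-(X2,Y2) with corner values F11, F21, F12, F22
C     (the second index of Fij increasing with Y).
      REAL FUNCTION BILIN(X, Y, X1, X2, Y1, Y2, F11, F21, F12, F22)
      REAL X, Y, X1, X2, Y1, Y2, F11, F21, F12, F22, A
      A = (X2 - X1)*(Y2 - Y1)
      BILIN = ( F11*(X2-X)*(Y2-Y) + F21*(X-X1)*(Y2-Y)
     &        + F12*(X2-X)*(Y-Y1) + F22*(X-X1)*(Y-Y1) ) / A
      RETURN
      END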
Once the field values for a single moving vortex have been determined, the vortex is "added" to the field configuration on the lattice with the composition rule in eq. (3.10). Once both vortices have been created (it would be possible to add more vortices to the configuration) the main program calls the routine vxgetby to get the information necessary for imposing the boundary conditions. For periodic boundaries this subroutine computes the transformation functions \chi(x) and \Delta\theta^i(x) in eq. (3.12), while for free boundary conditions it simply sets these transformation functions to zero. The transformation functions are stored in the common block /BNDRY/ in the arrays EXPCHI(IDX,J,MU,JBKFW) and THBNDY(IDX,ID,MU,JBKFW), where IDX runs over boundary sites, J labels real and imaginary parts, ID labels vector components, MU is the direction perpendicular to the boundary, and JBKFW selects either the backward or forward directed boundary.

With the boundary information determined, the main program calls the subroutine vxbndry to impose the boundary conditions. This routine loops over the possible boundary directions and imposes periodic or free boundary conditions, as appropriate. Periodic boundary conditions are imposed by applying the gauge transformation in eq. (3.11), using the information stored in common by the routine vxgetby. Free boundary conditions are imposed as follows: first the \phi(x, y) and \pi(x, y) fields are gathered from the sites at the edge of the fiducial volume and scattered to the neighboring boundary sites. The gauge potentials on the links across the boundary are then set to zero, so that covariant derivatives across these links are now zero. Finally the gauge potentials on the links perpendicular to the boundary links are copied from the surface of the fiducial volume to the neighboring boundary links, so that the plaquettes which contain boundary sites are zero.

Once the boundary conditions have been imposed the main program calls the routine vxenergy to compute the energy density and total energy, as given by the Hamiltonian
H = a^d \sum_x \Big\{ \pi_x^*\pi_x + \frac{1}{2}\sum_\mu \big( |\nabla_\mu\phi_x|^2 + |\nabla_\mu\phi_{x-\hat\mu}|^2 \big) + \frac{\lambda}{4}\big(\phi_x^*\phi_x - a^2\big)^2 + \frac{1}{4a^2}\sum_\mu \big[ (E_x^\mu)^2 + (E_{x-\hat\mu}^\mu)^2 \big] + \frac{1}{8e^2a^4}\sum_{\mu<\nu} \big[ (\Theta_x^{\mu\nu})^2 + (\Theta_{x-\hat\mu}^{\mu\nu})^2 + (\Theta_{x-\hat\nu}^{\mu\nu})^2 + (\Theta_{x-\hat\mu-\hat\nu}^{\mu\nu})^2 \big] \Big\}.   (4.17)
This is the same as the Hamiltonian in eq. (3.2) (with A_x^0 = 0), except for the extra averaging factors. These are included so that the energy density computed on the lattice sites more accurately reflects the energy density of the continuum system. The derivative terms in the Laplacian properly live on the links between two sites, so we average over the forward and backward links from a site. Similarly, the plaquette (curl) term properly lives at the center of the plaquette, so we average over the 4 plaquettes that join at a given site. One can easily verify that the equations of motion for the Hamiltonian above are the same as those given in eqs. (3.5), and that the total energy is also the same.

In addition to computing the energy density, which is stored in the array H(IDX), the routine also accumulates the potential and kinetic energies in the arrays PE(IDX) and KE(IDX), and the scalar potential V(\phi) and the momentum density \pi^*\pi in the arrays V(IDX) and PISQ(IDX). The energy density and totals for all of these sub-partitions of the energy are stored in the common block /ENERGY/, while the arrays PE, KE, V, and PISQ are stored in working storage, so that they are available to an immediately following routine but the storage can also be used for working storage by other routines. See the description of how we conserve working storage given below.

The final preparation of the configuration is to step the momenta half a time step ahead of the fields so that we may use a leapfrog updating scheme to integrate the equations of motion. The leapfrog algorithm gives marked improvement in the stability of the dynamics, as measured by the conservation of energy, over the simpler approach of updating both the fields and their momenta on the same time step. The main program chooses a time step for the leapfrog update step of

\Delta t = \mathrm{TSF} \times a/\mathrm{NSXY},   (4.18)
where a is the lattice spacing, NSXY is the number of lattice sites in any one direction, and TSF is a scale factor supplied by the user. The time step is automatically made smaller for larger lattices and smaller lattice spacings. With TSF = 1.0 we obtain excellent stability, and so it is possible to increase TSF to conserve computer time.

To start the leapfrog (a process which we refer to as the "jump-start") the main program saves the field values in temporary storage and divides the time step into smaller intervals of \Delta t/NDTST, where NDTST = 50 x TSF, but with NDTST at least 10. The main program then calls the routine vxleap to perform NDTST of these very small time steps using the leapfrog implementation of the equations of motion. After this the fields are restored to their original values, leaving only the momenta one-half of a time step ahead.

With the initial configuration completely specified the main program writes the fields and momenta to a disk file by calling the routine vxstor. The program also creates several other files for the storage of measurements during a run. One file, called the Z0 file (the name of this file is the name of the configuration with "Z0" appended to the end), is for storing the positions of the zeros of the scalar field \phi(x, y). Another file, called the Z1 file, is for storing consecutive copies of the field configurations (as if we were writing to magnetic tape) for later analysis or to facilitate plotting the field configuration with a graphics program. A third file, the Z2 file, is similar to the Z0 file except that the positions of the vortices are determined from the center of the energy distribution.

The program vxmake creates a two dimensional vortex configuration. To create a three dimensional string configuration we run the program vxmake3d instead. This program is similar to vxmake, except that to create a line-vortex (a string) it calls the routine vxstring instead of vxboost. The routine vxstring takes as input the coordinates of a point (x, y, z) along the string, the velocity \beta of the string in the +x direction, and the angle \theta the string is to make with respect to the z axis. The routine loops over all sites and computes the field values just as vxboost does.

4.2. Running a simulation

The dependence of the program vxrun on other functions and subroutines is described by the Unix "make" file shown in fig. 2. The first thing vxrun does is ask for the message level and the name of the configuration to run; it calls the routine vxload to read in the field configuration from that file. The program then reads as input the total number of iterations for the run, and the frequency with which it is to (1) report the energy of the system, (2) measure the positions of the zeros of the scalar field and write them to the Z0 file, (3) append a copy of the current field configuration to the Z1 file, and (4) measure the position of the center of energy. The main program opens the Z0, Z1 and Z2 files and positions the file pointers at the end of the files. If for some reason one of these files does not exist it is created and appropriate headers are written at the beginning of the file.

In the main loop the program determines the number NSTEP of iterations to be performed until the next scheduled measurement or other "event" and calls the subroutine vxleap to perform these iterations using the leapfrog dynamics. The routine vxleap in turn calls the subroutine vxbndry after each iteration to impose the boundary conditions.
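The essential structure of one leapfrog iteration is sketched below for a generic field/momentum pair. This is a schematic of what vxleap does, not its literal text; FORCE stands in for the evaluation of the lattice force (minus the gradient of the Hamiltonian) and is supplied by the caller.

C     One leapfrog step for a generic field PHI and its conjugate
C     momentum PI, with PI kept half a time step ahead of PHI.
      SUBROUTINE LEAP(PHI, PI, FRC, N, DT, FORCE)
      INTEGER N, I
      REAL PHI(N), PI(N), FRC(N), DT
      EXTERNAL FORCE
C     advance the fields a full step using the half-step-ahead momenta
      DO 10 I = 1, N
         PHI(I) = PHI(I) + DT*PI(I)
   10 CONTINUE
C     recompute the force at the new field values ...
      CALL FORCE(PHI, FRC, N)
C     ... and advance the momenta a full step, preserving the
C     half-step offset between momenta and fields
      DO 20 I = 1, N
         PI(I) = PI(I) + DT*FRC(I)
   20 CONTINUE
      RETURN
      END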
After the specified number of iterations has been run the main program calls the routine vxstor to write the configuration back to the disk file. It also prints timing information about the run.

Both of the programs vxmake and vxrun call various minor subroutines to make measurements on the configuration. The routines vxcm and vxtrak determine the position of the vortices by measuring the center of energy (the first moments of the energy distribution). vxtrak divides the lattice into five "sectors": left, right, up, down, and central. It reports the center of any energy distribution which is above a certain threshold, which is chosen to insure that there is then a vortex in the sector. The routine vxzero determines the positions of the zeros of the scalar field \phi(x, y). It first finds the plaquettes on the lattice which have a non-zero winding number for the phase of \phi(x, y). It then uses a
# makefile for Cosmic String program vxrun
.SUFFIXES: .o .f

MAIN = vxrun.o vxload.o vxindx.o vxbndry.o vxgetby.o \
       vxenergy.o vxleap.o vxtrak.o \
       vxgauss.o vxzero.o vxstor.o \
       vxvmap.o ullib.o vectlib.o stringlib.o unixlib.o

vxrun: $(MAIN)
	f77 $(MAIN) -o vxrun

.f.o:
	f77 -u -c -C -q nocompiler_registers $*.f

(Each object file also depends on its own source file and on the common-block include file vxcom1.f.)

Fig. 2. Makefile used to construct the program vxrun.
two-dimensional interpolation of the complex field (based on eq. (4.15)) to find the point where both the real and imaginary parts vanish.

The routine vxgauss measures the small deviation of the fields from the Gauss law constraint in eq. (3.4). We can then choose our time step small enough that the numerical violation of the constraint (as well as the numerical violation of the conservation of energy) is kept to an acceptably small value.

4.3. Libraries

Our code uses several libraries of subroutines for performing general purpose functions and standard vector operations. The routines in vectlib perform such basic vector operations as addition, subtraction and multiplication, gather and scatter, vector trigonometric functions, and the dot product. The routines are written in standard FORTRAN-77 so they may also be used on a scalar machine, but the routines are properly vectorized by the FORTRAN-200 compiler on the Cyber 205 and the ETA-10. In some cases "unsafe" vectorization is required for the routine to be successfully vectorized, so we have taken special care to make these routines "safe" for vectors longer than 65535 elements. A further level of optimization for the Cyber 205 and ETA-10 could be obtained by replacing the innermost loop of each routine with a "Q8" special call.

The routines in ullib are for performing vector operations on complex numbers (which are elements of the Abelian group U(1)). The routines in stringlib are for performing basic manipulations on character strings, such as finding the length of a string (omitting trailing spaces), obtaining the next token from a token list, converting a string to uppercase, determining whether a token occurs in a list of possible responses, or converting a string to a real number.
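Each of the vectlib routines has the shape of the following illustrative kernel (the routine name is hypothetical): a single loop over a vector, free of the data dependences that would inhibit vectorization, which a vectorizing compiler turns into hardware vector instructions but which remains ordinary FORTRAN-77 on a scalar machine.

C     z(i) = x(i) + s*y(i) for i = 1..n  (vectlib-style kernel).
C     Keeping the result array distinct from the operand arrays leaves
C     the loop free of dependences, so it vectorizes "safely".
      SUBROUTINE VSAXPY(N, S, X, Y, Z)
      INTEGER N, I
      REAL S, X(N), Y(N), Z(N)
      DO 10 I = 1, N
         Z(I) = X(I) + S*Y(I)
   10 CONTINUE
      RETURN
      END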
The routines in cyberlib perform all machine-dependent operations on the Cyber 205, as described later, while the routines in ETAlib do the same on the ETA-10. For developing the code on a smaller Unix-based machine we replace this library with unixlib, or on a VAX/VMS system we use vmslib.

4.4. Working storage

Our programs have been written as vector codes to speed execution on a vector machine such as the ETA-10. Vector code is faster due to a hardware advantage, but it also requires more storage for saving temporary vector results. We have therefore taken extra steps to conserve such storage whenever possible. Most implementations of FORTRAN do not provide dynamic allocation of working storage, so we have implemented our own crude system for doing this by putting temporary vector storage in the shared common blocks /WORK01/, /WORK02/, and /WORK03/. Each routine that needs temporary storage allocates the necessary portion of these common blocks by means of the FORTRAN EQUIVALENCE statement. To avoid possible conflicts between different routines the character variables LOCK1, LOCK2, and LOCK3 are used as locks on the working storage common blocks. When a working storage area is free for use its lock variable is blank, but when a routine is using that area it puts the name of the routine in the lock variable. Any other routine which tries to use a working storage area which is already locked by another routine prints an error message and halts program execution. In practice, of course, this never happens, but if a change to the program produces a conflict it is quickly discovered.
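The locking convention can be sketched as follows. The routine name, array size and the common block holding the lock variable are illustrative, but the behaviour (halt with a message if the area is already in use, otherwise record the caller's name) is as described above.

C     Claim working-storage area 1 for the calling routine, halting
C     with a message if another routine already holds it.
      SUBROUTINE GETWK1(NAME)
      CHARACTER*(*) NAME
      CHARACTER*8 LOCK1
      REAL WORK1(500000)
      COMMON /WORK01/ WORK1
      COMMON /LOCKS/  LOCK1
      IF (LOCK1 .NE. ' ') THEN
         WRITE(*,*) 'Working storage 1 held by ', LOCK1,
     &              ' but requested by ', NAME
         STOP
      ENDIF
      LOCK1 = NAME
      RETURN
      END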
4.5. Machine dependence

Even though our code is written entirely in FORTRAN-77 there are a small number of things that vary from machine to machine, such as the way in which a program gets timing information from the system clock or the way it interacts with the file system. We have put all of the machine dependent parts of our codes into a small library of routines, and the code can then be moved to another machine simply by changing this library. In anticipation of the future use of a Unix-like operating system on the ETA-10 we have used routines that mimic standard Unix routines when appropriate.

The biggest difference between the Cyber 205 and other machines as far as I/O is concerned is the way in which a FORTRAN program interacts with the file system (SIL) to do things such as opening and closing files. The VSOS operating system maintains two kinds of files, permanent and local. Only local files are accessible to a job, so before a permanent file can be opened by a program it has to be made local to the job. This can be done either by issuing an ATTACH control statement or by making a call to the system routine Q5ATTACH. Similarly, a FORTRAN CLOSE statement, even with the DISP = 'KEEP' parameter, will keep the file only as a local file unless it is made permanent by issuing a DEFINE control statement or a call is made to the system routine Q5DEFINE.

To implement these system dependent calls in a way that is transportable we introduce the subroutines FOPEN and FCLOSE to open and close files. FOPEN calls Q5ATTACH to attach to an existing permanent file (unless the status argument is 'SCRATCH') before opening the file with an OPEN statement. It also makes sure that the file is rewound. FCLOSE closes the file, and if the disposition parameter is 'KEEP' it calls Q5DEFINE to insure that the file is a permanent file. On other machines we still call the routines FOPEN and FCLOSE instead of using OPEN and CLOSE statements, but we then use modified forms of these routines that do not include calls to the system routines Q5ATTACH or Q5DEFINE.

Because we have used the VSOS operating system on the ETA-10 we have used similar routines on that machine, but with one important difference: the SIL return codes on the ETA-10 for conditions that can
be ignored, such as trying to attach to a file which is already attached, are different from the return code values on the Cyber 205. We therefore use different copies of the libraries on the two machines.

Another subtle difference between machines is how they treat the file pointer when the end of a file is reached. On a VAX, when you attempt to read from a file and instead get EOF, the pointer remains before the EOF mark, so that a record written to the file is appended to the end of the previous data. On the Cyber 205 and ETA-10, when you attempt to read from a file and instead get EOF, the file pointer is positioned after the EOF mark, so the file pointer must be backspaced before writing a record to the file. To treat both cases when we wish to append to a file and have found the EOF we call the routine BACKEOF, which performs a BACKSPACE on the Cyber 205 and ETA-10, but does nothing on the VAX.

Other routines that are in the machine dependent libraries are the following: INQUIRE prints a message and reads a response, returning the response as a string. CHCNVT converts a character string to internal Hollerith format. TODAY returns the date and time in standard Unix format. HOSTNM returns the name of the host machine. DSEC returns the time in seconds since the last time DSEC was called, or since the program was started.
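On a machine without SIL the wrappers reduce to ordinary OPEN and CLOSE statements. A minimal sketch of such a portable version is given below; the argument lists are illustrative, and on VSOS the corresponding routines add the Q5ATTACH and Q5DEFINE calls described above.

C     Portable version of FOPEN: open a file (or a scratch file) on
C     unit LUN and rewind it.
      SUBROUTINE FOPEN(LUN, FNAME, FSTAT)
      INTEGER LUN
      CHARACTER*(*) FNAME, FSTAT
      IF (FSTAT .EQ. 'SCRATCH') THEN
         OPEN(UNIT=LUN, STATUS='SCRATCH')
      ELSE
         OPEN(UNIT=LUN, FILE=FNAME, STATUS=FSTAT)
      ENDIF
      REWIND LUN
      RETURN
      END
C
C     Portable version of FCLOSE: close the unit with the requested
C     disposition ('KEEP' or 'DELETE').
      SUBROUTINE FCLOSE(LUN, DISP)
      INTEGER LUN
      CHARACTER*(*) DISP
      CLOSE(UNIT=LUN, STATUS=DISP)
      RETURN
      END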
5. Sample runs

Our sample runs consist of two jobs, one to run the program vxmake to create a two dimensional vortex configuration and the other to run the program vxrun to propagate this configuration forward in time through the head-on scattering of the two vortices. The first job creates a 200 x 200 lattice with a lattice spacing a = 0.15, a coupling constant \lambda = 2.0 (the critical value) and periodic boundary conditions (KINDBY = 1). The velocity of the two colliding vortices is \beta = 0.5 (half the speed of light). The x and y coordinates on the lattice range from 0.15 to 30.0 and the initial separation of the vortices in their direction of motion is 10.0. The impact parameter is zero (b = 0.0), corresponding to a head-on collision, and both objects are vortices (no antivortices). Once the configuration has been created it is stored in the file IWASHI.

The program vxrun then loads this configuration from the file and performs 7000 leapfrog update steps to propagate it forward in time to t = 21.3015, which is after the collision. The program measures the positions of the zeros of the Higgs field and the positions of the center of energy for the vortices and writes them to the Z0 and Z2 files every 200 iterations. It then stores the final state of the fields back in the file IWASHI. The output periodically reports both the positions of the vortices and their velocities and directions of motion, so it is easy to see that the vortices have scattered to 90°. The program could also have been asked to periodically save a copy of the complete lattice configuration in the Z1 file, although we have not done this in the sample run because the extra I/O slows the program down.

The results of this simple simulation are shown graphically in fig. 3, where we plot the total energy density of the fields as a function of position. At first the two vortices are clearly isolated objects moving toward each other. When they collide they form (briefly) a single doubly wound vortex. (Note the characteristic dimple in the center.) After the collision two isolated vortices emerge, but at 90° from the incident direction. It turns out that this 90° scattering in two dimensions implies that nearly parallel cosmic strings will intercommute when they cross [12]. We have also studied vortex-antivortex collisions in two dimensions, which correspond to the collision of two perfectly antiparallel cosmic strings. We found that when the vortex and antivortex collide they annihilate, and this means that nearly antiparallel strings also intercommute when they cross. Thus our two dimensional simulations actually tell us a lot about cosmic strings in three dimensions when they cross at very shallow angles.

To determine whether or not cosmic strings intercommute at large crossing angles, however, requires a full three dimensional simulation. The results of such a simulation are shown in colorplate 1. We begin
Fig. 3. Energy density of two critically coupled vortices (or parallel cosmic strings) as they collide and scatter at 90°.
with two strings approaching each other from a distance with a crossing angle of 90°. In the simulation shown each string moves toward the other with half the speed of light. We plot the local energy density as a distribution of random points: the thicker the distribution of points, the higher the energy density. We have also color coded the distribution, so that high energy regions are red and lower energy regions are yellow. At first the two strings approach each other, unaware of each other's existence. Once the two strings get close enough (within a string width or so) they start to deform toward each other, even though the coupling constant in this simulation is at the critical value where vortices in two dimensions neither attract nor repel. When the two strings cross they trade ends. When the two new strings separate they have a large amount of energy concentrated on them near the interaction region, and these "kinks" begin to move down the string. We end the simulation when the kinks reach the boundaries. Thus we see that cosmic strings intercommute when they cross at large as well as small angles.
6. Performance and timing

As the output from the sample runs shows, each program prints timing information at the end of a run. In vxrun the total time reported is the time used to propagate the system forward. This does not include overhead for initialization, or for reading the data from disk at the beginning of the run or writing the data at the end of the run, but it does include the time needed for making measurements. The performance of the program is best measured by dividing this run time by the number of update steps performed and by the number of sites in the lattice. In vxmake the time reported is that used to perform the "jump-start" to
Colorplate 1. (a) and (b) Energy density of two cosmic strings colliding at a crossing angle of 90°, demonstrating that they intercommute. The density of points is proportional to the energy density. (c) Energy density of two cosmic strings after colliding at a crossing angle of 90°. The density of points is proportional to the energy density.
In vxmake the time reported is that used to perform the "jumpstart" to initialize the leapfrog algorithm. Although this is usually measured over a smaller number of update steps, it is a better measure of the maximum speed of the machine because it does not include any measurements. For the ETA-10 we obtained a speed of 1.77 microseconds per update step per site. On the Cyber 205 the same simulation runs at a speed of 3.45 microseconds per update step per site, so the ETA-10 is approximately 2 times as fast. For comparison we have also run a somewhat similar job (a 60 x 60 lattice running 50 iterations) on a VAX 8600. The speed was 1688 microseconds per update step per site. The code therefore runs about 950 times faster on the ETA-10. We should point out, however, that since the code was optimized for the vector machine we expect it to run on the scalar machine at a speed less than the optimal speed that could have been obtained had the code been designed for the scalar machine.

Both programs also list the total amount of time used by each major subroutine, both in seconds and as a percentage of the total run time. The total run time listed at the end is the total time the program ran, including initialization and I/O, but the run speed quoted above is computed just from the dynamics of propagating the configuration forward in time.

Acknowledgements

We would like to thank Laurence Jacobs, So-Young Pi, Peter Ruback, Paul Shellard, and Terry Walker for useful conversations, and Richard Strilka, Lola Anacker and Dick Hendrickson for assistance with running the programs. We thank Lloyd M. Thorndyke and Carl S. Ledbetter of ETA Systems, Inc. and Robert M. Price, Tom Roberts and Gil Williams of Control Data Corporation for their continued interest, support and encouragement, and for access to the Cybernet CDC CYBER 205 at Kansas City, MO [Grant No. 135781]. We also thank the National Allocation Committee for the John von Neumann National Supercomputer Center for access to the two CDC CYBER 205's and the ETA-10 at JvNC [Grant Nos. 110128, 17182, 17183, 551701-551705]. This work was supported by the Control Data Corporation PACER Fellowships [Grant Nos. 85PCR06, 86PCR01 and 88PCR01], by ETA Systems, Inc. [Grant Nos. 304658 and 1312963], by the Natural Sciences and Engineering Research Council of Canada [Grant Nos. NSERC A8420 and NSERC A9030], by DOE contract DE-AC02-86ER40284, by Scotia High End Computing Ltd., by Dalhousie University, by the Continuing Education Division of The Technical University of Nova Scotia, by Energy, Mines and Resources, Government of Canada [Grant No. 23313-7-002/01SQ] and by the Canada/Nova Scotia Technology Transfer and Industrial Innovation Agreement [Grant Nos. 87TTII01 and 88TTII01].

References

[1] A. Vilenkin, Phys. Rep. 121 (1985) 263, and references therein.
[2] N. Turok, Phys. Rev. Lett. 55 (1985) 1801.
[3] Ya.B. Zel'dovich, Monthly Notices Roy. Astron. Soc. 192 (1980) 663.
[4] A. Albrecht and N. Turok, Phys. Rev. Lett. 54 (1985) 1868.
[5] T. Vachaspati and A. Vilenkin, Phys. Rev. D 30 (1984) 2036.
[6] K.J.M. Moriarty, E. Myers and C. Rebbi, Phys. Lett. B 207 (1988) 411.
[7] V.L. Ginzburg and L.D. Landau, Zh. Eksp. Teor. Fiz. 20 (1950) 1064.
[8] A.A. Abrikosov, Zh. Eksp. Teor. Fiz. 32 (1957) 1442 (Sov. Phys. JETP 5 (1957) 1174); L.P. Gorkov, Zh. Eksp. Teor. Fiz. 34 (1958) 734; ibid. 36 (1959) 1918 (Sov. Phys. JETP 7 (1958) 505; ibid. 9 (1959) 1364).
[9] E.P.S. Shellard, Nucl. Phys. B 283 (1987) 624.
[10] R.A. Matzner, Computers in Phys. Sept/Oct (1988) 51.
[11] For reviews see: M. Creutz, L. Jacobs and C. Rebbi, Phys. Rep. 95 (1983) 201; J. Kogut, Rev. Mod. Phys. 55 (1983) 775.
[12] K.J.M. Moriarty, E. Myers and C. Rebbi, in: Proc. of the Yale Workshop on Cosmic Strings, eds. F. Accetta and L. Krauss, to be published.
TEST RUN OUTPUT

VXMAKE: CREATE VORTICES
ETA10-ETC          ETC SEP 29 14:15:39 1988

PLEASE ENTER MESSAGE LEVEL (0=QUIET, 10=VERBOSE)
WHAT IS LAMBDA?  2.0000000000000
WHAT KIND OF BOUNDARY CONDITIONS?  1
HOW MANY LATTICE SITES IN EACH DIRECTION?  200
WHAT IS THE INVERSE LATTICE SPACING (1/A)?  6.6686666800000
  LATTICE SPACING A = 0.1500000001500
  COORDINATES RANGE FROM  0.150000  TO  30.0000
WHAT IS THE SEPARATION IN THE X DIRECTION?  10.000000000000
WHAT IS THE SEPARATION IN THE Y DIRECTION (B)?  0.0000000000000
WHAT IS THE VELOCITY OF EACH VORTEX (BETA)?  0.5000000000000
VORTEX-VORTEX (V) OR VORTEX-ANTIVORTEX (A) COLLISION?  V
  VORTEX 1:  10.000000015000   15.000000015000    0.5000000000000
  VORTEX 2:  20.000000055000   15.000000015000   -0.5000000000000
VXGETBY: N=1  REAL-FIELD PERIODIC BOUNDARY CONDITIONS
VXGETBY: N=2  REAL-FIELD PERIODIC BOUNDARY CONDITIONS
STORAGE: CREATING ORDER VECTORS...
OPEN:   ERROR ATTACHING FILE HXRADDAT    ERROR CODE = 1402
        File HXRADDAT does not exist (Q5ATTACH).
VXPROF: FILE HXRADDAT NOT FOUND OR DATA NOT GOOD ENOUGH.
        INITIAL VALUES:  8.25   COVAR=0.5993961421306   DX1=0.53846053840153
MULTIGRID LEVEL 1  N=25
  2 SECTORS ABOVE THRESHOLD.
MULTIGRID LEVEL 2  N=50
MULTIGRID LEVEL 3  N=100
MULTIGRID LEVEL 4  N=200
MULTIGRID LEVEL 5  N=400
VXPROF: WRITING TO HXRADDAT.
OPEN:   ERROR ATTACHING FILE I2I4DAT     ERROR CODE = 1402
        File I2I4DAT does not exist (Q5ATTACH).
USRIRT: INTEGRATING...
USRIRT: WRITING TO I2I4DAT.

T= 0.000000E+00                    ENERGY
  TOTAL ENERGY:        14.0243
  KINETIC ENERGY:       1.80144
  POTENTIAL ENERGY:     8.76438
  SCALAR POTENTIAL:     2.29396
  MOMENTUM SQUARED:     1.06175
  ELECTRIC FIELD:       0.739689
  GRAD SQUARED:         7.47040
  MAGNETIC FIELD:       2.95853

ENTER THE TIME STEP SCALE FACTOR:  4.0000000000000
LEAPFROG DYNAMICS.  TSF= 4.0000000000000   DT= 0.003000000003000
STEPPING MOMENTA FORWARD
199 STEPS TO T= 1.500000E-03 FOR LEAPFROG...

T= 0.500000E-03                    ENERGY
  TOTAL ENERGY:        14.5243
  KINETIC ENERGY:       1.80144
  POTENTIAL ENERGY:     9.70437
  SCALAR POTENTIAL:     2.29386
  MOMENTUM SQUARED:     1.06177
  ELECTRIC FIELD:       0.738680
  GRAD SQUARED:         7.47041
  MAGNETIC FIELD:       2.85803

GAUSS LAW DEVIATION
  TOTAL DEVIATION OVER 39204 SITES:      -2.480570E-04
  MAXIMUM DEVIATION AT A SINGLE SITE:     5.535760E-02
  MAXIMUM CHARGE DENSITY:                 0.339574E-03

WHAT FILE TO STORE THIS CONFIGURATION ON?  IWASHI
FILE: IWASHI       WRITING VERSION 1
CONFIG: IWASHI     FORMAT: VXST02   SAVED ON: ETC SEP 29 14:21:28 1988
  T= 0.150000E-02  DT= 0.300000E-02  ALAT= 0.150000000  LAMBDA= 2.00000
  IVERS= 0  KDYNAM=-1  KORDER=-2  NDIM= 2  NSITE= 200 200  KINDBY= 1

2 ZEROS OF THE HIGGS FIELD:  ((  9.99994  15.0000 )  ( 20.0001  15.0000 ))   1.00000  1.000000
FILE: IWASHIZ0     WRITING VERSION 0
CONFIG: IWASHIZ0   FORMAT: VXZ0     SAVED ON: ETC SEP 29 14:22:02 1988
FILE: IWASHIZ2     WRITING VERSION 0
CONFIG: IWASHIZ2   FORMAT: VXZ2     SAVED ON: ETC SEP 29 14:22:03 1988

  13.8057 SECONDS FOR   199 ITERATIONS
RUNNING AT      1.78959 MICROSEC/STEP/SITE

ROUTINE        TIME         %
MAIN          103.909    87.062
EXINDO          0.624     0.523
VXLEAP         13.501    11.343
VXENERGY        0.136     0.114
VXBNDRY         0.240     0.206
HXTRAK          0.321     0.289
VXGETBY         0.018     0.015
VXZERO          0.538     0.449
TOTAL:        119.351
VXRUN: VORTEX DYNAMICS
ETA10-ETC          ETC OCT 03 02:46:12 1988

PLEASE ENTER MESSAGE LEVEL (0=QUIET, 10=VERBOSE)
WHAT IS THE NAME OF THE CONFIGURATION FILE TO LOAD?  IWASHI
FILE: IWASHI       READING VERSION 1
READING ON ETC OCT 03 02:46:12 1988
CONFIG: IWASHI     FORMAT: VXST02   SAVED: ETC OCT 02 11:12:55 1988
  T= 0.301500  DT= 0.300000E-02  ALAT= 0.150000000  LAMBDA= 2.00000
  IVERS= 1  KDYNAM=-1  KORDER=-2  NDIM= 2  NSITE= 200 200 1  KINDBY= 1 1

HOW MANY ITERATIONS TO RUN?  7000
HOW OFTEN TO REPORT ENERGY?  500
HOW OFTEN TO REPORT ZEROS?  100
HOW OFTEN TO WRITE CONFIGURATION TO Z1 FILE?  0
HOW OFTEN TO MEASURE THE CENTRE OF ENERGY?  100
VXLEAP: PERFORMING 7000 LEAPFROG STEPS.

   IT        T          E         KE         PE
[reports every 500 steps of the total, kinetic and potential energy,
 and every 100 steps of the positions, speeds and directions of motion
 of the two Higgs field zeros and of the sector centres of energy,
 from T= 0.301500 up to T= 21.3015]

GAUSS LAW DEVIATION AT T= 21.3015
  TOTAL DEVIATION OVER 39204 SITES:      -3.40826
  MAXIMUM DEVIATION AT A SINGLE SITE:     3.52704
  MAXIMUM CHARGE DENSITY:                 5.400035E-03

FILE: IWASHI       WRITING VERSION 2
CONFIG: IWASHI     FORMAT: VXST02   SAVED ON: ETC OCT 03 02:54:42 1988
  T= 21.3015  DT= 0.300000E-02  ALAT= 0.150000000  LAMBDA= 2.00000
  IVERS= 2  KDYNAM=-1  KORDER=-2  NDIM= 2  NSITE= 200 200 1  KINDBY= 1

TOTAL RUN TIME:   544.051 SECONDS FOR  7000 ITERATIONS
RUNNING AT          1.98582 MICROSEC/STEP/SITE

ROUTINE        TIME         %
MAIN           25.342     4.038
VXLOAD         50.940     0.141
VXSTORE         0.612     0.095
VXLEAP        471.574    75.653
VXENERGY        4.616     0.771
VXBNDRY         8.019     1.284
HXTRAK         22.476     3.000
VXZERO         37.817     6.088
TOTAL:        524.486