A parallel code for the numerical simulation of flux tubes in superconductors and cosmic strings


Computer Physics Communications 78 (1993) 141—154 North-Holland

Richard Strilka
Physics Department, Boston University, Boston, MA 02215, USA

Received 14 April 1993

We present two programs written in CM Fortran that simulate the dynamical interactions of flux-tubes in superconductors and cosmic strings. One program simulates parallel line-vortices (a 2-dimensional problem) while the other simulates the full 3-dimensional problem. Both programs have been written to take advantage of the parallel architecture and data motion features of the Connection Machine.

PROGRAM SUMMARY

Title of program: vxmkgo2D

Catalogue number: ACPD

Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland (see application form in this issue)

Licensing provisions: none

Computer: Connection Machine CM2 and CM5

Operating system under which the program is executed: Unix

Programming language used: CM Fortran

Memory required to execute with typical data: size dependent (2 Mbytes for a 256² lattice)

No. of bits in a word: 32

No. of processors used: varies

Has the code been vectorised? Only for the CM5

Peripherals used: data-vault, scalable-disc-array, frame-buffer

No. of lines in distributed program, including test data, etc.: 3126

Keywords: line-vortices, superconductors, cosmic strings, parallel computing, Connection Machine

Nature of physical problem
Vortices appear in a wide variety of physical phenomena. Vortices in the Ginzburg-Landau model (phenomenological theory of superconductivity) and in Grand Unified Theories of elementary particle interactions (in which case they are called cosmic strings) share a common theoretical background. Understanding the dynamical interactions of parallel line-vortices is important to both fields.

Method of solution
We derive Hamilton's equations by employing techniques from lattice gauge theories. A leap-frog algorithm is used to integrate the equations of motion.

Typical run time: about 5 minutes

Correspondence to: R. Strilka, Physics Department, Boston University, Boston, MA 02215, USA.

0010-4655/93/$06.00 © 1993 Elsevier Science Publishers B.V. All rights reserved


PROGRAM SUMMARY

Title of program: vxmkgo3D

Catalogue number: ACPE

Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland (see application form in this issue)

Licensing provisions: none

Computer: Connection Machine CM2 and CM5

Operating system under which the program is executed: Unix

Programming language used: CM Fortran

Memory required to execute with typical data: size dependent (2.56 Mbytes for a 64³ lattice)

No. of bits in a word: 32

No. of processors used: varies

Has the code been vectorised? Only for the CM5

Peripherals used: data-vault, scalable-disc-array, frame-buffer

No. of lines in distributed program, including test data, etc.: 2662

Keywords: line-vortices, superconductors, cosmic strings, parallel computing, Connection Machine

Nature of physical problem
Vortices appear in a wide variety of physical phenomena. Vortices in the Ginzburg-Landau model (phenomenological theory of superconductivity) and in Grand Unified Theories of elementary particle interactions (in which case they are called cosmic strings) share a common theoretical background. Understanding the full 3-dimensional dynamics is important and may have implications for the formation of large-scale structure in the universe.

Method of solution
We derive Hamilton's equations by employing techniques from lattice gauge theories. A leap-frog algorithm is used to integrate the equations of motion.

Typical run time: about 15 minutes

LONG WRITE-UP

1. Introduction

Vortices are line-like topological defects which may exist in field theories that possess global and/or local symmetries. They have been the subject of intense investigation over the last two decades and are important to many different physical phenomena *. Vortices are solutions to classical field equations which are usually stable for topological reasons. In gauge theories, U(1) vortex solutions were first found by Nielsen and Olesen [2] in the (3+1)-dimensional Abelian-Higgs [3] (or Ginzburg-Landau [4]) model. These vortices were also known to Abrikosov [5], who studied their properties in the context of type-II superconductivity.

* For an outline of solitons in particle physics refer for instance to ref. [1].

It is important to understand the dynamics of vortices. In cosmological models, vortices or cosmic strings could have formed as the universe cooled and underwent a phase transition via the Kibble mechanism [6]. Field theories which support cosmic string solutions are interesting because they provide a mechanism for the formation of large-scale structure in the observed universe [7]. In the string paradigm, oscillating cosmic string loops would act as the gravitational seeds for galaxies and groups of galaxies. Essential to the viability of this model was the numerical determination of string-string intercommutation [8,9]. Long strings that crossed the horizon


could have survived to the present day as Big Bang relics. When the strings are parallel it is known, through analytic [10] and numerical methods [11], that they collide at 90°. Recently, closely related semi-local vortices were found which are U(1) gauge strings in the presence of an SU(2) global symmetry [12]. In this model the strings are dynamically (rather than topologically) stable. When the SU(2) symmetry is gauged, the model becomes the bosonic sector of the Weinberg-Salam model of electroweak interactions and still supports vortex solutions [13]. The cosmological implications of these strings still require further study. They differ from U(1) vortices in that the string can end in a cloud of energy [12].

The Ginzburg-Landau theory is a phenomenological model of superconductivity. In this context, vortices or magnetic flux tubes can be observed in type-II superconductors when they are placed in a suitable external magnetic field. The vortices usually form a triangular "lattice" to minimize their mutual repulsion. The bulk material is still superconducting except at the center of each vortex. This vortex lattice was predicted by Abrikosov [5] and has been extensively studied. The type-II superconducting state is destroyed if the strength of the magnetic field is increased beyond a critical value. Type-I superconductors are quite different. If a magnetic field below a particular value is applied, the superconductor will totally expel the field. Above this value the superconducting state is destroyed.

In this paper, we present two CM Fortran programs that simulate the dynamical interactions of U(1) vortices in 2 and 3 spatial dimensions. Our code may be easily generalized to study other line defects in gauge theories, such as semi-local strings. These programs extend the earlier work and formalism presented in ref. [8]. The CM Fortran language, developed by TMC [14], caters to the array subcomponent of the parallel Fortran 90 language, and additionally provides a few extensions which are natural for SIMD/MIMD architectures. These additions have been isolated in our programs and are easily rewritten in Fortran 90 with a slight expense in performance.

2. The model

The Abelian Higgs model in d spatial dimensions is described by the action

S = \int d^{d+1}x \left[ (D_\mu\phi)^\dagger (D^\mu\phi) - \tfrac{1}{4} F_{\mu\nu}F^{\mu\nu} - \tfrac{1}{4}\lambda\,(\phi^\dagger\phi - \eta^2)^2 \right],   (2.1)

where φ(x) is a complex scalar field, A_μ(x) is the U(1) gauge potential, F_μν = ∂_μA_ν(x) - ∂_νA_μ(x) is the field strength, and D_μφ = (∂_μ - ieA_μ)φ(x) is the gauge-covariant derivative of φ(x). This is the relativistic generalization of the Ginzburg-Landau theory. We use units c = ħ = 1 throughout. In particle physics the φ field is called the Higgs field; in superconductivity it is the order parameter. Rescaling lengths and fields as follows,

x_\mu \to \frac{\tilde{x}_\mu}{e\eta}, \qquad \phi = \eta\,\tilde{\phi}, \qquad A_\mu = \eta\,\tilde{A}_\mu,   (2.2)

the action can be written as

\tilde{S} = \int d^{d+1}\tilde{x} \left[ (\tilde{D}_\mu\tilde{\phi})^\dagger (\tilde{D}^\mu\tilde{\phi}) - \tfrac{1}{4}\tilde{F}_{\mu\nu}\tilde{F}^{\mu\nu} - \tfrac{1}{4}\tilde{\lambda}\,(\tilde{\phi}^\dagger\tilde{\phi} - 1)^2 \right],   (2.3)

where λ̃ = λ/e² is the remaining coupling constant. The coupling constant λ determines the spatial rate of change of the matter field relative to the rate of change of the gauge field. In these rescaled units, the critical value of the coupling which separates type-I and type-II superconductors is λ = 2. We will drop the tilde henceforth. Note that the action eq. (2.3) is invariant under the gauge transformation

\phi(x) \to \exp[i\chi(x)]\,\phi(x), \qquad A_\mu(x) \to A_\mu(x) + \frac{1}{e}\,\partial_\mu\chi(x)   (2.4)

for any function χ(x). The gauge field A_μ(x) has a geometric role which can be illustrated by thinking of the value of the complex field φ(x) at a point x as a vector in the complex plane. The gauge transformation eq. (2.4) is seen to be an


arbitrary rotation of this vector. Regular derivatives, or finite difference equations, would have no meaning if φ(x) was allowed to have a random phase convention at each space-time point. The extra A_μ term in the gauge-covariant derivative is needed to compensate for phase changes due to gauge transformations.

Techniques from lattice gauge theories * provide a natural framework for lattice models which allow the matter fields to have an arbitrary space-time phase convention. The matter field φ(x) is represented by the lattice variable Φ_x at the sites of a cubic lattice. The μth lattice covariant derivative ∇_μΦ_x is constructed by covariantly transporting Φ_{x+μ̂} across the μ-directed link, by multiplying it with the phase exp(-iθ^μ_x), before the finite difference is formed. The μth lattice covariant derivative is written as

\nabla_\mu\Phi_x = \frac{1}{a}\left[\exp(-i\theta^\mu_x)\,\Phi_{x+\hat\mu} - \Phi_x\right],   (2.5)

where a is the lattice spacing. From eq. (2.5) we see that the variables θ^μ_x are defined on the μ-directed links and that in the continuum limit θ^μ_x → aA_μ(x) + O(a²). The equations of motion that are invariant under the discrete version of the gauge transformations eq. (2.4), namely,

\Phi_x \to \exp(i\chi_x)\,\Phi_x, \qquad \theta^\mu_x \to \theta^\mu_x + \chi_{x+\hat\mu} - \chi_x,   (2.6)

may be obtained from a lattice gauge invariant Lagrangian for the discrete system. (In eq. (2.6) χ_x is the lattice variable for χ(x) and is defined at lattice sites.) Such a formulation was first presented in ref. [8]; for the reader's convenience, we recapitulate their main points.

* For a review see e.g. ref. [15].

The lattice gauge invariant Lagrangian can be written in terms of the variables Φ_x and θ^μ_x. Computationally, however, it is more convenient to work in the Hamiltonian formalism. This is done by defining the lattice momenta π_x and E^i_x, which are conjugates to the fields Φ_x and θ^i_x respectively, and Legendre transforming the Lagrangian to obtain the Hamiltonian H for the discrete system [8]:

H = \sum_x a^d \Big\{ \pi^\dagger_x\pi_x + ieA^0_x\big(\pi_x\Phi_x - \pi^\dagger_x\Phi^\dagger_x\big) + \sum_i (\nabla_i\Phi_x)^\dagger(\nabla_i\Phi_x) + \tfrac{1}{4}\lambda\big(\Phi^\dagger_x\Phi_x - 1\big)^2 + \tfrac{1}{2}\sum_i \big(E^i_x\big)^2
\qquad\quad + \frac{1}{a^4}\sum_{i<j}\Big[1 - \cos\big(\theta^i_x + \theta^j_{x+\hat\imath} - \theta^i_{x+\hat\jmath} - \theta^j_x\big)\Big] + \frac{1}{a}\sum_i E^i_x\big(A^0_{x+\hat\imath} - A^0_x\big) \Big\}.   (2.7)

Here A^0_x is the lattice version of the temporal component of the gauge field. The Hamiltonian eq. (2.7) is lattice gauge invariant provided that the lattice version of Gauss' law,

\frac{1}{a}\sum_i\big(E^i_x - E^i_{x-\hat\imath}\big) - ie\big(\pi_x\Phi_x - \pi^\dagger_x\Phi^\dagger_x\big) = 0,   (2.8)

is satisfied. The equations of motion derived from eq. (2.7), which are guaranteed to be the exact equations for the discrete system, are

\frac{d\Phi_x}{dt} = \pi^\dagger_x,   (2.9a)

\frac{d\theta^i_x}{dt} = a\,E^i_x,   (2.9b)

\frac{d\pi^\dagger_x}{dt} = \frac{1}{a^2}\Big\{\sum_i\big[\exp(i\theta^i_{x-\hat\imath})\,\Phi_{x-\hat\imath} + \exp(-i\theta^i_x)\,\Phi_{x+\hat\imath}\big] - 2d\,\Phi_x\Big\} - \tfrac{1}{2}\lambda\big(\Phi^\dagger_x\Phi_x - 1\big)\Phi_x,   (2.9c)

\frac{dE^i_x}{dt} = -\frac{ie}{a}\big[\Phi^\dagger_x\exp(-i\theta^i_x)\,\Phi_{x+\hat\imath} - \Phi^\dagger_{x+\hat\imath}\exp(i\theta^i_x)\,\Phi_x\big]
\qquad\quad - \frac{1}{a^3}\sum_{j\ne i}\Big[\sin\big(\theta^i_x + \theta^j_{x+\hat\imath} - \theta^i_{x+\hat\jmath} - \theta^j_x\big) + \sin\big(\theta^i_x - \theta^j_{x+\hat\imath-\hat\jmath} - \theta^i_{x-\hat\jmath} + \theta^j_{x-\hat\jmath}\big)\Big].   (2.9d)
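As a concrete illustration of the plaquette term of eq. (2.7), the short fragment below evaluates the plaquette angles and the corresponding lattice magnetic energy in two dimensions with Fortran 90 array operations. It is a sketch using our own names, not code taken from the distributed programs, and it assumes the normalization used in our rewriting of eq. (2.7).

! Sketch: plaquette angles and magnetic energy on a 2-d lattice (illustrative names).
subroutine magnetic_energy(theta1, theta2, a, emag)
  real, intent(in)  :: theta1(:,:), theta2(:,:)   ! link angles in the x and y directions
  real, intent(in)  :: a                          ! lattice spacing
  real, intent(out) :: emag
  real :: plaq(size(theta1,1), size(theta1,2))

  ! theta^1_x + theta^2_{x+1} - theta^1_{x+2} - theta^2_x around each plaquette.
  ! CSHIFT wraps circularly; the boundary columns are corrected by masks (section 3).
  plaq = theta1 + cshift(theta2, 1, 1) - cshift(theta1, 1, 2) - theta2

  ! a^d * sum_x (1/a^4) [1 - cos(plaquette angle)], with d = 2.
  emag = (a**2/a**4)*sum(1.0 - cos(plaq))
end subroutine magnetic_energy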


In writing eqs. (2.9) we have used the gauge condition A⁰_x = 0. This is called the temporal gauge and it simplifies the equations of motion. The variable A⁰_x does not have a conjugate momentum and is not a dynamical field. To make time discrete, we choose a finite time-step Δt which will be much smaller than the lattice spacing a (i.e. Δt ≪ a).

The equations of motion eqs. (2.9) are local and map naturally onto the parallel architecture of the Connection Machine (CM). The data-parallel paradigm is also a natural structure onto which the logic of the data set may be superimposed. This mapping is exploited by the extended array sections of Fortran 90 which are incorporated in CM Fortran [14]. This high-level parallel language provides a concise and coherent embodiment of array processing operations to express the equations of motion, which would otherwise involve cycling over array indices in a "DO-loop". Nearest-neighbor communications on a Cartesian grid, which are used in forming derivatives, are performed with CSHIFT commands. This is one of the fastest modes of communication for the CM, provided that the problem, such as ours, can be mapped onto a d-dimensional grid. Boundary conditions can be easily imposed during the integration of the equations of motion with logical "masks" and CSHIFT commands in a way that the constraints are automatically satisfied (see next section). The CM software also allows the use of multiple sets of virtual processors (VP) to easily handle data sets which are larger than the physical size of the machine. In fact, the efficiency of most operations increases with the number of VP sets because of the internal pipelining of the loops over the VP sets. In general, however, the number of VP sets must be equal to a power of 2.
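As an illustration of how the local equations of motion translate into whole-array operations, the fragment below evaluates the gauge-covariant Laplacian entering eq. (2.9c) on a 2-dimensional lattice with the standard Fortran 90 CSHIFT intrinsic (which CM Fortran maps onto nearest-neighbor communication). The names are ours, not those used in vxmkgo2D, and the circular wrap-around produced by CSHIFT is left to be corrected at the boundary by the masks described in the next section.

! Sketch: covariant Laplacian of eq. (2.9c) on a 2-d lattice (hypothetical names).
subroutine covariant_laplacian(phi, theta1, theta2, a, lap)
  complex, intent(in)  :: phi(:,:)                   ! matter field Phi_x
  real,    intent(in)  :: theta1(:,:), theta2(:,:)   ! link angles in x and y
  real,    intent(in)  :: a                          ! lattice spacing
  complex, intent(out) :: lap(:,:)
  complex, parameter   :: iu = (0.0, 1.0)

  ! Transport the neighbours across the links before differencing:
  ! exp(-i theta^i_x) Phi_{x+i} and exp(+i theta^i_{x-i}) Phi_{x-i}; "-4" is 2d with d = 2.
  lap = exp(-iu*theta1)*cshift(phi, 1, 1)                 &
      + exp( iu*cshift(theta1, -1, 1))*cshift(phi, -1, 1) &
      + exp(-iu*theta2)*cshift(phi, 1, 2)                 &
      + exp( iu*cshift(theta2, -1, 2))*cshift(phi, -1, 2) &
      - 4.0*phi
  lap = lap/(a*a)
end subroutine covariant_laplacian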

3. Initial configuration

The first step in simulating two colliding cosmic strings is to calculate the initial configuration. This must be done numerically since no analytic solution exists. We accomplish this by first writing


the singly-wound static vortex configuration in the "radially symmetric" gauge as

\phi(x, y) = \frac{x + iy}{r}\, f(r),   (3.1a)

A_i(x, y) = -\epsilon_{ij}\,\frac{x_j}{r^2}\, b(r).   (3.1b)

Here the functions f(r) and b(r) describe the radial profiles of the matter field and gauge fields, respectively, and r is the radial distance from the vortex center. Finite energy requires that both profile functions vanish at the center of the vortex. These functions, however, quickly approach unity away from the vortex core. Notice that φ(x) can in general "wind" around the central maximum of the potential in eq. (2.3) like φ(r, θ) ∝ exp(inθ). The winding number n must be an integer for φ(x) to be single valued, and it counts the number of times the phase of φ(x) winds through 2π as the vortex is circled. It is this global twist in the phase of φ(x) that gives the vortex its topological stability.

The profile functions f(r) and b(r) can be obtained by minimizing the energy density E [8]:

E = 2\pi \int_0^\infty \mathcal{E}\big(b(r), f(r)\big)\, r\, dr.   (3.2)

In writing eq. (3.2) we have assumed translationally invariant strings along the z-axis, and the angular integration has also been done. This 1-dimensional minimization problem is solved efficiently by integrating the relaxation equations

\frac{df}{d\tau} = -\frac{\delta E}{\delta f}, \qquad \frac{db}{d\tau} = -\frac{\delta E}{\delta b}   (3.3)

on grids using different levels of coarse-graining [8]. That is, we first integrate eq. (3.3) on a relatively small number of grid points so that the lattice spacing, and the corresponding time-step, can be taken to be relatively large. When the solution reaches the desired accuracy, the number of grid points is doubled, the profile functions are interpolated onto the finer grid, and the relaxation is performed again. This process is repeated several times until the desired resolution is reached. The profile functions can then be considered known functions of r.

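The coarse-to-fine relaxation described above can be sketched as follows in plain Fortran 90. This is an illustration only: the routine names, the fixed sweep count, and the pseudo-time step are our own choices, and profile_gradient stands in for the discretized right-hand sides of eq. (3.3), which in the actual program are computed by vxprof.

! Sketch: relax f(r), b(r) on successively finer radial grids (illustrative names).
subroutine relax_profiles(nlevels, n0, rmax, f, b, n)
  integer, intent(in)    :: nlevels, n0      ! number of levels, coarsest grid size
  real,    intent(in)    :: rmax             ! radial extent of the grid
  real,    intent(inout) :: f(:), b(:)       ! sized for the finest level by the caller
  integer, intent(out)   :: n
  real    :: df(size(f)), db(size(f)), dr, dtau
  integer :: level, step, i

  n = n0
  f(1:n) = 1.0;  b(1:n) = 1.0                ! crude initial guess
  do level = 1, nlevels
     dr   = rmax/real(n-1)
     dtau = 0.25*dr*dr                       ! pseudo-time step for eq. (3.3), illustrative
     do step = 1, 2000                       ! fixed number of relaxation sweeps, illustrative
        call profile_gradient(f(1:n), b(1:n), dr, df(1:n), db(1:n))  ! dE/df, dE/db
        f(1:n) = f(1:n) - dtau*df(1:n)
        b(1:n) = b(1:n) - dtau*db(1:n)
        f(1) = 0.0;  b(1) = 0.0              ! profiles vanish at the vortex centre
        f(n) = 1.0;  b(n) = 1.0              ! and approach unity far from the core
     end do
     if (level < nlevels) then               ! double the grid and interpolate linearly
        do i = n, 2, -1
           f(2*i-1) = f(i);  b(2*i-1) = b(i)
        end do
        do i = 2, 2*n-2, 2
           f(i) = 0.5*(f(i-1) + f(i+1));  b(i) = 0.5*(b(i-1) + b(i+1))
        end do
        n = 2*n - 1
     end if
  end do
end subroutine relax_profiles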

The time-dependent solutions are then obtained by boosting eq. (3.1), which we take, for convenience, in the x-direction. (Our code, however, allows the boost to be in an arbitrary direction.) A slight complication arises because the temporal gauge will no longer be respected. This can be corrected by first making the gauge transformation

\chi(x, y) = y\, I_2(x, y)   (3.4)

and then Lorentz transforming the fields [8]. In eq. (3.4) the integral I₂ is given by

I_2(x, y) = \int_0^{x} \frac{b\big(r(\xi, y)\big)}{r^2(\xi, y)}\, d\xi,   (3.5)

and it is also performed numerically and may be considered a known function of x and y. In this gauge, the boosted matter field φ(x, y) and its momentum conjugate π(x, y) become

\phi(x, y) = \exp[i\chi(x_{\rm b}, y)]\,\frac{x_{\rm b} + iy}{r_{\rm b}}\, f(r_{\rm b}), \qquad \pi(x, y) = -v\gamma\,\Big[\frac{\partial\phi(x_{\rm b}, y)}{\partial x_{\rm b}}\Big]^\dagger,   (3.6)

where x_b = γx, r_b² = x_b² + y², and r_b is the radial distance to the point (x, y) in the vortex rest frame. In eq. (3.6) γ = 1/√(1 - β²), β = v/c, c is the speed of light, and v is the speed of the vortex; the derivative brings in df/dr, the factor 1 - b(r_b), and the phase exp[iχ(x_b, y)].

The numerical accuracy of the lattice covariant derivative may be increased by calculating the total phase change for Φ across the finite link by using

\theta^2(x, y) = \int_y^{y+a} A_2(x, \xi)\, d\xi   (3.7)

as the initial condition for the y-component of the link variable. θ¹(x, y) = 0 at time t = 0 due to our gauge condition. The electric fields E^i(x, y), the momentum variables conjugate to the link variables θ^i(x, y), are

E^2(x, y) = -\frac{\beta\gamma}{r_{\rm b}}\,\frac{db}{dr}\Big|_{r_{\rm b}}, \qquad E^1(x, y) = 0.   (3.8)

Finally, the 2-vortex configuration is constructed using the composition rule

A_i(x) = A^{(1)}_i(x) + A^{(2)}_i(x), \qquad E^i(x) = E^{i\,(1)}(x) + E^{i\,(2)}(x),   (3.9)

where the superscripts identify the two vortices. Ansatz (3.9) proved to be an excellent approximation when the vortices were well separated [16].

In this program we would like to simulate an infinitely large volume while maintaining energy conservation. Toward these ends, we have developed a code that imposes free (or Neumann) boundary conditions. Free boundary conditions may be thought of as setting the covariant derivative of φ(x, y) and the magnetic field at the boundary to zero. This is accomplished by: (1) setting the gauge potentials that cross the boundary to zero, (2) setting the regular derivative of φ(x, y) to zero, and (3) setting the gauge field components parallel to, and outside of, the computational box to the value of the links that form the boundary. These conditions are easily met by using logical "masks" and CSHIFT commands as the equations of motion are being integrated. For example, φ(x + a, y) is needed at the right boundary to calculate the momentum π within the computational box. This is easily accomplished by performing a CSHIFT (φ'(x, y) = φ(x + a, y)) followed by a WHERE (logical mask = Right Boundary) φ'(x, y) = φ(x, y). Thus the derivative of φ can be set to zero across the right boundary automatically as the configuration is evolved in time.
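A minimal Fortran 90 sketch of this mask-and-shift idea is given below. The names (neighbour_xp, right_edge) are ours, not those used in the distributed code, and the mask itself would be built once at start-up, as vxflags does for the actual masks.

! Sketch: free (Neumann) boundary across the right edge via CSHIFT and WHERE.
subroutine neighbour_xp(phi, right_edge, phi_xp)
  complex, intent(in)  :: phi(:,:)         ! matter field on the computational box
  logical, intent(in)  :: right_edge(:,:)  ! .true. on the last column only
  complex, intent(out) :: phi_xp(:,:)      ! Phi at x+a, corrected at the boundary

  phi_xp = cshift(phi, shift=1, dim=1)     ! nearest-neighbour fetch; wraps circularly
  where (right_edge) phi_xp = phi          ! undo the wrap: forward difference vanishes there
end subroutine neighbour_xp

Because the WHERE overwrite is applied every time the neighbour is fetched, the boundary condition is re-imposed automatically at each integration step, exactly as described above.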

4. Program structure

The code consists of two main programs, vxmkgo2D and vxmkgo3D. The program


vxmkgo2D is used to create a two-dimensional vortex configuration (two parallel line-vortices) and propagate it in time, while vxmkgo3D creates and propagates two line-vortices in three dimensions. Each program stores configurations which can be re-read and evolved further. Thus, both programs can be run as many times as needed to complete a numerical experiment. In addition, we have added preprocessor control lines to the routines that involve CM2 or CM5 specific peripherals, such as the CM2 Frame-buffer and data-vault or the CM5 scalable-disc-array (SDA). Written this way, our code can be compiled on either the CM2 or CM5. If the storage peripherals are not available, the user may "turn off" the dependent routines by setting the flag DATASTAT to zero, which is contained in the common block files vxcom2D.fcm and vxcom3D.fcm. So initialized, the storage routines (which we describe below) will simply return. This compile flag is in the Unix makefiles make2D and make3D.

vxmkgo2D program

vxmkgo2D depends on 9 main subroutines. We describe the subroutines in the order they are called; the input that the subroutines and vxmkgo2D need is also detailed.

(1) vxflags is the first routine called. It creates the context flags or "logical masks" which are used to impose free boundary conditions. vxmkgo2D then calls fbinit, which asks for input of the variable GRAPHSTATUS. If GRAPHSTATUS ≠ 0 then the routine initializes a display (either the Frame-buffer or an X-window display) and the energy density will be displayed by write-to-fb every time the routine vxenergy (see below) is called. If GRAPHSTATUS = 0 then both fbinit and write-to-fb simply return. vxmkgo2D then asks whether the user wants to create a new configuration or read in an existing configuration from the data-vault (or SDA). If a new configuration is to be created vxprof is called.

(2) vxprof first asks for the value of the coupling λ (stored in LAMBDA) to be input. It then implements a multigrid relaxation scheme to calculate the profile functions f(r) and b(r). We perform the one-dimensional relaxation in parallel, even though there are only a few processors active, to maintain the structure of the code.

(3) vxboost creates a vortex-vortex or vortex-antivortex configuration in the center-of-mass frame. vxboost itself depends on several routines. The first routine used is profiletables, which puts the data calculated by vxprof (f(r) and b(r)) onto shared arrays. Then vxbint calculates a table of values for the boost integral I₂(x, y). To do this, the routine vxffunc is used to obtain f(r) and b(r) (for any given r) by interpolating the data from vxprof; vxindices is called before vxffunc to calculate the index into the shared array. vxboost then asks for the lattice spacing (stored in ALAT), the x- and y-separations of the colliding objects, the velocity of the collision, and whether the collision is to involve a vortex-vortex pair or a vortex and antivortex. It then calls the routines vxboosts and vxboostl to calculate the configuration. They compute the boosted matter fields (and momentum) and the link variables (and electric fields), respectively. Both of these routines call vxinf2 to interpolate the data produced by vxbint so that the boost integral I₂(x, y) can be considered a known function of x and y. vxboosts and vxboostl also need df(r)/dr and db(r)/dr (for any given r); vxdfunc supplies this. Before vxboost returns, kilitable removes the shared arrays from memory.

(4) vxenergy is then called to calculate the energy.

(5) vxjumpstart is called to advance the momentum half a time-step ahead of the fields using the equations of motion. This prepares the configuration for the leap-frog algorithm. The routine first asks for the time-scale-factor to be input (stored in TSF). The time-step Δt is then calculated as Δt = TSF × ALAT/NS, where NS is the number of lattice sites in any one direction and ALAT is the lattice spacing.
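The overall time-stepping strategy, i.e. the half-step "jump start" followed by leap-frog updates, can be sketched in plain Fortran 90 as below. The routine and array names are ours, and force_pi and force_e stand in for evaluations of the right-hand sides of eqs. (2.9c) and (2.9d); this is an illustration of the scheme, not code taken from vxjumpstart or vxleap.

! Sketch: jump start + leap-frog time stepping (illustrative names).
subroutine leapfrog_evolve(phi, pi, theta, efield, a, dt, nsteps)
  complex, intent(inout) :: phi(:,:), pi(:,:)
  real,    intent(inout) :: theta(:,:,:), efield(:,:,:)  ! (nx,ny,2): link angles, electric fields
  real,    intent(in)    :: a, dt
  integer, intent(in)    :: nsteps
  complex :: fpi(size(phi,1), size(phi,2))
  real    :: fe(size(theta,1), size(theta,2), size(theta,3))
  integer :: step

  ! Jump start (cf. vxjumpstart): push the momenta half a step ahead of the fields.
  call force_pi(phi, theta, a, fpi)      ! d(pi)/dt, the conjugate of eq. (2.9c)
  call force_e(phi, theta, a, fe)        ! d(E)/dt from eq. (2.9d)
  pi     = pi     + 0.5*dt*fpi
  efield = efield + 0.5*dt*fe

  do step = 1, nsteps                    ! leap-frog: fields and momenta alternate in time
     phi   = phi   + dt*conjg(pi)        ! eq. (2.9a): dPhi/dt = pi^dagger
     theta = theta + dt*a*efield         ! eq. (2.9b): dtheta/dt = a E
     call force_pi(phi, theta, a, fpi)
     call force_e(phi, theta, a, fe)
     pi     = pi     + dt*fpi
     efield = efield + dt*fe
  end do
end subroutine leapfrog_evolve

Keeping the momenta half a step ahead of the fields makes the leap-frog scheme second-order accurate in Δt, which is why the total energy reported in the test runs stays nearly constant over thousands of steps.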


(6) vxgauss is called to calculate the deviation from Gauss' law.

The main program then asks for a character string called FNAME which is used to label various files for I/O. The name of the file on the data-vault (or SDA), which will contain the final configuration state, is FNAME with "DV" appended to it.

Fig. 1. Energy density plot: (a) of the initial configuration, (b) slightly before the collision midpoint, (c) slightly after the collision midpoint and (d) of the final configuration for test run 1.

(7) vxpack is called if the main program was asked to read in a configuration from the data-vault (or SDA) instead of calling the 6 subroutines listed above.


vxpack is an I/O manager for the data-vault (or SDA) and is also periodically called by the main program to store the configuration as it is advanced. The name of the data-vault (or SDA) backup file is FNAME with "DB" appended. There is also a small header file written to disc (called FNAME with "Z1" appended) which contains information about the state of the backup data-vault (or SDA) file.

(8) vxplacez0 is called to open a file on disc which is called FNAME with "Z0" appended. It then searches for the positions where the matter field φ vanishes by calling the routine vxzero. vxzero first finds the plaquettes which have a non-zero winding number (a schematic version of this test is sketched at the end of this section). It then uses a two-dimensional interpolation scheme to find the points where both the real and imaginary parts of φ vanish. vxplacez0 is primarily an I/O manager and is called by vxmkgo2D, during the time evolution, to make periodic measurements.

The main program then asks for the total number of iterations for the run, and the frequency with which it is to: (1) report the energy of the system, (2) measure the positions of the zeroes of the matter field and write them to the Z0 file, and (3) write a current field configuration for backup. If any of these files do not exist, they are created with the appropriate headers at the beginning of the file.

(9) vxleap is called to implement a leap-frog algorithm to integrate the equations of motion. It imposes free-boundary conditions as it evolves the system by using the "masks" created by vxflags.

The program finishes the run by storing the final configuration, measuring the energy, determining the deviation from Gauss' law, and printing out timing information. The timing information measures the performance of the routine vxleap and does not include any of the initialization overhead, which is only a fraction of the total simulation time. As a measure of performance, we take the total vxleap run-time and divide it by the total number of lattice sites and the number of leap-frog steps completed. The program also prints the total run-time and the total time spent on the CM.

An example execution of this program is given in test run 1. It was performed on an 8K section of the Connection Machine 2 with the graphics and data-vault storage routines included. In fig. 1 we present the energy density at four different times during test run 1 as would be seen on the Frame-buffer (or X-window display). Regions of high energy density are colored blue and areas of low energy density are red. White represents zero energy density. For this run, we obtained a speed of 0.96 microseconds per update step per site. This is almost four times faster than the vector program presented in ref. [8] run on the CYBER 205. On the CYBER 205 the code obtained a speed of 3.45 microseconds per update step per site.

vxmkgo3D program

vxmkgo3D has the same basic structure as vxmkgo2D. The main difference is that instead of calling vxboost, vxmkgo3D calls vxstring to create the two line-vortices. The only extra input needed is the angle between the second line-vortex and the z-axis. Also, all the subroutines have been generalized to 3 dimensions. The code was written in such a way as to make this generalization straightforward. We do not describe vxmkgo3D in any more detail, except to mention that vxzero and vxplacez0 were not extended for vxmkgo3D. An example execution of this program is given in test run 2. It was performed on a 32-node partition (without vector units) of the Connection Machine 5. (The CM5 is approximately 25 times faster with the vector units.) In this test run the graphics and SDA storage routines were removed.
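The winding-number test used by vxzero can be illustrated schematically as follows. The phase difference of Φ across each edge of a plaquette is folded into (-π, π]; their sum around the plaquette is 2π times an integer, which is non-zero only when a vortex center lies inside. The routine below is our own sketch, not the distributed code (vxzero additionally interpolates to locate the zero within the flagged plaquette).

! Sketch: flag plaquettes carrying non-zero winding of the phase of Phi (illustrative names).
subroutine find_winding(phi, wind)
  complex, intent(in)  :: phi(:,:)
  integer, intent(out) :: wind(:,:)       ! net winding number on each plaquette
  real :: d1(size(phi,1),size(phi,2)), d2(size(phi,1),size(phi,2)), &
          d3(size(phi,1),size(phi,2)), d4(size(phi,1),size(phi,2))
  complex :: p10(size(phi,1),size(phi,2)), p01(size(phi,1),size(phi,2)), &
             p11(size(phi,1),size(phi,2))
  real, parameter :: twopi = 6.2831853

  p10 = cshift(phi, 1, 1)                 ! Phi at (x+a, y)
  p01 = cshift(phi, 1, 2)                 ! Phi at (x, y+a)
  p11 = cshift(p10, 1, 2)                 ! Phi at (x+a, y+a)

  ! Phase differences along the four edges, each folded into (-pi, pi].
  d1 = atan2(aimag(p10*conjg(phi)), real(p10*conjg(phi)))
  d2 = atan2(aimag(p11*conjg(p10)), real(p11*conjg(p10)))
  d3 = atan2(aimag(p01*conjg(p11)), real(p01*conjg(p11)))
  d4 = atan2(aimag(phi*conjg(p01)), real(phi*conjg(p01)))

  ! The sum around the plaquette is 2*pi times an integer: the winding number.
  wind = nint((d1 + d2 + d3 + d4)/twopi)
end subroutine find_winding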

Acknowledgements

We wish to thank C. Rebbi and E. Myers for many useful discussions. We gratefully acknowledge support from the DARPA/NASA Graduate Assistantship in Parallel Processing (grant


#26947E), the US Department of Energy (contract #DE-AC02-89ER40509), as well as computer support from the Boston University Center for Computational Science.

References

[1] C. Rebbi and G. Soliani, Solitons and Particles (World Scientific, Singapore, 1984).
[2] H.B. Nielsen and P. Olesen, Nucl. Phys. B 61 (1973) 45.
[3] P.W. Higgs, Phys. Lett. 12 (1964) 132; Phys. Rev. Lett. 13 (1964) 508; Phys. Rev. 145 (1966) 1156.
[4] V.L. Ginzburg and L.D. Landau, Zh. Eksp. Teor. Fiz. 20 (1950) 1064.
[5] A.A. Abrikosov, Zh. Eksp. Teor. Fiz. 32 (1957) 1442 [Sov. Phys. JETP 5 (1957) 1174].
[6] T.W.B. Kibble, J. Phys. A 9 (1976) 1387.
[7] A. Vilenkin, Phys. Rep. 121 (1985) 263, and references therein.
[8] K.J.M. Moriarty, E. Myers and C. Rebbi, Comput. Phys. Commun. 54 (1989) 273.
[9] R.A. Matzner, Comput. Phys. 2 (1988) 51; E.P.S. Shellard, Nucl. Phys. B 283 (1987) 624.
[10] P. Ruback, Nucl. Phys. B 296 (1988) 669.
[11] K.J.M. Moriarty, E. Myers and C. Rebbi, Phys. Lett. B 207 (1988) 411; E.P.S. Shellard and P.J. Ruback, Phys. Lett. B 209 (1988) 262; E. Myers, C. Rebbi and R. Strilka, Phys. Rev. D 45 (1992) 1355.
[12] T. Vachaspati and A. Achucarro, Phys. Rev. D 44 (1991) 3067.
[13] M. James, L. Perivolaropoulos and T. Vachaspati, Phys. Rev. D 46 (1992) 5232; Nucl. Phys. B 395 (1993) 534.
[14] CM Fortran Reference Manual, Version 1.1, Thinking Machines Corporation, Cambridge, MA (July 1991).
[15] M. Creutz, L. Jacobs and C. Rebbi, Phys. Rep. 95 (1983) 201; J. Kogut, Rev. Mod. Phys. 55 (1983) 775; C. Rebbi, Lattice Gauge Theories and Monte Carlo Simulations (World Scientific, Singapore, 1983).
[16] E. Müller-Hartmann, Phys. Lett. 23 (1966) 521.


TEST RUN OUTPUT

Test run 1

INPUT GRAPHSTATUS (0 OFF)

Available Display Menu D The X display ‘bucrfll:0’ X Any X window display 1 CM Framebuffer: Physics 1 Choose a display (‘D’, X’, or a number): 1 What is the file name of the CONFIGURATION ? cm2tst cm2tst Do you want to create a new configuration (Y of N) y Enter Lambda 2. Computing radial profile. Multigrid level 1 N= 25 Multigrid level 2 N SO Multigrid level 3 N= 100 Multigrid level 4 N 200 Multigrid level 5 ES 400 vxprof: writing to VXRADDAT. vxbint: integrating.. What is the inverse lattice spacing (1/a)? 8.33333 8.333330 Lattice spacing a . 1200000 Coordinates range from 0.120000 to 30.7200 What is the separation in the X direction? 10. 10.00000 What is the separation in the Y direction (b)? 0.0 0. 0000000E+000 What is the velocity of each vortex (beta)? .9 .9000000 What is the angle wrt the x axis? O . 0000000E+000 Vortex—Vortex (V) or Vortex—AntiVortex (A) collision? V V

Vortex 1: 10.36001 Vortex 2: 20.36001 *

T~

16.36001

.9000000

15.36001 —.9000000

0.000000

ENERGY

* * * * * * * * *

Total Energy: Kinetic Energy: Potential Energy: Scalar Potential: Momentum squared: Electric Field: Grad squared: Magnetic Field:

28.8112 11.5633 11.6692 1.16194 6.83881 4.72452 10.5073 5.57869

Enter the time step scale factor: 10. 10.00000 DT= 4.6875020E—003


How many iterations to run? 3000
3000
How often to report energy? 200
200
How often to report zeros? 200
200
How often to write configuration to Z1 file? 1000

1000 IT

T

E

KE

200

0.937498

28.8032

11.8947

19.5275 11.1929

15.3604 15.3582

ZERO: ZERO:

1 2

400

28.80 28.80

PS 11.2051

26.6013 20.2710

1.87501

28.8019

1 2

18.6903 12.0297

15.3605 15.3617

28.80 28.80

0.893093 0.892609

179.995 0.242314

600 ZERO: 1

2.81252 17.8533

28.8133 15.3609

11.7404 28.81 0.892760

11.2398 179.976

ZERO: 2 800 ZERO: 1 ZERO: 2 1000 ZERO: 1 ZERO: 2 1200 ZERO: 1 ZERO: 2 1400 ZERO: 1 ZERO: 2 1600 ZERO: 1 ZERO: 2 1800 ZERO: 1 ZERO: 2

12.8666 3.75003 17,0166 13.7034 4.68750 16.1769 14.6433 5.62496 16.3600 15.3699 6.66242 16.3598 16.3600 7.49989 16.3698 15.3600 8.43736 15.3602 16.3599

16.3659 28.8111 15.3699 15.3687 28.8103 16.3590 15.3606 28.8247 16.2766 14.4434 28.8098 16.8139 13.9067 28.8081 17.4252 13.2961 28.8074 18.0925 12.6276

28.81 0.892686 11.6973 28.81 0.892448 28.81 0.892566 11.6631 28.81 0,895763 28.81 0.895929 8.64657 28.82 1.31046 28.82 1.30994 10.2439 28.81 0.s73241 28.81 0.573606 9,62969 28.81 0.662079 28.81 0.651339 8.51120 28.81 0.711736 28.81 0.711939

—0.399188 11.2571 —179.936 0.195832 10.9588 -179.933 0,120224 9.20713 131.677 —48.3163 12.1591 90.0197 —89.9915 14.1020 89.9990 -89.9998 16.2111 89.9682 —90.0128

2000 ZERO: 1

9.37481 16.3698

28.8036 18.7626

10.6043 28.80 0.714796

14.5206 90.0329

ZERO: 2 2200 ZERO: 1 ZERO: 2 2400 ZERO: 1 ZERO: 2 2600 ZERO: 1 ZERO: 2 2800 ZERO: 1 ZERO: 2 3000 ZERO: 1

15.3699 10.3123 15.3602 15.3600 11.2497 15.3599 16.3600 12.1872 16.3600 15.3600 13.1247 15.3604 16.3599 14.0621 15.3699

11.9574 28.8003 19.4278 11.2922 28.7969 20.0654 10.6547 28.7952 20.6378 10.0823 28.7896 21.1824 9.63766 28.7828 21.7648

28.80 0.714958 10.4797 28.80 0.709648 28.80 0.709632 9.88906 28.80 0.680092 28.80 0.680106 10.0630 28.80 0.610641 28.80 0.610626 8.41968 28.79 0.580875 28.79 0.681002 9.61426 28.78 0.610643

—89.9949 14.7647 89.9701 —89.9967 14.9244 90.0253 —90.0022 13.9949 89.9923 -89.9963

ZERO: 2 3000

15.3601 14.0621

8.96524 28.7828

28.78 0.610600 9.61426

—89.9871 13.6868

ZERO: ZERO:

11.6001

38.1888 53.9159 11.4194

14.9626

89.9601 —90.0080 13.6858 90.0498

*SS555S* *

T

14.0621

Gauss Law Deviation over

65536 sites.

* * *

Total Deviation: Maximum Deviation at a single site:

0.364119 4.028987E—02


Total Charge: Maximum Charge density:

8.711779E—O4 0.784663

*

Timing infomation for vxleap: Running 3000 leapfrog steps
Total elapsed time: 195.729 SECS
Time spent on CM: 188.268 SECS
Running at 0.957579E-06 SEC/STEP/SITE.
FORTRAN STOP
NORMAL EXIT

Test run 2

What is the file name of the CONFIGURATION ? cm5tst cmStst Enter Lambda 2. Computing radial profile. Multigrid level 1 N 25 Multigrid level 2 5c 50 Multigrid level 3 N= 100 Multigrid level 4 N= 200 Multigrid level 5 N~ 400 vxprof: writing to VXRADDAT. vxbint: integrating.. What is the inverse lattice spacing (1/a)? 5.0 5.000000 Lattice spacing a .2000000 Coordinates range from 0.200000 to 12.8000 What is the separation in the X direction? 6.1 6.100000 What is the separation in the Y direction (b)? 0.0

O . 0000000E+000 What is the velocity of each vortex (beta)? .9 .9000000 What is the angle wrt the z axis (degrees) 2nd string? 90. 90.000000 Vortex—Vortex (V) or Vortex—AntiVortex (A) collision? V V

Line—Vortex 1: 3.35000 6.40000 Line—Vortex 2: 9.45000 6.40000 *

T=

0.000000

6.40000

0.90000

6.40000

—0.90000

ENERGY

* * *

* * *

* * *

Total Energy: Kinetic Energy: Potential Energy: Scalar Potential: Momentum squared: Electric Field: Grad squared: Magnetic Field:

352.883 143.383 137.965 14.4079 84.8022 58.5809 123.667 71.5354

Enter the time step scale factor: 10.


10.00000
DT= 3.12500000E-002
How many iterations to run? 300
300
How often to report energy? 30
30
How often to write configuration to Z1 file? 0

0 IT

T

E

SE

PE

30

0.937500

352.942

141.911

137.915

60 90

1.875000 2.812500

352.943 352.969

120 150 180 210 240 270 300 300

3.750000 4.687500 5.625000 6.562500 7.500000 8.437500 9.375000 9.375000

352.816 352.905 352.850 352.825

141.330 145.739 140.609

137.445 137.636 137.768

140.127

137.599

125.510

147.789 161.261 169.858 184.419

352.783

114.461

352.798 352.030 352.030

103.234 98.6811 98.6811

*

T~ 9.375000

*

Total Deviation: Maximum Deviation at a single site: Total Charge: Maximum Charge density:

* * *

116.170

183.564 183.564

Gauss Law Deviation over 4.46220 3.463658E—02 4.526856E—04 1.40360

*

Timing infomation for vxleap: Running 300 leapfrog steps
Total elapsed time: 643.880 SECS
Time spent on CM: 642.448 SECS
Running at 0.816915E-05 SEC/STEP/SITE.
FORTRAN STOP
NORMAL EXIT

262144 sites.