
Computer Physics Communications 29 (1983) 155—161 North-Holland Publishing Company


EFFICIENT IMPLEMENTATION OF THE MONTE CARLO METHOD FOR LATTICE GAUGE THEORY CALCULATIONS ON THE FLOATING POINT SYSTEMS FPS-164

K.J.M. MORIARTY
Department of Mathematics, Royal Holloway College, Englefield Green, Surrey, TW20 0EX, UK

and

J.E. BLACKSHAW
Floating Point Systems UK Ltd., Dudley House, High Street, Bracknell, Berkshire, UK

Received 6 August 1982; in final form 5 November 1982

Making use of the FPS Mathematics Library (MATHLIB), we are able to generate an efficient code for the Monte Carlo algorithm for lattice gauge theory calculations which compares favourably with the performance of the CDC 7600.

PROGRAM SUMMARY

Title of program: LATTICE

Catalogue number: ACEK

Program available from: CPC Program Library, Queen's University of Belfast, N. Ireland (see application form in this issue)

Computer: DEC VAX 11/780 and FPS-164 configuration; Installation: Floating Point Systems UK Ltd., Data Centre, Bracknell, Berkshire, UK

Operating system: DEC VAX 11/780: VMS 2.4; FPS-164: SJE-RLE C

Programming language used: APFTN64 FORTRAN 77

High speed storage required: 124 Kwords (program + monitor)

Number of bits in a word: 64

Peripherals used: line printer

Number of cards in combined program and test deck: 635

Card punching code: ASCII

Keywords: lattice gauge theory, Abelian gauge theory, SU(6) and SU(6)/Z6 gauge theory, Yang-Mills theory, non-Abelian gauge theories, SU(N) and SU(N)/ZN gauge theories, phase transitions, statistical mechanics, Monte Carlo methods

Nature of the physical problem
The computer program calculates the average action per plaquette for SU(6)/Z6 lattice gauge theory. By considering quantum field theory on a space-time lattice [1,2], the ultraviolet divergences of the theory are regulated through the finite lattice spacing. The continuum theory results can be obtained by a renormalization group procedure.

Method of solution
Pure SU(6)/Z6 gauge theory is simulated by Monte Carlo methods on a four dimensional space-time lattice. The system is equilibrated by the method of Metropolis et al. [3]. The FPS Mathematics Library (MATHLIB) is used to take advantage of the Floating Point Systems hardware to provide the highest speed of computation.

Restrictions on the complexity of the program
The only restrictions on the program are those imposed by storage limitations. The number of links in the program is given by D × (ISIZE)^D, where D is the space-time dimensionality and ISIZE is the number of lattice sites in any direction. The SU(N) matrices are complex unitary unimodular matrices with N^2 elements. Thus, the link matrix array has dimensions N^2 × D × (ISIZE)^D. This is the largest array in the program and effectively sets the limitations on the gauge group and the size of the lattice which can be considered.
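As a rough illustration of this limit (our own estimate, assuming each complex element occupies two 64-bit words): for the test configuration D = 4, ISIZE = 4 and the gauge group SU(6), the number of links is 4 × 4^4 = 1024 and each link matrix holds 6^2 = 36 complex elements, so the link array alone occupies about 1024 × 36 × 2 = 73 728 words, i.e. roughly 72 Kwords of the 124 Kwords quoted above.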



Typical running time
The test run took 1 h 17 min on the FPS-164 at Bracknell, whereas it took 48 min, operating in the scalar mode, on the CRAY-1S at Daresbury. The test run took only 10 s of the VAX 11/780 CPU time.

References
[1] K.G. Wilson, Phys. Rev. D10 (1974) 2445.
[2] A.M. Polyakov, unpublished.
[3] N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller and E. Teller, J. Chem. Phys. 21 (1953) 1087.

LONG WRITE-UP

1. Introduction

Many areas of advanced science and engineering share the common requirements of needing large amounts of CPU time and large amounts of storage. One only has to think of such diverse fields as aeronautical engineering, meteorology, geophysics and high-energy physics to realize that this is true. For example, recent calculations for SU(N)/ZN [1] and U(2) [2] gauge theories each took the equivalent of about 80 h of CPU time and all the available storage on the University of London CDC 7600. With the inclusion of fermions in the calculations [3], for example to calculate the proton charge radius or the ρ → 2π decay, the demand on computer resources could be of the order of a thousand hours of equivalent CDC 7600 CPU time. There seem to be three approaches to the solution of these problems: purpose-built microprocessors, attached processors or supercomputers. Purpose-built microprocessors can be built relatively cheaply [4] and can yield impressive improvements in CPU time, with factors of 25 over the CDC 7600 having been achieved [4]. The cost of memory is progressively reducing, so that storage problems can be overcome as well. This approach has two drawbacks: first, it requires the attention of experts in electronics for long periods of time, e.g. years; secondly, the resulting microprocessor is basically a one-problem machine and requires extensive alterations to solve a new problem. However, it must be said that computer time on a purpose-built microprocessor is probably the cheapest of all possible approaches [4]. Attached processors such as Floating Point Systems hardware are a more natural solution to the problems posed and have been used extensively in many fields, as reported previously by Wilson [5].

The attached processor requires the use of a host computer, e.g. a DEC VAX 11/780, which uses about 1 to 5% of its time to supervise the attached processor. Thus the host computer is available to ordinary users for over 95% of the time. The FPS-164 will run with standard FORTRAN 77 and, with the FPS-164 MATHLIB instruction set, an efficient computer code can be generated. In regard to those parameters which are usually used to describe a computer's performance, the cycle time for the FPS-164 is 185 ns, which results in a speed of 11 million floating point operations per second (11 megaflops). The third solution is the supercomputers of today: the CRAY-1S and the CDC CYBER 205. The application of the CDC CYBER 205 to lattice gauge theory calculations has recently been discussed in the literature [6,7]. In terms of performance, the cycle times for the CRAY-1S and the CDC CYBER 205 are 12 and 20 ns, respectively, while the megaflop rates are 100 and 400, respectively. In comparison, the CDC 7600 has a cycle time of 25 ns and a megaflop rate of 5. For an odd lattice, the vectorized algorithm for the CDC CYBER 205 was presented in ref. [6]. For ISIZE = 3 a fivefold increase in speed over the CDC 7600 was achieved for the gauge group SU(4). For an even lattice, the red-black algorithm can be implemented [7]. For ISIZE = 4 and SU(4) a factor of 12.5 increase in speed over the CDC 7600 can be realized. For larger lattices and larger groups, speed increases of between 20 and 30 can be achieved on the CDC CYBER 205. The CDC CYBER 205 has hardwired many of the instructions which are implemented as user-written subroutines on the CRAY-1S, which leads to a faster code on the CDC CYBER 205. If one implements the scalar version of the Monte Carlo algorithm for lattice gauge theory calculations [8] on the CRAY-1S, one gets hardly any improvement in performance at all over the CDC 7600.



However, the CDC CYBER 205 gives a factor of 2 almost immediately. A drawback to the supercomputers is their high cost, being at least two orders of magnitude more expensive than an FPS-164. Thus, the supercomputers can be purchased only by very large organizations. Another drawback to the supercomputers is that a substantial amount of programming time has to be invested to achieve the peak performance which would make their use fully cost effective. We at Royal Holloway College have, both alone and in collaboration with high-energy physicists outside Great Britain, become one of the centres of Monte Carlo lattice gauge theory calculations and so our computer requirements are large. However, Royal Holloway College is but one of the smaller constituent colleges of the University of London; we could, therefore, not hope to acquire our own supercomputer. In addition, the requirements in trained personnel preclude developing our own microprocessors. A practical solution to our computational requirements would thus be to use an FPS-164 attached processor coupled to a DEC VAX 11/780, as discussed more fully in ref. [9].



2. Outline of the theory

We study [10] non-Abelian gauge theory in four space-time dimensions with the lattice spacing a. An element of the SU(N) gauge group U(b) sits on the link b joining nearest neighbour sites of the hypercubical lattice. The partition function is defined by

   Z(β) = ∫ d[U] exp(-βS[U]),

where the inverse temperature is β = 2N/g_0^2, with g_0 the bare coupling constant and d[U] the normalized invariant Haar measure of the gauge group. The expectation value of a gauge invariant observable A[U] is given by

   ⟨A[U]⟩ = Z(β)^(-1) ∫ d[U] A[U] exp(-βS[U]).

The action is defined as the summation over all unoriented plaquettes p of the lattice such that

   S[U] = Σ_p S_p = Σ_p (1 - (1/N) Re Tr U(p)),   (1)

where U(p) = U(b4)U(b3)U(b2)U(b1) is the parallel transporter around the plaquette with the boundary b1, b2, b3 and b4. The cutoff for small distances is the lattice spacing a, so that the ultraviolet cutoff is 1/a. We can modify this action [1] by replacing the trace in eq. (1) with the trace in the adjoint representation. So we take

   S_p = 1 - (1/(N^2 - 1)) Tr_A U(p),

where Tr_A denotes the trace of the corresponding adjoint matrix. The relationship between the bare coupling and β_A for the modified action becomes

   β_A = (N^2 - 1)/(N g_0^2).

Periodic boundary conditions are used in our program. The lattice was equilibrated by the method of Metropolis et al. [11].
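As a schematic illustration of the Metropolis step (our own sketch, with assumed names, not the code of subroutine MONTE described below): a trial change of a link variable which changes the action by ΔS is accepted with probability min(1, exp(-βΔS)).

      LOGICAL FUNCTION ACCEPT(DS, BETA, R)
C     Metropolis accept/reject test.  DS is the change in the action
C     produced by the trial link, BETA the inverse temperature and R a
C     uniform random number in (0,1) supplied by the caller.
      REAL DS, BETA, R
      ACCEPT = (DS .LE. 0.0) .OR. (EXP(-BETA*DS) .GT. R)
      RETURN
      END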

3. Implementation of the algorithm on the FPS-164

The program consists of three routines: the main driver routine, subroutine MONTE and subroutine RENORM. Almost all of the execution time is spent in subroutine MONTE and so this is the routine which has to be reprogrammed and tuned. The driver routine simply sets up the input parameters and initiates calls to subroutine MONTE. Subroutine RENORM carries out a Gram-Schmidt orthogonalization of our SU(N) matrices. The main operations carried out in subroutine MONTE are: gather operations, matrix multiply, matrix trace operations, calculations of old and new interactions, and scatter operations. For a typical sweep through a 4^4 lattice for SU(6)/Z6 with β = 0.0 and β_A = 31.0 we have the following statistics (acceptance rate about 60%):

   matrix multiply                                   28%
   calculate new interaction                         19%
   calculate new matrix
   gather, calculate old interactions and scatter
   Total time                                       100%

Thus, all the computer intensive operations (96% of the total time) involve matrix multiply and matrix trace operations. These can be implemented simply by subroutine calls to the FPS MATHLIB. The FPS MATHLIB [12] consists of an extensive set of subroutines (over 400) written in machine code to obtain the maximum performance from the FPS-164 for all the standard computing intensive operations. Thus, simply by replacing these kernels by FORTRAN subroutine calls to MATHLIB routines, a considerable speed-up factor is achieved. Previously, it was thought that assembly language coding was required to achieve the full performance on a machine such as the FPS-164. However, with the APFTN64 compiler and the MATHLIB calls, no further code conversion is needed to obtain the full performance. The FPS-164 is a "pipelined" processor, not a "vector" processor like the CDC CYBER 205. Thus, the FPS-164 can be very efficient at performing scalar operations, which are "pipelined" automatically by the APFTN64 compiler. The red-black algorithm [7], which runs faster on the CDC CYBER 205 than the scalar algorithm [8] because it is a "vector" solution, gives no significant performance improvement over the scalar algorithm on the FPS-164. We now describe the FPS-164 MATHLIB calls for matrix multiply and matrix trace used to implement these operations in subroutine MONTE. Matrix multiply is given by:

   CALL CMMUL(A, I, B, J, C, K, MC, NC, NA)

where

   A  = floating-point input matrix (column ordered),
   I  = integer element step for A,
   B  = floating-point input matrix (column ordered),
   J  = integer element step for B,
   C  = floating-point output matrix (column ordered),
   K  = integer element step for C,
   MC = integer number of rows in C (rows in A),
   NC = integer number of columns in C (columns in B),
   NA = integer number of columns in A (rows in B),

which replaces a triple nested DO loop. The matrix trace is given by:

   CALL CTRC(A, I, B, J, C, M, N)

where

   A = floating-point input matrix (column ordered),
   I = integer element step for A,
   B = floating-point input matrix (column ordered),
   J = integer element step for B,
   C = output scalar,
   M = number of rows in A (columns in B),
   N = number of columns in A (rows in B),

which replaces a double nested DO loop.
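For orientation, the two kernels that these calls replace can be sketched in FORTRAN 77 as follows; the routine names, the unit element steps and the COMPLEX declarations are our own illustrative assumptions, not code taken from LATTICE or from MATHLIB.

      SUBROUTINE CMATML(A, B, C, MC, NC, NA)
C     Triple nested DO loop for the complex matrix product C = A*B
C     (the operation performed by the CMMUL call above); the loop
C     indices II, JJ, KK are unrelated to the element steps I, J, K.
      INTEGER MC, NC, NA, II, JJ, KK
      COMPLEX A(MC,NA), B(NA,NC), C(MC,NC)
      DO 30 JJ = 1, NC
         DO 20 II = 1, MC
            C(II,JJ) = (0.0, 0.0)
            DO 10 KK = 1, NA
               C(II,JJ) = C(II,JJ) + A(II,KK)*B(KK,JJ)
   10       CONTINUE
   20    CONTINUE
   30 CONTINUE
      RETURN
      END

      SUBROUTINE CMATTR(A, B, C, M, N)
C     Double nested DO loop for the trace of the product A*B
C     (the operation performed by the CTRC call above).
      INTEGER M, N, II, KK
      COMPLEX A(M,N), B(N,M), C
      C = (0.0, 0.0)
      DO 20 II = 1, M
         DO 10 KK = 1, N
            C = C + A(II,KK)*B(KK,II)
   10    CONTINUE
   20 CONTINUE
      RETURN
      END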

By using these two MATHLIB calls we are able to increase the speed of these operations by a factor of about 4. As a result of running on the FPS-164, the original unmodified scalar version [8] of the program took 4 h 35 min execution time, whereas the modified version took 1 h 17 min. As the same scalar version took 48 min on the CRAY-1S and 30 min on the CDC 7600, the optimized FPS-164 version ran roughly twice and three times slower than the CRAY-1S and the CDC 7600, respectively. As a result, the FPS-164 is some two orders of magnitude more cost effective. In the present paper we have discussed calculations for pure gauge fields. Fermions can be included in the model using the techniques of ref. [3]. In terms of computer calculations, the inclusion of fermions leads to the inversion of complex matrices resulting from products of Dirac gamma matrices. In this version of the calculation, the inversion of complex matrices becomes the single most important part of the calculation, taking upwards of 75% of the total execution time. An efficient FPS-164 MATHLIB assembly-language-coded routine, CMINV, exists for carrying out this calculation. Using this routine, the calculation of the mass spectrum for SU(3) can be made extremely cost effective and comparable in execution time to the CRAY-1S, running in the scalar mode, or the CDC 7600.
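As an aside, the re-unitarization performed by subroutine RENORM mentioned above can be sketched as follows. This is our own schematic row-wise Gram-Schmidt, not the actual routine; in particular, the phase fix needed to restore det U = 1 exactly is omitted.

      SUBROUTINE GSORTH(U, N)
C     Schematic Gram-Schmidt re-orthonormalization of the rows of a
C     complex N x N link matrix, guarding against rounding drift.
C     (Illustrative only; restoring det U = 1 needs a further phase fix.)
      INTEGER N, I, J, K
      COMPLEX U(N,N), Z
      REAL S
      DO 40 I = 1, N
C        Remove the components of row I along the previous rows.
         DO 20 J = 1, I - 1
            Z = (0.0, 0.0)
            DO 10 K = 1, N
               Z = Z + CONJG(U(J,K))*U(I,K)
   10       CONTINUE
            DO 15 K = 1, N
               U(I,K) = U(I,K) - Z*U(J,K)
   15       CONTINUE
   20    CONTINUE
C        Normalize row I to unit length.
         S = 0.0
         DO 25 K = 1, N
            S = S + REAL(U(I,K))**2 + AIMAG(U(I,K))**2
   25    CONTINUE
         S = 1.0/SQRT(S)
         DO 30 K = 1, N
            U(I,K) = S*U(I,K)
   30    CONTINUE
   40 CONTINUE
      RETURN
      END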

4. Conclusions

As a result of our experiences with the implementation of the Monte Carlo algorithm for lattice gauge theory calculations on the FPS-164 we draw the following conclusions:
a) It is easy to program the FPS-164 using APFTN64 FORTRAN 77 along with MATHLIB. The peak performance described in the present paper was achieved after a few man-days of work, whereas man-months are required for programming vector machines.
b) In CPU time, performances of 2.5 times slower than a CDC 7600 can easily be achieved, with a cost effectiveness factor of at least 200 in favour of the FPS-164.
c) In regard to the host DEC VAX 11/780, less than 1% of its CPU time is required to supervise the FPS-164 attached processor.
d) Little engineering maintenance is required for the FPS-164: the user can do most of this work himself with little effort.
As a result, the FPS-164 can claim to be very suitable for lattice gauge theory calculations and other large-scale computer projects. Of course, the next generation of computers, envisaged to be 1000 times as fast as a supercomputer [13], will be most welcome, but these are probably a decade away. In the meantime, the FPS-164 is probably amongst the best machines available in terms of overall performance, cost, maintenance, and required programming skill and time.


5. Test run The program contains the FPS-164 code, with the original Fortran as comments for comparison. Only a portion of the test run output is reproduced here.

Acknowledgement We would like to thank Dr. M. Creutz for his constant encouragement and advice.

References
[1] M. Creutz and K.J.M. Moriarty, Nucl. Phys. B210 (FS6) (1982) 50.
[2] M. Creutz and K.J.M. Moriarty, Nucl. Phys. B210 (FS6) (1982) 59.
[3] See e.g. H. Hamber and G. Parisi, Phys. Rev. Lett. 47 (1981) 1792.
[4] See, e.g., R. Pearson, Future Plans for Monte Carlo Calculations, presented at the Conf. on Lattice Methodology held at Rutherford-Appleton Laboratory (15th March, 1982).
[5] K.G. Wilson, Experiences with a Floating Point Systems Array Processor, in: Parallel Computations, Computational Physics, ed. G. Rodrigue (Academic Press, New York, London), to be published.
[6] D. Barkai and K.J.M. Moriarty, Comput. Phys. Commun. 25 (1982) 57.
[7] D. Barkai and K.J.M. Moriarty, Comput. Phys. Commun. 27 (1982) 105.
[8] R.W.B. Ardill, K.J.M. Moriarty and M. Creutz, Comput. Phys. Commun. 29 (1983) 97.
[9] FPS-164 VAX Host Manual 860-7493-000A.
[10] K.G. Wilson, Phys. Rev. D10 (1974) 2445.
[11] N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller and E. Teller, J. Chem. Phys. 21 (1953) 1087.
[12] APMATH 64 Reference Manual 860-7482-000A.
[13] K.G. Wilson, Universities and the Future of Computer Technology Development, Cornell University preprint (1982).


TEST RUN OUTPUT

[Sample of the test run output: for each iteration on the ISIZE = 4 lattice with GROUP = SU(6)/Z6, β = 0.001 and β_A = 31.000, the program prints the average action per plaquette in the fundamental (APQ) and adjoint (APQA) representations; the message "RENORMALIZED GROUP ELEMENTS" is printed periodically.]