Progressive simultaneous inverse iteration for symmetric linearized eigenvalue problems

Progressive simultaneous inverse iteration for symmetric linearized eigenvalue problems

0045-7949/811080051-l I$O2.00/0 @ 1981 Pergamon Press Ltd. Compulers 81 Slructuros. Vol. 14, No. l-2. pp. 51-61. 1981 Printed in Great Britain. PROG...

1MB Sizes 0 Downloads 95 Views

0045-7949/811080051-l I$O2.00/0 @ 1981 Pergamon Press Ltd.

Compulers 81 Slructuros. Vol. 14, No. l-2. pp. 51-61. 1981 Printed in Great Britain.

PROGRESSIVE SIMULTANEOUS INVERSE ITERATION FOR SYMMETRIC LINEARIZED EIGENVALUE PROBLEMS A. JENNINGS Civil Engineering Department, The Queen’s University, Belfast, Northern Ireland and T. J. A. AGAR Mott, Hay and Anderson, Consulting Civil Engineers, Croydon, Surrey, England (Received 21 September 1919; received for publication 17 July 1980) Abstract-Simultaneous iteration and the Sturm sequence method can be used for determining partial eigensolutions of linearized eigenvalue equations arising from the analysis of undamped structural vibration problems. Both methods may be implemented with band storage of the relevant matrices and hence are efficient when the matrices are sparse and of large order. A progressive simultaneous inverse iteration method is developed which contains some features of these two methods and has been formulated with the object of improving the numerical efficiency and reducing the storage requirements still further. Some numerical results are given. INTRODUCTION

In the dynamic response analysis of a structural system of n variables using a modal superposition the first step is usually to obtain a solution to the unforced undamped vibration.problem which may be expressed in linearized eigenvalue form as PiMXi = KXi.

(1)

Here M and K are n X n symmetric positive definite or positive semi-definite mass and stiffness matrices, pi is the square of the ith natural frequency and Xi is a vector of the relative displacements of the variables in the ith vibration mode. It will be assumed that i will be ordered suchthat 1114pZc *... sp.. If the finite element method has been used to model a structural system and a Rayleigh-Ritz consistent mass formulation adopted for deriving the dynamic equations the mass and stiffmess matrices will both be sparse. Furthermore if a frontal scheme is used for ordering the variables these matrices will have similar band forms. If, on the other hand, the mass is considered to be lumped at the nodes the mass matrix will be diagonal. Although n eigenvalues of eqn (1) exist, only the lower values, being those associated with the lower natural frequencies, are required. Linearized eigenvalue equations may also be obtained in the buckling analysis of structures, but the present discussion will be concerned only with the application to vibration frequency analysis. If L is the lower triangular Choleski matrix obtained by the factorization K = LLT then eqn (1) may be transformed to the standard matrix eigenvalue form

AYi= i yi # where A = L-‘ML-T.

(3)

Transformation methods of. solving eqn (2) require the

fully populated matrix A to be formed explicitly and then reduced to a condensed form (e.g. to tridiagonal form using Householder’s method). Whilst efficient for small order systems these methods require prohibitively large amounts of computing time and storage space for large order systems. Following the proof by Peters and Wilkinson[l] that the Sturm sequence property may be applied to the matrix K = FM, it is known that the number of eigenvalues pi satisfying eqn (1) which are smaller than the specified value @ is equal to the number of sign changes in the sequence of principal minors of K -bM. This sequence can be evaluated by means of an elimination procedure. Peters and Wilkinson[l] and Gupta[2] have used this property in a bisection process to accurately compute eigenvalues, inverse iteration being used to find the associated eigenvectors. Gupta[3] substantially improved his original method by curtailing the bisection process as soon as each interval contains no more than one eigenvalue. Inverse iteration is then used to accurately predict the eigenvalues as well as the eigenvectors. This substantially reduced the number of factorizations required and as a result led to a reduction in the amount of computation to about 25% of the original. In the case of two close eigenvalues the method does not operate so satisfactorily. Generally a large number of bisections will be necessary to separate them, or alternatively, if an early bisection happens to separate them, the subsequent inverse iteration will have a very slow convergence rate. If two equal eigenvalues are present a large number of bisections will be required before they can be diagnosed as equal. Groups of more than two equal or close eigenvalues will cause more severe problems of the same kind. Jensen[4] gives an alternative method of using inverse iteration without first isolating an individual eigenvalue by Sturm sequence bisection. If slow convergence rate is detected a p-shift is adopted. Bathe and Wilson[5] describe a determinant search technique which also uses the inverse iteration principle. As with Gupta’s Sturm sequence/inverse iteration method, it seems inevitable that the detection of equal and closely spaced eigenvalues will require extra computational effort. 51

52

A.JENNINGS and T. J. A. AGAR

The simultaneousiteration method (SI) is an extension of Von hilises’power iteration in which m trail vectors are processed simultaneously.Convergenceis to the set of eigenvectors corresponding to the m eigenvalues If largest modulus.If the columnsof U specify the m trial vectors and a partial eigensolution is required of the matrix A then the main operation in the iteration is the premultiplication V=AU

(4)

In order to prevent all m trial vectors convergingonto the eigenvector corresponding to the single most dominant eigenvalue Bauer included an orthogonalization process. However the method was not of much practical use until Jennings~4]and Rutishauser[7] improved its reliability and efficiencyby introducinginteraction analyses.If the eigenvaluesof A are ordered such that IAilz IA21 2 . * - 3 IAnI,then the error in the estimate for the ith eigenvector is reduced by a factor /&+,/hi/, approximately,at each iteration in either of the improved methods. If r eigenvectors are required it is normal practice to iterate with m trial vectors such that m > r. The m - r guard vectors make the chance of obtaining slow convergence for the required vectors extremly remote, the slowest convergence factor being /A,,,+,/&/ for the rth eigenvector. Simultaneousiteration has been implementedfor free undamped vibration problems by Jennings and Orr@], Bathe and Wilson[9,10] and also by other workers[ll, 121.Bathe and Wilson use the alternative name of subspaceiteration. In the procedure adopted by Jennings and On iteration is carried out implicitlywith the symmetricmatrix A = L-‘ML-T. The only operation involving A is the premultiplication V = AU which is implementedindirectly storing only the matrices L and M in band form. Recently Jennings and Agar[l3] have shown that the basic Strum sequence and simultaneous iteration methods may be used to enhance each other. If, in the Sturm sequence method, the inverse iteration operations are converted to simul~neous inverse iteration fewer factorizations are required and also good convergence rates can be assured. The purpose of this paper is to discuss an alternative technique where the p shift principle is used to enhance the simultaneous iteration method. It must also be noted that a form of inverse simultaneousiteration (or subspace iteration) is adopted by Wilsonin an improved SAPIV software packagellrl]. S~LTAN~US

INVERSEITERATION (SII) FOR MATRIXEIGENOLSON

accuracy/iterationwill be approximately e log,, ,+1-8 I ei - 0 I where It?,- $1s I&- g/s * 1. ’ s /&,,+I- $1.As with inverse iteration it is convenient not to form the matrix (C - a)-’ but to use the triangularfactors of C - 8. Simultaneousinverse iteration has the following advantages over inverse iteratio!: (1) more than one eigenvalueclose to the chosen 6 may be obtained, (2) if guard vectors are used, poor convergence rates are extremely unlikely, (3) multiple and closely spaced eigenvalueswill not normallycause difficulty. If C - BI is almost singular,inverse iteration is known to operate satisfactorily. In the same circumstancesSII will easily obtain the eigenvalue or eigenvalues very close to h but the other eigenvalue predictions will be subject to a loss of accuracy which in some cases may be significant.A standard symmetric factorization may fail due to zero pivot even when C- 81 is symmetric and well conditioned because it is not positive definite. For this reason Peters and Wilkinsonand also Gupta have recommended using an unsymmetric factorization for Sturm sequence evaluation and inverse iteration and Schwarz[lS] has proposed a modified symmetric factorization which involves possibleexpansionof the band storage. SII~RLINEA~ZEDE~GENVALURPROBLEMS

From eqn (I) MXi =

>T!T fI( - @M)x~= Ai(K- /.iM)Xi. I

(7)

By comparison with eqn (6) it may be seen that simultanecus iteration applied to the matrix (K - FM)-‘M will yield those pi values which are closest to 6. Only simultaneous iteration procedures applicable for unsymmetric matrices (such as the lop-sided iteration method of Jennings and W. J. Stewart[l6,17] or the method of G. W. Stewart(l81)may be applied directly to (K - ~~)-‘~ However two alternative techniques permit the use of symmetriceigensolutionprocedures as follows: (1) Mass matr~ ~actori%ation If the mass matrix Choleskifactorization M = GG* is obtained,then usingthe transformationyi = GTxieqn (7) becomes G’(K - ~~)-‘Gyi = hiyi.

(8)

If a matrix C satisfieseigenvalueequationsof the form Cqi = 0iqi

(5)

then (C- ciiryq, = &

qi = Aiqi.

The eigensolution of the symmetric matrix G’(K$Vf-‘G can then be obtained with the premultiplication step (9)

(61 implementedimplicitlyusingthe LDLT factorization

K - $M = SDST. (IO) Hence when sim_ultaneous iteration is carried out with the matrix (C- 0Z)-’ convergence will be to the set of (2) Subspace formulation [9-l 1,191 With a trial vector set V, vector sets CJand W may be eigenvaluesAiof largest absolutemagnitudewhich,from eqn (6), corre_spondto the Ai values closest to the value computed such that chosen for e. The convergence rate for the ith eigen(K-bM)U= V (Ila) vector expressed as the numberof decimalfiguresgain in

53

Simultaneousinverse iteration for Eigenvalue problems

and

1. It is possible to obtain a large number of eigen-

W=MU

(lib)values and eigenvectors even though only a few active

thus constitutinga premultiplicationstep. Evaluating K= UV( = V(K-/.UV)U) and

(12) I@= lJ=W(=U=MlJ) 1

enables the symmetric linear generalised eigenvalue problem Kqi = ZMqi (13) to be formulated. If the complete set of eigenvectors of eqn (13)is given by Q = [q1q2* * - * q,,,], it can be shown that the reorientated vector set

v=WQ

(14)

is a better approximationto the set of dominant eigenvectors. These can be used for the next set of trial vectors after they have each been normalized.The eigenvalues of eqn (13) converge to the required dominant eigenvalues of eqn (7). If the factorization (10) has been carried out initially,at each iteration the solutionof eqns (1la) can be obtained by the three stage process:

trial vectors are used simultaneously.This reduces the demands on core storage space as compared with standard simultaneousiteration or subspaceiteration. 2. Convergencerates for higher modes are better than they are for the standard simultaneousiteration method or subspaceiteration. 3. The number of factorizations will tend to be less than the number of eigenvaluesrequired and hence will be much less than for Gupta’s Sturm sequence/inverse iteration method and Bathe and Wilson’sdeterminantal search method. 4. It is possibleto use current predictionsfor the next eigenvaluesto estimate when it would be advantageous to perform a $ shift and also to estimate an appropriate value for i. 5. When carrying out the factorization of K -bM associated with a particular p shift it is possible to use the Sturm sequenceproperty to check that no eigenvalue has been omitted in the range 0 G pi < 6. A block diagram to illustrate the genera1strategy is shown in Fig. 1. IMPLEMENTATION

OF b SHIFTS

The followingstrategiesfor decidingwhen to adopt a b shift and predicting a suitable new value for k have (i) forward substitutionSV'= V (ii) row scaling V*= D-'V' (13 been used for the numericaltests described later. Constder that xl, x2,. . . , Xi-1 have all been computed (iii) back substitutionSTU = V*. and that SII is beingimplementedwith m trial vectors. Let Of these two methods the second is to be preferred pi denote the current relative accuracy for the predicted since it does not require a mass matrix factorization.The valueof Xiandlet t be the requiredtolerance,then the error first method has been included because it was used to will need to be reduced by the factor t/pi to satisfy the obtain the results quoted later. The convergencerates of tolerance criterion. However the error is reduced by a - @(at each simultaneous iteration methods are governed by the factor approximatelyequal to lki - /ZI/Ipi+m eigenvalue spectrum and hence alternative methods ap- iteration, and hence the tolerance shouldfirst be satisfied plied to the same problem with the same numberof trial after k iterations where vectors (and in this case the same 6 shift) will exhibit /4-b ‘=1_ similarconvergencerates. PRctGRDsrvE SLt Initially b is set to zero and simultaneous inverse iteration is carried out with a few vectors. This is effectively the same as the standard use of simultaneous iteration or subspaceiteration. When a vector passes the tolerance test it is transferred to backingstore and a new trial vector introduced in its place, thus keeping the number of active vectors constant. The active vectors are prevented from convergingonto eigenvectorsalready predicted by appropriate orthogonalizationoperations. Althoughthe convergencerate is generallyrapid initially, it tends to deteriorate as more eigenvectors have been predicted.The strategy of the progressiveuse of SII is to interrupt this process when the convergence becomes slow and adopt a value of b which is close to the eigenvaluescurrently being predicted. Thus, for the cost of performing an additional factorization, the convergence rate should be considerably improved. Subsequent iteration with this b value will tend to slow down as more eigenvectorsare predicted accurately and hence further i shifts are introduced whenever required to accelerate the convergence. Iteration is continued in this way until all the required eigenvalues and eigenvectors have been obtained. Some useful general features of this method are as follows:

I -I Pi+m

-

CL

Pi’

In practice the current estimate for pi is usually fairly accurate and an estimate for pi is available from the previous tolerance test for this vector, but no estimate is availablefor pi+,. In place of pi+” the current estimate for pi+m-1has been used giving

where a bar is used over pi, Fi and pi+,,-1 to indicate current estimates. The alternative to continuingwith SII with the same b value is to shift 6 closer to pi. Assumingthat the new b value is SO close to pi that xi will gain the required accuracy after only one iteration,then two iterationswill have taken place before the tolerance test for xi will have been satisfied.Hence consideringthe determination of xi alone it appears that a p shift will give a savingin the number of iterations of k - 2. However the number of multiplicationsrequired to perform the factorization for the p shift is approximately$~b’ and the number of multiplicationsrequired to perform one iteration of SII is approximately 4nmb where the mass and stiffness matrices have the same average semi-bandwidthb, and m is assumedto be small.Hence it is of benefitto shift b

54

A. JENNINGSand T. J. A. AGAR

-

introduce a random vector for each vector converged

*

Factorize K and construct trial vector set

W

Orthogonaiize with respect to converged vectors where necessary 8 perform one SII

Sturm sequence

count

Transier eigenvector to backing store and compute vibration mode

Fig. 1. Block diagram for progressive SII.

when b<8m(k-2)

(18)

(If the mass matrix is diagonal an iteration of SII is approximately twice as fast and hence the factor 8 should be changed to 4.) This condition has been used to determine whether to apply a b shift usingthe estimatefor k given by eqn (17). In general &+m-1will be an overestimateof pi+m-r and this may partly compensateor even overcompensatefor the fact that pii+mshouldreally be used. When it has been decided to implementa 6 shift, it is necessary to determine a suitable value (say p*) such

that the accuracy of Xiwill be improved to correspond with the required tolerance in one iteration. If CL*is the new value for p this condition will be satisfiedprovided that (19) Using fii and &+m-z as current estimates for pi and pi+mt/A*should be withinthe interval t!$i+n-*-

tzi -

Pi - t

&)


@i+m-t

pi+t

-

@iI

. 03)

Simultaneous inverse iteration for Eigenvalue problems

55

The upper limit is the one chosen since this gives the tions. Hence the liklihood of failure should be small greatest acceleration in the convergence rate for the particularlyif the computationis carried out usinga large eigenvectors Xi+],Xi+29 etc. The value bi+m_2has been numberof significantfigures(Raju,Rao and Murthy used adopted as a prediction for pi+m rather than $+-m-l a precision of about 16figures). because an overestimate willtake p* outside the interval Allowancemay be made for the fact that failures of the symmetric factorization are possible by monitoring given by conditions(20). the calculationof the pivots and rejecting any particular VECTOR ORTHOGONALIZATIONS factorization if there is an unacceptablylarge amount of Whenevera trial vector has been introducedto replace cancellation.In the SDS= factorization the element di is a vector which has converged, it has been chosen by the ith pivot. But eqn (10) has as its (i, i) element taking a random vector and orthogonalizingit with res- equation pect to all the eigenvectors which have already been computed accurately. This prevents re-convergenceonto kii - $n;i = i djs G (22) i=l the computed eigenvectors provided also that further re-orthogonalizationoperationsare carried out whenever the componentsof any of these eigenvectorscould grow giving more rapidly duringthe iterative process than any of the i-l eigenvectors currently being predicted. Since the di = kii -/hiiEdis;. (23) magnificationfactor through one round of iteration for j-l an eigenvector component is (pi- $1, re-orthogonalization is only necessary with respect to the converged If e represents the relative error of kii such that kii t ckii is the number actually stored, this error will carry foreigenvector when ward to di where it will have a relative error EkJdi. Thus 2/i - /Jd+m-I < Pj (21) kijdi is a measure of the error magnificationdue to cancellation in the calculation of the pivot. This error (see Fig. 2). Since /i++m_ltends to be larger than pi+m-l magnificationbeing of significance when di 4 kii. Alit is safe to replace pi+m-l in eqn (21) by its current though cancellation may occur in other elements it is prediction /.&+m-1. In the tests described later such re- cancellation in the pivots that is generally of most orthogonalizationshave been performed for all active significancesince these are used as divisors. If for invectors at each iteration. The number of converged stance, the maximumvalue of ki,ldi is equal to lo-’ on a vectors included in the orthogonalizationmay change computer having a relative precision of lo-‘, the acfrom iteration to iteration. curacy of the factorization (10m3)would be suspect. If, If the mass matrix factorization method is used ortho- on the other hand, the same value of kijdi were obtained gonalizationof vector ii with respect to converged vec- when the relative precision was 10-16,the extra eight tors yi should be such as to ensure that yjTISi = 0. On the figuresprecision should be more than enough to provide other hand, if the subspace formulation is adopted, the an accurate factorization. Therefore in choosing a orthogonalizationneeds to be performed with respect to tolerance for kiJdi due allowanceshould be made of the the mass matrix thus givingXi*MUi = 0. precisionof the computer. If the chosen value of p lies extremely close to one of FACTORIZATIONPROCEDURE the values of pi, a large degree of cancellationwill occur The procedure outlined above includes a symmetric in the finalpivot d.. Hence monitoringthe cancellationin factorization of the matrix K - iit4 (eqn lo), which may the pivots has the incidental effect of guarding against be carried out by a Gaussianor compact type elimination inaccurateresults beingobtainedfrom SII due to k being since the SDST factorization only differs slightly from too close to one of the values of gi. an LU or Choleski factorization[20].Three advantages An alternative method of monitoring the stability of of using a symmetricfactorization as compared with an triangular factorizations is given by Erisman and unsymmetric factorization are (1) simpler implemen- Reid[23]. This method is justified mathematically for tation, (2) greater efficiency(by a factor of about 3), (3) symmetric positive matrices and is claimed to be satisless storage requirement(e.g. variablebandwidthstorage factory also for general symmetric matrices (such as may be used). Other authors[21,22] have adopted sym- K - @I#). However, this method is not immediately metric factorizations for K-PM without indicatingthat suitablefor use in the present situationbecause (1) some they have had any difficulty performing the factoriza- account would need to be taken of the fact that the matrix being factorized is derived from the matrices K and M, (2) the Erisman and Reid algorithm would not Converged e,genvalue detect cancellationin the final pivot of the factorization est>mate I Elgenvolue and hence would not guard against i being too close to one of the pi. key

I

l

A STURMSEQUENCECHECK

I.,..

_ b,

Fig.

P,

__ e

1 pm-,

P

2. Magnification factors for eigenvector components.

If the chosen value of k is not rejected it can be assumed that the factorization has been performed with satisfactory accuracy. From the Sturm sequence property it is knownthat the numberof negativeelementsin the diagonalmatrix D is equal to the number of eigenvalues pi less than /I. However predictions (some accurate and some not fully converged)shouldbe available for all the eigenvaluesless than ii. If the Sturm sequence

A. JENNINGS and T. J. A. ACAR

56

count for P does not agree with the number of eigenvalue predictions in the interval O< pi
(24)

Since pI < 12,where pI is the largest eigenvalue predicted accurately using the previous value for b, eqn (24) will always give a reduction in the value of i, At the same time as b is reduced the active vectors should be replaced by a random set of vectors.

IMF'LEMENTATIONOFTHEALGORITHM

Most of the operations required to implement the algorithm are standard matrix operations. FORTRAN subroutines for forward substitution and backsubstitution with the relevant matrices stored in variable bandwidth form have been given by Corr and Jennings[24]. The variable bandwidth factorization routine given in this reference requires minor modifications to perform the .SDST factorization of K - FM. If mass matrix factorization is used the eigensolution of the m x m symmetric interaction matrix 8’V is required which can be obtained, for instance, by either the classical Jacobi or Householder method. The approximate technique of Corr and Jennings[24] is not recommended in this case, because the eigenvalue predictions should be as accurate as possible since they control the p shift policy. Alternatively if the subspace formulation is used symmetry may be retained by using the generalized Jacobi method [9]. If body movements of the structure are possible the stiffness matrix K will be singular and hence the initial factorization with i = 0 should fail. This situation can be remedied by starting with a negative value of F in which case the body freedoms will be obtained first[8].

EFFICIENCYANDSTORAGEREQUIREMENT

Let b, and b be the average semi-bandwidths of matrices M and K-FM respectively. The latter will normally be equal to b,, the average semi-bandwidth of K. Approximately f nb* multiplications will be required for the factorization K - t.iM = SDS? If it is assumed that m is small compared with b all interaction analysis operations can be ignored, the number of multiplications for each iteration of SII being approximately 2nm(b t b,). Therefore if f is the number of $ values used and k the number of iterations of SII the total number of multiplications required for the progressive SII method using the subspace formulation is approximately $ nb*f t 2nmk(b t b,). Major storage space is required for the matrices K, M and S, one n x m matrix of trial vectors and also the converged vectors. If r is the total number of eigenvectors required then the total storage requirement is approximately n(b t b, t b, t m t r). Using row-wise segmentation of variable bandwidth stores[20,25] it is possible to implement the algorithm with a fairly small in-core storage allocation. If mass matrix factorization is adopted a further f nb,2 multiplications is required and some extra storage space is required.

SOMETESTRESULTS

A progressive SII algorithm conforming to the above scheme using the mass matrix factorization has been programmed in FORTRAN IV and run on an ICI, 1906s computer at Queen’s University, Belfast for various types of structural vibration problems [26.27]. The use of exchangeable disc backing store enabled large order problems to be solved although comparison with the conventional simultaneous iteration method were not available for the larger problems. A comparison of progressive SII with Sturm sequence and simultaneous iteration methods applied to :r multistorey plane frame vibration problem has previously been given by the authors[l3]. Two further examples are given below both involving finite element idealizations. In both cases the eigenvector tolerance was 0.001. An open cantilever box Vibration of the open cantilever box shown in Fig. 3 was analysed as an assembly of 28 rectangular panel finite elements which were all uniform having the same stiffness and mass properties. Where the panels meeting at a mode were not coplanar, the node had six degrees of freedom; the other nodes had only five degrees of freedom since in-plane rotations were omitted. The number of variables, n, was equal to 163 and the mass and stiffness matrices had an average half-bandwidth of about 32. Table 1 shows the p values and convergence rates when the lowest 24 frequencies and vibration modes were obtained by progressive SIl. Also shown are the lowest 29 values of gi. Five values of p were used (including fi = 0) and 43 iterations of SII giving 43 x 5 = 215 vector iterations in total. Table 2 shows a comparison of computational and storage requirements of progressive SII using different numbers of trial vectors. Also shown in this table are corresponding values for a backing store version of Corr and Jennings’ simultaneous iteration algorithm applied to the same problem and values for Gupta’s SS/Il method. To obtain these last results the number of bisections and vector iterations required for the SS/II method were estimated from the pi spectrum. In progressive SII storage space has been assumed for (m t l)n t 1024-t m* variables in-core and (b t m t m,)n out of core where m, is the total number of vectors required. For Gupta’s SS/II method it was assumed that storage space would be required for the active part of the factorization and also one vector in-core, and hence would be governed only by the largest bandwidth of the mass and stiffness matrices. The approximate number of multiplications required to implement Bathe and Wilson’s determinant search method is quoted by them as (3b’t 4b t 118)nr (in the notation of this paper) based on the assumption that six values of /* will need to be chosen to isolate each eigenvalue. Therefore, for the problem under consideration, over 17 M multiplications are likely to be required.

Fig. 3. An open cantilever hox

51

Simultaneous inverse iteration for Eigenvalue problems Table 1. Number of vector iterations for convergence of the first 24 eigenvectors of the open cantilever box using 5 active trial vectors.

Key: 8 + 2, ji = 37.19 means 8 iterations at previous I( value and 2 iterations at fi = 37.19. of iterations ; values [initially U=O.Ol

Numhar i

and

Total number iterations

1

2.537

2+2,~=2.607

2

3.1127

2*2,

"

4

3

6.649

2*4,

"

6

4

4

8.049

2t4,

"

6

5

9.556

2t6,

v

a

6

12.28

6. "

6

7

20.47

7. "

7

a

23.95

6, 8)

9

35.79

a+2,;=37.19

6 10

10

41.93

6t3,

"

11

47.51

4*3,

"

7

12

50.37

3+3,

vv

6

13

60.63

2+6,

"

a

14

86.98

7. 0'

15

99.17

a, *I

9

7 6

16

113.0

17

121.5

ia

131.4

19

159.8

a+3,

9'

11

20

174.2

6+S,

"

11

21

178.3

2+7.

"

9

22

ia2.a

2+7,

n

23

205.4

7*3,;=2fl7.8

24

227.F

12,

*

12

12,

"

12

11*2,u=132.9

210.8

z

6+3, -

13

9

v

10 9

_(-----

-4*3"

-

of

7

26

228.6

2*3,

*

27

262.3

2+3,

"

28

270.2

29

311.6

5 5 -

--

r

= 215

Table 2. Comparison of number of multiplications and storage requirements for the open cantilever box problem

(M=106). Gupta unsym.

Number

of active

Number

of u shifts

Total Number

nunber

vectors

of factorizations

of vector

iterations

Num!~er of multipllcatlons for fectorizations

required

Number of multipllcatlons for vector iterations

required

Total In-core

nurrbar of multiplications storage

Out-of-core

requirement

storage

requirement

SW11

Progressive

4ynl.

1

1

3

4

5

6

7

a

5

4

3

2

50

50

10

7

6

5

153

153

186

204

215

4.'17M

0.83M

0.58M

3.17rl

4.11M

4.59M

16.5ON

7.34M

4.9411

3,620

2,180

1.70@

26,900

16,140

20,050

20,210

12.51M

3.9911

iteration

Simultaneous

SII

27

28

29

4

1

1

1

222

245

283

259

281

0.50M

0.42M

0.33M

0.08M

O.OBrl

O.OBrl

4.92M

5.17M

5.81f.l

9.02M

8.36tl

9.1e.M

5.1711

5.4211

5.5911

6.14M

9.1OM

8.44tl

9.2fin

1,850

2,050

2,200

2.400

5,750

5,900

6,050

20,380

20,540

20,700

14,85cl

15.000

15,160

A. JENNINGS and T. J. A. AGAR

58

Table3. Eigenvaluespi of the proppedcantileverplateproblemin (rad/sec)*. i

Eigenvalue

i

pi

A propped cantilever plate Lateral vibrations of the uniform propped cantilever plate shown in Fig. 4 was analysed using the 5 x 9 finite element grid. The number of variables, n, was equal to 160 and the mass and stiffness matrices had an average bandwidth of 24. Progressive SII was used to determine the lowest 30 vibration modes whose associated eigenvalues pi are shown in Table 3. Table 4 shows ii values used by the progressive SII algorithm for different values of m, and Table 5 shows a comparison of the amounts of computation and storage requirements. The determinant search method is estimated as requiring over 13 M multiplications for this problem. Square plate problems In order to test the performance of the prgressive SII algorithm when equal and nearly equal eigenvalues are present a square plate and some almost square plates were analysed to obtain their vibration frequencies. These plates were of uniform thickness and mass distribution and were simply supported along their edges. Equations were derived for an 8 x 8 grid of finite elements and solved by progressive SII with a vector tolerance of 0.001 and using m = 4, and also, for comparison, by a standard SI procedure with a small tolerance. Results for the square plate are shown in Table 6. Table 7 shows the corresponding results when the aspect ratio of the plate has been changed to 1.001 so separating slightly the previously equal eigenvalues.

vi

IO4

19

430.9

2

3.51c

2n

453.4

)I

3

5.985

21

513.2

"

1

Fig. 4. A proppedcantileverplate.

Eigenvalue

Y

4

11.76

22

528.2

I-

5

17.05

23

574.8

n

6

33.37

24

642.6

"

7

46.72

25

804.9

"

a

47.94

28

865.0

"

9

72.93

27

913.4

"

IO

95.97

28

941.2

"

972.5

"

1011.9

*

11

110.9

29

12

161.7

30

13

171.9

31

14

223.8

32

= 1175

"

15

246.1

33

= 1290

"

16

265.3

34

= 1324

"

17

289.5

35

= 1433

"

320.1

36

= 1482

-

18

= 1079

"

using progressive SII using both three and four active trial vectors. The only modification made to the method was that a negative value of p was used initially. In both cases the six body freedoms were computed first, followed by the modes involving structural deformation in order of increasing frequency.

An unrestrained structure A structure with six body freedoms was analysed

Table4. Effectof usingdifferentnumbersof activevectorson choiceof 12valuesfor proppedcantileverplateproblem Numbrr

of aitivr

iteration

4

5

vectors

G/l04 3

Initial

0.0

0.0

0.0

1st

shift

35.24

15.70

44.67

2nd

*

77.73

52.55

3rd

'I

163.7

175.4

4th

"

231.8

229.7

5th

'

475.1

442.7

6th

M

379.6

ah4.0

7th

-

433.7

844.1 934.3

ath

"

523.5

9th



864.4

10th

"

841.2

11th

"

919.6

445.6 4sa.a

6

i

0.0

0.0

195.2

211.2

881.6

504.7 875.4 845.9

Simultaneous inverse iteration for Eigenvalue problems

59

Table 5. Comparison of no. of multiplication and storage requirements for propped cantilever plate problem (M = 106mults). SW11

Gupta ""sym.

Number

of active

Nutier Total

vectors

1

1

65

65

192

3

of factorizatlons

of vector

iterations

Number of multiplications for factorizations

required

Number of multiplications for vector iterations

required

number

In-core

of multipllcatlons

storage

Out-of-core

requiresmnt

storage

requirement

34

35

11

8

3

2

c

_

13

10

5

4

1

1

1

192

231

228

260

276

344

315

335

6.8811

2.29M

0.46M

0.35rl

0.16rl

0.14M

0.04M

0.04rl

Cl.II4r.l

3.23M

2.58rl

3.3811

3.42M

4.0211

4.37rl

9.30rl

8.64rl

9.4611

10.11rl

4.8711

9.3411

B.fi6rl

9.50M

37

3.8411

3.77M

4.20M

4.SlM

1,380

460

1,700

1,850

2,003

2,200

6.80@

6,950

7,250

19,200

11,520

16,600

17,000

17,100

17,300

12,200

12,300

12,600

square

pi predicted

by progressive tolerance

SII with = 0.001

plate

in (rad/sec)2 Eigenvalue pi given by SI with small tolerance

13.294855

x 105

82.393071

82.392374

I

82.397178

82.392780

.

13.29420

x 105

203.53305

203.59307

I

329.67473

329.67486



329.67503

329.67503

I

528.40848

528.40849



8

528.43505

528.43506

.

9

958.33120

958.33120

I

10

95a.35406

ssa.35406

l

11

971.77152

971.77152

I

6

iteration

6

Eigenvalue

1

Simultaneous

5

Table 6. Eigenvalues pi of a

i

SII

4

of ; shifts nuwJer

Number

Total

Progressive

sym.

12

124a.9006

1248.8973

I

13

124a.9~00

1248.9076

I

14

1838.6250

1838.2768



15

la38.8710

la3a.6309

I

16

2268.4057

2268.3812

I

17

2268.4069

2268.4006

*

la

2652.4054

2652.4058



19

2652.8387

2652.8390



20

2897.4014

2897.3492



A. JENNINGS and

T. J. A. AGAR

Table 7. Eigenvalues of an almost square plate in (rad/sec)2

i

Elgenvalus vi predicted by progressive SII with to1erancs = B.DOl

82.133797 82.322443

II

,I

Eiganvalua vi given by SI with small to1erence.

62.133823 82.322456

203.18703

,I

203.18667

32e.53635

8,

320.53772

527.00293

II

527.00317

e

527.73001

"

527;73Oa2

3

954.99174

"

954.99174

10

957.66613

I,

957.86813

IZ

963.63225

"

969.83162

329.48369

1245.2a97

12

0

1245.7480

13

1247.6363

t,

1247.6711

14

1833.9452

"

1633.9454

15

1835.6621

II

1635.6622

16

2260.4931

II

2260.5781

17

2267.2376

‘I

2267.1548

16

2644.3721

"

2644.3271

19

2650.3617

I,

2650.3516

2891.6007

,I

2891.5986

20

D~U~ION

OF RFSULTS

Tables 2 and 5 show that the SSlII method with unsymmetricfactorizationsis not as efficientnumerically as standard simultaneousiteration for the two problems considered. If a lumped mass rather than a consistent mass idealizationwere to be used then the comparison would be even more favourable to the standard simultaneous iterat~n method. The progressiveSII method is more efficient than the standard simul~neous iteration method, the SSlII method (even if symmetric factorizations were performed) and also the determinant search method.A particularadvantageof the progressive SII method over standardsimultaneousiteration is that it reduces the in-core storage requirements.In the use of progressiveSII the total numberof m~tipIicationsis not very sensitive to the number of active trial vectors, m, although there appears to be a slight benefit in using fewer trial vectors. As mightbe expected the progressive SII method, when operating with fewer trial vectors, requires to shift 6 more frequently to counteract the slower convergence rates which would otherwise arise. Since also the eigenvalueestimates may be less accurate when m is small,the risk of overestimatingthe 5 shit is greater. For the tests shownin Table 4 the value of 6 has been reduced on four occasions.In each case this was to accelerate the convergence of a vector pi < @ rather than because of failure of the factorization or the Sturm sequence check. The ma~mum loss in accuracy due to cancellationwithin the pivotal elements recorded in the SDST factorization has been 4 figures through a very

large number of factori~tions (single length arithmetic was used with a relative precision of about 10”). Mode shapes have been computed to four figure accuracy (for which the correspondingeigenvaluesshould be accurate to about eight figures). The results for the square and almost square plates (Tables6 and 7) indicate that no difticultyis encountered obtainingequal or close eigenvalueswhen these occur in pairs. On account of the satisfactory analysis of the unrestrainedstructure it is apparent that progressiveSII is capable of computingeigenvaluesof large multiplicity, even when the number of equal eigenvaluesexceeds the number of active trial vectors. The overall impression from test results is that accuracy and efficiency are affected very little by the spacingof the eigenvaiues. The examples quoted above are not of very large order. All estimates of computationalrequirements for methods compared in this paper are proportional to II. Furthermore convergencerates are independentof n. (A coarse and a fine descretizationof the same problemwill have rou~y similar convergence rates). Hence comparisons between methods are not directly affected by the magnitudeof n. However large order problems are likely to require larger bandwidthsin the stiffness matrix and hence there is an indirect influencewhich makes the cost of performingfactorizations relatively more expensive as n increases. Since progressive SII uses less facto~~ons than the number of required eigenvalues, whereas all Sturm sequence and inverse iteration methods except Jensen’s are bound to use at least one

Simultaneous inverse iteration for Eigenvalue problems factorization

per required eigenvalue,

then progressive

SII will be even more competitive in efficiency the larger the order of the problem (provided that the bandwidth of the stiffness matrix is also larger). In comparison with standard simultaneous iteration (or subspace iteration) the progressive SII will be more economical if the number of required eigenvalues is fairly large. Progressive

CONCLUSION SII is a reliable algorithm for computing

eigenvalues and eigenvectors of linearized eigenvalue equations of the form obtained from vibration problems. For large order problems, particularly where the bandwidth of K - FM is large and a large number of eigenvalues are needed, the algorithm should be numerically more efficient than Sturm sequence, and standard simultaneous iteration methods, and can be programmed with economical use of core storage space. Two forms of the algorithm are discussed. One involves a mass matrix factorization, for which test results have been given. The other is based on the subspace iteration technique and avoids the need to factorize the mass matrix. REFERENCES

1. G. Peters and J. H. Wilkinson, Eigenvalues of Ax = ABx with band symmetric matrices. Cornput. .I. 12, 39&lO4 (1969). 2. K. K. .Gupta, Solution of eigenvalue problems by Sturm seauence method. Int J. Num. Merh. Enan~ 4. 379-388 (1612). 3. K. K. Gupta, Eigenproblem solution by a combined Sturm sequence and inverse iteration technique. Inf. J. Num. Meth. Engng 7,17-42 (1913). 4. P. S. Jensen, The solution of large symmetric eigenproblems by sectioning. SIAMJ. Nun.Anal. 9, 534-545(1972). 5. K. J. Bathe and E. L. Wilson, Numerical Methods in Finite Element Analysis. Prentice-Hall, Englewood Cliffs, New Jersey (1976). 6. A. Jennings, A direct iteration method of obtaining the latent roots and vectors of a symmmetric matrix. Proc. Cumb. Phil. lY

Sot. 63.155-165

(1961).

H. Rutishauser, Computational aspects of F. L. Bauer’s simultaneous iteration method. Num. Math. 13, 4-13 (1%9). A. Jennings and D. R. L. Orr, Application of the simultaneous iteration method to undamped vibration problems. Int. J. Num. Meth. Enenp 3. 13-24 (1911).

K. J. Bathe and E. L~@ils~n, Sol;tion’methods for eigenvalue problems in structural mechanics. Int L Num. Meth. Engng 6, 213-226 (1913).

61

IO. K. J. Bathe and E. L. Wilson, Large eigenvalue problems in dynamic analysis. Proc. ASCE 98 (EM6), 1411-1485(1972). 11. S. B. Dong, J. A. Wolf and F. E. Peterson, On a directiterative eigensolution technique. It& J. Num. Meth. Engng 4, 155-161 (1972).

12. Th. L. Johnson, On the computation of natural modes of an unsupported vibrating structure by simultaneous iteration. Comp. Methods App. Mech. Engng 2,305-322

(1913).

13. A. Jennings and T. J. A. Agar, Hybrid Strum sequence and simultaneous iteration methods. Symp. Appl. Comput. Merh. in Engng. University of Southern California, Los Angeles (Aug. 1971). 14. E. L. Wilson, Improved software packages for SAPIV. Structural Software Development Inc., 1930 Shattuck Av., Berkeley CA 94704. 15. H.R.Schwarz, Two algorithms for treating Ax = hB. COmP. Methods in App. Mech. Engng 12, 181-199 (1977).

16. A. Jennings and W. J. Stewart, Simultaneous iteration for partial eigensolution of real matrices. J. Inst. Maths. Applies. 15, 351-361(1975). 17. W. J. Stewart and A. Jennings, (to be published), LOPSI: A simultaneous iteration algorithm for real matrices. ACM Trans. Math. Software. 18. G. W. Stewart, Simultaneous iteration for computing invariant subspaces of non-Hermitian matrices. Num. Math. 25, 123-136(1976). 19. S. B. Dong, A block-Stodola eigensolution technique for large algebraic systems with non-symmetrical matrices. Inr. J. Num. Merh. Engng 11,247-267 (1977). 20. A. Jennings, Matrix Computation for Engineers and Scientists. Wiley, London (1977). 21. I. S. Raju, G. V.Rao and T. V. G. K. Murthy, Eigenvalues and eigenvectors of large order banded matrices. Comput. Structures 3,491-507

(1973).

22. C. J. Bates, A computational technique for handling large matrices. Int. J. Num. Meth. Engng 7,85-93 (1973). 23. A. M. Erisman and J. K. Reid, Monitoring the stability of triangular factorization of a sparse matrix. Num. Math. 22, 183-186(1914). 24. R. B. Corr and A. Jennings, A simultaneous iteration algorithm for symmetric eigenvalue problems. Int. I. Num. Meth. Engng 10, 647-663(1916). 25. A. Jennings and A. D. Tuff, A direct method for the solution of large sparse symmetric simultaneous equations. In Large Soarse Sets of Linear Eauations (Edited bv J. K. Reid). Academic Press, London (1971). = 26. T. J. A. Agar, Computational methods of obtaining frequencies with particular reference to ship hulls. Ph.D. dissertation. Oueen’s Universitv. Belfast (1978). 27. T. J. A. Agar, The specification of the piogressive simulteneous iteration algorithm “SISSD” . Civil Engng Dept. Rep. Queen’s University, Belfast (1978).