8 Applications of Simultaneous Iteration

Neville T Neill
School of Computing and Mathematical Sciences, University of Ulster, United Kingdom
nt.Neill@...


This paper illustrates how modelling can impact on the teaching and learning of mathematics. Colleagues in engineering sought assistance in finding partial eigensolutions of symmetric matrices which had been developed as part of their research. This seemingly routine request has now led to major changes in the curriculum as it became clear how many aspects of numerical linear algebra were involved in producing the required results.

INTRODUCTION

Basic concepts in numerical linear algebra are taught as part of most undergraduate mathematical methods modules. A brief résumé of these ideas will place the later discussions in a more meaningful context.

MATHEMATICAL PRELIMINARIES

Consider a square matrix $A$ of order $n$ with eigenvalues $\lambda_i$ ordered so that

$$|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_n| \qquad (1)$$

and corresponding eigenvectors $x_i$.

The Euclidean norm of $A$, $\|A\|_E$, is defined as

$$\|A\|_E = \sqrt{\sum_i \sum_j a_{ij}^2} \qquad (2)$$

while the 2-norm of a vector $x$ is defined as

$$\|x\|_2 = \sqrt{\sum_i x_i^2} \qquad (3)$$

The eigenvalues of $A^{-1}$ are $1/\lambda_i$, and the eigenvalues of $A - qI$ are

$$\lambda_i - q \qquad (4)$$

Any arbitrary vector $u^{(0)}$ can be written as a linear combination of the $x_i$. The Power Method uses the iterative scheme

$$v^{(k)} = A\,u^{(k-1)}, \qquad u^{(k)} = v^{(k)}/u_k, \qquad k = 1, 2, \ldots \qquad (5)$$

where $u_k$ is the element of $v^{(k)}$ of maximum absolute value. Generally $u_k$ approximates $\lambda_1$ and $u^{(k)}$ approximates $x_1$ as $k \to \infty$. For further details see Faires & Burden, 1998.

The rate of convergence of the Power Method is governed by the ratio

$$\left|\lambda_2 / \lambda_1\right| \qquad (6)$$
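A minimal MAPLE sketch of scheme (5), written in the classical linalg package used later in this paper, may make the iteration concrete; the matrix, iteration count and variable names here are illustrative choices of mine, not taken from the paper:

> with(linalg):
> # illustrative matrix: it is positive definite, so the dominant eigenvalue is positive
> # and the infinity norm can stand in for the signed element u[k] of scheme (5)
> A := matrix(3, 3, [4, 1, 4, 1, 10, 1, 4, 1, 10]):
> u := vector([1.0, 0, 0]):
> for k from 1 to 30 do
>   v := multiply(A, u):
>   mu := norm(v, infinity):     # element of v of maximum absolute value
>   u := evalm(v / mu):
> od:
> mu;                            # approximates lambda[1]; u approximates x[1]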

Inverse Iteration uses the sequence

$$(A - pI)\,v^{(k)} = u^{(k-1)}, \qquad k = 1, 2, \ldots \qquad (7)$$

to determine that eigenvalue of $A$ which is closest to $p$, and its associated eigenvector.
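A corresponding MAPLE sketch of scheme (7) follows; again the matrix, the shift $p$ and the fixed iteration count are illustrative assumptions of mine rather than values from the paper:

> with(linalg):
> A := matrix(3, 3, [4, 1, 4, 1, 10, 1, 4, 1, 10]):
> p := 9.0:                                          # seek the eigenvalue closest to 9
> M := evalm(A - p * diag(1, 1, 1)):
> u := vector([1.0, 1.0, 1.0]):
> for k from 1 to 20 do
>   v := normalize(linsolve(M, u)):                  # solve (A - pI) v = u, then scale
>   if innerprod(v, u) < 0 then v := evalm(-v) fi:   # remove the arbitrary sign flip
>   u := copy(v):
> od:
> innerprod(u, multiply(A, u));                      # Rayleigh quotient: eigenvalue nearest p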

TRANSFORMATION METHODS

A transformation method transforms the matrix under investigation into another with the same eigenvalues. The most general transformation is a similarity transformation

$$B = N^{-1} A N$$

where $N$ may be any non-singular matrix of the same order as $A$. If $N$ is orthogonal, i.e. $N^{-1} = N^T$, then, if $A$ is symmetric, the transformation

$$B = N^T A N$$

ensures $B$ is also symmetric. Methods which solve the symmetric eigenproblem by means of orthogonal transformations include those of Jacobi, Givens and Householder (Jennings & McKeown, 1992).
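A quick MAPLE check of this statement can be made with a small matrix and a plane rotation of one's own choosing (neither is from the paper):

> with(linalg):
> A := matrix(3, 3, [4, 1, 4, 1, 10, 1, 4, 1, 10]):
> c := evalf(cos(Pi/6)):  s := evalf(sin(Pi/6)):
> N := matrix(3, 3, [c, -s, 0, s, c, 0, 0, 0, 1]):   # an orthogonal (plane rotation) matrix
> B := multiply(transpose(N), A, N):                 # B is again symmetric
> evalf([eigenvals(A)]);
> evalf([eigenvals(B)]);                             # the two spectra coincide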

THE PROBLEM

Many eigenvalue problems may be specified in a form in which only the largest set of eigenvalues is required, e.g. principal component analysis and Markov chain analysis. This is also the case in dynamic stability analysis and, where the matrices are large, normal transformation methods may not be the most efficient means of solution.


Colleagues in the Faculty of Engineering were working with vibrating systems which had multiple degrees of freedom. They wished to determine the natural frequencies of vibration (eigenvalues) and their corresponding mode shapes (eigenvectors). Clearly complications can arise in multi-degree systems because the number of terms increases rapidly with the number of degrees of freedom, but matrix formulations are very effective for manipulating such large numbers of terms. Initially they were working on undamped systems and modelling them by both action equations and displacement equations. One computational advantage of such modelling is that, for linearly elastic vibratory systems, the coefficient matrix can always be developed as a symmetric, positive-definite array by an appropriate choice of coordinates.

During our initial meetings, I asked how they normally attempted to find the eigenvalues and eigenvectors of the matrices they generated. For general cases the QR algorithm (Faires & Burden, 1998) was used. However, this approach is not the most efficient when dealing with structural problems which are represented by finite-element models and lead to large, symmetric, mass and stiffness matrices. In such cases, the transformation methods mentioned above are more suitable. As our discussions progressed it became clear that large finite-element models presented special problems and that no one "best" algorithm was available. The reason for this was that only a limited number of eigenvalues and their corresponding eigenvectors were usually required, and hence a different approach was needed.

The traditional way to tackle this situation was to use the Power Method to determine $\lambda_1$ and $x_1$, then deflate the matrix and reapply the Power Method to the deflated matrix to produce the second eigenpair $\lambda_2$ and $x_2$, etc. Rounding errors make the results produced by repeated applications of this process unreliable, however, so an alternative strategy was sought. Since the problem posed by the engineers essentially involved finding partial eigensolutions of symmetric matrices, it was decided to employ the method of Simultaneous Iteration (SI). This is an extension of the Power Method and employs as many trial vectors as the order of eigensolution sought. Should the dominant 6 eigenpairs be required from a coefficient matrix of order 50, for example, Simultaneous Iteration uses 6 (orthogonal) trial vectors to produce a matrix of order 6 containing the required results. The savings in terms of computational effort are obvious and the method is discussed in more detail in the next section.
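To make the deflation step concrete, the following MAPLE sketch (an illustrative small matrix and names of my own; the engineers' matrices are not reproduced here) finds the dominant eigenpair by a crude Power Method and then deflates:

> with(linalg):
> A := matrix(3, 3, [4, 1, 4, 1, 10, 1, 4, 1, 10]):
> u := vector([1.0, 0, 0]):
> for k from 1 to 40 do u := normalize(multiply(A, u)) od:   # crude Power Method
> lambda1 := innerprod(u, multiply(A, u)):                   # Rayleigh quotient for lambda[1]
> x1 := matrix(3, 1, [u[1], u[2], u[3]]):                    # dominant eigenvector as a column
> A2 := evalm(A - lambda1 * (x1 &* transpose(x1))):          # deflated matrix
> # the Power Method applied to A2 now converges towards the second eigenpair, but each
> # deflation compounds rounding error, which is why an alternative strategy was sought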

SIMULTANEOUS ITERATION

This is an extension of the bi-iteration scheme of Bauer, 1957. Consider a real symmetric matrix $A$ of order $n$ whose eigenvalues satisfy

$$|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_n|$$

and let the corresponding eigenvectors $x_i$ be such that

$$x_i^T x_j = \delta_{ij}.$$

The algorithm may be summarised as follows. Let the columns of the $n \times m$ matrix

$$U^{(0)} = [\,u_1^{(0)}, u_2^{(0)}, \ldots, u_m^{(0)}\,]$$

approximate $x_1, x_2, \ldots, x_m$, the trial vectors being chosen to be orthogonal.

For $k = 0, 1, 2, \ldots$

(i) pre-multiply the trial vectors: form $V^{(k)} = A\,U^{(k)}$

(ii) form the interaction matrix of order $m$, $B^{(k)} = U^{(k)T} V^{(k)}$

(iii) determine the eigenvalues of $B^{(k)}$ and associated matrix of eigenvectors $T$

(iv) form $W^{(k)} = V^{(k)} T$

(v) orthogonalise $W^{(k)}$ by the Gram-Schmidt process to form $U^{(k+1)}$

(vi) test for convergence and go to step (i) if necessary

A fuller discussion of the method can be found in Jennings & McKeown, 1992.
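As a compact illustration, steps (i)-(v) can be gathered into a single MAPLE procedure in the classical linalg package. This is a sketch only: the procedure name si_step and the assumption that the eigenvalues of $B^{(k)}$ are simple are mine, not the paper's.

> with(linalg):
> si_step := proc(A, U)
>   local V, B, r, T, W, g, Unew, i;
>   V := multiply(A, U);                                      # (i)   pre-multiply
>   B := multiply(transpose(U), V);                           # (ii)  interaction matrix
>   r := [eigenvects(map(evalf, B))];                         # (iii) eigensolution of B
>   T := r[1][3][1];                                          #       (assumes simple eigenvalues)
>   for i from 2 to nops(r) do T := augment(T, r[i][3][1]) od;
>   W := multiply(V, T);                                      # (iv)  improved trial vectors
>   g := GramSchmidt([seq(col(W, i), i = 1 .. coldim(W))]);   # (v)   orthogonalise ...
>   Unew := normalize(g[1]);
>   for i from 2 to nops(g) do Unew := augment(Unew, normalize(g[i])) od;
>   Unew;                                                     # ... and normalise: U^(k+1)
> end:

Repeated calls of the form U := si_step(A, U), with a convergence test between successive values of U, then reproduce the basic cycle.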

NUMERICAL EXAMPLE

The symmetric matrix $A$ of order four has eigenvalues $-8.03$, $7.93$, $5.67$ and $-1.57$ approximately. If an eigensolution of order two is required, then taking

$$U = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}$$

initially means that 46 iterations are needed to ensure that successive trial vector sets agree to 1.E-7. If three trial vectors are used, i.e.

$$U = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}$$

and again an eigensolution of order two is sought, convergence is now achieved in 12 iterations.


TEACHING ASPECTS

Even in the early stages of the work, it became apparent how useful this approach could be in the teaching of numerical linear algebra.

1. forming $B = U^T V = U^T A U$ involves the concept of a similarity transformation

2. determining the eigensolution of $B$ can be done in a number of ways, and amongst those implemented were
   • a single sweep of the method of Jacobi through the off-diagonal elements, thus producing a very crude approximation
   • a complete eigenreduction by the Jacobi algorithm
   • an analysis developed from first principles which quantified the interaction between the vectors $u_i$ and $v_j$, i.e. how near $b_{ij} = u_i^T v_j$ is to zero for $i \ne j$

3. consolidation of the idea of orthogonality by programming Gram-Schmidt

4. using vector norms to implement a suitable convergence test

5. investigating rates of convergence by utilising gaps in the eigenvalue spectrum, e.g. in the example above, when two vectors were employed the number of iterations, $k$, could be approximated from the ratio $|\lambda_3/\lambda_2|$: requiring $|\lambda_3/\lambda_2|^k \le$ 1.E-7 gives $k = 48$. When three vectors are used, however, the governing ratio becomes $|\lambda_4/\lambda_2|$, and $|\lambda_4/\lambda_2|^k \le$ 1.E-7 gives $k = 10$ theoretically (the arithmetic is set out below).
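The figures quoted in item 5 can be checked directly from the tolerance and the approximate eigenvalues of the example matrix; the following worked restatement is mine:

$$\left|\frac{\lambda_3}{\lambda_2}\right|^{k} \le 10^{-7} \;\Longrightarrow\; k \ge \frac{\ln 10^{-7}}{\ln(5.67/7.93)} \approx 48,
\qquad
\left|\frac{\lambda_4}{\lambda_2}\right|^{k} \le 10^{-7} \;\Longrightarrow\; k \ge \frac{\ln 10^{-7}}{\ln(1.57/7.93)} \approx 10.$$

These theoretical estimates sit comfortably alongside the 46 and 12 iterations actually observed in the example.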

EXTENSIONS TO THE BASIC SI CYCLE

The matrix around which most of the work revolved was a symmetric matrix of order 12 which had been produced by colleagues in Engineering as part of their modelling pilot study discussed earlier. It was wished to determine only the four dominant eigenvalues. The SI algorithm was applied to this matrix and an eigensolution of order four requested. With four trial vectors, 82 iterations were needed, while eight trial vectors produced correct results in only 12 iterations. After much additional testing over a prolonged period, the engineers were finally satisfied that they had a general purpose algorithm suitable for their future needs. However, further mathematical study found an interesting development in the former result which has led to an enhanced algorithm, details of which are published for the first time in this paper.


When the interaction matrix was examined it was observed that it became essentially diagonal after only 7 iterations. At this stage the diagonal elements were 5333.88843322, 236.71021447, 187.24411569 and 38.02445747, while the exact eigenvalues, correct to eight decimal places, were 5333.88843322, 236.71021447, 187.24411640 and 38.08595989. In subsequent iterations the eigenvector prediction matrix $T$ remained effectively the unit matrix, hence only the pre-multiplication and orthogonalisation processes were significantly purifying the trial vectors. It seemed wasteful not to make the best possible use of these excellent eigenvalue predictions, and both postgraduate and honours students were asked to produce a modified SI algorithm which utilised them. After considerable study the following scheme, SIIT (SI with Inverse Iteration), was produced and implemented:

Step 1   Apply the normal SI process until $B^{(k)}$ is sufficiently diagonal

Step 2   Re-order the eigenvalue and eigenvector approximations into descending order to facilitate the computation of the required dominant subset

Step 3   Construct the matrix $A_1$ where

         $$A_1 = A - b_{rr}^{(k)} I$$

         where $1 \le r \le \ell$ and $\ell$ is the order of eigensolution sought

Step 4   Solve the iterative system

         $$A_1\, u_r^{(k+1)} = u_r^{(k)}$$

         where $u_r^{(k)}$ is the approximation to $x_r$ produced when $B^{(k)}$ became diagonal

Step 5   Test for convergence, i.e. is

         $$\|u^{(k+1)} - u^{(k)}\| < \varepsilon$$

         where $\varepsilon$ is the required accuracy

Step 6   When $u_r$ is accurate, go to Step 3 and choose a new $b_{rr}^{(k)}$, which is the approximation to the next subdominant eigenvalue. Continue until $\ell$ vectors have been computed accurately

Step 7   Evaluate the eigenvalues by forming $\lambda_i = u_i^T A u_i$, $i = 1, \ldots, \ell$

Note how other facets of numerical linear algebra are invoked in this algorithm. Step 3 involves a shift of origin, Step 4 utilises inverse iteration and Step 7 requires use of a similarity transformation.
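As an illustration of Steps 3-5 (and the Step 7 Rayleigh quotient) for a single vector, the following MAPLE sketch may help. The matrix, the value of brr and the starting vector are illustrative choices of mine, standing in for the quantities delivered by the SI phase:

> with(linalg):
> A := matrix(3, 3, [4, 1, 4, 1, 10, 1, 4, 1, 10]):
> brr := 9.3:                                        # approximation to an eigenvalue of A
> ur := vector([1.0, 1.0, 1.0]):                     # rough trial vector
> n := rowdim(A):
> A1 := evalm(A - brr * diag(1 $ n)):                # Step 3: shift of origin
> for k from 1 to 25 do
>   v := normalize(linsolve(A1, ur)):                # Step 4: inverse-iteration solve
>   if innerprod(v, ur) < 0 then v := evalm(-v) fi:  # fix the arbitrary sign of v
>   err := norm(evalm(v - ur), 2):                   # Step 5: change in the trial vector
>   ur := copy(v):
>   if err < 1.0e-7 then break fi:
> od:
> innerprod(ur, multiply(A, ur));                    # Step 7: refined eigenvalue estimate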


RESULTS

Applying SIIT to the 12 x 12 symmetric matrix discussed earlier gave the following results. The matrix $B^{(k)}$ became diagonal after 7 iterations.

Vector number         1     2     3     4
Converged after       8    10    12    17   iterations

Thus the original 82 iterations have been reduced to 17, a result which illustrates the potential efficiency gains possible when the inverse iteration phase is included.

FURTHER AREAS OF INVESTIGATION

No formal proof of the SI method exists for arbitrary trial vectors, and applying it to other matrices demonstrates some of its weaknesses. A matrix of order eleven (Gregory & Karney, 1969) was investigated. When the normal algorithm was used, 241 iterations were required to produce an eigensolution of order 5 using 5 trial vectors. The eigenvalues found were $\lambda_1$, $\lambda_2$, $\lambda_3$, $\lambda_4$ and $\lambda_5$. When SIIT was used, different values of TOL were used to define the diagonality of $B^{(k)}$:

TOL         Iterations taken to     Total iterations      Eigenvalues found
            become diagonal         taken by SIIT
0.05                 7                     59              λ1, λ2, λ3, λ5, λ6
0.005                9                     56              λ1, λ2, λ3, λ4, λ7
0.0005              13                     44              λ1, λ2, λ3, λ4, λ7
0.00005             19                      -                      -

(TOL is the threshold below which the off-diagonal elements of $B^{(k)}$ are deemed negligible.)

In the last case SIIT failed as the matrix $A_1$ became singular. These results indicated that further research could be done on the algorithm, e.g. was TOL related in some way to $B^{(k)}$, and how could the apparent contradiction between low tolerances and incorrect eigenvalue sets be reconciled with a high tolerance which prevented any solution being found? The former conjecture was investigated by a postgraduate student, who showed that the choice of TOL could be estimated by the Euclidean norm of the off-diagonal elements of $B^{(k)}$. This meant that a dynamic threshold level could be incorporated into the SIIT scheme. An honours student suggested that the latter situation be tackled by introducing a perturbation factor to allow the algorithm to continue, i.e. form

$$A_1 = A - \bigl(b_{rr} \times (1 + \mathrm{TOL})\bigr)\, I$$

This idea was incorporated into the process and has proved very successful.
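A sketch of how such a dynamic threshold might be computed is given below. It rests on an assumption of mine, suggested by the student's result: the measure of diagonality is taken to be the Euclidean norm of the off-diagonal part of $B^{(k)}$.

> with(linalg):
> offdiag := proc(B)
>   local n, i, j, s;
>   n := rowdim(B);
>   s := 0;
>   for i to n do
>     for j to n do
>       if i <> j then s := s + B[i, j]^2 fi;
>     od;
>   od;
>   sqrt(s);
> end:
> # B^(k) is judged sufficiently diagonal, and the inverse-iteration phase started,
> # once offdiag(B) falls below the (possibly dynamically chosen) threshold TOL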


EXAMPLES OF STUDENT WORK

Since the beginning of the project, students, both undergraduate and postgraduate, have helped develop the SI and SIIT algorithms. High level programming languages, particularly FORTRAN, have been used to implement them and considerable use has been made of NAG subroutines to assist with Step 4 of SIIT. Since all numerical analysis lectures at Ulster are held in computer laboratories, computer algebra systems, mainly MAPLE, have been integrated into the teaching and learning process. Recently revised undergraduate syllabi have drawn upon the work discussed in this paper and now include Simultaneous Iteration. Students can thus quickly develop MAPLE worksheets which implement the basic SI cycle using standard commands, e.g.

> with(linalg):
> A := matrix(3, 3, [4, 1, 4, 1, 10, 1, 4, 1, 10]);
> U := matrix(3, 2, [1, 0, 0, 1, 0, 0]);
> V := multiply(A, U);
> B := multiply(transpose(U), V);
> r := evalf(eigenvects(B, 'radical'));

        r := [10.16227766, 1., {[1., 6.16227766]}], [3.83772234, 1., {[1., -.16227766]}]

> v1 := normalize(r[1][3][1]);
> v2 := normalize(r[2][3][1]);
> T := augment(v1, v2);
> W := multiply(V, T);
> w1 := col(W, 1);
> w2 := col(W, 2);
> g := GramSchmidt({w1, w2});
> w1 := normalize(g[1]);
> w2 := normalize(g[2]);
> U := augment(w1, w2);
  {now test for convergence and repeat if necessary}
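One way the test in the closing comment might be carried out, using the vector norms mentioned earlier, is sketched below; the name Uold is an assumption of mine for a copy of the previous trial-vector matrix, saved (e.g. Uold := copy(U):) before the cycle is repeated:

> # a possible convergence test (a sketch, not the paper's own code): compare
> # corresponding columns of the previous and current trial-vector matrices
> err := max(norm(evalm(col(U, 1) - col(Uold, 1)), 2),
>            norm(evalm(col(U, 2) - col(Uold, 2)), 2));
> # repeat the SI cycle while err > 1.0e-7; sign changes in a column may also need to be
> # allowed for, since eigenvectors are determined only up to sign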

Some sophisticated solutions have been produced for the general case, as the following section of code illustrates:

{assume the square symmetric matrix A of order n and the orthogonal trial vector matrix U of order n x m have been input and the interaction matrix B has been formed}


> evec := m;
> r := evalf(eigenvects(B, 'radical'));
> T := r[1][3][1];
  {pick out first eigenvector. Assume the corresponding eigenvalue has algebraic multiplicity unity for illustrative purposes}
> for i from 2 to evec do
>   for j from 1 to r[i][2] do
>     S := array(1..m);
>     S := r[i][3][j];
>     T := augment(T, S);
>     evec := evec - 1;
>   od;
> od;
  {T now holds the eigenvectors of B}
> W := multiply(V, T);
> a := {col(W, 1)};
  {form a set of the columns of W}
> for i from 2 to m do
>   b := {col(W, i)};
>   a := a union b;
> od;
> g := GramSchmidt(a);
  {orthogonalise and normalise}
> U := normalize(g[1]);
> for i from 2 to m do
>   a := normalize(g[i]);
>   U := augment(U, a);
> od;
  {now test for convergence and repeat if necessary}

CONCLUSION

Models, and their solution, can provide a rich source of material suitable for incorporation into the teaching and learning of mathematics. This paper has attempted to illustrate this statement by showing how a request from academic colleagues has developed into an important aspect of our undergraduate syllabi. The work reported in this paper is innovative and will continue to provide scope for further development. From a staff viewpoint, it has been a gratifying experience to have had considerable input from students into the entire process. They can relate readily to the subject area and have enjoyed the challenge of obtaining innovative solutions via technology. To involve students in the development of their own curriculum has had very positive results both in terms of motivation and ownership. In such a situation everyone gains and the cycle of research, modelling and mathematics is complete.


REFERENCES

Bauer F L (1957) 'Das Verfahren der Treppeniteration und verwandte Verfahren zur Lösung algebraischer Eigenwertprobleme', ZAMP, 8.
Faires J D and Burden R (1998) Numerical Methods. Brooks-Cole.
Gregory R T and Karney D T (1969) A Collection of Matrices for Testing Computational Algorithms. Wiley-Interscience.
Jennings A and McKeown J J (1992) Matrix Computation. Wiley.