A new restarting method in the Lanczos algorithm for generalized eigenvalue problem


Applied Mathematics and Computation 184 (2007) 421–428 www.elsevier.com/locate/amc

H. Saberi Najafi *, A. Refahi
Department of Mathematics, Faculty of Sciences, Guilan University, P.O. Box 41335-1914, Rasht, Iran

Abstract

In this paper we present a new restarting method in the Lanczos algorithm for computing a few eigenvalues of the symmetric positive definite generalized eigenvalue problem AX = λBX, where A and B are large and sparse matrices. The implementation of the algorithm has been tested on numerical examples; the results show that the algorithm converges fast and works with high accuracy.
© 2006 Elsevier Inc. All rights reserved.

1. Introduction

Computing eigenvalues of the generalized eigenvalue equation is one of the most important topics in numerical linear algebra, because many engineering problems give rise to generalized eigenvalue problems. Many applications lead to a symmetric positive definite generalized eigenvalue problem of the form AX = λBX, where A and B are symmetric, large and sparse, and at least one of them is positive definite; such problems arise, for example, in mechanical vibration and in the analysis of structures. There are several methods for solving the symmetric positive definite generalized eigenvalue problem [1]. Among them the Lanczos algorithm is particularly important, since the other methods do not work well when the matrices A and B are large and sparse. The Lanczos algorithm is a Krylov subspace method; the idea is to reduce the pencil to a symmetric tridiagonal form. The resulting approximate generalized eigenvalues, however, are not accurate enough and need to be improved.

2. Definition

If U and V are two vectors of R^n, their B-scalar product is

$(U, V)_B = V^T B U.$

* Corresponding author. E-mail address: hnajafi@guilan.ac.ir (H. Saberi Najafi).

0096-3003/$ - see front matter © 2006 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2006.06.079


This inner product is well defined if and only if the matrix B is positive definite. In this case we can define the B-norm ‖·‖_B associated with this inner product by

$\|U\|_B = \sqrt{(U, U)_B} = \sqrt{U^T B U}, \qquad \forall U \in \mathbb{R}^n.$

3. Lanczos method

Let A and B be two n × n symmetric matrices, with B positive definite, and let V ∈ R^n be a vector. After m steps this method produces an m × m symmetric tridiagonal matrix T_m and a B-orthonormal n × m matrix V_m such that

$V_m^T A V_m \simeq T_m, \qquad V_m^T B V_m = I_m, \qquad m \le n.$

The eigenvalues of T_m are therefore approximations of the generalized eigenvalues of the pencil (A, B). Note that if m is large enough, the m eigenvalues of the matrix T_m are almost the same as m generalized eigenvalues of (A, B) [1]. One of the strongest points of the Lanczos method, compared with the other methods, is that if one needs m eigenvalues of the pencil (A, B), it is sufficient to run the algorithm for m steps. The algorithm is as follows, where L denotes the lower triangular Cholesky factor of B (B = LL^T):

Algorithm 1. Lanczos process:
  Choose a vector V_1 such that ‖V_1‖_B = 1
  Set β_0 = 1, r_0 = V_1, V_0 = 0
  For i = 1, ..., m do
    α_i = V_i^T (A V_i − β_{i−1} B V_{i−1})
    r_i = A V_i − β_{i−1} B V_{i−1} − α_i B V_i
    β_i = ‖L^{−1} r_i‖_2
    Solve B V_{i+1} = r_i / β_i
  End do

3.1. Computational remarks

(1) Note that L^{−1} r_i can be computed by solving the lower triangular system L Y_i = r_i.
(2) For solving B V_{i+1} = r_i / β_i the weighted Arnoldi process can be used [2].

4. Theorems

Theorem 4.1. The symmetric positive definite pencil (A, B) has real eigenvalues and linearly independent eigenvectors. Proof in [1].

Theorem 4.2. The vectors V_1, V_2, ..., V_m produced by the Lanczos algorithm form a B-orthonormal basis of the subspace K_m = span{V_1, B^{−1}AV_1, ..., (B^{−1}A)^{m−1}V_1}. Proof in [1].

Theorem 4.3. The generalized eigenvectors of the pencil (A, B) are B-orthonormal. Proof in [1].

Theorem 4.4. Denote by V_m the n × m matrix with column vectors v_1, v_2, ..., v_m and by T_m the m × m symmetric tridiagonal matrix whose nonzero entries are defined by Algorithm 1. Then the following relations hold:


(i) $AV_m = BV_mT_m + \beta_m Bv_{m+1}e_m^T$.
(ii) $V_m^T A V_m \simeq T_m$.

Proof

(1) From Algorithm 1 we have

$Av_i = Bv_{i-1}\beta_{i-1} + Bv_i\alpha_i + Bv_{i+1}\beta_i, \qquad i = 1, \ldots, m.$

Collecting these relations in matrix form gives

$AV_m = BV_mT_m + \beta_m Bv_{m+1}e_m^T.$

(2) Multiplying this equality on the left by $V_m^T$ and using the fact that {v_1, ..., v_m} is B-orthonormal, we obtain

$V_m^T A V_m \simeq T_m.$  □

5. Numerical test 1

Let A and B be the 1000 × 1000 matrices

$A = \begin{bmatrix}
1 & 1.2 & 0.42 & 0.8 & & \\
1.2 & 2 & 1.2 & 0.42 & \ddots & \\
0.42 & 1.2 & 3 & 1.2 & \ddots & 0.8 \\
0.8 & 0.42 & 1.2 & \ddots & \ddots & 0.42 \\
 & \ddots & \ddots & \ddots & 999 & 1.2 \\
 & & 0.8 & 0.42 & 1.2 & 1000
\end{bmatrix}_{1000 \times 1000},$

i.e. the symmetric band matrix with a_{ii} = i, a_{i,i±1} = 1.2, a_{i,i±2} = 0.42 and a_{i,i±3} = 0.8, and

$B = \mathrm{Diag}(2, 3, \ldots, 1000, 1001)_{1000 \times 1000}.$
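For reproducibility, the following is a minimal SciPy sketch of how these two test matrices can be assembled (the function name build_test1_matrices and the variable names are ours, not from the paper):

```python
import numpy as np
import scipy.sparse as sp

def build_test1_matrices(n=1000):
    """Assemble the band matrix A and the diagonal matrix B of numerical test 1."""
    diag = np.arange(1, n + 1, dtype=float)           # a_ii = i
    off1 = 1.2 * np.ones(n - 1)                       # a_{i,i+-1} = 1.2
    off2 = 0.42 * np.ones(n - 2)                      # a_{i,i+-2} = 0.42
    off3 = 0.8 * np.ones(n - 3)                       # a_{i,i+-3} = 0.8
    A = sp.diags([off3, off2, off1, diag, off1, off2, off3],
                 offsets=[-3, -2, -1, 0, 1, 2, 3], format="csr")
    B = sp.diags(np.arange(2, n + 2, dtype=float), 0, format="csr")  # Diag(2, ..., 1001)
    return A, B

A, B = build_test1_matrices()
```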

We apply Algorithm 1 for m = 2, 3, ..., 20. The results are shown in Table 1.

Table 1. Results of applying Algorithm 1

  m     ‖V_m^T A V_m − T_m‖     Time
  2     7.92e−16                0.7334
  4     4.307e−15               1.6838
  6     5.68e−14                2.6338
  10    1.88e−11                4.5404
  12    8.17e−10                5.4913
  15    5.307e−7                6.9152
  18    4.78e−4                 8.3447
  20    0.0056                  9.2959
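To make Algorithm 1 concrete, here is a minimal dense NumPy/SciPy sketch of the B-Lanczos process together with a Table 1 style check (our own naming, not the authors' code; it assumes the A, B built in the previous sketch, uses the Cholesky factor B = LL^T for β_i, and solves B v_{i+1} = r_i/β_i with a direct factorization rather than the weighted Arnoldi process of remark 3.1):

```python
import numpy as np
from scipy.linalg import cholesky, cho_factor, cho_solve, solve_triangular

def generalized_lanczos(A, B, m, v1=None, seed=0):
    """Run m steps of Algorithm 1 (B-Lanczos) on the symmetric pencil (A, B).

    A and B are dense symmetric arrays, B assumed SPD.  Returns the
    B-orthonormal basis V_m, the next vector v_{m+1}, the tridiagonal
    matrix T_m and beta_m (used later in the residual formula of Thm 7.1)."""
    n = A.shape[0]
    L = cholesky(B, lower=True)                 # B = L L^T
    cB = cho_factor(B, lower=True)              # reused for the solves B v_{i+1} = r_i / beta_i

    if v1 is None:
        v1 = np.random.default_rng(seed).standard_normal(n)
    v1 = v1 / np.sqrt(v1 @ B @ v1)              # B-normalize: ||v1||_B = 1

    V = np.zeros((n, m + 1)); V[:, 0] = v1
    alpha, beta = np.zeros(m), np.zeros(m)
    v_prev, beta_prev = np.zeros(n), 1.0        # V_0 = 0, beta_0 = 1 as in Algorithm 1
    for i in range(m):
        w = A @ V[:, i] - beta_prev * (B @ v_prev)
        alpha[i] = V[:, i] @ w                                          # alpha_i
        r = w - alpha[i] * (B @ V[:, i])                                # r_i
        beta[i] = np.linalg.norm(solve_triangular(L, r, lower=True))    # ||L^{-1} r_i||_2
        V[:, i + 1] = cho_solve(cB, r) / beta[i]                        # B v_{i+1} = r_i / beta_i
        v_prev, beta_prev = V[:, i], beta[i]

    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return V[:, :m], V[:, m], T, beta[-1]

# Table 1 style check, reusing the sparse A, B built in the previous sketch.
Ad, Bd = A.toarray(), B.toarray()
for m in (2, 6, 10):
    Vm, _, Tm, _ = generalized_lanczos(Ad, Bd, m)
    print(m, np.linalg.norm(Vm.T @ Ad @ Vm - Tm))
```

The growth of ‖V_m^T A V_m − T_m‖ with m, caused by the gradual loss of B-orthogonality without reorthogonalization, is the behaviour reported in Table 1.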


6. Restarting methods in the Lanczos algorithm

In the Lanczos algorithm the initial vector V_1 is chosen arbitrarily. Since the choice of this vector influences the accuracy of the results, it is better to choose it in a more informed way, using a process called restarting. The m eigenvectors of the pencil (A, B) form a basis of the Krylov subspace K_m(B^{−1}A, V_1), and since V_1 ∈ K_m, the vector V_1 is a linear combination of these m eigenvectors. This is the main idea behind restarting, which can be seen in the following algorithm:

Algorithm 2. Lanczos restarting process:
  Choose an initial vector V^(0) such that ‖V^(0)‖_B = 1
  For s = 0, 1, ... do
    V_1 = V^(s)
    Run the Lanczos algorithm for m steps with initial vector V_1
    Compute the eigenpairs (λ_i^(s), h_i^(s)) of T_m^(s), i = 1, ..., m
    Compute the Ritz vectors Y_i^(s) = V_m h_i^(s), i = 1, ..., m
    Set V^(s+1) = Σ_{i=1}^m C_i Y_i^(s) and B-normalize
  End do

7. New restart method (generalized restarting)

All the restarting methods described in [3–10] can also be used for computing generalized eigenvalues. In this section a new restarting method for computing the generalized eigenvalues of (A, B) is described. The method has a very simple algorithm and can be run on personal computers. First we prove the following theorem, and then the G-R algorithm will be described.

Theorem 7.1. Let Y_i^(m) be an eigenvector of the symmetric tridiagonal matrix T_m associated with the eigenvalue λ_i^(m), computed by the Lanczos algorithm, and let U_i^(m) = V_m Y_i^(m) be the Ritz approximate eigenvector of the pencil (A, B). Then

(i) $(A - \lambda_i^{(m)}B)U_i^{(m)} = \beta_m Bv_{m+1}e_m^TY_i^{(m)}$, and

$\|(A - \lambda_i^{(m)}B)U_i^{(m)}\|_B = \|\beta_m Bv_{m+1}e_m^TY_i^{(m)}\|_B, \qquad \|(A - \lambda_i^{(m)}B)U_i^{(m)}\|_2 = \|\beta_m Bv_{m+1}e_m^TY_i^{(m)}\|_2.$

(ii) If the matrix B is idempotent, then

$\|(A - \lambda_i^{(m)}B)U_i^{(m)}\|_2 = \|(A - \lambda_i^{(m)}B)U_i^{(m)}\|_B = \beta_m |e_m^TY_i^{(m)}|.$

Proof

(i) Since $AV_m = BV_mT_m + \beta_m Bv_{m+1}e_m^T$, multiplying both sides of this relation on the right by $Y_i^{(m)}$ gives

$AV_mY_i^{(m)} = BV_mT_mY_i^{(m)} + \beta_m Bv_{m+1}e_m^TY_i^{(m)}.$

So

$(A - \lambda_i^{(m)}B)U_i^{(m)} = \beta_m Bv_{m+1}e_m^TY_i^{(m)}.$

(ii) We have

$\|(A - \lambda_i^{(m)}B)U_i^{(m)}\|_B^2 = \|\beta_m Bv_{m+1}e_m^TY_i^{(m)}\|_B^2 = \beta_m^2\, Y_i^{(m)T}e_m\, v_{m+1}^TB^TBBv_{m+1}\, e_m^TY_i^{(m)}.$

Since B is symmetric and idempotent, $B^TBB = B$, so

$\|(A - \lambda_i^{(m)}B)U_i^{(m)}\|_B^2 = \beta_m^2\, Y_i^{(m)T}e_m\, v_{m+1}^TBv_{m+1}\, e_m^TY_i^{(m)} = \beta_m^2\, |e_m^TY_i^{(m)}|^2,$

because $v_{m+1}^TBv_{m+1} = 1$ by B-orthonormality. Similarly,

$\|(A - \lambda_i^{(m)}B)U_i^{(m)}\|_2^2 = \beta_m^2 |e_m^TY_i^{(m)}|^2.$

Finally,

$\|(A - \lambda_i^{(m)}B)U_i^{(m)}\|_2 = \|(A - \lambda_i^{(m)}B)U_i^{(m)}\|_B = \beta_m |e_m^TY_i^{(m)}|.$  □
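As a quick numerical sanity check, the residual identity of Theorem 7.1(i) can be verified directly; the following sketch assumes the generalized_lanczos function and the dense matrices Ad, Bd from the sketches in Section 5 (all names are ours):

```python
import numpy as np
from scipy.linalg import eigh

# Assumes Ad, Bd and generalized_lanczos from the earlier sketches.
m = 10
Vm, v_next, Tm, beta_m = generalized_lanczos(Ad, Bd, m)
lam, Y = eigh(Tm)                              # eigenpairs of the tridiagonal T_m

for i in range(m):
    U = Vm @ Y[:, i]                           # Ritz vector U_i^(m) = V_m Y_i^(m)
    lhs = (Ad - lam[i] * Bd) @ U               # residual (A - lambda_i B) U_i^(m)
    rhs = beta_m * (Bd @ v_next) * Y[-1, i]    # beta_m B v_{m+1} e_m^T Y_i^(m)
    print(i, np.linalg.norm(lhs - rhs))        # expected to be tiny: the identity holds up to roundoff
```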

Algorithm 3. Generalized restarting process (G-R):
  Choose an initial vector Ṽ_1
  Compute V_1 = Ṽ_1 / ‖Ṽ_1‖_B
  For k = 1, 2, ..., m do
    For s = 1, 2, ... do
      Run the Lanczos algorithm for m steps, giving (V_m^(s), T_m^(s))
      Compute the Lanczos Ritz values h_i^(s) and the generalized Ritz vectors Y_i^(s)
      Set c = min_{i=k,...,m} { r_i^(s) = ‖A Y_i^(s) − h_i^(s) B Y_i^(s)‖_2 }
      For P = k, k + 1, ..., m do
        If c = r_P^(s) then set V_1 = Y_P^(s) and B-normalize; end if
      End do
    End do
    Y_P^(s) ↔ Y_k^(s)
    U_k = Y_k^(s)
    V_1 = Σ_{j=1}^k U_j + Σ_{j=k+1}^m Y_j^(s), B-normalized
  End do
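Below is a simplified sketch of this restart-and-lock idea (our own reading of Algorithm 3, not the authors' code; it reuses generalized_lanczos and the dense Ad, Bd from the earlier sketches, restarts with the Ritz vector of smallest 2-norm residual, and locks each converged Ritz vector into the start vector):

```python
import numpy as np
from scipy.linalg import eigh

def b_normalize(v, B):
    """Scale v so that ||v||_B = 1."""
    return v / np.sqrt(v @ B @ v)

def gr_restart(A, B, m, n_restarts=10, seed=0):
    """Simplified generalized-restarting loop in the spirit of Algorithm 3."""
    n = A.shape[0]
    v1 = b_normalize(np.random.default_rng(seed).standard_normal(n), B)
    locked, eigs = [], []                       # locked Ritz vectors U_1, ..., U_k and their values
    for k in range(m):
        for s in range(n_restarts):
            Vm, _, Tm, _ = generalized_lanczos(A, B, m, v1=v1)
            theta, H = eigh(Tm)                 # Ritz values and eigenvectors of T_m
            Y = Vm @ H                          # generalized Ritz vectors
            res = [np.linalg.norm(A @ Y[:, i] - theta[i] * (B @ Y[:, i]))
                   for i in range(k, m)]        # residuals r_i^(s), i = k, ..., m
            p = k + int(np.argmin(res))         # index attaining the minimum residual
            v1 = b_normalize(Y[:, p], B)        # restart with the best Ritz vector
        eigs.append(theta[p])                   # lock the k-th converged Ritz pair
        locked.append(Y[:, p])
        others = [Y[:, i] for i in range(m) if i != p]
        v1 = b_normalize(sum(locked) + sum(others[k:]), B)   # V_1 = sum U_j + sum Y_j
    return np.array(eigs)

print(gr_restart(Ad, Bd, m=4, n_restarts=10))
```

This only illustrates the flow of the selection and locking steps; the paper's Algorithm 3 additionally swaps Y_P ↔ Y_k and monitors convergence of the inner loop rather than running it a fixed number of times.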

7.1. Numerical test 2 for G-R

Let A and B be the two 1000 × 1000 matrices used in numerical test 1. We obtain four generalized eigenvalues of the pencil (A, B) by applying the G-R algorithm.

Step 1: Let m = 4 in the G-R algorithm. We obtain the tridiagonal matrix T_m with four eigenvalues λ_1, λ_2, λ_3, λ_4, which are approximations of four generalized eigenvalues of the pencil (A, B). The generalized eigenvalues and residual values are

λ_1^(1) = 1.0095,  r_1^(1) = 0.094383;
λ_2^(1) = 1.058,   r_2^(1) = 0.56031;
λ_3^(1) = 1.2907,  r_3^(1) = 0.93066;
λ_4^(1) = 1.8339,  r_4^(1) = 0.26231.

Since the residual value of λ_1^(1) is the minimum, we select the vector V_1 = Y_1^(1) for the next iteration of the Lanczos algorithm.


After 10 iterations the values are

λ_1^(10) = 0.034192,  r_1^(10) = 1.38e−9;
λ_2^(10) = 0.60339,   r_2^(10) = 0.94467;
λ_3^(10) = 1.1142,    r_3^(10) = 2.0542;
λ_4^(10) = 1.491,     r_4^(10) = 1.6968.

According to these results, the generalized eigenvalue λ_1 has improved very well (see Fig. 1).

Step 2: To improve the other generalized eigenvalues, set V_1 = U_1 + Y_2 + Y_3 + Y_4, where U_1 = Y_1^(10), Y_2 = Y_2^(10), Y_3 = Y_3^(10), Y_4 = Y_4^(10). After 10 further iterations the generalized eigenvalue λ_4 has improved very well (see Fig. 2):

λ_4^(20) = 1.8352,  r_4^(20) = 1.722e−14.

Step 3: As the results show, the generalized eigenvalue λ_4 has improved, so for the next iterations set V_1 = U_1 + U_2 + Y_2 + Y_3, where U_1 = Y_1^(10), U_2 = Y_4^(20), Y_2 = Y_2^(20), Y_3 = Y_3^(20), and restart the Lanczos algorithm.

Fig. 1. Showing the improvement of the first generalized eigenvalue (λ_1) in the first step of generalized restarting after a number of restarts.

Fig. 2. Showing the improvement of the fourth generalized eigenvalue (λ_4) in the second step of generalized restarting after a number of restarts.

Fig. 3. Showing the improvement of the third generalized eigenvalue (λ_3) in the third step of generalized restarting after a number of restarts.


Fig. 4. Showing the improvement of the second generalized eigenvalue (λ_2) in the fourth step of generalized restarting after a number of restarts.

After 37 iterations the generalized eigenvalue λ_3 has improved very well (see Fig. 3):

λ_3^(57) = 1.3528,  r_3^(57) = 9.3618e−5.

Step 4: Having computed three generalized eigenvalues of the pencil (A, B), namely λ_1, λ_4 and λ_3, set V_1 = U_1 + U_2 + U_3 + Y_2 as the starting vector of the Lanczos algorithm and restart, where U_1 = Y_1^(10), U_2 = Y_4^(20), U_3 = Y_3^(57), Y_2 = Y_2^(57). After 47 iterations the fourth generalized eigenvalue λ_2 has improved as well (see Fig. 4):

λ_2^(104) = 0.66417,  r_2^(104) = 0.000015.

Note: After step 1, one of the other generalized eigenvalues might improve while the others get worse. For example, in step 2 the residual value of λ_1 increased to 2.1149, but this is not important, because in step 2 we are not trying to improve λ_1, which has already been computed.

8. Comments and conclusions

(1) As we have seen in Theorem 7.1, the error can be computed with respect to either ‖·‖_2 or ‖·‖_B, but the numerical tests show that using ‖·‖_2 gives higher accuracy and less computing time (see Table 2).

Table 2. Results of the G-R algorithm using ‖·‖_2 and ‖·‖_B

  Using ‖·‖_B:
  Step    Iteration   Error        Time
  1       10          3.5439e−8    17.0288
  2       10          4.88e−13     17.3324
  3       50          9.5053e−5    83.6560
  4       49          0.0001637    82.0542
  Total   119         2.5888e−4    200.0714

  Using ‖·‖_2:
  Step    Iteration   Error        Time
  1       10          1.38e−9      17.1634
  2       10          1.7225e−14   16.7966
  3       37          9.3618e−5    62.0779
  4       47          0.0000157    78.6765
  Total   104         6.281e−5     174.7144

Table 3. Results of Algorithm 3 when m > p

  m = 5:
  Step    Iteration   Error        Time
  1       10          2.126e−9     22.4656
  2       10          1.731e−10    22.1833
  3       5           7.946e−6     11.1172
  4       27          0.000437     59.6631
  5       31          0.00586      68.9032
  Total   83          0.0013       184.31

  m = 8:
  Step    Iteration   Error        Time
  1       3           3.653e−14    12.095
  2       3           2.996e−6     11.2138
  3       5           1.211e−8     18.4821
  4       20          1.2345e−6    74.1037
  5       31          5.6874e−14   113.012
  Total   62          8.486e−7     228.89


Numerical test 3. Let A and B be the two 1000 × 1000 matrices used in numerical test 1. We obtain four generalized eigenvalues of the pencil (A, B) by applying the G-R algorithm using ‖·‖_2 and ‖·‖_B; see Table 2 for the results.

(2) If only a certain number of eigenvalues of (A, B) is of interest, say p ≤ n of them, then choosing m > p in Algorithm 3 decreases the error significantly; see Table 3.

Numerical test 4. Let A be the 1000 × 1000 matrix used in numerical test 1, and let B be the 1000 × 1000 tridiagonal matrix

$B = \begin{bmatrix}
2 & 1.4 & & & \\
1.4 & 3 & 1.4 & & \\
 & 1.4 & 4 & \ddots & \\
 & & \ddots & 1000 & 1.4 \\
 & & & 1.4 & 1001
\end{bmatrix}_{1000 \times 1000}.$

We obtain five generalized eigenvalues of the pencil (A, B) by applying the G-R algorithm with m = 5 and m = 8; the results are shown in Table 3. As the results show, the five generalized eigenvalues of (A, B) are obtained with high accuracy without a significant increase in time.

(3) In Theorem 7.1 we observed that if B is an idempotent matrix, then the computed error is the same as the error in computing the eigenvalues in [4]; in other words, B = I is not required, it is sufficient that B be idempotent.

(4) The generalized restarting method has the following properties:
(a) The algorithm is very simple to run.
(b) It can be run on any PC.
(c) The generalized eigenvalues can be computed with high accuracy and less computing time, even compared with the case B = I (see [3,4]).

References

[1] B.N. Datta, Numerical Linear Algebra and Applications, Brooks/Cole, New York, 1994.
[2] A. Essai, Weighted FOM and GMRES for solving nonsymmetric linear systems, Numer. Algorithms 18 (1998) 277–299.
[3] H. Saberi Najafi, E. Khaleghi, A new restarting method in the Arnoldi algorithm for computing the eigenvalues of a nonsymmetric matrix, Appl. Math. Comput. 156 (2004) 59–67.
[4] H. Saberi Najafi, H. Ghazvini, Weighted restarting method in the weighted Arnoldi algorithm for computing the eigenvalues of a nonsymmetric matrix, Appl. Math. Comput. 175 (2006) 1279–1287.
[5] Y. Saad, Numerical Methods for Large Eigenvalue Problems, Halsted Press, New York, 1992.
[6] Y. Saad, Iterative Methods for Sparse Linear Systems, PWS Publishing Company, Boston, 1996.
[7] D.C. Sorensen, Implicit application of polynomial filters in a k-step Arnoldi method, SIAM J. Matrix Anal. Appl. 13 (1992) 357–385.
[8] R.B. Morgan, On restarting the Arnoldi method for large nonsymmetric eigenvalue problems, Math. Comput. (1996) 1213–1230.
[9] A. Stathopoulos, Y. Saad, K. Wu, Dynamic thick restarting of the Davidson and the implicitly restarted Arnoldi methods, SIAM J. Sci. Comput. 19 (1998) 227–245.
[10] R.B. Lehoucq, Analysis and implementation of an implicitly restarted Arnoldi iteration, Ph.D. thesis, Department of Computational and Applied Mathematics, Rice University, 1995, TR 95-13.