Mathematical and Computer Modelling 57 (2013) 2149–2157
Computing pseudospectra using block implicitly restarted Arnoldi iteration R. Astudillo, Z. Castillo ∗ Centro de Cálculo Científico y Tecnológico, Escuela de Computación, Universidad Central de Venezuela, Caracas, Venezuela
Article history: Received 9 December 2010; Accepted 12 March 2011.
Keywords: Block Arnoldi; Spectra; Pseudospectra; Implicit restarting; Pseudospectra computation.
Abstract. The pseudospectrum is a useful tool to study the behavior of systems associated with non-normal matrices. In the past decade, projection Krylov methods have been used to calculate the pseudospectra of large matrices, rather than earlier approaches that require an SVD decomposition at each point of a grid. Inverse Lanczos is a better choice, but it still requires a prior Schur factorization, which is prohibitive in the large-scale setting. In this work, we investigate the practical applicability and performance of a block implicitly restarted Arnoldi method to approximate the pseudospectrum of large matrices, as suggested by Wright and Trefethen in ‘‘Pseudospectra of rectangular matrices’’ (Wright and Trefethen, 2002 [16]). We present a case study of this idea, using a block version of the Implicitly Restarted Arnoldi Method (IRAM) (Sorensen, 1992 [5]). Numerical results on several test matrices from the literature are encouraging and show a reduction in time for this block method compared with its single-vector counterpart IRAM. © 2011 Elsevier Ltd. All rights reserved.
1. Introduction
The spectrum of a matrix A ∈ Cn×n is the set of all λ ∈ C such that Ax = λx for some nonzero vector x, called an eigenvector. This set can be described as
Λ(A) = {z ∈ C : rank(zI − A) < n} = {z ∈ C : (zI − A) is singular},
where rank(A) is the number of linearly independent columns of A. Computing eigenvectors and eigenvalues of large matrices is an important step in several applications in science and engineering; however, when the matrix A is highly non-normal1 the spectrum alone is insufficient to analyze some important phenomena, for example, the behavior of solutions of dynamical systems or of operators under perturbations [3]. In recent years, the pseudospectrum has become an important tool to study dynamical systems with associated non-normal operators. When the matrix is large and the interest is only in a few particular eigenvalues, it is worth using Krylov projection techniques to approximate the pseudospectrum. In practice, this kind of approximation has proved to be an effective low-cost tool providing additional information after eigenvalue calculation. The present work focuses on the approximation of the pseudospectra of large matrices using a block implicitly restarted Arnoldi method. We start with the concept of the pseudospectrum of a matrix A and some equivalent definitions, as well as a quick overview of methods to compute the pseudospectra of A. Then, in Section 3 we present projection methods to approximate the pseudospectra of large and sparse matrices. Section 4 describes block Arnoldi projection methods and introduces the Block Implicitly Restarted Arnoldi Method (BLIRAM) [4], as an extension of the Implicitly Restarted Arnoldi
∗ Corresponding author. Tel.: +58 212 6613309; fax: +58 212 6051676.
E-mail addresses: [email protected] (R. Astudillo), [email protected] (Z. Castillo).
1 A is non-normal if AA∗ ≠ A∗A. There exist different measures of non-normality [1,2].
0895-7177/$ – see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.mcm.2011.03.022
(IRAM) method proposed by Sorensen [5], which is a reference method for computing a few eigenvalues of large matrices. Also, in this section we establish a way to use BLIRAM in the computation of the pseudospectra of large matrices. Numerical experimentation and a comparison with the mainstream methods are presented in Section 5. Concluding remarks and future work are discussed in Section 6.
1.1. Notation
We adopt Householder notation: capital letters A, B, C, . . . denote matrices, and A∗ is the conjugate transpose of A. Lower-case letters (except z) denote column vectors. Greek letters and z represent complex scalars. In is the identity matrix, with en as its nth column; wherever the context is clear, the subscript n is dropped. Λ(A) is the spectrum of the matrix A, while Λϵ (A) is its ϵ-pseudospectrum; the symbol smin (A) denotes the smallest singular value of A, and λmin (A) represents the eigenvalue of smallest magnitude.
2. Pseudospectra computation
The concept of pseudospectra was introduced (under different names) by Godunov (1964) [6], Varah (1967) [7], Landau (1975) [8], Kostin and Razzakov (1985) [9], and Trefethen in 1991 [10]. Given ϵ > 0, the pseudospectrum of a matrix A ∈ Cn×n can be defined in several equivalent ways. Next we mention some of them.
Definition 1. The ϵ-pseudospectrum of a matrix A, denoted by Λϵ (A), is the set of numbers z ∈ C such that
∥(zI − A)−1 ∥ > ϵ −1 .
(1)
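As a quick numerical illustration of Definition 1 (our own sketch in NumPy; the 2 × 2 matrix, the tolerance ϵ, and the trial point z are arbitrary choices, not from the paper):

```python
import numpy as np

# An arbitrary small non-normal matrix chosen for illustration.
A = np.array([[1.0, 100.0],
              [0.0, 1.1]])

eps = 1e-2          # tolerance epsilon
z = 1.05 + 0.01j    # trial point between the eigenvalues 1 and 1.1

# Definition 1: z lies in the eps-pseudospectrum iff ||(zI - A)^{-1}|| > 1/eps.
R = np.linalg.inv(z * np.eye(2) - A)
resolvent_norm = np.linalg.norm(R, 2)
in_pseudospectrum = resolvent_norm > 1.0 / eps
```

Here z is not an eigenvalue of A, yet the resolvent norm is far above ϵ−1, so z belongs to Λϵ (A); this is the typical signature of non-normality.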
If the 2-norm ∥ · ∥2 is used, the following equivalent definition can be derived.
Definition 2. Λϵ (A) is the set of z ∈ C such that
smin (zI − A) < ϵ
(2)
where smin (A) denotes the minimum singular value of A. From eigenvalue perturbation theory, two more definitions are in order.
Definition 3. Λϵ (A) is the set of z ∈ C such that z ∈ Λ(A + E) for some E ∈ Cn×n with ∥E∥ < ϵ, where Λ(A) denotes the set of eigenvalues of A.
Definition 4. Λϵ (A) is the set of z ∈ C such that
∥(zI − A)v∥ < ϵ
for some v ∈ Cn with ∥v∥ = 1; the vector v is called a pseudoeigenvector and the scalar z a pseudoeigenvalue.
A classical procedure suggested in [11] to compute the ϵ-pseudospectrum of a matrix A is summarized in the following steps.
1. Determine a region K of interest in C.
2. Discretize the region K.
3. For each point z of the discretization, compute ∥(zI − A)−1 ∥.
4. Use a visualization tool to display the computed values for z ∈ K.
There are different manners to determine and discretize the region K; for a discussion of this topic, see [3,12]. To compute
∥(zI − A)−1 ∥, we can use Definition 2: obtain the singular value decomposition of (zI − A) for each point z in the grid, and then select the minimum singular value. However, this process is very expensive. A better approach applies the inverse power method to B = (zI − A)∗ (zI − A) to get the smallest singular value of (zI − A) for each z. A more sophisticated scheme uses the inverse Lanczos method applied to the matrix B = (zI − A)∗ (zI − A). At every iteration, this method requires the solution of linear systems with (zI − A)∗ and (zI − A); for this reason, one should first perform a Schur factorization of A in order to reduce the cost per iteration (see [13,3]). This approach is the core of the main computational package for computing pseudospectra, EigTool, developed by Wright and maintained by Embree [14]. Although this last approach is generally used in practice, it is not suitable for large matrices, and in most cases a projection method is recommended [15].
3. Projection method for pseudospectra computation
In the large-scale setting, methods like inverse iteration or inverse Lanczos combined with an initial Schur decomposition of A behave well for small or medium-size matrices, but can result in a high computational cost when A is large, and destroy
the sparsity pattern of the original matrix when it is sparse. In this case, it is worth using a projection onto a Krylov subspace of dimension m, with m ≪ n,
Km (A, x) = span{x, Ax, A2 x, . . . , Am−1 x}.
In [15], Toh and Trefethen proposed to use the Arnoldi iteration to obtain an Arnoldi factorization
AVm = Vm Hm + fm e∗m ,
(3)
or equivalently
AVm = Vm+1 H̄m ,
(4)
where Vm is an n × m matrix with orthonormal columns, which form a basis of Km (A, x), and H̄m is an (m + 1) × m upper Hessenberg matrix. The pseudospectra of A are then approximated by the pseudospectra of the smaller rectangular matrix H̄m , to which Definition 2 also applies (see [16]). The authors in [15] also proved that
Λϵ (H̄m ) ⊆ Λϵ (A).
(5)
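The factorization (4) can be produced by the basic Arnoldi process; the following is a minimal NumPy sketch (the function name and interface are ours):

```python
import numpy as np

def arnoldi(A, x, m):
    """Basic Arnoldi process: returns V (n x (m+1)) with orthonormal
    columns spanning K_{m+1}(A, x), and the (m+1) x m upper Hessenberg
    matrix Hbar of (4), so that A V[:, :m] = V Hbar."""
    n = A.shape[0]
    V = np.zeros((n, m + 1), dtype=complex)
    Hbar = np.zeros((m + 1, m), dtype=complex)
    V[:, 0] = x / np.linalg.norm(x)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            Hbar[i, j] = np.vdot(V[:, i], w)
            w = w - Hbar[i, j] * V[:, i]
        Hbar[j + 1, j] = np.linalg.norm(w)      # breakdown if this is zero
        V[:, j + 1] = w / Hbar[j + 1, j]
    return V, Hbar
```

The pseudospectra of the small rectangular matrix Hbar then approximate Λϵ (A) from inside, in the sense of (5).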
The basic Arnoldi process can be used to get the factorization in (4); however, the resulting pseudospectra of H̄m are generally a poor approximation of Λϵ (A). In this case, restarting strategies usually help. On the other hand, frequently one wants to know the behavior of a few eigenvalues in a particular part of the spectrum, and this fact should be considered in the restarting strategy. One of the most useful implementations of a restarted Arnoldi is the Implicitly Restarted Arnoldi Method (IRAM) proposed by Sorensen in [5,17]. Its implementations in Matlab (eigs) and Fortran (ARPACK [18]) were used by Wright and Trefethen in [19] to compute the pseudospectra of large and sparse matrices. The idea behind IRAM can be summarized in the following three steps.
1. Inflate a k-step Arnoldi factorization to an m-step Arnoldi factorization.
AVk = Vk Hk + fk e∗k → AVm = Vm Hm + fm e∗m .
(6)
2. Reorder the m-Arnoldi factorization in such a way that the desired eigenvalues appear in the leading submatrix of Hm .
AVm = Vm Hm + fm e∗m → AV̄m = V̄m H̄m + f̄m e∗m .
(7)
3. Deflate the reordered m-Arnoldi factorization to get a new k-Arnoldi factorization.
AV̄m = V̄m H̄m + f̄m e∗m → AV̄k = V̄k H̄k + f̄k e∗k .
(8)
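A simplified NumPy sketch of steps 2 and 3 (our own illustration of the implicit restart mechanism, not Sorensen's ARPACK implementation): the unwanted Ritz values are applied as exact shifts through shifted QR steps, and the updated factorization is then truncated back to size k.

```python
import numpy as np

def implicit_restart(V, H, f, k):
    """Compress an m-step Arnoldi factorization A V = V H + f e_m^* to a
    k-step one.  The m - k Ritz values of smallest magnitude are treated
    as 'unwanted' here (an arbitrary choice for this sketch; any
    selection criterion could be used) and applied as exact shifts."""
    m = H.shape[0]
    shifts = sorted(np.linalg.eigvals(H), key=abs)[:m - k]
    Q_acc = np.eye(m, dtype=complex)
    for mu in shifts:
        Q, R = np.linalg.qr(H - mu * np.eye(m))
        H = Q.conj().T @ H @ Q          # similar to H, still Hessenberg
        Q_acc = Q_acc @ Q
    V = V @ Q_acc
    # New residual: column k of the updated basis plus the old residual.
    f_new = V[:, k] * H[k, k - 1] + f * Q_acc[m - 1, k - 1]
    return V[:, :k], H[:k, :k], f_new
```

Given an m-step factorization AVm = Vm Hm + fm e∗m , the returned triple satisfies the same relation with m replaced by k, now biased toward the wanted part of the spectrum.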
Step 2 of this process is done by performing successive shifted QR iterations on Hm , using as shifts the p = m − k unwanted eigenvalues (exact shifts). This step produces a matrix H̄m which is similar to Hm but whose eigenvalues are ordered, keeping the wanted information in the leading submatrix of H̄m . Moreover, for each application of the shifted QR algorithm with a shift equal to µ (an unwanted eigenvalue), the resulting matrix H̄m = RQ + µI is also Hessenberg, since Q is upper Hessenberg and R is upper triangular. Thus, in step 3 a deflation can be defined by simply cutting the m-step factorization generated in step 2 appropriately, resulting in a new k-step Arnoldi factorization with the same characteristics as the initial one, but richer in information about the wanted eigenvalues. Currently, IRAM and more general approaches based on transfer function approximation, such as the one proposed by Simoncini and Gallopoulos [20], are widely used to compute pseudospectra in the large-scale setting. In particular, in this work we present and propose a block version of IRAM and analyze its performance in the computation of pseudospectra of some test matrices. The next section provides a description of this block implicitly restarted Arnoldi method, as well as its use in pseudospectra computation.
4. Block projection methods
The main contribution of this work is to present and evaluate a block Arnoldi method in the computation of pseudospectra. Specifically, we propose to use the extended version of the Block Implicitly Restarted Arnoldi method (BLIRAM) to compute pseudospectra of large matrices. BLIRAM was introduced by Castillo in [4] as a 1-block eigensolver, useful for embedding in continuation algorithms, and currently an extended k-block version is available upon request. Following the suggestion of Wright and Trefethen in [16], we conducted an evaluation of this block eigensolver, comparing it with the single-vector version IRAM in the computation of pseudospectra.
In order to introduce this block method, let us recall that the basic block Arnoldi iteration builds a block Arnoldi factorization of size m and block size b, such that
AV[m] = V[m] H[m] + Fm E∗m ,
(9)
or equivalently
AV[m] = V[m+1] H̄[m] ,
(10)
where
• V∗[m] AV[m] = H[m] is a block upper Hessenberg matrix in C(m·b)×(m·b) ;
• H̄[m] ∈ C((m+1)·b)×(m·b) ;
• V∗[m] V[m] = Im·b ;
• V∗[m] Fm = 0;
• the matrix V[m] = [V1 , V2 , . . . , Vm ] contains m blocks {Vi }mi=1 of size n × b, which form an orthonormal basis of the block Krylov subspace of dimension m:
Km (A, X ) = span{X , AX , A2 X , . . . , Am−1 X },
where X is an n × b matrix of full column rank;
• Em = [Zb , Zb , . . . , Zb , Ib ] with m − 1 copies of Zb , where Zb and Ib are the zero and identity matrices of order b, respectively.
Different authors [4,21,22] have discussed the advantages of block methods; some of them are in order:
• the ability to compute multiple or clustered eigenvalues more efficiently;
• better performance in large-scale cases, since block methods replace matrix–vector products with level-3 BLAS matrix–matrix multiplications.
However, there are also some drawbacks, such as the following.
• Loss of orthogonality between blocks.
• For a fixed size of the Krylov space, e.g. m × b, a block method will attempt to find the wanted eigenspace in a matrix polynomial space of lower degree than a single-vector method, which can lead to slower convergence.
In practice, to avoid or minimize the disadvantages of block methods, one chooses blocks of small size and applies reorthogonalization strategies, as in the single-vector case. After m steps, a block Arnoldi process has constructed m + 1 blocks V1 , V2 , . . . , Vm , Fm ∈ Cn×b , and a block upper Hessenberg matrix H[m] such that Eqs. (9) and (10) hold. For large matrices, the values of m and b are restricted, since m × b, the size of H[m] , should not be as large as n; for that reason, it is desirable to take small values of m and b in conjunction with a restarting strategy. The process that defines BLIRAM, like its counterpart IRAM, has three primary steps.
1. Inflate a block Arnoldi factorization of size k to a block Arnoldi factorization of size m (k < m).
2. Reorder the block Arnoldi factorization of size m such that the wanted eigenvalues appear in the leading (k × b) × (k × b) submatrix of H[m] .
3. Deflate the reordered block Arnoldi factorization of size m to get a new block Arnoldi factorization of size k.
The reordering process in step 2 is exactly as described previously for IRAM. In this case, the method performs
(m − k) × b shifted QR applications to the block Hessenberg matrix H[m] , choosing the shifts from the unwanted part of its
spectrum. It can be shown that the block Hessenberg structure of H[m] is maintained during the shifted QR process, since the matrix Q is block Hessenberg and R is triangular. Steps 1–3 are repeated until k × b eigenvalues of H[m] converge to the desired eigenvalues of A. In [4], an evaluation of the 1-block version of BLIRAM as an eigensolver was conducted, revealing its competitiveness with the single-vector version IRAM. In this work, we propose to use the extended k-block version to approximate pseudospectra of large matrices.
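The basic block Arnoldi iteration underlying step 1 can be sketched as follows (a NumPy illustration with our own naming; reorthogonalization, deflation of rank-deficient blocks, and the restart logic are omitted):

```python
import numpy as np

def block_arnoldi(A, X, m):
    """Block Arnoldi with block size b = X.shape[1]: returns W of shape
    n x (m+1)b with orthonormal columns, holding the blocks V1..V_{m+1},
    and the (m+1)b x mb block upper Hessenberg matrix Hbar of (10),
    so that A W[:, :m*b] = W Hbar."""
    n, b = X.shape
    W = np.zeros((n, (m + 1) * b), dtype=complex)
    Hbar = np.zeros(((m + 1) * b, m * b), dtype=complex)
    W[:, :b], _ = np.linalg.qr(X)                 # orthonormal starting block
    for j in range(m):
        U = A @ W[:, j * b:(j + 1) * b]
        for i in range(j + 1):                    # block Gram-Schmidt
            Vi = W[:, i * b:(i + 1) * b]
            Hij = Vi.conj().T @ U
            Hbar[i * b:(i + 1) * b, j * b:(j + 1) * b] = Hij
            U = U - Vi @ Hij
        Q, R = np.linalg.qr(U)                    # next block and H_{j+1,j}
        Hbar[(j + 1) * b:(j + 2) * b, j * b:(j + 1) * b] = R
        W[:, (j + 1) * b:(j + 2) * b] = Q
    return W, Hbar
```

Each matrix–vector product of the single-vector Arnoldi process becomes a matrix–matrix product with an n × b block, which is where the level-3 BLAS advantage mentioned above comes from.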
4.1. BLIRAM for pseudospectra calculations
In [16], Wright and Trefethen introduced the idea of using block Arnoldi methods to approximate the pseudospectra of large and sparse matrices. They also pointed out that the ϵ-pseudospectrum of the block upper Hessenberg rectangular matrix H̄[m] satisfies
Λϵ (H̄[m] ) ⊆ Λϵ (A).
Thus, in this work we use BLIRAM to get the rectangular matrix H̄[m] , and then approximate the pseudospectra of A by the pseudospectra of H̄[m] . Following the direction of [19], and to reduce the computational cost when computing the pseudospectra of the rectangular matrix H̄[m] , we perform a QR factorization of (zI − H̄[m] ) for each point z of the region of interest, and then apply the inverse Lanczos method to R∗ R. Algorithm 1 outlines the whole process. The following section describes some numerical experiments that we conducted to analyze the performance of this approach.
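The per-grid-point computation (QR factorization of zI − H̄[m] , then an eigen-iteration on R∗ R) can be sketched as follows. This is our own illustration: plain inverse iteration, driven by two triangular solves per step, stands in for the inverse Lanczos method used in the paper.

```python
import numpy as np
from scipy.linalg import solve_triangular

def sigma_min(Hbar, z, iters=100, seed=0):
    """Approximate s_min(z*I - Hbar) for a rectangular (p+1) x p matrix
    Hbar: QR-factor (z*I - Hbar) once, then run inverse iteration on
    R^* R using two triangular solves per step."""
    p1, p = Hbar.shape
    Z = z * np.eye(p1, p) - Hbar            # rectangular shifted matrix
    R = np.linalg.qr(Z, mode='r')           # p x p upper triangular factor
    v = np.random.default_rng(seed).standard_normal(p).astype(complex)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        y = solve_triangular(R, v, trans='C')   # solve R^* y = v
        w = solve_triangular(R, y)              # solve R w = y
        lam = np.linalg.norm(w)     # -> largest eigenvalue of (R^* R)^{-1}
        v = w / lam
    return 1.0 / np.sqrt(lam)       # s_min = 1 / sqrt(lambda_max)
```

The one-time QR factorization makes each iteration cost only two triangular solves, mirroring the cost-reduction argument above.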
Algorithm 1 BLIRAM for pseudospectra calculations
1: Select block size b, and restarting parameters k, m
2: Call BLIRAM(k, m, b) to obtain the matrix H̄[m]
3: Define a grid over a region K
4: for each grid point z ∈ K do
5:    Compute the QR factorization of zI − H̄[m]
6:    Get λmax (z) using inverse Lanczos on R∗ R
7:    σmin (z) = 1/√λmax (z)
8: end for

Table 1 Parameter description.
Method | Parameters
IRAM(k, m) | k = number of wanted eigenvalues; m = size of the restart
BLIRAM(k, m, b) | b = block size; k = number of wanted eigenvalues (k × b); m = size of the restart (m × b)

(a) Inverse Lanczos. (b) IRAM. (c) BLIRAM.
Fig. 1. Pseudospectrum of rdb800l for ϵ = 10−1 , 10−1.2 , . . . , 10−2.4 . Approaches based on IRAM and BLIRAM produce the same relevant information generated by inverse Lanczos, but in much less time.
5. Numerical experiments
In order to assess how useful projection methods are for approximating pseudospectra, in this section we present a comparison between IRAM and BLIRAM when they are used for this kind of computation. We select matrices from the literature to show the practical behavior of both approaches. Tests were performed using Matlab 7.8 on an AMD 2.1 GHz X2, with GNU/Debian Linux as the operating system.
Table 1 shows the input parameters for IRAM and BLIRAM. For both methods, the maximum number of iterations was set to 1000, and the stopping criterion was based on the convergence of Ritz values, set as ∥f ∥2 ∥D∥2 < 1e−14 for IRAM and ∥F ∥F ∥D∥F < 1e−14 for BLIRAM, where D is a diagonal matrix with the Ritz values on its diagonal, and ∥ · ∥F is the Frobenius norm. The parameter wh, specifying the desired eigenvalues, is set as ‘LM’ or ‘SM’ for the eigenvalues of largest or smallest magnitude, and as ‘LR’ or ‘SR’ for the eigenvalues of largest or smallest real part. All figures showing the pseudospectra computed by inverse Lanczos were generated using EigTool [14] with default values.
In the first experiment, we are interested in the part of the pseudospectrum where the rightmost eigenvalues of the Jacobian matrix of order 800 (rdb800l), taken from the Matrix Market repository [23], lie. This matrix is mildly non-normal and arises from the discretization of a chemical reaction–diffusion Brusselator model [24]. Fig. 1 shows the pseudospectra of this matrix in the region [−1.1, 1.2] × [−0.2, 2.5] computed by the inverse Lanczos method and the approximations generated by IRAM and BLIRAM. It can be noted that IRAM and BLIRAM provide a good approximation of the pseudospectra of the matrix A = rdb800l. Moreover, in the same figure we can appreciate that IRAM(90,155) and BLIRAM(15,17,3) produced useful information in less time than the inverse Lanczos iteration.
In the second experiment, we consider a highly non-normal matrix, the Grcar matrix [25] of dimension 400, and focus on the eigenvalues of largest modulus, lying in the region [−0.5, 2.5] × [1.5, 4]. As we can see in Fig. 2, both IRAM
(a) Inverse Lanczos.
(b) IRAM.
(c) BLIRAM.
Fig. 2. Pseudospectra of Grcar (400), for ϵ = 10−1 , 10−2 , . . . , 10−11 .
(a) ≈ Λϵ (A) by IRAM.
(b) ≈ Λϵ (A) by BLIRAM.
Fig. 3. A = gre_1107 and ϵ = 10−1 . Both, IRAM and BLIRAM generate accurate information to produce a good approximation of Λϵ (A).
(a) ≈ Λϵ (A) by IRAM.
(b) ≈ Λϵ (A) by BLIRAM.
Fig. 4. A = gre_1107 and ϵ = 10−2 . Both, IRAM and BLIRAM fail to produce a good approximation for pseudospectra of higher level.
(45, 51) and BLIRAM(15, 17, 3) fail to produce good approximations of the whole pseudospectra, due to the high non-normality of this matrix. Even so, we still note the same behavior shown in the previous test, i.e., higher levels of the pseudospectra are better approximated than lower levels, which characterizes most Krylov projection methods. To illustrate this typical behavior of Krylov projection methods further, we consider the matrix gre_1107, from the Harwell–Boeing collection [23], and seek the eigenvalues of smallest real part in the region [−1, 1.5] × [0, 1]. Fig. 3 shows the ϵ = 10−1 pseudospectrum of this matrix and its approximations using IRAM(70,150) and BLIRAM(50, 90, 3), while Fig. 4 contains results for ϵ = 10−2 . One can note that both methods generate a better approximation for ϵ = 10−1 than for ϵ = 10−2 .
[Figure 5 appears here; panels: Inverse Lanczos, IRAM LM, IRAM LR, IRAM SR.]
Fig. 5. ϵ-Pseudospectra, ϵ = 10−1 , 10−2 , 10−3 , of the pentadiagonal Toeplitz matrix, and its approximations using IRAM with different criteria for the wanted eigenvalues (wh = ‘LM’, ‘LR’, ‘SR’). The symbol ‘+’ represents the Ritz values.
Another feature of Krylov methods is the strong dependence on the Ritz values. Let us consider the pentadiagonal Toeplitz matrix,2 of order 200, and its pseudospectra in the region [−2.5, 2.5] × [−2.5, 2.5]. As we can appreciate in Figs. 5 and 6, IRAM and BLIRAM behave in a similar way, producing a more accurate approximation in the region of converged Ritz values. This suggests that the approximation of pseudospectra using Krylov projection methods is locally acceptable in the regions of convergence of the Ritz values.
It is important to note that, in order to compare both approaches, we conducted many more experiments in a systematic manner, over matrices from the literature, and in most of these experiments the approaches using IRAM and BLIRAM produced similar figures for the pseudospectra, but BLIRAM took less time. This is not conclusive, however; for instance, looking for the rightmost eigenvalues of the matrix A = BWM2000 [26], and taking k = 10, m = 50 and b = 2 for BLIRAM, and k = 20 and m = 100 for IRAM, we observe that BLIRAM took 147.473 s while IRAM produced the same figure in 82.0191 s; see Fig. 7. Nevertheless, in the next experiment, with a large and sparse matrix, the effectiveness of BLIRAM is clear.
In this last experiment, we are interested in the rightmost eigenvalues of the random matrix of order 200 000, presented in [19], which is generated by the following Matlab code:
N = 200000;
A = spdiags([3*exp(-(0:N-1)'/10) .5*ones(N,1)], 0:1, N, N) + ...
    1*sprandn(N, N, 10/N);
(11)
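A Python analogue of this construction may be useful to readers working with SciPy (a sketch; the function name and the use of scipy.sparse.random to emulate Matlab's sprandn are our choices):

```python
import numpy as np
import scipy.sparse as sp

def build_test_matrix(N, seed=0):
    """Sketch of the construction in (11): a bidiagonal part with a
    decaying main diagonal and constant superdiagonal, plus a sparse
    random perturbation with about 10 nonzeros per row."""
    rng = np.random.default_rng(seed)
    diag0 = 3.0 * np.exp(-np.arange(N) / 10.0)   # main diagonal
    diag1 = 0.5 * np.ones(N - 1)                 # first superdiagonal
    B = sp.diags([diag0, diag1], [0, 1], format='csr')
    # Emulates sprandn(N, N, 10/N): normal values on a random pattern.
    P = sp.random(N, N, density=10.0 / N, format='csr',
                  random_state=np.random.default_rng(seed),
                  data_rvs=rng.standard_normal)
    return B + P
```

With N = 200000 this stays comfortably sparse (roughly 2.4 million nonzeros), which is what makes the projection approach feasible at this scale.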
Fig. 8 shows the pseudospectra in the region [0.5, 3.5] × [−0.6, 0.6] produced by IRAM(30, 60) and BLIRAM(10, 20, 3). In [19], Wright and Trefethen reported that IRAM took about 26 h to produce a similar figure; in our computational environment, IRAM required 130 583.43 s (about 36 h). Curiously, the same figure was generated using BLIRAM in just 77.4142 s, which is a considerable improvement.
6. Conclusions and final remarks
This work presented a new approach for the computation of pseudospectra of large matrices. Specifically, we propose to use BLIRAM, a block version of the Implicitly Restarted Arnoldi Method (IRAM), and approximate the pseudospectra
2 Produced by the Matlab command A = gallery(‘toeppen’, 200, 0, 1/2, 0, 0, 1).
[Figure 6 appears here; panels: Inverse Lanczos, BLIRAM LM, BLIRAM LR, BLIRAM SR.]
Fig. 6. ϵ-Pseudospectra, ϵ = 10−1 , 10−2 , 10−3 , of the pentadiagonal Toeplitz matrix, and its approximations using BLIRAM with different criteria for the wanted eigenvalues (wh = ‘LM’, ‘LR’, ‘SR’). The symbol ‘+’ represents the Ritz values.
(a) ≈ Λϵ (A) by IRAM.
(b) ≈ Λϵ (A) by BLIRAM. Fig. 7. ϵ -Pseudospectra of A = BWM2000, ϵ = 10−0.2 , 10−0.4 , . . . , 10−1.6 .
(a) ≈ Λϵ (A) by IRAM.
(b) ≈ Λϵ (A) by BLIRAM. Fig. 8. ϵ -Pseudospectra of A defined in (11), ϵ = 10−1 , 10−1.5 , . . . , 10−4.5 .
of a matrix A by the pseudospectra of H̄[m] , a projection of A onto a block Krylov subspace generated by BLIRAM. This approach results in a convenient low-cost strategy. An extensive numerical experimentation using different kinds of matrices was done, and we included in this document a representative sample of these experiments. A comparative analysis with current approaches establishes the superiority of Krylov projection methods in the computation of particular portions of the
pseudospectra. In general, the results are promising and suggest the application of this approach in the large-scale setting. Some research lines in this direction, to be pursued in the near future, are the following.
• Comparison with other block Arnoldi schemes, like those proposed by Baglama [21], Möller [22], and Yin and Lu [27].
• Computation of pseudospectra in real applications, using the proposed approach.
• Implementation on high-performance architectures.

Acknowledgments
We would like to thank the referees for their comments and suggestions to improve the manuscript.

References
[1] R. Grone, C.R. Johnson, E.M. Sá, H. Wolkowicz, Normal matrices, Linear Algebra and its Applications 87 (1987) 213–225.
[2] L. Elsner, K.D. Ikramov, Normal matrices: an update, Linear Algebra and its Applications 285 (1998) 291–303.
[3] L.N. Trefethen, M. Embree, Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators, Princeton University Press, New Jersey, USA, 2005.
[4] Z. Castillo, A new algorithm for continuation and bifurcation analysis of large scale free surface flows, Ph.D. Thesis, Rice University, ISBN: 9729961506, 2004.
[5] D. Sorensen, Implicit application of polynomial filters in a k-step Arnoldi method, SIAM Journal on Matrix Analysis and Applications 13 (1) (1992) 357–385.
[6] S.K. Godunov, V.S. Ryabenkii, Theory of Difference Schemes: An Introduction, North-Holland, Amsterdam, 1964.
[7] J. Varah, The computation of bounds for the invariant subspaces of a general matrix operator, Technical Report CS 66, Computer Science Department, Stanford University, 1967.
[8] H. Landau, Computation of pseudospectra, Journal d'Analyse Mathematique 98 (98) (1975) 243–251.
[9] V. Kostin, I. Razzakov, On convergence of the power orthogonal method of spectrum computing, Report, Inst. Math. Sib. Branch Acad. Sci., 1985.
[10] L.N. Trefethen, Pseudospectra of matrices, in: D.F. Griffiths, G.A. Watson (Eds.), Numerical Analysis 1991: Proceedings of the 14th Dundee Conference, vol. 260, Longman Scientific and Technical, Harlow, Essex, 1991, pp. 234–266.
[11] T. Braconnier, Complete iterative method for computing pseudospectra, Technical Report TR/PA/97/13, CERFACS, Toulouse, France, 1997.
[12] I. Koutis, E. Gallopoulos, Exclusion regions and fast estimation of pseudospectra, manuscript, 2002.
[13] S. Lui, Computation of pseudospectra by continuation, SIAM Journal on Scientific Computing 18 (2) (1997) 565–573.
[14] T.G. Wright, EigTool, 2002. http://www.comlab.ox.ac.uk/pseudospectra/eigtool/.
[15] K. Toh, L.N. Trefethen, Calculation of pseudospectra by the Arnoldi iteration, SIAM Journal on Scientific Computing 17 (1) (1996) 1–15.
[16] T.G. Wright, L.N. Trefethen, Pseudospectra of rectangular matrices, IMA Journal of Numerical Analysis 22 (4) (2002) 501–519.
[17] D. Sorensen, Implicitly restarted Arnoldi/Lanczos methods for large scale eigenvalue calculations, Tech. Rep. TR-96-40, Rice University, 1996.
[18] R. Lehoucq, D. Sorensen, C. Yang, ARPACK Users' Guide: Solution of Large Scale Eigenvalue Problems by Implicitly Restarted Arnoldi Methods, SIAM, Philadelphia, 1998.
[19] T.G. Wright, L.N. Trefethen, Large-scale computation of pseudospectra using ARPACK and eigs, SIAM Journal on Scientific Computing 23 (2) (2002) 591–605.
[20] V. Simoncini, E. Gallopoulos, Transfer functions and resolvent norm approximation of large matrices, Electronic Transactions on Numerical Analysis 7 (1998) 190–201.
[21] J. Baglama, Augmented block Householder Arnoldi method, Linear Algebra and its Applications 429 (10) (2008) 2315–2334.
[22] J. Möller, Implementations of the implicitly restarted block Arnoldi method, Numerical Analysis Report TRITA-NA-0446, Dept. of Numerical Analysis and Computer Science, Royal Institute of Technology, KTH, 2004.
[23] I. Duff, R.G. Grimes, J.G. Lewis, Users' guide for the Harwell–Boeing sparse matrix collection (Release I), Tech. Rep. TR/PA/92/86, CERFACS, 1992.
[24] B. Hassard, N. Kazarinoff, Y. Wan, Theory and Applications of Hopf Bifurcation, Cambridge Univ. Press, Cambridge, 1981.
[25] J. Grcar, Operator coefficient methods for linear equations, Report SAND89-8691, Sandia National Laboratory, 1989.
[26] Waves in distributed chemical systems: experiments and computations, in: Proceedings of the Asilomar Conference Grounds, SIAM, 1980.
[27] Q. Yin, L. Lu, An implicitly restarted block Arnoldi method in a vector-wise fashion, Numerical Mathematics 15 (3) (2006) 268–277.