Applied Numerical Mathematics 38 (2001) 361–376 www.elsevier.com/locate/apnum
Iterative algorithms for nonlinear ordinary differential eigenvalue problems

W. Sun*, K.M. Liu¹
Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong, People's Republic of China

Abstract

A Newton-type iterative algorithm is developed for solving a class of nonlinear eigenvalue problems. The algorithm is based on solving an implicitly defined algebraic equation β(λ) = 0. We show that the β(λ) in our algorithm is analytic in the area of interest and can be evaluated by solving a block bi-diagonal system. The Argument Principle is also employed to determine the eigenvalue distribution. Numerical results for both linear and nonlinear problems are given. © 2001 IMACS. Published by Elsevier Science B.V. All rights reserved.

Keywords: Nonlinear eigenvalue calculation; Newton's iteration; Argument Principle
* Corresponding author. The work of this author was supported in part by a grant from the RGC of Hong Kong (CityU 1061/00P). E-mail address: [email protected] (W. Sun).
¹ The work of this author was supported by the City University of Hong Kong Grant 7001093.

0168-9274/01/$ – see front matter © 2001 IMACS. Published by Elsevier Science B.V. All rights reserved. PII: S0168-9274(01)00043-5

1. Introduction

We consider the following differential eigenvalue problem,

$$Lu := \Big(D^m + \sum_{l=0}^{m-1} f_l(\lambda, x)\,D^l\Big)u = 0, \quad x \in [0, 1], \qquad b(u) = 0, \tag{1.1}$$

which arises from a wide range of physical problems, such as hydrodynamic stability calculations [4,17,18] and chemical reactions [8]. In (1.1), $D^l$ denotes the $l$th-order differential operator, $b$ a boundary operator, and $f_l(\lambda, x)$, $l = 0, 1, \ldots, m-1$, is continuous in $\mathbb{C} \times [0, 1]$ and satisfies

$$\big|f_l(\lambda, x)\big| \leqslant c_1 + c_2|\lambda|^{d_{f_l}}, \tag{1.2}$$

where $c_1$, $c_2$ and $d_{f_l}$ are positive constants. When $f_l(\lambda, x)$ is a polynomial in $\lambda$, $d_{f_l}$ is the degree of $f_l(\lambda, x)$. Numerical discretization of differential eigenvalue problems has been extensively studied, e.g.,
see [3–5,10]. For the consideration of accuracy, spectral/tau methods and spline collocation methods are popular for the discretization [7,10,13,23]. A nonlinear matrix eigenvalue problem $T(\lambda)v = 0$ arises when a discrete method is applied to (1.1), where $T(\lambda)$ is an $n \times n$ matrix function. Since physically only some eigenvalues are of interest, usually those whose real parts have smallest modulus, Newton-type methods are very efficient due to their high-order convergence. These methods have been investigated for nonlinear matrix eigenvalue problems by many authors, such as Osborne and Michaelson [18], Lancaster [12], Ruhe [21], Osborne [19] and Neumaier [16]. Most algorithms for this nonlinear eigenvalue problem are based on solving the algebraic equation

$$\beta(\lambda) = 0, \tag{1.3}$$

which is given implicitly and evaluated by solving some linear systems. The form of $\beta$ can be quite complicated and is of significant importance in developing efficient iterative algorithms. A large class of Newton-type algorithms was analyzed in [1,19], where $\beta(\lambda)$ is defined by

$$T(\lambda)v(\lambda) = \beta(\lambda)z, \qquad s^H v(\lambda) = \text{constant}, \tag{1.4}$$

for fixed vectors $z, s \in \mathbb{C}^N$. Many other iterative algorithms, such as the Rayleigh quotient, the shooting method and inverse iteration, can be obtained by different choices of $s$ and $z$. The effectiveness of the method is also highly dependent on the choice of the vectors $s$ and $z$. A disadvantage of such a class of Newton-type methods is that $\beta$ is often a rational function, so without a careful choice of $z$ and $s$ some poles can be very close to its zeros, particularly for those eigenvalues of interest. An example was presented in [19], where $s = e_1$, $z = e_1$ and

$$\beta(\lambda) = \frac{\det(T)}{\det(T_{11})}, \tag{1.5}$$

where $T_{11}$ is the first $(n-1) \times (n-1)$ principal submatrix of $T$. The poles of $\beta(\lambda)$ are the zeros of $\det(T_{11})$. When a differential eigenvalue problem is considered, for the first several eigenvalues $\lambda_i$ of $T$ there exist poles $\tau_i$ such that

$$|\lambda_i - \tau_i| = \mathrm{O}\big(h^{\varepsilon}\big), \tag{1.6}$$

where $\varepsilon > 0$ depends upon the accuracy of the discrete method and $h$ is its mesh size. Newton's iteration is less efficient for such a $\beta$, and choosing a suitable initial guess becomes more difficult. Lancaster [12] presented a Newton-type iteration in which $\beta(\lambda) \equiv \det(T(\lambda))$. The iterative algorithm is based on the classical identity (e.g., see Davidenko [6])

$$\frac{\mathrm{d}}{\mathrm{d}\lambda}\det T(\lambda) = \det(T)\,\mathrm{tr}\big(T^{-1}T_\lambda\big), \tag{1.7}$$

where $\mathrm{tr}(T)$ denotes the trace of $T$ and $T_\lambda = \mathrm{d}T/\mathrm{d}\lambda$. The Newton-type iteration is then

$$\lambda_1 = \lambda_0 - \frac{1}{\mathrm{tr}(T^{-1}T_\lambda)}. \tag{1.8}$$

The $\beta(\lambda)$ in Lancaster's algorithm is the characteristic polynomial of $T(\lambda)$. It is efficient for some matrices of moderate size. However, for a large sparse system this algorithm is much more expensive than others, since the inverse of $T$ must be computed at each iterative step.
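As an illustration, Lancaster's iteration (1.7)–(1.8) takes only a few lines; the quadratic lambda-matrix below is illustrative data, not a discretization from this paper.

```python
import numpy as np

def lancaster_step(T, Tlam, lam):
    # One Newton step (1.8): lam - 1 / tr(T(lam)^{-1} T'(lam))
    return lam - 1.0 / np.trace(np.linalg.solve(T(lam), Tlam(lam)))

# Toy quadratic lambda-matrix T(lam) = lam^2 M + lam C + K (illustrative data).
M = np.eye(2)
C = np.array([[0.0, 1.0], [1.0, 0.0]])
K = np.array([[-2.0, 0.0], [0.0, -3.0]])
T = lambda lam: lam**2 * M + lam * C + K
Tlam = lambda lam: 2.0 * lam * M + C

lam = 1.1  # initial guess near a zero of det(T)
for _ in range(30):
    new = lancaster_step(T, Tlam, lam)
    done = abs(new - lam) < 1e-13
    lam = new
    if done:
        break
# At convergence, lam is a zero of beta(lam) = det(T(lam)).
```

Note that each step requires a full linear solve with T(λ), which is exactly the cost objection raised above for large sparse systems.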
In this paper, we are concerned with a large class of differential eigenvalue problems. The discretization involved in many numerical methods results in a system which is essentially block bi-diagonal, there being an additional non-zero block in the upper right corner. We present an iterative algorithm for solving the matrix eigenvalue problem for this type of system. In our algorithm, $\beta(\lambda)$ is still a rational function. We present an estimate which shows that the poles are $\mathrm{O}(h^{-\sigma})$, $\sigma > 0$. When $h$ is small, the poles are far away from the eigenvalues of interest and $\beta(\lambda)$ is smooth and analytic in this area. Based on this estimate, the distribution of eigenvalues can be determined by using the Argument Principle. Finally, the algorithm is tested on two examples.

2. An iterative algorithm

For simplicity, we start with a nonlinear matrix eigenvalue problem having an augmented block bi-diagonal structure, which can be obtained from the nonlinear differential eigenvalue problem (1.1) by using some classical approximations. We shall give the discretization of (1.1) with finite difference and spline collocation methods in Section 3. For details and other approximations, see [2,3,8]. Here, we consider a matrix eigenvalue problem in the form

$$\big(\overline{T} + \widetilde{T}(\lambda)\big)v = 0. \tag{2.1}$$

In (2.1),

$$\overline{T} = \begin{pmatrix} \overline{A}_1 & 0 & \cdots & 0 & \overline{B}_1 \\ \overline{B}_2 & \overline{A}_2 & 0 & \cdots & 0 \\ & \ddots & \ddots & \ddots & \vdots \\ 0 & & & \overline{B}_N & \overline{A}_N \end{pmatrix} \tag{2.2}$$

is a constant augmented block bi-diagonal matrix corresponding to the discretization of $D^m$, where $\overline{A}_i$ and $\overline{B}_i$ are $k \times k$ constant blocks. $\widetilde{T}(\lambda)$ is a matrix function corresponding to the discretization of $\sum_{l=0}^{m-1} f_l(\lambda, x)D^l$ and has the same structure as $\overline{T}$; we denote its entries by $\widetilde{A}_i$ and $\widetilde{B}_i$. Let $T(\lambda) = \overline{T} + \widetilde{T}(\lambda)$, and let $A_i$ and $B_i$ denote the corresponding entries of $T$. A Newton-type iterative algorithm for solving the nonlinear eigenvalue problem (1.1) is given below. Let $V_i$, $i = 1, 2, \ldots, N$, be in $\mathbb{C}^{k \times k}$ and satisfy the following system:
$$T(\lambda)\begin{pmatrix} V_1 \\ V_2 \\ \vdots \\ V_N \end{pmatrix} = \begin{pmatrix} A_1 V_1 + B_1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \tag{2.3}$$

$$V_N = I. \tag{2.4}$$

The important points to note are that $V_1$ is implicitly a function of $\lambda$ and can be obtained by the above recurrence, and that the zeros of $\det(T(\lambda))$ are precisely the zeros of $\det(A_1 V_1 + B_1)$. This will be shown in the next section.
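In code, the back substitution for $V_1$ implied by (2.3)–(2.4) is a short loop. The random blocks below are illustrative (the $B_i$ are shifted to keep them safely nonsingular), and the product at the end is the compact expression to which the recurrence unrolls.

```python
import numpy as np

rng = np.random.default_rng(0)
k, N = 2, 5

# Illustrative k-by-k blocks A_i, B_i, i = 1..N (B_i kept nonsingular).
A = [rng.standard_normal((k, k)) for _ in range(N)]
B = [rng.standard_normal((k, k)) + 3.0 * np.eye(k) for _ in range(N)]

# Rows 2..N of (2.3) read B_i V_{i-1} + A_i V_i = 0; with V_N = I from (2.4)
# this is the back substitution V_{i-1} = -B_i^{-1} A_i V_i.
V = [None] * N
V[N - 1] = np.eye(k)
for i in range(N - 1, 0, -1):          # 0-based indices for block rows 2..N
    V[i - 1] = -np.linalg.solve(B[i], A[i] @ V[i])

# Unrolling the recurrence gives the product form
# V_1 = (-1)^(N+1) B_2^{-1} A_2 ... B_N^{-1} A_N.
V1_prod = np.eye(k)
for i in range(1, N):
    V1_prod = V1_prod @ np.linalg.solve(B[i], A[i])
V1_prod *= (-1) ** (N + 1)
```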
The idea of the algorithm is to differentiate

$$\beta(\lambda) = \det(A_1 V_1 + B_1) \tag{2.5}$$

according to (1.7) to get

$$\frac{\mathrm{d}\beta(\lambda)}{\mathrm{d}\lambda} = \beta(\lambda)\,\mathrm{tr}\big((A_1 V_1 + B_1)^{-1}(A_1 V_1 + B_1)_\lambda\big). \tag{2.6}$$

The iteration is then given by

$$\lambda_1 = \lambda_0 - \frac{1}{\mathrm{tr}\big((A_1 V_1 + B_1)^{-1}(A_1 V_1 + B_1)_\lambda\big)}. \tag{2.7}$$

Differentiating (2.3) yields

$$T V_\lambda + T_\lambda V = \begin{pmatrix} (A_1 V_1 + B_1)_\lambda \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \tag{2.8}$$

which implies

$$T(\lambda)V_\lambda = \begin{pmatrix} A_1 V_{1\lambda} \\ -A_{2\lambda}V_2 - B_{2\lambda}V_1 \\ \vdots \\ -A_{N\lambda}V_N - B_{N\lambda}V_{N-1} \end{pmatrix}. \tag{2.9}$$
$V_{1\lambda}$ can be obtained by solving the above equation, and then $(A_1 V_1 + B_1)_\lambda$ can be calculated. It should be noted that determining $A_1 V_1 + B_1$ and $(A_1 V_1 + B_1)_\lambda$ in this manner involves the recursive back substitution defined in (2.3) and (2.9), which may be unstable. The recursion can be obviated in two ways by directly solving modified systems. In the first way, simply note that the systems in (2.3) and (2.9) can be truncated to genuine bi-diagonal systems

$$\begin{pmatrix} B_2 & A_2 & 0 & \cdots & 0 \\ 0 & B_3 & A_3 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ & & & B_{N-1} & A_{N-1} \\ 0 & \cdots & & 0 & B_N \end{pmatrix}\begin{pmatrix} V_1 \\ V_2 \\ \vdots \\ V_{N-2} \\ V_{N-1} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ -A_N \end{pmatrix} \tag{2.10}$$
and

$$\begin{pmatrix} B_2 & A_2 & 0 & \cdots & 0 \\ 0 & B_3 & A_3 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & 0 \\ & & & B_{N-1} & A_{N-1} \\ 0 & \cdots & & 0 & B_N \end{pmatrix}\begin{pmatrix} V_{1\lambda} \\ V_{2\lambda} \\ \vdots \\ V_{N-2,\lambda} \\ V_{N-1,\lambda} \end{pmatrix} = \begin{pmatrix} -A_{2\lambda}V_2 - B_{2\lambda}V_1 \\ -A_{3\lambda}V_3 - B_{3\lambda}V_2 \\ \vdots \\ -A_{N-1,\lambda}V_{N-1} - B_{N-1,\lambda}V_{N-2} \\ -A_{N\lambda} - B_{N\lambda}V_{N-1} \end{pmatrix}. \tag{2.11}$$
Alternatively, we can avail ourselves of good routines to solve

$$T(\lambda)W = \begin{pmatrix} I \\ 0 \\ \vdots \\ 0 \end{pmatrix}. \tag{2.12}$$

By multiplying the equation in (2.12) on the right by $A_1 V_1 + B_1$, it follows that $W$ and $V$ are related by $V = W(A_1 V_1 + B_1)$, whence

$$(A_1 V_1 + B_1) = W_N^{-1}. \tag{2.13}$$

Differentiating (2.13) then gives

$$(A_1 V_1 + B_1)_\lambda = -W_N^{-1}W_{N\lambda}W_N^{-1}, \tag{2.14}$$

which may be used in (2.7). Similarly, $W_\lambda$ can be calculated by solving

$$T(\lambda)W_\lambda = -T_\lambda W. \tag{2.15}$$

In each iterative step, one only needs to solve the systems (2.12) and (2.15) (alternatively, (2.10) and (2.11)) to calculate

$$\mathrm{tr}\big((A_1 V_1 + B_1)^{-1}(A_1 V_1 + B_1)_\lambda\big) = -\,\mathrm{tr}\big(W_{N\lambda}W_N^{-1}\big). \tag{2.16}$$

Since $W_N$ is a $k \times k$ matrix and usually $k$ is small compared with the number of mesh points, it is easy to calculate its inverse. The system (2.12) has a so-called block bi-diagonal structure. Several codes for solving such systems have been developed in [8,9,24].
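As a concrete sketch of one cycle of this procedure, the code below applies the steps (2.12), (2.15) and (2.16) to the model problem $-u'' = \lambda u$, $u(0) = u(1) = 0$, discretized by central differences so that the blocks are scalars ($k = 1$); the mesh size and initial guess are illustrative.

```python
import numpy as np

# Newton iteration via (2.12), (2.15), (2.16) for -u'' = lambda*u,
# u(0) = u(1) = 0, central differences, scalar blocks (k = 1).
# The exact eigenvalues of the ODE problem are (j*pi)^2.
N = 100                        # number of interior mesh points (illustrative)
h = 1.0 / (N + 1)

def T(lam):
    # T(lam) = tridiag(-1/h^2, 2/h^2 - lam, -1/h^2)
    return (np.diag(np.full(N, 2.0 / h**2 - lam))
            + np.diag(np.full(N - 1, -1.0 / h**2), 1)
            + np.diag(np.full(N - 1, -1.0 / h**2), -1))

Tlam = -np.eye(N)              # dT/dlambda

e1 = np.zeros(N)
e1[0] = 1.0                    # right-hand side (I, 0, ..., 0)^T with k = 1

lam = 9.0                      # initial guess near pi^2
for _ in range(50):
    w = np.linalg.solve(T(lam), e1)               # (2.12): T(lam) W = e1
    w_lam = np.linalg.solve(T(lam), -Tlam @ w)    # (2.15): T(lam) W_lam = -T_lam W
    # By (2.16) the trace equals -W_{N,lam}/W_N for k = 1, so the Newton
    # update (2.7) becomes lam + W_N / W_{N,lam}.
    step = w[-1] / w_lam[-1]
    lam += step
    if abs(step) < 1e-10:
        break
```

The iterate converges to the discrete eigenvalue nearest the initial guess, which for this mesh is within O(h²) of π².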
3. Analysis

In Section 2, we presented an iterative algorithm for solving a class of nonlinear eigenvalue problems. At first glance, the algorithm appears to be a block form of the previous algorithm (1.4). However, a key difference lies in the smoothness of $\beta(\lambda)$.
3.1. Zeros and poles

The algorithm in (2.7) is a Newton-type algorithm for solving the algebraic equation (2.5). The effectiveness of the algorithm is highly dependent on the smoothness of $\beta(\lambda)$ and the distribution of poles and zeros. An important feature of the zero distribution is given in the following theorem.

Theorem 3.1.

$$\det(A_1 V_1 + B_1) = (-1)^{k^2(N-1)}\,\frac{\det(T(\lambda))}{\prod_{i=2}^{N}\det(B_i)}. \tag{3.1}$$

Proof. Since

$$\det(T) = (-1)^{k^2(N-1)}\det\begin{pmatrix} B_2 & A_2 & 0 & \cdots & 0 \\ 0 & B_3 & A_3 & & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ 0 & & & B_N & A_N \\ A_1 & 0 & \cdots & 0 & B_1 \end{pmatrix}, \tag{3.2}$$

using block column Gaussian elimination gives

$$\det(T) = (-1)^{k^2(N-1)}\det\begin{pmatrix} B_2 & 0 & \cdots & 0 & 0 \\ 0 & B_3 & & & \vdots \\ \vdots & & \ddots & & 0 \\ 0 & & & B_N & 0 \\ A_1 & -A_1B_2^{-1}A_2 & \cdots & & B_1 - (-1)^N A_1B_2^{-1}A_2\cdots B_N^{-1}A_N \end{pmatrix}, \tag{3.3}$$

which implies

$$\det(T) = (-1)^{k^2(N-1)}\det\big(B_1 - (-1)^N A_1B_2^{-1}A_2\cdots B_N^{-1}A_N\big)\prod_{i=2}^{N}\det(B_i). \tag{3.4}$$

A compact form of $V_1$ can be obtained by solving (2.3),

$$V_1 = (-1)^{N+1}B_2^{-1}A_2\cdots B_N^{-1}A_N. \tag{3.5}$$

Then (3.1) follows immediately. ✷
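Theorem 3.1 is easy to check numerically. The snippet below assembles the augmented block bi-diagonal matrix of (2.2) from random blocks (illustrative data, with the $B_i$ shifted to be safely nonsingular), runs the back substitution of (2.3)–(2.4), and compares both sides of (3.1).

```python
import numpy as np

rng = np.random.default_rng(1)
k, N = 2, 4

# Illustrative k-by-k blocks; B_i shifted to be safely nonsingular.
A = [rng.standard_normal((k, k)) for _ in range(N)]
B = [rng.standard_normal((k, k)) + 3.0 * np.eye(k) for _ in range(N)]

# Assemble (2.2): A_i on the diagonal, B_i on the sub-diagonal,
# and B_1 in the upper right corner.
T = np.zeros((k * N, k * N))
for i in range(N):
    T[i*k:(i+1)*k, i*k:(i+1)*k] = A[i]
for i in range(1, N):
    T[i*k:(i+1)*k, (i-1)*k:i*k] = B[i]
T[0:k, (N-1)*k:N*k] = B[0]

# V_1 by the back substitution of (2.3)-(2.4): V_{i-1} = -B_i^{-1} A_i V_i.
V1 = np.eye(k)
for i in range(N - 1, 0, -1):
    V1 = -np.linalg.solve(B[i], A[i] @ V1)

lhs = np.linalg.det(A[0] @ V1 + B[0])
sign = (-1.0) ** (k * k * (N - 1))
rhs = sign * np.linalg.det(T) / np.prod([np.linalg.det(B[i]) for i in range(1, N)])
```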
Notice from Theorem 3.1 that $\beta(\lambda)$ defined in (2.5) is a rational function whose poles are the zeros of $\prod_{i=2}^{N}\det(B_i(\lambda))$. Since $(\overline{A}_i, \overline{B}_i)$ and $(\widetilde{A}_i, \widetilde{B}_i)$ correspond to the discretization of $D^m$ and $\sum_{l=0}^{m-1} f_l(\lambda, x)D^l$, respectively, in many discrete methods, such as finite difference and spline collocation [3,23], the following conditions can be satisfied:

$$\det\big(\overline{B}_i\big) \neq 0, \quad i = 2, 3, \ldots, N, \tag{3.6}$$

and, for any given $\lambda$,

$$\widetilde{B}_i\overline{B}_i^{-1} \to 0, \quad i = 2, 3, \ldots, N, \quad \text{as } h \to 0, \tag{3.7}$$

where $h$ is the mesh size. Then we have the following theorem.
Theorem 3.2. Let $\overline{B}_i$ and $\widetilde{B}_i(\lambda)$ be defined as above and satisfy the conditions (3.6), (3.7), and let $B_i(\lambda) = \overline{B}_i + \widetilde{B}_i(\lambda)$. Then

$$\big|\tau_j\big(\det(B_i(\lambda))\big)\big| \to \infty \quad \text{as } h \to 0, \quad i = 2, 3, \ldots, N, \tag{3.8}$$

where $\tau_j(\det(B_i(\lambda)))$ denotes a zero of $\det(B_i(\lambda))$.

Proof. By (3.6),

$$\det\big(B_i(\lambda)\big) = \det\big(I + \widetilde{B}_i(\lambda)\overline{B}_i^{-1}\big)\det\big(\overline{B}_i\big). \tag{3.9}$$

Since $\overline{B}_i$ is a constant matrix independent of $\lambda$ and each entry of $\widetilde{B}_i(\lambda)$ satisfies (1.2), by (3.7), for any given $\lambda$, $I + \widetilde{B}_i(\lambda)\overline{B}_i^{-1}$ is nonsingular when $h$ is small enough. Then (3.8) follows immediately. ✷

Remarks. In most physical problems, one is only interested in the first several eigenvalues, which usually lie in a bounded region [4,14,17]. Notice from Theorem 3.2 that for the $\beta(\lambda)$ defined in (2.5), no poles appear in this region. Thus, this iterative algorithm is very efficient for calculating those "small" eigenvalues; physically, the calculation of large eigenvalues is of less importance. The above analysis is also useful in determining the eigenvalue distribution, which we will discuss a little later. It is noted that our algorithm is based on the block bi-diagonal structure. Such a structure is produced in the discrete differential eigenvalue problem (2.1) by most numerical methods [2,3,20], and the conditions (3.6) and (3.7) are satisfied by a suitable partition.

3.2. Stability

In the above algorithm, $V$ can be obtained in two ways: solving the systems (2.10) and (2.11) by back substitution, or solving the full systems (2.12) and (2.14). The stability of the back substitution depends upon the conditioning of $B_iA_i^{-1}$, $i = 2, 3, \ldots, N$. A more general differential eigenvalue problem is a system of ODEs:

$$\frac{\mathrm{d}u}{\mathrm{d}x} = A(\lambda, x)u, \tag{3.10}$$

where $A(\lambda, x)$ is a polynomial in $\lambda$ (or satisfies the condition (1.2)) and continuous in $x$. A simple finite difference scheme is the trapezoidal rule:

$$\Big(I + \frac{h}{2}A(\lambda, x_i)\Big)u_i - \Big(I - \frac{h}{2}A(\lambda, x_{i+1})\Big)u_{i+1} = 0, \tag{3.11}$$

which gives a block bi-diagonal system with

$$A_i = I - \frac{h}{2}A(\lambda, x_{i+1}), \qquad B_i = I + \frac{h}{2}A(\lambda, x_i). \tag{3.12}$$

By the classical stability theory of ODEs, if $\lambda h$ (or $A(\lambda, x)h$) is too large, the scheme is unstable. In this case, the conditions (3.6) and (3.7) are not satisfied, and $\beta(\lambda)$ will have poles which show themselves in the breakdown of the actual back substitution. For those "small" eigenvalues which are of physical importance, the back substitution is stable. The stability of the back substitution for higher-order ODEs is often related to the choice of discrete method; a detailed discussion has been presented in [3]. Usually, the alternative procedure given in (2.12) and (2.14) is more stable if stable linear solvers are used.
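The blocks (3.12) are one line each to form. In the sketch below, the coefficient matrix A(λ, x) is an illustrative 2×2 example, and the final comparison simply confirms the behaviour (3.7): the λ-dependent part of B_i is O(h).

```python
import numpy as np

# Trapezoidal-rule blocks (3.12) for du/dx = A(lam, x) u on a uniform mesh.
# The coefficient A(lam, x) here is an illustrative 2x2 example.
def trapezoidal_blocks(lam, N):
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    Afun = lambda lam, t: np.array([[0.0, 1.0], [lam * t, 0.0]])
    I2 = np.eye(2)
    A_blocks = [I2 - 0.5 * h * Afun(lam, x[i + 1]) for i in range(N)]
    B_blocks = [I2 + 0.5 * h * Afun(lam, x[i]) for i in range(N)]
    return A_blocks, B_blocks

# The lambda-dependent part of B_i is (h/2) A(lam, x_i) = O(h), as in (3.7):
_, B_coarse = trapezoidal_blocks(2.0, 10)
_, B_fine = trapezoidal_blocks(2.0, 100)
```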
3.3. The Argument Principle

A quite interesting question in physics is to determine the number of eigenvalues in a given region. Usually, it is difficult to calculate all eigenvalues in a given region by using a local iterative method. Global methods, such as the QR algorithm, can be used to calculate all eigenvalues, but this is very expensive, particularly for large sparse systems. We can use the Argument Principle as an aid in determining the number of eigenvalues in a given region. Recall that the Argument Principle states: if $\beta$ is analytic on a closed contour $\Gamma$ bounding a region $\Omega$ in $\mathbb{C}$, and $\Omega$ contains zeros $\lambda_i$ and poles $\tau_i$ of $\beta$, with multiplicities $m_i$ and orders $n_i$ respectively, then

$$\mathrm{Int} \equiv \frac{1}{2\pi\mathrm{i}}\oint_\Gamma \frac{\beta'(z)}{\beta(z)}\,\mathrm{d}z = \sum_i m_i - \sum_i n_i. \tag{3.13}$$
Lyness and Delves [15] used this as the basis for an algorithm for locating zeros of analytic and meromorphic functions. When $\beta(z)$ is a polynomial, $n_i = 0$, and one can use the above formula to determine the number of eigenvalues in $\Omega$. However, for a (nonlinear) matrix eigenvalue problem, the cost of calculating the characteristic polynomial can be much higher than that of solving the original eigenvalue problem. The $\beta(\lambda)$ defined in (2.5) is a rational function. By Theorem 3.2, for a bounded region $\Omega$, $\beta(\lambda)$ is analytic and $n_i = 0$ when $h$ is small enough. Then we can still use (3.13) to determine the number of eigenvalues in $\Omega$. It has been noted that $\beta'(z)/\beta(z)$ in (3.13) is precisely the inverse of Newton's correction. By (2.6) and (2.16),

$$\frac{\beta'(z)}{\beta(z)} = \mathrm{tr}\big((A_1 V_1 + B_1)^{-1}(A_1 V_1 + B_1)_\lambda\big) = -\,\mathrm{tr}\big(W_{N\lambda}W_N^{-1}\big), \tag{3.14}$$
which can be evaluated directly by using the algorithm in Section 2. Another important issue for Newton-type iterations is the initial guess. This Newton-type iterative algorithm is convergent only if the initial guess for both eigenvalue and eigenvector is good enough. The Argument Principle can be used for this issue as well: for a bounded region $\Omega$, one can split it into several small subregions and then use (3.13) to locate the zeros. A better initial guess can then be taken from a smaller subregion.

3.4. Examples

The nonlinear matrix eigenvalue problem (2.1), (2.2) arises from a wide range of differential eigenvalue problems with different numerical discretizations. Here, we briefly present the discretization by spline collocation methods and finite difference methods, considering several typical examples of differential eigenvalue problems. The theoretical analysis in Theorem 3.2 will also be confirmed. We first consider the problem (3.10) discretized by a class of $C^{k-1}$-spline collocation methods ($2k > m$); this collocation method will be used in our numerical tests. Let $\{x_i\}_{i=0}^{N}$ be a uniform partition of $[0, 1]$ with $h = x_{i+1} - x_i = 1/N$, and on the subinterval $[x_i, x_{i+1}]$ let

$$U(x_i + th) \equiv \sum_{q=0}^{k-1} u_{iq}\,\xi_q(t) + \sum_{q=0}^{k-1} (-1)^q u_{i+1,q}\,\xi_q(1-t), \quad 0 \leqslant t \leqslant 1, \tag{3.15}$$
define a $C^{k-1}$-spline collocation approximation, where $t = (x - x_i)/h$ and $\xi_q(t)$, $q = 0, 1, \ldots, k-1$, are the local basis functions which satisfy

$$\xi_q^{(p)}(0) = \delta_{pq}, \qquad \xi_q^{(p)}(1) = 0, \qquad p, q = 0, 1, \ldots, k-1.$$

Let $\{t_p^*\}_{p=1}^{k}$ be the Gaussian points in $(0, 1)$ and let $x_{i+1,p}^* = x_i + t_p^* h$ denote the collocation points. Substituting (3.15) into (1.1) and collocating at these points, we obtain the following matrix eigenvalue problem [23],

$$\Big(D^m + \sum_{l=0}^{m-1} f_l(\lambda, x)D^l\Big)U\big(x_{ip}^*\big) = 0, \quad i = 1, 2, \ldots, N, \quad p = 1, 2, \ldots, k,$$

or

$$\Big(h^{-m}B_m + \sum_{l=0}^{m-1} h^{-l}D_{il}(\lambda)B_l\Big)u_i + \Big(h^{-m}A_m + \sum_{l=0}^{m-1} h^{-l}D_{il}(\lambda)A_l\Big)u_{i+1} = 0, \quad i = 0, 1, \ldots, N-1, \tag{3.16}$$

where $\{A_l, B_l\}$ defines the collocation discrete system of $D^l$, $l = 0, 1, 2, \ldots, m$, $u_i = (u_{i0}, u_{i1}, \ldots, u_{i,k-1})^{\mathrm T}$, $A_l = (a_{pq}^l)$ and $B_l = (b_{pq}^l)$ are $k \times k$ constant matrices independent of $\lambda$, and $D_{il}(\lambda)$ is a diagonal matrix function:

$$b_{pq}^l = \xi_{q-1}^{(l)}\big(1 - t_p^*\big), \qquad a_{pq}^l = \xi_{q-1}^{(l)}\big(t_p^*\big), \qquad D_{il}(\lambda) = \mathrm{diag}\big(f_l(\lambda, x_{i1}^*), f_l(\lambda, x_{i2}^*), \ldots, f_l(\lambda, x_{ik}^*)\big).$$

It has been proved that this collocation method is of high-order accuracy when Gaussian points are used as the collocation points. Comparing (2.2) and (3.16), we have

$$\overline{B}_i = h^{-m}B_m, \qquad \widetilde{B}_i(\lambda) = \sum_{l=0}^{m-1} h^{-l}D_{il}(\lambda)B_l. \tag{3.17}$$

Then

$$B_i\overline{B}_i^{-1} = I + \sum_{l=0}^{m-1} h^{m-l}D_{il}(\lambda)B_lB_m^{-1}. \tag{3.18}$$

By the definition of $D_{il}$, there exists a constant $c > 0$ such that the poles of $\beta(\lambda)$ in (2.5) and (3.1) satisfy

$$\big|\tau_j\big(B_i(\lambda)\big)\big| \geqslant ch^{-\sigma}, \tag{3.19}$$

i.e., $B_i(\lambda)$ is nonsingular for $h$ small enough and $|\lambda| \leqslant ch^{-\sigma}$, where

$$\sigma < \min_l \frac{m-l}{d_{f_l}} \tag{3.20}$$

and $d_{f_l}$ is defined in (1.2). The result (3.8) in Theorem 3.2 is thus obtained. It is obvious that the estimate (3.19) is true for all methods which satisfy (3.17).
Next, we study the simple example

$$-u'' + 2\lambda u' = 0, \qquad u(0) = u(1) = 0, \tag{3.21}$$

by using both the central finite difference method and a cubic $C^1$ collocation method ($k = 2$). For the central finite difference, we have

$$-\frac{u_{i+1} - 2u_i + u_{i-1}}{h^2} + 2\lambda\,\frac{u_{i+1} - u_{i-1}}{2h} = 0, \quad i = 1, 2, \ldots, N-1, \tag{3.22}$$

and therefore

$$T(\lambda) = \begin{pmatrix} \dfrac{2}{h^2} & -\dfrac{1}{h^2} + \dfrac{\lambda}{h} & & & 0 \\ -\dfrac{1}{h^2} - \dfrac{\lambda}{h} & \dfrac{2}{h^2} & -\dfrac{1}{h^2} + \dfrac{\lambda}{h} & & \\ & \ddots & \ddots & \ddots & \\ 0 & & & -\dfrac{1}{h^2} - \dfrac{\lambda}{h} & \dfrac{2}{h^2} \end{pmatrix}.$$

Then, by a simple partition,

$$B_i = \begin{pmatrix} \dfrac{2}{h^2} & -\dfrac{1}{h^2} + \dfrac{\lambda}{h} \\ 0 & -\dfrac{1}{h^2} - \dfrac{\lambda}{h} \end{pmatrix}, \quad i = 2, 3, \ldots, N. \tag{3.23}$$

Obviously, $B_i$ is singular only if $\lambda = -1/h$, i.e., $\tau_j(B_i(\lambda)) = \mathrm{O}(1/h)$. Use of the cubic $C^1$-spline collocation method as stated above (see also [23]) gives

$$\overline{B}_i = -\frac{1}{h^2}\begin{pmatrix} 6(1 - 2t^*) & 6t^* - 4 \\ 6(2t^* - 1) & 2 - 6t^* \end{pmatrix}, \quad i > 1, \qquad \overline{B}_1^{\mathrm T} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \tag{3.24}$$

$$\widetilde{B}_i = \frac{1}{h}\begin{pmatrix} -6t^*(1 - t^*) & (1 - t^*)(1 - 3t^*) \\ -6t^*(1 - t^*) & t^*(3t^* - 2) \end{pmatrix}, \qquad \widetilde{B}_1 = 0,$$

where $t^* = \frac{1}{2} - \frac{1}{2\sqrt{3}}$. By a straightforward calculation, we obtain

$$\tau_j\big(\det(B_i(\lambda))\big) = \mathrm{O}\Big(\frac{1}{h}\Big). \tag{3.25}$$

This confirms the result in Theorem 3.2.
Finally, we consider a fourth-order differential eigenvalue problem:

$$\phi^{(4)} + 2\gamma^2\phi'' + \gamma^4\phi + \gamma R\Big[\frac{3}{2}\big(1 - y^2\big)\big(\phi'' + \gamma^2\phi\big) + 3\phi\Big] = 0 \tag{3.26}$$

with boundary conditions

$$\phi(\pm 1) = \phi'(\pm 1) = 0, \tag{3.27}$$

where $R$ is the Reynolds number and $\gamma$ is the wave number of the disturbance. Eq. (3.26) arises from the stability analysis of an incompressible viscous flow in a channel and is similar to the well-known Orr–Sommerfeld equation [4,17]. Using either the $C^1$ spline collocation or finite difference methods, we have the estimate (3.19) with $\sigma = 2$.
4. Numerical test

We have presented an iterative algorithm for solving nonlinear eigenvalue problems with the special structure (2.1), (2.2). A comparison of computational complexity with some previous algorithms is given in Table 1, where Algorithm I follows (2.10), (2.11) and Algorithm II follows (2.12) and (2.15). In this section, we present two numerical examples of differential eigenvalue problems. Hermite-type spline collocation methods as described in the last section are used for the discretization of the ODEs in both examples. Only Algorithm II is tested, for stability reasons, and the mesh used here is uniform. The iterative procedure is stopped when Newton's correction is smaller than a prescribed tolerance $\delta > 0$; we take $\delta = 10^{-12}$ for both examples. Throughout, the computation is performed on a Sun SPARC-20 in double precision.

Table 1
The comparison of cost at each iteration (k ≪ N)

  Algorithm                     Operation count
  Lancaster's algorithm [12]    O(N²)
  Osborne's algorithm [18]      (13/2)k³N
  Algorithm I                   4k³N
  Algorithm II                  (13/2)k³N

Example 4.1. First we consider the simple example

$$-u''(x) - \lambda u = 0, \quad x \in (0, 1), \qquad u(0) = u(1) = 0. \tag{4.1}$$

The eigenvalues of the differential eigenvalue problem (4.1) are

$$\lambda_j = \pi^2 j^2, \quad j = 1, 2, \ldots. \tag{4.2}$$
Table 2
The convergence histories of Algorithm II

  Collocation    Mesh          Iteration   Newton's      Error
  points (k)     points (N)    number      correction    |λ1 − λ1h|
  2              4             0           9.871D0
                               1           3.541D−2
                               2           7.836D−6
                               3           3.403D−15     1.656D−3
  2              8             0           9.871D0
                               1           5.501D−3
                               2           3.193D−8
                               3           3.403D−14     1.144D−4
  2              16            0           9.871D0
                               1           2.891D−3
                               2           1.500D−10
                               3           1.116D−14     7.51D−6
  3              4             0           9.871D0
                               1           2.709D−2
                               2           4.712D−8
                               3           3.980D−15     5.574D−6
  4              4             0           9.871D0
                               1           2.331D−2
                               2           2.733D−8
                               3           5.248D−15     1.101D−8
Our numerical results are given in Table 2 for several numbers of collocation points and mesh points. They confirm some basic features: the spline collocation methods produce the high-order accuracy which has been proved theoretically. Numerical results obtained by using Lancaster's algorithm are given in Table 3. It is obvious that the proposed algorithm needs fewer iterations and produces higher-order convergence (> 2) for such a linear eigenvalue problem.

Example 4.2. Now we consider the fourth-order differential eigenvalue problem defined in (3.26), where γ is the eigenvalue to be calculated. The problem was considered by Bramley and Dennis [4] by using a Chebyshev series discretization and the standard QR algorithm for solving the resulting dense matrix
Table 3
The convergence histories for Lancaster's algorithm with k = 2

  Iteration     Newton's correction
  number        N = 4         N = 8         N = 16
  0             9.871D0       9.871D0       9.871D0
  1             6.545D0       6.261D0       6.127D0
  2             2.790D0       2.927D0       2.984D0
  3             5.187D−1      6.512D−1      7.187D−1
  4             1.696D−2      3.015D−2      3.882D−1
  5             1.779D−6      6.285D−5      1.095D−4
  6             1.955D−11     2.725D−10     8.692D−10
  7             2.256D−15     2.301D−14     1.171D−14
Fig. 1. The first two eigenvalues for odd model against Reynolds number.
eigenvalue problems. It is quite expensive to calculate a few local eigenvalues by using the QR algorithm, since it is a global algorithm. Eq. (3.26) is nonlinear in γ, which in general will be complex. It is easily verified that the complex eigenvalues occur in conjugate pairs, but for simplicity only one of each pair is given in the numerical results. The eigenfunctions are either odd or even functions of y, and the terms odd and even will be applied to the eigenvalues themselves. We use only the fourth-order spline collocation approximation for discretizing (3.26) and then use the algorithm in Section 2 to calculate some "small" eigenvalues. Two odd and two even real eigenvalue curves against the Reynolds number are presented in Figs. 1 and 2, respectively. Some important features observed by Bramley and Dennis [4] are confirmed again by our numerical results. For both odd and even models, there are no real positive eigenvalues for R ⩾ 5. In this iterative procedure, only 4–5 Newton iterations are needed when the iteration converges. Some complex eigenvalues are listed in Table 4.
Fig. 2. The first two eigenvalues for even model against Reynolds number.
Table 4
The complex eigenvalues

  Reynolds number   Eigenvalues (odd model)    Eigenvalues (even model)
  1.0               (3.47755, 1.38780)         (1.80722, 1.19351)
                    (6.68937, 1.22354)         (5.09513, 1.53078)
                    (−4.06733, 1.29828)        (−2.49138, 0.86113)
                    (−7.21199, 1.63618)        (−5.64247, 1.49793)
  5.0               (2.80077, 1.00865)         (1.22062, 1.01520)
                    (5.94490, 1.03028)         (4.98839, 0.89218)
                    (−7.24814, 1.26150)        (−6.39489, 0.74488)
                    (−10.04782, 1.08913)       (−8.71975, 1.13394)
  10.0              (3.37772, 0.50968)         (1.01975, 0.77019)
                    (6.62135, 1.24368)         (4.98839, 0.89218)
                    (−10.29045, 1.49476)
  100               (2.78436, 0.22550)         (0.833837, 0.47274)
                    (6.33168, 0.70864)         (−25.94723, 12.75441)
The contour integral in (3.13) is calculated numerically by using the trapezoidal rule, which gives high-order accuracy for a smooth integrand. Several closed contours, as shown in Figs. 3 and 4, are tested for the underlying problem. The computed counts suggest that there are no eigenvalues in the square region in Fig. 3 or in the big circle in Fig. 4, and only one eigenvalue in the small circle in Fig. 4.
Fig. 3. The contour of integration.
Fig. 4. The contour of integration.
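The contour count just described can be sketched in a few lines. The linear pencil T(λ) = λI − M with known eigenvalues below is illustrative, and the integrand tr(T⁻¹Tλ) plays the role of β′/β in (3.13)–(3.14).

```python
import numpy as np

# Counting eigenvalues inside a contour via the Argument Principle (3.13),
# with beta'/beta evaluated as tr(T^{-1} T_lambda) and the trapezoidal rule
# applied on a circle. Illustrative pencil T(lam) = lam*I - M, eigenvalues 1, 2, 5.
M = np.diag([1.0, 2.0, 5.0])
T = lambda z: z * np.eye(3) - M
Tlam = np.eye(3)

center, radius, n = 0.0, 3.0, 256
theta = 2.0 * np.pi * np.arange(n) / n
zs = center + radius * np.exp(1j * theta)

# Int = (1/(2*pi*i)) * contour integral of tr(T^{-1} T') dz
vals = np.array([np.trace(np.linalg.solve(T(z), Tlam)) for z in zs])
dz = 1j * radius * np.exp(1j * theta) * (2.0 * np.pi / n)  # dz along the circle
count = int(round((np.sum(vals * dz) / (2j * np.pi)).real))
```

For a periodic integrand such as this, the trapezoidal rule converges geometrically, which is why modest numbers of quadrature points suffice in practice.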
5. Conclusions

The algorithm presented in this paper is based on a block bi-diagonal structure and a specific implicitly defined function β(λ). We have proved that β(λ) is analytic in the area of interest, and such a block bi-diagonal structure appears in a wide range of physical applications. The algorithm here is considered for problems with a single eigenparameter, but it is easy to extend to those with more parameters [11] due to the nature of Newton's iteration.
Acknowledgements

The first author wishes to express his appreciation to Professor M.R. Osborne for suggesting this topic and for constructive comments, and to Dr. C. Lenard for many fruitful discussions. He would also like to thank the Centre for Mathematics and Its Applications, Australian National University, for its hospitality; part of this work was done while he was visiting the centre.
References

[1] A.L. Andrew, K.E. Chu, P. Lancaster, On the numerical solution of nonlinear eigenvalue problems, Computing 55 (1995) 91–111.
[2] U.M. Ascher, J. Christiansen, R.D. Russell, Collocation software for boundary-value ODEs, ACM Trans. Math. Software 7 (1981) 209–222.
[3] U.M. Ascher, R.M. Mattheij, R.D. Russell, Numerical Solution of Boundary Value Problems for Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs, NJ, 1988.
[4] J.S. Bramley, S.C.R. Dennis, The calculation of eigenvalues for the stationary perturbation of Poiseuille flow, J. Comput. Phys. 47 (1982) 179–198.
[5] J.S. Bramley, L. Dieci, R.D. Russell, Numerical solution of eigenvalue problems for linear boundary value ODEs, J. Comput. Phys. 94 (1991) 382–402.
[6] D.F. Davidenko, The evaluation of determinants by the method of variation of parameters, Soviet Math. Dokl. 1 (1960) 316–319.
[7] J.J. Dongarra, B. Straughan, D.W. Walker, Chebyshev tau-QZ algorithm methods for calculating spectra of hydrodynamic stability problems, Appl. Numer. Math. 22 (1996) 399–434.
[8] D.L. Harrar II, M.R. Osborne, Computational techniques for differential equation eigenvalue problems on vector processors, Mathematical Research Report, ANU, Australia, 1999.
[9] M. Hegland, M.R. Osborne, Wrap-around partitioning for block bidiagonal linear systems, IMA J. Numer. Anal. 18 (1998) 373–383.
[10] W. Huang, D.M. Sloan, The pseudospectral method for solving differential eigenvalue problems, J. Comput. Phys. 111 (1994) 399–409.
[11] X. Ji, 2D bisection method for double eigenvalue problems, J. Comput. Phys. 126 (1996) 91–98.
[12] P. Lancaster, Algorithms for lambda-matrices, Numer. Math. 6 (1964) 388–394.
[13] K.M. Liu, E.L. Ortiz, Tau method approximation of differential eigenvalue problems where the spectral parameter enters nonlinearly, J. Comput. Phys. 72 (1987) 299–310.
[14] L. Lu, W. Sun, The minimal eigenvalues of a class of block tridiagonal matrices, IEEE Trans. Inform. Theory 43 (1997) 787–792.
[15] J.N. Lyness, L.M. Delves, A numerical method for locating the zeros of an analytic function, Math. Comp. 21 (1967) 543–560.
[16] A. Neumaier, Residual inverse iteration for the nonlinear eigenvalue problem, SIAM J. Numer. Anal. 22 (1985) 914–923.
[17] S.A. Orszag, Accurate solution of the Orr–Sommerfeld stability equation, J. Fluid Mech. 50 (1971) 689–703.
[18] M.R. Osborne, S. Michaelson, On the numerical solution of eigenvalue problems in which the eigenvalue parameter appears nonlinearly, with an application to differential equations, Comput. J. 7 (1964) 66–71.
[19] M.R. Osborne, Inverse iteration, Newton's method, and non-linear eigenvalue problems, in: The Contributions of Dr. J.H. Wilkinson to Numerical Analysis, Symposium Proceedings Series, Vol. 19, The Institute of Mathematics and its Applications, 1979.
[20] J. Paine, R.D. Russell, Conditioning of collocation matrices and discrete Green's functions, SIAM J. Numer. Anal. 23 (1986).
[21] A. Ruhe, Algorithms for the nonlinear eigenvalue problem, SIAM J. Numer. Anal. 10 (1973) 674–689.
[22] W. Sun, Block iterative algorithms for solving Hermite bicubic collocation equations, SIAM J. Numer. Anal. 33 (1996) 589–601.
[23] W. Sun, The spectral analysis of Hermite cubic spline collocation systems, SIAM J. Numer. Anal. 36 (1999) 1962–1975.
[24] S. Wright, Stable parallel algorithms for two-point boundary value problems, SIAM J. Sci. Statist. Comput. 13 (1992) 742–764.