Journal of Computational and Applied Mathematics 239 (2013) 135–151
Legendre multi-projection methods for solving eigenvalue problems for a compact integral operator

Bijaya Laxmi Panigrahi^a,∗, Guangqing Long^b, Gnaneshwar Nelakanti^c

^a Institute of Mathematics and Applications, Bhubaneswar 751003, India
^b Department of Mathematics, Guangxi Normal College, Nanning 530001, PR China
^c Department of Mathematics, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
Article history: Received 7 December 2011; received in revised form 15 August 2012.

Keywords: Eigenvalue problem; Legendre polynomials; Multi-projection method; Superconvergence rates; Integral equations

Abstract
In this paper, we consider the M-Galerkin and M-collocation methods for solving the eigenvalue problem for a compact integral operator with smooth kernels, using Legendre polynomial bases. We obtain error bounds for the eigenvalues and the gap between the spectral subspaces in both Legendre M-Galerkin and Legendre M-collocation methods. We also obtain superconvergence results for the eigenvalues and iterated eigenvectors in both L2 and infinity norm. We illustrate our results with numerical examples. © 2012 Elsevier B.V. All rights reserved.
1. Introduction

Let X be a Banach space. We consider the following eigenvalue problem: Find u ∈ X and λ ∈ C \ {0} such that
K u = λu,  ∥u∥ = 1,  (1.1)
where K is a compact linear integral operator from X into X and C denotes the set of complex numbers. Many practical problems in science and engineering are formulated as eigenvalue problems like (1.1) for compact linear integral operators K (see [1]). For many years, numerical solutions of eigenvalue problems have attracted much attention. The Galerkin, Petrov–Galerkin, collocation, Nyström and degenerate kernel methods are the methods commonly used for approximating the eigenelements of the compact integral operator K. The convergence analyses of the Galerkin, Petrov–Galerkin, collocation, Nyström and degenerate kernel methods are well documented in [1–5]. A projection method for approximately solving the eigenvalue problem (1.1) proceeds as follows. First select a sequence of linear subspaces {X_n ⊂ X, n ∈ N} and a sequence of projection operators {P_n : X → X_n, n ∈ N}, where N := {1, 2, 3, . . .}, and then solve the approximate problem
P_n K u_n = λ_n u_n,  0 ≠ λ_n ∈ C, ∥u_n∥ = 1.  (1.2)
Eq. (1.2) gives rise to the Galerkin or collocation method, respectively, when the Pn are orthogonal or interpolatory bounded projections. Using the notation in [6], the operator K can be written in the following matrix form:
K = \begin{pmatrix} K_n^{LL} & K_n^{LH} \\ K_n^{HL} & K_n^{HH} \end{pmatrix} = \begin{pmatrix} P_n K P_n & P_n K(I − P_n) \\ (I − P_n)K P_n & (I − P_n)K(I − P_n) \end{pmatrix}.  (1.3)
The standard projection method replaces the operator K by K_n^{LL}.
∗ Corresponding author. Tel.: +09853277584.
E-mail addresses: [email protected] (B.L. Panigrahi), [email protected] (G. Long), [email protected] (G. Nelakanti).
doi:10.1016/j.cam.2012.09.014
There have been many attempts at improving the accuracy of numerical solutions. One idea for improving the convergence of the standard projection method is to approximate the operator K using several blocks instead of the single block K_n^{LL}, that is, using a combination of some of the blocks K_n^{LL}, K_n^{HL} and K_n^{LH}. Using the well-known Sloan iteration, one approximates the operator K by the combination of K_n^{LL} and K_n^{LH}; the Sloan iteration is usually used to improve the accuracy of projection methods. The multi-projection operator K_n^M (cf. [4,6–8]) is the combination of the three blocks K_n^{LL}, K_n^{HL} and K_n^{LH}, used to approximate the operator K. Instead of the eigenvalue problem (1.2), we consider the approximation problem: Find u_n^M ∈ X, 0 ≠ λ_n^M ∈ C such that

K_n^M u_n^M = λ_n^M u_n^M,  ∥u_n^M∥ = 1.  (1.4)
We call the scheme (1.4) the M-projection method for solving (1.1). The eigenvalue problem (1.4) is called the M-Galerkin or M-collocation method, respectively, if the projection P_n is chosen to be an orthogonal or an interpolatory projection. Let X_n be a sequence of piecewise polynomial subspaces of X of degree ≤ r − 1 on [a, b]. Then in both the M-Galerkin and M-collocation methods, under suitable conditions on the kernel, it is known that the orders of convergence for the eigenvalues and iterated eigenvectors are O(h^{4r}) and the gaps between the spectral subspaces are of the order O(h^{3r}), where h denotes the norm of the partition (cf. [4,7,8]). However, using piecewise polynomials as basis functions, the computation of sufficiently accurate eigenvalues and eigenvectors may require a much finer partition of the domain [a, b], and hence the size of the matrix eigenvalue problem corresponding to (1.4) becomes very large as the norm of the partition h becomes smaller. Moreover, it is well known that the matrices in these matrix eigenvalue problems are dense, and generating them requires substantial computational effort. This motivated us to develop M-Galerkin and M-collocation methods for solving the eigenvalue problem (1.2) using global polynomial basis functions rather than piecewise polynomial basis functions. In this paper, we consider the M-Galerkin and M-collocation methods for solving the eigenvalue problem for a compact integral operator with a smooth kernel, using global polynomial bases. Obviously, low-degree polynomials imply a small matrix eigenvalue problem, which is highly desirable in practical computations. We use the Legendre polynomials as basis functions to find the approximate eigenvalues and eigenvectors. The Legendre polynomial bases are easy to generate recursively and they have the nice property of orthogonality.
Hence Legendre basis functions are computationally less expensive for solving the matrix eigenvalue problem corresponding to (1.4) than piecewise polynomial basis functions. However, if P_n denotes either an orthogonal or an interpolatory projection from X into a subspace of global polynomials of degree ≤ n, then ∥P_n∥_∞ is unbounded. Therefore, there exists at least one f ∈ C[a, b] such that P_n f ↛ f. The purpose of this paper is to obtain, using Legendre polynomial bases, superconvergence results for the approximate eigenvalues and eigenvectors in the M-Galerkin and M-collocation methods in both L² and infinity norm similar to those in the case of spline bases. In fact, we show that the error bounds for eigenvalues in the Legendre M-Galerkin method are of the order O(n^{−4r}) and the gaps between the spectral subspaces are of the order O(n^{−3r}) in the L² norm and O(n^{1/2−3r}) in the infinity norm, where r denotes the smoothness of the kernel and n denotes the degree of the Legendre polynomials employed. By iterating the eigenvectors we show that the iterated eigenvectors converge with the order of convergence O(n^{−4r}) in both L² and infinity norm. For the Legendre M-collocation method, however, we prove that the error bounds for the eigenvalues are of the order O(n^{−2r}), the gaps between the spectral subspaces are of the order O(n^{−2r}) in the L² norm and O(n^{1/2−2r}) in the infinity norm, and the error bounds for the iterated eigenvectors are of the order O(n^{−2r}) in the infinity norm. In Section 2 we set up a theoretical framework for the eigenvalue problem using global polynomial bases. In Section 3 we apply the theory presented in Section 2 to the Legendre M-Galerkin and Legendre M-collocation methods to obtain superconvergence rates. In Section 4 we present numerical results. Throughout the paper, c denotes a generic constant.

2. An abstract framework

In this section, we present an abstract framework for multi-projection methods for solving eigenvalue problems for a compact linear integral operator, using polynomial bases. Consider the following integral operator K defined on X = L²[−1, 1] or C[−1, 1]:
K u(s) = ∫_{−1}^{1} K(s, t)u(t) dt,  s ∈ [−1, 1],  (2.1)
where the kernel K(·,·) ∈ C([−1, 1] × [−1, 1]). Then K : X → X is a compact operator. Let BL(X) denote the space of all bounded linear operators from X into X. The resolvent set of K is given by
ρ(K) = {z ∈ C : (K − zI)^{−1} ∈ BL(X)}.

The spectrum of K is σ(K) = C \ ρ(K). We are interested in the following eigenvalue problem: Find u ∈ X and λ ∈ C \ {0} such that

K u = λu,  ∥u∥ = 1.  (2.2)
Assume that λ ≠ 0 is an eigenvalue of K with algebraic multiplicity m and ascent ℓ. Let Γ ⊂ ρ(K) be a simple closed rectifiable curve such that σ(K) ∩ int Γ = {λ} and 0 ∉ int Γ. Now we describe the projection methods for solving the eigenvalue problem (2.2) using Legendre polynomial bases. Let X_n denote the sequence of Legendre polynomial subspaces of X of degree ≤ n and let P_n be the projection operator from X into X_n satisfying the following conditions:
(C1) The set of operators {P_n : n ∈ N} is uniformly bounded in L² norm, i.e., there exists a positive constant M independent of n such that ∥P_n∥_{L²} ≤ M for all n ∈ N.
(C2) The operators P_n converge pointwise to I on X, i.e., for any u ∈ X, ∥P_n u − u∥_{L²} → 0 as n → ∞.
With the bounded linear projection P_n, we will discuss the multi-projection method. To do this, we define the multi-projection operator (cf. [6,7]) by
K_n^M = P_n K P_n + (I − P_n)K P_n + P_n K(I − P_n).  (2.3)

The approximate eigenvalue problem for solving (2.2) is that of finding λ_n^M ∈ σ(K_n^M) \ {0} and u_n^M ∈ X such that

K_n^M u_n^M = λ_n^M u_n^M,  ∥u_n^M∥_{L²} = 1.  (2.4)
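The three-block structure of (2.3) is easy to sanity-check on a finite-dimensional surrogate. The following sketch (illustrative sizes and random data, not from the paper) verifies the algebraic identities K_n^M = P_n K + K P_n − P_n K P_n and K − K_n^M = (I − P_n)K(I − P_n), i.e. that the multi-projection operator discards exactly the "high–high" block of (1.3):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                    # surrogate discretization size
K = rng.standard_normal((N, N))          # stands in for a discretized operator
P = np.diag([1.0] * 3 + [0.0] * (N - 3)) # orthogonal projection onto 3 coords
I = np.eye(N)

# Multi-projection operator (2.3): three of the four blocks of K
K_M = P @ K @ P + (I - P) @ K @ P + P @ K @ (I - P)

# Equivalent closed form, and the defect is the remaining block
assert np.allclose(K_M, P @ K + K @ P - P @ K @ P)
assert np.allclose(K - K_M, (I - P) @ K @ (I - P))
```

The second identity is the reason for the superconvergence analysis below: the error K − K_n^M is sandwiched between two factors of I − P_n.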
To solve (2.4), applying P_n and I − P_n to the above equation yields

P_n K P_n u_n^M + P_n K(I − P_n)u_n^M = λ_n^M P_n u_n^M  (2.5)

and

(I − P_n)u_n^M = (1/λ_n^M)(I − P_n)K P_n u_n^M,  (2.6)

respectively. Substituting (2.6) into (2.5), we get

P_n K P_n u_n^M + (1/λ_n^M) P_n K(I − P_n)K P_n u_n^M = λ_n^M P_n u_n^M,

which is equivalent to solving the following eigenvalue problem:

\begin{pmatrix} P_n K P_n & I \\ P_n K(I − P_n)K P_n & O \end{pmatrix} \begin{pmatrix} ψ_n^1 \\ ψ_n^2 \end{pmatrix} = λ_n^M \begin{pmatrix} ψ_n^1 \\ ψ_n^2 \end{pmatrix},  (2.7)

where I and O are the identity and zero matrices, respectively, ψ_n^1 = P_n u_n^M and ψ_n^2 = (1/λ_n^M) P_n K(I − P_n)K P_n u_n^M. By solving the matrix eigenvalue problem (2.7), we find λ_n^M and ψ_n^1 = u_n^1, and using (2.6), we find u_n^2 = (I − P_n)u_n^M = (1/λ_n^M)(I − P_n)K u_n^1. Thus the eigenvector is given by u_n^M = u_n^1 + u_n^2.

Analysis of the approximation of the eigenelements λ_n^M, u_n^M of the problem (2.4) to the eigenelements λ, u of the eigenvalue problem (2.2) requires a measure of convergence of the approximate operator K_n^M to the operator K. For any u ∈ C^{(r)}[−1, 1], the space of functions on [−1, 1] that are r times continuously differentiable, the following error bounds hold:
∥u − P_n u∥_{L²} ≤ c n^{−r} ∥u^{(r)}∥_{L²},  (2.8)
∥u − P_n u∥_∞ ≤ c n^{β−r} ∥u^{(r)}∥_{L²},  (2.9)

where c is a constant independent of n and β < r (see [9,10]). In the estimate (2.9), β = 3/4 when P_n is the orthogonal projection and β = 1/2 when P_n is the interpolatory projection. From the estimate (2.9), we note that ∥P_n u − u∥_∞ need not converge to 0 as n → ∞ for every u ∈ C[−1, 1].
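The algebraic decay in (2.8) can be observed numerically. The sketch below is an assumption-laden illustration, not taken from the paper: the test function u(t) = |t|³ (of limited smoothness, so the decay is algebraic rather than exponential) and the quadrature sizes are arbitrary choices; the orthogonal Legendre projection is computed with Gauss–Legendre quadrature.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander, legval

def l2_projection_error(u, n, quad_pts=400):
    """L2 error ||u - P_n u|| of the orthogonal projection onto
    polynomials of degree <= n, via Gauss-Legendre quadrature."""
    t, w = leggauss(quad_pts)
    V = legvander(t, n)                    # V[i, k] = L_k(t_i), unnormalized
    # coefficients c_k = (2k+1)/2 * integral of u * L_k
    c = (2 * np.arange(n + 1) + 1) / 2 * (V * (w * u(t))[:, None]).sum(axis=0)
    resid = u(t) - legval(t, c)            # u - P_n u at the quadrature nodes
    return np.sqrt(np.sum(w * resid ** 2))

u = lambda t: np.abs(t) ** 3               # limited smoothness on [-1, 1]
errs = [l2_projection_error(u, n) for n in (4, 8, 16, 32)]
```

The computed errors decrease steadily as the degree n grows, in line with the n^{−r} rate of (2.8).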
We set the following notation. For u ∈ L²[−1, 1], ∥u∥_{L²} = (∫_{−1}^{1} |u(t)|² dt)^{1/2}, and for any function u ∈ C^{(r)}[−1, 1], define ∥u∥_{r,∞} = max{∥u^{(j)}∥_∞ : 0 ≤ j ≤ r}, where u^{(j)} denotes the jth derivative of u. If K(·,·) ∈ C^{(r)}([−1, 1] × [−1, 1]), then R(K) ⊂ C^{(r)}[−1, 1] and we define

K^{(i,j)}(s, t) = ∂^{i+j}K(s, t)/(∂s^i ∂t^j)  and  ∥K∥_{r,∞} = max{∥K^{(i,j)}(·,·)∥_∞ : 0 ≤ i ≤ r, 0 ≤ j ≤ r}.
Set K_s(t) = K(s, t) for s, t ∈ [−1, 1]. Note that for K(s, ·) ∈ C^{(j)}([−1, 1]), j = 0, 1, 2, . . . , r, and u ∈ C[−1, 1], we have

|(K u)^{(j)}(s)| = |∫_{−1}^{1} (∂^j/∂s^j)K(s, t)u(t) dt|
 ≤ max_{s,t∈[−1,1]} |(∂^j/∂s^j)K(s, t)| ∫_{−1}^{1} |u(t)| dt
 ≤ √2 ∥K∥_{j,∞} ∥u∥_{L²} ≤ 2 ∥K∥_{j,∞} ∥u∥_∞.

From this it follows that for j = 0, 1, 2, . . . , r,

∥(K u)^{(j)}∥_∞ ≤ √2 ∥K∥_{j,∞} ∥u∥_{L²} ≤ 2 ∥K∥_{j,∞} ∥u∥_∞,  (2.10)

and

∥(K u)^{(j)}∥_{L²} ≤ √2 ∥(K u)^{(j)}∥_∞ ≤ 2 ∥K∥_{j,∞} ∥u∥_{L²} ≤ 2√2 ∥K∥_{j,∞} ∥u∥_∞.  (2.11)
Next we discuss the existence of and error bounds for the approximate eigenelements. To this end, we first show that the multi-projection operator K_n^M norm converges to K in both L² and infinity norm.

Theorem 2.1. Assume K(·,·) ∈ C^{(r)}([−1, 1] × [−1, 1]), r > β. Then K_n^M is norm convergent to K in both L² and infinity norm.

Proof. Using the estimates (2.8) and (2.11), we see that

∥(I − P_n)K u∥_{L²} ≤ c n^{−r} ∥(K u)^{(r)}∥_{L²} ≤ c n^{−r} ∥K∥_{r,∞} ∥u∥_{L²}.  (2.12)

Hence using (C1) and (2.12), we have

∥K − K_n^M∥_{L²} = ∥(I − P_n)K(I − P_n)∥_{L²} ≤ (1 + M) ∥(I − P_n)K∥_{L²},

and it follows that ∥K − K_n^M∥_{L²} → 0 as n → ∞. Next, using the estimate (2.9), we get

∥(K − K_n^M)u∥_∞ = ∥(I − P_n)K(I − P_n)u∥_∞ ≤ c n^{β−r} ∥(K(I − P_n)u)^{(r)}∥_∞.

Using the Cauchy–Schwarz inequality, we have

|(K(I − P_n)u)^{(r)}(s)| = |∫_{−1}^{1} (∂^r/∂s^r)K(s, t)(I − P_n)u(t) dt|
 = |⟨K_s^{(r,0)}(·), (I − P_n)u⟩| ≤ ∥K_s^{(r,0)}(·)∥_{L²} ∥(I − P_n)u∥_{L²}
 ≤ c ∥K∥_{r,∞} (1 + ∥P_n∥_{L²}) ∥u∥_{L²} ≤ c (1 + M) ∥K∥_{r,∞} ∥u∥_∞.

Hence

∥(K − K_n^M)u∥_∞ ≤ c n^{β−r} ∥K∥_{r,∞} ∥u∥_∞,

which gives ∥K − K_n^M∥_∞ → 0 as n → ∞ for r > β. This completes the proof. □
Since K_n^M is norm convergent to K in both L² and infinity norm, the spectrum of K_n^M inside Γ consists of m eigenvalues, say λ_{n,1}^M, λ_{n,2}^M, . . . , λ_{n,m}^M, counted according to their algebraic multiplicities inside Γ (see [1,11]). Let

λ̂_n^M = (λ_{n,1}^M + λ_{n,2}^M + · · · + λ_{n,m}^M)/m

denote their arithmetic mean; we approximate λ by λ̂_n^M. Let

P^S = −(1/2πi) ∫_Γ (K − zI)^{−1} dz  and  P_{n,M}^S = −(1/2πi) ∫_Γ (K_n^M − zI)^{−1} dz
be the spectral projections of K and K_n^M, respectively, associated with their corresponding spectra inside Γ. Let

S_n^M = (1/2πi) ∫_Γ (K_n^M − zI)^{−1}/(z − λ_n^M) dz

be the reduced resolvent associated with K_n^M and λ_n^M. To discuss the closeness of the eigenvectors of the original operator K and those of the approximate operator K_n^M, we recall the concept of a gap between the spectral subspaces. For nonzero closed subspaces Y₁ and Y₂ of X, and for p = 2 or ∞, let

δ_p(Y₁, Y₂) = sup{dist_p(y, Y₂) : y ∈ Y₁, ∥y∥_{L^p} = 1}.  (2.13)

Then

δ̂_p(Y₁, Y₂) = max{δ_p(Y₁, Y₂), δ_p(Y₂, Y₁)}  (2.14)

denotes the gap between the spectral subspaces in L² norm (p = 2) and infinity norm (p = ∞). We quote the following lemmas, which give the error bounds for the eigenvalues and the gap between the spectral subspaces.

Lemma 2.2 ([4]). If X is a Banach space, K, K_n^M ∈ BL(X) for n ∈ N and K_n^M is norm convergent to K in L² and infinity norm, then for sufficiently large n, there exists a constant c independent of n such that for j = 1, 2, . . . , m,

|λ − λ̂_n^M| ≤ c ∥K_n^M(K − K_n^M)K∥_{L²},  (2.15)
|λ − λ_{n,j}^M|^ℓ ≤ c ∥K_n^M(K − K_n^M)K∥_{L²}.  (2.16)
Lemma 2.3 ([2,4]). Let X be a Banach space, K, K_n^M ∈ BL(X) for n ∈ N and K_n^M be norm convergent to K in both L² and infinity norm. Let R(P^S) and R(P_{n,M}^S) be the ranges of the spectral projections P^S and P_{n,M}^S, respectively. Then for sufficiently large n, there exists a constant c independent of n such that

δ̂₂(R(P^S), R(P_{n,M}^S)) ≤ c ∥(K − K_n^M)K∥_{L²},
δ̂_∞(R(P^S), R(P_{n,M}^S)) ≤ c ∥(K − K_n^M)K∥_∞.

In particular, for any u_n^M ∈ R(P_{n,M}^S), we have

∥u_n^M − P^S u_n^M∥_{L²} ≤ c ∥(K − K_n^M)K∥_{L²},
∥u_n^M − P^S u_n^M∥_∞ ≤ c ∥(K − K_n^M)K∥_∞.

In the following lemma we give the error bounds for the iterated eigenvectors ũ_n^M = K u_n^M, u_n^M ∈ R(P_{n,M}^S).

Lemma 2.4 ([4,11]). Let X be a Banach space and assume that K, K_n^M ∈ BL(X) with K_n^M norm convergent to K in both L² and infinity norm. Let R(P^S) and R(P_{n,M}^S) be the ranges of the spectral projections P^S and P_{n,M}^S, respectively. Then for sufficiently large n, there exists a constant c independent of n such that

δ₂(K R(P_{n,M}^S), R(P^S)) ≤ c ∥K(K − K_n^M)K∥_{L²},
δ_∞(K R(P_{n,M}^S), R(P^S)) ≤ c ∥K(K − K_n^M)K∥_∞.

In particular, for any u_n^M ∈ R(P_{n,M}^S), ũ_n^M = K u_n^M, we have

∥ũ_n^M − P^S ũ_n^M∥_{L²} = ∥K u_n^M − P^S K u_n^M∥_{L²} ≤ c ∥K(K − K_n^M)K∥_{L²},
∥ũ_n^M − P^S ũ_n^M∥_∞ = ∥K u_n^M − P^S K u_n^M∥_∞ ≤ c ∥K(K − K_n^M)K∥_∞.
Now suppose λ is a simple eigenvalue of the integral operator K with corresponding eigenvector u, and let λ_n^M be the eigenvalue of K_n^M with corresponding eigenvector u_n^M. In the following theorem, we give the error bounds between the derivatives of the iterated eigenvector ũ_n^M = K u_n^M and the corresponding derivatives of the exact eigenvector in the infinity norm.

Theorem 2.5. Let X be a Banach space and assume that K, K_n^M ∈ BL(X) with K_n^M norm convergent to K in both L² and infinity norm. Let λ be the simple eigenvalue of the integral operator K and let λ_n^M be the eigenvalue of K_n^M with corresponding eigenvector u_n^M. Let P^S be the spectral projection associated with the integral operator K and λ, and S(λ_n^M) be the reduced resolvent operator associated with the integral operator K_n^M and λ_n^M. Then the following holds for j ∈ N₀ := N ∪ {0}:

∥K u_n^M − P^S K u_n^M∥_{j,∞} = max_{s∈[−1,1]} |⟨(K − K_n^M)u_n^M, ℓ_s^{(j)}⟩|,

where (K^∗ − λ̄_n^M)(I − (P^S)^∗)ℓ_s^{(j)} = (I − (P^S)^∗)K̄_s^{(j)}.
Proof. For any u_n^M ∈ R(P_{n,M}^S), we have from [1],

u_n^M − P^S u_n^M = S(λ_n^M)(K − K_n^M)u_n^M,  (2.17)

where S(λ_n^M) is the reduced resolvent operator of K_n^M at λ_n^M. Now by applying K to both sides of the above equation and using the fact that K P^S = P^S K, we obtain

K u_n^M − P^S K u_n^M = K S(λ_n^M)(K − K_n^M)u_n^M.

Hence

(K u_n^M − P^S K u_n^M)(s) = ∫_{−1}^{1} K(s, t)[S(λ_n^M)(K − K_n^M)u_n^M](t) dt.

For j ∈ N₀,

(K u_n^M − P^S K u_n^M)^{(j)}(s) = ∫_{−1}^{1} (∂^j/∂s^j)K(s, t)[S(λ_n^M)(K − K_n^M)u_n^M](t) dt
 = ⟨S(λ_n^M)(K − K_n^M)u_n^M, K̄_s^{(j)}⟩
 = ⟨(K − K_n^M)u_n^M, (S(λ_n^M))^∗ K̄_s^{(j)}⟩
 = ⟨(K − K_n^M)u_n^M, ℓ_s^{(j)}⟩.

In the above we have used the fact that (S(λ_n^M))^∗ K̄_s^{(j)} = ℓ_s^{(j)} is uniquely solvable. Note that if the kernel K_s is smooth, then ℓ_s^{(j)} is also smooth. Hence

∥K u_n^M − P^S K u_n^M∥_{j,∞} = max_{s∈[−1,1]} |⟨(K − K_n^M)u_n^M, ℓ_s^{(j)}⟩|.

Hence the proof is complete. □
3. Legendre M-projection methods

In this section, we apply the abstract framework developed in the last section to the Legendre M-projection (Legendre M-Galerkin and Legendre M-collocation) methods for solving the eigenvalue problem (2.2) and obtain superconvergence results for the eigenelements. To do this, we choose the Legendre polynomials {L₀, L₁, L₂, . . . , L_n} as an orthonormal basis for the subspace X_n, where

L₀(x) = 1,  L₁(x) = x,  x ∈ [−1, 1],

and for i = 1, 2, . . . , n − 1,

(i + 1)L_{i+1}(x) = (2i + 1)x L_i(x) − i L_{i−1}(x),  x ∈ [−1, 1].
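The three-term recurrence above translates directly into code. A minimal sketch follows; note that it produces the standard (unnormalized) Legendre polynomials, so multiplying L_k by √((2k+1)/2) yields the orthonormal family:

```python
def legendre(n, x):
    """Evaluate L_0(x), ..., L_n(x) by the recurrence
    (i+1) L_{i+1}(x) = (2i+1) x L_i(x) - i L_{i-1}(x)."""
    vals = [1.0, x]                       # L_0(x) = 1, L_1(x) = x
    for i in range(1, n):
        vals.append(((2 * i + 1) * x * vals[i] - i * vals[i - 1]) / (i + 1))
    return vals[: n + 1]
```

For example, `legendre(2, 0.5)` ends with L₂(0.5) = (3 · 0.25 − 1)/2 = −0.125.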
3.1. The Legendre M-Galerkin method

In this subsection, we discuss the Legendre M-Galerkin method for solving the eigenvalue problem for the compact integral operator and obtain superconvergence results for the eigenelements. To do this, we first define the orthogonal projection P_n^G. Let P_n^G : X → X_n be the orthogonal projection defined by

P_n^G u = Σ_{j=0}^{n} ⟨u, L_j⟩ L_j,  u ∈ X,  (3.1)

where ⟨u, L_j⟩ = ∫_{−1}^{1} u(t)L_j(t) dt and L_i is the Legendre polynomial of degree i. Using the orthogonal projection P_n^G, the Legendre M-Galerkin method for approximation of the eigenvalue problem (2.2) is to find λ_n^{M,G} ∈ σ(K_n^{M,G}) \ {0} and u_n^{M,G} ∈ X such that

K_n^{M,G} u_n^{M,G} = λ_n^{M,G} u_n^{M,G},  ∥u_n^{M,G}∥_{L²} = 1,

where

K_n^{M,G} = P_n^G K P_n^G + (I − P_n^G)K P_n^G + P_n^G K(I − P_n^G).  (3.2)
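A fully discrete version of (3.2) can be sketched on a Gauss–Legendre grid: the integral operator is replaced by a Nyström matrix and P_n^G by the discrete orthogonal projection onto polynomials of degree ≤ n. All sizes, the model kernel e^{st}, and the quadrature step are illustrative assumptions, not from the paper, and the quadrature introduces an extra discretization error that is not analyzed here:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

def m_galerkin_eigs(kernel, n, m=40):
    """Eigenvalues of a discretized Legendre M-Galerkin operator (3.2)."""
    t, w = leggauss(m)
    Kmat = kernel(t[:, None], t[None, :]) * w       # Nystrom matrix for K
    V = legvander(t, n)                             # V[i, k] = L_k(t_i)
    # discrete orthogonal projection onto polynomials of degree <= n
    P = V @ np.diag((2.0 * np.arange(n + 1) + 1) / 2) @ V.T @ np.diag(w)
    I = np.eye(m)
    K_M = P @ Kmat @ P + (I - P) @ Kmat @ P + P @ Kmat @ (I - P)
    return np.linalg.eigvals(K_M)

kernel = lambda s, t: np.exp(s * t)                 # smooth model kernel
t, w = leggauss(40)
lam_ref = max(abs(np.linalg.eigvals(kernel(t[:, None], t[None, :]) * w)))
lam_M = max(abs(m_galerkin_eigs(kernel, n=8)))
```

For this analytic kernel, the dominant eigenvalue of the discrete M-Galerkin operator already agrees closely with the Nyström reference at n = 8, consistent with the superconvergence rates established in this section.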
Next we discuss the existence of Legendre M-Galerkin eigenelements and their superconvergence results. To this end, we quote the following proposition and lemma which follow from the analysis of [9] and [10, pp. 283–287].
Proposition 3.1 ([10]). Let P_n^G : X → X_n denote the orthogonal projection defined by (3.1). Then P_n^G satisfies (C1) and (C2), i.e., the following hold:
(i) {P_n^G : n ∈ N} is uniformly bounded in L² norm, i.e., ∥P_n^G∥_{L²} ≤ M.
(ii) There exists a constant c > 0 such that for any n ∈ N and u ∈ X,

∥P_n^G u − u∥_{L²} ≤ c inf_{φ∈X_n} ∥u − φ∥_{L²}.

Lemma 3.2 ([9,10]). Let P_n^G be the orthogonal projection defined by (3.1). Then for any u ∈ C^{(r)}[−1, 1], the following hold:

∥u − P_n^G u∥_{L²} ≤ c n^{−r} ∥u^{(r)}∥_{L²},  (3.3)
∥u − P_n^G u∥_∞ ≤ c n^{3/4−r} ∥u^{(r)}∥_{L²},  (3.4)
∥u − P_n^G u∥_∞ ≤ c n^{1/2−r} V(u^{(r)}),  (3.5)

where c is a constant independent of n and V(u^{(r)}) denotes the total variation of u^{(r)}.
From the estimate (3.4), we see that ∥P_n^G u − u∥_∞ need not converge to 0 as n → ∞ for every u ∈ C[−1, 1]. However, in Theorem 3.4, we prove that K_n^{M,G} converges to K in both L² and infinity norm. To do this, we first prove the following theorem.

Theorem 3.3. Let P_n^G : X → X_n be the orthogonal projection defined by (3.1) and K be an integral operator with a kernel K(·,·) ∈ C^{(r)}([−1, 1] × [−1, 1]). Then for any u ∈ C^{(r)}[−1, 1] and j ∈ N₀, the following estimates hold:

∥(K(I − P_n^G)u)^{(j)}∥_{L²} ≤ c n^{−r} ∥K∥_{r,∞} ∥u∥_{L²},  (3.6)
∥(K(I − P_n^G)u)^{(j)}∥_{L²} ≤ c n^{−2r} ∥K∥_{r,∞} ∥u^{(r)}∥_{L²}.  (3.7)
Proof. For any s ∈ [−1, 1] and j ∈ N₀, consider

(K(I − P_n^G)u)^{(j)}(s) = ∫_{−1}^{1} (∂^j/∂s^j)K(s, t)(I − P_n^G)u(t) dt
 = ⟨(∂^j/∂s^j)K(s, ·), (I − P_n^G)u(·)⟩
 = ⟨(I − P_n^G)(∂^j/∂s^j)K(s, ·), u(·)⟩  (3.8)
 = ⟨(I − P_n^G)(∂^j/∂s^j)K(s, ·), (I − P_n^G)u(·)⟩.  (3.9)

Now using the estimate (3.3), (3.8) and the Cauchy–Schwarz inequality, we obtain

∥(K(I − P_n^G)u)^{(j)}∥_∞ ≤ ∥(I − P_n^G)(∂^j/∂s^j)K(s, ·)∥_{L²} ∥u∥_{L²}
 ≤ c n^{−r} ∥K∥_{r,∞} ∥u∥_{L²} ≤ c n^{−r} ∥K∥_{r,∞} ∥u∥_∞.

Hence

∥(K(I − P_n^G)u)^{(j)}∥_{L²} ≤ √2 ∥(K(I − P_n^G)u)^{(j)}∥_∞ ≤ c n^{−r} ∥K∥_{r,∞} ∥u∥_{L²}.  (3.10)

This completes the proof of (3.6). Following lines parallel to those of the proof above and using the estimates (3.3) and (3.9), one can easily see that

∥(K(I − P_n^G)u)^{(j)}∥_{L²} ≤ √2 ∥(K(I − P_n^G)u)^{(j)}∥_∞
 ≤ c ∥(I − P_n^G)(∂^j/∂s^j)K(s, ·)∥_{L²} ∥(I − P_n^G)u∥_{L²}
 ≤ c n^{−2r} ∥K∥_{r,∞} ∥u^{(r)}∥_{L²},

which completes the proof of (3.7). □
Theorem 3.4. Assume K(·,·) ∈ C^{(r)}([−1, 1] × [−1, 1]), r > 3/4. Then K_n^{M,G} is norm convergent to K in both L² and infinity norm.

Proof. Using (3.3) and (3.6), we have

∥(K − K_n^{M,G})u∥_{L²} = ∥(I − P_n^G)K(I − P_n^G)u∥_{L²} ≤ c n^{−r} ∥(K(I − P_n^G)u)^{(r)}∥_{L²} ≤ c n^{−2r} ∥K∥_{r,∞} ∥u∥_{L²}.

From this, it follows that ∥K − K_n^{M,G}∥_{L²} → 0 as n → ∞. Using the estimates (3.4) and (3.10), we get

∥(K − K_n^{M,G})u∥_∞ = ∥(I − P_n^G)K(I − P_n^G)u∥_∞
 ≤ c n^{3/4−r} ∥(K(I − P_n^G)u)^{(r)}∥_{L²}
 ≤ c n^{3/4−2r} ∥K∥_{r,∞} ∥u∥_∞,

which gives ∥K − K_n^{M,G}∥_∞ → 0 as n → ∞ for r > 3/8. This completes the proof. □
Since K_n^{M,G} is norm convergent to K in both L² and infinity norm, the spectrum of K_n^{M,G} inside Γ consists of m eigenvalues, say λ_{n,1}^{M,G}, λ_{n,2}^{M,G}, . . . , λ_{n,m}^{M,G}, counted according to their algebraic multiplicities (cf. [1,11]). Let λ̂_n^{M,G} denote their arithmetic mean; we approximate λ by λ̂_n^{M,G}. Let P^S and P_{n,M}^{S,G} be the spectral projections of K and K_n^{M,G}, respectively, associated with their corresponding spectra inside Γ. Next we discuss convergence rates for the eigenvalues and eigenvectors of the Legendre M-Galerkin method. To do this, we first prove the following theorems.
Theorem 3.5. Let P_n^G : X → X_n be the orthogonal projection defined by (3.1) and K be an integral operator with a kernel K(·,·) ∈ C^{(r+1)}([−1, 1] × [−1, 1]). Then the following estimates hold:

∥(I − P_n^G)K(I − P_n^G)K∥_{L²} = O(n^{−3r}),  (3.11)
∥K(I − P_n^G)K(I − P_n^G)K∥_{L²} = O(n^{−4r}),  (3.12)
∥(I − P_n^G)K(I − P_n^G)K∥_∞ = O(n^{1/2−3r}),  (3.13)
∥K(I − P_n^G)K(I − P_n^G)K∥_∞ = O(n^{−4r}).  (3.14)
Proof. Using the estimates (3.3), (3.7) and (2.10), we have

∥(I − P_n^G)K(I − P_n^G)K u∥_{L²} ≤ c n^{−r} ∥(K(I − P_n^G)K u)^{(r)}∥_{L²}
 ≤ c n^{−r} n^{−2r} ∥K∥_{r,∞} ∥(K u)^{(r)}∥_{L²}
 ≤ c n^{−3r} ∥K∥²_{r,∞} ∥u∥_{L²},

which completes the proof of (3.11). Next, using the estimate (3.7) for j = 0 and j = r, and (2.10), we obtain

∥K(I − P_n^G)K(I − P_n^G)K u∥_{L²} ≤ c n^{−2r} ∥K∥_{r,∞} ∥(K(I − P_n^G)K u)^{(r)}∥_{L²}
 ≤ c n^{−4r} ∥K∥²_{r,∞} ∥(K u)^{(r)}∥_{L²}
 ≤ c n^{−4r} ∥K∥³_{r,∞} ∥u∥_{L²},

which completes the proof of (3.12). Using the estimate (3.5), we obtain

∥(I − P_n^G)K(I − P_n^G)K u∥_∞ ≤ c n^{1/2−r} V((K(I − P_n^G)K u)^{(r)}).  (3.15)

Now by the definition of total variation and the mean value theorem, we have

V((K(I − P_n^G)K u)^{(r)}) = sup_P Σ_{i=1}^{n} |(K(I − P_n^G)K u)^{(r)}(x_i) − (K(I − P_n^G)K u)^{(r)}(x_{i−1})|
 = sup_P Σ_{i=1}^{n} |∫_{−1}^{1} [K^{(r,0)}(x_i, t) − K^{(r,0)}(x_{i−1}, t)](I − P_n^G)K u(t) dt|
 = sup_P Σ_{i=1}^{n} |∫_{−1}^{1} K^{(r+1,0)}(ξ_i, t)(I − P_n^G)K u(t) dt| (x_i − x_{i−1})
 ≤ sup_P Σ_{i=1}^{n} |x_i − x_{i−1}| |⟨K^{(r+1,0)}(ξ_i, ·), (I − P_n^G)K u⟩|
 ≤ 2 |⟨(I − P_n^G)K^{(r+1,0)}(ξ, ·), (I − P_n^G)K u⟩|,  (3.16)

for any partition P = {x₀, x₁, . . . , x_n} of [−1, 1], where ξ_i ∈ (x_{i−1}, x_i), i = 1, 2, . . . , n. Using the estimate (3.16) with (3.15), (3.3) and (2.10), we have

∥(I − P_n^G)K(I − P_n^G)K u∥_∞ ≤ c n^{1/2−r} V((K(I − P_n^G)K u)^{(r)})
 ≤ c n^{1/2−r} ∥(I − P_n^G)K^{(r+1,0)}∥_{L²} ∥(I − P_n^G)K u∥_{L²}
 ≤ c n^{1/2−3r} ∥K∥_{r+1,∞} ∥(K u)^{(r)}∥_{L²}
 ≤ c n^{1/2−3r} ∥K∥_{r+1,∞} ∥K∥_{r,∞} ∥u∥_∞,

which completes the proof of (3.13). Next, using the estimate (3.7) for j = 0 and j = r, and (2.10), we obtain

∥K(I − P_n^G)K(I − P_n^G)K u∥_∞ ≤ c n^{−2r} ∥K∥_{r,∞} ∥(K(I − P_n^G)K u)^{(r)}∥_{L²}
 ≤ c n^{−4r} ∥K∥²_{r,∞} ∥(K u)^{(r)}∥_{L²}
 ≤ c n^{−4r} ∥K∥³_{r,∞} ∥u∥_{L²} ≤ c n^{−4r} ∥K∥³_{r,∞} ∥u∥_∞,

which completes the proof of (3.14). □
Theorem 3.6. Let K be an integral operator with a kernel K(·,·) ∈ C^{(r+1)}([−1, 1] × [−1, 1]) and K_n^{M,G} be the M-Galerkin operator defined by (3.2). Then the following approximation results hold:

∥(K − K_n^{M,G})K∥_{L²} = O(n^{−3r}),
∥K(K − K_n^{M,G})K∥_{L²} = O(n^{−4r}),
∥K_n^{M,G}(K − K_n^{M,G})K∥_{L²} = O(n^{−4r}),
∥(K − K_n^{M,G})K∥_∞ = O(n^{1/2−3r}),
∥K(K − K_n^{M,G})K∥_∞ = O(n^{−4r}).

Proof. Using the estimates (3.11) and (3.12), we get

∥(K − K_n^{M,G})K∥_{L²} = ∥(I − P_n^G)K(I − P_n^G)K∥_{L²} = O(n^{−3r}),
∥K(K − K_n^{M,G})K∥_{L²} = ∥K(I − P_n^G)K(I − P_n^G)K∥_{L²} = O(n^{−4r}),
∥K_n^{M,G}(K − K_n^{M,G})K∥_{L²} = ∥P_n^G K(I − P_n^G)K(I − P_n^G)K∥_{L²} ≤ M ∥K(I − P_n^G)K(I − P_n^G)K∥_{L²} = O(n^{−4r}).

Then using the estimates (3.13) and (3.14), we obtain

∥(K − K_n^{M,G})K∥_∞ = ∥(I − P_n^G)K(I − P_n^G)K∥_∞ = O(n^{1/2−3r}),
∥K(K − K_n^{M,G})K∥_∞ = ∥K(I − P_n^G)K(I − P_n^G)K∥_∞ = O(n^{−4r}).

This completes the proof. □
In the following theorem, we give the superconvergence rates for the eigenvalues and eigenvectors.

Theorem 3.7. Let K be a compact integral operator with a kernel K(·,·) ∈ C^{(r+1)}([−1, 1] × [−1, 1]). Let K_n^{M,G} be the M-Galerkin operator defined by (3.2), and K_n^{M,G} be norm convergent to K in both L² and infinity norm. Let R(P^S) and R(P_{n,M}^{S,G}) be the ranges of the spectral projections P^S and P_{n,M}^{S,G}, respectively. Assume λ is an eigenvalue of K with algebraic multiplicity m and ascent ℓ, and let λ̂_n^{M,G} be the arithmetic mean of the eigenvalues λ_{n,j}^{M,G}, j = 1, 2, . . . , m. Then

δ̂₂(R(P_{n,M}^{S,G}), R(P^S)) = O(n^{−3r}),
δ̂_∞(R(P_{n,M}^{S,G}), R(P^S)) = O(n^{1/2−3r}),
δ₂(K R(P_{n,M}^{S,G}), R(P^S)) = O(n^{−4r}),
δ_∞(K R(P_{n,M}^{S,G}), R(P^S)) = O(n^{−4r}).

In particular, for any u_n^{M,G} ∈ R(P_{n,M}^{S,G}) and ũ_n^{M,G} = K u_n^{M,G}, we have

∥u_n^{M,G} − P^S u_n^{M,G}∥_{L²} = O(n^{−3r}),
∥u_n^{M,G} − P^S u_n^{M,G}∥_∞ = O(n^{1/2−3r}),
∥ũ_n^{M,G} − P^S ũ_n^{M,G}∥_{L²} = O(n^{−4r}),
∥ũ_n^{M,G} − P^S ũ_n^{M,G}∥_∞ = O(n^{−4r}).

For j = 1, 2, . . . , m,

|λ − λ̂_n^{M,G}| = O(n^{−4r}),  |λ − λ_{n,j}^{M,G}|^ℓ = O(n^{−4r}).
Proof. It follows from Lemma 2.3 and Theorem 3.6 that

δ̂₂(R(P_{n,M}^{S,G}), R(P^S)) ≤ c ∥(K − K_n^{M,G})K∥_{L²} = O(n^{−3r}),
δ̂_∞(R(P_{n,M}^{S,G}), R(P^S)) ≤ c ∥(K − K_n^{M,G})K∥_∞ = O(n^{1/2−3r}).

In particular, for any u_n^{M,G} ∈ R(P_{n,M}^{S,G}), we have

∥u_n^{M,G} − P^S u_n^{M,G}∥_{L²} ≤ c ∥(K − K_n^{M,G})K∥_{L²} = O(n^{−3r}),
∥u_n^{M,G} − P^S u_n^{M,G}∥_∞ ≤ c ∥(K − K_n^{M,G})K∥_∞ = O(n^{1/2−3r}).

It follows from Lemma 2.4 and Theorem 3.6 that

δ₂(K R(P_{n,M}^{S,G}), R(P^S)) ≤ c ∥K(K − K_n^{M,G})K∥_{L²} = O(n^{−4r}),
δ_∞(K R(P_{n,M}^{S,G}), R(P^S)) ≤ c ∥K(K − K_n^{M,G})K∥_∞ = O(n^{−4r}).

In particular, for ũ_n^{M,G} = K u_n^{M,G}, we have

∥ũ_n^{M,G} − P^S ũ_n^{M,G}∥_{L²} = ∥K u_n^{M,G} − P^S K u_n^{M,G}∥_{L²} ≤ c ∥K(K − K_n^{M,G})K∥_{L²} = O(n^{−4r}),
∥ũ_n^{M,G} − P^S ũ_n^{M,G}∥_∞ = ∥K u_n^{M,G} − P^S K u_n^{M,G}∥_∞ ≤ c ∥K(K − K_n^{M,G})K∥_∞ = O(n^{−4r}).

By using Lemma 2.2 and Theorem 3.6, for j = 1, 2, . . . , m,

|λ − λ̂_n^{M,G}| ≤ c ∥K_n^{M,G}(K − K_n^{M,G})K∥_{L²} = O(n^{−4r}),
|λ − λ_{n,j}^{M,G}|^ℓ ≤ c ∥K_n^{M,G}(K − K_n^{M,G})K∥_{L²} = O(n^{−4r}).

This completes the proof. □
Now suppose λ is a simple eigenvalue of the integral operator K with corresponding eigenvector u, and let λ_n^{M,G} be the eigenvalue of K_n^{M,G} with corresponding eigenvector u_n^{M,G}. In Theorem 3.8, we show that not only does ũ_n^{M,G} = K u_n^{M,G} converge to u with the order of convergence O(n^{−4r}), but the derivatives of ũ_n^{M,G} also converge to the corresponding derivatives of u with the same order of convergence in the infinity norm.

Theorem 3.8. Let X be a Banach space and assume K, K_n^{M,G} ∈ BL(X) with K_n^{M,G} norm convergent to K in both L² and infinity norm. Let λ be the simple eigenvalue of the integral operator K and let λ_n^{M,G} be the eigenvalue of K_n^{M,G} with corresponding eigenvector u_n^{M,G}. Let P^S be the spectral projection associated with the integral operator K and λ, and S(λ_n^{M,G}) the reduced resolvent operator associated with K_n^{M,G} and λ_n^{M,G}. Then the following approximation result holds:

∥ũ_n^{M,G} − P^S ũ_n^{M,G}∥_{j,∞} = ∥K u_n^{M,G} − P^S K u_n^{M,G}∥_{j,∞} = O(n^{−4r}).

Proof. Since λ_n^{M,G} → λ ≠ 0, for given ε with 0 < ε < |λ| there exists a positive integer n₀ ∈ N such that |λ_n^{M,G} − λ| < ε for n > n₀. Hence |λ_n^{M,G}| > |λ| − ε > 0, which implies 1/|λ_n^{M,G}| < 1/(|λ| − ε) < ∞.

Now u_n^{M,G} = u_n^1 + u_n^2 ∈ X with u_n^1 = P_n^G u_n^{M,G} ∈ X_n, ∥u_n^1∥_{L²} < ∞, and u_n^2 = (I − P_n^G)u_n^{M,G} = (1/λ_n^{M,G})(I − P_n^G)K u_n^1. Using the estimates (3.3) and (3.7), we obtain

∥(I − P_n^G)K(I − P_n^G)u_n^{M,G}∥_{L²} = (1/|λ_n^{M,G}|) ∥(I − P_n^G)K(I − P_n^G)K u_n^1∥_{L²}
 ≤ (c/|λ_n^{M,G}|) n^{−r} ∥(K(I − P_n^G)K u_n^1)^{(r)}∥_{L²}
 ≤ (c/|λ_n^{M,G}|) n^{−r} n^{−2r} ∥K∥_{r,∞} ∥(K u_n^1)^{(r)}∥_{L²}
 ≤ c n^{−3r} ∥K∥_{r,∞} ∥u_n^1∥_{L²}.

Thus for j = 0, 1, 2, . . . , r, using Theorem 2.5, we have

∥ũ_n^{M,G} − P^S ũ_n^{M,G}∥_{j,∞} = max_{s∈[−1,1]} |⟨(K − K_n^{M,G})u_n^{M,G}, ℓ_{s,M}^{(j)}⟩|
 = max_{s∈[−1,1]} |⟨(I − P_n^G)K(I − P_n^G)u_n^{M,G}, ℓ_{s,M}^{(j)}⟩|
 ≤ |⟨(I − P_n^G)K(I − P_n^G)u_n^{M,G}, (I − P_n^G)ℓ_{s,M}^{(j)}⟩|
 ≤ ∥(I − P_n^G)K(I − P_n^G)u_n^{M,G}∥_{L²} ∥(I − P_n^G)ℓ_{s,M}^{(j)}∥_{L²}
 ≤ c n^{−3r} ∥K∥_{r,∞} ∥u_n^1∥_{L²} · c n^{−r} ∥(ℓ_{s,M}^{(j)})^{(r)}∥_{L²} = O(n^{−4r}),

where ℓ_{s,M}^{(j)} = (S(λ_n^{M,G}))^∗ K̄_s^{(j)}. This completes the proof. □
Remark 3.9. From Theorem 3.7, we observe that in the Legendre M-Galerkin method the approximate eigenvalues converge to the exact eigenvalues with the order O(n^{−4r}), and the gap between the spectral subspaces is of the order O(n^{−3r}) in the L² norm and O(n^{1/2−3r}) in the infinity norm, r > 1/6. This shows that in the Legendre M-Galerkin method the eigenvectors converge faster in the L² norm than in the infinity norm. However, the iterated eigenvectors converge with the order O(n^{−4r}) in both L² and infinity norm. Thus in the Legendre M-Galerkin method we obtain superconvergence rates for the iterated eigenvectors and for the eigenvalues.

Remark 3.10. In Theorem 3.8, we showed that not only does the iterated eigenvector ũ_n^{M,G} converge to u with the order of convergence O(n^{−4r}), but the derivatives of ũ_n^{M,G} also converge to the corresponding derivatives of u with the same order of convergence in the infinity norm.
3.2. The Legendre M-collocation method

In this subsection we discuss the Legendre M-collocation method and establish the convergence rates for the eigenelements. Let X = C[−1,1], let {τ_0, τ_1, …, τ_n} be the zeros of the Legendre polynomial of degree n + 1, and define the interpolatory projection P_n^C : X → X_n by

  P_n^C u ∈ X_n,  (P_n^C u)(τ_i) = u(τ_i),  i = 0, 1, …, n,  u ∈ X.  (3.17)
According to the analysis of [9] and [10, p. 289], we have the following proposition and lemma.

Proposition 3.11 ([9,10]). The following statements hold:
(i) {P_n^C : n ∈ N} is uniformly bounded in the L2 norm, i.e., ∥P_n^C∥_{L2} ≤ M.
(ii) There exists a constant c > 0 such that for any n ∈ N and u ∈ X,

  ∥P_n^C u − u∥_{L2} ≤ c inf_{φ∈X_n} ∥u − φ∥_{L2}.

Lemma 3.12 ([9,10]). Let P_n^C : X → X_n be the interpolatory projection defined by (3.17). Then for any u ∈ C^{(r)}[−1,1], there exists a constant c independent of n such that

  ∥u − P_n^C u∥_{L2} ≤ c n^{−r} ∥u^{(r)}∥_{L2},  n ≥ r.  (3.18)
From Proposition 3.11 and Lemma 3.12 we observe that the interpolatory projection P_n^C satisfies the assumptions C1 and C2. Using the interpolatory projection P_n^C, the M-collocation method for the approximation of the eigenvalue problem (2.2) is: find λ_n^{M,C} ∈ σ(K_n^{M,C}) \ {0} and u_n^{M,C} ∈ X with ∥u_n^{M,C}∥ = 1 such that

  K_n^{M,C} u_n^{M,C} = λ_n^{M,C} u_n^{M,C},  (3.19)

where

  K_n^{M,C} = P_n^C K P_n^C + (I − P_n^C) K P_n^C + P_n^C K (I − P_n^C).  (3.20)
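For computation, the operator in (3.20) can be discretized on a fine quadrature grid. The following Python sketch is an editorial illustration (not the authors' code): functions are represented by their values at an n_fine-point Gauss–Legendre grid, K is realized by the Nyström rule, P_n^C by Lagrange interpolation at the n + 1 Legendre zeros, and the eigenvalues of the assembled matrix approximate those of K_n^{M,C}. The kernel and the grid sizes are illustrative choices.

```python
import numpy as np

def lagrange_matrix(x_eval, nodes):
    # L[i, k] = value at x_eval[i] of the k-th Lagrange basis polynomial on `nodes`
    x_eval, nodes = np.asarray(x_eval, float), np.asarray(nodes, float)
    L = np.ones((x_eval.size, nodes.size))
    for k in range(nodes.size):
        for m in range(nodes.size):
            if m != k:
                L[:, k] *= (x_eval - nodes[m]) / (nodes[k] - nodes[m])
    return L

def m_collocation_matrix(kernel, n, n_fine=40):
    # fine Gauss-Legendre grid: (Ku)(t_i) ~ sum_j w_j kernel(t_i, t_j) u(t_j)
    t, w = np.polynomial.legendre.leggauss(n_fine)
    K = kernel(t[:, None], t[None, :]) * w[None, :]
    # P_n^C: sample at the n + 1 Legendre zeros tau, interpolate back to the fine grid
    tau, _ = np.polynomial.legendre.leggauss(n + 1)
    P = lagrange_matrix(t, tau) @ lagrange_matrix(tau, t)
    I = np.eye(n_fine)
    # K_n^{M,C} = P K P + (I - P) K P + P K (I - P), cf. (3.20)
    return P @ K @ P + (I - P) @ K @ P + P @ K @ (I - P), K

kernel = lambda s, t: 1.0 / (1.0 + (s - t) ** 2)   # a smooth test kernel
KM, K = m_collocation_matrix(kernel, n=8)
lam_M = max(np.linalg.eigvals(KM), key=abs)        # dominant eigenvalue of K_n^{M,C}
lam = max(np.linalg.eigvals(K), key=abs)           # dominant eigenvalue of K (Nystrom)
```

For this smooth kernel the two dominant eigenvalues already agree to high accuracy at n = 8, consistent with the superconvergence of the approximate eigenvalues.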
Next we discuss the existence of the Legendre M-collocation eigenelements and their superconvergence results. To do this, consider

  ∥P_n^C∥_∞ = 1 + (2^{3/2}/√π) n^{1/2} + B_0 + O(n^{−1/2}),  (3.21)

where B_0 is a bounded constant (cf. [12–14]); we see that ∥P_n^C u − u∥_∞ does not, in general, converge to 0 as n → ∞ for u ∈ C[−1,1]. Now for any χ ∈ X_n, we have

  ∥(I − P_n^C)u∥_∞ = ∥(I − P_n^C)(u − χ)∥_∞ ≤ (1 + ∥P_n^C∥_∞)∥u − χ∥_∞.  (3.22)
Since the estimate (3.22) holds for any χ ∈ X_n, it follows that

  ∥(I − P_n^C)u∥_∞ ≤ (1 + ∥P_n^C∥_∞) inf_{χ∈X_n} ∥u − χ∥_∞.  (3.23)
Now using Jackson's theorem from [15] and combining the estimates (3.21) and (3.23), we obtain

  ∥(I − P_n^C)u∥_∞ ≤ c n^{1/2} n^{−r} ∥u^{(r)}∥_∞ ≤ c n^{1/2−r} ∥u^{(r)}∥_∞.  (3.24)
The estimate (3.24) shows that ∥(I − P_n^C)u∥_∞ → 0 as n → ∞ for u ∈ C^{(r)}[−1,1], r > 1/2. In Theorem 3.14 below we prove that K_n^{M,C} converges to K in both the L2 and infinity norms. To do this, we first prove the following theorem.

Theorem 3.13. Let P_n^C : X → X_n be the interpolatory projection defined by (3.17) and let K be an integral operator with a kernel K(·,·) ∈ C^{(r)}([−1,1] × [−1,1]). Then for u ∈ C^{(r)}[−1,1], the following estimates hold:
  ∥(K(I − P_n^C)u)^{(j)}∥_{L2} ≤ c ∥K∥_{r,∞} ∥u∥_{L2},  (3.25)
  ∥(K(I − P_n^C)K u)^{(j)}∥_{L2} ≤ c n^{−r} ∥K∥_{r,∞}^2 ∥u∥_{L2}.  (3.26)
Proof. For any s ∈ [−1,1] and j = 0, 1, 2, …, r, consider

  |(K(I − P_n^C)u)^{(j)}(s)| = |∫_{−1}^{1} (∂^j/∂s^j) K(s,t) (I − P_n^C)u(t) dt|
    = |⟨(K_s(·))^{(j,0)}, (I − P_n^C)u⟩|
    ≤ ∥(K_s(·))^{(j,0)}∥_{L2} ∥(I − P_n^C)u∥_{L2}
    ≤ c (1 + M) ∥K∥_{r,∞} ∥u∥_{L2}.

Taking the supremum over s ∈ [−1,1], we obtain

  ∥(K(I − P_n^C)u)^{(j)}∥_∞ ≤ c ∥K∥_{r,∞} ∥u∥_{L2}.  (3.27)

Hence

  ∥(K(I − P_n^C)u)^{(j)}∥_{L2} ≤ c ∥(K(I − P_n^C)u)^{(j)}∥_∞ ≤ c ∥K∥_{r,∞} ∥u∥_{L2}.

This completes the proof of (3.25). Now using the estimates (3.18), (2.10) and the Cauchy–Schwarz inequality, we find that

  |(K(I − P_n^C)K u)^{(j)}(s)| = |∫_{−1}^{1} (∂^j/∂s^j) K(s,t) (I − P_n^C)K u(t) dt|
    = |⟨(K_s(·))^{(j,0)}, (I − P_n^C)K u⟩|
    ≤ ∥(K_s(·))^{(j,0)}∥_{L2} ∥(I − P_n^C)K u∥_{L2}
    ≤ c n^{−r} ∥K∥_{r,∞} ∥(K u)^{(r)}∥_{L2}
    ≤ c n^{−r} ∥K∥_{r,∞}^2 ∥u∥_{L2}.

Hence, following lines parallel to those of the proof above, one can easily show that

  ∥(K(I − P_n^C)K u)^{(j)}∥_{L2} ≤ c ∥(K(I − P_n^C)K u)^{(j)}∥_∞ ≤ c n^{−r} ∥K∥_{r,∞}^2 ∥u∥_{L2},

which completes the proof of (3.26). □
Theorem 3.14. Assume K(·,·) ∈ C^{(r)}([−1,1] × [−1,1]), r > 1/2. Then K_n^{M,C} is norm convergent to K in both the L2 and infinity norms.

Proof. Using the estimates (3.18) and (3.25), we get

  ∥(K − K_n^{M,C})u∥_{L2} = ∥(I − P_n^C)K(I − P_n^C)u∥_{L2} ≤ c n^{−r} ∥(K(I − P_n^C)u)^{(r)}∥_{L2} ≤ c n^{−r} ∥K∥_{r,∞} ∥u∥_{L2}.  (3.28)

From this it follows that ∥K − K_n^{M,C}∥_{L2} → 0 as n → ∞.
Using the estimates (3.24) and (3.27), we get

  ∥(K − K_n^{M,C})u∥_∞ = ∥(I − P_n^C)K(I − P_n^C)u∥_∞
    ≤ c n^{1/2−r} ∥(K(I − P_n^C)u)^{(r)}∥_∞
    ≤ c n^{1/2−r} ∥K∥_{r,∞} ∥u∥_{L2} ≤ c n^{1/2−r} ∥K∥_{r,∞} ∥u∥_∞,

which gives ∥K − K_n^{M,C}∥_∞ → 0 as n → ∞ for r > 1/2. This completes the proof. □

Since K_n^{M,C} is norm convergent to K in both the L2 and infinity norms, the spectrum of K_n^{M,C} inside Γ consists of m eigenvalues, say λ_{n,1}^{M,C}, λ_{n,2}^{M,C}, …, λ_{n,m}^{M,C}, counted according to their algebraic multiplicities (cf. [1,11]). Let λ̂_n^{M,C} denote their arithmetic mean. We will use the arithmetic mean λ̂_n^{M,C} to approximate λ. Let P^S and P_{n,M}^{S,C} be the spectral projections of K and K_n^{M,C}, respectively, associated with their corresponding spectra inside Γ. Next we discuss convergence rates for the eigenvalues and eigenvectors. To do this, we first prove the following theorems.

Theorem 3.15. Let P_n^C : X → X_n be the interpolatory projection defined by (3.17) and let K be an integral operator with a kernel K(·,·) ∈ C^{(r)}([−1,1] × [−1,1]). Then the following estimates hold:
  ∥(I − P_n^C)K(I − P_n^C)K∥_{L2} = O(n^{−2r}),  (3.29)
  ∥K(I − P_n^C)K(I − P_n^C)K∥_{L2} = O(n^{−2r}),  (3.30)
  ∥(I − P_n^C)K(I − P_n^C)K∥_∞ = O(n^{1/2−2r}),  (3.31)
  ∥K(I − P_n^C)K(I − P_n^C)K∥_∞ = O(n^{−2r}).  (3.32)
Proof. Using the estimates (3.18) and (3.26), we have

  ∥(I − P_n^C)K(I − P_n^C)K u∥_{L2} ≤ c n^{−r} ∥(K(I − P_n^C)K u)^{(r)}∥_{L2} ≤ c n^{−2r} ∥K∥_{r,∞}^2 ∥u∥_{L2},

which completes the proof of (3.29). Next, using the estimate (3.26) with j = 0, we get

  ∥K(I − P_n^C)K(I − P_n^C)K u∥_{L2} ≤ c n^{−r} ∥K∥_{r,∞}^2 ∥(I − P_n^C)K u∥_{L2},

and again using the estimates (3.18) and (2.11), we obtain

  ∥K(I − P_n^C)K(I − P_n^C)K u∥_{L2} ≤ c n^{−r} n^{−r} ∥K∥_{r,∞}^2 ∥(K u)^{(r)}∥_{L2} ≤ c n^{−2r} ∥K∥_{r,∞}^3 ∥u∥_{L2},

which proves the estimate (3.30). Using the estimates (3.24) and (3.26), we obtain

  ∥(I − P_n^C)K(I − P_n^C)K u∥_∞ ≤ c n^{1/2−r} ∥(K(I − P_n^C)K u)^{(r)}∥_∞ ≤ c n^{1/2−2r} ∥K∥_{r,∞}^2 ∥u∥_∞.

This proves the estimate (3.31). Next, using the estimate (3.26) with j = 0, together with (3.18) and (2.11), we obtain

  ∥K(I − P_n^C)K(I − P_n^C)K u∥_∞ ≤ c n^{−r} ∥K∥_{r,∞}^2 ∥(I − P_n^C)K u∥_{L2} ≤ c n^{−r} n^{−r} ∥K∥_{r,∞}^2 ∥(K u)^{(r)}∥_{L2} ≤ c n^{−2r} ∥K∥_{r,∞}^3 ∥u∥_∞,

which completes the proof of (3.32). □
Theorem 3.16. Let K be an integral operator with a kernel K(·,·) ∈ C^{(r)}([−1,1] × [−1,1]) and let K_n^{M,C} be the M-collocation operator defined by (3.20). Then the following estimates hold:

  ∥(K − K_n^{M,C})K∥_{L2} = O(n^{−2r}),
  ∥K(K − K_n^{M,C})K∥_{L2} = O(n^{−2r}),
  ∥K_n^{M,C}(K − K_n^{M,C})K∥_{L2} = O(n^{−2r}),
  ∥(K − K_n^{M,C})K∥_∞ = O(n^{1/2−2r}),
  ∥K(K − K_n^{M,C})K∥_∞ = O(n^{−2r}).

Proof. Using the estimates (3.29) and (3.30), we get

  ∥(K − K_n^{M,C})K∥_{L2} = ∥(I − P_n^C)K(I − P_n^C)K∥_{L2} = O(n^{−2r}),
  ∥K(K − K_n^{M,C})K∥_{L2} = ∥K(I − P_n^C)K(I − P_n^C)K∥_{L2} = O(n^{−2r}),
  ∥K_n^{M,C}(K − K_n^{M,C})K∥_{L2} ≤ c ∥K(I − P_n^C)K(I − P_n^C)K∥_{L2} = O(n^{−2r}).
Then using the estimates (3.31) and (3.32), we obtain

  ∥(K − K_n^{M,C})K∥_∞ = ∥(I − P_n^C)K(I − P_n^C)K∥_∞ = O(n^{1/2−2r}),
  ∥K(K − K_n^{M,C})K∥_∞ = ∥K(I − P_n^C)K(I − P_n^C)K∥_∞ = O(n^{−2r}).

This completes the proof. □
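The decay predicted by Theorem 3.16 can be observed on the discrete level. Below is a rough Python check (our own sketch, with the operator L2 norm replaced by the matrix 2-norm on a fine Gauss grid, and with the analytic Kernel-2 of Section 4, for which the decay is faster than any fixed power of n):

```python
import numpy as np

def lagrange_matrix(x_eval, nodes):
    # L[i, k] = value at x_eval[i] of the k-th Lagrange basis polynomial on `nodes`
    x_eval, nodes = np.asarray(x_eval, float), np.asarray(nodes, float)
    L = np.ones((x_eval.size, nodes.size))
    for k in range(nodes.size):
        for m in range(nodes.size):
            if m != k:
                L[:, k] *= (x_eval - nodes[m]) / (nodes[k] - nodes[m])
    return L

def defect_norm(kernel, n, n_fine=40):
    # matrix 2-norm of (K - K_n^{M,C}) K = (I - P_n^C) K (I - P_n^C) K on the grid
    t, w = np.polynomial.legendre.leggauss(n_fine)
    K = kernel(t[:, None], t[None, :]) * w[None, :]
    tau, _ = np.polynomial.legendre.leggauss(n + 1)
    P = lagrange_matrix(t, tau) @ lagrange_matrix(tau, t)
    I = np.eye(n_fine)
    return np.linalg.norm((I - P) @ K @ (I - P) @ K, 2)

kernel = lambda s, t: np.exp(-(s - t) ** 2)
norms = [defect_norm(kernel, n) for n in (2, 4, 8, 12)]   # rapidly decreasing
```

The computed norms drop by many orders of magnitude as n grows, mirroring the O(n^{−2r}) bound (with r effectively unbounded for an analytic kernel).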
In the following theorem we give the superconvergence results for the approximate eigenelements in the Legendre M-collocation method.

Theorem 3.17. Let K be a compact integral operator with a kernel K(·,·) ∈ C^{(r)}([−1,1] × [−1,1]), and let K_n^{M,C} be the M-collocation operator defined by (3.20), norm convergent to K in the L2 and infinity norms. Let R(P^S) and R(P_{n,M}^{S,C}) be the ranges of the spectral projections P^S and P_{n,M}^{S,C}, respectively. Assume λ is an eigenvalue of K with algebraic multiplicity m and ascent ℓ, and let λ̂_n^{M,C} be the arithmetic mean of the eigenvalues λ_{n,j}^{M,C}, j = 1, 2, …, m. Then

  δ̂_2(R(P_{n,M}^{S,C}), R(P^S)) = O(n^{−2r}),
  δ̂_∞(R(P_{n,M}^{S,C}), R(P^S)) = O(n^{1/2−2r}),
  δ_2(K R(P_{n,M}^{S,C}), R(P^S)) = O(n^{−2r}),
  δ_∞(K R(P_{n,M}^{S,C}), R(P^S)) = O(n^{−2r}).

In particular, for any u_n^{M,C} ∈ R(P_{n,M}^{S,C}), we have

  ∥u_n^{M,C} − P^S u_n^{M,C}∥_{L2} = O(n^{−2r}),
  ∥u_n^{M,C} − P^S u_n^{M,C}∥_∞ = O(n^{1/2−2r}),
  ∥K u_n^{M,C} − P^S K u_n^{M,C}∥_{L2} = O(n^{−2r}),
  ∥K u_n^{M,C} − P^S K u_n^{M,C}∥_∞ = O(n^{−2r}).

For j = 1, 2, …, m,

  |λ − λ̂_n^{M,C}| = O(n^{−2r}),
  |λ − λ_{n,j}^{M,C}|^ℓ = O(n^{−2r}).
Proof. It follows from Lemma 2.3 and Theorem 3.16 that

  δ̂_2(R(P_{n,M}^{S,C}), R(P^S)) ≤ c ∥(K − K_n^{M,C})K∥_{L2} = O(n^{−2r}),
  δ̂_∞(R(P_{n,M}^{S,C}), R(P^S)) ≤ c ∥(K − K_n^{M,C})K∥_∞ = O(n^{1/2−2r}).

In particular, for any u_n^{M,C} ∈ R(P_{n,M}^{S,C}), we have

  ∥u_n^{M,C} − P^S u_n^{M,C}∥_{L2} ≤ c ∥(K − K_n^{M,C})K∥_{L2} = O(n^{−2r}),
  ∥u_n^{M,C} − P^S u_n^{M,C}∥_∞ ≤ c ∥(K − K_n^{M,C})K∥_∞ = O(n^{1/2−2r}).

It follows from Lemma 2.4 and Theorem 3.16 that

  δ_2(K R(P_{n,M}^{S,C}), R(P^S)) ≤ c ∥K(K − K_n^{M,C})K∥_{L2} = O(n^{−2r}),
  δ_∞(K R(P_{n,M}^{S,C}), R(P^S)) ≤ c ∥K(K − K_n^{M,C})K∥_∞ = O(n^{−2r}).

In particular, for any u_n^{M,C} ∈ R(P_{n,M}^{S,C}) with ũ_n^{M,C} = K u_n^{M,C}, we have

  ∥ũ_n^{M,C} − P^S ũ_n^{M,C}∥_{L2} = ∥K u_n^{M,C} − P^S K u_n^{M,C}∥_{L2} ≤ c ∥K(K − K_n^{M,C})K∥_{L2} = O(n^{−2r}),
  ∥ũ_n^{M,C} − P^S ũ_n^{M,C}∥_∞ = ∥K u_n^{M,C} − P^S K u_n^{M,C}∥_∞ ≤ c ∥K(K − K_n^{M,C})K∥_∞ = O(n^{−2r}).

By using Lemma 2.2 and Theorem 3.16, for j = 1, 2, …, m, we have

  |λ − λ̂_n^{M,C}| ≤ c ∥K_n^{M,C}(K − K_n^{M,C})K∥_{L2} = O(n^{−2r}),
  |λ − λ_{n,j}^{M,C}|^ℓ ≤ c ∥K_n^{M,C}(K − K_n^{M,C})K∥_{L2} = O(n^{−2r}).

This completes the proof. □
Now suppose λ is a simple eigenvalue of the integral operator K with corresponding eigenvector u, and let λ_n^{M,C} be the eigenvalue of K_n^{M,C} with corresponding eigenvector u_n^{M,C}. In the following theorem we show that not only does the iterated eigenvector ũ_n^{M,C} = K u_n^{M,C} converge to u with order of convergence O(n^{−2r}), but the derivatives of ũ_n^{M,C} also converge to the corresponding derivatives of u with the same order of convergence in the infinity norm.
Theorem 3.18. Let X be a Banach space and assume K, K_n^{M,C} ∈ BL(X) with K_n^{M,C} norm convergent to K in both the L2 and infinity norms. Let λ be a simple eigenvalue of the integral operator K and let λ_n^{M,C} be the eigenvalue of K_n^{M,C} with corresponding eigenvector u_n^{M,C}. Let P^S and S(λ_n^{M,C}) be the spectral projection and the reduced resolvent operator associated with the integral operator K and λ, respectively. Then the following approximation holds:

  ∥ũ_n^{M,C} − P^S ũ_n^{M,C}∥_{j,∞} = ∥K u_n^{M,C} − P^S K u_n^{M,C}∥_{j,∞} = O(n^{−2r}).

Proof. Since λ_n^{M,C} → λ ≠ 0, for a given ε with 0 < ε < |λ| there exists a positive integer n0 ∈ N such that |λ_n^{M,C} − λ| < ε for n > n0. Hence |λ_n^{M,C}| > |λ| − ε > 0, which implies 1/|λ_n^{M,C}| < 1/(|λ| − ε).

Now write u_n^{M,C} = u_n^1 + u_n^2 ∈ X with u_n^1 = P_n^C u_n^{M,C} ∈ X_n, ∥u_n^1∥_{L2} < ∞, and

  u_n^2 = (I − P_n^C) u_n^{M,C} = (1/λ_n^{M,C}) (I − P_n^C) K u_n^1.

Using the estimates (3.18) and (3.26), we obtain

  ∥(I − P_n^C) K (I − P_n^C) u_n^{M,C}∥_{L2} = (1/|λ_n^{M,C}|) ∥(I − P_n^C) K (I − P_n^C) K u_n^1∥_{L2}
    ≤ c (1/|λ_n^{M,C}|) n^{−r} ∥(K (I − P_n^C) K u_n^1)^{(r)}∥_{L2}
    ≤ c (1/|λ_n^{M,C}|) n^{−r} n^{−r} ∥K∥_{r,∞}^2 ∥u_n^1∥_{L2}
    ≤ c n^{−2r} ∥K∥_{r,∞}^2 ∥u_n^1∥_{L2}.

For j = 0, 1, 2, …, r, we have, using Theorem 2.5,

  ∥ũ_n^{M,C} − P^S ũ_n^{M,C}∥_{j,∞} = max_{s∈[−1,1]} |⟨(K − K_n^{M,C}) u_n^{M,C}, ℓ_{s,M}^{(j)}⟩|
    = max_{s∈[−1,1]} |⟨(I − P_n^C) K (I − P_n^C) u_n^{M,C}, ℓ_{s,M}^{(j)}⟩|
    ≤ c ∥(I − P_n^C) K (I − P_n^C) u_n^{M,C}∥_{L2} ∥ℓ_{s,M}^{(j)}∥_{L2}
    ≤ c n^{−2r} ∥K∥_{r,∞}^2 ∥u_n^1∥_{L2} ∥ℓ_{s,M}^{(j)}∥_{L2},

where ℓ_{s,M}^{(j)} = (S(λ_n^{M,C}))* K̄_s^{(j)}. This completes the proof. □
Remark 3.19. From Theorem 3.17 we observe that in the Legendre M-collocation method the approximate eigenvalues converge to the exact eigenvalues with order O(n^{−2r}), and that the gap between the spectral subspaces is of order O(n^{−2r}) in the L2 norm, whereas in the infinity norm it is of order O(n^{1/2−2r}), r > 1/2. This shows that in the Legendre M-collocation method the eigenvectors converge faster in the L2 norm than in the infinity norm. However, the iterated eigenvectors converge with order O(n^{−2r}) in both the L2 and infinity norms.

Remark 3.20. In Theorem 3.18 we showed that not only does the iterated eigenvector ũ_n^{M,C} converge to u with order of convergence O(n^{−2r}), but the derivatives of ũ_n^{M,C} also converge to the corresponding derivatives of u with the same order of convergence in the infinity norm.
Remark 3.21. From Theorems 3.7 and 3.17 we observe that the Legendre M-Galerkin method gives faster convergence rates than the Legendre M-collocation method. We also observe that the Legendre M-projection methods for solving the eigenvalue problem for a compact linear integral operator improve upon the Legendre projection methods discussed in [16].

4. Numerical results

In this section we discuss the numerical results. Consider the eigenvalue problem (2.2) for the integral operator

  (K u)(s) = ∫_{−1}^{1} K(s,t) u(t) dt,  s ∈ [−1,1],
for various smooth kernels K(s,t). We choose the Legendre polynomials {L_0, L_1, L_2, …, L_n} as an orthonormal basis for the subspace X_n. As the exact eigenelements of K are not known, for the sake of comparison we replace the integral operator K by

  K^N u(s) = Σ_{j=1}^{ϑr} w_j K(s, t_j) u(t_j),
obtained by using the composite Gauss quadrature rule with r points associated with a uniform partition of [−1,1] into ϑ intervals. Let λ be the eigenvalue of K^N and let P^S be the associated spectral projection.
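A sketch of this reference computation in Python (our illustration; the partition size ϑ = 50 and r = 10 Gauss points per subinterval are arbitrary choices) assembles the composite Gauss rule and the Nyström matrix of K^N, here for the kernel 1/(1 + |s − t|²) used below:

```python
import numpy as np

def composite_gauss(num_intervals, pts_per_interval, a=-1.0, b=1.0):
    # nodes/weights of the composite Gauss rule on [a, b] with a uniform partition
    x_ref, w_ref = np.polynomial.legendre.leggauss(pts_per_interval)
    edges = np.linspace(a, b, num_intervals + 1)
    nodes, weights = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        h = 0.5 * (hi - lo)                     # affine map [-1, 1] -> [lo, hi]
        nodes.append(h * x_ref + 0.5 * (lo + hi))
        weights.append(h * w_ref)
    return np.concatenate(nodes), np.concatenate(weights)

# Nystrom matrix of K^N u(s) = sum_j w_j K(s, t_j) u(t_j)
tj, wj = composite_gauss(50, 10)
kernel = lambda s, t: 1.0 / (1.0 + np.abs(s - t) ** 2)
KN = kernel(tj[:, None], tj[None, :]) * wj[None, :]
lam_ref = max(np.linalg.eigvals(KN), key=abs)   # reference dominant eigenvalue
```

The eigenvalues of KN then serve as the "exact" eigenvalues λ against which the tabulated errors are measured.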
For different kernels and for different values of n, we compute the approximate eigenvalue λ̂_n^M and the corresponding eigenvector u_n^M in the Legendre M-Galerkin and Legendre M-collocation methods, and we compute the iterated eigenvectors ũ_n^M = K u_n^M. The computed errors, in both the L2 and infinity norms, of the approximate eigenelements with respect to the exact eigenelements are presented for the Legendre M-Galerkin and Legendre M-collocation methods in the following tables.

Kernel-1:

  K(s,t) = 1/(1 + |s − t|^2),  s, t ∈ [a,b] = [−1,1].
Table 1
Legendre M-Galerkin method.

n  |λ − λ̂_n^M|   ∥u_n^M − P^S u_n^M∥_{L2}  ∥ũ_n^M − P^S ũ_n^M∥_{L2}  ∥u_n^M − P^S u_n^M∥_∞  ∥ũ_n^M − P^S ũ_n^M∥_∞
3  2.713819e−08  4.659302e−06  6.277391e−08  7.208240e−06  6.985609e−08
4  2.209121e−12  1.372242e−08  2.622430e−11  2.473539e−08  3.590472e−11
5  2.208677e−12  1.372242e−08  2.622430e−11  2.473539e−08  3.590472e−11
6  4.662936e−15  2.725376e−10  7.205883e−14  5.537607e−10  1.086908e−13
7  3.774758e−15  2.725376e−10  7.204869e−14  5.537603e−10  1.088018e−13
8  4.440892e−16  3.458547e−13  1.261791e−15  7.371325e−13  3.663736e−15
Table 2
Legendre M-collocation method.

n  |λ − λ̂_n^M|   ∥u_n^M − P^S u_n^M∥_{L2}  ∥ũ_n^M − P^S ũ_n^M∥_{L2}  ∥u_n^M − P^S u_n^M∥_∞  ∥ũ_n^M − P^S ũ_n^M∥_∞
3  7.214786e−07  4.611079e−06  1.236811e−07  5.259234e−06  1.253991e−07
4  9.503304e−10  2.559471e−08  2.539709e−10  7.943986e−08  2.587376e−10
5  4.675868e−10  1.623573e−08  1.136206e−10  2.323663e−08  1.302826e−10
6  4.524602e−12  3.756447e−10  7.364692e−13  1.266455e−09  9.210410e−13
7  1.301181e−12  1.970777e−10  3.706558e−13  2.691291e−10  4.682920e−13
8  4.662937e−15  1.163916e−12  8.57020e−16   4.350741e−12  2.109423e−15
Kernel-2:

  K(s,t) = e^{−|s−t|^2},  s, t ∈ [a,b] = [−1,1].
Table 3
Legendre M-Galerkin method.

n  |λ − λ̂_n^M|   ∥u_n^M − P^S u_n^M∥_{L2}  ∥ũ_n^M − P^S ũ_n^M∥_{L2}  ∥u_n^M − P^S u_n^M∥_∞  ∥ũ_n^M − P^S ũ_n^M∥_∞
3  8.622345e−07  3.601854e−05  1.037924e−06  6.269433e−05  1.130893e−06
4  2.239297e−11  2.851418e−08  7.969537e−11  6.303644e−08  7.819944e−11
5  2.239364e−11  2.851418e−08  7.969537e−11  6.303644e−08  7.819955e−11
6  1.998401e−15  1.211714e−12  8.926294e−16  3.102906e−12  3.941291e−15
7  2.220446e−15  1.211747e−12  9.172669e−16  3.102684e−12  3.996802e−15
8  1.332267e−15  5.805245e−15  9.483052e−16  1.681987e−14  4.163336e−15
Table 4
Legendre M-collocation method.

n  |λ − λ̂_n^M|   ∥u_n^M − P^S u_n^M∥_{L2}  ∥ũ_n^M − P^S ũ_n^M∥_{L2}  ∥u_n^M − P^S u_n^M∥_∞  ∥ũ_n^M − P^S ũ_n^M∥_∞
3  4.066425e−06  4.221339e−05  1.958070e−06  6.339383e−05  2.284867e−06
4  8.432635e−09  1.718339e−07  4.523501e−09  5.102786e−07  5.283503e−09
5  2.2461499e−10 2.463978e−08  1.741816e−10  4.893198e−08  1.810976e−10
6  3.752553e−14  1.449654e−11  4.414127e−14  5.210887e−11  4.646283e−14
7  8.881784e−16  3.423756e−12  1.903065e−15  8.412964e−12  4.773959e−15
8  6.661338e−16  7.215671e−14  6.467808e−16  2.846056e−13  2.609024e−15
Kernel-3:

  K(s,t) = e^{−s^2 t},  t, s ∈ [a,b] = [−1,1].
Table 5
Legendre M-Galerkin method.

n  |λ − λ̂_n^M|   ∥u_n^M − P^S u_n^M∥_{L2}  ∥ũ_n^M − P^S ũ_n^M∥_{L2}  ∥u_n^M − P^S u_n^M∥_∞  ∥ũ_n^M − P^S ũ_n^M∥_∞
3  3.765739e−09  3.300288e−06  4.499840e−09  9.581522e−06  1.297028e−08
4  2.664535e−15  3.142945e−10  3.758389e−15  1.093232e−09  1.221245e−14
5  2.664535e−15  3.142945e−10  3.761610e−15  1.093232e−09  1.221245e−14
6  7.549516e−15  1.027529e−13  1.708800e−15  4.041212e−13  3.996803e−15
7  7.993606e−15  1.027026e−13  1.687764e−15  4.041212e−13  4.218847e−15
8  4.884981e−15  5.981502e−16  1.679399e−15  1.554312e−15  5.329070e−15
Table 6
Legendre M-collocation method.

n  |λ − λ̂_n^M|   ∥u_n^M − P^S u_n^M∥_{L2}  ∥ũ_n^M − P^S ũ_n^M∥_{L2}  ∥u_n^M − P^S u_n^M∥_∞  ∥ũ_n^M − P^S ũ_n^M∥_∞
3  7.556366e−07  1.577848e−05  2.606914e−07  4.468019e−05  5.731762e−07
4  1.159698e−09  1.432961e−07  1.406129e−09  5.279119e−07  3.114675e−09
5  1.772316e−11  1.229439e−08  1.214972e−11  4.011195e−08  2.834532e−11
6  6.705747e−14  1.181609e−10  1.068623e−13  5.016513e−10  2.438049e−13
7  5.773160e−15  4.058765e−12  6.087019e−15  1.372302e−11  1.332267e−14
8  2.664535e−15  2.888279e−14  6.251354e−15  1.330047e−13  1.598721e−14
We observe from Tables 1–6 that the numerical results agree with our theoretical results.

Acknowledgments

The second author was supported in part by the Natural Science Foundation of China under grant 11061008, the NSF of Guangxi Province under grant 2011GXNSFA018128, the Key Scientific Research grant 0993002-4, and the Program for Excellent Talents in Guangxi Higher Education Institutions.

References

[1] F. Chatelin, Spectral Approximation of Linear Operators, Academic Press, New York, 1983.
[2] M. Ahues, A. Largillier, B.V. Limaye, Spectral Computations for Bounded Operators, Chapman and Hall/CRC, New York, 2001.
[3] K.E. Atkinson, The Numerical Solution of Integral Equations of the Second Kind, Cambridge University Press, Cambridge, UK, 1997.
[4] G. Nelakanti, Spectral approximation for integral operators, Ph.D. Thesis, Indian Institute of Technology, Bombay, India, 2003.
[5] G. Nelakanti, A degenerate kernel method for eigenvalue problems of compact integral operators, Adv. Comput. Math. 27 (2007) 339–354.
[6] Z. Chen, G. Long, G. Nelakanti, The discrete multi-projection method for Fredholm integral equations of the second kind, J. Integral Equations Appl. 19 (2007) 143–162.
[7] R.P. Kulkarni, A new superconvergent projection method for approximate solutions of eigenvalue problems, Numer. Funct. Anal. Optim. 24 (2003) 75–84.
[8] G. Long, G. Nelakanti, B.L. Panigrahi, M.M. Sahani, Discrete multi-projection methods for eigen-problems of compact integral operators, Appl. Math. Comput. 217 (2010) 3974–3984.
[9] Ben-Yu Guo, Spectral Methods and their Applications, World Scientific, 1998.
[10] C. Canuto, M.Y. Hussaini, A. Quarteroni, T.A. Zang, Spectral Methods, Springer-Verlag, Berlin, Heidelberg, 2006.
[11] J.E. Osborn, Spectral approximation for compact operators, Math. Comp. 29 (1975) 712–725.
[12] Y. Chen, T. Tang, Spectral methods for weakly singular Volterra integral equations with smooth solutions, J. Comput. Appl. Math. 233 (2009) 938–950.
[13] C.K. Qu, R. Wong, Szegő's conjecture on Lebesgue constants for Legendre series, Pacific J. Math. 135 (1988) 157–188.
[14] T. Tang, X. Xu, J. Cheng, On spectral methods for Volterra type integral equations and the convergence analysis, J. Comput. Math. 26 (2008) 825–837.
[15] L.L. Schumaker, Spline Functions: Basic Theory, John Wiley and Sons, New York, 1981.
[16] B.L. Panigrahi, G. Nelakanti, Superconvergence of Legendre projection methods for the eigenvalue problem of a compact integral operator, J. Comput. Appl. Math. 235 (2011) 2380–2391.