Galerkin method with new quadratic spline wavelets for integral and integro-differential equations

Journal of Computational and Applied Mathematics 363 (2020) 426–443


Dana Černá , Václav Finěk Department of Mathematics and Didactics of Mathematics, Technical University in Liberec, Studentská 2, 461 17 Liberec, Czech Republic


Abstract. The paper is concerned with the wavelet-Galerkin method for the numerical solution of Fredholm linear integral equations and second-order integro-differential equations. We propose a construction of a quadratic spline-wavelet basis on the unit interval, such that the wavelets have three vanishing moments and the shortest support among such wavelets. We prove that this basis is a Riesz basis in the space L2(0, 1). We adapt the basis to homogeneous Dirichlet boundary conditions, and using a tensor product we construct a wavelet basis on the hyperrectangle. We use the wavelet-Galerkin method with the constructed bases for solving integral and integro-differential equations, and we show that the matrices arising from discretization have uniformly bounded condition numbers and that they can be approximated by sparse matrices. We present numerical examples and compare the results with the Galerkin method using other quadratic spline wavelet bases and other methods. © 2019 Elsevier B.V. All rights reserved.

Article history: Received 30 January 2018; Received in revised form 23 April 2019.
MSC: 65T60; 65R20; 45F05; 45J05; 45K05.
Keywords: Wavelet; Quadratic spline; Short support; Galerkin method; Integral equation; Integro-differential equation.

1. Introduction

Let Ω = (a1, b1) × (a2, b2) × · · · × (ad, bd), where ai, bi ∈ R, ai < bi, i = 1, . . . , d, d ∈ N, and let L2(Ω) be the space of real-valued square-integrable functions on Ω equipped with the inner product and the norm

⟨f, g⟩ = ∫_Ω f(x) g(x) dx,   ∥f∥ = √⟨f, f⟩,   (1)

respectively. Let H1(Ω) be the Sobolev space, i.e. the space of all functions from L2(Ω) whose first-order weak derivatives also belong to L2(Ω), equipped with the inner product

⟨f, g⟩H1 = Σ_{i=1}^d ⟨∂f/∂xi, ∂g/∂xi⟩ + ⟨f, g⟩,   (2)

∗ Corresponding author. E-mail addresses: [email protected] (D. Černá), [email protected] (V. Finěk). https://doi.org/10.1016/j.cam.2019.06.033


and the norm and the seminorm

∥f∥H1 = √⟨f, f⟩H1,   |f|H1 = √( Σ_{i=1}^d ∥∂f/∂xi∥² ),   (3)

respectively. Let H01(Ω) be the closure in H1(Ω) of the set of all functions f such that supp f ⊂ Ω, f is continuous on the closure of Ω, and f has continuous first-order derivatives in Ω. The set of all m-times continuously differentiable functions on the closure of Ω is denoted by C^m(Ω̄). Let K ∈ L2(Ω × Ω) and let K : L2(Ω) → L2(Ω) be the integral operator given by

(Ky)(t) = ∫_Ω K(t, x) y(x) dx.   (4)

Our aim is to find a solution y of the equation

Ay := −ϵ∆y + py + Ky = f   on Ω,   (5)

where ϵ ≥ 0 is a constant. Assumptions on the functions p and f will be specified below. In this paper we focus on two cases. In the first case we assume that ϵ = 0, i.e. our aim is to find a solution y ∈ L2(Ω) of the linear integral equation

p(t) y(t) + ∫_Ω K(t, x) y(x) dx = f(t),   t ∈ Ω.   (6)

In the second case we assume that ϵ > 0, and our aim is to find a solution y of the second-order integro-differential equation

−ϵ∆y(t) + p(t) y(t) + ∫_Ω K(t, x) y(x) dx = f(t),   t ∈ Ω,   (7)

satisfying the homogeneous Dirichlet boundary condition y = 0 on the boundary of Ω. Let V = L2(Ω) for ϵ = 0 and V = H01(Ω) for ϵ > 0, let ∥·∥V be a norm on V, and let us define the bilinear form a : V × V → R as

a(u, v) = ϵ Σ_{i=1}^d ⟨∂u/∂ti, ∂v/∂ti⟩ + ⟨pu, v⟩ + ⟨Ku, v⟩   (8)

for u, v ∈ V. The variational formulation of Eqs. (6) and (7) reads: Find y ∈ V such that

a(y, v) = ⟨f, v⟩   for all v ∈ V.   (9)

We make the following assumptions:

(A1) The function p satisfies p ∈ C(Ω̄), and there exists a constant pmin such that p(t) ≥ pmin > 0 for all t ∈ Ω̄.
(A2) The kernel K is smooth enough, i.e. K ∈ C^m(Ω̄ × Ω̄) for some m ∈ N.
(A3) The function f belongs to the space L2(Ω).
(A4) The bilinear form a is coercive, which means that there exists a constant α > 0 such that

a(u, u) ≥ α ∥u∥V²   for all u ∈ V.   (10)

The smoothness of the kernel is important for the decay of the entries of the discretization matrices, which enables the approximation of the discretization matrices by sparse matrices. The coercivity is a crucial property for the uniform boundedness of the condition numbers of the discretization matrices.

Theorem 1. If the assumptions (A1)–(A4) are satisfied, then there exists a unique solution y ∈ V of Eq. (9).

Proof. The assumptions (A1) and (A2) imply continuity of the bilinear form a. Indeed, since

| ∫_Ω u(x) dx | ≤ C ∥u∥,   C = √( ∫_Ω 1 dx ),   (11)

we have

|a(u, v)| ≤ ϵ |u|H1 |v|H1 + pmax ∥u∥ ∥v∥ + Kmax C² ∥u∥ ∥v∥ ≤ max(ϵ, pmax + Kmax C²) ∥u∥V ∥v∥V,   (12)

where pmax = max_{x∈Ω̄} p(x) and Kmax = max_{x,t∈Ω̄} |K(x, t)|. The existence and uniqueness of the solution follow from the continuity and coercivity of the bilinear form a by the Lax–Milgram lemma, see e.g. [1,2]. □


The following lemma gives a sufficient condition for coercivity of the bilinear form a.

Lemma 2. If (A1) and (A2) are satisfied and there exists a constant Kmin such that K(x, t) ≥ Kmin for all x, t ∈ Ω and Kmin + pmin > 0, then a is coercive.

Proof. If u ∈ L2(Ω) and the assumptions of the lemma are satisfied, then

⟨Ku, u⟩ = ∫_Ω ∫_Ω K(x, t) u(x) u(t) dx dt ≥ Kmin ( ∫_Ω u(x) dx )².   (13)

If Kmin is positive, then ⟨Ku, u⟩ ≥ 0. Due to (11), we have ⟨Ku, u⟩ ≥ Kmin C² ∥u∥² for Kmin negative. If ϵ = 0, then

a(u, u) ≥ min(pmin, pmin + Kmin C²) ∥u∥².   (14)

If ϵ > 0, then

a(u, u) ≥ ϵ |u|²H1 + min(pmin, pmin + Kmin C²) ∥u∥² ≥ min(ϵ, pmin, pmin + Kmin C²) ∥u∥²H1.   (15)  □

The problem (9) can be solved by many methods, e.g. a quadrature method, a moment method, the Galerkin method, or a collocation method. These methods transform the problem into a system of linear algebraic equations. However, for high-dimensional problems, or even for one-dimensional problems with a solution that is oscillatory or has large derivatives, the discretization leads to very large matrices. The disadvantage of many classical methods is that these matrices are full, and thus it is difficult to solve the systems efficiently or even to store the matrices. It is known that the wavelet-Galerkin method leads to system matrices with many entries that are very small, so these matrices can be approximated by sparse matrices. In [3,4], it was shown that the discretization of singular integral operators using compactly supported orthogonal wavelets leads to sparse matrices. In [5,6] the Galerkin method with orthogonal wavelets was applied to systems of integral equations of the second kind and a compression strategy for the discretization matrix was proposed. The discretization of integral operators using biorthogonal wavelets was studied e.g. in [7]. Fredholm integral equations of the second kind were also solved using semiorthogonal spline wavelets in [8,9] and Hermite cubic spline wavelets in [10]. In the case ϵ = 0 and p(t) = 1, the wavelet-Galerkin method is typically used with orthogonal or semiorthogonal wavelet bases, because the discretization of the term p(t)y(t) then leads to diagonal or banded matrices. However, if ϵ ≠ 0 or p is not identically equal to one, then L2-orthogonality or semiorthogonality is not a significant advantage, and the quantitative properties of the method strongly depend on the wavelet basis used, namely on its condition number, the length of the support of the wavelets, its polynomial exactness, the number of vanishing wavelet moments, and the smoothness of the basis functions.
Therefore, the construction of an appropriate wavelet basis is a very important issue. A construction of a biorthogonal spline-wavelet basis on the interval such that both the primal and dual wavelets are local was proposed in [11] and later modified in [12,13]. Spline wavelets with nonlocal duals were also constructed and adapted to various types of boundary conditions, e.g. in [14–18]. The main advantages of these types of bases in comparison with bases with local duals are usually the shorter support of the wavelets, the lower condition numbers of the bases and of the corresponding discretization matrices, and the simplicity of the construction. In this paper, we construct a quadratic spline wavelet basis on the unit interval with nonlocal duals. The inner wavelets correspond to the wavelets from [19,20]. The boundary wavelets are constructed to have the same number of vanishing moments as the inner wavelets. We prove that the constructed basis is a Riesz basis of the space L2(0, 1). The proof differs significantly from the proofs of the Riesz basis property in other papers, which are based on the Fourier transform, the properties of dual functions, or oblique projections. The proof in this paper is based on verifying sufficient conditions for the union of Riesz sequences to also be a Riesz sequence, which are derived in Section 2. We also construct a quadratic spline-wavelet basis satisfying homogeneous Dirichlet boundary conditions and prove that this basis, normalized in the H1-norm, is a Riesz basis for the Sobolev space H01(0, 1). The bases are well-conditioned, and the wavelets have three vanishing moments. Furthermore, the support is the shortest among all known quadratic spline wavelets on a bounded interval with at least three vanishing moments. Then, we use the standard linear transformation and the anisotropic tensor product to construct a wavelet basis on the hyperrectangle Ω.
We presented preliminary results about the basis adapted to boundary conditions in [21]. We use the Galerkin method with the constructed wavelet basis for the numerical solution of problem (9). We show the decay of the discretization matrix entries and prove uniform boundedness of the condition numbers of the matrices. In the literature, e.g. [5,7], such decay estimates are typically derived under the assumption that the diameter of the support of a basis function decays exponentially with respect to the level. However, this assumption does not hold for the anisotropic tensor-product bases used in this paper. We present several numerical examples to illustrate the efficiency of the method and compare the results with other quadratic spline-wavelet bases and with other methods.


2. Construction of a quadratic spline-wavelet basis on the unit interval

In this section, we recall the concept of a wavelet basis, propose a quadratic spline-wavelet basis on the interval [0, 1], and prove that this basis is a Riesz basis of the space L2(0, 1). Let J be an at most countable index set. In this paper, we view v = {vλ}λ∈J, vλ ∈ R, as a column vector. We denote by l2(J) the space of all vectors v = {vλ}λ∈J such that the norm

∥v∥ = √( Σ_{λ∈J} vλ² )   (16)

is finite. For two normed linear spaces U and V and a linear operator M : U → V, we define the norm

∥M∥ = sup_{0≠u∈U} ∥Mu∥ / ∥u∥.   (17)

Let H be a separable Hilbert space of real functions defined on Ω with the inner product ⟨·, ·⟩H and the norm ∥·∥H. Our aim is to construct a Riesz basis of the space H, where H = L2(Ω) or H = H01(Ω).

Definition 3. A family Ψ = {ψλ, λ ∈ J} is a Riesz basis for H if the closure of the span of Ψ is H and there exist constants c, C ∈ (0, ∞) such that

c ∥b∥ ≤ ∥ Σ_{λ∈J} bλ ψλ ∥H ≤ C ∥b∥   (18)

for all b := {bλ}λ∈J ∈ l2(J).

The definition of a wavelet basis is not unified in the mathematical literature. In this paper we understand a wavelet basis as a Riesz basis that is local in the following sense. If d = 1, then let λ = (j, k) and let |λ| := j ∈ Z denote the level. By locality, we mean that

diam supp ψλ ≤ C 2^{−|λ|},   λ ∈ J.   (19)

In the case that ψλ is constructed using a tensor product of univariate functions ψλ1, . . . , ψλd, we assume that each ψλi satisfies (19), and thus

diam supp ψλ ≤ C 2^{−[λ]},   λ ∈ J,   [λ] := min_{p=1,...,d} |λp|.   (20)

Furthermore, we assume that a wavelet basis consists of some functions on the coarsest level, called scaling functions, and several levels of wavelets, and that the wavelets have L ≥ 1 vanishing moments, i.e. ⟨x^k, ψλ⟩ = 0 for all wavelets ψλ and k = 0, . . . , L − 1.

Remark 1. A set of functions is called a Riesz sequence in H if there exist positive constants c and C that satisfy (18), but the closure of the span of this set is not necessarily H.

Remark 2. For two countable sets of functions Γ, Θ ⊂ L2(Ω), the symbol ⟨Γ, Θ⟩ denotes the matrix

⟨Γ, Θ⟩ = {⟨γ, θ⟩}γ∈Γ,θ∈Θ.   (21)

The constants

cΨ = sup{c : c satisfies (18)}   and   CΨ = inf{C : C satisfies (18)}   (22)

are called Riesz bounds, and the number cond Ψ = CΨ/cΨ is called the condition number of Ψ. It is known that the constants cΨ and CΨ satisfy

cΨ = √λmin(⟨Ψ, Ψ⟩),   CΨ = √λmax(⟨Ψ, Ψ⟩),   (23)

where λmin(⟨Ψ, Ψ⟩) and λmax(⟨Ψ, Ψ⟩) are the smallest and the largest eigenvalues of the matrix ⟨Ψ, Ψ⟩, respectively.

Remark 3. Let us consider the set of functions {fλ, λ ∈ J}. If J is an infinite set, then there are two concepts of linear independence. The set {fλ, λ ∈ J} is called linearly independent if Σ_{λ∈I} cλ fλ = 0 for every finite subset I ⊂ J implies that cλ = 0 for all λ ∈ I. The set {fλ, λ ∈ J} is called ω-independent if Σ_{λ∈J} cλ fλ = 0 implies that cλ = 0 for all λ ∈ J. It is known [22,23] that if {fλ, λ ∈ J} is a Riesz sequence, then {fλ, λ ∈ J} is both linearly independent and ω-independent.


Fig. 1. The scaling functions φb1 , φb2 , and φ .

We define a scaling basis as a basis of quadratic B-splines as in [12,13,16]. Let φ be the quadratic B-spline defined on the knots [0, 1, 2, 3]. It can be written explicitly as

φ(x) = { x²/2,              x ∈ [0, 1],
         −x² + 3x − 3/2,    x ∈ [1, 2],
         x²/2 − 3x + 9/2,   x ∈ [2, 3],
         0,                 otherwise.    (24)

The function φ satisfies the scaling equation [16]

φ(x) = φ(2x)/4 + 3φ(2x − 1)/4 + 3φ(2x − 2)/4 + φ(2x − 3)/4.   (25)

Let φb1 be the quadratic B-spline defined on the knots [0, 0, 0, 1], i.e.

φb1(x) = { x² − 2x + 1,   x ∈ [0, 1],
           0,             otherwise,    (26)

and let φb2 be the quadratic B-spline defined on the knots [0, 0, 1, 2], i.e.

φb2(x) = { −3x²/2 + 2x,    x ∈ [0, 1],
           x²/2 − 2x + 2,  x ∈ [1, 2],
           0,              otherwise.    (27)

The functions φb1 and φb2 satisfy the scaling equations [16]

φb1(x) = φb1(2x) + φb2(2x)/2,   (28)

φb2(x) = φb2(2x)/2 + 3φ(2x)/4 + φ(2x − 1)/4.   (29)

The graphs of the functions φb1, φb2, and φ are displayed in Fig. 1. For j ≥ 2 and x ∈ [0, 1] we set

φj,k(x) = 2^{j/2} φ(2^j x − k + 3),   k = 3, . . . , 2^j,
φj,1(x) = 2^{j/2} φb1(2^j x),   φj,2^j+2(x) = 2^{j/2} φb1(2^j (1 − x)),   (30)
φj,2(x) = 2^{j/2} φb2(2^j x),   φj,2^j+1(x) = 2^{j/2} φb2(2^j (1 − x)).
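The explicit formulas (24), (26), (27) and the scaling equations (25), (28), (29) can be verified pointwise; the following sketch (a numerical illustration, not part of the construction) evaluates both sides of each scaling equation on a grid:

```python
import numpy as np

def phi(x):
    # quadratic B-spline on the knots [0, 1, 2, 3], Eq. (24)
    x = np.asarray(x, dtype=float)
    return np.where((0 <= x) & (x < 1), x**2 / 2,
           np.where((1 <= x) & (x < 2), -x**2 + 3*x - 1.5,
           np.where((2 <= x) & (x <= 3), x**2 / 2 - 3*x + 4.5, 0.0)))

def phi_b1(x):
    # boundary B-spline on the knots [0, 0, 0, 1], Eq. (26)
    x = np.asarray(x, dtype=float)
    return np.where((0 <= x) & (x <= 1), (x - 1)**2, 0.0)

def phi_b2(x):
    # boundary B-spline on the knots [0, 0, 1, 2], Eq. (27)
    x = np.asarray(x, dtype=float)
    return np.where((0 <= x) & (x < 1), -1.5*x**2 + 2*x,
           np.where((1 <= x) & (x <= 2), x**2 / 2 - 2*x + 2, 0.0))

x = np.linspace(0, 3, 1001)
# scaling equation (25)
rhs = (phi(2*x) + 3*phi(2*x - 1) + 3*phi(2*x - 2) + phi(2*x - 3)) / 4
assert np.max(np.abs(phi(x) - rhs)) < 1e-12
# scaling equations (28) and (29)
assert np.max(np.abs(phi_b1(x) - (phi_b1(2*x) + phi_b2(2*x)/2))) < 1e-12
assert np.max(np.abs(phi_b2(x) - (phi_b2(2*x)/2 + 3*phi(2*x)/4 + phi(2*x - 1)/4))) < 1e-12
```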

We define the wavelet ψ and the boundary wavelet ψb as

ψ(x) = −φ(2x)/4 + 3φ(2x − 1)/4 − 3φ(2x − 2)/4 + φ(2x − 3)/4,
ψb(x) = −φb1(2x) + 13φb2(2x)/12 − 37φ(2x)/72 + φ(2x − 1)/8.   (31)

Then, supp ψ = [0, 3] and supp ψb = [0, 2]. The graphs of the wavelets ψb and ψ are displayed in Fig. 2.

Lemma 4. The wavelets ψ and ψb have three vanishing moments, i.e.

∫₀³ x^k ψ(x) dx = 0,   ∫₀² x^k ψb(x) dx = 0,   k = 0, 1, 2.   (32)

Proof. It is easy to verify the lemma by substituting (31) into (32).
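The substitution can also be carried out numerically: since x^k ψ(x) is a piecewise polynomial of degree at most 4 on intervals of length 1/2, a 3-point Gauss–Legendre rule per subinterval integrates it exactly. The sketch below re-implements φ, φb1, φb2 from (24), (26), (27) so that it is self-contained:

```python
import numpy as np

def phi(x):                                   # Eq. (24)
    x = np.asarray(x, dtype=float)
    return np.where((0 <= x) & (x < 1), x**2 / 2,
           np.where((1 <= x) & (x < 2), -x**2 + 3*x - 1.5,
           np.where((2 <= x) & (x <= 3), x**2 / 2 - 3*x + 4.5, 0.0)))

def phi_b1(x):                                # Eq. (26)
    x = np.asarray(x, dtype=float)
    return np.where((0 <= x) & (x <= 1), (x - 1)**2, 0.0)

def phi_b2(x):                                # Eq. (27)
    x = np.asarray(x, dtype=float)
    return np.where((0 <= x) & (x < 1), -1.5*x**2 + 2*x,
           np.where((1 <= x) & (x <= 2), x**2 / 2 - 2*x + 2, 0.0))

def psi(x):                                   # Eq. (31)
    return (-phi(2*x) + 3*phi(2*x - 1) - 3*phi(2*x - 2) + phi(2*x - 3)) / 4

def psi_b(x):                                 # Eq. (31)
    return -phi_b1(2*x) + 13*phi_b2(2*x)/12 - 37*phi(2*x)/72 + phi(2*x - 1)/8

nodes, weights = np.polynomial.legendre.leggauss(3)   # exact up to degree 5

def moment(f, k, support_end):
    """Integral of x^k f(x) over [0, support_end], exact for these splines."""
    total = 0.0
    for m in range(int(2 * support_end)):     # subintervals [m/2, (m+1)/2]
        a, b = m / 2, (m + 1) / 2
        t = (b - a) / 2 * nodes + (a + b) / 2
        total += (b - a) / 2 * np.sum(weights * t**k * f(t))
    return total

for k in range(3):                            # the three vanishing moments (32)
    assert abs(moment(psi, k, 3)) < 1e-12
    assert abs(moment(psi_b, k, 2)) < 1e-12
```

The third moment of ψ does not vanish (it equals 3/32), so ψ has exactly three vanishing moments.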




Fig. 2. The wavelet ψ , the boundary wavelet ψb , and the boundary wavelet ψbD adapted to a homogeneous Dirichlet boundary condition at point x = 0.

For j ≥ 2 and x ∈ [0, 1] we define

ψj,k(x) = 2^{j/2} ψ(2^j x − k + 2),   k = 2, . . . , 2^j − 1,
ψj,1(x) = 2^{j/2} ψb(2^j x),   ψj,2^j(x) = 2^{j/2} ψb(2^j (1 − x)).   (33)

We denote the index sets by

Ij = {k ∈ Z : 1 ≤ k ≤ 2^j + 2},   Jj = {k ∈ Z : 1 ≤ k ≤ 2^j}.   (34)

We define

Φj = {φj,k, k ∈ Ij},   Ψj = {ψj,k, k ∈ Jj},   (35)

and

Ψ = Φ2 ∪ ⋃_{j=2}^∞ Ψj,   Ψ_{j0}^s = Φ_{j0} ∪ ⋃_{j=j0}^{j0−1+s} Ψj,   j0 ≥ 2,  s > 0.   (36)

In the following, we prove that Ψ is a wavelet basis of the space L2(0, 1); the set Ψ2^s is its finite-dimensional subset. In Section 6 we also use Ψ3^s. First we formulate sufficient conditions under which the union of two Riesz sequences is also a Riesz sequence.

Theorem 5. Let I and J be at most countable index sets, {fk}k∈I be a Riesz sequence with a Riesz lower bound cf, and {gl}l∈J be a Riesz sequence with a Riesz lower bound cg. Furthermore, let the matrix G with entries Gk,l = ⟨fk, gl⟩, k ∈ I, l ∈ J, satisfy ∥G∥/(cf cg) < 1. Then {fk}k∈I ∪ {gl}l∈J is a Riesz sequence with a Riesz lower bound c and

c ≥ √(1 − ∥G∥/(cf cg)) · min(cf, cg).   (37)

Proof. Let us denote F = span{fk}k∈I and G = span{gl}l∈J. For u = Σ_{k∈I} ck fk, v = Σ_{l∈J} dl gl, c = {ck}k∈I, d = {dl}l∈J, we have

sup_{0≠v∈G} |⟨u, v⟩|/∥v∥ ≤ sup_{0≠d∈l2(J)} |cᵀGd|/(cg ∥d∥) = ∥Gᵀc∥/cg ≤ ∥G∥ ∥c∥/cg ≤ ∥G∥ ∥u∥/(cf cg).   (38)

Therefore, if the condition ∥G∥/(cf cg) < 1 is satisfied, then

sup_{0≠u∈F, 0≠v∈G} |⟨u, v⟩|/(∥u∥ ∥v∥) < 1.   (39)

It is known [24] that if {fk}k∈I is a Riesz sequence, then {fk}k∈I is a frame sequence, i.e. there exist constants c̃f, C̃f > 0 such that any function f ∈ span{fk}k∈I satisfies

c̃f ∥f∥² ≤ Σ_{k∈I} |⟨f, fk⟩|² ≤ C̃f ∥f∥².   (40)

The supremum of {c̃f : c̃f satisfies (40)} is called a lower frame bound.


Let F̃ = F ∩ (F ∩ G)⊥ and G̃ = G ∩ (F ∩ G)⊥, where M⊥ denotes the orthogonal complement of M. It is known [25] that if {fk}k∈I and {gl}l∈J are frame sequences with lower frame bounds c̃f and c̃g, respectively, and

cos(F, G) = sup_{0≠f∈F̃, 0≠g∈G̃} |⟨f, g⟩|/(∥f∥ ∥g∥) < 1,   (41)

then {fk}k∈I ∪ {gl}l∈J is a frame sequence with a lower frame bound

c̃f,g = (1 − cos(F, G)) · min(c̃f, c̃g).   (42)

Due to (39) and the fact that

cos(F, G) = sup_{0≠f∈F̃, 0≠g∈G̃} |⟨f, g⟩|/(∥f∥ ∥g∥) ≤ sup_{0≠f∈F, 0≠g∈G} |⟨f, g⟩|/(∥f∥ ∥g∥),   (43)

the condition (41) is satisfied and {fk}k∈I ∪ {gl}l∈J is a frame sequence. Furthermore, (39) implies that F ∩ G = {0}, and thus the set {fk}k∈I ∪ {gl}l∈J is ω-independent. Indeed, if f ∈ F ∩ G is a nonzero function, then

sup_{0≠u∈F, 0≠v∈G} |⟨u, v⟩|/(∥u∥ ∥v∥) ≥ |⟨f, f⟩|/(∥f∥ ∥f∥) = 1,   (44)

which is a contradiction. Since a frame sequence of ω-independent functions with a lower frame bound b is a Riesz sequence with a Riesz lower bound √b, see e.g. [24], Theorem 5 is proven. □

Theorem 6. The set ΨL = {ψj,1, j ≥ 2} is a Riesz sequence with a Riesz lower bound cL > 0.2987.

Proof. According to Remark 2, the set ΨL is a Riesz sequence if and only if the maximal eigenvalue λmax and the minimal eigenvalue λmin of the (infinite) matrix GL = ⟨ΨL, ΨL⟩ satisfy 0 < λmin ≤ λmax < ∞. Due to the length of the support of ψj,1 and the three vanishing moments, we have (GL)i,j = 0 for |i − j| > 1. Furthermore, direct computation yields

(GL)i,j = { 17/135,    i = j,
            7√2/540,   |i − j| = 1,
            0,         otherwise.    (45)

Using the Gershgorin circle theorem we have

λmin, λmax ∈ [17/135 − 7√2/270, 17/135 + 7√2/270].   (46)

Hence, the set ΨL is a Riesz sequence with a Riesz lower bound cL ≥ √(17/135 − 7√2/270) > 0.2987. □
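The Gershgorin estimate (46) can be illustrated on a finite section of the tridiagonal matrix GL from (45); the eigenvalues of any finite section lie strictly inside the Gershgorin interval (a numerical illustration, assuming nothing beyond (45)):

```python
import numpy as np

# Finite section of the tridiagonal Gram matrix G_L from (45)
n = 200
d, o = 17/135, 7*np.sqrt(2)/540
GL = d*np.eye(n) + o*(np.eye(n, k=1) + np.eye(n, k=-1))

eigs = np.linalg.eigvalsh(GL)
lo, hi = d - 2*o, d + 2*o            # Gershgorin interval (46)
assert lo < eigs[0] and eigs[-1] < hi
assert np.sqrt(lo) > 0.2987          # the Riesz lower bound c_L of Theorem 6
```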

Corollary 1. Since ΨR = {ψj,2^j, j ≥ 2} has a similar structure as ΨL, the set ΨR is a Riesz sequence with a Riesz lower bound cR > 0.2987. Due to the non-overlapping supports of ψj,1 and ψj,2^j, the set Ψb = ΨL ∪ ΨR is also a Riesz sequence with a Riesz lower bound cb > 0.2987.

Theorem 7. The set ΨI = {ψj,k, k = 2, . . . , 2^j − 1, j ≥ 2} is a Riesz sequence with a Riesz lower bound cI > 0.2238.

Proof. It was shown in [19,20] that

{2^{j/2} ψ(2^j x − k), j, k ∈ Z}   (47)

is a Riesz basis of the space L2(R). Since ΨI is its subset, it is a Riesz sequence. We estimate its Riesz lower bound. Let cI^K be a Riesz lower bound of the Riesz sequence ΨI^K = {ψj,k, k = 2, . . . , 2^j − 1, 2 ≤ j ≤ K}, which is clearly a subset of ΨI. We estimate cI^K using Remark 2. Due to the structure of the set ΨI^K, the Gram matrix GI^K = ⟨ΨI^K, ΨI^K⟩ has the block structure

GI^K = [ G2,2  G2,3  ...  G2,K
         G3,2  G3,3  ...  G3,K
         ...   ...   ...  ...
         GK,2  GK,3  ...  GK,K ],   (48)

where Gi,j = ⟨ΨI^i, ΨI^j⟩ for ΨI^j = {ψj,k, k = 2, . . . , 2^j − 1}. The matrix GI^K is symmetric and strictly diagonally dominant. Using the Gershgorin circle theorem, we obtain cI^K > 0.16, which is a quite pessimistic bound.


Therefore, we use a different approach. Let F1^K be the matrix that contains the diagonal blocks of GI^K, i.e.

F1^K = [ G2,2  0     ...  0
         0     G3,3  ...  0
         ...   ...   ...  ...
         0     0     ...  GK,K ],   (49)

where 0 are zero matrices of appropriate sizes. Let

F2^K = [ 0     G2,3  0     ...   0
         G3,2  0     G3,4  ...   0
         0     G4,3  0     ...   0
         ...   ...   ...   ...   G_{K−1,K}
         0     0     ...   G_{K,K−1}   0 ],   (50)

and F3^K = GI^K − F1^K − F2^K. Let λmin(GI^K) be the minimal eigenvalue of GI^K and let x be a normalized eigenvector corresponding to λmin(GI^K). Then,

λmin(GI^K) = xᵀ GI^K x = xᵀ F1^K x + xᵀ F2^K x + xᵀ F3^K x ≥ xᵀ F1^K x − |xᵀ F2^K x| − |xᵀ F3^K x|.   (51)

For k ≥ 2, direct computation yields

(Gk,k)i,j = { 23/160,   i = j,
              1/192,    |i − j| = 2,
              0,        otherwise.    (52)

Thus, F1^K is symmetric and positive definite, and xᵀ F1^K x ≥ λmin(F1^K). Using the Gershgorin circle theorem we obtain the estimate λmin(F1^K) ≥ 2/15 > 0.1333. Since F2^K is symmetric, we have |xᵀ F2^K x| ≤ λmax(F2^K). For estimating λmax(F2^K) we use Theorem 2 from [26]. It states that all eigenvalues λ of F2^K satisfy

∥(0 − λI)^{−1}∥^{−1} ≤ ∥Gj,j+1∥ + ∥Gj+2,j+1∥   (53)

for at least one 1 ≤ j ≤ K − 1, where for simplicity we define G1,2, G2,1, GK,K+1, and GK+1,K as zero matrices. We have

∥Gj,j+1∥ = √λmax(Gj,j+1 Gj,j+1ᵀ) ≤ ( max_k Σ_l |(Gj,j+1 Gj,j+1ᵀ)k,l| )^{1/2} ≤ 0.0375.   (54)

Therefore, λmax(F2^K) < 2 · 0.0375 = 0.075. Since F3^K is symmetric, we have |xᵀ F3^K x| ≤ λmax(F3^K) and

λmax(F3^K) = ∥F3^K∥ ≤ max_i Σ_j |(F3^K)i,j| < 0.0082.   (55)

In summary, we have λmin(GI^K) ≥ 0.1333 − 0.0750 − 0.0082 = 0.0501, cI^K ≥ √0.0501, and thus cI ≥ √0.0501 > 0.2238. □

Theorem 8. The set Ψm = Ψb ∪ ΨI is a Riesz sequence with a Riesz lower bound cm > 0.1358.

Proof. From Corollary 1 and Theorem 7 we know that Ψb and ΨI are Riesz sequences. Due to Theorem 5, to prove that Ψm is a Riesz sequence it remains to show that the matrix H = ⟨Ψb, ΨI⟩ satisfies ∥H∥/(cb cI) < 1. We have

∥H∥ = √∥HHᵀ∥ ≤ ( max_i Σ_j |(HHᵀ)i,j| )^{1/2} < 0.0422.   (56)

Therefore, ∥H∥/(cb cI) < 0.6313 and cm > √(1 − 0.6313) · min(0.2987, 0.2238) > 0.1358. □

Theorem 9. The set Ψ is a Riesz basis of the space L2(0, 1).


Proof. We have Ψ = Φ2 ∪ Ψm. By the previous theorem, the set Ψm is a Riesz sequence. The set Φ2 is a Riesz sequence, because the functions from this set are linearly independent and the set is finite. Since λmin(⟨Φ2, Φ2⟩) = 0.0823, the Riesz lower bound c2 of the basis Φ2 satisfies c2 > √0.0823 > 0.2869. Thus it remains to prove that N = ⟨Φ2, Ψm⟩ satisfies ∥N∥/(c2 cm) < 1. Since ∥N∥ = √∥NNᵀ∥ ≤ 0.0377, the set Ψ is a Riesz sequence. Since the spline spaces Xk = span Φ2+k = span Ψ2^k, k ≥ 2, are nested, i.e. Xk ⊂ Xk+1, and their union is dense in L2(Ω), see [13,16], the set Ψ is a Riesz basis of L2(Ω). □

Corollary 2. The set Φ_{j0} ∪ ⋃_{j=j0}^∞ Ψj with the coarsest level j0 > 2 is also a Riesz basis of the space L2(0, 1).

3. Adaptation to homogeneous Dirichlet boundary conditions

In this section we construct a quadratic spline-wavelet basis such that the basis functions in the inner part of the interval are the same as the basis functions in the previous section, but the boundary scaling functions and wavelets are adapted to homogeneous Dirichlet boundary conditions. Let φ, φb2, and ψ be defined by (24), (27), and (31), respectively, and let the boundary wavelet ψbD be given by

ψbD(x) = −φb2(2x)/4 + 47φ(2x)/120 − 13φ(2x − 1)/40 + φ(2x − 2)/10.   (57)

Then, ψbD has three vanishing moments and ψbD(0) = 0. The graph of the wavelet ψbD is displayed in Fig. 2. Let φj,kD be defined by

φj,kD(x) = 2^{j/2} φ(2^j x − k + 2),   k = 2, . . . , 2^j − 1,
φj,1D(x) = 2^{j/2} φb2(2^j x),   φj,2^jD(x) = 2^{j/2} φb2(2^j (1 − x)).   (58)

For j ≥ 2 and x ∈ [0, 1] we define

ψj,kD(x) = 2^{j/2} ψ(2^j x − k + 2),   k = 2, . . . , 2^j − 1,
ψj,1D(x) = 2^{j/2} ψbD(2^j x),   ψj,2^jD(x) = 2^{j/2} ψbD(2^j (1 − x)).   (59)

We define

ΦjD = {φj,kD, k ∈ Jj},   ΨjD = {ψj,kD, k ∈ Jj},   (60)

and

Ψ D = Φ2D ∪ ⋃_{j=2}^∞ ΨjD,   Ψ_{j0}^{s,D} = Φ_{j0}D ∪ ⋃_{j=j0}^{j0−1+s} ΨjD,   j0 ≥ 2.   (61)

In the following, we prove that Ψ D is a wavelet basis of the space L2(0, 1); the set Ψ2^{s,D} is its finite-dimensional subset.

Theorem 10. The set Ψ D defined by (61) is a Riesz basis of the space L2(0, 1).

Proof. We use a similar approach as in the proofs in the previous section. Using the Gershgorin theorem we find that

ΨbD = {ψj,1D, j ≥ 2} ∪ {ψj,2^jD, j ≥ 2}   (62)

is a Riesz sequence with a Riesz lower bound cb > 0.529. The set ΨI is the same as in the previous section, because the inner wavelets are the same; therefore its Riesz lower bound is cI > 0.2238. Since the matrix H = ⟨ΨbD, ΨI⟩ satisfies ∥H∥ < 0.041, we have ∥H∥/(cb cI) < 0.35 < 1. Using a similar argument as in the proof of Theorem 8, we conclude that the set ΨmD = ΨbD ∪ ΨI is a Riesz sequence with a Riesz lower bound cm > 0.18. The set Φ2D is a Riesz sequence with a Riesz lower bound c2 > 0.365. Since N = ⟨Φ2D, ΨmD⟩ satisfies ∥N∥ < 0.0458, the set Ψ D is indeed a Riesz basis of L2(Ω). □

Corollary 3. Due to Theorem 10, the multiscale structure of the set Ψ D, the smoothness of the functions from Ψ D, and the polynomial exactness of Ψ D, the set Ψ D normalized with respect to the H1-norm is a Riesz basis of the space H01(0, 1), see [27].

4. Construction of a multidimensional wavelet basis

We present a well-known construction of multivariate wavelet bases on the hyperrectangle Ω that is based on tensorizing univariate wavelet bases and that preserves the Riesz basis property, vanishing moments and polynomial exactness. This approach is called the anisotropic approach.


Let Ψ be the wavelet basis on the interval [0, 1] constructed in Section 2, let Ij and Jj be defined by (34), and let j0 be a fixed coarsest level. For notational simplicity, we denote J_{j0−1} := I_{j0} and

ψ_{j0−1,k} := φ_{j0,k},   k ∈ J_{j0−1},   J = {(j, k), j ≥ j0 − 1, k ∈ Jj}.   (63)

Then we can write

Ψ = {ψj,k, j ≥ j0 − 1, k ∈ Jj} = {ψλ, λ ∈ J}.   (64)

Since Ψ is a Riesz basis of L2(0, 1), the set

Ψ^i = {ψj,k^i, j ≥ j0 − 1, k ∈ Jj} = {ψλ^i, λ ∈ J},   (65)

where

ψj,k^i(x) = ψj,k( (x − ai)/(bi − ai) ),   x ∈ [ai, bi],   (66)

is a Riesz basis of L2(ai, bi). Recall that for the index λ = (j, k) we denote |λ| = j. We use u ⊗ v to denote the tensor product of functions u and v, i.e. (u ⊗ v)(x1, x2) = u(x1) v(x2). For d ≥ 1 we generalize the definition of the index set J. We define

J = {λ = (λ1, . . . , λd) : λi = (ji, ki), ji ≥ j0 − 1, ki ∈ J_{ji}}.   (67)

We define multivariate basis functions as

ψλ = ⊗_{i=1}^d ψλi^i,   λ = (λ1, . . . , λd) ∈ J.   (68)

Furthermore, recall that [λ] = min_{i=1,...,d} |λi|. Since diam supp ψλi^i ≤ Ci 2^{−|λi|} with Ci = 3(bi − ai), we have

diam supp ψλ ≤ √( Σ_{i=1}^d Ci² 2^{−2|λi|} ) ≤ C √d 2^{−[λ]},   C = max_{i=1,...,d} Ci.   (69)

We define the set Ψ = {ψλ, λ ∈ J} and the set

Ψ_{j0}^s = {ψλ : λ = (λ1, . . . , λd), |λi| < j0 + s}.   (70)

Similarly, using the wavelet basis from the previous section, we construct the set Ψ D and the finite-dimensional set Ψ_{j0}^{s,D}.
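The index bookkeeping behind (67) and (70) can be sketched as follows (a bookkeeping illustration, not the paper's implementation): in each coordinate direction the admissible univariate indices consist of the scaling level j0 − 1 with 2^{j0} + 2 indices and the wavelet levels j0 ≤ j < j0 + s with 2^j indices each, so |Ψ_{j0}^s| = (2^{j0+s} + 2)^d.

```python
from itertools import product

def univariate_indices(j0, s):
    # scaling functions live on level j0 - 1 by the convention (63)
    idx = [(j0 - 1, k) for k in range(1, 2**j0 + 3)]    # |I_{j0}| = 2^j0 + 2
    for j in range(j0, j0 + s):
        idx += [(j, k) for k in range(1, 2**j + 1)]     # |J_j| = 2^j wavelets
    return idx

j0, s, d = 2, 2, 2
# anisotropic tensor-product index set (67) truncated as in (70)
lam = list(product(univariate_indices(j0, s), repeat=d))  # λ = (λ1, ..., λd)
assert len(lam) == (2**(j0 + s) + 2)**d
```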

Theorem 11. The set Ψ = {ψλ, λ ∈ J} is a Riesz basis of the space L2(Ω), and the set Ψ D normalized in the H1-norm is a Riesz basis of the space H01(Ω).

Proof. This follows from Theorem 9, Corollary 3, and the fact that the tensor product of Riesz bases is also a Riesz basis, see [28]. □

5. Wavelet-Galerkin method

We now recall the wavelet-Galerkin method for solving the problem (9). Let Ψ_{j0}^k and Ψ_{j0}^{k,D} be the multiscale bases from the previous section. For the fixed coarsest level j0 ≥ 2, let us denote

Ψ^k = { Ψ_{j0}^k,     ϵ = 0,
        Ψ_{j0}^{k,D},  ϵ > 0.    (71)

Thus Ψ^k is a wavelet basis that contains scaling functions on the coarsest level j0 and k levels of wavelets. Then Xk = span Ψ^k are finite-dimensional subspaces of V that are nested, i.e. Xk ⊂ Xk+1, k ∈ N, and

V = cl( ⋃_{k=j0−1}^∞ Xk ),   (72)

see [13,16]. The Galerkin formulation of (9) reads: Find yk ∈ Xk such that

a(yk, v) = ⟨f, v⟩   for all v ∈ Xk.   (73)

In the following theorems, we show that the method is convergent and stable.
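As a toy illustration of the Galerkin formulation (73) for the integral equation (6), the sketch below uses a stand-in basis of piecewise constants on a uniform grid rather than the wavelet bases constructed above, with p ≡ 1 and the degenerate kernel K(t, x) = t·x on (0, 1), and f chosen so that the exact solution is y(t) = t:

```python
import numpy as np

n = 256
h = 1.0 / n
t = (np.arange(n) + 0.5) * h                  # cell midpoints

# With K(t, x) = t*x, Eq. (6) becomes y(t) + t * ∫ x y(x) dx = f(t);
# for y(t) = t we have ∫ x^2 dx = 1/3, hence f(t) = t + t/3.
f = t + t / 3

# Galerkin system with the indicator functions of the n cells:
# <χ_i, χ_j> = h δ_ij and <Kχ_j, χ_i> = (∫ t χ_i dt)(∫ x χ_j dx) = h^2 t_i t_j.
A = h * np.eye(n) + h**2 * np.outer(t, t)
b = h * f
y = np.linalg.solve(A, b)                     # coefficients ≈ cell values of y

assert np.max(np.abs(y - t)) < 1e-3
```

For the wavelet bases of Sections 2–4 the same assembly applies with the entries of Section 5; the point of the wavelet basis is that the resulting matrices can be compressed to sparse form and are uniformly well-conditioned after preconditioning.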


Theorem 12. Let (A1)–(A4) be satisfied, h = 2^{−j0−k}, and let y be a solution of (9). Then there exists a unique solution yk of Eq. (73) and ∥yk − y∥V → 0. Moreover, if y ∈ H3(Ω) ∩ V, then

∥yk − y∥ ≤ Ch³,   ∥yk − y∥H1 ≤ Ch²   (74)

with a constant C that does not depend on h.

Proof. The existence and uniqueness of the solution yk of the problem (73) is a consequence of the Lax–Milgram lemma, see [1,2]. Let A be defined by (5), let Pk : V → Xk be the V-orthogonal projection, and let

Ak := Pk A|Xk.   (75)

According to Theorem 2.35 from [5] and its proof, under the assumptions that A is invertible, that there exists C ∈ R such that

∥Ak^{−1} Pk A∥ ≤ C,   (76)

and that the denseness property (72) is satisfied, it holds that ∥yk − y∥V → 0 and

∥yk − y∥V ≤ C ∥Pk y − y∥V.   (77)

We verify these assumptions. As usual, C denotes a generic constant that may take different values at different occurrences. The invertibility of A and Ak and the boundedness of the norm of Ak^{−1} are a consequence of the Lax–Milgram lemma. Moreover, due to the orthogonality of the projections Pk and the boundedness of A, we have

∥Ak^{−1}∥ ≤ 1/α,   ∥Pk∥ = 1,   ∥A∥ ≤ β,   (78)

where α is the coercivity constant from (10). Therefore, (76) is satisfied. Hence, the error depends on the approximation properties of the spline spaces Xk, which are well known. For y ∈ H3(Ω) ∩ V, we have

∥Pk y − y∥H1 ≤ Ch^{−1} ∥Pk y − y∥ ≤ Ch² ∥y∥H3,   (79)

see e.g. Theorems 3.3.2, 3.4.1 and 3.10.2 in [27]. □

Theorem 13. Let Pk : V → Xk be the V-orthogonal projection and Ak be defined by (75). The above method is stable in the sense that there exist nonnegative constants µ and ν, a positive constant δ, and a positive integer q such that for any k ≥ q the operator Ak is invertible, the perturbed equation (Ak + Ek) ỹk = Pk f + gk has a unique solution ỹk ∈ Xk for any gk ∈ Xk and any bounded linear operator Ek : Xk → Xk with ∥Ek∥ < δ, and ỹk satisfies

∥ỹk − yk∥V ≤ µ ∥Ek∥ ∥yk∥V + ν ∥gk∥V.   (80)

Proof. According to Theorem 2.47 from [5], the stability is a consequence of the uniform boundedness of ∥Ak^{−1}∥, which has already been shown in the proof of Theorem 12. □

From Theorem 12, the convergence rate depends on the chosen discretization spaces and not directly on the chosen bases of these spaces. Since the constructed basis generates the same spaces as quadratic B-splines or other quadratic spline wavelets, it can be expected that the error will be similar. The main difference therefore lies in the sparsity of the discretization matrices, their condition numbers, and the number of iterations needed to solve the problem with the desired accuracy. We write the function $y_k$ as

$$y_k = \sum_{\psi_\lambda \in \Psi^k} c_\lambda^k \psi_\lambda. \tag{81}$$

Let $G^k$ and $K^k$ be matrices with the entries

$$G^k_{\mu,\lambda} = \epsilon \left\langle \nabla\psi_\lambda, \nabla\psi_\mu \right\rangle + \left\langle p\,\psi_\lambda, \psi_\mu \right\rangle, \qquad K^k_{\mu,\lambda} = \left\langle K\psi_\lambda, \psi_\mu \right\rangle, \tag{82}$$

for $\psi_\lambda, \psi_\mu \in \Psi^k$. Let $f^k$ be a vector with the entries

$$f^k_\mu = \left\langle f, \psi_\mu \right\rangle, \qquad \psi_\mu \in \Psi^k, \tag{83}$$

and let $c^k$ be the column vector of the coefficients $c_\lambda^k$. We obtain the system

$$A^k c^k = f^k, \qquad \text{where } A^k = G^k + K^k. \tag{84}$$

We apply the standard Jacobi diagonal preconditioning to the system (84). Let $D^k$ be a diagonal matrix with the diagonal elements

$$D^k_{\lambda,\lambda} := \sqrt{A^k_{\lambda,\lambda}} = \sqrt{a(\psi_\lambda, \psi_\lambda)}. \tag{85}$$


We define the preconditioned system

$$\tilde{A}^k \tilde{u}^k = \tilde{f}^k \tag{86}$$

with

$$\tilde{A}^k = \left(D^k\right)^{-1} A^k \left(D^k\right)^{-1}, \qquad \tilde{f}^k = \left(D^k\right)^{-1} f^k, \qquad \tilde{u}^k = D^k c^k. \tag{87}$$

We solve the system (86) by the generalized minimal residual method (GMRES) or, in the case where the system matrix is symmetric and positive definite, by the conjugate gradient method.

Due to the coercivity of the bilinear form $a$, the matrices $\tilde{A}^k$ have uniformly bounded condition numbers (cond).

Theorem 14. There exists a constant $C$ such that $\operatorname{cond} \tilde{A}^k \le C$ for all $k \ge 0$.

Proof. Let us view the wavelet basis $\Psi^k$ as a column vector. Let $\Gamma^k$ be a diagonal matrix with the diagonal entries $\Gamma^k_{\lambda,\lambda} = \|\psi_\lambda\|_V$. Theorem 9 and the fact that $c < \|\psi_\lambda\| < C$ for some $0 < c < C < \infty$ imply that $\left(\Gamma^k\right)^{-1} \Psi^k$ is a Riesz basis in $V$, i.e. there exist constants $0 < c_\Psi < C_\Psi < \infty$ such that

$$c_\Psi \|c\| \le \left\| c^T \left(\Gamma^k\right)^{-1} \Psi^k \right\|_V \le C_\Psi \|c\| \qquad \text{for all } c \in l^2(\mathcal{J}). \tag{88}$$

Due to the continuity and coercivity of the bilinear form $a$, we have

$$\alpha \|\psi_\lambda\|_V^2 \le a(\psi_\lambda, \psi_\lambda) \le \beta \|\psi_\lambda\|_V^2, \tag{89}$$

which implies that

$$\sqrt{\alpha}\, I \le \left(\Gamma^k\right)^{-1} D^k \le \sqrt{\beta}\, I, \tag{90}$$

where $I$ is the identity matrix. Then $\sqrt{\alpha}\|c\| \le \|d\| \le \sqrt{\beta}\|c\|$ for $d^T = c^T \left(\Gamma^k\right)^{-1} D^k$. From (88) and (90), we have

$$c_\Psi \|d\| \le c_\Psi \sqrt{\beta} \|c\| \le \sqrt{\beta} \left\| c^T \left(\Gamma^k\right)^{-1} \Psi^k \right\|_V = \sqrt{\beta} \left\| d^T \left(D^k\right)^{-1} \Psi^k \right\|_V \tag{91}$$

and

$$\sqrt{\alpha} \left\| d^T \left(D^k\right)^{-1} \Psi^k \right\|_V = \sqrt{\alpha} \left\| c^T \left(\Gamma^k\right)^{-1} \Psi^k \right\|_V \le C_\Psi \sqrt{\alpha} \|c\| \le C_\Psi \|d\|. \tag{92}$$

Furthermore, we have

$$d^T \tilde{A}^k d = d^T \left(D^k\right)^{-1} A^k \left(D^k\right)^{-1} d = a\!\left( d^T \left(D^k\right)^{-1} \Psi^k, \; d^T \left(D^k\right)^{-1} \Psi^k \right) \le \beta \left\| d^T \left(D^k\right)^{-1} \Psi^k \right\|_V^2 \le \frac{\beta C_\Psi^2}{\alpha} \|d\|^2, \tag{93}$$

and similarly $d^T \tilde{A}^k d \ge \alpha c_\Psi^2 \|d\|^2 / \beta$. According to Theorem 1.3-1 from [29], we have

$$\left\| \tilde{A}^k \right\| \le 2 \sup_{0 \ne d \in l^2(\mathcal{J})} \frac{d^T \tilde{A}^k d}{\|d\|^2}. \tag{94}$$

Due to (93) and (94), we have $\|\tilde{A}^k\| \le 2\beta C_\Psi^2 / \alpha$. Similarly, we estimate

$$\left\| \left(\tilde{A}^k\right)^{-1} \right\| = \sup_{0 \ne u \in l^2(\mathcal{J})} \frac{\left\| \left(\tilde{A}^k\right)^{-1} u \right\|}{\|u\|} \le \sup_{0 \ne d \in l^2(\mathcal{J})} \frac{\|d\|}{\left\| \tilde{A}^k d \right\|} \le \left( \inf_{0 \ne d \in l^2(\mathcal{J})} \frac{d^T \tilde{A}^k d}{\|d\|^2} \right)^{-1} \le \frac{\beta}{\alpha c_\Psi^2}. \tag{95}$$

Therefore,

$$\operatorname{cond} \tilde{A}^k = \left\| \tilde{A}^k \right\| \left\| \left(\tilde{A}^k\right)^{-1} \right\| \le \frac{2\beta^2 C_\Psi^2}{\alpha^2 c_\Psi^2}. \qquad \square \tag{96}$$
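The effect of the diagonal scaling (85)–(87) can be illustrated numerically. The following sketch uses a synthetic symmetric positive definite matrix whose diagonal varies over many orders of magnitude, mimicking how $a(\psi_\lambda, \psi_\lambda)$ grows across wavelet levels; it is a toy model, not the actual wavelet matrices of the paper:

```python
import numpy as np

# Toy illustration of the Jacobi diagonal preconditioning (85)-(87):
# A = S M S mimics a stiffness matrix whose diagonal grows like 4^j across
# levels, while the underlying matrix M is well conditioned.
n = 8
S = np.diag(2.0 ** np.arange(n))              # level-dependent scaling
M = np.eye(n) + 0.1 * np.ones((n, n))         # well-conditioned SPD core
A = S @ M @ S

D_inv = np.diag(1.0 / np.sqrt(np.diag(A)))    # (D^k)^{-1} with D_{ll} = sqrt(A_{ll})
A_tilde = D_inv @ A @ D_inv                   # preconditioned matrix as in (87)

cond_A = np.linalg.cond(A)
cond_A_tilde = np.linalg.cond(A_tilde)
```

Here cond(A) grows roughly like the square of the largest scale, while cond(Ã) stays close to cond(M); for the wavelet matrices of the paper, this uniform bound is precisely the statement of Theorem 14.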


Fig. 3. The sparsity pattern of the matrix Gk for seven levels of wavelets (left) and the discretization matrix for Eq. (110) (right).

Now we study the structure of the matrices $G^k$ and $K^k$. Let the size of these matrices be $N \times N$ (with $N$ dependent on $k$). Due to property (69), the matrix $G^k$ has $O(N \ln N)$ nonzero entries and a so-called finger-band pattern. For $\epsilon = 0$, $p = 1$, and the wavelet basis constructed in Section 2, the pattern is displayed in Fig. 3. For seven levels of wavelets the matrix has size $514 \times 514$ and thus 264196 entries, but only 16294 of them are nonzero.

For the standard Galerkin method using B-splines, the matrix $K^k$ is full. However, it is known that for some classes of operators and some types of bases, many entries of the matrix $K^k$ are small, so they can be thresholded and $K^k$ can be approximated by a sparse matrix. The decay estimates for the entries of the matrix $K^k$ are typically derived for isotropic wavelet systems, see e.g. [5–7]. In this paper, we present decay estimates for anisotropic wavelet systems and a kernel $K$ that is sufficiently smooth.

Theorem 15. Let $\psi_\lambda, \psi_\mu \in \Psi^k$ be wavelets with $L = 3$ vanishing moments and let assumption (A2) be fulfilled for $m = 2L$. Then

$$\left| \int_\Omega \int_\Omega K(x,t)\, \psi_\lambda(x)\, \psi_\mu(t)\, dx\, dt \right| \le C\, 2^{-([\lambda]+[\mu])(L+d/2)} \tag{97}$$

with a constant $C$ independent of $\lambda$ and $\mu$.

Proof. Let the centres of the supports of $\psi_\lambda$ and $\psi_\mu$ be denoted by $x_\lambda$ and $t_\mu$, respectively. For a multi-index $l = (l_1, \ldots, l_d) \in \mathbb{N}_0^d$, $\mathbb{N}_0 = \mathbb{N} \cup \{0\}$, and $x = (x_1, \ldots, x_d) \in \mathbb{R}^d$, we denote $|l| = l_1 + \cdots + l_d$, $l! = l_1! \ldots l_d!$, and $x^l = x_1^{l_1} \ldots x_d^{l_d}$. Due to (A2) and the Taylor theorem, there exists a function $P$ such that for fixed $t$ the function $P(x,t)$ is a polynomial in $x$ of degree at most $L-1$, and there exists a function $Q$ such that for fixed $x$ the function $Q(x,t)$ is a polynomial in $t$ of degree at most $L-1$, such that

$$K(x,t) = P(x,t) + Q(x,t) + \sum_{|l|=L} \sum_{|m|=L} C_{l,m}\, (x - x_\lambda)^l \left(t - t_\mu\right)^m \tag{98}$$

with

$$C_{l,m} = \frac{1}{l!\,m!} \frac{\partial^l \partial^m K\bigl(\xi(x,t)\bigr)}{\partial x^l\, \partial t^m} \tag{99}$$

and

$$\xi(x,t) = \left(x_\lambda, t_\mu\right) + \alpha(x,t) \left( (x,t) - \left(x_\lambda, t_\mu\right) \right) \tag{100}$$

for some $\alpha(x,t) \in [0,1]$. Since $\Psi^k$ has $L$ vanishing moments, we have

$$\int_\Omega \int_\Omega \left( P(x,t) + Q(x,t) \right) \psi_\lambda(x)\, \psi_\mu(t)\, dx\, dt = 0. \tag{101}$$

Let us denote

$$K_{\mu,\lambda} = \int_\Omega \int_\Omega K(x,t)\, \psi_\lambda(x)\, \psi_\mu(t)\, dx\, dt. \tag{102}$$

If $|l| = L$, then

$$\left| (x - x_\lambda)^l \right| \le C \prod_{i=1}^d 2^{-|\lambda_i| l_i} \le C\, 2^{-L[\lambda]} \tag{103}$$

and

$$\int_\Omega |\psi_\lambda(x)|\, dx \le C\, 2^{-|\lambda_1|/2 - \cdots - |\lambda_d|/2} \le C\, 2^{-[\lambda] d/2}. \tag{104}$$

Hence, combining (98), (101), (103), and (104), we have

$$\left| K_{\mu,\lambda} \right| \le \sum_{|l|=L} \sum_{|m|=L} \left| C_{l,m} \right| \int_\Omega \int_\Omega \left| (x - x_\lambda)^l \right| \left| \left(t - t_\mu\right)^m \right| \left| \psi_\lambda(x)\, \psi_\mu(t) \right| dx\, dt \le C\, 2^{-L[\lambda] - L[\mu] - [\lambda]d/2 - [\mu]d/2}. \qquad \square \tag{105}$$
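The decay rate in (97) can be checked numerically in the simplest setting. The sketch below is an assumption-laden toy: it uses $d = 1$ and Haar wavelets, which have only $L = 1$ vanishing moment, so for $\lambda = \mu = (j,0)$ the predicted entry size is $2^{-([\lambda]+[\mu])(L+d/2)} = 2^{-3j}$, i.e. each refinement shrinks the entry by roughly $2^3 = 8$; a quadratic spline wavelet with $L = 3$ as in Theorem 15 would give a factor $2^7$ instead. The smooth separable kernel $K(x,t) = e^{x+t}$ is chosen so the double integral factorizes:

```python
import numpy as np

def trap(y, x):
    """Composite trapezoidal rule (kept explicit for portability)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def haar_entry(j, pts=20001):
    """|integral of e^{x+t} psi_{j,0}(x) psi_{j,0}(t)| for the Haar wavelet."""
    h = 2.0 ** (-j)
    x1 = np.linspace(0.0, h / 2.0, pts)   # psi = +2^{j/2} on the first half
    x2 = np.linspace(h / 2.0, h, pts)     # psi = -2^{j/2} on the second half
    one_d = 2.0 ** (j / 2.0) * (trap(np.exp(x1), x1) - trap(np.exp(x2), x2))
    return one_d ** 2                     # kernel separates: entry = (1D integral)^2

# the ratio of entries on consecutive levels should approach 2^3 = 8
ratios = [haar_entry(j) / haar_entry(j + 1) for j in range(3, 6)]
```

The ratios decrease towards 8 as $j$ grows, matching the per-level rate $L + d/2 = 3/2$ in the exponent of (97) for this toy case.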

It is clear from the proof that Theorem 15 holds for any anisotropic wavelet basis with $L$ vanishing moments. Let $\tilde{A}^k$ be the matrix from (86). Due to Theorem 15 and the local support of the wavelets, many entries of the matrix $\tilde{A}^k$ are small, so they can be thresholded and the matrix $\tilde{A}^k$ can be represented by a sparse matrix $\hat{A}^k$. More precisely, let $T$ be a chosen threshold and let $\hat{A}^k$ be defined as

$$\hat{A}^k_{m,l} = \begin{cases} \tilde{A}^k_{m,l}, & \text{if } \left| \tilde{A}^k_{m,l} \right| > T, \\ 0, & \text{if } \left| \tilde{A}^k_{m,l} \right| \le T. \end{cases} \tag{106}$$

Then,

$$\left\| \tilde{A}^k - \hat{A}^k \right\| \le \max_m \sum_l \left| \left( \tilde{A}^k - \hat{A}^k \right)_{m,l} \right| \le T n_k, \tag{107}$$

where $n_k \times n_k$ is the size of the matrix $\tilde{A}^k$. Thus, if we choose the threshold $T$ small enough, then the norm of $\tilde{A}^k - \hat{A}^k$ is small. If $\tilde{c}^k$ is the solution of the system with the matrix $\tilde{A}^k$, and $\hat{c}^k$ is the solution of the system with the matrix $\hat{A}^k$ and the same right-hand side, then it can be expected that $\tilde{c}^k$ is close to $\hat{c}^k$. More precisely, if

$$\gamma := \left\| \left( \tilde{A}^k \right)^{-1} \left( \tilde{A}^k - \hat{A}^k \right) \right\| < 1, \tag{108}$$

then

$$\frac{\left\| \tilde{c}^k - \hat{c}^k \right\|}{\left\| \hat{c}^k \right\|} \le \frac{\operatorname{cond} \tilde{A}^k \left\| \tilde{A}^k - \hat{A}^k \right\|}{(1 - \gamma) \left\| \tilde{A}^k \right\|} \le \frac{T n_k \operatorname{cond} \tilde{A}^k}{(1 - \gamma) \left\| \tilde{A}^k \right\|}. \tag{109}$$
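As a concrete sanity check, the thresholding rule (106) and the row-sum bound (107) can be exercised on a small synthetic matrix with exponentially decaying off-diagonal entries, a stand-in for the actual wavelet matrix:

```python
import numpy as np

n, T = 50, 1e-6
i = np.arange(n)
# synthetic stand-in for the matrix: diagonally dominant, with entries that
# decay away from the diagonal, loosely mimicking the estimate (97)
A = 2.0 * np.eye(n) + 0.5 * 2.0 ** (-np.abs(i[:, None] - i[None, :]))

A_hat = np.where(np.abs(A) > T, A, 0.0)     # thresholding rule (106)
err_norm = np.linalg.norm(A - A_hat, np.inf)  # row-sum norm as in (107)

b = np.ones(n)
c_tilde = np.linalg.solve(A, b)             # solution with the full matrix
c_hat = np.linalg.solve(A_hat, b)           # solution with the compressed one
rel_diff = np.linalg.norm(c_tilde - c_hat) / np.linalg.norm(c_hat)
```

The discarded entries are bounded by $T$ each, so the norm of the difference stays below $T n$ as in (107), and since this stand-in matrix is well conditioned, the relative difference of the two solutions remains far smaller than the bound (109).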

Moreover, due to Theorem 14, the condition numbers and the norms of the matrices $\tilde{A}^k$ are uniformly bounded. The inequality (109) is a consequence of the well-known theorem on perturbed systems, see e.g. Theorem 7.12 in [30].

Remark 4 (Compression Strategy). In some applications where the left-hand side of the equation is fixed and the equation is solved for various right-hand sides, the system matrix can be computed, analysed and compressed only once as a preprocessing step, and one can then work with the compressed matrix. Another approach is to use the estimate (97) together with the known structure of the matrices $G^k$ to compute only the significant entries of the matrix $A^k$.

6. Numerical examples

In this section we present several numerical experiments. We use the following notation: $e_2$ is the $L^2$-norm of the error, $N \times N$ is the size of the system matrix, $K$ denotes the finest level of the wavelet basis, NNZ is the number of nonzero entries of the system matrix, cond is its condition number, and it is the number of iterations of the chosen iterative method. We define the compression ratio as $CR = NNZ/N^2$. B-spline denotes the standard quadratic B-spline basis with the Schoenberg sequence of knots, see e.g. [12]. The quadratic semiorthogonal wavelet basis constructed by Chui and Quak in [16] is denoted by CQ, the biorthogonal quadratic spline wavelet bases with $k$ vanishing moments from [12] are denoted by bior3.k, and the wavelet bases from this paper are denoted by new. Spline-Galerkin denotes the Galerkin method with quadratic B-splines. Spline-collocation denotes the collocation method with quadratic B-splines (with $2^j$ uniform collocation nodes for level $j$ for Eqs. (113) and (114), and these nodes together with $\{0, 1\}$ for Eq. (110)), Legendre-collocation denotes the collocation method with Legendre polynomials and Gauss–Legendre nodes, and Quadrature method denotes the standard quadrature method using Simpson's rule. For details on these methods see e.g. [5].

Table 1
L2 errors and the number of nonzero entries of compressed and uncompressed matrices for Eq. (110).

             T = 10^-8             T = 10^-10            Uncompressed
K     N      e2        NNZ         e2        NNZ         e2        NNZ
6     66     7.04e−1   1780        7.04e−1   1814        7.04e−1   4356
7     130    4.01e−1   3992        4.01e−1   4398        4.01e−1   16900
8     258    2.06e−2   9204        2.06e−2   10274       2.06e−2   66564
9     514    8.71e−4   21182       8.71e−4   22262       8.71e−4   264196
10    1026   2.02e−4   44052       2.02e−4   49346       2.02e−4   1052676
11    2050   2.37e−5   106396      2.37e−5   111744      2.37e−5   4202500
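The quantities defined above can be read off Table 1 directly. As an illustration, the snippet below computes the compression ratio for the row $K = 9$, $N = 514$ of Table 1 with threshold $T = 10^{-10}$, and estimates the observed convergence order from the last two errors, which should be close to the theoretical rate 3 of Theorem 12:

```python
import math

# row K = 9 of Table 1 (threshold T = 1e-10): N = 514, NNZ = 22262
N, NNZ = 514, 22262
CR = NNZ / N ** 2                     # compression ratio CR = NNZ / N^2

# errors e2 for K = 10 and K = 11 from Table 1; h is halved between levels
e_coarse, e_fine = 2.02e-4, 2.37e-5
order = math.log2(e_coarse / e_fine)  # observed convergence order
```

Here CR is about 0.084, i.e. fewer than 9% of the entries are kept, and the observed order is about 3.1, consistent with the $O(h^3)$ estimate (74).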

Table 2
The L2-norms of the errors of the solution of Eq. (110). Spline-Galerkin corresponds to the Galerkin method with the bases new, B-spline, CQ, and bior3.3.

       Spline-Galerkin              Spline-collocation
N      e2        cond        N      e2        cond
66     7.04e−1   7.64e0      66     1.00e0    4.12e0
130    4.01e−1   7.62e0      130    4.05e−1   4.17e0
258    2.06e−2   7.62e0      258    2.51e−2   4.20e0
514    8.71e−4   7.62e0      514    2.01e−3   4.22e0
1026   2.02e−4   7.61e0      1026   2.15e−4   4.23e0
2050   2.37e−5   7.61e0      2050   2.48e−5   4.23e0

       Legendre-collocation        Quadrature method
N      e2        cond        N      e2        cond
66     1.80e1    1.74e18     67     9.82e−1   2.00e0
130    1.81e1    3.99e18     151    3.54e−1   2.00e0
258    1.51e1    2.69e18     259    1.32e−1   2.00e0
514    1.21e−13  5.18e19     515    6.63e−2   2.00e0
1026   1.61e−12  1.14e20     1027   8.69e−3   2.00e0
2050   3.39e−13  1.30e20     2051   2.18e−3   2.00e0

6.1. One-dimensional integral equation

We consider the integral equation

$$(2-t)\, y(t) + \frac{1}{\pi} \int_0^1 \sin(t-x)\, y(x)\, dx = f(t), \qquad t \in (0,1), \tag{110}$$
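As a self-contained reference for an equation of the form (110), the sketch below implements the quadrature (Nyström) method with Simpson's rule. To keep the example short it uses a smooth manufactured solution $y(t) = t^2$, not the oscillatory solution of the experiment; the right-hand side then follows from the closed form $\int_0^1 x^2 \sin(t-x)\, dx = 2\cos t + 2\sin(t-1) - \cos(t-1)$:

```python
import numpy as np

# Nystrom (quadrature) method for
#   (2 - t) y(t) + (1/pi) * integral_0^1 sin(t - x) y(x) dx = f(t)
# with Simpson's rule on n+1 uniform nodes (n even). Manufactured smooth
# solution y(t) = t^2; the integral term has the closed form
#   integral_0^1 x^2 sin(t - x) dx = 2 cos(t) + 2 sin(t - 1) - cos(t - 1).
n = 64
t = np.linspace(0.0, 1.0, n + 1)
w = np.full(n + 1, 2.0)
w[1::2] = 4.0
w[0] = w[-1] = 1.0
w *= 1.0 / (3.0 * n)                        # Simpson weights, sum to 1

y_exact = t ** 2
f = (2.0 - t) * y_exact + (2.0 * np.cos(t) + 2.0 * np.sin(t - 1.0)
                           - np.cos(t - 1.0)) / np.pi

# collocate at the quadrature nodes:
#   (2 - t_i) y_i + (1/pi) sum_j w_j sin(t_i - t_j) y_j = f(t_i)
A = np.diag(2.0 - t) + w[None, :] * np.sin(t[:, None] - t[None, :]) / np.pi
y = np.linalg.solve(A, f)
err = np.max(np.abs(y - y_exact))
```

For this smooth solution the error comes only from Simpson's rule and decays like $O(n^{-4})$; the oscillatory solution $\sin(120\pi t)$ of the experiment needs a much finer grid, which is where the wavelet compression of the paper pays off.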

with the oscillatory solution $y(t) = \sin(120\pi t)$. We use the basis $\Psi_2^{K-2}$ from Section 2. The structure of the discretization matrix is shown in Fig. 3. The results for the thresholds $T = 10^{-8}$, $T = 10^{-10}$, and for uncompressed matrices are presented in Table 1. Table 2 presents the results for other methods. The results for the Galerkin method with B-splines are displayed in the column labelled Spline-Galerkin. The $L^2$ errors were (to at least three decimal digits) the same as the errors for the new method, as was the case for the Galerkin method with semiorthogonal wavelets and bior3.3 wavelets. Table 2 also displays the results for the spline collocation method, the Legendre collocation method, and the quadrature method. All four methods lead to matrices that are full, and in the case of the Legendre collocation method they are also very badly conditioned. In Table 3, the numbers of nonzero entries and the condition numbers of the compressed matrices with the threshold $T = 10^{-10}$ are listed for various piecewise quadratic bases. We also tested the biorthogonal quadratic spline wavelet bases with 5 and 7 vanishing moments from [12], but the results were significantly worse than for the bases listed in Table 3, both with respect to the number of nonzero entries and the condition number of the compressed discretization matrix. The number of nonzero entries of the system matrix is the smallest for the new basis.

6.2. Two-dimensional integral equation

We consider $\Omega = (0,1)^2$ and the equation

$$p(t_1, t_2)\, y(t_1, t_2) + \int\!\!\int_\Omega K(x_1, x_2)\, y(x_1, x_2)\, dx_1\, dx_2 = f(t_1, t_2), \tag{111}$$

where

$$p(t_1, t_2) = (t_1 + 0.1)^2 (1.1 + \sin 10\pi t_2), \qquad K(x_1, x_2) = x_1 \sin x_2 + 1, \tag{112}$$

$(t_1, t_2) \in \Omega$, and the solution is $y(t_1, t_2) = t_1 \cos(50\pi t_2)$. We present the results for the basis $\Psi_3^{K-3}$, because they were slightly better than the results for the basis $\Psi_2^{K-2}$. We solve the resulting system by the GMRES method with the following


Table 3
The number of nonzero entries and the condition numbers of compressed matrices with threshold 10^-10 for various bases for Eq. (110).

      B-spline            bior3.3           CQ                new
K     NNZ       cond      NNZ      cond     NNZ      cond     NNZ      cond
3     100       7.9       100      8.4      100      7.9      100      8.8
4     324       7.7       312      7.9      316      25.1     286      8.9
5     1156      7.6       932      7.9      956      37.4     726      8.9
6     4356      7.6       2552     7.9      2604     53.0     1814     8.9
7     16900     7.6       6494     7.9      6628     64.3     4398     8.9
8     66564     7.6       14002    7.9      16012    75.0     10274    8.9
9     264196    7.6       31524    7.9      34060    84.0     22262    8.9
10    1052676   7.6       70410    7.9      75380    92.0     49346    8.9
11    4202500   7.6       162190   7.9      168386   98.9     111744   8.9

Table 4
The compression ratios for T = 10^-8, the number of outer (inner) GMRES iterations and the L2 errors for Eq. (111).

          CQ                              new
N         e2        it       CR           e2        it       CR
1156      4.10e−1   43(9)    2.67e−1      4.10e−1   46(3)    1.17e−1
4356      1.06e−1   42(7)    1.31e−1      1.06e−1   35(7)    8.23e−2
16900     6.12e−3   43(2)    4.82e−2      6.12e−3   35(1)    3.15e−2
66564     5.95e−4   43(2)    1.36e−2      5.95e−4   34(1)    9.43e−3
264196    6.70e−5   43(2)    3.25e−3      6.70e−5   34(10)   2.44e−3

Table 5
The compression ratios, L2 errors and the condition numbers for Eq. (113).

        bior3.5                        new
N       e2        cond     CR          e2        cond     CR
64      7.19e−2   13.03    6.34e−1     7.19e−2   10.77    3.94e−1
128     4.97e−3   13.16    4.00e−1     4.97e−3   10.86    1.98e−1
256     5.15e−4   13.20    2.32e−1     5.15e−4   10.90    9.96e−2
512     6.12e−5   13.23    1.28e−1     6.12e−5   10.93    5.05e−2
1024    7.53e−6   13.23    6.77e−2     7.53e−6   10.95    2.56e−2
2048    9.03e−7   13.23    3.51e−2     9.03e−7   10.96    1.29e−2

parameters: a restart after ten iterations, a maximum of 500 outer iterations, and a stopping criterion requiring the relative residual to be less than $10^{-13}$. In Table 4, the results for the bases CQ and new are presented. Since the Galerkin method with B-splines, the spline-collocation method, the Legendre-collocation method and the quadrature method lead to full matrices, we do not use them for this problem. The Galerkin method with bior3.3 wavelets leads to badly conditioned matrices, and the GMRES method stopped after 500 iterations without reaching the desired residual.

6.3. One-dimensional integro-differential equation I

We use the Galerkin method with the basis $\Psi_2^{K-2,D}$ from Section 3 for the numerical solution of the integro-differential equation

$$-y''(t) + y(t) - \frac{1}{2} \int_0^1 \sin(\pi x + \pi t)\, y(x)\, dx = f(t), \qquad t \in (0,1), \tag{113}$$

with the boundary conditions $y(0) = y(1) = 0$. The right-hand side $f$ is such that the solution is $y(t) = \sin(40\pi t)$. We use the threshold $T = 10^{-10}$ for matrix compression. Since the CQ basis cannot be adapted to boundary conditions while preserving both semiorthogonality and the number of vanishing moments, and since the boundary-adapted bior3.5 wavelets led to better results than the boundary-adapted bior3.3 wavelets, we present in Table 5 the results for the Galerkin method with the bases bior3.5 and new. We also use the Galerkin method and the collocation method, both with quadratic B-splines adapted to homogeneous Dirichlet boundary conditions. The results are listed in Table 6. For these methods the matrices were full and badly conditioned.

Table 6
The condition numbers and L2 errors for Eq. (113).

        Spline-Galerkin         Spline-collocation
N       e2        cond          e2        cond
64      7.19e−2   5.65e2        1.22e−1   1.51e3
128     4.97e−3   2.26e3        2.84e−2   6.03e3
256     5.15e−4   9.05e3        7.10e−3   2.41e4
512     6.12e−5   3.62e4        1.77e−3   9.65e4
1024    7.72e−6   1.44e5        4.44e−4   3.86e5

Table 7
The compression ratios, L2 errors and the condition numbers for Eq. (114).

        bior3.5                       new
N       e2        cond    CR          e2        cond    CR
64      2.78e−2   12.1    6.08e−1     2.78e−2   10.8    3.10e−1
128     3.45e−3   12.1    3.94e−1     3.45e−3   10.8    1.74e−1
256     4.31e−4   12.2    2.31e−1     4.31e−4   10.9    9.35e−2
512     5.38e−5   12.2    1.27e−1     5.38e−5   10.9    4.90e−2
1024    6.71e−6   12.2    6.76e−2     6.71e−6   10.9    2.52e−2
2048    8.08e−7   12.2    3.51e−2     8.08e−7   10.9    1.28e−2

Table 8
The condition numbers and L2 errors for Eq. (114).

        Spline-Galerkin         Spline-collocation
N       e2        cond          e2        cond
64      2.78e−2   5.18e2        2.00e0    1.38e3
128     3.45e−3   2.07e3        5.01e−1   5.52e3
256     4.31e−4   8.28e3        1.25e−1   2.21e4
512     5.38e−5   3.31e4        3.14e−2   8.33e4
1024    6.71e−6   1.33e5        7.84e−3   3.53e5

6.4. One-dimensional integro-differential equation II

We solve the integro-differential equation

$$-y''(t) + 2y(t) - \int_0^1 (x - t)\, y(x)\, dx = f(t), \qquad t \in (0,1), \tag{114}$$

with the boundary conditions $y(0) = y(1) = 0$. The right-hand side $f$ is such that the solution is $y(t) = (1-t)\left(1 - e^{10t}\right)$. We use the basis $\Psi_2^{K-2,D}$ and the threshold $T = 10^{-10}$. We use the same methods as in the previous example. The results are presented in Tables 7 and 8.
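The structure of Eq. (114) can also be illustrated with a simple reference discretization: a second-order finite-difference scheme for $-y'' + 2y$ coupled with Simpson's rule for the integral term. This is not the paper's wavelet-Galerkin method, and the smooth manufactured solution $y(t) = \sin(\pi t)$ replaces the boundary-layer solution of the experiment; the right-hand side then follows from $\int_0^1 \sin(\pi x)\, dx = 2/\pi$ and $\int_0^1 x \sin(\pi x)\, dx = 1/\pi$:

```python
import numpy as np

# Finite-difference + Simpson sketch for
#   -y''(t) + 2 y(t) - integral_0^1 (x - t) y(x) dx = f(t),  y(0) = y(1) = 0,
# with manufactured solution y(t) = sin(pi t), for which
#   f(t) = (pi^2 + 2) sin(pi t) - 1/pi + 2 t / pi.
n = 128                                  # n even, mesh width h = 1/n
h = 1.0 / n
t = np.linspace(0.0, 1.0, n + 1)
w = np.full(n + 1, 2.0)
w[1::2] = 4.0
w[0] = w[-1] = 1.0
w *= h / 3.0                             # Simpson weights

f = (np.pi ** 2 + 2.0) * np.sin(np.pi * t) - 1.0 / np.pi + 2.0 * t / np.pi

# interior unknowns y_1, ..., y_{n-1}; the boundary values vanish
idx = np.arange(1, n)
A = np.diag(np.full(n - 1, 2.0 / h ** 2 + 2.0))
A += np.diag(np.full(n - 2, -1.0 / h ** 2), 1)
A += np.diag(np.full(n - 2, -1.0 / h ** 2), -1)
# integral term: subtract w_j (x_j - t_i) for each node pair
A -= w[None, idx] * (t[idx][None, :] - t[idx][:, None])
y = np.linalg.solve(A, f[idx])
err = np.max(np.abs(y - np.sin(np.pi * t[idx])))
```

The error of this scheme is $O(h^2)$, in contrast to the $O(h^3)$ rate of the quadratic-spline Galerkin method of Theorem 12, and the resulting matrix is dense because of the integral term.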

6.5. Two-dimensional integro-differential equation

For $\Omega = (0,1)^2$ we consider the equation

$$-\epsilon \Delta y(t_1, t_2) + y(t_1, t_2) + \int\!\!\int_\Omega \frac{e^{x_1 + t_1} x_2 t_2}{2}\, y(x_1, x_2)\, dx_1\, dx_2 = f(t_1, t_2), \tag{115}$$

with homogeneous Dirichlet boundary conditions, $\epsilon = 10^{-5}$, and with the solution $y(t_1, t_2) = t_1 t_2 \left(1 - e^{50t_1 - 50}\right)\left(1 - e^{50t_2 - 50}\right)$. We use the basis $\Psi_3^{K-3,D}$ from Section 4 and the threshold $T = 10^{-10}$. For this equation the system matrix is symmetric and positive definite, and thus we use the conjugate gradient method. The iterations stop if the relative residual is less than $10^{-10}$. The results for the new and bior3.5 wavelets are listed in Table 9.
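The conjugate gradient loop used for the symmetric positive definite systems can be sketched as follows; this is a textbook implementation with the same relative-residual stopping rule, applied here to a random SPD stand-in rather than the actual discretization of (115):

```python
import numpy as np

def cg(A, b, rtol=1e-10, maxiter=1000):
    """Conjugate gradient method with a relative-residual stopping criterion."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    b_norm = np.linalg.norm(b)
    for it in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) / b_norm < rtol:
            return x, it
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxiter

# stand-in SPD system (not the wavelet discretization)
rng = np.random.default_rng(0)
B = rng.standard_normal((30, 30))
A = B @ B.T + 30.0 * np.eye(30)
b = rng.standard_normal(30)
x, its = cg(A, b)
residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Since the convergence of CG is governed by the condition number, the uniform bound of Theorem 14 is what keeps the iteration counts in Table 9 nearly level-independent for the new basis.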

Conclusion

We constructed quadratic spline-wavelet bases with three vanishing moments for the spaces $L^2(\Omega)$ and $H_0^1(\Omega)$, where $\Omega$ is the hyperrectangle. We derived a sufficient condition for a union of Riesz sequences to also be a Riesz sequence, and we proved the Riesz basis property for the constructed bases. We used the Galerkin method with the constructed bases for solving integral and integro-differential equations. We proved the convergence and stability of the method and the uniform boundedness of the condition numbers of the system matrices. We derived decay estimates for the entries of the matrix arising from the discretization of the integral term using anisotropic wavelet systems, under the assumption that the kernel


Table 9
The compression ratios for T = 10^-9, the number of conjugate gradient iterations and L2 errors for Eq. (115).

        bior3.5                      new
N       e2        it      CR         e2        it      CR
1024    1.97e−3   300     6.23e−1    1.97e−3   104     2.06e−1
4096    2.41e−4   419     3.60e−1    2.41e−4   134     8.16e−2
16384   2.88e−5   488     1.45e−1    2.88e−5   157     3.00e−2
65536   3.54e−6   514     4.85e−2    3.54e−6   171     8.24e−3
262144  4.89e−7   520     1.43e−2    4.89e−7   176     2.31e−3

is smooth enough. We presented several numerical examples and compared the results with the Galerkin method with other quadratic spline wavelet bases and with other methods. In our examples, the errors for small enough thresholds were similar to the errors for full matrices, but the number of nonzero entries of the compressed matrices was significantly smaller, and thus we were able to solve large systems efficiently.

Acknowledgement

This work was supported by grant GA16-09541S of the Czech Science Foundation.

References

[1] P. Lax, A. Milgram, Parabolic equations, Ann. Math. Stud. 33 (1954) 167–190.
[2] P. Ciarlet, The Finite Element Method for Elliptic Problems, Society for Industrial and Applied Mathematics, 2002.
[3] B. Alpert, G. Beylkin, R. Coifman, V. Rokhlin, Wavelet-like bases for the fast solution of second-kind integral equations, SIAM J. Sci. Comput. 14 (1993) 159–184.
[4] G. Beylkin, R. Coifman, V. Rokhlin, Fast wavelet transforms and numerical algorithms I, Comm. Pure Appl. Math. 44 (1991) 141–183.
[5] Z. Chen, C.A. Micchelli, Y. Xu, Multiscale Methods for Fredholm Integral Equations, Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, 2015.
[6] C.A. Micchelli, Y. Xu, Y. Zhao, Wavelet Galerkin methods for second-kind integral equations, J. Comput. Appl. Math. 86 (1997) 251–270.
[7] W. Dahmen, H. Harbrecht, R. Schneider, Compression techniques for boundary integral equations – asymptotically optimal complexity estimates, SIAM J. Numer. Anal. 43 (2006) 2251–2271.
[8] K. Maleknejad, M.N. Sahlan, The method of moments for solution of second kind Fredholm integral equations based on B-spline wavelets, Int. J. Comput. Math. 87 (2010) 1602–1616.
[9] K. Maleknejad, M. Nosrati, E. Najafi, Wavelet Galerkin method for solving singular integral equations, Comput. Appl. Math. 31 (2012) 373–390.
[10] K. Maleknejad, M. Yousefi, Numerical solution of the integral equation of the second kind by using wavelet bases of Hermite cubic splines, Appl. Math. Comput. 183 (2006) 134–141.
[11] W. Dahmen, A. Kunoth, K. Urban, Biorthogonal spline wavelets on the interval – stability and moment conditions, Appl. Comput. Harmon. Anal. 6 (1999) 132–196.
[12] D. Černá, V. Finěk, Construction of optimally conditioned cubic spline wavelets on the interval, Adv. Comput. Math. 34 (2011) 219–252.
[13] M. Primbs, New stable biorthogonal spline-wavelets on the interval, Results Math. 57 (2010) 121–162.
[14] D. Černá, V. Finěk, Quadratic spline wavelets with short support for fourth-order problems, Results Math. 66 (3) (2014) 525–540.
[15] D. Černá, V. Finěk, Quadratic spline wavelets with short support satisfying homogeneous boundary conditions, Electron. Trans. Numer. Anal. 48 (2018) 15–39.
[16] C.K. Chui, E. Quak, Wavelets on a bounded interval, in: Numerical Methods in Approximation Theory, Vol. 9, Birkhäuser, Basel, 1992, pp. 53–75.
[17] B. Han, M. Michelle, Derivative-orthogonal Riesz wavelets in Sobolev spaces with applications to differential equations, Appl. Comput. Harmon. Anal. (2017), http://dx.doi.org/10.1016/j.acha.2017.12.001.
[18] R.-Q. Jia, W. Zhao, Riesz bases of wavelets and applications to numerical solutions of elliptic equations, Math. Comp. 80 (2011) 1525–1556.
[19] D. Chen, Extended families of cardinal spline wavelets, Appl. Comput. Harmon. Anal. 1 (1994) 194–208.
[20] B. Han, Z. Shen, Wavelets with short support, SIAM J. Math. Anal. 38 (2006) 530–556.
[21] D. Černá, V. Finěk, M. Šimůnková, A quadratic spline-wavelet basis on the interval, in: J. Chleboun, K. Segeth, J. Šístek, T. Vejchodský (Eds.), Programs and Algorithms of Numerical Mathematics, Vol. 16, Institute of Mathematics AS CR, 2013, pp. 29–34.
[22] O. Christensen, A.M. Lindner, Frames of exponentials: lower frame bounds for finite subfamilies and approximation of the inverse frame operator, Linear Algebra Appl. 323 (2001) 117–130.
[23] H.O. Kim, J.K. Lim, New characterizations of Riesz bases, Appl. Comput. Harmon. Anal. 4 (1997) 222–229.
[24] O. Christensen, An Introduction to Frames and Riesz Bases, Applied and Numerical Harmonic Analysis, Birkhäuser, Boston, 2003.
[25] Y.-H. Ha, H.-Y. Ryu, I.-S. Shin, Angle criteria for frame sequences and frames containing a Riesz basis, J. Math. Anal. Appl. 347 (2008) 90–95.
[26] D.G. Feingold, R.S. Varga, Block diagonally dominant matrices and generalizations of the Gerschgorin circle theorem, Pacific J. Math. 12 (1962) 1241–1250.
[27] A. Cohen, Numerical Analysis of Wavelet Methods, Studies in Mathematics and its Applications, Elsevier, 2003.
[28] M. Griebel, P. Oswald, Tensor product type subspace splittings and multilevel iterative methods for anisotropic problems, Adv. Comput. Math. 4 (1995) 171–206.
[29] K. Gustafson, D. Rao, Numerical Range: The Field of Values of Linear Operators and Matrices, Springer-Verlag, Berlin, 1997.
[30] L. Beilina, E. Karchevskii, M. Karchevskii, Numerical Linear Algebra: Theory and Applications, Springer International Publishing, 2017.