Applied Numerical Mathematics 32 (2000) 331–357
Filter bank subdivisions of bounded domains

Johan Waldén 1

Yale University, Department of Mathematics, 10 Hillhouse Avenue, P.O. Box 208283, New Haven, CT 06520, USA
Abstract

We construct filter bank transforms that are adapted to bounded domains. The transforms are constructed with the aim of solving PDEs together with finite difference methods, and the properties important for this kind of application are analyzed. As the filter banks do not need to correspond to wavelets, short filters can be used. The price paid is extra growth factors in the number of coefficients and in the error, compared to the multiresolution analysis approach. However, these factors are small. We give upper bounds for the growth factors, and show numerical examples for an interval, a three-dimensional box, and a triangle. We also show examples of fast differentiation and other operations on thresholded expansions. © 2000 IMACS. Published by Elsevier Science B.V. All rights reserved.

Keywords: Wavelets; Adaptive PDE methods; Finite difference methods; Numerical algorithms
1. Introduction

In recent years, the task of constructing wavelet bases for bounded domains has received attention. In [2] an idea for how to construct wavelets that satisfy homogeneous boundary conditions (b.c.s) was presented. In [1] Andersson et al. developed a method for constructing general biorthogonal bases on an interval, and algorithmic issues were also discussed. A good review of different methods to construct wavelet bases on an interval was presented in [6].

The filter bank method, developed in [15], uses a totally discrete approach to solve hyperbolic PDEs. It is based on the filter bank transform (which is the discrete version of the wavelet transform) and finite difference methods. When using the filter bank method, the methods described in [1,2,6] are inadequate. First, those methods depend on the existence of wavelets in a multiresolution analysis, and this leads to extra conditions that need to be satisfied. A consequence is that the boundary stencils will be long. Second, the constructions rely on the fact that different quadrature rules can be used for interior and boundary basis functions. However, when using the filter bank method, we do not use quadrature rules (or, in other words, we use the one-point quadrature everywhere, see [15]). Thus, we need to construct boundary filter banks that work with the filter bank method. We will do this in this paper.

1 E-mail: [email protected]
0168-9274/00/$20.00 © 2000 IMACS. Published by Elsevier Science B.V. All rights reserved. PII: S0168-9274(99)00025-2
The construction will be similar to what is done when constructing finite difference stencils at boundaries. By asserting that polynomials are treated in the right way, and that the norms of the operators do not grow fast, we get boundary filters that perform satisfactorily. The fact that short filters can be used is very important at boundaries, where extra work is needed for each coefficient that "passes" the boundary. The price paid when using the boundary filter bank method is that, when doing many steps in the transform, the norm of the transform operator grows, in contrast to when using wavelets. However, this potential instability is mild, and as we typically will do no more than 5–15 steps in the transform, it can be kept under control. We derive estimates of the growth of these norms, and show in some numerical examples that the method performs satisfactorily.

We present the filter bank method adapted to an interval, higher dimensional tensor product spaces ("boxes"), general Cartesian domains in more than one dimension, and a triangle. Thus, the filter bank idea can be generalized to domains for which one can find a regular subdivision.

The paper is organized as follows. In Section 2 we introduce some notation, and review the periodic filter bank transform. In Section 3 we present the boundary filter bank transform for an interval. We also derive bounds on the growth factors of the error and number of coefficients. Furthermore, we show how the method performs together with so-called Λ-cycles when used for fast differentiation (and other operations) on thresholded expansions. In Section 4 we present the boundary filter bank method in higher dimensions. We generalize the transform to tensor product spaces, and also to general Cartesian discretizations of domains. Furthermore, we use the ideas to define a transform, with the same properties as the Cartesian transform, on a subdivision of the unit triangle.
Finally, we present numerical examples for the higher-dimensional transforms.
2. Preliminaries

2.1. Notation

We define the Hölder spaces, C^γ, by

$$f \in C^\gamma \iff \sup_{x,y} \frac{|f(x)-f(y)|}{|x-y|^\gamma} < \infty, \qquad \gamma \in (0,1],$$

$$f' \in C^\gamma \iff f \in C^{\gamma+1}$$

(this is a slight abuse of notation as, for γ ∈ N, C^γ actually is the Lipschitz space, Lip γ). We have the semi-norms on C^γ,

$$\|f\|_{\dot C^\gamma} = \sup_{x,y} \frac{\big|f^{(\lfloor\gamma\rfloor)}(x) - f^{(\lfloor\gamma\rfloor)}(y)\big|}{|x-y|^{\gamma-\lfloor\gamma\rfloor}},$$

where ⌊γ⌋ is defined as the largest integer not larger than γ. By P^M(R^N), we mean the set of all polynomials, p: R^N → R, such that each term x_1^{α_1} x_2^{α_2} ··· x_N^{α_N} has α_1 + ··· + α_N ≤ M.

We will mainly be concerned with discrete functions. A discrete function (grid function, or filter) is a function, u: Z → R (or u: U → R, U ⊂ Z). We will sometimes use the notation u_k for u(k). A special
discrete function is the Kronecker function, δ, with δ_0 = 1, δ_k = 0 for k ≠ 0. Some subspaces of the space of all discrete functions, R^Z, are the Banach spaces l^p with norms

$$\|u\|_p = \Big(\sum_k |u(k)|^p\Big)^{1/p}, \quad p \in [1,\infty), \qquad \|u\|_\infty = \sup_k |u(k)|,$$

and the Hilbert space l^2 with inner product

$$\langle u, v \rangle = \sum_k u(k)\,v(k).$$
The convolution of two discrete functions is (f ∗ g)(k) = Σ_n f(n)g(k − n) (when the right hand side is defined). Downsampling of a discrete function is defined as (↓f)(k) = f(2k), and upsampling as (↑f)(k) = f(k/2) when k is even, and (↑f)(k) = 0 when k is odd. In signal analysis these operations are often denoted ↓2 and ↑2, but as we will only work with a sub-sampling factor of 2, we choose the shorter notation. We will also use the mirror operator ǔ(k) = u(−k).

The convolution of u and v can be seen as applying a linear operator on v (this is why we do not make any distinction between discrete functions and filters). A bounded (on l^2) linear operator on discrete functions, A, can be represented by an infinite matrix, A ∈ R^{Z×Z}, and the case of convolution arises when A is a Toeplitz operator, i.e., when A_{i,j} = A_{i−j,0} for all i, j. For bounded linear operators we define the operator norms

$$\|A\|_p = \sup_u \frac{\|Au\|_p}{\|u\|_p}.$$

When combining different operators on discrete functions we use a right-to-left rule, e.g., ↓u ∗ v = ↓(u ∗ v). We will sometimes write hg instead of h ∗ g (interpreting h as the Toeplitz operator acting on g). The adjoint A* of a linear operator A on l^2 is the operator for which ⟨Au, v⟩ = ⟨u, A*v⟩ for all u, v. We have the following relations: (AB)* = B*A*, ↓* = ↑, h* = ȟ, leading to (↓h)* = ȟ↑.

The support of a discrete function is defined as supp u = {k: u_k ≠ 0}. We will use the lower bound m(u) = inf supp u, the upper bound M(u) = sup supp u, and the length of a filter, length u = M(u) − m(u) + 1. An easy way to display a discrete function with finite length is of the form u = (u_{m(u)}, u_{m(u)+1}, …, u_{−1}, u_0, u_1, …, u_{M(u)−1}, u_{M(u)}), where the zeroth coordinate of u is underlined. The discrete ordered interval [a, b]_Z is defined as {a, a + 1, …, b − 1, b}. We will go from functions on R to Z with the sampling operator, [ ]_h: R^R → R^Z, which is defined through [f]_h(k) = f(kh). The operator is similarly defined for intervals.

For a matrix, A, we define cols(A) to be the number of columns of A, and rows(A) the number of rows. We will use MATLAB notation, e.g., A = [1, 2; 3, 4] is the matrix with A_{1,1} = 1, A_{1,2} = 2, A_{2,1} = 3, A_{2,2} = 4. It will be convenient for us to use the empty matrix, N. Before we define the filter bank transform on the interval, we give a brief review of the transform on the line and on the circle.
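The notational conventions above translate directly into code. As an illustrative sketch of ours (not part of the paper), the downsampling, upsampling, mirror, and convolution operators for finitely supported filters can be written in NumPy as follows:

```python
import numpy as np

def down(u):
    """Downsampling: (down u)(k) = u(2k)."""
    return u[::2]

def up(u):
    """Upsampling: (up u)(k) = u(k/2) for even k, 0 for odd k."""
    v = np.zeros(2 * len(u))
    v[::2] = u
    return v

def mirror(u):
    """Mirror operator: reverses a filter stored left-to-right
    (matches u(k) -> u(-k) up to an index offset)."""
    return u[::-1]

def conv(f, g):
    """Convolution (f * g)(k) = sum_n f(n) g(k - n)."""
    return np.convolve(f, g)

# Right-to-left composition rule: "down conv" means down(conv(u, v)).
u = np.array([1.0, 2.0, 3.0])
v = np.array([1.0, 1.0])
w = down(conv(u, v))
```

Note that arrays index supports from 0, so the absolute filter offsets (the underlined zeroth coordinate) have to be tracked separately when needed.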
2.2. The transform on R^Z

The discrete version of the biorthogonal wavelet transform relies on so-called perfect reconstruction filter banks.

Definition 2.1. A perfect reconstruction filter bank is a quadruple of filters, (h, g, h̃, g̃), satisfying

$$\sum_k h_k = \sum_k \tilde h_k = \sqrt 2, \tag{1}$$

$$\sum_n h_n \tilde h_{n+2k} = \delta_k, \tag{2}$$

$$g_k = (-1)^k \tilde h_{1-k}, \qquad \tilde g_k = (-1)^k h_{1-k}. \tag{3}$$
We will only work with real filters with a finite number of non-zero coefficients. We define the nth discrete moment as

$$\sigma_n(h) = \sum_k k^n h_k,$$

and we say that a filter h has N vanishing moments if

$$\sigma_k(h) = 0, \qquad k = 0, \ldots, N-1. \tag{4}$$
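As a quick numerical sanity check (a sketch of ours, not from the paper), the discrete moments σ_n(h) and the vanishing-moment count of (4) can be computed for a filter given by its offsets and coefficients. The second-difference filter (1, −2, 1) used below equals, up to scaling and shift, the g̃ obtained from (3) for the δ1 filter bank:

```python
import numpy as np

def sigma(n, offsets, coeffs):
    """nth discrete moment: sigma_n(h) = sum_k k^n h_k."""
    offsets = np.asarray(offsets, dtype=float)
    coeffs = np.asarray(coeffs, dtype=float)
    return float(np.sum(offsets ** n * coeffs))

def vanishing_moments(offsets, coeffs, tol=1e-12):
    """Number of leading vanishing moments, in the sense of (4)."""
    n = 0
    while abs(sigma(n, offsets, coeffs)) < tol:
        n += 1
    return n

# The second-difference filter annihilates constants and linears: N = 2.
assert vanishing_moments([0, 1, 2], [1.0, -2.0, 1.0]) == 2
```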
If we define the operators

$$\tilde H = \frac{1}{\sqrt 2}\,{\downarrow}\check{\tilde h}, \qquad \tilde G = \sqrt 2\,{\downarrow}\check{\tilde g}, \qquad H = \sqrt 2\,{\downarrow}\check h, \qquad G = \frac{1}{\sqrt 2}\,{\downarrow}\check g, \tag{5}$$

we get the perfect reconstruction identity (see Appendix for notation)

$$\big(\tilde G \oplus \tilde H\big)^{-1} = G^* \oplus H^*, \tag{6}$$

and by repeated iteration with H̃, we define the filter bank transform, T̃^{n−n_0}: R^Z → (⊕_{j=1}^{n−n_0} R^Z) ⊕ R^Z, of a grid function s^n as

$$\tilde T^{n-n_0} s^n = d^{n-1} \oplus d^{n-2} \oplus \cdots \oplus d^{n_0} \oplus s^{n_0} = t^{n-1} \oplus t^{n-2} \oplus \cdots \oplus t^{n_0-1} = \tau, \qquad d^{n-j} = \tilde G \tilde H^{j-1} s^n, \quad s^{n_0} = \tilde H^{n-n_0} s^n,$$

i.e.,

$$\tilde T^{n-n_0} = \Bigg(\bigoplus_{j=1}^{n-n_0} \tilde G \tilde H^{j-1}\Bigg) \oplus \tilde H^{n-n_0}. \tag{7}$$

With a perfect reconstruction filter bank we get the inversion formula

$$\big(\tilde T^{n-n_0}\big)^{-1} = \big(T^{n-n_0}\big)^* = \Bigg(\bigoplus_{j=1}^{n-n_0} \big(H^*\big)^{j-1} G^*\Bigg) \oplus \big(H^*\big)^{n-n_0}. \tag{8}$$
The spline filter banks (Table 1, [5]) and the δ filter banks (Table 2, [9]) are useful. These were used to solve periodic initial value problems of hyperbolic type in [15]. We will adapt these filter banks to bounded domains in the next section, but we first review the periodic version of the transform, and the idea of thresholding.
Table 1
Spline filter banks of order 3 and 4

Name   N   h                                h̃
β2     3   (√2/8)(1, 3, 3, 1)               (√2/4)(−1, 3, 3, −1)
β3     4   (√2/16)(1, 4, 6, 4, 1)           (√2/2)(−1, 4, −1)

Table 2
δ-filter banks of order 2 and 4

Name   N   h                                h̃
δ1     2   (√2/4)(1, 2, 1)                  (√2)
δ3     4   (√2/32)(−1, 0, 9, 16, 9, 0, −1)  (√2)
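Conditions (1) and (2) of Definition 2.1 can be verified numerically for the filters in Tables 1 and 2. The printed tables lose the underline that marks the zeroth coordinate, so the support offsets below (h centered for δ1, the two supports aligned for β2) are our assumptions; a sketch:

```python
import numpy as np

s2 = np.sqrt(2.0)

def biorth(h, ht, k):
    """Condition (2): sum_n h_n * ht_{n+2k}; filters are dicts offset -> value."""
    return sum(c * ht.get(o + 2 * k, 0.0) for o, c in h.items())

# delta1 pair from Table 2 (h assumed centered on the coefficient 2).
h1  = {-1: s2 / 4, 0: s2 / 2, 1: s2 / 4}
ht1 = {0: s2}

# beta2 pair from Table 1 (supports assumed aligned on the same offsets).
h2  = {o: s2 / 8 * c for o, c in zip(range(-1, 3), (1, 3, 3, 1))}
ht2 = {o: s2 / 4 * c for o, c in zip(range(-1, 3), (-1, 3, 3, -1))}

for h, ht in ((h1, ht1), (h2, ht2)):
    assert np.isclose(sum(h.values()), s2)       # condition (1) for h
    assert np.isclose(sum(ht.values()), s2)      # condition (1) for h-tilde
    for k in (-2, -1, 0, 1, 2):                  # condition (2)
        assert np.isclose(biorth(h, ht, k), float(k == 0))
```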
Fig. 1. Matrix representation of periodic filter bank transform.
2.3. The periodic transform

Periodic discrete functions u ∈ R^m, where m = 2^n, can be treated by using circulant convolution. We then have

$$\tilde H: \mathbb{R}^m \to \mathbb{R}^{m/2}, \qquad \tilde G: \mathbb{R}^m \to \mathbb{R}^{m/2}, \qquad H^*: \mathbb{R}^{m/2} \to \mathbb{R}^m, \qquad G^*: \mathbb{R}^{m/2} \to \mathbb{R}^m,$$

and the transform gives the same decomposition as (7) and (8). The matrix representation of (one step in) the periodic filter bank transform is shown in Fig. 1, and the perfect reconstruction identity ensures that T*T̃ = I. In Fig. 2 we show the matrices for the δ1 filter bank and m = 8 (where we have shifted the G̃ and G* part of the matrices for later purposes).

2.4. Thresholding

We wish to get a sparse expansion by thresholding a transformed function, only keeping coefficients with absolute value larger than some threshold value, ε. We define the set of retained coefficients as
$$I_\varepsilon^n = \big\{(j,k):\ n_0 - 1 \le j < n,\ |t_k^j| > \varepsilon\big\}, \tag{9}$$

and the number of significant coefficients

$$N_s = |I_\varepsilon^n|. \tag{10}$$
Fig. 2. Matrix representation of periodic filter bank transform with δ1 filter bank, and m = 8.
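The matrices of Fig. 2 can be checked numerically. The sketch below (ours) builds one step of the periodic transform for m = 8 directly from the δ1 filters and our reading of the operator normalizations (5); the sign and shift conventions may differ from the figure by the shifts mentioned in the text, but the perfect reconstruction identity is unaffected:

```python
import numpy as np

m = 8
Ht = np.zeros((m // 2, m)); Gt = np.zeros((m // 2, m))
H  = np.zeros((m // 2, m)); G  = np.zeros((m // 2, m))
for k in range(m // 2):
    Ht[k, 2 * k] = 1.0                      # H-tilde: pure subsampling (h~ = sqrt(2) delta)
    for off, c in ((0, 0.5), (1, -1.0), (2, 0.5)):
        Gt[k, (2 * k + off) % m] = c        # G-tilde: scaled second difference
    for off, c in ((-1, 0.5), (0, 1.0), (1, 0.5)):
        H[k, (2 * k + off) % m] = c         # H: averaging filter (1, 2, 1)/2
    G[k, (2 * k + 1) % m] = -1.0            # G: from g_k = (-1)^k h~_{1-k}

# Perfect reconstruction: G*Gt + H*Ht = I, i.e., [G*, H*] inverts [Gt; Ht].
assert np.allclose(G.T @ Gt + H.T @ Ht, np.eye(m))
```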
The thresholded representation (or sparse expansion) is defined as

$$\tau_\varepsilon = T_\varepsilon \tau = t_\varepsilon^{n-1} \oplus \cdots \oplus t_\varepsilon^{n_0-1}, \tag{11}$$

where

$$t_\varepsilon^j(l) = \begin{cases} t^j(l), & (j,l) \in I_\varepsilon^n, \\ 0, & (j,l) \notin I_\varepsilon^n. \end{cases} \tag{12}$$
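Definitions (9)–(12) amount to a few lines of code; a sketch (names ours):

```python
import numpy as np

def threshold(tau, eps):
    """T_eps applied to a transformed function tau, given as a list of levels
    [t^{n-1}, ..., t^{n0-1}] as in (11); returns (tau_eps, Ns), cf. (10), (12)."""
    tau_eps = [np.where(np.abs(t) > eps, t, 0.0) for t in tau]
    Ns = sum(int(np.count_nonzero(t)) for t in tau_eps)
    return tau_eps, Ns

tau = [np.array([0.5, 2.0]), np.array([3.0, 0.1, -1.5])]
tau_eps, Ns = threshold(tau, 1.0)
assert Ns == 3   # only the coefficients 2.0, 3.0 and -1.5 survive
```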
3. The one-dimensional transform

3.1. The boundary filter bank transform on R^m

In [15] the properties of a perfect reconstruction filter bank needed to make the periodic filter bank transform successful were analyzed. By successful, we meant the possibility of solving the following problems:

• The representation problem. A function that is mostly smooth, but has some well-localized regions with sharp gradients (or worse behavior), is efficiently represented by the filter bank transform and thresholding.
• The differentiation problem. There are fast algorithms for finding a good approximation to the (nth) differential of a sparse expansion.
• The operator problem. There are fast algorithms for finding a good approximation of performing an operator on a sparse expansion (e.g., multiplying it with a smooth function, f(x) ↦ g(x)f(x), or taking the exponential, f(x) ↦ e^{f(x)}).
• The multiple-operator problem. There are fast algorithms for finding a good approximation of performing an operator on multiple sparse expansions (e.g., multiplying two sparse expansions, (f(x), g(x)) ↦ f(x)g(x)).
For sparse expansions, it was shown that so-called Λ-cycles could be used to give fast algorithms for all of these problems (of complexity O(N_s)). The name comes from the cycles working like multigrid V-cycles, but "upside-down": they start on a coarse grid representation, work back to the finest grid, and then forward to the coarse grid. It was also shown that by imposing vanishing moment conditions on the g̃-filter, the representation problem and the differentiation problem are solved satisfactorily. The following properties of the filter bank transform ensure this:

1. The H̃ transform maps polynomials to polynomials, i.e., if s = [p]_Δx for some p ∈ P^{N−1}(R), then H̃s = [q]_{2Δx} for some q ∈ P^{N−1}(R).
2. For filter banks such that g̃ has N vanishing moments, the G̃ transform is orthogonal to polynomials of degree N − 1 or less, i.e., if s = [p]_Δx for some p ∈ P^{N−1}(R), then G̃s = 0.
3. The norms of H̃^p and (H*)^p do not grow too quickly as p increases.
4. The H̃, G̃, H* and G* operators are all local, in the sense that every coefficient of the transformed function depends on a few neighboring coefficients of the original function.

Remark. Of course, we do not want G̃ and G* to have large norms, but as we will only perform these operators at most once for each element in the expansion (see (7), (8)), they are not as important as H̃ and H*.
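Properties 1 and 2 are easy to observe numerically for the δ1 filter bank of Table 2 (under our reading of the normalizations (5)): the interior action of H̃ is pure subsampling, which maps polynomial samples to polynomial samples, while the interior rows of G̃ are scaled second differences, which annihilate samples of polynomials of degree ≤ 1 (δ1 has N = 2):

```python
import numpy as np

x = np.arange(16, dtype=float)
p = 2.0 * x + 3.0   # samples of a degree-1 polynomial on a grid of spacing 1

# Property 1 for delta1: H-tilde subsamples, so the result is the same
# polynomial sampled on the coarse grid of spacing 2.
Hp = p[::2]
assert np.allclose(Hp, 2.0 * (2.0 * np.arange(8)) + 3.0)

# Property 2: interior rows of G-tilde compute
# (1/2) p(2k) - p(2k+1) + (1/2) p(2k+2), which vanishes on linear polynomials.
Gp = 0.5 * p[:-2:2] - p[1:-1:2] + 0.5 * p[2::2]
assert np.allclose(Gp, 0.0)
```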
The spline filter banks are examples that satisfy these conditions, and thus solve the representation problem and the differentiation problem satisfactorily. For the operator problems, it was shown that extra conditions have to be added to get a good solution. The extra conditions needed are vanishing moments for the h̃-filter (except the zeroth moment). By replacing condition 1 above with the following property, the operator problems are solved satisfactorily:

1′. For filter banks where h̃ has σ_k(h̃) = 0, k = 1, …, N − 1, the H̃ transform maps polynomials of degree less than or equal to N − 1 to the same polynomial, i.e., if s = [p]_Δx for some p ∈ P^{N−1}(R), then H̃s = [p]_{2Δx}.

The δ-filter banks are examples that also satisfy this additional condition.

Our plan is now to adapt the transform to an interval, keeping these properties, but adding the extra property that there is no "wrapping" of the transform around the boundary. For example, in Fig. 2 we see that the "wrapping" is present in the last row of T̃ and in the first column of T*. We could, of course, apply the periodic transform to a nonperiodic function, but it is improbable that such a function will act locally as a polynomial around the boundary, and thus we would introduce artificial "jumps" in the coefficients with this approach. This can be dealt with by doing some extrapolation procedure at the boundary, but this will lead to the sum of the dimensions of the two parts of the transformed function being larger than the dimension of the original function, and thus the transformed expansion is over-determined. Instead we change the transform at the boundaries, keeping the perfect reconstruction identity, but changing the rows/columns of the matrices that cross the boundary. The idea is the same as when introducing one-sided finite difference stencils at boundaries.

We would also want to be able to apply (many steps in) the transform to functions that do not have 2^n coefficients for some n. This is done by introducing two transforms: one for functions with an even number and one for functions with an odd number of coefficients. We need extra shift factors for our transforms, so that the boundary and interior
part of the transform interact in the right way. Taking all this into account we arrive at the following definition:

Definition 3.1. A boundary perfect reconstruction filter bank for functions of even length is a perfect reconstruction filter bank, an ordered list of eight (possibly empty) matrices

$$\tilde H^{\mathrm{left}},\ \tilde H^{\mathrm{right}},\ \tilde G^{\mathrm{left}},\ \tilde G^{\mathrm{right}},\ H^{\mathrm{left}},\ H^{\mathrm{right}},\ G^{\mathrm{left}},\ G^{\mathrm{right}},$$

and a quadruple of integers

$$s_{\tilde H},\ s_{\tilde G},\ s_H,\ s_G,$$

such that for functions u ∈ R^m (m even), the operators

$$\tilde H: \mathbb{R}^m \to \mathbb{R}^{m/2}, \qquad \tilde G: \mathbb{R}^m \to \mathbb{R}^{m/2}, \qquad H: \mathbb{R}^m \to \mathbb{R}^{m/2}, \qquad G: \mathbb{R}^m \to \mathbb{R}^{m/2},$$

defined by

$$(\tilde H u)_i = \begin{cases} \displaystyle\sum_{k=1}^{\mathrm{cols}(\tilde H^{\mathrm{left}})} \tilde H^{\mathrm{left}}_{i,k}\, u_k, & 1 \le i \le \mathrm{rows}\big(\tilde H^{\mathrm{left}}\big), \\[2ex] \displaystyle\sum_{k=1}^{\mathrm{cols}(\tilde H^{\mathrm{right}})} \tilde H^{\mathrm{right}}_{i+\mathrm{rows}(\tilde H^{\mathrm{right}})-m/2,\,k}\; u_{k-\mathrm{cols}(\tilde H^{\mathrm{right}})+m}, & m/2 - \mathrm{rows}\big(\tilde H^{\mathrm{right}}\big) < i \le m/2, \\[2ex] \displaystyle\frac{1}{\sqrt 2} \sum_{k=1}^m \tilde h_{k-2i+1+s_{\tilde H}}\, u_k, & \text{otherwise}, \end{cases} \tag{13}$$

with similar definitions for G̃, H, G, satisfy the identity

$$\big(\tilde G \oplus \tilde H\big)^{-1} = G^* \oplus H^*.$$
In the same manner we define a perfect reconstruction filter bank for functions of odd length, but with the difference that we now let the dimensions vary like

$$\tilde H: \mathbb{R}^m \to \mathbb{R}^{(m+1)/2}, \qquad \tilde G: \mathbb{R}^m \to \mathbb{R}^{(m-1)/2}, \qquad H: \mathbb{R}^m \to \mathbb{R}^{(m+1)/2}, \qquad G: \mathbb{R}^m \to \mathbb{R}^{(m-1)/2}.$$

The structure of the matrix representation of the boundary filter bank transform is shown in Fig. 3. We will always assume that m is large enough for all the boundary matrices to fit into the matrix representation, and that no column or row has elements of both a left and a right matrix.

For the transform on R^Z, N vanishing moments for g̃ is necessary and sufficient for the important properties 1 and 2 above. For the boundary transform, we wish these properties to be preserved, and we therefore define

Definition 3.2. A boundary perfect reconstruction filter bank is said to be polynomial preserving of degree N if properties 1 and 2 hold for polynomials, p ∈ P^N(R).
Fig. 3. Matrix representation of boundary filter bank transform.
Furthermore, we define

Definition 3.3. A boundary perfect reconstruction filter bank is said to be identically polynomial preserving of degree N if properties 1′ and 2 hold for polynomials, p ∈ P^N(R).

Of course, a necessary condition for a boundary filter bank to be polynomial preserving of degree N − 1 is that the filter bank part satisfies that g̃ has N vanishing moments, and for it to be identically polynomial preserving of degree N − 1 that, in addition, h̃ has N vanishing moments; but we also get conditions on the boundary matrices that need to be satisfied. These, however, give rise to linear equations, which are easily solved.

We do as follows. We begin with finding H̃^left, H̃^right. Here we have four goals. First, the transform shall map polynomials to polynomials; second, the matrices shall be small, so that not too much extra work is needed at boundaries; third, the norms of the powers of the operator shall be small; and fourth, the rows of the operator shall not be close to linearly dependent, as this would make the transform close to singular, leading to a large norm for the inverse transform (see, e.g., [12, p. 775]).

Next, we find G̃^left, G̃^right, where the three last goals are the same as for H̃ (although the norm constraint is not as important, as we will only apply G̃ once for each coefficient in the result), but instead of the first goal, we want the operator to be orthogonal to polynomials. Furthermore, we must ensure that the total operator G̃ ⊕ H̃ is invertible.

Finally, we invert the operator, and get H^left, H^right, G^left, G^right. We do an a posteriori check that the norms of the inverse operators are not too large.

We perform this construction for the filter banks listed in the preliminaries. For the spline filter banks we list matrices and shift indices for the boundary transform for functions of even (Table 3) and odd (Table 4) length, such that the corresponding boundary filter bank transform becomes polynomial preserving of the same degree as the filter bank. We do the same for the δ filter banks (Tables 5 and 6), such that the corresponding boundary filter bank transform is identically polynomial preserving of the same degree as the filter bank. As an example, the matrix representation of the transform with the boundary δ1 filter bank for functions of even length and m = 8 is shown in Fig. 4.
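The linear equations for a polynomial-preserving boundary row can be set up as a small Vandermonde system. The sketch below is ours (the grid points and target are chosen for illustration only, not the grids used in Tables 3–6): it solves for weights that reproduce every polynomial up to a given degree at a coarse-grid point from a few fine-grid samples:

```python
import numpy as np

def boundary_row(fine_pts, target, degree):
    """Weights w with sum_k w_k p(fine_pts[k]) = p(target) for all
    polynomials p of degree <= degree (a Vandermonde system)."""
    fine_pts = np.asarray(fine_pts, dtype=float)
    V = np.vander(fine_pts, degree + 1, increasing=True).T  # rows: 1, x, x^2, ...
    rhs = np.array([float(target) ** n for n in range(degree + 1)])
    return np.linalg.solve(V, rhs)

# Reproduce quadratics at the coarse point x = 0 from fine samples at x = 1, 2, 3.
w = boundary_row([1.0, 2.0, 3.0], 0.0, 2)
p = lambda x: 4.0 * x ** 2 - x + 2.0
assert np.isclose(w @ p(np.array([1.0, 2.0, 3.0])), p(0.0))
```

The system is uniquely solvable whenever the fine points are distinct and their number matches degree + 1; with more points, the extra freedom can be spent on the norm and conditioning goals described above.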
Table 3
Boundary matrices and shift indices for the boundary spline filter banks for functions of odd length, order 3 and 4. For each filter bank we list H̃^left, H̃^right, G̃^left, G̃^right, H^left, H^right, G^left, G^right, and the shifts (s_H̃, s_G̃, s_H, s_G). N denotes the empty matrix.

β2 (N = 3), shifts (2, −2, 2, −2):
  H̃^left = [0, 3/2, −1/2]
  H̃^right = [−1/2, 3/2, 0]
  G̃^left = N
  G̃^right = [3/8, −5/8, −3/8, 9/8, −1/2]
  H^left = [3/4, 1/4]
  H^right = [1/4, 3/4, 3/4, 1/4, 0, 0, 1/4; 0, 0, 1/4, 3/4, 3/4, 1/4, −3/4; 0, 0, 0, 0, 1/4, 3/4, 3/2]
  G^left = N
  G^right = [−1/2, −3/2, 3/2, 1/2, 0, 0, 1/2; 0, 0, −1/2, −3/2, 3/2, 1/2, 3/2; 0, 0, 0, 0, 0, 0, −2]

β3 (N = 4), shifts (0, 0, 0, 0):
  H̃^left = [1/3, 7/6, 0, −5/6, 1/3]
  H̃^right = [1/3, −5/6, 0, 7/6, 1/3]
  G̃^left = [−1/16, 1/4, −3/8, 1/4, −1/16]
  G̃^right = [−1/16, 1/4, −3/8, 1/4, −1/16]
  H^left = [5/4, 1/2, 1/8, 0, 0, 0, 0, 0, 0; −5/8, 1/2, 3/4, 1/2, 1/8, 0, 0, 0, 0; 1/2, 0, 1/8, 1/2, 3/4, 1/2, 1/8, 0, 0; −1/8, 0, 0, 0, 1/8, 1/2, 3/4, 1/2, 1/8]
  H^right = [1/8, 1/2, 3/4, 1/2, 1/8, 0, 0, 0, −1/8; 0, 0, 1/8, 1/2, 3/4, 1/2, 1/8, 0, 1/2; 0, 0, 0, 0, 1/8, 1/2, 3/4, 1/2, −5/8; 0, 0, 0, 0, 0, 0, 1/8, 1/2, 5/4]
  G^left = [−30/3, 8/3, 2/3, 0, 0, 0, 0; 9, 0, 1, 4, 1, 0, 0; −1, 0, 0, 0, 1, 4, 1]
  G^right = [1, 4, 1, 0, 0, 0, −1; 0, 0, 1, 4, 1, 0, 9; 0, 0, 0, 0, 2/3, 8/3, −30/3]
Table 4
Boundary matrices and shift indices for the boundary spline filter banks for functions of even length, order 3 and 4 (same ordering of entries as in Table 3).

β2 (N = 3), shifts (1, −1, 1, −1):
  H̃^left = [0, 3/2, −1/2]
  H̃^right = [−1/2, 3/2, 0]
  G̃^left = [−1/2, 9/8, −3/8, −5/8, 3/8]
  G̃^right = [−3/8, 5/8, 3/8, −9/8, 1/2]
  H^left = [3/2, 3/4, 1/4, 0, 0, 0, 0; −3/4, 1/4, 3/4, 3/4, 1/4, 0, 0; 1/4, 0, 0, 1/4, 3/4, 3/4, 1/4]
  H^right = [1/4, 3/4, 3/4, 1/4, 0, 0, 1/4; 0, 0, 1/4, 3/4, 3/4, 1/4, −3/4; 0, 0, 0, 0, 1/4, 3/4, 3/2]
  G^left = [−2, 0, 0, 0, 0, 0, 0; −3/2, −1/2, −3/2, 3/2, 1/2, 0, 0; −1/2, 0, 0, −1/2, −3/2, 3/2, 1/2]
  G^right = [−1/2, −3/2, 3/2, 1/2, 0, 0, 1/2; 0, 0, −1/2, −3/2, 3/2, 1/2, 3/2; 0, 0, 0, 0, 0, 0, 2]

β3 (N = 4), shifts (0, 0, 0, 0):
  H̃^left = [1/3, 7/6, 0, −5/6, 1/3]
  H̃^right = N
  G̃^left = [−5/38, 8/19, −7/19, −2/19, 11/38, −2/19]
  G̃^right = [1/4, −9/16, 0, 5/8, 0, −9/16, 1/4]
  H^left = [5/4, 1/2, 1/8, 0, 0, 0, 0, 0, 0; −5/8, 1/2, 3/4, 1/2, 1/8, 0, 0, 0, 0; 1/2, 0, 1/8, 1/2, 3/4, 1/2, 1/8, 0, 0; −1/8, 0, 0, 0, 1/8, 1/2, 3/4, 1/2, 1/8]
  H^right = [1/8, 1/2, 3/4, 1/2, 1/8, 0, 0, 0, −1/8, −1/2; 0, 0, 1/8, 1/2, 3/4, 1/2, 1/8, 0, 1/2, 2; 0, 0, 0, 0, 1/8, 1/2, 3/4, 1/2, −5/8, −3; 0, 0, 0, 0, 0, 0, 1/8, 1/2, 5/4, 5/2]
  G^left = [−133/30, 19/15, 19/60, 0, 0, 0, 0; 46/15, 64/15, 46/15, 8, 2, 0, 0; −2, 0, 0, 0, 2, 8, 2]
  G^right = [2, 8, 2, 0, 0, 0, −2, −8; 0, 0, 2, 8, 2, 0, −2/7, −8/7; 0, 0, 0, 0, 2, 8, 12/7, −8/7; 0, 0, 0, 0, 0, 0, 16/7, 64/7]
Table 5
Boundary matrices and shift indices for the boundary δ-filter banks for functions of odd length, order 2 and 4 (same ordering of entries as in Table 3).

δ1 (N = 2), shifts (0, 0, 0, 0):
  H̃^left = N, H̃^right = N, G̃^left = N, G̃^right = N
  H^left = [1, 1/2]
  H^right = [1/2, 1]
  G^left = N, G^right = N

δ3 (N = 4), shifts (0, 0, 0, 0):
  H̃^left = N, H̃^right = N
  G̃^left = [1/16, −1/4, 3/8, −1/4, 1/16]
  G̃^right = [1/16, −1/4, 3/8, −1/4, 1/16]
  H^left = [1, 5/16, 0, −1/16, 0, 0, 0, 0, 0, 0; 0, 15/16, 1, 9/16, 0, −1/16, 0, 0, 0, 0; 0, −5/16, 0, 9/16, 1, 9/16, 0, −1/16, 0, 0; 0, 1/16, 0, −1/16, 0, 9/16, 1, 9/16, 0, −1/16]
  H^right = [−1/16, 0, 9/16, 1, 9/16, 0, −1/16, 0, 1/16, 0; 0, 0, −1/16, 0, 9/16, 1, 9/16, 0, −5/16, 0; 0, 0, 0, 0, −1/16, 0, 9/16, 1, 15/16, 0; 0, 0, 0, 0, 0, 0, −1/16, 0, 5/16, 1]
  G^left = [0, −4, 0, 0; 0, 2, 0, −2]
  G^right = [−2, 0, 2, 0; 0, 0, −4, 0]

Table 6
Boundary matrices and shift indices for the boundary δ-filter banks for functions of even length, order 2 and 4 (same ordering of entries as in Table 3).

δ1 (N = 2), shifts (0, 0, 0, 0):
  H̃^left = N, H̃^right = N, G̃^left = N
  G̃^right = [−1/2, 1/4, 1, −3/4]
  H^left = [1, 1/2]
  H^right = [1/2, 1, 1/2, 0, −1/2; 0, 0, 1/2, 1, 1/2]
  G^left = N
  G^right = [−2, 0, −2/3; 0, 0, −4/3]

δ3 (N = 4), shifts (0, 0, 0, 0):
  H̃^left = N, H̃^right = N
  G̃^left = [1/16, −1/4, 3/8, −1/4, 1/16]
  G̃^right = [73/304, −168/304, 12/304, 172/304, −3/304, −156/304, 70/304; 0, 0, 1/16, −1/4, 3/8, −1/4, 1/16]
  H^left = [1, 5/16, 0, −1/16, 0, 0, 0, 0, 0, 0; 0, 15/16, 1, 9/16, 0, −1/16, 0, 0, 0, 0; 0, −5/16, 0, 9/16, 1, 9/16, 0, −1/16, 0, 0; 0, 1/16, 0, −1/16, 0, 9/16, 1, 9/16, 0, −1/16]
  H^right = [−1/16, 0, 9/16, 1, 9/16, 0, −1/16, 0, 0, 0, −73/6768, 0, 73/1128; 0, 0, −1/16, 0, 9/16, 1, 9/16, 0, −1/16, 0, 715/6768, 0, −1289/2256; 0, 0, 0, 0, −1/16, 0, 9/16, 1, 9/16, 0, −851/2256, 0, 1279/752; 0, 0, 0, 0, 0, 0, −1/16, 0, 9/16, 1, 6637/6768, 0, −5519/2256; 0, 0, 0, 0, 0, 0, 0, 0, −1/16, 0, 1021/3384, 1, 5081/2256]
  G^left = [0, −4, 0, 0; 0, 2, 0, −2]
  G^right = [−2, 0, 0, 0, −146/423, 0, 292/141; 0, 0, −2, 0, 116/423, 0, 50/141; 0, 0, 0, 0, −304/423, 0, 608/141; 0, 0, 0, 0, 1120/423, 0, 16/141]
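As a quick check on the tables, the boundary rows of G̃ must be orthogonal to sampled polynomials up to the order of the filter bank. For the even-length δ1 bank, the row G̃^right = [−1/2, 1/4, 1, −3/4] from Table 6 annihilates constants and linears (δ1 has N = 2); since the row is orthogonal to constants, the absolute grid alignment is irrelevant for the linear test:

```python
import numpy as np

g_right = np.array([-1 / 2, 1 / 4, 1.0, -3 / 4])  # G~^right, delta1, even length (Table 6)
k = np.arange(4, dtype=float)                     # four consecutive grid points

assert np.isclose(g_right @ np.ones(4), 0.0)      # orthogonal to degree 0
assert np.isclose(g_right @ k, 0.0)               # orthogonal to degree 1
assert not np.isclose(g_right @ k ** 2, 0.0)      # not to degree 2 (N = 2)
```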
Remark. As when constructing one-sided finite difference stencils, we have a lot of freedom in the construction. The constraint that the norms of the operators do not grow "too fast" is much weaker than when constructing finite difference stencils, as we typically will never do more than 10 levels in the transform. The only problem we might run into when adapting the transform is if the polynomial degree constraints on H̃ and G̃ conflict with the constraint that G̃ ⊕ H̃ is invertible. As we have a large degree of freedom when choosing H̃^left, H̃^right, G̃^left, G̃^right, this is unlikely to happen, and we have tried quite a few filter banks without experiencing any problems.

Fig. 4. Matrix representation of boundary filter bank transform for functions of even length, δ1-boundary filter bank, and m = 8.
Remark. The boundary matrices for H and G contain many elements. However, there are only a few modified columns in the matrix representation of the operator. This means that there will only be a few modified rows in H* and G*, so the changes will not be so large. For example, for H* of the δ3 transform of functions of even length, 2 points will be modified at the left edge, and 3 at the right.

Remark. In general, boundary filter banks that are constructed from an approach with a multiresolution analysis (e.g., those in [6]) will not satisfy the conditions of a polynomial preserving boundary filter bank. This comes from the fact that they allow different quadrature rules for approximating inner products to be used at boundaries and in the interior, whereas in our definition we are only interested in point-values and never use quadrature rules. Therefore, the construction in [6] would not give a sparse representation of polynomials close to boundaries when used with the filter bank method.

With the previous definitions we can define the boundary filter bank transform for discrete functions of arbitrary length as in the periodic case, but with the difference that we switch between the boundary filter banks of odd and even length, depending on whether m is odd or even in s^n ∈ R^m. We denote both the odd and even length transforms by H̃, G̃, H, G, and one step in the boundary filter bank transform will perform the following decomposition: T̃: R^m → R^{l_0(m)} ⊕ R^{l_1(m)}, where

$$l_0(m) = \begin{cases} m/2, & m \text{ even}, \\ (m-1)/2, & m \text{ odd}, \end{cases} \qquad l_1(m) = \begin{cases} m/2, & m \text{ even}, \\ (m+1)/2, & m \text{ odd}. \end{cases} \tag{14}$$
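The bookkeeping in (14) guarantees that no coefficients are gained or lost for arbitrary m; a small sketch:

```python
def l0(m):
    """Size of the detail (d) part after one step, eq. (14)."""
    return m // 2

def l1(m):
    """Size of the coarse (s) part after one step, eq. (14)."""
    return m // 2 + (m % 2)

# Iterating k steps on an arbitrary length m: the detail sizes plus the final
# coarse size always sum back to m, since l0(m) + l1(m) = m.
m, k = 100, 5
detail_sizes, cur = [], m
for _ in range(k):
    detail_sizes.append(l0(cur))
    cur = l1(cur)
assert sum(detail_sizes) + cur == m
```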
Repeated iterations give exactly the same formulas as (7) and (8). As the boundary filter bank transform is "close to" the periodic transform, it inherits most of its properties, such as arithmetic complexity. Furthermore, the theory for the periodic transform [15] can be used directly to find bounds for the error and the number of coefficients in the sparse expansion. We introduce the function spaces

$$L^{q,w}[0,1] = \bigg\{ f \in \mathbb{R}^{[0,1]}:\ \exists N, \exists C, \forall y, \exists a_1, \ldots, a_N, \exists b_1, \ldots, b_N:\ \|f\|_{\dot C^q\left((0,1)\setminus \bigcup_{i=1}^N [a_i,b_i]\right)} \le y,\ \sum_{i=1}^N |b_i - a_i| \le \frac{C}{y^w} \bigg\}. \tag{15}$$

The functions in these spaces are allowed to have a finite number of discontinuities, but the seminorms ‖f‖_{Ċ^q} are only allowed to be large on a finite number of intervals of decreasing length, so the spaces are larger than the C^q spaces. This extension is important. When using the filter bank method, we want to use a sparse representation for solving hyperbolic PDEs, and we know that the solution can be discontinuous. Even if the solution is C^∞, but with very sharp gradients in some small regions, we want to avoid having to go to a very fine scale to resolve these small regions. We have the following theorem for the representation of a sparse expansion:

Theorem 3.1. Suppose f ∈ L^{q,w}([0,1]) is sampled so that s^n = [f]_Δx ∈ R^m, 2^{n−1} ≤ m < 2^n; that s^n is transformed k steps with a boundary perfect reconstruction filter bank that is polynomial preserving of degree N ≥ q − 1; and that the result is thresholded with T_ε. Furthermore, suppose that ‖H̃^p‖_∞ ≤ C 2^{ζ_1 p} and ‖(H*)^p‖_∞ ≤ C 2^{ζ_2 p}. Then
$$\|e_f\|_\infty \le C_0 \big(2^{\zeta_2 k} + k\big)\,\varepsilon, \tag{16}$$

where

$$e_f = \big(T^k\big)^* T_\varepsilon \tilde T^k [f]_{\Delta x} - [f]_{\Delta x}, \tag{17}$$

and C_0 is independent of n, k, ε. Furthermore, the number of nonzero coefficients in τ_ε, N_s, satisfies

$$N_s \le C_1 k + 2^{n-k} + C_2 r, \tag{18}$$

where

$$r = \begin{cases} 2^{(\zeta_1/(q+\zeta_1))n}\,\varepsilon^{-1/(q+\zeta_1)}, & w > \dfrac{1}{q+\zeta_1}, \\[1.5ex] k\,2^{(\zeta_1/(q+\zeta_1))n}\,\varepsilon^{-1/(q+\zeta_1)}, & w = \dfrac{1}{q+\zeta_1}, \\[1.5ex] 2^{(1-wq)n}\,\varepsilon^{-w}, & w < \dfrac{1}{q+\zeta_1}, \end{cases} \tag{19}$$

and C_1, C_2 are independent of n, k, ε.

Proof. We give an outline of the proof, as it is similar to the periodic case [15]. For the error, ‖e_f‖_∞, it is clear that thresholding introduces a small error on each level d^{n−p}, and as ‖(H*)^p‖_∞ ≤ C 2^{ζ_2 p}, the total bound (16) follows by summing over the levels.
For the number of significant coefficients, N_s, we check, for each coefficient in d^{n−p}, which coefficients in s^n influence it. These coefficients all come from points lying in some interval I = [I̲, Ī], which is small as the transform is local. As long as I does not contain any of the points where ‖f‖_{Ċ^q} does not exist, we can make a Taylor expansion of f in I around the midpoint of the interval,

$$f(x) = P(x - z) + r(x - z)\,(x - z)^q, \qquad z = \frac{\underline{I} + \overline{I}}{2},$$

where P is a polynomial. As G̃ is orthogonal to polynomials, P is transformed to zero, and furthermore, as we have control over ‖H̃^p‖_∞ and ‖f‖_{Ċ^q}, we get a bound on the size of each coefficient, which depends on how close it is to one of the points where ‖f‖_{Ċ^q} does not exist. Summing over all coefficients yields the C_2 r part of (18).

We have omitted the intervals where I includes points where ‖f‖_{Ċ^q} does not exist. There are no more than a constant number of such points on each level (depending on the number of points where ‖f‖_{Ċ^q} does not exist, and the length of the filters h, h̃), leading to the C_1 k part of (18). Finally, the number of coefficients in s^{n−k} is bounded by 2^{n−k}, leading to the total bound. The only difference compared with the proof in [15] is that finding the influence interval, I, for some of the coefficients involves using the boundary part of the transform. □
Remark. We could also choose to vary ε. For example, by choosing ε to decrease geometrically over the scales, we could get a bound for the error independent of the number of steps in the transform. However, this would increase the number of coefficients. We could also see such an approach as a rescaling of the transform and inverse transform. Viewed this way, it is clear that ζ_1 + ζ_2 will be constant for a specific boundary filter bank transform.

3.2. Finding bounds for the operator norms

We need to verify that the norms of H̃^p and (H*)^p do not grow too quickly as p increases. In [15], two methods for finding bounds on the norms of the periodic forward transform were proposed. The simplest is to use Young's inequality, which directly gives a bound. This approach works fine for the δ filter banks, but gives too pessimistic bounds for the spline filter banks (and other nonpositive filters). The other method, which works fine for the spline filter banks, finds the growth of the l^2 norm by studying the eigenvalues of a certain matrix introduced by Eirola [10]. For the boundary filter bank transform, this second approach is not easily generalized, as Fourier techniques can no longer be used. One could use the idea of the asymptotic spectrum of the operator [3,14], as ‖H^p‖_2^2 = ‖(H*)^p‖_2^2 is the spectral radius of (H*)^p H^p, which is a quasi-Toeplitz operator. However, the band-width of this matrix will be large even for small p, which makes the method numerically unsuitable. On the other hand, the method based on Young's inequality is directly generalizable to the boundary case, and by using it for p > 1 we can get better bounds for the spline filter banks than in [15]. It also works for the inverse transform, where we can no longer rely on the existence of basis functions in a biorthogonal multiresolution analysis associated with the inverse filters, as was done in [15]. We therefore choose this method, due to its simplicity and generality (it can be used for any boundary filter bank!).
Table 7
Bounds for spline and δ boundary filter banks

Name    ζ1      ζ2
β2      0.864   0.284
β3      1.447   0.597
δ1      0       0.225
δ3      0       0.556
We need to take into account that we are working with two transforms: one for functions of odd and one for functions of even length. Therefore we study the operator norms of different combinations of these transforms. The following theorem is an immediate consequence of basic facts about the infinity norm:

Theorem 3.2. Suppose that

$$\sup_{k \in \{0,1\}^M} \Bigl\| \prod_{j=1}^M \tilde H_{k_j} \Bigr\|_\infty = \nu_1, \qquad (20)$$

where $\tilde H_0$ is the transform for functions of odd length, and $\tilde H_1$ the transform for functions of even length. Then $\|\tilde H^p\|_\infty \leq C_1 2^{\zeta_1 p}$, with $\zeta_1 = (\log_2 \nu_1)/M$. Furthermore, suppose that

$$\sup_{k \in \{0,1\}^M} \Bigl\| \prod_{j=1}^M H_{k_j} \Bigr\|_1 = \nu_2 \qquad (21)$$

(with $H_0$ and $H_1$ similarly defined). Then $\|(H^*)^p\|_\infty \leq C_2 2^{\zeta_2 p}$, with $\zeta_2 = (\log_2 \nu_2)/M$.

Proof. The proof follows directly from the sub-multiplicative property, $\|AB\|_\infty \leq \|A\|_\infty \|B\|_\infty$, and the norm identity $\|A^*\|_\infty = \|A\|_1$. □

As we are working with the infinity norm for $\tilde H$, we sum the absolute values of the entries of the matrix representation of the operator row-wise, and as the rows of the interior part of the matrix are the same (but shifted), the norms are the same regardless of the size of the discrete function (as long as we have at least one interior row of the operator). The same idea is valid for $H$, but column-wise. Therefore, we can easily check the conditions of Theorem 3.2. The results for the choice $M = 7$ are given in Table 7. We see that the δ filter banks give better bounds than the spline filter banks. In particular, the norm of the forward δ transform is trivially equal to one, as we are simply sub-sampling. However, the spline filter banks might be preferred in applications where one wants to use their special properties, e.g., as smoothers [4]. Furthermore, one could do more work at the boundary to decrease these numbers. We should keep in mind that the bounds are rather rough, especially for the inverse transform. Better estimates could be found by taking the specific filter bank into consideration. For example, in [11] the boundary filter banks (which are the same as δ1 and δ3 for the interior part, but differ at the boundaries from the ones presented here) are constructed by polynomial interpolation, which could be used to get estimates for $(H^*)^p$.
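The Young's-inequality-style bound of Theorem 3.2 can be computed mechanically. A minimal sketch, using a periodic (interior-only) decimated convolution matrix in place of the paper's boundary-corrected transforms; the filters and sizes below are illustrative assumptions, not the paper's β or δ filter banks:

```python
import numpy as np

def decimated_matrix(h, n_in):
    """Periodic decimated convolution: one step of a lowpass
    transform on a signal of even length n_in (interior part only;
    the boundary rows of the paper's transforms are omitted here)."""
    n_out = n_in // 2
    H = np.zeros((n_out, n_in))
    for i in range(n_out):
        for k, hk in enumerate(h):
            H[i, (2 * i + k) % n_in] += hk
    return H

def zeta_bound(h, n=128, M=5):
    """Estimate zeta = log2(||product of M transform steps||_inf) / M,
    in the spirit of Theorem 3.2."""
    P = np.eye(n)
    size = n
    for _ in range(M):
        P = decimated_matrix(h, size) @ P
        size //= 2
    nu = np.linalg.norm(P, np.inf)  # maximum absolute row sum
    return np.log2(nu) / M

# Averaging (Haar-like) filter: nonnegative rows summing to one,
# so the bound is exactly zeta = 0, as for the delta filter banks.
print(zeta_bound([0.5, 0.5]))
```

For a filter with negative taps (spline-like), a single step has infinity norm larger than one, but multiplying $M$ steps before taking the norm, as in (20), can give a sharper per-step exponent.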
3.3. Numerical examples

Our goal is to have a transform that solves the four problems stated in Section 3.1. We know that the representation problem will be solved with polynomial preserving boundary filter banks. In [15] fast algorithms, so called Λ-cycles, were introduced for applying operators to a sparse expansion in $O(N_s)$ operations. These can be used without modification for our boundary filter banks (except for differentiation, where one-sided finite difference stencils are needed at the boundary). We choose the β2 boundary filter bank and the test function

$$f(x) = \sin(2\pi x) + e^{-40000(x-0.5)^2} + e^x,$$
discretized in 1024 points. We transform $k = 5$ steps, and threshold with $\varepsilon = 10^{-3}$. The resulting representation has 61 nonzero coefficients, distributed as in Fig. 5(a), bottom figure. The reconstructed function is shown in Fig. 5(a), top figure, and the point-wise error in log-scale in Fig. 5(a), middle figure.
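The experiment can be imitated with a stand-in transform. A sketch using a simple interpolating (δ-style) filter bank on a signal of odd length instead of the paper's β2 filters — the multiscale thresholding and reconstruction logic is the point here, not the particular filter:

```python
import numpy as np

def forward(x):
    """One step on odd length 2m+1: coarse part = even samples,
    details = odd samples minus linear interpolation."""
    s = x[::2]
    d = x[1::2] - 0.5 * (x[:-2:2] + x[2::2])
    return s, d

def inverse(s, d):
    y = np.empty(len(s) + len(d))
    y[::2] = s
    y[1::2] = d + 0.5 * (s[:-1] + s[1:])
    return y

# Test function from the text, here on 1025 points (odd length).
x = np.linspace(0.0, 1.0, 1025)
f = np.sin(2 * np.pi * x) + np.exp(-40000 * (x - 0.5) ** 2) + np.exp(x)

k, eps = 5, 1e-3
s, details = f.copy(), []
for _ in range(k):
    s, d = forward(s)
    d[np.abs(d) < eps] = 0.0          # threshold small details
    details.append(d)

ns = np.count_nonzero(s) + sum(np.count_nonzero(d) for d in details)

for d in reversed(details):            # reconstruct, coarsest first
    s = inverse(s, d)

err = np.max(np.abs(s - f))
print(ns, err)  # few significant coefficients, error of order eps
```

Because the coarse part of this stand-in transform is pure sub-sampling, each grid point is affected by at most one zeroed detail, so the reconstruction error stays below $\varepsilon$; the nonzero coefficients cluster around the "spike", as in Fig. 5(a).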
Fig. 5. Representation (a), differential (b), exponential (c), and multiplication (d) of thresholded expansion with Λ-cycle and boundary filter bank. Resulting function above, error in the middle, and distribution of nonzero coefficients below. Parameters: β2 boundary filter bank, 1024 points in original function, k = 5, ε = 10−3 .
We see that the error is larger at the "spike" and near the boundary, but that it is of the same order as $\varepsilon$, which is the best we can hope for. For the differentiation problem, we apply the Λ-cycle with the 4th order centered finite difference stencil in the interior, and 4th order one-sided stencils at the boundary. The results are shown in Fig. 5(b). Once again the error is larger near the boundary, but the largest error is near the "spike". As we transform back to the finest level in that region, this error depends on the finite difference approximation, and thus the error in maximum norm will be the same as when the finite difference approximation is performed directly on the finest grid. The number of significant coefficients has increased to 102, as the Λ-cycle automatically gives candidates for new nonthresholded coefficients.

For the operator problems, the spline filter banks are inadequate, as they are not identically polynomial preserving. This is shown in Figs. 5(c), (d), where we see that the error is large. In our next example we use the same parameters as before, but with the δ3 boundary filter bank on 1025 points. For the representation and differentiation problems, the results are somewhat better than with the β2 boundary filter bank, especially near the boundary. This comes from a smaller error constant in the operator norm. As the δ3 boundary filter bank is identically polynomial preserving, it also solves the operator problems satisfactorily. The number of significant coefficients lies between 60 and 84, giving compression factors between 12 and 17. It should be stressed that all the operations involved are $O(N_s)$. Furthermore, the Λ-cycle could also be defined for other operations, e.g., ENO schemes when solving conservation laws, although the filter banks involved might have to be adapted to satisfy some extra conditions (as when going from polynomial preserving to identically polynomial preserving). For example, in [13] the Euler equations are solved in a manner quite close to the filter bank idea, and the condition $\tilde h_{2k} + \tilde h_{2k+1} = \delta_k$ is imposed for consistency.
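The one-sided stencils mentioned above can be written down concretely. A sketch with the standard 4th-order coefficients (the paper does not list its exact boundary stencils, so the textbook ones are used here, and the choice of a fully one-sided stencil at the second point from each boundary is an assumption):

```python
import numpy as np

def diff4(f, h):
    """First derivative to 4th order: centered 5-point stencil in
    the interior, one-sided 5-point stencils near the boundary."""
    df = np.empty_like(f)
    # interior: (f[i-2] - 8 f[i-1] + 8 f[i+1] - f[i+2]) / (12 h)
    df[2:-2] = (f[:-4] - 8.0 * f[1:-3] + 8.0 * f[3:-1] - f[4:]) / (12.0 * h)
    # standard 4th-order forward-difference coefficients
    c = np.array([-25.0, 48.0, -36.0, 16.0, -3.0]) / (12.0 * h)
    for i in (0, 1):
        df[i] = c @ f[i:i + 5]                   # forward difference
        df[-1 - i] = -(c @ f[-1 - i::-1][:5])    # mirrored backward difference
    return df

x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
err = np.max(np.abs(diff4(np.sin(x), h) - np.cos(x)))
print(err)  # fourth-order small
```

Both stencils are exact for polynomials up to degree four, so a transform that is polynomial preserving to that degree keeps the full finite difference accuracy.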
4. The transform in higher dimensions

4.1. Tensor product regions, $s^n \in \bigotimes_{i=1}^N \mathbb{R}^{m_i}$

The boundary filter bank transform on the interval is immediately generalizable to higher dimensional tensor product spaces ("boxes"). We consider a grid function $s^n \in \bigotimes_{i=1}^N \mathbb{R}^{m_i}$. One step of the boundary filter bank transform in higher dimensions performs the following decomposition:

$$\tilde T : \bigotimes_{i=1}^N \mathbb{R}^{m_i} \to \bigoplus_{J \in \{0,1\}^N} \Biggl( \bigotimes_{i=1}^N \mathbb{R}^{l_{J_i}(m_i)} \Biggr),$$

where $l$ is defined as in (14). We define

$$V^n = \bigotimes_{i=1}^N \mathbb{R}^{m_i}, \qquad W^{n-1} = \bigoplus_{J \in \{0,1\}^N \setminus \{1\}^N} \Biggl( \bigotimes_{i=1}^N \mathbb{R}^{l_{J_i}(m_i)} \Biggr), \qquad V^{n-1} = \bigotimes_{i=1}^N \mathbb{R}^{l_1(m_i)},$$

and the transform, $\tilde T : V^n \to W^{n-1} \oplus V^{n-1}$, is then defined as

$$\tilde T = \bigoplus_{J \in \{0,1\}^N} \Biggl( \bigotimes_{i=1}^N \tilde T_{J_i} \Biggr), \qquad \text{where } \tilde T_0 = \tilde G,\ \tilde T_1 = \tilde H$$
Fig. 6. Representation (a), differential (b), exponential (c), and multiplication (d) of thresholded expansion with Λ-cycle and boundary filter bank. Resulting function above, error in the middle, and distribution of nonzero coefficients below. Parameters: δ3 boundary filter bank, 1025 points in original function, k = 5, ε = 10−3 .
(and we, as in the one-dimensional case, use odd- or even-length transforms in each dimension, depending on $m_i$). With the notation

$$\tilde G = \bigoplus_{J \in \{0,1\}^N \setminus \{1\}^N} \Biggl( \bigotimes_{i=1}^N \tilde T_{J_i} \Biggr), \qquad \tilde H = \bigotimes_{i=1}^N \tilde T_1,$$

and similar definitions for $G$, $H$, the tensor product rules in the appendix give us

$$\bigl( \tilde G \oplus \tilde H \bigr)^{-1} = G^* \oplus H^*, \qquad (22)$$

just as in the one-dimensional case.

When doing more than one step in the transform, we want to use the analogue of the "square" transform of the continuous theory, only decomposing the $V$ parts of functions more than once (see [8,
p. 313]). This gives us the following definition of $n - n_0$ steps of the boundary filter bank transform in $N$ dimensions, similar to (7):

$$\tilde T^{n-n_0} = \Biggl( \bigoplus_{j=1}^{n-n_0} \tilde G \tilde H^{j-1} \Biggr) \oplus \tilde H^{n-n_0},$$

and for the inverse,

$$\bigl( \tilde T^{n-n_0} \bigr)^{-1} = \bigl( T^{n-n_0} \bigr)^* = \Biggl( \bigoplus_{j=1}^{n-n_0} (H^*)^{j-1} G^* \Biggr) \oplus (H^*)^{n-n_0},$$
as in (8). It is an immediate result that properties 1–4 and 1′ of Section 3.1 will be satisfied for all polynomials in $\mathcal{P}^M(\mathbb{R}^N)$, i.e., all of the polynomial information is kept in $V^{n_0}$. Therefore, the multidimensional transform behaves essentially as the one-dimensional transform. As we are combining $N$ one-dimensional transforms, the norm bounds will be $\|\tilde H^p\|_\infty \leq C_1^N 2^{N\zeta_1 p}$ and $\|(H^*)^p\|_\infty \leq C_2^N 2^{N\zeta_2 p}$.

4.2. General Cartesian regions

The boundary filter bank transform can also be generalized to other Cartesian regions. The idea is that we do the transform (one dimension at a time) as long as there are enough points to do it. If there are not enough points, then we simply keep the information on that level. For example, in Fig. 7 the points close to the boundary will be kept, whereas the interior will be transformed. The Λ-cycles can also be defined for this type of transform, but with the additional rule that transforms to finer levels are made not only when there are $W$ coefficients, but also when there are (not yet transformed) $V$ coefficients on a finer level.

There is, however, one extra complication entering the picture when doing the transform on more general domains. As the goal is to keep the polynomial representations, we must take into account that functions of different length must be transformed in a consistent manner. For example, in Fig. 8 we want the downsampling step to pick out the same coefficients (circled) when we are doing the horizontal transform. This will not be the case if we only have two transforms, as, for example, the horizontal transform on the lowest row would then pick out the noncircled points. The problem is solved by introducing two new boundary transforms, one for functions of even and one for functions of odd length, so that we altogether have four transforms:

$$\tilde H_0 : \mathbb{R}^{2m} \to \mathbb{R}^m, \qquad \tilde G_0 : \mathbb{R}^{2m} \to \mathbb{R}^m,$$
$$\tilde H_1 : \mathbb{R}^{2m+1} \to \mathbb{R}^{m+1}, \qquad \tilde G_1 : \mathbb{R}^{2m+1} \to \mathbb{R}^m,$$
$$\tilde H_2 : \mathbb{R}^{2m} \to \mathbb{R}^m, \qquad \tilde G_2 : \mathbb{R}^{2m} \to \mathbb{R}^m,$$
$$\tilde H_3 : \mathbb{R}^{2m+1} \to \mathbb{R}^m, \qquad \tilde G_3 : \mathbb{R}^{2m+1} \to \mathbb{R}^{m+1}.$$
For example, in Table 8 the extra transforms are listed for the δ1 filter bank. The extra transforms are needed because of the sub-sampling factor of 2, so four transforms suffice independent of the number of dimensions.
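The tensor-product transform of Section 4.1 is just the one-dimensional transform applied along each axis in turn. A sketch with the same stand-in interpolating filter bank used earlier, packing coarse and detail parts into one array (an implementation choice, not from the paper), verifying the perfect reconstruction relation of (22):

```python
import numpy as np

def fwd(x):
    """One 1D step on odd length 2m+1, packed as [coarse, details]."""
    s, d = x[::2], x[1::2] - 0.5 * (x[:-2:2] + x[2::2])
    return np.concatenate([s, d])

def inv(y):
    m = len(y) // 2
    s, d = y[:m + 1], y[m + 1:]
    x = np.empty(len(y))
    x[::2] = s
    x[1::2] = d + 0.5 * (s[:-1] + s[1:])
    return x

def fwd2d(a):
    """One step of the separable 2D transform: the 1D transform
    applied along each axis (a tensor product of 1D transforms)."""
    a = np.apply_along_axis(fwd, 0, a)
    return np.apply_along_axis(fwd, 1, a)

def inv2d(b):
    b = np.apply_along_axis(inv, 1, b)
    return np.apply_along_axis(inv, 0, b)

rng = np.random.default_rng(0)
a = rng.standard_normal((33, 65))        # odd lengths in both axes
assert np.allclose(inv2d(fwd2d(a)), a)   # perfect reconstruction
```

Since the two axis transforms act on different indices they commute, so the inverse can undo them in either order.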
Table 8
Boundary matrices and shift indices for the boundary δ1 filter bank, giving the extra transforms for general Cartesian domains

δ1, odd length (N = 2; shifts $s_{\tilde H}, s_{\tilde G}, s_H, s_G$ = −1, 1, −1, 1):
  $\tilde H^{\rm left}$ = N, $\tilde H^{\rm right}$ = N,
  $\tilde G^{\rm left}$ = [−3/4, 1, 1/4, −1/2], $\tilde G^{\rm right}$ = [−1/2, 1/4, 1, −3/4],
  $H^{\rm left}$ = [3/2, 1, 1/2, 0, 0; −1/2, 0, 1/2, 1, 1/2], $H^{\rm right}$ = [1/2, 1, 1/2, 0, −1/2; 0, 0, 1/2, 1, 3/2],
  $G^{\rm left}$ = [−4/3, 0, 0; −2/3, 0, −2], $G^{\rm right}$ = [−2, 0, −2/3; 0, 0, −4/3]

δ1, even length (N = 2; shifts −1, 1, −1, 1):
  $\tilde H^{\rm left}$ = N, $\tilde H^{\rm right}$ = N,
  $\tilde G^{\rm left}$ = [−3/4, 1, 1/4, −1/2], $\tilde G^{\rm right}$ = N,
  $H^{\rm left}$ = [3/2, 1, 1/2, 0, 0; −1/2, 0, 1/2, 1, 1/2], $H^{\rm right}$ = [1/2, 1],
  $G^{\rm left}$ = [−4/3, 0, 0; −2/3, 0, −2], $G^{\rm right}$ = N
Fig. 7. General transform of Cartesian discretization of disc.
Fig. 8. Transform made horizontally, using four transforms to represent polynomials in an efficient way.
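The consistency requirement illustrated in Fig. 8 amounts to always keeping the samples with even global index, whatever a row's starting offset. A small sketch of this bookkeeping (the offset handling is illustrative, not the paper's exact formulation):

```python
import numpy as np

def consistent_coarse(x, offset):
    """Keep the samples whose *global* index (offset + local index)
    is even, so rows starting at different offsets still pick out
    the same grid columns (cf. Fig. 8)."""
    start = offset % 2   # first local index with an even global index
    kept_local = np.arange(start, len(x), 2)
    return x[kept_local], offset + kept_local  # values, global indices

row0 = np.arange(0.0, 10.0)   # row covering global columns 0..9
row1 = np.arange(1.0, 10.0)   # shorter row starting at column 1

_, idx0 = consistent_coarse(row0, 0)
_, idx1 = consistent_coarse(row1, 1)
# Both rows keep even-numbered global columns, so they agree
# wherever they overlap.
print(idx0, idx1)
```

Which of the four boundary transforms to apply then follows from the parity of the offset and of the row length.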
4.3. The unit triangle

The idea of transforming regular fine resolution subdivisions to coarser ones, with the relation (22), and keeping the polynomial information in one part of the transform, can of course also be applied to non-Cartesian discretizations. The essential point for the method to work is once again to keep conditions 1–4 and 1′ satisfied.
Fig. 9. Uniform subdivision of the unit triangle.
For example, we look at the unit triangle

$$T = \bigl\{ (x, y) \in \mathbb{R}^2 : y \geq 0,\ y \leq \sqrt{3}\,x,\ y \leq \sqrt{3}\,(1 - x) \bigr\}.$$

We triangulate it uniformly according to the rule shown in Fig. 9, and choose as grid points the center of gravity of each triangle. Of course, the idea of a perfect reconstruction filter bank (and even the idea of convolution) has to be modified, as the grids involved are more complicated (e.g., not all triangles have the same orientation). On the other hand, there are three natural directions from every point, which means that the transform of one discrete function into one "polynomial" part and three "rest" parts is in some sense more natural in this setting, as we can take the same filter (but with different orientations) for all three $\tilde G$ transforms, in contrast to the Cartesian case, where the $\tilde G_x \tilde G_y$ transform is quite different from the $\tilde G_x \tilde H_y$ and $\tilde H_x \tilde G_y$ transforms. The idea of the subdivision is shown in Fig. 10.

In Fig. 11 we show filters that give an identically polynomial preserving transform of degree 1. The $\tilde H$ transform, Fig. 11(a), is simply the δ subdivision (the coefficient in the figure being the coefficient to multiply with). The three $\tilde G$s for interior points can all be performed by applying the operator shown in Fig. 11(b), but in different directions. For boundary points near an edge, the $\tilde G$ transform is performed as in Fig. 11(c), and finally there are six boundary points at vertices for which we need a third rule, shown in Fig. 11(d) (this is not the same rule as in Fig. 11(c), although the coefficients are the same, as we use different points). The inverse transform is easily constructed, as we can subtract the $\tilde H$ part of the subdivision from each $\tilde G$ part.

The idea of Λ-cycles can be used for the transform on the triangle, too. The only difference compared with the Cartesian algorithms is that for the differentiation problem, we use finite difference stencils that are derived from points in a triangulation (see, e.g., [7]).

4.4. Numerical examples

We discretize the function

$$f(x, y, z) = e^{x-y} \cos(4y + 0.3) + e^z + \Theta(1.5 - x - y - z)$$

defined on $[0, 1]^3$, giving $s^n = [f]_{1/128, 1/130, 1/133}$, and do 5 steps of the boundary filter bank transform, with the δ3 boundary filter bank. We choose $\varepsilon = 10^{-4}$. The number of significant coefficients is $N_s = 146{,}435$ out of a total of $2{,}264{,}466$, giving a compression factor of 15.5. This result is satisfying, as the function has a two-dimensional surface along which it is discontinuous. The result on the surface $z = 0.5$ is shown in Fig. 12(a), and the error in Fig. 12(b). The maximum norm of the error is $\|e\|_\infty = 1.49 \times 10^{-3}$,
Fig. 10. One step in transform on unit triangle.
Fig. 11. Subdivision stencils for the transform on the unit triangle.
which is larger than $\varepsilon$. This is not surprising, as we are doing the one-dimensional transform 15 times (5 times in each direction), so here the error growth is beginning to show.

In our next example we perform the transform on the triangle with the subdivision method presented earlier. We choose the test function $f = e^{x-y} \cos(4y + 0.3) + \Theta(x + y - 0.8)$, and discretize it on the unit triangle with an initial function $s^8$ having $65{,}536$ points. We transform 6 steps and threshold with $\varepsilon = 2 \times 10^{-3}$. The number of significant coefficients is $N_s = 4563$, giving a compression factor of 14.3. This factor could be improved by constructing higher order subdivision schemes. The reconstructed function is shown in Fig. 13(a), where we have mapped it to a Cartesian grid by simply choosing the closest triangular grid point for each Cartesian point, so that the standard
Fig. 12. Thresholded expansion of $f = e^{x-y} \cos(4y + 0.3) + e^z + \Theta(1.5 - x - y - z)$ (a), and error (b), on the surface $z = 0.5$.
MATLAB plot routines can be used. The error is shown in Fig. 13(b). In maximum norm we have $\|e\|_\infty = 4.73 \times 10^{-3}$, with the largest error close to the boundary (coming from the larger error constants in the subdivision scheme near the boundary).
5. Conclusions

We have constructed transforms for bounded domains that can be used together with thresholding, Λ-cycles, and finite difference methods to solve PDEs adaptively. The conditions imposed at the boundary are similar to those used for one-sided finite difference stencils, and it is a simple task to construct boundary filter banks that keep the order of approximation for different operators. A rough stability estimate can be given for any boundary filter bank by Theorem 3.2, whereas for better bounds, methods tailored to the specific boundary filter bank could be used. The idea also works for non-Cartesian discretizations, as long as they are regular. For nonregular discretizations, one could use different filter banks in different parts of the domain, and on different levels. This, however, would take us rather far from the original wavelet/filter bank decomposition. In a forthcoming paper we will use the boundary filter banks to solve the Euler and Navier–Stokes equations in two dimensions.
Fig. 13. Thresholded expansion of $f = e^{x-y} \cos(4y + x + 0.3) + \Theta(x + y - 0.8)$ discretized on the unit triangle (a), and error (b).
6. Linear spaces

Suppose $S$ is a finite dimensional vector space over $\mathbb{R}$ (or $\mathbb{C}$), and that $\{\nu_i\}_{i \in J}$ is a basis for $S$. The set $J$ does not have to be the usual vector indices $\{1, \ldots, n\}$ but could, e.g., be two-dimensional, $J = \{0, \ldots, n_1 - 1\} \times \{0, \ldots, n_2 - 1\}$. To stress this we use the name linear space. When a basis is chosen, we can identify $S$ with $\mathbb{R}^J$ (where $\mathbb{R}^J = \{f : J \to \mathbb{R}\}$), i.e., $S$ is isomorphic to $\mathbb{R}^J$, $S \cong \mathbb{R}^J$. We define an inner product on $S$ by choosing $\langle v^1, v^2 \rangle = \sum_{i \in J} \alpha_i^1 \alpha_i^2$, for $v^1 = \sum_{i \in J} \alpha_i^1 \nu_i$, $v^2 = \sum_{i \in J} \alpha_i^2 \nu_i$.

By $L(S, \tilde S)$ we mean the space of all linear operators from $S$ to $\tilde S$. We abbreviate $L(S, S)$ by $L(S)$. Of course, $L(S, \tilde S)$ is also a linear space with the obvious definitions of $+$ and $\cdot$. An element of $S$,

$$v = \sum_{k \in J} \alpha_k \nu_k, \qquad \alpha \in \mathbb{R}^J,$$

is mapped by an operator $A \in L(S, \tilde S)$ to

$$Av = \sum_{i \in \tilde J} \beta_i \tilde\nu_i, \qquad \beta \in \mathbb{R}^{\tilde J},$$

where

$$\beta_i = \sum_{k \in J} A_{i,k} \alpha_k, \qquad A \in \mathbb{R}^{\tilde J \times J}$$
(we will only use one basis for $S$, so there is no problem in using the same notation for $A$ as an operator and as the representation of the operator in a certain basis), and we get the standard operator composition rule for $A \in L(S, \tilde S)$, $B \in L(\tilde S, \tilde{\tilde S})$:

$$(B \circ A)_{j,k} = \sum_{l \in \tilde J} B_{j,l} A_{l,k}, \qquad j \in \tilde{\tilde J},\ k \in J.$$
e is The identity operator is written as I (or IS if not obvious). The adjoint operator A∗ of A ∈ L(S, S) ∗ ∗ e the operator satisfying hAu, viSe = hu, A viS ∀u ∈ S ∀v ∈ S. We immediately get Aj,k = Ak,j (or Ak,j if we are working over C). Of course, the whole theory of matrices, e.g., spectral properties, symmetry, e as the only thing we have done is to choose another notation for its elements. etc. holds for A ∈ L(S, S) def For elements, v1 ∈ S1 , v2 ∈ S2 , we define the (exterior) direct sum, v1 ⊕ v2 =(v1 , v2 ), and by defining addition and multiplication in the obvious way, {v1 ⊕ v2 , v1 ∈ S1 , v2 ∈ S2 } becomes a linear space, which we call S1 ⊕ S2 . If S1 and S2 are inner product spaces we define the inner product on def S1 ⊕ S2 by hu1 ⊕ u2 , v1 ⊕ v2 i =hu1 , v1 i + hu2 , v2 i. We can identify S1 ⊕ S2 with RJ1 ]J2 (where the ] union operator gives an ordered set where elements both in J1 and J2 are counted twice), and more L n generally nr=1 Sr with R]r=1 Jr . If S1 ⊂ S, S2 ⊂ S, and S1 ∩ S2 = {0}, then S1 ⊕ S2 is isomorphic with def S1 + S2 ={s1 + s2 : s1 ∈ S1 , s2 ∈ S2 } (although this is only an isometric isomorphism if S1 ⊥ S2 ). This is the internal direct sum interpretation. We identify L(S, Se1 ) ⊕ L(S, Se2) with L(S, Se1 ⊕ Se2 ) by the rule (H1 ⊕ H2 )v = (H1 v) ⊕ (H2 v), and in e ⊕ L(S2 , S) e with L(S1 ⊕ S2 , S). e With this identification, we the same manner we can identify L(S1 , S) get n M
!∗
=
Ai
i=1
n M
A∗i ,
i=1
We identify the dual space $S^*$ of an inner product space $S$ with the space itself. For $\nu_1 \in S_1$, $\nu_2 \in S_2$ we define $\nu_1 \otimes \nu_2$ to be the bilinear form on $S_1 \times S_2$ defined by $(\nu_1 \otimes \nu_2)(\eta_1, \eta_2) = \langle \nu_1, \eta_1 \rangle \langle \nu_2, \eta_2 \rangle$. We define the tensor product of $S_1$ and $S_2$ as $S_1 \otimes S_2 \stackrel{\mathrm{def}}{=} \mathrm{span}\{\nu_1 \otimes \nu_2 : \nu_1 \in S_1,\ \nu_2 \in S_2\}$. We also define an inner product on $S_1 \otimes S_2$ by

$$\langle v^1, v^2 \rangle \stackrel{\mathrm{def}}{=} \sum_{i \in J_1 \times J_2} \alpha_i^1 \alpha_i^2,$$

for

$$v^1 = \sum_{(i_1, i_2) \in J_1 \times J_2} \alpha^1_{(i_1, i_2)} \nu^1_{i_1} \otimes \nu^2_{i_2}, \qquad v^2 = \sum_{(i_1, i_2) \in J_1 \times J_2} \alpha^2_{(i_1, i_2)} \nu^1_{i_1} \otimes \nu^2_{i_2}.$$

With these definitions, we can identify $S_1 \otimes S_2$ with $\mathbb{R}^{J_1 \times J_2}$, and more generally $\bigotimes_{r=1}^n S_r$ with $\mathbb{R}^{\prod_{r=1}^n J_r}$. When defining tensor products of operators, we could view $L(S, \tilde S)$ as any linear space, but a simpler (equivalent) definition for $A_i \in L(S_i, \tilde S_i)$, $i = 1, \ldots, n$, is to define the tensor product operator, $\bigotimes_{i=1}^n A_i \in L\bigl(\bigotimes_{i=1}^n S_i, \bigotimes_{i=1}^n \tilde S_i\bigr)$, as the linear operator satisfying

$$\Biggl( \bigotimes_{i=1}^n A_i \Biggr) \Biggl( \bigotimes_{i=1}^n v_i \Biggr) = \bigotimes_{i=1}^n (A_i v_i), \qquad v_i \in S_i,$$
and we can identify $L\bigl(\bigotimes_{i=1}^n S_i, \bigotimes_{i=1}^n \tilde S_i\bigr)$ with $\mathbb{R}^{(\prod_{r=1}^n \tilde J_r) \times (\prod_{r=1}^n J_r)}$. From these definitions we immediately get

$$\Biggl( \bigotimes_{i=1}^n A_i \Biggr) \Biggl( \bigotimes_{i=1}^n B_i \Biggr) = \bigotimes_{i=1}^n (A_i B_i), \qquad \Biggl( \bigotimes_{i=1}^n A_i \Biggr)^* = \bigotimes_{i=1}^n A_i^*, \qquad \Biggl( \bigotimes_{i=1}^n A_i \Biggr)^{-1} = \bigotimes_{i=1}^n A_i^{-1},$$
and we have the distributive rule

$$(S_1 \oplus S_2) \otimes (S_3 \oplus S_4) \cong (S_1 \otimes S_3) \oplus (S_1 \otimes S_4) \oplus (S_2 \otimes S_3) \oplus (S_2 \otimes S_4).$$

We define the embedding of an operator,

$$T_j(A) \in L\Biggl( \bigotimes_{i=1}^{j-1} S_i \otimes S_j \otimes \bigotimes_{i=j+1}^n S_i,\ \bigotimes_{i=1}^{j-1} S_i \otimes \tilde S_j \otimes \bigotimes_{i=j+1}^n S_i \Biggr),$$

for $A \in L(S_j, \tilde S_j)$, as

$$T_j(A) = I_{S_1} \otimes I_{S_2} \otimes \cdots \otimes I_{S_{j-1}} \otimes A \otimes I_{S_{j+1}} \otimes \cdots \otimes I_{S_n}.$$

A bijection $f : J \leftrightarrow \{1, \ldots, |J|\}$ can be used to go between a linear space and a vector space representation, $S \leftrightarrow \mathbb{R}^{|J|}$, as $v = \sum_{i \in J} \alpha_i \nu_i = \sum_{j=1}^{|J|} \beta_j \nu_{f^{-1}(j)}$. In particular, for $J = \prod_{i=1}^n \{0, \ldots, m_i - 1\}$ we can choose the natural bijection

$$f(j_1, \ldots, j_n) = 1 + \sum_{i=1}^n j_i \prod_{k=1}^{i-1} m_k.$$
This is the "standard" rule when associating a "long" vector with a grid function defined on an $n$-dimensional "box", and we call this mapping $N$. It simply takes one dimension at a time, and orders the grid points consecutively. With this rule, we get the following "basis change" relation between the previously defined tensor product and the Kronecker product for matrices, $\otimes^K$:

$$N^{-1} \Bigl( \bigotimes\nolimits^{K}_{i=1,\ldots,n} A_i \Bigr) N f = \Biggl( \bigotimes_{i=1}^n A_i \Biggr) f, \qquad f \in \mathbb{R}^{\prod_{i=1}^n J_i},\ A_i \in \mathbb{R}^{\tilde J_i \times J_i},$$

with $J_i = \{0, \ldots, m_i - 1\}$ and $\tilde J_i = \{0, \ldots, \tilde m_i - 1\}$.
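The "basis change" relation above is the familiar vec/Kronecker identity, which can be checked numerically. Since the mapping $N$ is column-major (dimension 1 varies fastest), the Kronecker factors appear in reverse order in the conventional matrix notation; the sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
A1 = rng.standard_normal((5, 3))   # acts along dimension 1
A2 = rng.standard_normal((2, 4))   # acts along dimension 2
X = rng.standard_normal((3, 4))    # grid function on a 3x4 "box"

# Tensor product operator applied to the grid function:
Y = A1 @ X @ A2.T

# The same operation on the "long" vector N(X) (column-major,
# i.e. order='F') is the Kronecker product of A2 with A1:
lhs = np.kron(A2, A1) @ X.flatten(order="F")
assert np.allclose(lhs, Y.flatten(order="F"))
```

This is the standard identity $\mathrm{vec}(A_1 X A_2^{\mathrm T}) = (A_2 \otimes^K A_1)\,\mathrm{vec}(X)$ for column-major vectorization.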
References [1] L. Andersson, N. Hall, B. Jawerth, G. Peters, Wavelets on closed subsets of the real line, in: Wavelet Analysis and its Applications—Recent Advances in Wavelet Analysis, Vol. 3, Academic Press, New York, 1994, pp. 1–61.
[2] P. Auscher, Ondelettes à support compact et conditions aux limites, J. Funct. Anal. 111 (1) (1993) 29–43.
[3] R.M. Beam, R.F. Warming, The asymptotic spectra of banded Toeplitz and quasi-Toeplitz matrices, SIAM J. Sci. Comput. 14 (4) (1993) 971–1006.
[4] G. Beylkin, On the fast Fourier transform of functions with singularities, Appl. Comput. Harmon. Anal. 2 (4) (1995) 363–381.
[5] A. Cohen, I. Daubechies, J.-C. Feauveau, Biorthogonal bases of compactly supported wavelets, Comm. Pure Appl. Math. 45 (1992) 485–560.
[6] A. Cohen, I. Daubechies, P. Vial, Wavelets on the interval and fast wavelet transforms, Appl. Comput. Harmon. Anal. 1 (1) (1993) 54–81.
[7] L. Collatz, The Numerical Treatment of Differential Equations, Springer, Berlin, 1960.
[8] I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, PA, 1992.
[9] G. Deslauriers, S. Dubuc, Symmetric iterative interpolation processes, Constr. Approx. 5 (1989) 49–68.
[10] T. Eirola, Sobolev characterization of solutions of dilation equations, SIAM J. Math. Anal. 23 (4) (1992) 1015–1030.
[11] M. Holmström, Solving hyperbolic PDEs using interpolating wavelets, Technical Report 189, Department of Scientific Computing, Uppsala University, Uppsala, Sweden, 1996.
[12] W. Kahan, Numerical linear algebra, Canad. Math. Bull. 9 (1966) 757–801.
[13] S. Müller, B. Gottschlich-Müller, Multiscale concept for conservation laws, Preprint, Institut für Geometrie und Praktische Mathematik, RWTH Aachen, Germany.
[14] P. Schmidt, F. Spitzer, The Toeplitz matrices of an arbitrary Laurent polynomial, Math. Scand. 8 (1960) 15–38.
[15] J. Waldén, Filter bank methods for hyperbolic PDEs, SIAM J. Numer. Anal. 36 (4) (1999) 1183–1233.