A splitting preconditioner for a block two-by-two linear system with applications to the bidomain equations

Hao Chen*, Xiaolin Li, Yan Wang

College of Mathematics Science, Chongqing Normal University, Chongqing 401331, China.

*Corresponding author.
Abstract

We construct an alternating splitting iteration scheme for solving and preconditioning a block two-by-two linear system arising from numerical discretizations of the bidomain equations. The convergence theory of this class of splitting iteration methods is established and some useful properties of the preconditioned matrix are analyzed. The potential of this approach is illustrated by numerical experiments.

Key words: block two-by-two matrix, iterative methods, preconditioning, matrix splitting, bidomain equations
1. Introduction

In this work we construct preconditioners for the discrete bidomain model. This model describes the electrical activity in the heart; see [18] for details. The general bidomain equations take the following form:
\[
\begin{aligned}
\frac{\partial v}{\partial t} &= \nabla\cdot(M_i\nabla v) + \nabla\cdot(M_i\nabla u_e) + f(v,w),\\
0 &= \nabla\cdot(M_i\nabla v) + \nabla\cdot\big((M_i+M_e)\nabla u_e\big), \qquad\qquad (1.1)\\
\frac{\partial w}{\partial t} &= g(v,w).
\end{aligned}
\]
Here, the unknowns are the transmembrane potential v and the extracellular potential u_e. The tensors M_i and M_e describe the intra- and extracellular conductivities of the heart, and w denotes the state of the heart cells. Usually these equations are supplied with homogeneous Neumann boundary conditions, i.e.,
\[
n^T M_i \nabla(v + u_e) = 0 \quad\text{and}\quad n^T (M_i + M_e)\nabla u_e + n^T M_i \nabla v = 0, \qquad (1.2)
\]
where n is the outward directed unit normal vector defined along the boundary of the heart. Note that u_e is only determined up to a constant in the case of Neumann conditions.
The bidomain system consists of two partial differential equations (PDEs) of elliptic and parabolic type, coupled through the nonlinear reaction term with a stiff system of ordinary differential equations (ODEs), the so-called membrane model, which describes the ionic currents through the cellular membrane. The numerical solution of the bidomain equations is computationally very expensive because of the interaction of different scales in space and time and the very severe ill-conditioning of the discrete systems arising at each time step. Several approaches have been developed in order to simulate the bidomain equations effectively. Fully implicit methods in time have been considered in, e.g., [20-22] and require the solution of nonlinear systems at each time step. Alternatively, most numerical studies now consider semi-implicit time discretizations and operator splitting schemes, where the reaction and diffusion terms are treated separately. The advantage of these approaches is that they allow for larger time steps than explicit schemes, at the cost of having to solve a large algebraic linear system at each time step.

In recent years, considerable effort has been spent in developing efficient solvers for the algebraic systems arising from the discrete bidomain equations; see [37] for a detailed overview of the methods. An important class of preconditioners is based on the block LU factorization of the coefficient matrix [12, 16, 17, 19, 24-26, 32, 36]. This class includes a variety of block diagonal and block triangular preconditioners. In addition, we mention optimized Schwarz [16], multigrid [1, 24-27, 33, 38], and multilevel Schwarz [13, 23, 29, 30] methods. Alternating splitting preconditioning, see [2-5, 8-11, 31], is another type of preconditioning technique which has proven useful in the solution of some structured linear systems, including block two-by-two systems of saddle point type.

In this work, a preconditioning approach based on an alternating splitting of the coefficient matrix is proposed to solve the discrete bidomain equations. The idea is similar to the methods presented in [11], where Kronecker product splitting preconditioners were introduced for implicit Runge-Kutta discretizations of the bidomain model. We demonstrate that the spectrum of the preconditioned matrix shows a tight cluster at unity, and a guideline for choosing the splitting parameter is described. We present the results of numerical experiments, including comparisons with other preconditioners, using test problems generated from discretizations of the two-dimensional (2D) bidomain model by finite difference methods.

The paper is organized as follows. In Section 2, a discrete approximation of the bidomain equations (1.1)-(1.2) is described. Section 3 gives the details of the alternating splitting iteration, and we study the convergence of the splitting iteration in Section 4. In Section 5, we consider the use of the splitting as a preconditioner for Krylov subspace methods. Finally, Section 6 is devoted to numerical experiments and we conclude the paper in Section 7.

2. The discrete problem

In this section we consider the discretization of the bidomain equations (1.1)-(1.2), which are solved by applying the Strang operator splitting method, where the system is split into a nonlinear system of ODEs and a linear system of PDEs, cf. [34].
We denote by ∆t the time stepsize, and the approximations to v, u_e, w at time t_n are denoted by v^n, u_e^n, w^n, respectively. The Strang splitting algorithm for the bidomain system (1.1)-(1.2) reads as follows.

Step 1. Solve the ODE system
\[
\frac{\partial v}{\partial t} = f(v,w), \qquad \frac{\partial w}{\partial t} = g(v,w),
\]
for t_n < t ≤ t_n + ∆t/2 with initial conditions v(t_n) = v^n and w(t_n) = w^n, to obtain v^{n+1/2} and w^{n+1/2}.

Step 2. Solve the coupled PDE system
\[
\begin{aligned}
\frac{\partial v}{\partial t} &= \nabla\cdot(M_i\nabla v) + \nabla\cdot(M_i\nabla u_e), \qquad\qquad (2.1)\\
0 &= \nabla\cdot(M_i\nabla v) + \nabla\cdot\big((M_i+M_e)\nabla u_e\big),
\end{aligned}
\]
for t_n < t ≤ t_n + ∆t, with v(t_n) = v^{n+1/2}, u_e(t_n) = u_e^n and the homogeneous Neumann boundary conditions (1.2). We obtain u_e^{n+1} and the new temporary value v̂^{n+1}.

Step 3. Compute v^{n+1} and w^{n+1} by solving the system
\[
\frac{\partial v}{\partial t} = f(v,w), \qquad \frac{\partial w}{\partial t} = g(v,w),
\]
for t_n + ∆t/2 < t ≤ t_n + ∆t with initial conditions v̂^{n+1} and w^{n+1/2}.

The ODE systems are solved by the explicit Euler scheme, and the PDE system is solved by the implicit Euler method. The time discretization of the PDE system in Step 2 can be written as
\[
\begin{aligned}
\frac{v^{n+1} - v^n}{\Delta t} &= \nabla\cdot(M_i\nabla v^{n+1}) + \nabla\cdot(M_i\nabla u_e^{n+1}), \qquad\qquad (2.2)\\
0 &= \nabla\cdot(M_i\nabla v^{n+1}) + \nabla\cdot\big((M_i+M_e)\nabla u_e^{n+1}\big),
\end{aligned}
\]
with homogeneous Neumann boundary conditions. By applying an appropriate spatial discretization of (2.2), we get the discrete system
\[
\begin{aligned}
C\,\frac{v^{n+1} - v^n}{\Delta t} &= -\bar A_i\,(v^{n+1} + u_e^{n+1}), \qquad\qquad (2.3)\\
\bar A_i\, v^{n+1} + (\bar A_i + \bar A_e)\,u_e^{n+1} &= 0,
\end{aligned}
\]
where the variables v^{n+1} and u_e^{n+1} are now vector (grid) functions approximating v and u_e, C is the mass matrix, and Ā_i and Ā_e are the stiffness matrices in R^{m×m} associated with −∇·(M_i∇·) and −∇·(M_e∇·), respectively. Throughout this paper we assume that the conductivity tensors M_i and M_e are symmetric and positive definite. Therefore, by centered difference or standard finite element arguments, we can assert that the matrices Ā_r, r = i, e, are symmetric and positive semidefinite with Ā_r e = 0, where e ∈ R^m is the vector of all ones.
Multiplying the equations in (2.3) by ∆t, we can rewrite (2.3) in the following matrix form:
\[
A x = b \quad\text{with}\quad A = \begin{bmatrix} C + A_i & A_i \\ A_i & A_i + A_e \end{bmatrix}, \qquad (2.4)
\]
where x = [v^{n+1}; u_e^{n+1}], b = [C v^n; 0], A_i = ∆t Ā_i and A_e = ∆t Ā_e. It is easily verified that A is positive semidefinite (cf., e.g., [25]). Moreover, for e the vector of all ones, A[0; e] = 0; in other words, ker(A) = span{[0; e]}. As in the continuous model, v^n is uniquely determined, while u_e^n is determined only up to an additive time-dependent constant. Hence, at each time step we have to solve a large linear system Ax = b of size 2m × 2m, which is very ill-conditioned and increases considerably the computational cost of the simulations. In this context, preconditioning is therefore mandatory. The aim of this paper is to study efficient preconditioners for the block two-by-two linear system (2.4).
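For concreteness, the following MATLAB sketch (not part of the original paper) assembles a toy instance of the block two-by-two matrix in (2.4): a uniform n × n grid on the unit square with a five-point finite difference stencil, homogeneous Neumann boundary conditions, scalar conductivities in place of the tensors M_i and M_e, and mass matrix C = I. All names and parameter values below are illustrative assumptions.

```matlab
% Toy assembly of the block two-by-two matrix A in (2.4).
% Assumptions: unit square, uniform n-by-n grid, homogeneous Neumann
% boundary conditions, scalar conductivities sigma_i, sigma_e, and C = I.
n = 32;  h = 1/(n-1);  dt = 1/16;
sigma_i = 1.0;  sigma_e = 1.5;                 % illustrative values

% 1D Neumann (zero-flux) second-difference matrix; L1*ones(n,1) = 0.
e  = ones(n,1);
L1 = spdiags([-e 2*e -e], -1:1, n, n);
L1(1,1) = 1;  L1(n,n) = 1;

% 2D stencil via Kronecker products (this is h^2 times the negative Laplacian).
L2 = kron(speye(n), L1) + kron(L1, speye(n));

Abar_i = sigma_i * L2 / h^2;                   % stiffness matrix for -div(M_i grad .)
Abar_e = sigma_e * L2 / h^2;                   % stiffness matrix for -div(M_e grad .)
Ai = dt * Abar_i;   Ae = dt * Abar_e;
C  = speye(n^2);                               % mass matrix (identity here)

A = [C + Ai, Ai; Ai, Ai + Ae];                 % the matrix of (2.4), size 2m-by-2m
% Its kernel is spanned by [0; ones(m,1)]: norm(A*[zeros(n^2,1); ones(n^2,1)])
% vanishes up to rounding, matching ker(A) = span{[0; e]}.
```

The anisotropic tensors used in Section 6 would replace the scalar stencil above, but the block structure of A is unchanged.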
3. The alternating splitting iteration

Consider the linear system Ax = b. For the sake of convenience, we denote x = [v^T, u^T]^T and b = [f^T, g^T]^T. Observe that A = A_1 + A_2, where
\[
A_1 = \begin{bmatrix} C & 0 \\ 0 & A_e \end{bmatrix}, \qquad A_2 = \begin{bmatrix} A_i & A_i \\ A_i & A_i \end{bmatrix}, \qquad (3.1)
\]
and A_1 and A_2 are both symmetric and positive semidefinite. Let now α > 0 be a parameter. Similar in spirit to the classical alternating direction implicit method [35], consider the following two splittings of A:
\[
A = (A_1 + \alpha I) - (\alpha I - A_2) \quad\text{and}\quad A = (A_2 + \alpha I) - (\alpha I - A_1),
\]
where I denotes the 2m-by-2m identity matrix. Note that
\[
A_1 + \alpha I = \begin{bmatrix} C + \alpha I & 0 \\ 0 & A_e + \alpha I \end{bmatrix} \quad\text{and}\quad A_2 + \alpha I = \begin{bmatrix} A_i + \alpha I & A_i \\ A_i & A_i + \alpha I \end{bmatrix}
\]
are both nonsingular for all α > 0. The algorithm is obtained by alternating between these two splittings. Given an initial guess x^0 = (v^0, u^0), the alternating splitting iteration computes a sequence {x^k} as follows:
\[
\begin{cases}
(A_1 + \alpha I)\,x^{k+1/2} = (\alpha I - A_2)\,x^k + b, \\
(A_2 + \alpha I)\,x^{k+1} = (\alpha I - A_1)\,x^{k+1/2} + b.
\end{cases} \qquad (3.2)
\]
The first half-step of algorithm (3.2) requires the solution of
\[
\begin{cases}
(C + \alpha I)\,v^{k+1/2} = \alpha v^k - A_i v^k - A_i u^k + f, \\
(A_e + \alpha I)\,u^{k+1/2} = \alpha u^k - A_i u^k - A_i v^k + g.
\end{cases} \qquad (3.3)
\]
It is easy to show that the matrix A_2 + αI admits the block factorization
\[
A_2 + \alpha I = \begin{bmatrix} I & 0 \\ -I & I \end{bmatrix} \begin{bmatrix} \alpha I & A_i \\ 0 & 2A_i + \alpha I \end{bmatrix} \begin{bmatrix} I & 0 \\ I & I \end{bmatrix}. \qquad (3.4)
\]
Then the second half-step of algorithm (3.2) requires the solution of
\[
\begin{bmatrix} I & 0 \\ -I & I \end{bmatrix} \begin{bmatrix} \alpha I & A_i \\ 0 & 2A_i + \alpha I \end{bmatrix} \begin{bmatrix} I & 0 \\ I & I \end{bmatrix} \begin{bmatrix} v^{k+1} \\ u^{k+1} \end{bmatrix} = \begin{bmatrix} (\alpha I - C)\,v^{k+1/2} + f \\ (\alpha I - A_e)\,u^{k+1/2} + g \end{bmatrix}.
\]
Denoting p^{k+1} = v^{k+1} + u^{k+1}, we obtain
\[
\begin{bmatrix} \alpha I & A_i \\ 0 & 2A_i + \alpha I \end{bmatrix} \begin{bmatrix} v^{k+1} \\ p^{k+1} \end{bmatrix} = \begin{bmatrix} (\alpha I - C)\,v^{k+1/2} + f \\ (\alpha I - C)\,v^{k+1/2} + (\alpha I - A_e)\,u^{k+1/2} + f + g \end{bmatrix}.
\]
This leads to a smaller (order m) linear system of the form
\[
(2A_i + \alpha I)\,p^{k+1} = (\alpha I - C)\,v^{k+1/2} + (\alpha I - A_e)\,u^{k+1/2} + f + g. \qquad (3.5)
\]
Once the solution p^{k+1} of (3.5) has been computed, the vector v^{k+1} is given by v^{k+1} = (I − α^{-1}C)v^{k+1/2} + α^{-1}(f − A_i p^{k+1}), and the vector u^{k+1} is given by u^{k+1} = p^{k+1} − v^{k+1}. Note that all the systems in (3.3) and (3.5) are symmetric positive definite, and they can be solved by any method for symmetric positive definite systems, such as a sparse Cholesky factorization, a preconditioned conjugate gradient algorithm, or some specialized solver. Note also that the addition of a positive shift α to the main diagonal of A_e (and of C and 2A_i) improves the condition number.
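The following MATLAB sketch carries out one sweep of (3.2) through the two half-steps (3.3) and (3.5). It is a minimal illustration: the direct backslash solves stand in for the Cholesky, CG, or AMG inner solvers one would use in practice, and the function name is our own.

```matlab
% One sweep of the alternating splitting iteration (3.2), written out
% via (3.3) and (3.5).  Inputs: blocks C, Ai, Ae, parameter alpha,
% right-hand sides f, g, and the current iterate (v, u).
function [v_new, u_new] = splitting_sweep(C, Ai, Ae, alpha, f, g, v, u)
    m = size(C, 1);
    I = speye(m);

    % First half-step (3.3): two decoupled SPD solves of order m.
    v_half = (C  + alpha*I) \ (alpha*v - Ai*v - Ai*u + f);
    u_half = (Ae + alpha*I) \ (alpha*u - Ai*u - Ai*v + g);

    % Second half-step via the factorization (3.4): one SPD solve (3.5)
    % for p = v_new + u_new, then a cheap back substitution.
    r1 = (alpha*I - C )*v_half + f;
    r2 = (alpha*I - Ae)*u_half + g;
    p  = (2*Ai + alpha*I) \ (r1 + r2);
    v_new = (r1 - Ai*p) / alpha;   % equals (I - C/alpha)*v_half + (f - Ai*p)/alpha
    u_new = p - v_new;
end
```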
4. Convergence analysis

To analyze the convergence of (3.2), we eliminate the intermediate vector x^{k+1/2} and write the iteration in fixed point form as
\[
x^{k+1} = T_\alpha x^k + c, \qquad (4.1)
\]
where
\[
T_\alpha = (\alpha I + A_2)^{-1}(\alpha I - A_1)(\alpha I + A_1)^{-1}(\alpha I - A_2) \qquad (4.2)
\]
is the iteration matrix of the method, and
\[
\begin{aligned}
c &= (\alpha I + A_2)^{-1}\big[(\alpha I - A_1)(\alpha I + A_1)^{-1} b + b\big] \\
  &= (\alpha I + A_2)^{-1}\big[(\alpha I - A_1) + (\alpha I + A_1)\big](\alpha I + A_1)^{-1} b \\
  &= 2\alpha\,(\alpha I + A_2)^{-1}(\alpha I + A_1)^{-1} b.
\end{aligned}
\]
One can show that there is a unique splitting A = P_α − Q_α with P_α nonsingular such that the iteration matrix T_α is the matrix induced by that splitting, i.e.,
\[
T_\alpha = P_\alpha^{-1} Q_\alpha = I - P_\alpha^{-1} A. \qquad (4.3)
\]
Here, the matrices P_α and Q_α are given by
\[
P_\alpha = \frac{1}{2\alpha}(\alpha I + A_1)(\alpha I + A_2), \qquad
Q_\alpha = \frac{1}{2\alpha}(\alpha I - A_1)(\alpha I - A_2). \qquad (4.4)
\]
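For completeness (this short check is not spelled out in the text), a direct expansion confirms that (4.4) indeed defines a splitting of A:
\[
P_\alpha - Q_\alpha = \frac{1}{2\alpha}\Big[(\alpha I + A_1)(\alpha I + A_2) - (\alpha I - A_1)(\alpha I - A_2)\Big]
= \frac{1}{2\alpha}\big(2\alpha A_1 + 2\alpha A_2\big) = A_1 + A_2 = A,
\]
since the α²I and A_1 A_2 terms cancel.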
The rate of convergence of the splitting iteration scheme depends on the spectral properties of T_α. The following theorem shows the distribution of the eigenvalues of T_α.

Theorem 4.1. Let λ be an eigenvalue of the iteration matrix T_α. Then either λ = 1 with multiplicity one or |λ| < 1, for all α > 0.

Proof. Note that the matrix A is singular: for e the vector of all ones, A[0; e] = 0. Then by (4.3), the matrix T_α has one unit eigenvalue corresponding to the eigenvector [0; e]. Now, let λ ∈ C be an eigenvalue of T_α and let x ∈ C^{2m} be the corresponding eigenvector with ||x||_2 = 1 and x ∉ E := span{[0; e]}. Denote R := (αI − A_1)(αI + A_1)^{-1}, S := (αI − A_2)(αI + A_2)^{-1} and y = (αI + A_2)x. Note that both R and S are symmetric. Clearly, T_α x = λx implies RSy = λy and, taking norms, ||RSy||_2 = |λ| ||y||_2. Therefore
\[
|\lambda|^2 = \frac{\|RSy\|_2^2}{\|y\|_2^2}
= \frac{(Sy)^T R^2 (Sy)}{(Sy)^T (Sy)} \cdot \frac{(Sy)^T (Sy)}{y^T y}
= \frac{z^T R^2 z}{z^T z} \cdot \frac{y^T S^2 y}{y^T y},
\]
where z = Sy = (αI − A_2)x. Since A_s[0; e] = 0, s = 1, 2, it is easily verified that w ∉ E for w = y, z, provided x ∉ E. It follows that
\[
|\lambda|^2 \leq \max_{z \notin E} \frac{z^T R^2 z}{z^T z} \cdot \max_{y \notin E} \frac{y^T S^2 y}{y^T y}.
\]
It follows from the definition of A_2 that
\[
A_2 = Q \begin{bmatrix} 0 & 0 \\ 0 & 2A_i \end{bmatrix} Q^T, \qquad
Q = \frac{1}{\sqrt{2}} \begin{bmatrix} I & I \\ -I & I \end{bmatrix}.
\]
Here Q is orthogonal. It then follows that
\[
S = Q \begin{bmatrix} I & 0 \\ 0 & (\alpha I - 2A_i)(\alpha I + 2A_i)^{-1} \end{bmatrix} Q^T.
\]
Since S is symmetric, S has unit eigenvalues with multiplicity m and the remaining m eigenvalues are of the form (α − 2ν)/(α + 2ν), ν ∈ σ(A_i), where σ(A_i) denotes the spectrum of A_i. Consider Sw = ((α − 2ν)/(α + 2ν))w. By A_i e = 0, one can easily show that (α − 2ν)/(α + 2ν) = 1 (or ν = 0) if w ∈ E. It then follows that
\[
\max_{y \notin E} \frac{y^T S^2 y}{y^T y}
= \max\Big\{1,\ \max_{\nu \in \sigma(A_i)\setminus 0} \Big(\frac{\alpha - 2\nu}{\alpha + 2\nu}\Big)^2\Big\} = 1.
\]
It follows from the definition of R that
\[
R = \begin{bmatrix} (\alpha I - C)(\alpha I + C)^{-1} & 0 \\ 0 & (\alpha I - A_e)(\alpha I + A_e)^{-1} \end{bmatrix}.
\]
Note that R is symmetric, and one can show that the eigenvalues of R are of the form (α − µ)/(α + µ), µ ∈ σ(C) ∪ σ(A_e). Consider Rw = ((α − µ)/(α + µ))w. Analogously, by A_e e = 0, we can show that (α − µ)/(α + µ) = 1 (or µ = 0) if w ∈ E. Therefore
\[
|\lambda|^2 \leq \max_{z \notin E} \frac{z^T R^2 z}{z^T z}
= \max_{\mu \in \sigma(C)\cup\sigma(A_e)\setminus 0} \Big(\frac{\alpha - \mu}{\alpha + \mu}\Big)^2 < 1.
\]
The last inequality above is due to the fact that all the eigenvalues of C are positive and all the nonzero eigenvalues of A_e are positive. This completes the proof.

For the solution of singular systems with iterative methods, the following property holds [6, 14]. We recall that the index of an eigenvalue is the dimension of its largest Jordan block [6].

Lemma 4.2. Let A = M − N be a splitting of the singular matrix A with M nonsingular, and let λ be an eigenvalue of M^{-1}N. Then the iterative method with iteration matrix M^{-1}N converges if and only if either |λ| < 1, or |λ| = 1 with index 1.

Combining Theorem 4.1 and Lemma 4.2 yields the convergence of the alternating iteration.

Theorem 4.3. The iteration (3.2) for Ax = b is convergent for all α > 0.

We remark that the rate of convergence depends on the nonunit eigenvalue of T_α closest to one in modulus [6]. So it makes sense to try to choose α so as to make the second largest eigenvalue in modulus as small as possible. In general, finding such a value is a difficult problem.
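For small test matrices, however, the second largest eigenvalue in modulus can simply be computed and scanned over a range of α values, which is how the curves in Figure 6.1 below are obtained. A MATLAB sketch of such a check (dense eigendecomposition, feasible only for coarse meshes; the helper name is our own) is:

```matlab
% Largest nonunit eigenvalue (in modulus) of T_alpha for a small problem.
% C, Ai, Ae are the blocks of Section 2; A1, A2 are formed as in (3.1).
function rho1 = largest_nonunit_eig(C, Ai, Ae, alpha)
    m  = size(C, 1);
    Z  = sparse(m, m);
    A1 = [C, Z; Z, Ae];
    A2 = [Ai, Ai; Ai, Ai];
    I2 = speye(2*m);
    % T_alpha = (alpha*I + A2)^{-1}(alpha*I - A1)(alpha*I + A1)^{-1}(alpha*I - A2)
    T  = (alpha*I2 + A2) \ ((alpha*I2 - A1) * ((alpha*I2 + A1) \ (alpha*I2 - A2)));
    lam  = sort(abs(eig(full(T))), 'descend');
    rho1 = lam(2);                 % lam(1) is the single unit eigenvalue
end
```

Plotting rho1 against a grid of α values gives a practical way to tune α for a given mesh and time step.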
5. Krylov subspace acceleration

Besides its use as a solver, the alternating splitting iteration can also be used as a preconditioner to accelerate Krylov subspace methods such as the generalized minimal residual method (GMRES). It follows from (4.1) and (4.3) that the linear system Ax = b is equivalent to the linear system
\[
(I - T_\alpha)\,x = P_\alpha^{-1} A\,x = c,
\]
where c = P_α^{-1} b. This equivalent system can be solved with GMRES, and the matrix P_α can be seen as a preconditioner for GMRES. Equivalently, we can say that GMRES is employed to accelerate the convergence of the alternating splitting iteration applied to Ax = b. Application of the alternating preconditioner within GMRES requires solving a linear system of the form P_α z = r at each iteration. This is done by first solving (αI + A_1)y = 2αr for y, followed by (αI + A_2)z = y. Note that when the alternating scheme is used as a preconditioner for a Krylov subspace method, there is no need to solve the systems P_α z = r exactly; in practical implementations, inexact solves usually result in a much more competitive algorithm.
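As an illustration, the preconditioner solve P_α z = r described above can be passed to Matlab's gmres as a function handle. The exact backslash solves below are placeholders for the inexact inner solvers (AMG, CG, ...) one would use in practice, and the variable names are our own.

```matlab
% A and b as in (2.4); C, Ai, Ae as in Section 2; alpha is the splitting parameter.
alpha = 1;
m     = size(C, 1);
Z     = sparse(m, m);
I2    = speye(2*m);
A1    = [C, Z; Z, Ae];                       % blocks of the splitting (3.1)
A2    = [Ai, Ai; Ai, Ai];

% Action of P_alpha^{-1}: solve (alpha*I + A1) y = 2*alpha*r, then (alpha*I + A2) z = y.
apply_Pinv = @(r) (alpha*I2 + A2) \ ((alpha*I2 + A1) \ (2*alpha*r));

tol = 1e-6;  maxit = 200;
[x, flag, relres, iter] = gmres(A, b, [], tol, maxit, apply_Pinv);
```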
By Theorem 4.1 and P_α^{-1}A = I − T_α it is readily seen that for all α > 0 the eigenvalues of the preconditioned matrix P_α^{-1}A are entirely contained in a circle with center (1, 0) and radius one. Therefore, when the splitting is used as a preconditioner, it can improve the convergence speed of GMRES considerably. Generally speaking, choosing α so as to minimize the spectral radius of the iteration matrix is not necessarily the best choice when the algorithm is used as a preconditioner for a Krylov subspace method. In the following, a good choice for α is derived under certain conditions.

In the rest of this section we assume that C is the identity matrix. If not, we could apply the alternating iteration to the symmetrically preconditioned system
\[
\hat A \hat x = \hat b, \qquad \hat A := \mathcal{C}^{-1/2} A\, \mathcal{C}^{-1/2}, \quad \hat x = \mathcal{C}^{1/2} x, \quad \hat b = \mathcal{C}^{-1/2} b,
\qquad\text{where}\quad \mathcal{C} = \begin{bmatrix} C & 0 \\ 0 & C \end{bmatrix}.
\]
We can show that
\[
\hat A = \begin{bmatrix} I + \hat A_i & \hat A_i \\ \hat A_i & \hat A_i + \hat A_e \end{bmatrix},
\]
where Â_s = C^{-1/2} A_s C^{-1/2}, s = i, e. The new coefficient matrix Â has the same structure as A, with C replaced by the identity matrix. It is easy to show that the iteration matrix T_α is similar to
\[
\hat T_\alpha := (\alpha I - A_1)(\alpha I + A_1)^{-1}(\alpha I - A_2)(\alpha I + A_2)^{-1}.
\]
Now, if we set α = 1, we obtain
\[
\hat T_\alpha = \begin{bmatrix} 0 & 0 \\ -2(I - A_e)(I + A_e)^{-1} A_i (I + 2A_i)^{-1} & (I - A_e)(I + A_e)^{-1}(I + 2A_i)^{-1} \end{bmatrix}.
\]
Let (λ, u) be an eigenpair of the (2,2) block of T̂_α. Then
\[
(I - A_e)(I + A_e)^{-1}(I + 2A_i)^{-1} u = \lambda u,
\quad\text{or}\quad
(I - A_e)(I + A_e)^{-1} v = \lambda (I + 2A_i) v, \qquad v = (I + 2A_i)^{-1} u.
\]
Since I + 2A_i is symmetric positive definite and (I − A_e)(I + A_e)^{-1} is symmetric, it follows from Sylvester's law of inertia that the eigenvalues of (I − A_e)(I + A_e)^{-1}(I + 2A_i)^{-1} are all real. It follows that the eigenvalues of T̂_α are all real and at least m of them are zero. We conclude that, for α = 1, the preconditioned matrix P_α^{-1}A has unit eigenvalues with multiplicity m and the remaining eigenvalues are all real and lie in (0, 2), except for the single zero eigenvalue.

6. Numerical experiments

In this section some numerical experiments are discussed, with the goal of examining the effectiveness of the splitting preconditioner. All experiments were performed in Matlab. We consider the 2D bidomain equations (1.1) with homogeneous Neumann boundary conditions, and the conductivity tensors are taken as
\[
M_i = \begin{bmatrix} 0.3 + 2.7\sin^2(\varphi(x,y)) & 0 \\ 0 & 0.3 + 2.7\cos^2(\varphi(x,y)) \end{bmatrix}
\]
and
\[
M_e = \begin{bmatrix} 1.3 + 0.7\sin^2(\varphi(x,y)) & 0 \\ 0 & 1.3 + 0.7\cos^2(\varphi(x,y)) \end{bmatrix},
\]
where φ(x, y) = (π/2)(1 − (1/5)√(x² + y²)). For the nonlinear reaction term and the ODE system in (1.1), we consider the FitzHugh-Nagumo model [15]. The functions f and g are given by
\[
f(v,w) = 10\Big(v - \frac{v^3}{3} - w\Big)
\qquad\text{and}\qquad
g(v,w) = \frac{1}{10}\Big(v + 1 - \frac{1}{2}w\Big).
\]
Note that all of the parameter values and space and time measures we use are nondimensional. We focus mainly on the 2D bidomain problem discretized by the centered difference scheme on uniform grids with meshsize h in space and by the Strang splitting scheme with constant stepsize ∆t in time. Note that the mass matrix C is the identity matrix throughout this section.
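For reference, steps 1 and 3 of the Strang splitting only involve this local reaction system; a pointwise explicit Euler update for the FitzHugh-Nagumo functions above might look as follows (a sketch only; the substep count nsub and the function name are our own choices, not taken from the paper):

```matlab
% Explicit Euler integration of the FitzHugh-Nagumo reaction system over a
% half step of length dt/2, applied componentwise to the grid vectors v and w.
function [v, w] = reaction_half_step(v, w, dt, nsub)
    tau = (dt/2) / nsub;             % nsub explicit Euler substeps
    for k = 1:nsub
        fv = 10 * (v - v.^3/3 - w);  % f(v, w)
        gw = (v + 1 - w/2) / 10;     % g(v, w)
        v  = v + tau * fv;
        w  = w + tau * gw;
    end
end
```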
Figure 6.1: The largest nonunit eigenvalue ρ_1(T_α) (in modulus) of the iteration matrices T_α for different α, with h = 1/8, 1/16, 1/32; panels (a)-(d) correspond to ∆t = 1/16, 1/32, 1/64, 1/128.
Figure 6.2: Condition numbers κ_α versus α for the preconditioned systems, with h = 1/8, 1/16, 1/32; panels (a)-(d) correspond to ∆t = 1/16, 1/32, 1/64, 1/128.
6.1. Spectral radius and choice of the parameter α

The aim of the tests in this subsection is to verify numerically the spectral analysis presented in the last two sections. Here the coefficient matrices A are derived from the 2D bidomain model on the unit square. We set the spatial meshsize h = 1/8, 1/16, 1/32 and the time stepsize ∆t = 1/16, 1/32, 1/64, 1/128. In Figure 6.1 we display the largest nonunit eigenvalue ρ_1(T_α) (in modulus) of the iteration matrix T_α for different α. We see that the largest nonunit eigenvalue (in modulus) of the iteration matrices T_α is always less than one for α > 0, showing that the splitting iteration always converges, at least for the proposed setting. We also found that changes in the value of the spatial meshsize h have little effect on the spectral radius of the iteration matrix. However, as the time stepsize ∆t gets smaller, the spectral radius of the iteration matrix gets larger (but remains less than one). Furthermore, in order to make ρ_1(T_α) as small as possible, α should be taken close to one. The best value of α gets smaller as the time mesh is refined.

In Figure 6.2 we show the condition numbers κ_α versus α for the preconditioned matrix P_α^{-1}A, where κ_α(P_α^{-1}A) = σ_max/σ_min, in which σ_max, σ_min are the largest and smallest nonzero singular values of P_α^{-1}A.
Figure 6.3: The number of preconditioned GMRES iterations for different α, with h = 1/8, 1/16, 1/32; panels (a)-(d) correspond to ∆t = 1/16, 1/32, 1/64, 1/128.
Different regimes with respect to α and the meshsize can be clearly observed. We see that the condition number of the preconditioned matrix is essentially independent of the spatial meshsize, but as ∆t gets smaller, the condition number gets larger. On the other hand, in order to make the condition number as small as possible, α should be taken close to one.

In Figure 6.3 we show the number of preconditioned GMRES iterations for different α. We use the Matlab function gmres as the GMRES solver throughout this section. Here the coefficient matrices of the linear systems Ax = b are the same as before. The right-hand side b is of the form [v; 0] (see Section 2), where v is a random vector generated by the Matlab function randn. A zero initial guess is always used, and we stop the iteration when the relative residual has been reduced by at least six orders of magnitude (i.e., when ||b − Ax^k||_2 ≤ 10^{-6} ||b − Ax^0||_2). We see that the iteration count is independent of the spatial meshsize. The iteration count gets slightly larger when the time stepsize gets smaller. Moreover, the numerical results show that the number of preconditioned GMRES iterations is nearly minimized when α = 1. We also observe that small changes in the value of α do not have a dramatic effect on the number of iterations.
Figure 6.4: Evolution of the solution components v, u_e, w of the 2D bidomain equations at the points (a) (x, y) = (0.5, 1) and (b) (x, y) = (4.5, 4.5).
6.2. Comparison with other preconditioners

In this subsection we compare the performance of the splitting preconditioner with other preconditioners, from the point of view of both the number of GMRES iterations and the elapsed CPU time. Our tests have been done over the domain [0, 5] × [0, 5]. For the initial condition, we take v and w constant at the equilibrium values of the system f(v, w) = 0 and g(v, w) = 0 and u_e constant at 0, except on [0, 0.25] × [0, 5], where we fix v = 2, and on [2.2, 2.8] × [0, 2.5], where we fix w = 2. In the actual computations, all runs are started from an initial vector chosen as the numerical solution of the model at the previous time step and are terminated when the relative residual has been reduced by at least six orders of magnitude. The subsystems in (3.3) and (3.5) with coefficient matrices A_e + αI and 2A_i + αI are solved by a V-cycle algebraic multigrid (AMG) algorithm [7, 28]. The smoother in all the tests is one step of Gauss-Seidel iteration for both presmoothing and postsmoothing. We choose α = 1 throughout this subsection.

For numerical comparison, we also present results obtained by applying the block Jacobi, block Gauss-Seidel and block factorized preconditioners [25, 26]. To be specific, the block Jacobi preconditioner has the form
\[
P_J = \begin{bmatrix} C + A_i & 0 \\ 0 & A_i + A_e \end{bmatrix},
\]
and the block Gauss-Seidel preconditioner is defined as
\[
P_{GS} = \begin{bmatrix} C + A_i & 0 \\ A_i & A_i + A_e \end{bmatrix}.
\]
Figure 6.5: Solution for v of the 2D bidomain equations at different time instants: (a) t = 4, (b) t = 6, (c) t = 8, (d) t = 10.
The block factorized preconditioner is defined as
\[
P_F = \begin{bmatrix} C + A_i & 0 \\ A_i & A_i + A_e \end{bmatrix}
\begin{bmatrix} I & (C + A_i)^{-1} A_i \\ 0 & I \end{bmatrix}.
\]
In the actual computations, the subsystems in the diagonal blocks with coefficient matrices C + A_i and A_i + A_e are solved by a V-cycle AMG algorithm.

Figure 6.4 shows the evolution of the solution of the bidomain equations at different coordinates. Figure 6.5 displays the spatial distributions of the transmembrane potential v at four different time instants. The numerical results in Figure 6.4 and Figure 6.5 show that the solution of the bidomain model is characterized by steep gradients in space and time, which is relevant in the electrophysiology of the heart.

Table 1 reports the average GMRES iteration count and the average CPU time per time step.
We can see from the results that the rate of convergence for the splitting preconditioned GMRES is essentially independent of h, while showing a mild dependence on ∆t. We can also observe that the splitting preconditioner leads to better numerical results than the block Jacobi preconditioner. The splitting preconditioner also performs better than the block factorized preconditioner in terms of computational cost. Furthermore, the block Gauss-Seidel preconditioner performs better than the splitting preconditioner in terms of the number of GMRES iterations, while the splitting preconditioner leads to slightly better performance in terms of CPU time in some cases. This implies that the computational cost per iteration of the splitting preconditioned GMRES is less than that of the block Gauss-Seidel preconditioned GMRES. We think this could be due to the fact that A_i + A_e is more ill-conditioned than 2A_i + αI, since the addition of a positive shift α to the main diagonal of 2A_i improves the condition number.

Table 1: The average number of iterations (Iter) per time step and the average time measured in seconds (Cpu) per time step for solving the bidomain equations. Comparison of the splitting preconditioner P_α, the block Jacobi preconditioner P_J, the block Gauss-Seidel preconditioner P_GS and the block factorized preconditioner P_F.
 ∆t     h     | P_α: Iter  Cpu   | P_J: Iter  Cpu   | P_GS: Iter  Cpu  | P_F: Iter  Cpu
 1/8    1/8   | 11.3   0.049     | 19.1   0.091     | 9.44   0.048     | 9.39   0.066
 1/8    1/16  | 11.6   0.133     | 20.6   0.252     | 11.0   0.147     | 11.2   0.198
 1/8    1/32  | 11.8   0.547     | 19.2   0.901     | 11.3   0.617     | 11.0   0.757
 1/16   1/8   | 13.2   0.053     | 17.8   0.078     | 8.1    0.038     | 8.1    0.054
 1/16   1/16  | 13.7   0.136     | 19.2   0.219     | 9.9    0.120     | 10.1   0.161
 1/16   1/32  | 13.7   0.494     | 17.9   1.19      | 9.5    0.448     | 9.8    1.06
 1/32   1/8   | 15.7   0.063     | 15.9   0.068     | 7.1    0.033     | 7.1    0.048
 1/32   1/16  | 16.1   0.149     | 18.3   0.198     | 9.0    0.102     | 8.9    0.139
 1/32   1/32  | 16.1   0.512     | 17.0   0.669     | 8.7    0.382     | 8.7    0.492
7. Conclusions

We have introduced a splitting preconditioner for the solution of the systems arising from numerical discretizations of the bidomain model. We have demonstrated that the spectrum of the preconditioned matrix shows a tight cluster at unity, and a guideline for choosing the best splitting parameter has been described. Our numerical experiments on a 2D bidomain model suggest that the proposed preconditioning technique is very effective and efficient.
Acknowledgements

The authors would like to thank the anonymous referees for their many constructive comments and suggestions on improving the paper. This work was supported by the National Natural Science Foundation of China (No.11301575 and No.11471063), the Natural Science Foundation Project of CQ CSTC (No.cstc2014jcyjA00020 and No.cstc2015jcyjBX0083), and the Scientific and Technological Research Program of Chongqing Municipal Education Commission (Grant No. KJ1400536).

References

[1] T. M. Austin, M. L. Trew, A. J. Pullan, Solving the cardiac Bidomain equations for discontinuous conductivities, IEEE Trans. Biomed. Engrg. 53 (2006) 1265-1272.
[2] Z.Z. Bai, G.H. Golub, J.Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math. 98 (2004) 1-32.
[3] Z.Z. Bai, M. Benzi, F. Chen, Z.Q. Wang, Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems, IMA J. Numer. Anal. 33 (2013) 343-369.
[4] M. Benzi, G. H. Golub, A preconditioner for generalized saddle point problems, SIAM J. Matrix Anal. Appl. 26 (2004) 20-41.
[5] M. Benzi, M. J. Gander, G. H. Golub, Optimization of the Hermitian and skew-Hermitian splitting iteration for saddle-point problems, BIT 43 (2003) 881-900.
[6] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, 1994.
[7] W. Briggs, V. Henson, S. McCormick, A Multigrid Tutorial, SIAM, Philadelphia, 2000.
[8] H. Chen, A splitting preconditioner for the iterative solution of implicit Runge-Kutta and boundary value methods, BIT 54 (2014) 607-621.
[9] H. Chen, Generalized Kronecker product splitting iteration for the solution of implicit Runge-Kutta and boundary value methods, Numer. Linear Algebra Appl. 22 (2015) 357-370.
[10] H. Chen, Kronecker product splitting preconditioners for implicit Runge-Kutta discretizations of viscous wave equations, Appl. Math. Model. 40 (2016) 4429-4440.
[11] H. Chen, A splitting preconditioner for implicit Runge-Kutta discretizations of a partial differential-algebraic equation, Numer. Algor. 73 (2016) 1037-1054.
[12] P. Colli Franzone, L. F. Pavarino, A parallel solver for reaction-diffusion systems in computational electrocardiology, Math. Models Methods Appl. Sci. 14 (2004) 883-911.
[13] P. Colli Franzone, L. F. Pavarino, S. Scacchi, Exploring anodal and cathodal make and break cardiac excitation mechanisms in a 3D anisotropic bidomain model, Math. Biosci. 230 (2011) 96-114.
[14] A. Dax, The convergence of linear stationary iterative processes for solving singular unstructured systems of linear equations, SIAM Rev. 32 (1990) 611-635.
[15] M. Ethier, Y. Bourgault, Semi-implicit time-discretization schemes for the Bidomain model, SIAM J. Numer. Anal. 46 (2008) 2443-2468.
[16] L. Gerardo-Giorda, L. Mirabella, F. Nobile, M. Perego, A. Veneziani, A model-based block-triangular preconditioner for the Bidomain system in electrocardiology, J. Comput. Phys. 228 (2009) 3625-3639.
[17] V. E. Howle, R. C. Kirby, G. Dillon, Block preconditioners for coupled physics problems, SIAM J. Sci. Comput. 35 (2013) S368-S385.
[18] J. Keener, J. Sneyd, Mathematical Physiology, Springer-Verlag, New York, 2009.
[19] K. A. Mardal, B.F. Nielsen, X. Cai, A. Tveito, An order optimal solver for the discretized bidomain equations, Numer. Linear Algebra Appl. 14 (2007) 83-98.
[20] M. Munteanu, L. F. Pavarino, Decoupled Schwarz algorithms for implicit discretizations of nonlinear Monodomain and Bidomain systems, Math. Models Methods Appl. Sci. 19 (2009) 1065-1097.
[21] M. Munteanu, L. F. Pavarino, S. Scacchi, A scalable Newton-Krylov-Schwarz method for the Bidomain reaction-diffusion system, SIAM J. Sci. Comput. 31 (2009) 3861-3883.
[22] M. Murillo, X. C. Cai, A fully implicit parallel algorithm for simulating the non-linear electrical activity of the heart, Numer. Linear Algebra Appl. 11 (2004) 261-277.
[23] L. F. Pavarino, S. Scacchi, Multilevel additive Schwarz preconditioners for the Bidomain reaction-diffusion system, SIAM J. Sci. Comput. 31 (2008) 420-443.
[24] M. Pennacchio, V. Simoncini, Efficient algebraic solution of reaction-diffusion systems for the cardiac excitation process, J. Comput. Appl. Math. 145 (2002) 49-70.
[25] M. Pennacchio, V. Simoncini, Algebraic multigrid preconditioners for the bidomain reaction-diffusion system, Appl. Numer. Math. 59 (2009) 3033-3050.
[26] M. Pennacchio, V. Simoncini, Fast structured AMG preconditioning for the bidomain model in electrocardiology, SIAM J. Sci. Comput. 33 (2011) 721-745.
[27] G. Plank, M. Liebmann, R. Weber Dos Santos, E. J. Vigmond, G. Haase, Algebraic multigrid preconditioner for the cardiac Bidomain model, IEEE Trans. Biomed. Engrg. 54 (2007) 585-596.
[28] J. W. Ruge, K. Stüben, Algebraic multigrid, in Multigrid Methods, Frontiers Appl. Math. 3, S. F. McCormick, ed., SIAM, Philadelphia, 1987, pp. 73-130.
[29] S. Scacchi, A hybrid multilevel Schwarz method for the bidomain model, Comput. Methods Appl. Mech. Engrg. 197 (2008) 4051-4061.
[30] S. Scacchi, P. Colli Franzone, L. F. Pavarino, B. Taccardi, Computing cardiac recovery maps from electrograms and monophasic action potentials under heterogeneous and ischemic conditions, Math. Models Methods Appl. Sci. 20 (2010) 1089-1127.
[31] V. Simoncini, M. Benzi, Spectral properties of the Hermitian and skew-Hermitian splitting preconditioner for saddle point problems, SIAM J. Matrix Anal. Appl. 26 (2004) 377-389.
[32] K. Skouibine, W. Krassowska, Increasing the computational efficiency of a bidomain model of defibrillation using a time-dependent activating function, Ann. Biomed. Engrg. 28 (2000) 772-780.
[33] J. Sundnes, G. T. Lines, K. A. Mardal, A. Tveito, Multigrid block preconditioning for a coupled system of partial differential equations modeling the electrical activity in the heart, Comput. Methods Biomech. Biomed. Engrg. 5 (2002) 397-409.
[34] J. Sundnes, G. T. Lines, A. Tveito, An operator splitting method for solving the Bidomain equations coupled to a volume conductor model for the torso, Math. Biosci. 194 (2005) 233-248.
[35] R. S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, 1962.
[36] E. J. Vigmond, F. Aguel, N. A. Trayanova, Computational techniques for solving the bidomain equations in three dimensions, IEEE Trans. Biomed. Engrg. 49 (2002) 1260-1269.
[37] E. J. Vigmond, R. Weber dos Santos, A. J. Prassl, M. Deo, G. Plank, Solvers for the cardiac bidomain equations, Progr. Biophys. Molecular Biol. 96 (2008) 3-18.
[38] R. Weber dos Santos, G. Plank, S. Bauer, E. J. Vigmond, Parallel multigrid preconditioner for the cardiac bidomain model, IEEE Trans. Biomed. Engrg. 51 (2004) 1960-1968.