Preconditioned parallel multisplitting USAOR method for H-matrices linear systems

Applied Mathematics and Computation 275 (2016) 156–164

Guangbin Wang a,b,∗, Deyu Sun b

a Department of Mathematics, Qingdao Agricultural University, Qingdao 266109, China
b Department of Mathematics, Qingdao University of Science and Technology, Qingdao 266061, China

Abstract

In this paper, the preconditioned multisplitting USAOR method is established for solving systems of linear equations. Convergence and comparison results are given when the coefficient matrices of the linear systems are H-matrices. For M-matrices, the preconditioned method is shown to converge faster than the multisplitting USAOR method. Finally, a numerical example is given to illustrate the efficiency of the method.

Keywords: Preconditioned; Multisplitting method; Convergence; USAOR; H-matrix

© 2015 Elsevier Inc. All rights reserved.

1. Introduction

Sometimes one has to solve a nonsingular linear system

    Ax = b,    (1)

where A = (a_{ij}) ∈ R^{n×n} is nonsingular and b is an n-dimensional vector. The basic iterative method for solving (1) is

    Mx^(k+1) = Nx^(k) + b,  k = 0, 1, ...,    (2)

where A = M − N and M is nonsingular, so (2) can be written as

    x^(k+1) = Tx^(k) + c,  k = 0, 1, ...,    (3)

where T = M^{-1}N and c = M^{-1}b. O'Leary and White [12] presented the matrix multisplitting method in 1985 for solving large sparse linear systems in parallel on multiprocessor systems, and it was further studied by many authors; see [1,2,6–8,11,14,16]. Neumann and Plemmons [11] developed more refined convergence results for one of the cases considered in [12]. Elsner [7] established comparison theorems for the asymptotic convergence rate of this case. Frommer and Mayer [8] discussed two different variants of relaxed multisplitting methods. White [14] studied the convergence properties of these matrix multisplitting methods for symmetric positive definite matrices. Bai [1] studied the convergence domain of the matrix multisplitting relaxation methods. Cao and Liu [6] studied the convergence of two variants of multisplitting relaxation methods with different weighting schemes. Zhang et al. [16] presented a parallel multisplitting method for H-matrices. A very good comprehensive survey of parallel matrix multisplitting methods is [2]. Bai et al. [3] discussed weak convergence of splitting methods for solving singular linear systems. In this paper, we study the preconditioned multisplitting USAOR method for H-matrices linear systems and analyze its convergence theoretically.
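The splitting iteration (2)–(3) can be sketched in a few lines of NumPy. This is an illustration of ours, not code from the paper; the Jacobi-style choice M = diag(A) and the test matrix are assumptions:

```python
import numpy as np

def splitting_iteration(A, b, M, tol=1e-12, max_iter=10_000):
    """Solve Ax = b by M x^(k+1) = N x^(k) + b with N = M - A, as in (2)."""
    N = M - A
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = np.linalg.solve(M, N @ x + b)
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Illustrative strictly diagonally dominant system (so the iteration converges).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = splitting_iteration(A, b, M=np.diag(np.diag(A)))  # Jacobi-type splitting
```

For convergence one needs ρ(M^{-1}N) < 1; for the Jacobi splitting of a strictly diagonally dominant matrix this always holds.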

∗ Corresponding author at: Department of Mathematics, Qingdao Agricultural University, Qingdao 266109, China. Tel.: +86 0532-88720364. E-mail address: [email protected] (G. Wang).

http://dx.doi.org/10.1016/j.amc.2015.11.068 0096-3003/© 2015 Elsevier Inc. All rights reserved.


The multisplitting method is as follows. If A is a nonsingular n × n matrix and M_k, N_k, E_k ∈ R^{n×n}, k = 1, 2, ..., K (K ∈ N) satisfy

(1) A = M_k − N_k,
(2) M_k is nonsingular,
(3) E_k is a nonnegative diagonal matrix and Σ_{k=1}^{K} E_k = I,

then (M_k, N_k, E_k) is called a multisplitting of A. The multisplitting method for solving (1) is

    x^(m+1) = Σ_{k=1}^{K} E_k M_k^{-1} N_k x^(m) + Σ_{k=1}^{K} E_k M_k^{-1} b,  m = 0, 1, ....

We denote T = Σ_{k=1}^{K} E_k M_k^{-1} N_k and G = Σ_{k=1}^{K} E_k M_k^{-1}; T is called the iteration matrix. In [16], Zhang et al. presented the local relaxed parallel multisplitting method as follows.

Algorithm 1 (Local relaxed parallel multisplitting method). Given an initial vector x^(0), for m = 0, 1, 2, ..., repeat (I) and (II) until convergence.

(I) For k = 1, 2, ..., α, solve for y_k:

    M_k y_k = N_k x^(m) + b.

(II) Compute

    x^(m+1) = Σ_{k=1}^{α} E_k y_k.
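Algorithm 1 can be sketched as follows. This is our illustration; the two triangular splittings and the weighting matrices E_k below are assumed choices, not the paper's:

```python
import numpy as np

def local_multisplitting(A, b, Ms, Es, iters=300):
    """One realisation of Algorithm 1: solve M_k y_k = N_k x^(m) + b for each k
    (these solves are independent, hence parallel), then x^(m+1) = sum_k E_k y_k."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = sum(E @ np.linalg.solve(M, (M - A) @ x + b) for M, E in zip(Ms, Es))
    return x

A = np.array([[4.0, -1.0, -1.0],
              [-1.0, 4.0, -1.0],
              [-1.0, -1.0, 4.0]])
b = np.array([2.0, 2.0, 2.0])
M1, M2 = np.tril(A), np.triu(A)      # two regular splittings of the M-matrix A
E1 = np.diag([1.0, 0.5, 0.0])        # nonnegative diagonal weights with E1 + E2 = I
E2 = np.eye(3) - E1
x = local_multisplitting(A, b, [M1, M2], [E1, E2])
```

Each inner solve depends only on the previous iterate, which is what makes the method suitable for multiprocessor systems.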

Let A = I − L_k − U_k, k = 1, 2, ..., α, where I is the identity matrix, the L_k are strictly lower triangular matrices, and the U_k are general matrices. We now consider one kind of parallel multisplitting unsymmetric accelerated over-relaxation (USAOR) method, called the local relaxed parallel multisplitting USAOR (LUSAOR) method. Algorithm 1 associated with the LUSAOR method can be written as

    x^(m+1) = H_LUSAOR x^(m) + G_LUSAOR b,  m = 0, 1, ...,    (4)

where

    H_LUSAOR = Σ_{k=1}^{α} E_k U_{ω2,γ2}(k) L_{ω1,γ1}(k),
    U_{ω2,γ2}(k) = (I − γ2 U_k)^{-1} [(1 − ω2)I + (ω2 − γ2)U_k + ω2 L_k],
    L_{ω1,γ1}(k) = (I − γ1 L_k)^{-1} [(1 − ω1)I + (ω1 − γ1)L_k + ω1 U_k],
    G_LUSAOR = Σ_{k=1}^{α} E_k (I − γ2 U_k)^{-1} [(ω1 + ω2 − ω1 ω2)I + ω2 (ω1 − γ1)L_k + ω1 (ω2 − γ2)U_k] (I − γ1 L_k)^{-1}.
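As a concrete check of (4), the following sketch (ours; the single-splitting example with α = 1 and the parameter values are assumptions) assembles H_LUSAOR and G_LUSAOR and verifies that the fixed point of x ↦ Hx + Gb solves Ax = b:

```python
import numpy as np

def lusaor_operators(Ls, Us, Es, w1, g1, w2, g2):
    """Assemble H_LUSAOR and G_LUSAOR of (4) for splittings A = I - L_k - U_k."""
    n = Ls[0].shape[0]
    I = np.eye(n)
    H = np.zeros((n, n))
    G = np.zeros((n, n))
    for L, U, E in zip(Ls, Us, Es):
        Lw = np.linalg.solve(I - g1 * L, (1 - w1) * I + (w1 - g1) * L + w1 * U)
        Uw = np.linalg.solve(I - g2 * U, (1 - w2) * I + (w2 - g2) * U + w2 * L)
        Gk = np.linalg.solve(I - g2 * U,
                             ((w1 + w2 - w1 * w2) * I + w2 * (w1 - g1) * L
                              + w1 * (w2 - g2) * U) @ np.linalg.inv(I - g1 * L))
        H += E @ (Uw @ Lw)
        G += E @ Gk
    return H, G

# One splitting (alpha = 1) of a small H-matrix with unit diagonal.
A = np.array([[1.0, -0.25, 0.0],
              [-0.25, 1.0, -0.25],
              [0.0, -0.25, 1.0]])
L1 = -np.tril(A, -1)                  # strictly lower triangular part of I - A
U1 = np.eye(3) - A - L1               # so that A = I - L1 - U1
H, G = lusaor_operators([L1], [U1], [np.eye(3)], w1=1.0, g1=0.6, w2=0.9, g2=0.6)
rho = max(abs(np.linalg.eigvals(H)))  # spectral radius of the iteration matrix
```

The USAOR step is the composition of a forward SAOR half-sweep and a backward one, which is why H is a product of the two sweep operators.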

In order to solve (1) faster, a nonsingular preconditioner P is introduced. The original system (1) can be transformed into the preconditioned form

    PAx = Pb,

and we can then define the basic iterative scheme

    x^(k+1) = M_P^{-1} N_P x^(k) + M_P^{-1} Pb,  k = 0, 1, 2, ...,

where PA = M_P − N_P is a splitting of PA and M_P is nonsingular. Suppose that A has unit diagonal elements and that A is an H-matrix. In the literature, various authors have suggested different models of the preconditioner P for linear systems (1); see [10,15]. In this paper, we consider the following preconditioner P:



    P = I + S = tridiag(−βi a_{i,i−1}, 1, −αi a_{i,i+1}),    (5)

i.e., S has superdiagonal entries (S)_{i,i+1} = −αi a_{i,i+1} (i = 1, ..., n − 1), subdiagonal entries (S)_{i,i−1} = −βi a_{i,i−1} (i = 2, ..., n), and zeros elsewhere.
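A sketch of assembling P from (5) follows; the helper function, the sample matrix, and the parameter values are our own illustrative assumptions (0-based indexing is used):

```python
import numpy as np

def build_preconditioner(A, alpha, beta):
    """P = I + S of (5): S has superdiagonal -alpha_i * a_{i,i+1} and
    subdiagonal -beta_i * a_{i,i-1}."""
    n = A.shape[0]
    P = np.eye(n)
    for i in range(n - 1):
        P[i, i + 1] = -alpha[i] * A[i, i + 1]
    for i in range(1, n):
        P[i, i - 1] = -beta[i] * A[i, i - 1]
    return P

n = 5
A = np.eye(n) - 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))  # H-matrix, unit diagonal
P = build_preconditioner(A, alpha=np.full(n, 0.1), beta=np.full(n, 1.1))
```

For an M-matrix (off-diagonal entries ≤ 0 and positive αi, βi) the entries of S are nonnegative, so P is a nonnegative matrix.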

In this paper, we will analyze the convergence of the preconditioned multisplitting USAOR method theoretically and give some comparison results of spectral radius. Finally, we provide one numerical example.


2. The preconditioned parallel multisplitting USAOR method

Let Ã = PA = D̃ − L̃_k − Ũ_k, k = 1, 2, ..., K, where

    D̃ = diag(1 − α1 a12 a21, 1 − β2 a21 a12 − α2 a23 a32, ..., 1 − βn a_{n,n−1} a_{n−1,n}),

the L̃_k (k = 1, 2, ..., K) are strictly lower triangular matrices, and the Ũ_k (k = 1, 2, ..., K) are general matrices. Then Algorithm 1 associated with the preconditioned parallel multisplitting USAOR (LPUSAOR) method can be written as

    x^(m+1) = H̃_LPUSAOR x^(m) + G̃_LPUSAOR b,  m = 0, 1, ...,

where

    H̃_LPUSAOR = Σ_{k=1}^{α} E_k Ũ_{ω2,γ2}(k) L̃_{ω1,γ1}(k),
    Ũ_{ω2,γ2}(k) = (D̃ − γ2 Ũ_k)^{-1} [(1 − ω2)D̃ + (ω2 − γ2)Ũ_k + ω2 L̃_k],
    L̃_{ω1,γ1}(k) = (D̃ − γ1 L̃_k)^{-1} [(1 − ω1)D̃ + (ω1 − γ1)L̃_k + ω1 Ũ_k],
    G̃_LPUSAOR = Σ_{k=1}^{α} E_k (D̃ − γ2 Ũ_k)^{-1} [(ω1 + ω2 − ω1 ω2)D̃ + ω2 (ω1 − γ1)L̃_k + ω1 (ω2 − γ2)Ũ_k] (D̃ − γ1 L̃_k)^{-1}.    (6)

3. Preliminaries

We need the following results. Throughout, ⟨A⟩ denotes the comparison matrix of A.

Lemma 1 ([9]). A is an H-matrix if and only if there is a positive vector r = (r1, r2, ..., rn)^T such that ⟨A⟩r > 0.

Lemma 2 ([5]). Let A = M − N be an M-splitting of A. Then ρ(M^{-1}N) < 1 if and only if A is an M-matrix.

Lemma 3 ([5]). Let A and B be two n × n matrices with 0 ≤ B ≤ A. Then ρ(B) ≤ ρ(A).

Lemma 4 ([9]). If A is an H-matrix, then |A^{-1}| ≤ ⟨A⟩^{-1}.

Lemma 5 ([13]). Let A be a Z-matrix. Then the following statements are equivalent: (1) A is an M-matrix; (2) there is a positive vector x ∈ R^n such that Ax > 0; (3) A^{-1} ≥ 0.

Lemma 6 ([11]). Suppose that A1 = M1 − N1 and A2 = M2 − N2 are weak regular splittings of the monotone matrices A1 and A2, respectively, such that M2^{-1} ≥ M1^{-1}. If there exists a positive vector x such that 0 ≤ A1 x ≤ A2 x, then for the monotonic norm associated with x,

    ‖M2^{-1} N2‖_x ≤ ‖M1^{-1} N1‖_x.

In particular, if M1^{-1} N1 has a positive Perron vector, then ρ(M2^{-1} N2) ≤ ρ(M1^{-1} N1).

Lemma 7 ([16]). Let A be a nonsingular H-matrix, let L_k (k = 1, 2, ..., α) be strictly lower triangular matrices, and define U_k (k = 1, 2, ..., α) such that A = D − L_k − U_k. Assume that

    ⟨A⟩ = |D| − |L_k| − |U_k| = |D| − |B|.

If

    0 < ω1, ω2 < 2/(1 + ρ),  0 ≤ γ1 ≤ ω1,  0 ≤ γ2 ≤ ω2,
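Lemmas 1 and 5 give a practical numerical test: form the comparison matrix and check whether r = ⟨A⟩^{-1}e is positive with ⟨A⟩r > 0. The sketch below is ours and assumes ⟨A⟩ is numerically nonsingular:

```python
import numpy as np

def comparison_matrix(A):
    """<A>: |a_ii| on the diagonal, -|a_ij| off the diagonal."""
    C = -np.abs(A)
    np.fill_diagonal(C, np.abs(np.diag(A)))
    return C

def is_h_matrix(A):
    """Lemma 1: A is an H-matrix iff <A>r > 0 for some r > 0.
    By Lemma 5 it suffices to try r = <A>^{-1} e."""
    C = comparison_matrix(A)
    try:
        r = np.linalg.solve(C, np.ones(A.shape[0]))
    except np.linalg.LinAlgError:
        return False
    return bool(np.all(r > 0) and np.all(C @ r > 0))

A = np.array([[1.0, -0.25, 0.0],
              [0.25, 1.0, 0.25],
              [0.0, -0.25, 1.0]])
```

If ⟨A⟩ is a nonsingular M-matrix, ⟨A⟩^{-1} ≥ 0 forces r > 0 and ⟨A⟩r = e > 0, so this single solve decides the question.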


4. Convergence result

In this section, we present the convergence result of the preconditioned multisplitting USAOR method.

Theorem 1. Let A be an H-matrix with unit diagonal elements. If

(1) for α1, βn,

    0 ≤ |α1| ≤ 1 / (|a12| (2‖⟨A⟩^{-1}‖_∞ − 1)),
    0 ≤ |βn| ≤ 1 / (|a_{n,n−1}| (2‖⟨A⟩^{-1}‖_∞ − 1)),

(2) for αi, βi (i = 2, 3, ..., n − 1),

    0 ≤ |βi||a_{i,i−1}| + |αi||a_{i,i+1}| ≤ 1 / (2‖⟨A⟩^{-1}‖_∞ − 1),

then PA is an H-matrix.

Proof. We have

    (PA)_{ij} = a_{ij} − αi a_{i,i+1} a_{i+1,j}                           for i = 1,
    (PA)_{ij} = a_{ij} − βi a_{i,i−1} a_{i−1,j} − αi a_{i,i+1} a_{i+1,j}  for i = 2, ..., n − 1,
    (PA)_{ij} = a_{ij} − βi a_{i,i−1} a_{i−1,j}                           for i = n.

Since A is an H-matrix with unit diagonal elements, ⟨A⟩ is an M-matrix. Let ⟨A⟩ = I − C; then C ≥ 0 and ρ(C) < 1, so

    ⟨A⟩^{-1} = (I − C)^{-1} = Σ_{k=0}^{∞} C^k ≥ I,

and therefore

    ‖⟨A⟩^{-1}‖_∞ ≥ 1,  2‖⟨A⟩^{-1}‖_∞ − 1 ≥ 1 > 0.

Let r = ⟨A⟩^{-1}e, where e = (1, 1, ..., 1)^T; then r ≥ e and ⟨A⟩r = e.

(1) For i = 1,

    (⟨PA⟩r)_1 = |1 − α1 a12 a21| r1 − |a12 − α1 a12| r2 − Σ_{j=3}^{n} |a_{1j} − α1 a12 a_{2j}| rj
      ≥ r1 − |α1 a12 a21| r1 − |a12| r2 − |α1 a12| r2 − Σ_{j=3}^{n} |a_{1j}| rj − Σ_{j=3}^{n} |α1 a12 a_{2j}| rj
      = r1 − Σ_{j=2}^{n} |a_{1j}| rj + |α1 a12| (−|a21| r1 − |a22| r2 − Σ_{j=3}^{n} |a_{2j}| rj)
      = (⟨A⟩r)_1 − |α1 a12| (2|a22| r2 − (⟨A⟩r)_2)
      = 1 − |α1 a12| (2r2 − 1)
      > 1 − [1 / (|a12| (2‖⟨A⟩^{-1}‖_∞ − 1))] (2r2 − 1) |a12|
      ≥ 1 − [1 / (|a12| (2‖⟨A⟩^{-1}‖_∞ − 1))] (2‖⟨A⟩^{-1}‖_∞ − 1) |a12|
      = 0.

Thus (⟨PA⟩r)_1 > 0.

(2) For i = n,

    (⟨PA⟩r)_n = |1 − βn a_{n,n−1} a_{n−1,n}| rn − Σ_{j=1}^{n−1} |a_{nj} − βn a_{n,n−1} a_{n−1,j}| rj
      ≥ rn − |βn a_{n,n−1} a_{n−1,n}| rn − Σ_{j=1}^{n−1} |a_{nj}| rj − |βn a_{n,n−1} a_{n−1,1}| r1 − Σ_{j=2}^{n−1} |βn a_{n,n−1} a_{n−1,j}| rj
      = rn − Σ_{j=1}^{n−1} |a_{nj}| rj + |βn a_{n,n−1}| (−|a_{n−1,n}| rn − |a_{n−1,1}| r1 − Σ_{j=2}^{n−1} |a_{n−1,j}| rj)
      = (⟨A⟩r)_n − |βn a_{n,n−1}| (2|a_{n−1,n−1}| r_{n−1} − (⟨A⟩r)_{n−1})
      = 1 − |βn a_{n,n−1}| (2r_{n−1} − 1)
      > 1 − [1 / (|a_{n,n−1}| (2‖⟨A⟩^{-1}‖_∞ − 1))] (2r_{n−1} − 1) |a_{n,n−1}|
      ≥ 1 − [1 / (|a_{n,n−1}| (2‖⟨A⟩^{-1}‖_∞ − 1))] (2‖⟨A⟩^{-1}‖_∞ − 1) |a_{n,n−1}|
      = 0.

Thus (⟨PA⟩r)_n > 0.

(3) For i = 2, 3, ..., n − 1,

    (⟨PA⟩r)_i = |1 − βi a_{i,i−1} a_{i−1,i} − αi a_{i,i+1} a_{i+1,i}| ri − Σ_{j≠i} |a_{ij} − βi a_{i,i−1} a_{i−1,j} − αi a_{i,i+1} a_{i+1,j}| rj
      ≥ ri − |βi a_{i,i−1} a_{i−1,i}| ri − |αi a_{i,i+1} a_{i+1,i}| ri − Σ_{j≠i} |a_{ij}| rj − Σ_{j≠i} |βi a_{i,i−1} a_{i−1,j}| rj − Σ_{j≠i} |αi a_{i,i+1} a_{i+1,j}| rj
      = ri − Σ_{j≠i} |a_{ij}| rj + |βi a_{i,i−1}| (−|a_{i−1,i}| ri − Σ_{j≠i} |a_{i−1,j}| rj) + |αi a_{i,i+1}| (−|a_{i+1,i}| ri − Σ_{j≠i} |a_{i+1,j}| rj)
      = (⟨A⟩r)_i − |βi a_{i,i−1}| (2|a_{i−1,i−1}| r_{i−1} − (⟨A⟩r)_{i−1}) − |αi a_{i,i+1}| (2|a_{i+1,i+1}| r_{i+1} − (⟨A⟩r)_{i+1})
      = 1 − |βi a_{i,i−1}| (2r_{i−1} − 1) − |αi a_{i,i+1}| (2r_{i+1} − 1)
      > 1 − (|βi a_{i,i−1}| + |αi a_{i,i+1}|) (2‖⟨A⟩^{-1}‖_∞ − 1)
      ≥ 1 − [1 / (2‖⟨A⟩^{-1}‖_∞ − 1)] (2‖⟨A⟩^{-1}‖_∞ − 1)
      = 0.

Then (⟨PA⟩r)_i > 0 for i = 2, 3, ..., n − 1.

Combining the three cases gives ⟨PA⟩r > 0, so by Lemma 5 ⟨PA⟩ is an M-matrix, and hence PA is an H-matrix. □

By Lemma 7, we can obtain the following result.

Theorem 2. Let A be a nonsingular H-matrix with unit diagonal elements, let Ã = PA, and let L̃_k (k = 1, 2, ..., α) be strictly lower triangular matrices of Ã. Define Ũ_k (k = 1, 2, ..., α) such that

    Ã = D̃ − L̃_k − Ũ_k.

Assume that

    ⟨Ã⟩ = |D̃| − |L̃_k| − |Ũ_k| = |D̃| − |B̃|.

Under the assumptions of Theorem 1, if

    0 < ω1, ω2 < 2/(1 + ρ̃),  0 ≤ γ1 ≤ ω1,  0 ≤ γ2 ≤ ω2,

then the LPUSAOR method converges for any initial vector x^(0), where ρ̃ = ρ(J̃) = ρ(|D̃|^{-1} |B̃|). The proof of Theorem 2 is similar to the proof of Lemma 7 and is therefore omitted.
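Theorem 1 can be checked numerically; this is an illustration of ours on an assumed test matrix. We pick αi, βi within the stated bounds and verify ⟨PA⟩r > 0 for r = ⟨A⟩^{-1}e, which by Lemma 1 certifies that PA is an H-matrix:

```python
import numpy as np

n = 6
A = np.eye(n) - 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))  # H-matrix, unit diagonal
C = -np.abs(A)
np.fill_diagonal(C, 1.0)                                   # comparison matrix <A> (= A here)
bound = 1.0 / (2.0 * np.linalg.norm(np.linalg.inv(C), np.inf) - 1.0)

# Condition (2): |beta_i||a_{i,i-1}| + |alpha_i||a_{i,i+1}| = 0.8 * bound <= bound.
alpha = beta = 0.4 * bound / 0.25

P = np.eye(n)
for i in range(n - 1):
    P[i, i + 1] = -alpha * A[i, i + 1]   # entries of (5): -alpha_i * a_{i,i+1}
for i in range(1, n):
    P[i, i - 1] = -beta * A[i, i - 1]    # entries of (5): -beta_i * a_{i,i-1}

PA = P @ A
CPA = -np.abs(PA)
np.fill_diagonal(CPA, np.abs(np.diag(PA)))                 # comparison matrix <PA>
r = np.linalg.solve(C, np.ones(n))                         # r = <A>^{-1} e > 0
```

Positivity of every entry of ⟨PA⟩r is exactly the quantity the proof of Theorem 1 bounds from below by zero.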

5. Comparison results of spectral radius

Let

    ⟨A⟩ = M̂_k − N̂_k = (1/ω1)(I − γ1 |L_k|) − (1/ω1)[(1 − ω1)I + (ω1 − γ1)|L_k| + ω1 |U_k|]
        = M̂̂_k − N̂̂_k = (1/ω2)(I − γ2 |U_k|) − (1/ω2)[(1 − ω2)I + (ω2 − γ2)|U_k| + ω2 |L_k|],


where

    M̂_k = (1/ω1)(I − γ1 |L_k|),   N̂_k = (1/ω1)[(1 − ω1)I + (ω1 − γ1)|L_k| + ω1 |U_k|],
    M̂̂_k = (1/ω2)(I − γ2 |U_k|),   N̂̂_k = (1/ω2)[(1 − ω2)I + (ω2 − γ2)|U_k| + ω2 |L_k|].

Then the iteration matrix of the parallel multisplitting USAOR method for ⟨A⟩ is

    Ĥ_LUSAOR = Σ_{k=1}^{α} E_k M̂̂_k^{-1} N̂̂_k M̂_k^{-1} N̂_k.    (7)

Similarly, let

    ⟨PA⟩ = M̃_k − Ñ_k = (1/ω1)(|D̃| − γ1 |L̃_k|) − (1/ω1)[(1 − ω1)|D̃| + (ω1 − γ1)|L̃_k| + ω1 |Ũ_k|]
         = M̃̃_k − Ñ̃_k = (1/ω2)(|D̃| − γ2 |Ũ_k|) − (1/ω2)[(1 − ω2)|D̃| + (ω2 − γ2)|Ũ_k| + ω2 |L̃_k|],

where

    M̃_k = (1/ω1)(|D̃| − γ1 |L̃_k|),   Ñ_k = (1/ω1)[(1 − ω1)|D̃| + (ω1 − γ1)|L̃_k| + ω1 |Ũ_k|],
    M̃̃_k = (1/ω2)(|D̃| − γ2 |Ũ_k|),   Ñ̃_k = (1/ω2)[(1 − ω2)|D̃| + (ω2 − γ2)|Ũ_k| + ω2 |L̃_k|].

Then the iteration matrix of the preconditioned parallel multisplitting USAOR method for ⟨PA⟩ is

    Ḧ_LPUSAOR = Σ_{k=1}^{α} E_k M̃̃_k^{-1} Ñ̃_k M̃_k^{-1} Ñ_k.    (8)

Theorem 3. Let A be a nonsingular H-matrix, and let L̃_k and L_k (k = 1, 2, ..., α) be strictly lower triangular matrices of Ã = PA and A, respectively. Define Ũ_k and U_k such that Ã = PA = D̃ − L̃_k − Ũ_k and A = I − L_k − U_k. Assume that ⟨Ã⟩ = |D̃| − |L̃_k| − |Ũ_k| and ⟨A⟩ = I − |L_k| − |U_k|. Under the assumptions of Theorem 1, if

    0 < ω1, ω2 < 1,  0 ≤ γ1 ≤ ω1,  0 ≤ γ2 ≤ ω2,

then ρ(H̃_LPUSAOR) ≤ ρ(Ḧ_LPUSAOR) ≤ ρ(Ĥ_LUSAOR).

Proof. Since ⟨A⟩ is a nonsingular M-matrix, it is easy to see that

    ⟨A⟩ = M̂_k − N̂_k = M̂̂_k − N̂̂_k

are two weak regular splittings. From Lemma 8, we have ⟨A⟩ = B̂_k − Ĉ_k, where

    B̂_k = M̂_k (M̂_k + M̂̂_k − ⟨A⟩)^{-1} M̂̂_k,  Ĉ_k = B̂_k − ⟨A⟩,

and the iteration matrix of the LUSAOR method for ⟨A⟩ can be rewritten as

    Ĥ_LUSAOR = Σ_{k=1}^{α} E_k B̂_k^{-1} Ĉ_k.

Similarly, the iteration matrix of the LPUSAOR method for ⟨PA⟩ can be rewritten as

    Ḧ_LPUSAOR = Σ_{k=1}^{α} E_k B̃_k^{-1} C̃_k,

where B̃_k = M̃_k (M̃_k + M̃̃_k − ⟨PA⟩)^{-1} M̃̃_k and C̃_k = B̃_k − ⟨PA⟩.

By (6), we have

    |L̃_{ω1,γ1}(k)| = |(D̃ − γ1 L̃_k)^{-1} [(1 − ω1)D̃ + (ω1 − γ1)L̃_k + ω1 Ũ_k]|
      ≤ |(D̃ − γ1 L̃_k)^{-1}| |(1 − ω1)D̃ + (ω1 − γ1)L̃_k + ω1 Ũ_k|
      ≤ (|D̃| − γ1 |L̃_k|)^{-1} [(1 − ω1)|D̃| + (ω1 − γ1)|L̃_k| + ω1 |Ũ_k|]
      = M̃_k^{-1} Ñ_k.


Similarly,

    |Ũ_{ω2,γ2}(k)| = |(D̃ − γ2 Ũ_k)^{-1} [(1 − ω2)D̃ + (ω2 − γ2)Ũ_k + ω2 L̃_k]| ≤ M̃̃_k^{-1} Ñ̃_k,

and then

    |H̃_LPUSAOR| = |Σ_{k=1}^{α} E_k Ũ_{ω2,γ2}(k) L̃_{ω1,γ1}(k)|
      ≤ Σ_{k=1}^{α} E_k |Ũ_{ω2,γ2}(k)| |L̃_{ω1,γ1}(k)|
      ≤ Σ_{k=1}^{α} E_k M̃̃_k^{-1} Ñ̃_k M̃_k^{-1} Ñ_k
      = Ḧ_LPUSAOR.    (9)

Note that B̃_k^{-1} ⟨PA⟩ = I − B̃_k^{-1} C̃_k, so Σ_{k=1}^{α} E_k B̃_k^{-1} ⟨PA⟩ = I − Ḧ_LPUSAOR; similarly, B̂_k^{-1} ⟨A⟩ = I − B̂_k^{-1} Ĉ_k, so Σ_{k=1}^{α} E_k B̂_k^{-1} ⟨A⟩ = I − Ĥ_LUSAOR.

From B̂_k = M̂_k (M̂_k + M̂̂_k − ⟨A⟩)^{-1} M̂̂_k and B̃_k = M̃_k (M̃_k + M̃̃_k − ⟨PA⟩)^{-1} M̃̃_k, we have

    B̂_k^{-1} = M̂̂_k^{-1} (M̂_k + M̂̂_k − ⟨A⟩) M̂_k^{-1},
    B̃_k^{-1} = M̃̃_k^{-1} (M̃_k + M̃̃_k − ⟨PA⟩) M̃_k^{-1}.

Since ⟨PA⟩ = (I + |S|)⟨A⟩, it is easy to get B̃_k^{-1} ≥ B̂_k^{-1} ≥ 0. Let x = ⟨A⟩^{-1}e > 0; then

    (⟨PA⟩ − ⟨A⟩)x = |S|e ≥ 0

and

    (Σ_{k=1}^{α} E_k B̃_k^{-1} ⟨PA⟩) x = (I − Ḧ_LPUSAOR)x ≥ (Σ_{k=1}^{α} E_k B̂_k^{-1} ⟨A⟩) x = (I − Ĥ_LUSAOR)x.

Thus we have

    ‖Ḧ_LPUSAOR‖_x ≤ ‖Ĥ_LUSAOR‖_x.

As Ĥ_LUSAOR is nonnegative, it has a positive Perron vector. By Lemma 6, the following inequality holds:

    ρ(Ḧ_LPUSAOR) ≤ ρ(Ĥ_LUSAOR).

From (9) and Lemma 3, we have

    ρ(H̃_LPUSAOR) ≤ ρ(|H̃_LPUSAOR|) ≤ ρ(Ḧ_LPUSAOR) ≤ ρ(Ĥ_LUSAOR).

The conclusion is proved. □

Corollary 1. Let A be a nonsingular M-matrix, and let L̃_k and L_k (k = 1, 2, ..., α) be strictly lower triangular matrices of Ã = PA and A, respectively. Define Ũ_k and U_k such that Ã = PA = D̃ − L̃_k − Ũ_k and A = I − L_k − U_k. Assume that ⟨Ã⟩ = |D̃| − |L̃_k| − |Ũ_k| and ⟨A⟩ = I − |L_k| − |U_k|. Under the assumptions of Theorem 1, if

    0 < ω1, ω2 < 1,  0 ≤ γ1 ≤ ω1,  0 ≤ γ2 ≤ ω2,

then

    ρ(H̃_LPUSAOR) ≤ ρ(Ḧ_LPUSAOR) ≤ ρ(H_LUSAOR).

Remark 1. Corollary 1 shows that, for M-matrices linear systems, the preconditioned parallel multisplitting USAOR method converges faster than the parallel multisplitting USAOR method.


6. Numerical example

Consider the linear system Ax = b, where A is the n × n tridiagonal matrix

    A = tridiag(−1/4, 1, −1/4),  b = (1, 1/4, ..., 1/n²)^T.

By simple computation, we know that A is an M-matrix. Obviously ⟨A⟩ = A, so H_LUSAOR = Ĥ_LUSAOR. We take



the preconditioner P of the form (5), and r = ⟨A⟩^{-1}e, where e = (1, 1, ..., 1)^T and the αi, βi (i = 1, 2, ..., n) satisfy the conditions of Theorem 1. Let S1 = {1, 2, ..., m1} and S2 = {m1 + 1, ..., n}, and determine (I − L1, U1, E1) and (I − L2, U2, E2) as splittings of the matrix A, where

    L1 = (l_{ij}^(1)),  l_{ij}^(1) = −a_{ij} for i > j with i, j ∈ S1, and l_{ij}^(1) = 0 otherwise;
    U1 = (u_{ij}^(1)),  u_{ij}^(1) = −(a_{ij} + l_{ij}^(1)) for i ≠ j, and u_{ij}^(1) = 0 otherwise;
    L2 = (l_{ij}^(2)),  l_{ij}^(2) = −a_{ij} for i > j with i, j ∈ S2, and l_{ij}^(2) = 0 otherwise;
    U2 = (u_{ij}^(2)),  u_{ij}^(2) = −(a_{ij} + l_{ij}^(2)) for i ≠ j, and u_{ij}^(2) = 0 otherwise;
    E_k = diag(E_{11}^(k), E_{22}^(k), ..., E_{nn}^(k)),  k = 1, 2.

˜ where Let A˜ = PA = (a˜i j ), determine (D˜ − L˜1 , U˜1 , E1 ) and (D˜ − L˜2 , U˜2 , E2 ) are the splittings of matrix A,

D˜ = diag(a˜11 , a˜22 , . . . , a˜nn ), (1 ) L˜1 = (l˜i j ), l˜i(j1) =



U˜1 = (u˜i(j1) ), u˜i(j1) = L˜2 = (l˜i(j2) ), l˜i(j2) =

−a˜i j 0





U˜2 = (u˜i(j2) ), u˜i(j2) =

i < j, i, j ∈ S1 , otherwise,

−(a˜i j + l˜i(j1) ) 0

−a˜i j 0



i = j, otherwise,

i < j, i, j ∈ S2 , otherwise,

−(a˜i j + l˜i(j2) ) 0

i = j, otherwise,

(k ) (k ) (k ) Ek = diag(E11 , E22 , . . . , Enm ), k = 1, 2.

Table 1. Comparison of spectral radii.

    n     ω1, γ1    ω2, γ2    ρ(H̃_L)   ρ(Ḧ_L)   ρ(H_L)
    50    0.8, 0.4  0.7, 0.6  0.1704    0.2067    0.3173
    50    1.0, 0.6  0.9, 0.6  0.0633    0.0915    0.1884
    100   0.8, 0.4  0.7, 0.6  0.1716    0.2093    0.3233
    100   1.0, 0.6  0.9, 0.6  0.0638    0.0926    0.1914
    500   0.8, 0.4  0.7, 0.6  0.1736    0.2119    0.3265
    500   1.0, 0.6  0.9, 0.6  0.0646    0.0940    0.1935

We take m1 = Int(4n/5), α1 = α2 = ··· = α_{n−1} = 1/10, and β2 = β3 = ··· = βn = 11/10. Then the preconditioner is the tridiagonal matrix

    P = tridiag(11/40, 1, 1/40),

with subdiagonal entries −βi a_{i,i−1} = 11/40 and superdiagonal entries −αi a_{i,i+1} = 1/40.
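The example can be reproduced with a short script. This is our sketch: the weighting matrices E_k below are an assumed choice (the paper does not list them explicitly), so the radii need not match Table 1 digit for digit; what is checked is the qualitative ordering of Corollary 1:

```python
import numpy as np

def sweep(D, L, U, w, g):
    """(D - g L)^{-1} [(1 - w) D + (w - g) L + w U], one SAOR half-sweep."""
    return np.linalg.solve(D - g * L, (1 - w) * D + (w - g) * L + w * U)

def usaor_matrix(D, Ls, Us, Es, w1, g1, w2, g2):
    """H = sum_k E_k U_{w2,g2}(k) L_{w1,g1}(k) as in (4)/(6)."""
    H = np.zeros_like(D)
    for L, U, E in zip(Ls, Us, Es):
        H += E @ (sweep(D, U, L, w2, g2) @ sweep(D, L, U, w1, g1))
    return H

def split_on(M, S):
    """Strictly lower triangular part of M restricted to S x S (sign flipped),
    plus the matching U_k so that M = D - L_k - U_k."""
    Lk = np.zeros_like(M)
    Lk[np.ix_(S, S)] = -np.tril(M[np.ix_(S, S)], -1)
    D = np.diag(np.diag(M))
    return Lk, D - M - Lk, D

n = 50
A = np.eye(n) - 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
m1 = (4 * n) // 5
S1, S2 = np.arange(m1), np.arange(m1, n)
E1 = np.diag((np.arange(n) < m1).astype(float))   # assumed weights, E1 + E2 = I
E2 = np.eye(n) - E1

# P of the example: alpha = 1/10, beta = 11/10.
P = np.eye(n) + (11.0 / 40.0) * np.eye(n, k=-1) + (1.0 / 40.0) * np.eye(n, k=1)
w1, g1, w2, g2 = 0.8, 0.4, 0.7, 0.6

L1, U1, D = split_on(A, S1)
L2, U2, _ = split_on(A, S2)
H = usaor_matrix(D, [L1, L2], [U1, U2], [E1, E2], w1, g1, w2, g2)

At = P @ A
Lt1, Ut1, Dt = split_on(At, S1)
Lt2, Ut2, _ = split_on(At, S2)
Ht = usaor_matrix(Dt, [Lt1, Lt2], [Ut1, Ut2], [E1, E2], w1, g1, w2, g2)

rho, rho_t = (max(abs(np.linalg.eigvals(M))) for M in (H, Ht))
```

Both radii should lie below one, with the preconditioned radius the smaller of the two, in line with Corollary 1.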

In Table 1, we give the spectral radii of the iteration matrices; ρ(H̃_L), ρ(Ḧ_L), and ρ(H_L) denote the spectral radii of H̃_LPUSAOR, Ḧ_LPUSAOR, and H_LUSAOR, respectively. From Table 1, we see that the results are consistent with the conclusion of Corollary 1. This shows that the preconditioned multisplitting USAOR method discussed in this paper is better than the multisplitting USAOR method.

Acknowledgments

The authors would like to thank the referees for their valuable comments and suggestions, which greatly improved the original version of this paper. This work was supported by the National Natural Science Foundation of China (Grant no. 11001144) and the Natural Science Foundation of Shandong Province (no. ZR2012AL09).

References

[1] Z.-Z. Bai, On the convergence domain of the matrix multisplitting relaxation methods for linear systems, Appl. Math. J. Chin. Univ. 13 (1998) 45–52.
[2] Z.-Z. Bai, J.-C. Sun, D.-R. Wang, A unified framework for the construction of various matrix multisplitting iterative methods for large sparse system of linear equations, Comput. Math. Appl. 32 (1996) 51–76.
[3] Z.-Z. Bai, L. Wang, J.-Y. Yuan, Weak convergence theory of quasi-nonnegative splittings for singular matrices, Appl. Numer. Math. 47 (2003) 75–89.
[4] M. Benzi, D.B. Szyld, Existence and uniqueness of splittings for stationary iterative methods with applications to alternating methods, Numer. Math. 76 (1997) 309–321.
[5] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, SIAM, Philadelphia, 1994.
[6] Z.-H. Cao, Z.-Y. Liu, Convergence of relaxed parallel multisplitting methods with different weighting schemes, Appl. Math. Comput. 106 (1999) 181–196.
[7] L. Elsner, Comparisons of weak regular splittings and multisplitting methods, Numer. Math. 56 (1989) 283–289.
[8] A. Frommer, G. Mayer, Convergence of relaxed parallel multisplitting methods, Linear Algebra Appl. 119 (1989) 141–152.
[9] L.Yu. Kolotilina, Two-sided bounds for the inverse of an H-matrix, Linear Algebra Appl. 225 (1995) 117–123.
[10] H. Saberi Najafi, S.A. Edalatpanah, Comparison analysis for improving preconditioned SOR-type iterative method, Numer. Anal. Appl. 6 (2013) 62–70.
[11] M. Neumann, R.J. Plemmons, Convergence of parallel multisplitting iterative methods for M-matrices, Linear Algebra Appl. 88–89 (1987) 559–573.
[12] D.P. O'Leary, R.E. White, Multi-splittings of matrices and parallel solution of linear systems, SIAM J. Algebraic Discrete Methods 6 (1985) 630–640.
[13] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, 1981.
[14] R.E. White, Multisplitting with different weighting schemes, SIAM J. Matrix Anal. Appl. 10 (1989) 481–493.
[15] J.-Y. Yuan, D.D. Zontini, Comparison theorems of preconditioned Gauss–Seidel methods for M-matrices, Appl. Math. Comput. 219 (2012) 1947–1957.
[16] L.-T. Zhang, T.-Z. Huang, T.-X. Gu, X.-L. Guo, Convergence of relaxed multisplitting USAOR methods for H-matrices linear systems, Appl. Math. Comput. 202 (2008) 121–132.
Kolotilina, Two-sided bounds for the inverse of an H-matrix, Linear Algebra Appl. 225 (1995) 117–123. [10] H. Saberi Najafi, S.A. Edalatpanah, Comparison analysis for improving preconditioned SOR-type iterative method, Numer. Anal. Appl. 6 (2013) 62–70. [11] M. Neumann, R.J. Plemmons, Convergence of parallel multisplitting iterative methods for M-matrices, Linear Algebra Appl. 88-89 (1987) 559–573. [12] D.P. O’leary, R.E. White, Multisplittings of matrices and parallel solution of linear systems, SIAM J. Algebraic Discret. Methods 6 (1985) 630–640. [13] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, 1981. [14] R.E. White, Multisplitting with different weighting schemes, SIAM J. Matrix Anal. Appl. 10 (1989) 481–493. [15] J.-Y. Yuan, D.D. Zontini, Comparison theorems of preconditioned Gauss–Seidel methods for M-matrices, Appl. Math. Comput. 219 (2012) 1947–1957. [16] L.-T. Zhang, T.-Z. Huang, T.-X. Gu, X.-L. Guo, Convergence of relaxed multisplitting USAOR methods for H-matrices linear systems, Appl. Math. Comput. 202 (2008) 121–132.