Two new preconditioned GAOR methods for weighted linear least squares problems


Applied Mathematics and Computation 324 (2018) 93–104


Shu-Xin Miao (a), Yu-Hua Luo (a), Guang-Bin Wang (b,*)

(a) College of Mathematics and Statistics, Northwest Normal University, Lanzhou 730070, PR China
(b) Department of Mathematics, Qingdao Agricultural University, Qingdao 266109, PR China


MSC: 65F10, 65F50

Keywords: Preconditioner; GAOR method; Preconditioned GAOR method; Weighted linear least squares problem; Comparison

In this paper, preconditioned generalized accelerated overrelaxation (GAOR) methods for solving weighted linear least squares problems are considered. Two new preconditioners are proposed and the convergence rates of the new preconditioned GAOR methods are studied. Comparison results show that the convergence rates of the new preconditioned GAOR methods are better than those of the preconditioned GAOR methods in the previous literature whenever these methods are convergent. A numerical example is given to confirm our theoretical results. © 2017 Elsevier Inc. All rights reserved.

1. Introduction

In this paper, we consider the weighted linear least squares problem

$$\min_{x\in\mathbb{R}^n}\ (Ax-b)^T W^{-1}(Ax-b),$$

where $A\in\mathbb{R}^{n\times n}$, $b\in\mathbb{R}^n$ and $W\in\mathbb{R}^{n\times n}$ is a symmetric positive definite matrix. This problem has many scientific applications; a typical source is parameter estimation in mathematical modelling, see [4,14,15] for details. In order to solve it, one has to solve a linear system of the form

$$Hy=f,\qquad y,f\in\mathbb{R}^n,$$

where

$$H=\begin{pmatrix}I_p-B&U\\ L&I_q-C\end{pmatrix} \tag{1}$$

is a non-singular matrix with $B=(b_{ij})\in\mathbb{R}^{p\times p}$, $C=(c_{ij})\in\mathbb{R}^{q\times q}$, $L=(l_{ij})\in\mathbb{R}^{q\times p}$, $U=(u_{ij})\in\mathbb{R}^{p\times q}$, and $p+q=n$. Here and elsewhere in the paper, $I_k$ denotes the identity matrix of dimension $k$. Many iterative methods for solving the linear system (1) have been studied [4,10]. Such iterative methods require the inverses of $I_p-B$ and $I_q-C$, which is their main drawback. To avoid this problem, Yuan and Jin [15] proposed the generalized AOR (GAOR) method for solving the linear system (1), and Darvishi and Hessari [3] studied



* Corresponding author. E-mail address: [email protected] (G.-B. Wang).

https://doi.org/10.1016/j.amc.2017.12.007
0096-3003/© 2017 Elsevier Inc. All rights reserved.


the convergence of the GAOR method when the coefficient matrix H is a diagonally dominant matrix. The GAOR method for solving the linear system (1) is defined by

$$y_{k+1}=T_{\gamma\omega}y_k+\omega g,\qquad k=0,1,2,\ldots, \tag{2}$$

where

$$T_{\gamma\omega}=\begin{pmatrix}I_p&0\\ \gamma L&I_q\end{pmatrix}^{-1}\left[(1-\omega)I_n+(\omega-\gamma)\begin{pmatrix}0&0\\ -L&0\end{pmatrix}+\omega\begin{pmatrix}B&-U\\ 0&C\end{pmatrix}\right]
=\begin{pmatrix}(1-\omega)I_p+\omega B&-\omega U\\ \omega(\gamma-1)L-\omega\gamma LB&(1-\omega)I_q+\omega C+\omega\gamma LU\end{pmatrix} \tag{3}$$

is the iteration matrix,

$$g=\begin{pmatrix}I_p&0\\ -\gamma L&I_q\end{pmatrix}f,$$

and $\omega$ and $\gamma$ are real parameters with $\omega\ne 0$. The spectral radius of the iteration matrix $T_{\gamma\omega}$ is decisive for convergence: the smaller it is, the faster the method converges [13]. To improve the convergence rate of the corresponding iterative method, preconditioning techniques are used [2]. In particular, we consider the following equivalent left preconditioned linear system of (1)

$$PHy=Pf, \tag{4}$$

where $P\in\mathbb{R}^{n\times n}$, called the left preconditioner, is nonsingular. If we express $PH$ as

$$PH=\begin{pmatrix}I_p-\widetilde B&\widetilde U\\ \widetilde L&I_q-\widetilde C\end{pmatrix},$$

then the GAOR method for solving the preconditioned linear system (4), which is also called the preconditioned GAOR method [18] for solving the linear system (1), is defined by

$$y_{k+1}=\widetilde T_{\gamma\omega}y_k+\omega\widetilde g,\qquad k=0,1,2,\ldots, \tag{5}$$

where

$$\widetilde T_{\gamma\omega}=\begin{pmatrix}(1-\omega)I_p+\omega\widetilde B&-\omega\widetilde U\\ \omega(\gamma-1)\widetilde L-\omega\gamma\widetilde L\widetilde B&(1-\omega)I_q+\omega\widetilde C+\omega\gamma\widetilde L\widetilde U\end{pmatrix}$$

and

$$\widetilde g=\begin{pmatrix}I_p&0\\ -\gamma\widetilde L&I_q\end{pmatrix}Pf.$$
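For concreteness, the block formula (3) — and hence also its preconditioned analogue (5), which has the same shape with the preconditioned blocks substituted — can be sketched in NumPy. The helper names and the tiny sample blocks below are our own illustrative assumptions, not data from the paper.

```python
import numpy as np

def gaor_matrix(B, C, L, U, gamma, omega):
    """Iteration matrix (3) of the GAOR method, assembled blockwise."""
    p, q = B.shape[0], C.shape[0]
    top = np.hstack([(1 - omega) * np.eye(p) + omega * B, -omega * U])
    bot = np.hstack([omega * (gamma - 1) * L - omega * gamma * L @ B,
                     (1 - omega) * np.eye(q) + omega * C + omega * gamma * L @ U])
    return np.vstack([top, bot])

def gaor_matrix_factored(B, C, L, U, gamma, omega):
    """Same matrix via the factored form preceding (3)."""
    p, q = B.shape[0], C.shape[0]
    n = p + q
    M = np.block([[np.eye(p), np.zeros((p, q))], [gamma * L, np.eye(q)]])
    E = np.block([[np.zeros((p, p)), np.zeros((p, q))], [-L, np.zeros((q, q))]])
    F = np.block([[B, -U], [np.zeros((q, p)), C]])
    # solve M X = (...) instead of forming the inverse explicitly
    return np.linalg.solve(M, (1 - omega) * np.eye(n) + (omega - gamma) * E + omega * F)

# Tiny illustrative blocks (assumed data): p = q = 2.
B = np.array([[0.1, 0.05], [0.2, 0.1]])
C = np.array([[0.1, 0.04], [0.15, 0.1]])
L = -np.full((2, 2), 0.05)
U = -np.full((2, 2), 0.05)
T = gaor_matrix(B, C, L, U, gamma=0.6, omega=0.8)
T2 = gaor_matrix_factored(B, C, L, U, gamma=0.6, omega=0.8)
assert np.allclose(T, T2)          # the two expressions in (3) agree
rho = max(abs(np.linalg.eigvals(T)))
```

With these sample blocks the two expressions coincide and the spectral radius is well below one; feeding the routine the blocks of $PH$ yields the preconditioned iteration matrix of (5) instead.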

Several kinds of preconditioners for the GAOR method for solving the linear system (1) have been proposed in the literature; see, for example, [5–9,12,16–18]. In this paper, following the idea in [7] and based on the works [5,6,9,16–18], we propose two new preconditioners for the GAOR method and study the convergence rates of the resulting preconditioned GAOR methods for solving the linear system (1).

The remainder of the paper is organized as follows. Section 2 contains the preliminaries. In Section 3, after reviewing the preconditioners in the previous literature, two new preconditioners are introduced and the corresponding preconditioned GAOR methods are proposed. In Section 4, the convergence rates of the new preconditioned GAOR methods are compared with that of the original GAOR method and with those of the preconditioned GAOR methods in [5,16,18] and [6,9,17], respectively. The comparison results show that, whenever these methods are convergent, the new preconditioned GAOR methods converge faster than both the GAOR method and the preconditioned GAOR methods in the previous literature. In Section 5, a numerical example is provided to confirm the theoretical results. Finally, some conclusions are drawn in Section 6.

2. Preliminaries

For $A=(a_{ij}),B=(b_{ij})\in\mathbb{R}^{n\times n}$, we write $A\ge B$ ($A>B$) if $a_{ij}\ge b_{ij}$ ($a_{ij}>b_{ij}$) holds for all $i,j=1,2,\ldots,n$. We say that $A$ is nonnegative (positive) if $A\ge 0$ ($A>0$); note that $A-B\ge 0$ if and only if $A\ge B$. These definitions carry over immediately to vectors by identifying them with $n\times 1$ matrices. $\rho(\cdot)$ denotes the spectral radius of a square matrix. $A$ is called irreducible if the directed graph of $A$ is strongly connected [10]. Some useful results to which we refer later are provided below.

Lemma 2.1 ([10]). Let $A\in\mathbb{R}^{n\times n}$ be a nonnegative and irreducible matrix. Then

S.-X. Miao et al. / Applied Mathematics and Computation 324 (2018) 93–104

95

(a) $A$ has a positive eigenvalue equal to $\rho(A)$;
(b) $A$ has an eigenvector $x>0$ corresponding to $\rho(A)$.

Lemma 2.2 ([1]). Let $A\in\mathbb{R}^{n\times n}$ be a nonnegative matrix. Then
(a) if $\alpha x\le Ax$ for a vector $x\ge 0$, $x\ne 0$, then $\alpha\le\rho(A)$;
(b) if $Ax\le\beta x$ for a vector $x>0$, then $\rho(A)\le\beta$. Moreover, if $A$ is irreducible and $0\ne\alpha x\le Ax\le\beta x$, equality excluded, for a vector $x\ge 0$, $x\ne 0$, then $\alpha<\rho(A)<\beta$ and $x>0$.

3. Preconditioned GAOR methods

In this section, we introduce two new preconditioners for the GAOR method and propose the corresponding preconditioned GAOR methods for solving the linear system (1). The first new preconditioner is based on the preconditioners in [5,16,18]. Recall that in 2009, Zhou et al. [18] proposed the preconditioner of the form





$$P_1=\begin{pmatrix}I_p+S_1(\alpha)&0\\ 0&I_q\end{pmatrix},$$

where, for $\alpha>0$,

$$S_1(\alpha)=\begin{pmatrix}0&\cdots&0\\ \vdots&\ddots&\vdots\\ \dfrac{b_{p1}}{\alpha}&\cdots&0\end{pmatrix}.$$

Then $P_1H$ can be expressed as

$$P_1H=\begin{pmatrix}I_p-B_1^*&U_1^*\\ L&I_q-C\end{pmatrix},$$

where $B_1^*=B-S_1(\alpha)(I_p-B)$ and $U_1^*=(I_p+S_1(\alpha))U$. The corresponding preconditioned GAOR method for solving the linear system (1) is defined as

$$y_{k+1}=T_{\gamma\omega 1}y_k+\omega g_1,\qquad k=0,1,2,\ldots, \tag{6}$$

where

$$T_{\gamma\omega 1}=\begin{pmatrix}(1-\omega)I_p+\omega B_1^*&-\omega U_1^*\\ \omega(\gamma-1)L-\omega\gamma LB_1^*&(1-\omega)I_q+\omega C+\omega\gamma LU_1^*\end{pmatrix} \tag{7}$$

is the iteration matrix and $g_1=\begin{pmatrix}I_p&0\\ -\gamma L&I_q\end{pmatrix}P_1f$ is the known vector. In 2012, Yun [16] considered the preconditioner $\widetilde P_1$ of the form



$$\widetilde P_1=\begin{pmatrix}I_p+S_1(\alpha)&0\\ 0&I_q+V_1(\tau)\end{pmatrix},$$

where $S_1(\alpha)$ is defined as above and, for $\tau>0$,

$$V_1(\tau)=\begin{pmatrix}0&\cdots&0\\ \vdots&\ddots&\vdots\\ \dfrac{c_{q1}}{\tau}&\cdots&0\end{pmatrix}.$$

The same preconditioning techniques are considered in [5,7,17]. The preconditioned matrix $\widetilde P_1H$ can then be expressed as

$$\widetilde P_1H=\begin{pmatrix}I_p-B_1^*&U_1^*\\ L_1^*&I_q-C_1^*\end{pmatrix},$$

where $B_1^*$ and $U_1^*$ are as above and $L_1^*=(I_q+V_1(\tau))L$, $C_1^*=C-V_1(\tau)(I_q-C)$. The corresponding preconditioned GAOR method is

$$y_{k+1}=\widetilde T_{\gamma\omega 1}y_k+\omega\widetilde g_1,\qquad k=0,1,2,\ldots, \tag{8}$$

with the iteration matrix

$$\widetilde T_{\gamma\omega 1}=\begin{pmatrix}(1-\omega)I_p+\omega B_1^*&-\omega U_1^*\\ \omega(\gamma-1)L_1^*-\omega\gamma L_1^*B_1^*&(1-\omega)I_q+\omega C_1^*+\omega\gamma L_1^*U_1^*\end{pmatrix} \tag{9}$$

and the corresponding known vector $\widetilde g_1=\begin{pmatrix}I_p&0\\ -\gamma L_1^*&I_q\end{pmatrix}\widetilde P_1f$. Yun [16] gave the following comparison result between the preconditioned GAOR method defined by (6) and the preconditioned GAOR method defined by (8).

Lemma 3.1 ([16, Theorem 3.4]). Let $T_{\gamma\omega 1}$ and $\widetilde T_{\gamma\omega 1}$ be the iteration matrices defined by (7) and (9), respectively. Assume that the matrix $H$ in (1) is irreducible with $L\le 0$, $U\le 0$, $B\ge 0$, $C\ge 0$, $b_{p1}>0$, $c_{q1}>0$, $\alpha>0$, $\tau>0$, $\alpha>1-b_{11}$, and $\tau>1-c_{11}$. If $0\le\gamma<1$ and $0<\omega\le 1$, then

$$\rho(\widetilde T_{\gamma\omega 1})<\rho(T_{\gamma\omega 1})\quad\text{if}\quad\rho(T_{\gamma\omega 1})<1.$$
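Lemma 3.1 can also be checked numerically. The sketch below is a hand-made small system with helper names of our own; we take $\alpha=\tau=1$, for which the entry $b_{p1}/\alpha$ of $S_1(\alpha)$ reduces to $b_{p1}$, builds $P_1$ and $\widetilde P_1$, reads the blocks off the preconditioned matrices, and compares the spectral radii of (7) and (9).

```python
import numpy as np

def gaor_from_H(M, p, gamma, omega):
    """GAOR iteration matrix for a coefficient matrix M, read off from
    the splitting M = [[I-B, U], [L, I-C]]."""
    n = M.shape[0]; q = n - p
    B = np.eye(p) - M[:p, :p]; U = M[:p, p:]
    L = M[p:, :p];             C = np.eye(q) - M[p:, p:]
    return np.block([[(1 - omega) * np.eye(p) + omega * B, -omega * U],
                     [omega * (gamma - 1) * L - omega * gamma * L @ B,
                      (1 - omega) * np.eye(q) + omega * C + omega * gamma * L @ U]])

rho = lambda T: max(abs(np.linalg.eigvals(T)))

p = q = 2
B = np.array([[0.10, 0.05], [0.20, 0.10]])   # b_{p1} = 0.20 > 0
C = np.array([[0.10, 0.04], [0.15, 0.10]])   # c_{q1} = 0.15 > 0
L = -np.full((q, p), 0.05); U = -np.full((p, q), 0.05)
H = np.block([[np.eye(p) - B, U], [L, np.eye(q) - C]])

alpha = tau = 1.0                            # alpha > 1 - b11, tau > 1 - c11
S1 = np.zeros((p, p)); S1[p - 1, 0] = B[p - 1, 0] / alpha
V1 = np.zeros((q, q)); V1[q - 1, 0] = C[q - 1, 0] / tau
P1  = np.block([[np.eye(p) + S1, np.zeros((p, q))], [np.zeros((q, p)), np.eye(q)]])
P1t = np.block([[np.eye(p) + S1, np.zeros((p, q))], [np.zeros((q, p)), np.eye(q) + V1]])

gamma, omega = 0.6, 0.8
r1  = rho(gaor_from_H(P1 @ H, p, gamma, omega))    # radius of (7)
r1t = rho(gaor_from_H(P1t @ H, p, gamma, omega))   # radius of (9)
assert r1t < r1 < 1                                # ordering of Lemma 3.1
```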

In [18], Zhou et al. proposed another preconditioner of the form

$$\bar P_1=\begin{pmatrix}I_p&0\\ K_1(\beta)&I_q\end{pmatrix},$$

where, for $\beta>0$,

$$K_1(\beta)=\begin{pmatrix}0&\cdots&0\\ \vdots&\ddots&\vdots\\ -\dfrac{l_{q1}}{\beta}&\cdots&0\end{pmatrix},$$

and considered the preconditioned GAOR method

$$y_{k+1}=\bar T_{\gamma\omega 1}y_k+\omega\bar g_1,\qquad k=0,1,2,\ldots, \tag{10}$$

with the iteration matrix

$$\bar T_{\gamma\omega 1}=\begin{pmatrix}(1-\omega)I_p+\omega B&-\omega U\\ \omega(\gamma-1)\bar L_1-\omega\gamma\bar L_1B&(1-\omega)I_q+\omega\bar C_1+\omega\gamma\bar L_1U\end{pmatrix} \tag{11}$$

and the corresponding known vector $\bar g_1=\begin{pmatrix}I_p&0\\ -\gamma\bar L_1&I_q\end{pmatrix}\bar P_1f$, where $\bar L_1=L+K_1(\beta)(I_p-B)$ and $\bar C_1=C-K_1(\beta)U$. Recently,

based on the preconditioners $P_1$ and $\bar P_1$, Huang et al. [5] considered the following preconditioner



$$\widehat P_1=\begin{pmatrix}I_p+S_1(\alpha)&0\\ K_1(\beta)&I_q\end{pmatrix}$$



and the corresponding preconditioned GAOR method

$$y_{k+1}=\widehat T_{\gamma\omega 1}y_k+\omega\widehat g_1,\qquad k=0,1,2,\ldots, \tag{12}$$

with the iteration matrix

$$\widehat T_{\gamma\omega 1}=\begin{pmatrix}(1-\omega)I_p+\omega B_1^*&-\omega U_1^*\\ \omega(\gamma-1)\bar L_1-\omega\gamma\bar L_1B_1^*&(1-\omega)I_q+\omega\bar C_1+\omega\gamma\bar L_1U_1^*\end{pmatrix} \tag{13}$$

and the corresponding known vector $\widehat g_1=\begin{pmatrix}I_p&0\\ -\gamma\bar L_1&I_q\end{pmatrix}\widehat P_1f$. Huang et al. [5] presented the following comparison theorem.

Lemma 3.2 ([5, Theorem 3.14]). Let $\bar T_{\gamma\omega 1}$ and $\widehat T_{\gamma\omega 1}$ be the iteration matrices defined by (11) and (13), respectively. Assume that the matrix $H$ in (1) is irreducible with $L\le 0$, $U\le 0$, $B\ge 0$, $C\ge 0$, $b_{p1}>0$, $l_{q1}<0$, $\alpha>0$, $\beta>0$, $\alpha>1-b_{11}$ and $\beta>1-b_{11}$. If $0\le\gamma<1$ and $0<\omega\le 1$, then

$$\rho(\widehat T_{\gamma\omega 1})<\rho(\bar T_{\gamma\omega 1})\quad\text{if}\quad\rho(\bar T_{\gamma\omega 1})<1.$$

In this paper, following the idea of Miao [7], the first new preconditioner we propose has the form

$$\widehat{\widetilde P}_1=\begin{pmatrix}I_p+S_1(\alpha)&0\\ K_1(\beta)&I_q+V_1(\tau)\end{pmatrix}, \tag{14}$$

where $S_1(\alpha)$, $K_1(\beta)$ and $V_1(\tau)$ are defined as above.
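A minimal sketch of assembling the new preconditioner (14) in NumPy; the placement of the single nonzero entries follows the displays of $S_1(\alpha)$, $K_1(\beta)$ and $V_1(\tau)$ above, and the sample blocks and parameter values are our own illustrative assumptions.

```python
import numpy as np

def new_preconditioner_1(B, C, L, alpha, beta, tau):
    """Assemble the preconditioner (14) from the blocks of H."""
    p, q = B.shape[0], C.shape[0]
    S1 = np.zeros((p, p)); S1[p - 1, 0] = B[p - 1, 0] / alpha   # (p,1) entry b_{p1}/alpha
    K1 = np.zeros((q, p)); K1[q - 1, 0] = -L[q - 1, 0] / beta   # (q,1) entry -l_{q1}/beta
    V1 = np.zeros((q, q)); V1[q - 1, 0] = C[q - 1, 0] / tau     # (q,1) entry c_{q1}/tau
    return np.block([[np.eye(p) + S1, np.zeros((p, q))],
                     [K1,             np.eye(q) + V1]])

# Assumed sample blocks with p = q = 2.
B = np.array([[0.10, 0.05], [0.20, 0.10]])
C = np.array([[0.10, 0.04], [0.15, 0.10]])
L = -np.full((2, 2), 0.05)
P = new_preconditioner_1(B, C, L, alpha=1.0, beta=1.0, tau=1.0)
assert P.shape == (4, 4)
assert P[1, 0] == 0.2 and P[3, 0] == 0.05 and P[3, 2] == 0.15
```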


The second new preconditioner is based on the preconditioners in [6,9,17,18]. Zhou et al. [18] also proposed the preconditioner

$$\begin{pmatrix}I_p+S_2&0\\ 0&I_q\end{pmatrix},\qquad\text{where}\quad S_2=\begin{pmatrix}0&0&\cdots&0\\ b_{21}&0&\cdots&0\\ \vdots&\vdots&&\vdots\\ b_{p1}&0&\cdots&0\end{pmatrix}.$$

In 2012, Shen et al. [9] parameterized this preconditioner and considered the preconditioner

$$P_2=\begin{pmatrix}I_p+S_2(\alpha)&0\\ 0&I_q\end{pmatrix},$$

where, for $\alpha_i>0$ $(i=2,\ldots,p)$,

$$S_2(\alpha)=\begin{pmatrix}0&0&\cdots&0\\ \alpha_2b_{21}&0&\cdots&0\\ \vdots&\vdots&&\vdots\\ \alpha_pb_{p1}&0&\cdots&0\end{pmatrix}.$$

The preconditioned GAOR method corresponding to the preconditioner $P_2$, for solving the linear system (1), is defined as [9]

$$y_{k+1}=T_{\gamma\omega 2}y_k+\omega g_2,\qquad k=0,1,2,\ldots, \tag{15}$$

where

$$T_{\gamma\omega 2}=\begin{pmatrix}(1-\omega)I_p+\omega B_2^*&-\omega U_2^*\\ \omega(\gamma-1)L-\omega\gamma LB_2^*&(1-\omega)I_q+\omega C+\omega\gamma LU_2^*\end{pmatrix} \tag{16}$$

is the iteration matrix with $B_2^*=B-S_2(\alpha)(I_p-B)$ and $U_2^*=(I_p+S_2(\alpha))U$, and $g_2=\begin{pmatrix}I_p&0\\ -\gamma L&I_q\end{pmatrix}P_2f$ is the known vector. In 2013, Zhao et al. [17] proposed the preconditioner $\widetilde P_2$ of the form

$$\widetilde P_2=\begin{pmatrix}I_p+S_2(\alpha)&0\\ 0&I_q+V_2(\tau)\end{pmatrix},$$

where $S_2(\alpha)$ is defined as above and, for $\tau_j>0$ $(j=2,\ldots,q)$,

$$V_2(\tau)=\begin{pmatrix}0&0&\cdots&0\\ \tau_2c_{21}&0&\cdots&0\\ \vdots&\vdots&&\vdots\\ \tau_qc_{q1}&0&\cdots&0\end{pmatrix}.$$

The corresponding preconditioned GAOR method [17] is

$$y_{k+1}=\widetilde T_{\gamma\omega 2}y_k+\omega\widetilde g_2,\qquad k=0,1,2,\ldots, \tag{17}$$

where

$$\widetilde T_{\gamma\omega 2}=\begin{pmatrix}(1-\omega)I_p+\omega B_2^*&-\omega U_2^*\\ \omega(\gamma-1)L_2^*-\omega\gamma L_2^*B_2^*&(1-\omega)I_q+\omega C_2^*+\omega\gamma L_2^*U_2^*\end{pmatrix} \tag{18}$$

is the iteration matrix with $L_2^*=(I_q+V_2(\tau))L$, $C_2^*=C-V_2(\tau)(I_q-C)$, and $\widetilde g_2=\begin{pmatrix}I_p&0\\ -\gamma L_2^*&I_q\end{pmatrix}\widetilde P_2f$. Zhao et al. [17] gave the following comparison result between the preconditioned GAOR methods defined by (15) and (17).

Lemma 3.3 ([17, Theorem 2.3]). Let $T_{\gamma\omega 2}$ and $\widetilde T_{\gamma\omega 2}$ be the iteration matrices defined by (16) and (18), respectively. Assume that the matrix $H$ in (1) is irreducible with $L\le 0$, $U\le 0$, $B\ge 0$, $C\ge 0$, $b_{i1}>0$ for some $i\in\{2,\ldots,p\}$, $0<\max_{2\le i\le p}\alpha_i<\frac{1}{1-b_{11}}$ whenever $0\le b_{11}<1$ or $\alpha_i>0$ whenever $b_{11}\ge 1$, $c_{j1}>0$ for some $j\in\{2,\ldots,q\}$, and $0<\max_{2\le j\le q}\tau_j<\frac{1}{1-c_{11}}$ whenever $0\le c_{11}<1$ or $\tau_j>0$ whenever $c_{11}\ge 1$. If $0\le\gamma<1$ and $0<\omega\le 1$, then

$$\rho(\widetilde T_{\gamma\omega 2})<\rho(T_{\gamma\omega 2})\quad\text{if}\quad\rho(T_{\gamma\omega 2})<1.$$

In 2012, Shen et al. [9] also proposed another preconditioner of the form



$$\bar P_2=\begin{pmatrix}I_p&0\\ K_2(\beta)&I_q\end{pmatrix},$$


where, for $\beta_s>0$ $(s=1,\ldots,q)$,

$$K_2(\beta)=\begin{pmatrix}-\beta_1l_{11}&0&\cdots&0\\ -\beta_2l_{21}&0&\cdots&0\\ \vdots&\vdots&&\vdots\\ -\beta_ql_{q1}&0&\cdots&0\end{pmatrix},$$

and considered the preconditioned GAOR method [9]

$$y_{k+1}=\bar T_{\gamma\omega 2}y_k+\omega\bar g_2,\qquad k=0,1,2,\ldots, \tag{19}$$

with the iteration matrix

$$\bar T_{\gamma\omega 2}=\begin{pmatrix}(1-\omega)I_p+\omega B&-\omega U\\ \omega(\gamma-1)\bar L_2-\omega\gamma\bar L_2B&(1-\omega)I_q+\omega\bar C_2+\omega\gamma\bar L_2U\end{pmatrix} \tag{20}$$

and the corresponding known vector $\bar g_2=\begin{pmatrix}I_p&0\\ -\gamma\bar L_2&I_q\end{pmatrix}\bar P_2f$, where $\bar L_2=L+K_2(\beta)(I_p-B)$ and $\bar C_2=C-K_2(\beta)U$. Based on the

preconditioners $P_2$ and $\bar P_2$, Huang et al. [6] studied the following preconditioner



$$\widehat P_2=\begin{pmatrix}I_p+S_2(\alpha)&0\\ K_2(\beta)&I_q\end{pmatrix}$$



and the corresponding preconditioned GAOR method

$$y_{k+1}=\widehat T_{\gamma\omega 2}y_k+\omega\widehat g_2,\qquad k=0,1,2,\ldots, \tag{21}$$

with the iteration matrix

$$\widehat T_{\gamma\omega 2}=\begin{pmatrix}(1-\omega)I_p+\omega B_2^*&-\omega U_2^*\\ \omega(\gamma-1)\bar L_2-\omega\gamma\bar L_2B_2^*&(1-\omega)I_q+\omega\bar C_2+\omega\gamma\bar L_2U_2^*\end{pmatrix} \tag{22}$$

and the corresponding known vector $\widehat g_2=\begin{pmatrix}I_p&0\\ -\gamma\bar L_2&I_q\end{pmatrix}\widehat P_2f$. Huang et al. [6] presented the following comparison theorem.

Lemma 3.4 ([6, Theorem 3]). Let $\bar T_{\gamma\omega 2}$ and $\widehat T_{\gamma\omega 2}$ be the iteration matrices defined by (20) and (22), respectively. Assume that the matrix $H$ in (1) is irreducible with $L\le 0$, $U\le 0$, $B\ge 0$, $C\ge 0$, $b_{i1}>0$ for some $i\in\{2,\ldots,p\}$, $l_{s1}<0$ for some $s\in\{1,\ldots,q\}$, and $0<\max_{2\le i\le p}\alpha_i<\frac{1}{1-b_{11}}$ and $0<\max_{1\le s\le q}\beta_s<\frac{1}{1-b_{11}}$ whenever $0\le b_{11}<1$, or $\alpha_i>0$ and $\beta_s>0$ whenever $b_{11}\ge 1$. If $0\le\gamma<1$ and $0<\omega\le 1$, then

$$\rho(\widehat T_{\gamma\omega 2})<\rho(\bar T_{\gamma\omega 2})\quad\text{if}\quad\rho(\bar T_{\gamma\omega 2})<1.$$

Following the idea of [7] and the works of [6,9,17,18], we propose the second new preconditioner

$$\widehat{\widetilde P}_2=\begin{pmatrix}I_p+S_2(\alpha)&0\\ K_2(\beta)&I_q+V_2(\tau)\end{pmatrix}, \tag{23}$$

where $S_2(\alpha)$, $K_2(\beta)$ and $V_2(\tau)$ are defined as above. Note that the proposed preconditioners (14) and (23) have the unified form

$$\widehat{\widetilde P}_i=\begin{pmatrix}I_p+S_i(\alpha)&0\\ K_i(\beta)&I_q+V_i(\tau)\end{pmatrix},\qquad i=1,2. \tag{24}$$

Let the preconditioned matrix $\widehat{\widetilde P}_iH$ be expressed as

$$\widehat{\widetilde P}_iH=\begin{pmatrix}I_p-\widetilde B_i&\widetilde U_i\\ \widetilde L_i&I_q-\widetilde C_i\end{pmatrix},\qquad i=1,2,$$

where $\widetilde B_i=B-S_i(\alpha)(I_p-B)$, $\widetilde U_i=(I_p+S_i(\alpha))U$, $\widetilde L_i=(I_q+V_i(\tau))L+K_i(\beta)(I_p-B)$ and $\widetilde C_i=C-V_i(\tau)(I_q-C)-K_i(\beta)U$ for $i=1,2$. Then, applying the GAOR method to the preconditioned linear system (4) with the preconditioners $\widehat{\widetilde P}_i$, we obtain a class of new preconditioned GAOR methods: for $i=1,2$,

$$y_{k+1}=\widehat{\widetilde T}_{\gamma\omega i}y_k+\omega\widehat{\widetilde g}_i,\qquad k=0,1,2,\ldots, \tag{25}$$

where

$$\widehat{\widetilde T}_{\gamma\omega i}=\begin{pmatrix}(1-\omega)I_p+\omega\widetilde B_i&-\omega\widetilde U_i\\ \omega(\gamma-1)\widetilde L_i-\omega\gamma\widetilde L_i\widetilde B_i&(1-\omega)I_q+\omega\widetilde C_i+\omega\gamma\widetilde L_i\widetilde U_i\end{pmatrix} \tag{26}$$

are the iteration matrices and $\widehat{\widetilde g}_i$ are the corresponding known vectors.
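Analogously to the first preconditioner, the second new preconditioner (23) uses the column-type matrices $S_2(\alpha)$, $K_2(\beta)$ and $V_2(\tau)$, and forming $\widehat{\widetilde P}_2H$ exposes the blocks that enter (26). A sketch under assumed sample data (helper names and parameter values are our own):

```python
import numpy as np

def new_preconditioner_2(B, C, L, alpha, beta, tau):
    """Assemble the preconditioner (23); alpha, beta, tau are parameter
    vectors multiplying the first columns of B, L and C respectively."""
    p, q = B.shape[0], C.shape[0]
    S2 = np.zeros((p, p)); S2[1:, 0] = alpha[1:] * B[1:, 0]   # (0, a2 b21, ..., ap bp1)
    K2 = np.zeros((q, p)); K2[:, 0] = -beta * L[:, 0]         # (-b1 l11, ..., -bq lq1)
    V2 = np.zeros((q, q)); V2[1:, 0] = tau[1:] * C[1:, 0]     # (0, t2 c21, ..., tq cq1)
    return np.block([[np.eye(p) + S2, np.zeros((p, q))],
                     [K2,             np.eye(q) + V2]])

p = q = 2
B = np.array([[0.10, 0.05], [0.20, 0.10]])
C = np.array([[0.10, 0.04], [0.15, 0.10]])
L = -np.full((q, p), 0.05); U = -np.full((p, q), 0.05)
H = np.block([[np.eye(p) - B, U], [L, np.eye(q) - C]])

P = new_preconditioner_2(B, C, L, alpha=np.full(p, 0.5),
                         beta=np.full(q, 0.5), tau=np.full(q, 0.5))
PH = P @ H   # block shape [[I - B~2, U~2], [L~2, I - C~2]] as in (26)

# Check the block identity I - B~2 = (I + S2)(I - B) numerically.
S2m = P[:p, :p] - np.eye(p)
assert np.allclose(PH[:p, :p], (np.eye(p) + S2m) @ (np.eye(p) - B))
assert P[1, 0] == 0.5 * 0.20 and P[2, 0] == 0.5 * 0.05 and P[3, 2] == 0.5 * 0.15
```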


4. Comparison results

In this section, to demonstrate the efficiency of the proposed preconditioners $\widehat{\widetilde P}_i$ in (24) (or in (14) and (23)) theoretically, three kinds of comparison results are established, each consisting of two comparison theorems. All comparison results concern the case in which the original GAOR method (2) is convergent, i.e., $\rho(T_{\gamma\omega})<1$.

Firstly, we compare the spectral radii of the iteration matrices of the preconditioned GAOR methods defined by (25) with that of the GAOR method defined by (2).

Theorem 4.1. Let $T_{\gamma\omega}$ and $\widehat{\widetilde T}_{\gamma\omega 1}$ be the iteration matrices defined by (3) and (26) with $i=1$, respectively. Assume that the matrix $H$ in (1) is irreducible with $L\le 0$, $U\le 0$, $B\ge 0$, $C\ge 0$, $b_{p1}>0$, $l_{q1}<0$, $c_{q1}>0$, $\alpha,\beta,\tau>0$, $\alpha>1-b_{11}$, $\beta>1-b_{11}$ and $\tau>1-c_{11}$. If $0\le\gamma<1$, $0<\omega\le 1$ and $\rho(T_{\gamma\omega})<1$, then

$$\rho(\widehat{\widetilde T}_{\gamma\omega 1})<\rho(T_{\gamma\omega})<1.$$

Proof. The iteration matrix $T_{\gamma\omega}$ in (3) can be rewritten as

$$T_{\gamma\omega}=\begin{pmatrix}(1-\omega)I_p+\omega B&-\omega U\\ -\omega(1-\gamma)L&(1-\omega)I_q+\omega C\end{pmatrix}+\omega\gamma\begin{pmatrix}0&0\\ -LB&LU\end{pmatrix}.$$

Thus $T_{\gamma\omega}$ is irreducible and nonnegative under the conditions of the theorem. Similarly, it can be shown that $\widehat{\widetilde T}_{\gamma\omega 1}$ is irreducible and nonnegative. By Lemma 2.1, for the nonnegative irreducible matrix $T_{\gamma\omega}$, there is a positive vector $x$ such that

$$T_{\gamma\omega}x=\lambda x, \tag{27}$$

where $\lambda=\rho(T_{\gamma\omega})$. Clearly $\lambda\ne 1$, for otherwise the matrix $H$ would be singular; since $\rho(T_{\gamma\omega})<1$ by assumption, we have $\lambda<1$. Note that



$$\omega Hx=\begin{pmatrix}I_p&0\\ \gamma L&I_q\end{pmatrix}(I_n-T_{\gamma\omega})x=(1-\lambda)\begin{pmatrix}I_p&0\\ \gamma L&I_q\end{pmatrix}x \tag{28}$$

holds. From (27) and (28), one obtains

$$\begin{aligned}\widehat{\widetilde T}_{\gamma\omega 1}x-\lambda x&=\bigl(\widehat{\widetilde T}_{\gamma\omega 1}-T_{\gamma\omega}\bigr)x\\ &=\begin{pmatrix}-S_1(\alpha)&0\\ \gamma V_1(\tau)L+\gamma K_1(\beta)(I_p-B)-K_1(\beta)+\gamma LS_1(\alpha)+\gamma V_1(\tau)LS_1(\alpha)+\gamma K_1(\beta)(I_p-B)S_1(\alpha)&-V_1(\tau)\end{pmatrix}\omega Hx\\ &=(1-\lambda)\begin{pmatrix}-S_1(\alpha)&0\\ \gamma K_1(\beta)(I_p-B)-K_1(\beta)+\gamma LS_1(\alpha)+\gamma V_1(\tau)LS_1(\alpha)+\gamma K_1(\beta)(I_p-B)S_1(\alpha)&-V_1(\tau)\end{pmatrix}x.\end{aligned}$$

Since $\alpha,\beta,\tau>0$, $b_{p1}>0$, $l_{q1}<0$ and $c_{q1}>0$, we know that $S_1(\alpha)$, $K_1(\beta)$ and $V_1(\tau)$ are nonnegative and nonzero matrices. Moreover, by the assumptions $0<\omega\le 1$ and $0\le\gamma<1$, we have



$$\begin{pmatrix}-S_1(\alpha)&0\\ \gamma K_1(\beta)(I_p-B)-K_1(\beta)+\gamma LS_1(\alpha)+\gamma V_1(\tau)LS_1(\alpha)+\gamma K_1(\beta)(I_p-B)S_1(\alpha)&-V_1(\tau)\end{pmatrix}x\le 0$$

and

$$\begin{pmatrix}-S_1(\alpha)&0\\ \gamma K_1(\beta)(I_p-B)-K_1(\beta)+\gamma LS_1(\alpha)+\gamma V_1(\tau)LS_1(\alpha)+\gamma K_1(\beta)(I_p-B)S_1(\alpha)&-V_1(\tau)\end{pmatrix}x\ne 0.$$

Since $\lambda<1$, it follows that $\widehat{\widetilde T}_{\gamma\omega 1}x-\lambda x\le 0$ and $\widehat{\widetilde T}_{\gamma\omega 1}x-\lambda x\ne 0$, so Lemma 2.2 implies that $\rho(\widehat{\widetilde T}_{\gamma\omega 1})<\rho(T_{\gamma\omega})<1$. □

Similarly, comparing the spectral radius of $T_{\gamma\omega}$ with that of $\widehat{\widetilde T}_{\gamma\omega 2}$, we have the following comparison theorem. The difference lies in the fact that some assumptions are changed so that $S_2(\alpha)$, $K_2(\beta)$ and $V_2(\tau)$ are nonnegative and nonzero and $\widehat{\widetilde T}_{\gamma\omega 2}$ is irreducible.

Theorem 4.2. Let $T_{\gamma\omega}$ and $\widehat{\widetilde T}_{\gamma\omega 2}$ be the iteration matrices defined by (3) and (26) with $i=2$, respectively. Assume that the matrix $H$ in (1) is irreducible with $L\le 0$, $U\le 0$, $B\ge 0$, $C\ge 0$, $b_{i1}>0$ for some $i\in\{2,\ldots,p\}$, $l_{s1}<0$ for some $s\in\{1,\ldots,q\}$, $0<\max_{2\le i\le p}\alpha_i<\frac{1}{1-b_{11}}$ and $0<\max_{1\le s\le q}\beta_s<\frac{1}{1-b_{11}}$ whenever $0\le b_{11}<1$, or $\alpha_i>0$ and $\beta_s>0$ whenever $b_{11}\ge 1$, $c_{j1}>0$ for some $j\in\{2,\ldots,q\}$, and $0<\max_{2\le j\le q}\tau_j<\frac{1}{1-c_{11}}$ whenever $0\le c_{11}<1$ or $\tau_j>0$ whenever $c_{11}\ge 1$. If $0\le\gamma<1$, $0<\omega\le 1$ and $\rho(T_{\gamma\omega})<1$, then

$$\rho(\widehat{\widetilde T}_{\gamma\omega 2})<\rho(T_{\gamma\omega})<1.$$


Remark 4.1. Theorems 4.1 and 4.2 illustrate that the preconditioners presented in (24) are efficient and that the proposed preconditioned GAOR methods are superior to the original GAOR method.

Secondly, for the spectral radii of the iteration matrices of the preconditioned GAOR methods defined by (25) and those of the preconditioned GAOR methods defined by (12) and (21) (which were studied in [5,6]), we have the following comparison results.

Theorem 4.3. Let $\widehat T_{\gamma\omega 1}$ and $\widehat{\widetilde T}_{\gamma\omega 1}$ be the iteration matrices defined by (13) and (26) with $i=1$, respectively. Assume that the matrix $H$ in (1) is irreducible with $L\le 0$, $U\le 0$, $B\ge 0$, $C\ge 0$, $\alpha>0$ and $\beta>0$ whenever $b_{11}\ge 1$, $\alpha>1-b_{11}$ and $\beta>1-b_{11}$ whenever $0\le b_{11}<1$, and $\tau>0$ whenever $c_{11}\ge 1$, $0<\tau<1-c_{11}$ whenever $0\le c_{11}<1$. If $0<\omega\le 1$, $0\le\gamma<1$ and $\rho(T_{\gamma\omega})<1$, then

$$\rho(\widehat{\widetilde T}_{\gamma\omega 1})<\rho(\widehat T_{\gamma\omega 1})<1.$$

Proof. It follows from [5, Theorem 3.11] that $\rho(\widehat T_{\gamma\omega 1})<1$ if $\rho(T_{\gamma\omega})<1$. Hence, it suffices to show that $\rho(\widehat{\widetilde T}_{\gamma\omega 1})<\rho(\widehat T_{\gamma\omega 1})$. From (13), we can see that the iteration matrix $\widehat T_{\gamma\omega 1}$ is irreducible and nonnegative under the conditions of the theorem. Similarly, it can be shown that $\widehat{\widetilde T}_{\gamma\omega 1}$ is irreducible and nonnegative. By Lemma 2.1, there is a positive vector $x$ for $\widehat T_{\gamma\omega 1}$ such that

$$\widehat T_{\gamma\omega 1}x=\eta x, \tag{29}$$

where $\eta\ne 1$ is the spectral radius of $\widehat T_{\gamma\omega 1}$. It is easy to see that the equality





$$\omega\widehat P_1Hx=\begin{pmatrix}I_p&0\\ \gamma\bar L_1&I_q\end{pmatrix}(I_n-\widehat T_{\gamma\omega 1})x=(1-\eta)\begin{pmatrix}I_p&0\\ \gamma\bar L_1&I_q\end{pmatrix}x \tag{30}$$

holds. It follows from (29) and (30) that

$$\begin{aligned}\widehat{\widetilde T}_{\gamma\omega 1}x-\eta x&=\bigl(\widehat{\widetilde T}_{\gamma\omega 1}-\widehat T_{\gamma\omega 1}\bigr)x\\ &=\begin{pmatrix}0&0\\ \omega(\gamma-1)V_1(\tau)L-\omega\gamma V_1(\tau)LB_1^*&-\omega V_1(\tau)(I_q-C)+\omega\gamma V_1(\tau)L(I_p+S_1(\alpha))U\end{pmatrix}x\\ &=\begin{pmatrix}0&0\\ \gamma V_1(\tau)L(I_p+S_1(\alpha))&-V_1(\tau)\end{pmatrix}\omega Hx\\ &=\begin{pmatrix}0&0\\ \gamma V_1(\tau)L(I_p+S_1(\alpha))&-V_1(\tau)\end{pmatrix}\widehat P_1^{-1}\,\omega\widehat P_1Hx\\ &=(1-\eta)\begin{pmatrix}0&0\\ \gamma V_1(\tau)L(I_p+S_1(\alpha))&-V_1(\tau)\end{pmatrix}\widehat P_1^{-1}\begin{pmatrix}I_p&0\\ \gamma\bar L_1&I_q\end{pmatrix}x\\ &=(1-\eta)\begin{pmatrix}0&0\\ V_1(\tau)K_1(\beta)(I_p+S_1(\alpha))^{-1}-\gamma V_1(\tau)K_1(\beta)(I_p-B)&-V_1(\tau)\end{pmatrix}x.\end{aligned}$$

By the assumptions, $V_1(\tau)K_1(\beta)=0$, and thus $V_1(\tau)K_1(\beta)(I_p+S_1(\alpha))^{-1}-\gamma V_1(\tau)K_1(\beta)(I_p-B)=0$. Moreover, since $0<\omega\le 1$, $0\le\gamma<1$, and $V_1(\tau)$ is a nonnegative nonzero matrix, we have



$$\begin{pmatrix}0&0\\ 0&-V_1(\tau)\end{pmatrix}x\le 0\qquad\text{and}\qquad\begin{pmatrix}0&0\\ 0&-V_1(\tau)\end{pmatrix}x\ne 0.$$

Since $\eta<1$, we have $\widehat{\widetilde T}_{\gamma\omega 1}x-\eta x\le 0$ and $\widehat{\widetilde T}_{\gamma\omega 1}x-\eta x\ne 0$; hence Lemma 2.2 implies that $\rho(\widehat{\widetilde T}_{\gamma\omega 1})<\rho(\widehat T_{\gamma\omega 1})$. □



Adopting a similar proof process, the following result can be obtained; we state the comparison result without proof.

Theorem 4.4. Let $\widehat T_{\gamma\omega 2}$ and $\widehat{\widetilde T}_{\gamma\omega 2}$ be the iteration matrices defined by (22) and (26) with $i=2$, respectively. Assume that the matrix $H$ in (1) is irreducible with $L\le 0$, $U\le 0$, $B\ge 0$, $C\ge 0$, $b_{i1}>0$ for some $i\in\{2,\ldots,p\}$, $l_{s1}<0$ for some $s\in\{1,\ldots,q\}$, $0<\max_{2\le i\le p}\alpha_i<\frac{1}{1-b_{11}}$ and $0<\max_{1\le s\le q}\beta_s<\frac{1}{1-b_{11}}$ whenever $0\le b_{11}<1$, or $\alpha_i>0$ and $\beta_s>0$ whenever $b_{11}\ge 1$, $c_{j1}>0$ for some $j\in\{2,\ldots,q\}$, and $0<\max_{2\le j\le q}\tau_j<\frac{1}{1-c_{11}}$ whenever $0\le c_{11}<1$ or $\tau_j>0$ whenever $c_{11}\ge 1$. If $0\le\gamma<1$, $0<\omega\le 1$ and $\rho(T_{\gamma\omega})<1$, then

$$\rho(\widehat{\widetilde T}_{\gamma\omega 2})<\rho(\widehat T_{\gamma\omega 2})<1.$$

Remark 4.2. From Theorem 4.3, we can observe that the proposed preconditioner $\widehat{\widetilde P}_1$ in (14) is more efficient than the preconditioner $\widehat P_1$ [5]. It follows from Lemma 3.2 that the preconditioner $\widehat P_1$ is more efficient than the preconditioner $\bar P_1$ [18]. Theorem 4.4 and Lemma 3.4 indicate that the proposed preconditioner $\widehat{\widetilde P}_2$ in (23) is better than the preconditioners $\widehat P_2$ [6] and $\bar P_2$ [9].

Finally, the comparison results on the convergence of the preconditioned GAOR methods defined by (25) relative to the preconditioned GAOR methods defined by (8) (studied in [16]) and (17) (studied in [17]) are given.

Theorem 4.5. Let $\widetilde T_{\gamma\omega 1}$ and $\widehat{\widetilde T}_{\gamma\omega 1}$ be the iteration matrices of the preconditioned GAOR methods (8) and (25) with $i=1$, respectively. Assume that the matrix $H$ in (1) is irreducible with $L\le 0$, $U\le 0$, $B\ge 0$, $C\ge 0$, $\alpha,\beta>0$, $0\le b_{11}<1$, $\tau>0$ whenever $c_{11}\ge 1$, and $0<\tau<1-c_{11}$ whenever $0\le c_{11}<1$. If $0\le\gamma<1$, $0<\omega\le 1$ and $\rho(T_{\gamma\omega})<1$, then

$$\rho(\widehat{\widetilde T}_{\gamma\omega 1})<\rho(\widetilde T_{\gamma\omega 1})<1.$$

Proof. It follows from [16, Theorem 3.1] that $\rho(\widetilde T_{\gamma\omega 1})<1$ if $\rho(T_{\gamma\omega})<1$. We only need to show $\rho(\widehat{\widetilde T}_{\gamma\omega 1})<\rho(\widetilde T_{\gamma\omega 1})$. By the assumptions, it is easy to show that $\widetilde T_{\gamma\omega 1}$ and $\widehat{\widetilde T}_{\gamma\omega 1}$ are irreducible and nonnegative. By Lemma 2.1, there is a positive vector $x$ for $\widetilde T_{\gamma\omega 1}$ such that

$$\widetilde T_{\gamma\omega 1}x=\mu x, \tag{31}$$

where $\mu=\rho(\widetilde T_{\gamma\omega 1})$ and $\mu\ne 1$. Moreover, it holds that

$$\omega\widetilde P_1Hx=\begin{pmatrix}I_p&0\\ \gamma L_1^*&I_q\end{pmatrix}(I_n-\widetilde T_{\gamma\omega 1})x=(1-\mu)\begin{pmatrix}I_p&0\\ \gamma L_1^*&I_q\end{pmatrix}x. \tag{32}$$

From (31) and (32), we can deduce that

$$\widehat{\widetilde T}_{\gamma\omega 1}x-\mu x=\bigl(\widehat{\widetilde T}_{\gamma\omega 1}-\widetilde T_{\gamma\omega 1}\bigr)x.$$

Since $\widetilde B_1=B_1^*$, $\widetilde U_1=U_1^*$, $\widetilde L_1=L_1^*+K_1(\beta)(I_p-B)$ and $\widetilde C_1=C_1^*-K_1(\beta)U$, comparing (26) with (9) shows that the first block row of $\widehat{\widetilde T}_{\gamma\omega 1}-\widetilde T_{\gamma\omega 1}$ vanishes, while its second block row equals

$$\bigl(\,\omega(\gamma-1)K_1(\beta)(I_p-B)-\omega\gamma K_1(\beta)(I_p-B)B_1^*\,,\ -\omega K_1(\beta)U+\omega\gamma K_1(\beta)(I_p-B)U_1^*\,\bigr)$$
$$=K_1(\beta)\bigl(\,\omega(\gamma-1)(I_p-B)\,,\ -\omega U\,\bigr)-\gamma K_1(\beta)(I_p-B)\bigl(\,\omega B_1^*\,,\ -\omega U_1^*\,\bigr).$$

Writing $x=(x_1^T,x_2^T)^T$, the first block row of (32) gives $\omega B_1^*x_1-\omega U_1^*x_2=(\omega-1+\mu)x_1$ and, since $I_p-B_1^*=(I_p+S_1(\alpha))(I_p-B)$ and $U_1^*=(I_p+S_1(\alpha))U$, also $\omega(I_p-B)x_1+\omega Ux_2=(1-\mu)(I_p+S_1(\alpha))^{-1}x_1$. Substituting these two identities and using $(I_p+S_1(\alpha))^{-1}=I_p-S_1(\alpha)$, we obtain

$$\widehat{\widetilde T}_{\gamma\omega 1}x-\mu x=(1-\mu)\begin{pmatrix}0&0\\ \gamma K_1(\beta)(I_p-B)-K_1(\beta)&0\end{pmatrix}x+(1-\mu)\begin{pmatrix}0&0\\ K_1(\beta)S_1(\alpha)&0\end{pmatrix}x.$$

Noting that $K_1(\beta)S_1(\alpha)=0$, we have

$$\widehat{\widetilde T}_{\gamma\omega 1}x-\mu x=(1-\mu)\begin{pmatrix}0&0\\ \gamma K_1(\beta)(I_p-B)-K_1(\beta)&0\end{pmatrix}x.$$

By the assumptions, we can see that K1 (β ) and S1 (α ) are non-negative and nonzero matrices, hence



$$\begin{pmatrix}0&0\\ \gamma K_1(\beta)(I_p-B)-K_1(\beta)&0\end{pmatrix}x\le 0\qquad\text{and}\qquad\begin{pmatrix}0&0\\ \gamma K_1(\beta)(I_p-B)-K_1(\beta)&0\end{pmatrix}x\ne 0.$$

Since $\mu<1$, we have $\widehat{\widetilde T}_{\gamma\omega 1}x-\mu x\le 0$ and $\widehat{\widetilde T}_{\gamma\omega 1}x-\mu x\ne 0$, and Lemma 2.2 gives $\rho(\widehat{\widetilde T}_{\gamma\omega 1})<\rho(\widetilde T_{\gamma\omega 1})$. □



Moreover, $l_{11}=0$ leads to $K_2(\beta)S_2(\alpha)=0$. In a manner similar to that used for Theorem 4.5, we can derive the following comparison theorem for $\widetilde T_{\gamma\omega 2}$ and $\widehat{\widetilde T}_{\gamma\omega 2}$.

Theorem 4.6. Let $\widetilde T_{\gamma\omega 2}$ and $\widehat{\widetilde T}_{\gamma\omega 2}$ be the iteration matrices of the preconditioned GAOR methods (17) and (25) with $i=2$, respectively. Assume that the matrix $H$ in (1) is irreducible with $L\le 0$, $U\le 0$, $B\ge 0$, $C\ge 0$, $b_{i1}>0$ for some $i\in\{2,\ldots,p\}$, $l_{11}=0$ and $l_{s1}<0$ for some $s\in\{2,\ldots,q\}$, $0<\max_{2\le i\le p}\alpha_i<\frac{1}{1-b_{11}}$ and $0<\max_{1\le s\le q}\beta_s<\frac{1}{1-b_{11}}$ whenever $0\le b_{11}<1$, $c_{j1}>0$ for some $j\in\{2,\ldots,q\}$, and $0<\max_{2\le j\le q}\tau_j<\frac{1}{1-c_{11}}$ whenever $0\le c_{11}<1$ or $\tau_j>0$ whenever $c_{11}\ge 1$. If $0\le\gamma<1$, $0<\omega\le 1$ and $\rho(T_{\gamma\omega})<1$, then

$$\rho(\widehat{\widetilde T}_{\gamma\omega 2})<\rho(\widetilde T_{\gamma\omega 2})<1.$$

Remark 4.3. Theorems 4.5 and 4.6 state that the proposed preconditioners are more efficient than the ones in [16] and [17] under some mixed conditions.

5. Numerical example

In this section, an example with numerical experiments is given to illustrate the theoretical results provided in the present paper.

Example 5.1. The coefficient matrix $H$ in (1) is given by



$$H=\begin{pmatrix}I-B&U\\ L&I-C\end{pmatrix},$$

where $B=(b_{ij})\in\mathbb{R}^{p\times p}$, $C=(c_{ij})\in\mathbb{R}^{q\times q}$, $L=(l_{ij})\in\mathbb{R}^{q\times p}$, and $U=(u_{ij})\in\mathbb{R}^{p\times q}$ with

$$\begin{aligned}
b_{ii}&=\frac{1}{10(i+1)}, && 1\le i\le p,\\
b_{ij}&=\frac{1}{30}-\frac{1}{30j+i}, && 1\le i<j\le p,\\
b_{ij}&=\frac{1}{30}-\frac{1}{30(i-j+1)+i}, && 1\le j<i\le p,\\
c_{ii}&=\frac{1}{10(p+i+1)}, && 1\le i\le q,\\
c_{ij}&=\frac{1}{30}-\frac{1}{30(p+j)+p+i}, && 1\le i<j\le q,\\
c_{ij}&=\frac{1}{30}-\frac{1}{30(i-j+1)+p+i}, && 1\le j<i\le q,\\
l_{ij}&=\frac{1}{30(p+i-j+1)+p+i}-\frac{1}{30}, && 1\le i\le q,\ 1\le j\le p,\\
u_{ij}&=\frac{1}{30(p+j)+i}-\frac{1}{30}, && 1\le i\le p,\ 1\le j\le q.
\end{aligned}$$
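Under our reading of the formulas above, the blocks of Example 5.1 can be generated programmatically. The sketch below builds $H$ for one problem size and checks the ordering of Theorem 4.1 with the assumed parameter choice $\alpha=\beta=\tau=1$, which satisfies the hypotheses here since $b_{11}=1/20$ and $c_{11}=1/(10(p+2))$.

```python
import numpy as np

def example_blocks(p, q):
    """Blocks B, C, L, U of Example 5.1 (our reading of the formulas)."""
    B = np.empty((p, p)); C = np.empty((q, q))
    L = np.empty((q, p)); U = np.empty((p, q))
    for i in range(1, p + 1):
        for j in range(1, p + 1):
            if i == j:   B[i-1, j-1] = 1 / (10 * (i + 1))
            elif i < j:  B[i-1, j-1] = 1/30 - 1 / (30*j + i)
            else:        B[i-1, j-1] = 1/30 - 1 / (30*(i - j + 1) + i)
    for i in range(1, q + 1):
        for j in range(1, q + 1):
            if i == j:   C[i-1, j-1] = 1 / (10 * (p + i + 1))
            elif i < j:  C[i-1, j-1] = 1/30 - 1 / (30*(p + j) + p + i)
            else:        C[i-1, j-1] = 1/30 - 1 / (30*(i - j + 1) + p + i)
    for i in range(1, q + 1):
        for j in range(1, p + 1):
            L[i-1, j-1] = 1 / (30*(p + i - j + 1) + p + i) - 1/30
    for i in range(1, p + 1):
        for j in range(1, q + 1):
            U[i-1, j-1] = 1 / (30*(p + j) + i) - 1/30
    return B, C, L, U

def gaor_from_H(M, p, gamma, omega):
    """GAOR iteration matrix (3) for the splitting M = [[I-B, U], [L, I-C]]."""
    n = M.shape[0]; q = n - p
    B = np.eye(p) - M[:p, :p]; U = M[:p, p:]
    L = M[p:, :p];             C = np.eye(q) - M[p:, p:]
    return np.block([[(1 - omega)*np.eye(p) + omega*B, -omega*U],
                     [omega*(gamma - 1)*L - omega*gamma*L @ B,
                      (1 - omega)*np.eye(q) + omega*C + omega*gamma*L @ U]])

p, q, gamma, omega = 3, 2, 0.6, 0.8              # the n = 5 case of Table 1
B, C, L, U = example_blocks(p, q)
assert (B >= 0).all() and (C >= 0).all() and (L <= 0).all() and (U <= 0).all()
H = np.block([[np.eye(p) - B, U], [L, np.eye(q) - C]])

S1 = np.zeros((p, p)); S1[p-1, 0] = B[p-1, 0]    # alpha = 1
K1 = np.zeros((q, p)); K1[q-1, 0] = -L[q-1, 0]   # beta = 1
V1 = np.zeros((q, q)); V1[q-1, 0] = C[q-1, 0]    # tau = 1
P = np.block([[np.eye(p) + S1, np.zeros((p, q))], [K1, np.eye(q) + V1]])

rho = lambda T: max(abs(np.linalg.eigvals(T)))
r  = rho(gaor_from_H(H, p, gamma, omega))        # GAOR
rn = rho(gaor_from_H(P @ H, p, gamma, omega))    # new preconditioned GAOR
assert rn < r < 1                                # ordering of Theorem 4.1
```

Up to our reading of the entry formulas, the two radii should be close to the first column of Table 1 below.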

This classical example was first introduced in [18] and has been used to verify the efficiency of preconditioners for the GAOR method for solving the linear system (1) in many papers; see, for example, [5–9,12,16,17]. It is not difficult to verify that $H$ is irreducible. Table 1 displays the spectral radii of the corresponding iteration matrices for some randomly chosen $p$ and parameters $\alpha$, $\alpha_i$ $(i=2,\ldots,p)$, $\beta$, $\beta_s$ $(s=1,\ldots,q)$, $\tau$, $\tau_j$ $(j=2,\ldots,q)$, $\omega$ and $\gamma$, chosen so as to satisfy the conditions of Theorems 4.1–4.6. In Table 1, $\rho=\rho(T_{\gamma\omega})$, $\widehat\rho_i=\rho(\widehat T_{\gamma\omega i})$, $\widetilde\rho_i=\rho(\widetilde T_{\gamma\omega i})$


Table 1. Spectral radii of the GAOR and preconditioned GAOR iteration matrices.

n      5        10       15       20       25       30
p      3        5        8        10       12       16
γ      0.6      0.9      0.85     0.75     0.5      0.9
ω      0.8      0.7      0.95     0.95     0.8      0.6
ρ      0.2831   0.4344   0.3677   0.5275   0.7554   0.9122
ρ̂₁     0.2508   0.4271   0.3616   0.5240   0.7539   0.9118
ρ̂₂     0.2784   0.4316   0.3646   0.5253   0.7543   0.9119
ρ̃₁     0.2675   0.4262   0.3613   0.5239   0.7540   0.9118
ρ̃₂     0.2803   0.4315   0.3645   0.5253   0.7544   0.9119
ρ̂̃₁    0.2491   0.4226   0.3584   0.5221   0.7531   0.9115
ρ̂̃₂    0.2777   0.4302   0.3631   0.5243   0.7538   0.9117

i = ρ ( and ρ Tγ ωi ) for i = 1, 2. The smallest spectral radii of the the corresponding iteration matrices are marked in black characters for different problem size. 1 < ρ 1 < ρ , ρ 1 < ρ 1 < ρ , ρ 2 < ρ 2 < ρ and ρ 2 < ρ 2 < ρ hold. All results indicate that From Table 1, it can be seen that ρ  return better results than the preconditioner P  [16] and the preconditioner P [5], and the the proposed preconditioner P 1 1 1  return better results than the preconditioner P  [17] and the preconditioner P [6]. Numerical the proposed preconditioner P 2 2 2 results are in accordance with the theoretical results given in Theorems 4.1–4.6. It should be remarked that in Theorem 4.6, we require l11 = 0 and other conditions such that ρ ( Tγ ω2 ) < ρ ( Tγ ω2 ) < 1, however, from the numerical results, we can see that ρ ( Tγ ω2 ) < ρ ( Tγ ω2 ) < 1 holds even if l11 = 0. 6. Conclusions In this paper, two new preconditioners with the unified form

$$\widehat{\widetilde P}_i=\begin{pmatrix}I_p+S_i(\alpha)&0\\ K_i(\beta)&I_q+V_i(\tau)\end{pmatrix},\qquad i=1,2,$$

for the GAOR method are proposed, and the convergence rates of the new preconditioned GAOR methods for solving generalized least squares problems are studied. The comparison results given in Section 4 show that the convergence rates of the new preconditioned GAOR methods are better than those of the preconditioned GAOR methods in [5,6,9,16–18] whenever these methods are convergent. The numerical results given in Section 5 are consistent with the theoretical results given in Section 4.

For other choices of $S_i(\alpha)$ and $V_i(\tau)$ (see, for example, [6,9,11,17]), we can set a suitable $K_i(\beta)$, propose a preconditioner of this form, and study the corresponding preconditioned GAOR methods. We will report the related results in the near future.

Acknowledgments

Shu-Xin Miao was funded by the China Postdoctoral Science Foundation (No. 2017M613244) and the Science and Technology Program of Shandong Colleges (J16LI04).

References

[1] A. Berman, R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1979.
[2] K. Chen, Matrix Preconditioning Techniques and Applications, Cambridge University Press, Cambridge, 2005.
[3] M.T. Darvishi, P. Hessari, On convergence of the generalized AOR method for linear systems with diagonally dominant coefficient matrices, Appl. Math. Comput. 176 (2006) 128–133.
[4] A. Hadjidimos, Accelerated overrelaxation method, Math. Comput. 32 (1978) 149–157.
[5] Z.-G. Huang, Z. Xu, Q. Lu, J.-J. Cui, Some new preconditioned generalized AOR methods for generalized least-squares problems, Appl. Math. Comput. 269 (2015) 87–104.
[6] Z.-G. Huang, L.-G. Wang, Z. Xu, J.-J. Cui, Some new preconditioned generalized AOR methods for solving weighted linear least squares problems, Comp. Appl. Math. (2016), doi:10.1007/s40314-016-0350-8.
[7] S.-X. Miao, Some preconditioning techniques for solving linear systems, Lanzhou University, 2012.
[8] S.-X. Miao, On preconditioned GAOR methods for weighted linear least squares problems, J. Comput. Anal. Appl. 18 (2015) 371–382.
[9] H. Shen, X. Shao, T. Zhang, Preconditioned iterative methods for solving weighted linear least squares problems, Appl. Math. Mech. 33 (2012) 375–384.
[10] R.S. Varga, Matrix Iterative Analysis, Springer, Berlin, 2000.
[11] G. Wang, Y. Du, F. Tan, Comparison results on preconditioned GAOR methods for weighted linear least squares problems, J. Appl. Math. 2012 (2012) 1–9.
[12] G. Wang, T. Wang, F. Tan, Some results on preconditioned GAOR methods, Appl. Math. Comput. 219 (2013) 5811–5816.
[13] D.M. Young, Iterative Solution of Large Linear Systems, Academic Press, New York, 1971.
[14] J.Y. Yuan, Numerical methods for generalized least squares problems, J. Comput. Appl. Math. 66 (1996) 571–584.
[15] J.Y. Yuan, X.Q. Jin, Convergence of the generalized AOR method, Appl. Math. Comput. 99 (1999) 35–46.


[16] J.H. Yun, Comparison results on the preconditioned GAOR method for generalized least squares problems, Int. J. Comput. Math. 89 (2012) 2094–2105.
[17] J. Zhao, C. Li, F. Wang, Y. Li, Some new preconditioned generalized AOR methods for generalized least squares problems, Int. J. Comput. Math. 91 (2014) 1370–1381.
[18] X. Zhou, Y. Song, L. Wang, Q. Liu, Preconditioned GAOR methods for solving weighted linear least squares problems, J. Comput. Appl. Math. 224 (2009) 242–249.