
The nonlinear inexact Uzawa hybrid algorithms based on one-step Newton method for solving nonlinear saddle-point problems✩

Na Huang, Chang-Feng Ma∗

School of Mathematics and Computer Science, Fujian Normal University, Fuzhou 350007, China

MSC: 65H10; 65F10; 65N22

Keywords: Nonlinear saddle-point problem; Nonlinear inexact Uzawa algorithm; One-step Newton method; Preconditioning

Abstract. This paper considers the construction of efficient Uzawa-type algorithms for solving the nonlinear saddle-point problem (NSPP). Two nonlinear inexact Uzawa hybrid algorithms based on the one-step Newton method are proposed for solving NSPPs. Using the so-called energy norm, the convergence of the proposed algorithms is verified under some practical conditions. In addition, numerical results are reported to illustrate the behavior of the considered algorithms. © 2015 Elsevier Inc. All rights reserved.

1. Introduction

In this paper, we consider the following nonlinear saddle-point system

    F(x) + B y = f,
    B^T x = g,                                                              (1.1)

where B ∈ R^{n×m} has full column rank (m ≤ n) and F : R^n → R^n is a nonlinear vector-valued function, not necessarily differentiable. In particular, when F(x) = Ax with A an n × n symmetric positive definite matrix, system (1.1) reduces to the well-known linear saddle-point problem. The linear saddle-point problem has many applications, such as computational fluid dynamics [1–4], inversion of geophysical data [5], mixed or hybrid finite element approximations of second-order elliptic problems [6,7], stationary semiconductor devices [8,9], weighted and equality-constrained least squares estimation [10], and elasticity problems or the Stokes equations [6]. Iterative methods for solving the linear saddle-point problem include Uzawa-type schemes (see [11–15]), iterative projection methods (see [16]), block and approximate Schur complement preconditioners (see [17–21]), iterative null-space methods (see [22–24]), and splitting methods (see [25–30]). In recent years, a rapidly growing literature has been concerned with preconditioned iterative methods for solving the linear saddle-point problem (see [11,12,17,21,31–34]). In particular, inexact Uzawa-type algorithms have attracted wide attention (see [11,12,17,32,34] and the references

✩ The project was supported by the National Natural Science Foundation of China (Grant Nos. 11071041, 11201074), the Fujian Natural Science Foundation (Grant No. 2013J01006), the University Special Fund Project of Fujian (Grant No. JK2013060), and R&D of Key Instruments and Technologies for Deep Resources Prospecting (the National R&D Projects for Key Scientific Instruments) under Grant No. ZDYZ2012-1-02-04.
∗ Corresponding author. Tel.: +86 13763827962. E-mail address: [email protected] (C.-F. Ma).


therein), as inexact Uzawa-type algorithms have minimal memory requirements and are easy to implement. In [11], Bramble et al. proposed inexact Uzawa algorithms, in both linear and nonlinear form, for solving the linear saddle-point problem, and many researchers have since been devoted to this field. In 2001, Hu and Zou presented a new linear inexact Uzawa method by introducing two variable relaxation parameters, evaluated in a way similar to the two iteration parameters of the conjugate gradient method (see [35]). With this technique, Hu and Zou then devised nonlinear inexact Uzawa methods for iteratively solving linear saddle-point problems (see [36]). Some years later, they proposed an improved variant of Algorithm 4.1 of [36]. The new algorithm (Algorithm 2.2 in [37]) is primarily for the case that cond(\hat{K}^{-1}K) ≫ cond(\hat{A}^{-1}A), where K is the Schur complement matrix of the linear saddle-point system and \hat{K} and \hat{A} are the preconditioners of K and A, respectively.

The nonlinear saddle-point system (1.1) is also significant in practice: it arises frequently in nonlinear optimization (see [6,38,39]), electromagnetic Maxwell equations (see [40,41]), and augmented Lagrangian formulations of inverse problems (see [42]). However, there are relatively few studies of numerical algorithms for solving (1.1) (see [37,43,44]). In [43], Ciarlet presented an Uzawa algorithm for the nonlinear saddle-point system (1.1) as follows:

    F(x_{i+1}) = f - B y_i,
    y_{i+1} = y_i + \tau (B^T x_{i+1} - g),                                 (1.2)

where τ is a positive stepsize. Chen then derived a new inexact Uzawa method by solving the first equation of (1.2) approximately (see [44]); in [44] Chen also proposed a Newton–Uzawa hybrid method by combining this inexact Uzawa method with the Newton-type decomposition method of [45]. Hu and Zou considered a nonlinear inexact Uzawa algorithm for solving system (1.1) (see [37]). However, all the existing iterative methods need to solve the first equation of (1.2) exactly or approximately to some prescribed accuracy. In this paper, combining the Newton method with the nonlinear inexact Uzawa methods of [21,36,37], we propose two nonlinear inexact Uzawa hybrid algorithms based on the one-step Newton method for solving the nonlinear system (1.1), and we prove that the new methods converge to the solution of (1.1) by using the energy norm.

To this end, we introduce some notation. R^l denotes the usual l-dimensional Euclidean space and \|\cdot\| the usual 2-norm. For any l × l positive definite matrix G,

    \|x\|_G = \langle Gx, x \rangle^{1/2} = \|G^{1/2} x\| \quad \text{for all } x \in R^l.

For each element v ∈ R^n × R^m, we write v = {v_1, v_2}, where v_1 ∈ R^n and v_2 ∈ R^m, and the energy norm ||| · ||| is defined by (see [35–37])

    |||v|||^2 = \|v_1\|_{A^{-1}}^2 + \|v_2\|_K^2,

where A = A_x is the positive definite matrix defined in (2.1) and K = B^T A^{-1} B.

The rest of this paper is organized as follows. In Section 2, we propose two nonlinear inexact Uzawa hybrid algorithms based on the one-step Newton method for nonlinear saddle-point problems. In Section 3, the rate of convergence of the proposed algorithms is analyzed under some weak smoothness assumptions on the nonlinear function F(x). Some numerical experiments are presented in Section 4 to show the effectiveness of our methods. Finally, in Section 5, we draw a brief conclusion.

2. Preliminaries and algorithms

In this section, we present the new iterative methods, which we call Newton-nonlinear inexact Uzawa hybrid algorithms, for solving the nonlinear saddle-point problem (1.1). First, we recall some existing results from [37], [46] and [47], which will be used in the subsequent analysis. As is standard for nonlinear systems, we assume that F is locally Lipschitzian and strongly monotone with modulus μ, i.e.,

    \langle F(\xi) - F(\eta), \xi - \eta \rangle \ge \mu \|\xi - \eta\|^2, \quad \forall \xi, \eta \in R^n.

By Rademacher's theorem [46], F is differentiable almost everywhere. Let D_F be the set of points where F is differentiable and let ∇F(ξ) be the gradient of F at ξ ∈ D_F. Then the generalized Jacobian of F at ξ in the sense of Clarke (see [46]) is

    \partial F(\xi) = \mathrm{co}\, \partial_s F(\xi),

where co ∂_s F(ξ) is the convex hull of the set

    \partial_s F(\xi) = \Big\{ \lim_{\eta \to \xi,\ \eta \in D_F} \nabla F(\eta) \Big\}.

In this paper, we assume that all the generalized Jacobian matrices of F are symmetric. This assumption is practical; for example, the nonlinear function F(x) in the saddle-point problem arising from nonlinear optimization satisfies it (see [37]). As F is strongly monotone, all matrices from ∂F(ξ), for any ξ ∈ R^n, are positive definite (see [37,47]), and for any V ∈ ∂F(ξ) it holds that

    \langle V\eta, \eta \rangle \ge \mu \langle \eta, \eta \rangle.

By the above results, as in [37,44], we assume that F is semismooth on R^n in the sense that for any ξ ∈ R^n there is a positive definite matrix A_ξ such that

    \lim_{\Delta \to 0,\ A_\xi \in \partial_s F(\xi + \Delta)} \frac{\|F(\xi + \Delta) - F(\xi) - A_\xi \Delta\|}{\|\Delta\|} = 0.                 (2.1)


Let {x, y} ∈ R^n × R^m be the exact solution of system (1.1), and let A_x be defined by (2.1) at x. For the sake of simplicity, we write A_x as A below. Then by [37] we have the following properties of F:

Property 2.1 (see [37]).

a. For any ξ ∈ R^n there is a positive definite matrix A_ξ such that

    \lim_{\Delta \to 0,\ A_\xi \in \partial_s F(\xi + \Delta)} \frac{\|F(\xi + \Delta) - F(\xi) - A_\xi \Delta\|_{A^{-1}}}{\|\Delta\|_A} = 0;        (2.2)

b. There exists a positive constant c such that

    \sqrt{c}\,\|\xi - \eta\|_A^2 \le \langle F(\xi) - F(\eta), \xi - \eta \rangle, \quad \forall \xi, \eta \in R^n.        (2.3)

Evidently, (2.2) implies that for any given constant ω ∈ (0, 1) there is a positive number r_ω such that

    \|F(x + \Delta) - F(x) - A\Delta\|_{A^{-1}} \le \omega\,\|\Delta\|_A, \quad \forall \Delta \in S_A(0, r_\omega).        (2.4)

Finally, we assume a Lipschitz-type property of the generalized Jacobian of F: there exists a positive constant L such that for any two vectors ξ, η ∈ R^n,

    \|A^{-1/2}(V_\xi - V_\eta)A^{-1/2}\| \le L\,\|\xi - \eta\|_A, \quad \forall V_\xi \in \partial F(\xi),\ V_\eta \in \partial F(\eta).        (2.5)

This assumption is the same as that in [37].

Now we discuss the algorithms for solving the nonlinear saddle-point problem (1.1). To this end, we introduce some notation: let A_i = A_{x_i} be the positive definite matrix defined in (2.1) at x_i, and let \hat{A}_i be a positive definite preconditioner of A_i. As in the linear saddle-point case, we can form an exact Schur complement at x_i and its approximation:

    K_i = B^T A_i^{-1} B, \qquad H_i = B^T \hat{A}_i^{-1} B.

Let \hat{K}_i be a preconditioner for K_i. As in the literature on nonlinear inexact Uzawa algorithms, we introduce two nonlinear mappings. One is \mathcal{A}_{i+1} : R^n \to R^n satisfying

    \|\mathcal{A}_{i+1}(B d_i) - A_{i+1}^{-1} B d_i\|_{A_{i+1}} \le \gamma\,\|A_{i+1}^{-1} B d_i\|_{A_{i+1}}        (2.6)

for some γ ∈ [0, 1). The other is a nonlinear mapping \mathcal{H}_i : R^m \to R^m satisfying

    \|\mathcal{H}_i(g_i) - H_i^{-1} g_i\|_{H_i} \le \delta_g\,\|H_i^{-1} g_i\|_{H_i}        (2.7)

for some δ_g ∈ [0, 1). Now we are in a position to propose the following algorithms for solving the nonlinear saddle-point system (1.1).

Algorithm 2.1. Given {x_0, y_0} ∈ R^n × R^m, the sequence {x_i, y_i} ∈ R^n × R^m is defined for i = 1, 2, ... as follows:

Step 1. Compute f_i = f - F(x_i) - B y_i and

    x_{i+1} = x_i + A_i^{-1} f_i.        (2.8)

Step 2. Compute g_i = B^T x_{i+1} - g and d_i = \hat{K}_i^{-1} g_i. Then compute the relaxation parameter

    \tau_i = \begin{cases} \dfrac{\langle g_i, d_i \rangle}{\langle \mathcal{A}_{i+1}(B d_i), B d_i \rangle}, & g_i \ne 0, \\ 0, & g_i = 0. \end{cases}        (2.9)

Update

    y_{i+1} = y_i + \theta_i \tau_i d_i,        (2.10)

where θ_i > 0 is a given damping factor.

Algorithm 2.2. Given {x_0, y_0} ∈ R^n × R^m, the sequence {x_i, y_i} ∈ R^n × R^m is defined for i = 1, 2, ... as follows:

Step 1. Compute f_i = f - F(x_i) - B y_i and

    x_{i+1} = x_i + A_i^{-1} f_i.        (2.11)

Step 2. Compute g_i = B^T x_{i+1} - g and d_i = \mathcal{H}_i(g_i). Then compute the relaxation parameter

    \tau_i = \begin{cases} \dfrac{\langle g_i, d_i \rangle}{\langle \mathcal{A}_{i+1}(B d_i), B d_i \rangle}, & g_i \ne 0, \\ 0, & g_i = 0. \end{cases}        (2.12)

Update

    y_{i+1} = y_i + \theta_i \tau_i d_i,        (2.13)

where θ_i > 0 is a given damping factor.
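For concreteness, the following Python/NumPy sketch shows one possible realization of Algorithm 2.1. It is a minimal illustration under stated assumptions, not the authors' code: the callables F, grad_F (returning A_i = A_{x_i}) and the preconditioner solve Khat_solve are hypothetical names, the Newton step (2.8) is carried out with a direct solver, and the mapping \mathcal{A}_{i+1}(\cdot) in (2.9) is replaced by the exact solve A_{i+1}^{-1}(\cdot), which corresponds to γ = 0 in (2.6).

```python
import numpy as np

def newton_uzawa(F, grad_F, B, f, g, x0, y0, Khat_solve,
                 theta=0.9, tol=1e-5, maxit=500):
    """Sketch of Algorithm 2.1 with exact inner solves (gamma = 0 in (2.6))."""
    x, y = x0.astype(float).copy(), y0.astype(float).copy()
    denom = np.sqrt(np.linalg.norm(f) ** 2 + np.linalg.norm(g) ** 2)
    for it in range(1, maxit + 1):
        # Step 1: one Newton step on F(x) + B y = f at the current iterate (2.8).
        fi = f - F(x) - B @ y
        x = x + np.linalg.solve(grad_F(x), fi)          # x_{i+1} = x_i + A_i^{-1} f_i
        # Step 2: correct y along the preconditioned residual direction.
        gi = B.T @ x - g
        di = Khat_solve(gi)                             # d_i = Khat_i^{-1} g_i
        if np.linalg.norm(gi) > 0:
            Bd = B @ di
            tau = (gi @ di) / (np.linalg.solve(grad_F(x), Bd) @ Bd)   # (2.9)
        else:
            tau = 0.0
        y = y + theta * tau * di                        # (2.10), damping factor theta_i
        # Relative residual of (1.1); this is the stopping test used in Section 4.
        eps = np.sqrt(np.linalg.norm(f - F(x) - B @ y) ** 2
                      + np.linalg.norm(g - B.T @ x) ** 2) / denom
        if eps <= tol:
            break
    return x, y, it, eps
```

Algorithm 2.2 differs only in Step 2, where d_i = \mathcal{H}_i(g_i) is obtained from an (approximate) solve with H_i = B^T \hat{A}_i^{-1} B in place of the \hat{K}_i-solve.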


3. Convergence analysis

In this section we establish the convergence theorems for Algorithms 2.1 and 2.2. For this purpose we need a few auxiliary lemmas. For any fixed constant α ∈ (0, 1), we introduce three error vectors:

    e_i^x = x - x_i, \qquad e_i^y = y - y_i, \qquad E_i = \{\sqrt{\alpha}\,f_i,\ e_i^y\}, \qquad i = 1, 2, \ldots.
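Norms of the form \|v\|_G and the energy norm |||·||| recur throughout the estimates below. As a small reading aid (our own sketch, not part of the original paper, assuming dense matrices), they can be evaluated as follows:

```python
import numpy as np

def norm_G(G, v):
    # ||v||_G = <G v, v>^{1/2} = ||G^{1/2} v|| for symmetric positive definite G.
    return np.sqrt(v @ (G @ v))

def triple_norm(A, B, fi, eyi, alpha):
    # |||E_i|||^2 = ||sqrt(alpha) f_i||_{A^{-1}}^2 + ||e_i^y||_K^2,
    # with K = B^T A^{-1} B and E_i = {sqrt(alpha) f_i, e_i^y}.
    n1 = alpha * (fi @ np.linalg.solve(A, fi))      # alpha * ||f_i||_{A^{-1}}^2
    Be = B @ eyi
    n2 = Be @ np.linalg.solve(A, Be)                # ||e_i^y||_K^2 = <B^T A^{-1} B e, e>
    return np.sqrt(n1 + n2)
```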

Although our algorithms are different from those in [37], the following lemma can be proven in exactly the same way as Lemma 3.6 of [37] under assumption (2.5); hence we state it without proof.

Lemma 3.1 (see [37]). For any given positive ε < 1 and any x_i ∈ S_A(x, ε/(2L)), we have

    \|(A - A_i)h\|_{A^{-1}} \le \varepsilon\,\|h\|_A, \quad \forall h \in R^n.        (3.1)

Consequently, all the eigenvalues of the matrix A^{-1}A_i lie in the interval [1 − ε, 1 + ε], and

    \|h\|_A \le \frac{\|h\|_{A_i}}{\sqrt{1-\varepsilon}}, \quad \forall h \in R^n.        (3.2)

The next lemma is significant for the convergence analysis.

Lemma 3.2. For any fixed constant α ∈ (0, 1), the following inequality holds:

    \|x - x_i\|_A \le \sqrt{\frac{1+\alpha}{\alpha c}}\,|||E_i|||.        (3.3)

Let ω ∈ (0, 1) be a given parameter, and let r_ω < 1/(2L) be the maximal positive number such that (2.4) is satisfied. If

    |||E_i||| \le \sqrt{\frac{\alpha c}{1+\alpha}}\,r_\omega        (3.4)

holds, then

    \|x - x_{i+1}\|_A \le \Big[\frac{(1+\theta)[(2Lr_\omega + \omega)^2 + \theta]}{\alpha c(1 - 2Lr_\omega)^2}\Big]^{1/2}|||E_i|||,        (3.5)

where

    \theta = \frac{1}{2}\Big(\sqrt{(1+\alpha)^2 + 4\alpha c} + \alpha - 1\Big) > 0.

Proof. By (2.3) and the Cauchy–Schwarz inequality, we derive

    \sqrt{c}\,\|x - x_i\|_A^2 \le \langle F(x) - F(x_i), x - x_i \rangle \le \|F(x) - F(x_i)\|_{A^{-1}}\,\|x - x_i\|_A,

which shows that \sqrt{c}\,\|x - x_i\|_A \le \|F(x) - F(x_i)\|_{A^{-1}}. This, together with the fact that

    f_i = f - F(x_i) - B y_i = F(x) + B y - F(x_i) - B y_i = F(x) - F(x_i) + B e_i^y,        (3.6)

yields

    c\|x - x_i\|_A^2 \le \|F(x) - F(x_i)\|_{A^{-1}}^2 = \|f_i - B e_i^y\|_{A^{-1}}^2
      = \|f_i\|_{A^{-1}}^2 + \|e_i^y\|_K^2 - 2\langle A^{-1/2} f_i, A^{-1/2} B e_i^y \rangle
      \le \|f_i\|_{A^{-1}}^2 + \|e_i^y\|_K^2 + 2\|f_i\|_{A^{-1}}\|e_i^y\|_K
      \le (1+\alpha)\|f_i\|_{A^{-1}}^2 + \Big(1 + \frac{1}{\alpha}\Big)\|e_i^y\|_K^2
      = \frac{1+\alpha}{\alpha}\big(\|\sqrt{\alpha} f_i\|_{A^{-1}}^2 + \|e_i^y\|_K^2\big)
      = \frac{1+\alpha}{\alpha}\,|||E_i|||^2,        (3.7)

where the second equality holds because

    \|B e_i^y\|_{A^{-1}} = \|A^{-1/2} B e_i^y\| = \langle B^T A^{-1} B e_i^y, e_i^y \rangle^{1/2} = \|e_i^y\|_K,

and the second inequality is the elementary bound 2ab ≤ α a^2 + b^2/α. Thus we have the first result.

It follows from (2.8) and (3.6) that

    \|x - x_{i+1}\|_A = \|x - x_i - A_i^{-1} f_i\|_A
      = \|A_i^{-1}[A_i(x - x_i) - F(x) + F(x_i) - B e_i^y]\|_A
      \le \|A^{1/2} A_i^{-1} A^{1/2}\|\,\|A^{-1/2}[A_i(x - x_i) - F(x) + F(x_i) - B e_i^y]\|
      \le \|A^{1/2} A_i^{-1} A^{1/2}\|\,\big[\|(A_i - A)(x - x_i)\|_{A^{-1}} + \|A(x - x_i) - F(x) + F(x_i)\|_{A^{-1}} + \|B e_i^y\|_{A^{-1}}\big].

Using (3.3) we obtain

    \|x - x_i\|_A \le \sqrt{\frac{1+\alpha}{\alpha c}}\,|||E_i||| \le r_\omega.

This, along with (2.4) and Lemma 3.1, leads to

    \|F(x_i) - F(x) - A(x_i - x)\|_{A^{-1}} \le \omega\,\|x - x_i\|_A \qquad and \qquad \|(A_i - A)(x - x_i)\|_{A^{-1}} \le 2Lr_\omega\,\|x - x_i\|_A.

From Lemma 3.1 we know that all the eigenvalues of the matrix A^{-1/2} A_i A^{-1/2} lie in the interval [1 − 2Lr_ω, 1 + 2Lr_ω]; it is obvious that \|A^{1/2} A_i^{-1} A^{1/2}\| \le 1/(1 - 2Lr_\omega). Thus we have

    \|x - x_{i+1}\|_A \le \frac{1}{1 - 2Lr_\omega}\big[(2Lr_\omega + \omega)\|x - x_i\|_A + \|e_i^y\|_K\big].        (3.8)

For convenience, set a = 2Lr_ω + ω. For any θ > 0, it follows from the second inequality in (3.7) that

    (a\|x - x_i\|_A + \|e_i^y\|_K)^2
      = a^2\|x - x_i\|_A^2 + \|e_i^y\|_K^2 + 2a\|x - x_i\|_A\|e_i^y\|_K
      \le \frac{a^2 + \theta}{c}\big(\|f_i\|_{A^{-1}}^2 + \|e_i^y\|_K^2 + 2\|f_i\|_{A^{-1}}\|e_i^y\|_K\big) + \Big(1 + \frac{a^2}{\theta}\Big)\|e_i^y\|_K^2
      = \frac{a^2 + \theta}{c}\|f_i\|_{A^{-1}}^2 + \Big(1 + \frac{a^2 + \theta}{c} + \frac{a^2}{\theta}\Big)\|e_i^y\|_K^2 + \frac{2(a^2 + \theta)}{c}\|f_i\|_{A^{-1}}\|e_i^y\|_K
      \le \frac{a^2 + \theta}{c}\|f_i\|_{A^{-1}}^2 + \Big(1 + \frac{a^2 + \theta}{c} + \frac{a^2}{\theta}\Big)\|e_i^y\|_K^2 + \frac{a^2 + \theta}{c}\Big(\theta\|f_i\|_{A^{-1}}^2 + \frac{1}{\theta}\|e_i^y\|_K^2\Big)
      = \frac{(a^2 + \theta)(1+\theta)}{c}\|f_i\|_{A^{-1}}^2 + \Big(1 + \frac{a^2 + \theta}{c} + \frac{a^2}{\theta} + \frac{a^2 + \theta}{\theta c}\Big)\|e_i^y\|_K^2
      = \frac{(a^2 + \theta)(1+\theta)}{\alpha c}\big(\|\sqrt{\alpha} f_i\|_{A^{-1}}^2 + \|e_i^y\|_K^2\big),

where in the second step we used 2a\|x - x_i\|_A\|e_i^y\|_K \le \theta\|x - x_i\|_A^2 + (a^2/\theta)\|e_i^y\|_K^2 together with c\|x - x_i\|_A^2 \le \|f_i\|_{A^{-1}}^2 + \|e_i^y\|_K^2 + 2\|f_i\|_{A^{-1}}\|e_i^y\|_K, and in the last step θ is chosen so that

    \frac{1+\theta}{\alpha c} = \frac{1}{c} + \frac{1}{\theta} + \frac{1}{\theta c}.

By direct computation, this choice gives

    \theta = \frac{1}{2}\Big(\sqrt{(1+\alpha)^2 + 4\alpha c} + \alpha - 1\Big) > 0,

which is exactly the constant appearing in (3.5).
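For the reader's convenience, the elementary computation behind this value of θ is the following. Clearing denominators in the defining relation gives a quadratic in θ:

    \frac{1+\theta}{\alpha c} = \frac{\theta + c + 1}{\theta c}
    \;\Longleftrightarrow\;
    \theta(1+\theta) = \alpha(\theta + c + 1)
    \;\Longleftrightarrow\;
    \theta^2 + (1-\alpha)\theta - \alpha(c+1) = 0,

whose positive root is

    \theta = \frac{(\alpha - 1) + \sqrt{(1-\alpha)^2 + 4\alpha(c+1)}}{2}
           = \frac{1}{2}\Big(\sqrt{(1+\alpha)^2 + 4\alpha c} + \alpha - 1\Big),

since (1-\alpha)^2 + 4\alpha(c+1) = (1+\alpha)^2 + 4\alpha c.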


Then we have

    (a\|x - x_i\|_A + \|e_i^y\|_K)^2 \le \frac{(a^2 + \theta)(1+\theta)}{\alpha c}\big(\|\sqrt{\alpha} f_i\|_{A^{-1}}^2 + \|e_i^y\|_K^2\big) = \frac{(a^2 + \theta)(1+\theta)}{\alpha c}\,|||E_i|||^2,

or, in other words,

    a\|x - x_i\|_A + \|e_i^y\|_K \le \Big[\frac{(a^2 + \theta)(1+\theta)}{\alpha c}\Big]^{1/2}|||E_i|||.

This together with (3.8) yields

    \|x - x_{i+1}\|_A \le \frac{1}{1 - 2Lr_\omega}\big[(2Lr_\omega + \omega)\|x - x_i\|_A + \|e_i^y\|_K\big] \le \Big[\frac{(1+\theta)[(2Lr_\omega + \omega)^2 + \theta]}{\alpha c(1 - 2Lr_\omega)^2}\Big]^{1/2}|||E_i|||.

This completes the proof. □

Lemma 3.3. Let ω ∈ (0, 1) be a given parameter, γ < 1/2 be a fixed positive constant such that (2.6) holds, and r_ω < (1 − 2γ)/(6L) be the maximal positive number such that (2.4) is satisfied. For any fixed constant α ∈ (0, 1), if

    |||E_i||| \le \min\Big\{\Big(\frac{\alpha c}{1+\alpha}\Big)^{1/2},\ \Big[\frac{\alpha c(1 - 2Lr_\omega)^2}{(1+\theta)[(2Lr_\omega + \omega)^2 + \theta]}\Big]^{1/2}\Big\}\,r_\omega

holds, then

    \|\mathcal{A}_{i+1}(B d_i) - A^{-1} B d_i\|_A < \frac{1}{2}\,\|A^{-1} B d_i\|_A.

Proof. Evidently, we have

    \|\mathcal{A}_{i+1}(B d_i) - A^{-1} B d_i\|_A \le \|\mathcal{A}_{i+1}(B d_i) - A_{i+1}^{-1} B d_i\|_A + \|A_{i+1}^{-1} B d_i - A^{-1} B d_i\|_A.        (3.9)

By (3.4) and (3.5), we get

    \|x - x_{i+1}\|_A \le \Big[\frac{(1+\theta)[(2Lr_\omega + \omega)^2 + \theta]}{\alpha c(1 - 2Lr_\omega)^2}\Big]^{1/2}|||E_i||| \le r_\omega = \frac{2Lr_\omega}{2L}.

Since

    r_\omega < \frac{1 - 2\gamma}{6L} < \frac{1}{2L},

we know that (3.1) and (3.2) hold with ε = 2Lr_ω < 1. This, along with (2.6), leads to

    \|\mathcal{A}_{i+1}(B d_i) - A_{i+1}^{-1} B d_i\|_A \le \frac{\|\mathcal{A}_{i+1}(B d_i) - A_{i+1}^{-1} B d_i\|_{A_{i+1}}}{\sqrt{1 - 2Lr_\omega}} \le \frac{\gamma}{\sqrt{1 - 2Lr_\omega}}\,\|A_{i+1}^{-1} B d_i\|_{A_{i+1}}

and

    \|A_{i+1}^{-1} B d_i - A^{-1} B d_i\|_A = \|A_{i+1}^{-1}(A - A_{i+1})A^{-1} B d_i\|_A
      \le \frac{\|(A - A_{i+1})A^{-1} B d_i\|_{A_{i+1}^{-1}}}{\sqrt{1 - 2Lr_\omega}}
      \le \frac{2Lr_\omega}{\sqrt{1 - 2Lr_\omega}}\,\|A_{i+1}^{-1} B d_i\|_{A_{i+1}}.

Thereby, from (3.9), we obtain

    \|\mathcal{A}_{i+1}(B d_i) - A^{-1} B d_i\|_A \le \frac{\gamma + 2Lr_\omega}{\sqrt{1 - 2Lr_\omega}}\,\|A_{i+1}^{-1} B d_i\|_{A_{i+1}} = \frac{\gamma + 2Lr_\omega}{\sqrt{1 - 2Lr_\omega}}\,\|B d_i\|_{A_{i+1}^{-1}}.        (3.10)


In addition, by Lemma 3.1 we know that all the eigenvalues of the matrix A^{-1}A_{i+1} lie in the interval [1 − 2Lr_ω, 1 + 2Lr_ω]. Therefore, all the eigenvalues of A_{i+1}^{-1}A lie in [1/(1 + 2Lr_ω), 1/(1 − 2Lr_ω)]. This, together with the fact that A_{i+1}^{-1}A is similar to A^{1/2}A_{i+1}^{-1}A^{1/2}, leads to

    \langle A^{1/2}A_{i+1}^{-1}A^{1/2}(A^{-1/2} B d_i), A^{-1/2} B d_i \rangle \le \frac{1}{1 - 2Lr_\omega}\,\langle A^{-1/2} B d_i, A^{-1/2} B d_i \rangle.

Namely,

    \langle A_{i+1}^{-1} B d_i, B d_i \rangle \le \frac{1}{1 - 2Lr_\omega}\,\langle A^{-1} B d_i, B d_i \rangle.

This shows that

    \|B d_i\|_{A_{i+1}^{-1}} \le \frac{1}{\sqrt{1 - 2Lr_\omega}}\,\|B d_i\|_{A^{-1}}.

Then, by (3.10), we derive

    \|\mathcal{A}_{i+1}(B d_i) - A^{-1} B d_i\|_A \le \frac{\gamma + 2Lr_\omega}{1 - 2Lr_\omega}\,\|B d_i\|_{A^{-1}} = \frac{\gamma + 2Lr_\omega}{1 - 2Lr_\omega}\,\|A^{-1} B d_i\|_A.

Noticing that r_ω < (1 − 2γ)/(6L), the result follows. □

By the above lemma, we can introduce a constant δ_d < 1/2, defined as the minimal positive number satisfying

    \|\mathcal{A}_{i+1}(B d_i) - A^{-1} B d_i\|_A \le \delta_d\,\|A^{-1} B d_i\|_A.        (3.11)

Lemma 3.4 (see [17]). Let H be a symmetric positive definite m × m matrix and let d, \hat{d} ∈ R^m satisfy

    \|d - \hat{d}\|_H \le \beta\,\|d\|_H

with 0 ≤ β < 1. Then there exists a symmetric positive definite matrix \hat{C} such that

    \hat{C}\hat{d} = H d \qquad and \qquad \|I - \hat{C}^{-1/2} H \hat{C}^{-1/2}\| \le \beta.

Now we prove the following lemma in a way similar to [36]. For convenience, we introduce the fixed number

    r_* = \min\Big\{\frac{\alpha}{2L(1+\alpha)},\ \frac{1 - 2\gamma}{6L}\Big\}.

Lemma 3.5. Let ω ∈ (0, 1) be a given parameter, γ < 1/2 be a fixed positive constant such that (2.6) holds, and r_ω < r_* be the maximal positive number such that (2.4) is satisfied. For any fixed constant α ∈ (0, 1), if

    |||E_i||| \le \min\Big\{\Big(\frac{\alpha c}{1+\alpha}\Big)^{1/2},\ \Big[\frac{\alpha c(1 - 2Lr_\omega)^2}{(1+\theta)[(2Lr_\omega + \omega)^2 + \theta]}\Big]^{1/2}\Big\}\,r_\omega

holds, then there is a symmetric and positive definite matrix Q_i such that

(1) Q_i^{-1} g_i = \theta_i \tau_i d_i;
(2) all the eigenvalues of the matrix Q_i^{-1} K lie in the interval [(1 − β)θ_i, (1 + β)θ_i], where

    \kappa = \sup_i \mathrm{cond}(\hat{K}_i^{-1} K_i), \qquad
    \beta = \sqrt{1 - \frac{4\kappa(1 - 2\delta_d)(1 - 2Lr_*)}{(\kappa + 1)^2(1 - \delta_d)^2(1 + 2Lr_*)}}.

Proof. It is easy to see that

    \langle \mathcal{A}_{i+1}(B d_i), B d_i \rangle = \langle \mathcal{A}_{i+1}(B d_i) - A^{-1} B d_i, B d_i \rangle + \langle A^{-1} B d_i, B d_i \rangle.

Using the Cauchy–Schwarz inequality and (3.11) yields

    |\langle \mathcal{A}_{i+1}(B d_i) - A^{-1} B d_i, B d_i \rangle| \le \|\mathcal{A}_{i+1}(B d_i) - A^{-1} B d_i\|_A\,\|B d_i\|_{A^{-1}} \le \delta_d\,\|A^{-1} B d_i\|_A\,\|B d_i\|_{A^{-1}} = \delta_d\,\|d_i\|_K^2.

Thus we have

    (1 - \delta_d)\|d_i\|_K^2 \le \langle \mathcal{A}_{i+1}(B d_i), B d_i \rangle \le (1 + \delta_d)\|d_i\|_K^2.


Since

    \langle g_i, d_i \rangle = \langle g_i, \hat{K}_i^{-1} g_i \rangle > 0,

we get

    \frac{\langle g_i, d_i \rangle}{(1 + \delta_d)\|d_i\|_K^2} \le \tau_i \le \frac{\langle g_i, d_i \rangle}{(1 - \delta_d)\|d_i\|_K^2}.        (3.12)

Let

    F(\tau) = \|\tau d_i - K^{-1} g_i\|_K^2 = \tau^2\|d_i\|_K^2 - 2\tau\langle d_i, g_i \rangle + \|K^{-1} g_i\|_K^2;

then

    F(\tau_i) \le \max\Big\{F\Big(\frac{\langle g_i, d_i \rangle}{(1 + \delta_d)\|d_i\|_K^2}\Big),\ F\Big(\frac{\langle g_i, d_i \rangle}{(1 - \delta_d)\|d_i\|_K^2}\Big)\Big\}.        (3.13)

On the other hand, from Lemma 3.1 we can obtain that all the eigenvalues of the matrix A_i^{1/2} A^{-1} A_i^{1/2} lie in the interval [1 − 2Lr_ω, 1 + 2Lr_ω]. This shows that

    (1 - 2Lr_\omega) I \preceq A_i^{1/2} A^{-1} A_i^{1/2} \preceq (1 + 2Lr_\omega) I,

or, in other words,

    (1 - 2Lr_\omega) A_i^{-1} \preceq A^{-1} \preceq (1 + 2Lr_\omega) A_i^{-1}.

Hence we can get

    (1 - 2Lr_\omega) K_i \preceq K \preceq (1 + 2Lr_\omega) K_i.

By using Lemma 1.1 in [48], we have

    \frac{1}{1 + 2Lr_\omega} K_i^{-1} \preceq K^{-1} \preceq \frac{1}{1 - 2Lr_\omega} K_i^{-1}.

This together with the Kantorovich inequality (see [35]) yields

    \frac{\langle g_i, d_i \rangle^2}{\|d_i\|_K^2\,\|K^{-1} g_i\|_K^2}
      = \frac{\langle g_i, \hat{K}_i^{-1} g_i \rangle^2}{\|\hat{K}_i^{-1} g_i\|_K^2\,\|K^{-1} g_i\|_K^2}
      = \frac{\langle \hat{K}_i^{-1/2} g_i, \hat{K}_i^{-1/2} g_i \rangle^2}{\langle K\hat{K}_i^{-1} g_i, \hat{K}_i^{-1} g_i \rangle\,\langle K^{-1} g_i, g_i \rangle}
      \ge \frac{1 - 2Lr_*}{1 + 2Lr_*}\cdot\frac{\langle \hat{K}_i^{-1/2} g_i, \hat{K}_i^{-1/2} g_i \rangle^2}{\langle \hat{K}_i^{-1/2} K_i \hat{K}_i^{-1/2}(\hat{K}_i^{-1/2} g_i), \hat{K}_i^{-1/2} g_i \rangle\,\langle \hat{K}_i^{1/2} K_i^{-1} \hat{K}_i^{1/2}(\hat{K}_i^{-1/2} g_i), \hat{K}_i^{-1/2} g_i \rangle}
      \ge \frac{1 - 2Lr_*}{1 + 2Lr_*}\cdot\frac{4\,\mathrm{cond}(\hat{K}_i^{-1} K_i)}{(\mathrm{cond}(\hat{K}_i^{-1} K_i) + 1)^2}
      \ge \frac{1 - 2Lr_*}{1 + 2Lr_*}\cdot\frac{4\kappa}{(\kappa + 1)^2}.

It follows that

    F\Big(\frac{\langle g_i, d_i \rangle}{(1 + \delta_d)\|d_i\|_K^2}\Big)
      = \Big(1 - \frac{[2(1 + \delta_d) - 1]\,\langle g_i, d_i \rangle^2}{(1 + \delta_d)^2\,\|d_i\|_K^2\,\|K^{-1} g_i\|_K^2}\Big)\|K^{-1} g_i\|_K^2
      \le \Big(1 - \frac{2(1 + \delta_d) - 1}{(1 + \delta_d)^2}\cdot\frac{1 - 2Lr_*}{1 + 2Lr_*}\cdot\frac{4\kappa}{(\kappa + 1)^2}\Big)\|K^{-1} g_i\|_K^2.

As δ_d < 1/2, we have 2(1 − δ_d) − 1 > 0. Similarly, we can get

    F\Big(\frac{\langle g_i, d_i \rangle}{(1 - \delta_d)\|d_i\|_K^2}\Big) \le \Big(1 - \frac{2(1 - \delta_d) - 1}{(1 - \delta_d)^2}\cdot\frac{1 - 2Lr_*}{1 + 2Lr_*}\cdot\frac{4\kappa}{(\kappa + 1)^2}\Big)\|K^{-1} g_i\|_K^2.

This, together with (3.13) and the fact that

    \frac{2(1 - \delta_d) - 1}{(1 - \delta_d)^2} < \frac{2(1 + \delta_d) - 1}{(1 + \delta_d)^2},

implies that

    F(\tau_i) \le \Big(1 - \frac{2(1 - \delta_d) - 1}{(1 - \delta_d)^2}\cdot\frac{1 - 2Lr_*}{1 + 2Lr_*}\cdot\frac{4\kappa}{(\kappa + 1)^2}\Big)\|K^{-1} g_i\|_K^2 = \beta^2\,\|K^{-1} g_i\|_K^2.

This shows that

    \|\tau_i d_i - K^{-1} g_i\|_K \le \beta\,\|K^{-1} g_i\|_K.

Since 0 < δ_d < 1/2 and r_ω < 1/(2L), we know that 0 < β < 1. It follows from Lemma 3.4 that there is a symmetric and positive definite matrix \hat{Q}_i such that

    \hat{Q}_i\tau_i d_i = K K^{-1} g_i = g_i \qquad and \qquad \|I - \hat{Q}_i^{-1/2} K \hat{Q}_i^{-1/2}\| \le \beta.

Therefore, we derive that \hat{Q}_i^{-1} g_i = \tau_i d_i and that all the eigenvalues of the matrix \hat{Q}_i^{-1} K lie in the interval [1 − β, 1 + β]. Letting Q_i = \frac{1}{\theta_i}\hat{Q}_i, we complete the proof. □

By f = F(x) + By and Algorithm 2.1, and using the notation e_i^x and e_i^y, we have

    f_i = f - F(x_i) - B y_i = F(x) + B y - F(x_i) - B y_i = A e_i^x + B e_i^y + F(x) - F(x_i) - A e_i^x = A e_i^x + B e_i^y + \varphi_i,        (3.14)

where \varphi_i = F(x) - F(x_i) - A e_i^x. Let e_i^\varphi = A^{-1/2} f_i - A^{-1/2}\varphi_i - A^{1/2} A_i^{-1} f_i. Then it follows from (2.8), (2.10) and (3.14) that

    e_{i+1}^x = x - x_i - A_i^{-1} f_i = A^{-1}(f_i - B e_i^y - \varphi_i) - A_i^{-1} f_i = A^{-1/2} e_i^\varphi - A^{-1} B e_i^y,

and

    e_{i+1}^y = y - y_i - \theta_i\tau_i d_i = e_i^y - Q_i^{-1} g_i = e_i^y - Q_i^{-1}(B^T x_{i+1} - g) = e_i^y + Q_i^{-1} B^T e_{i+1}^x
      = e_i^y + Q_i^{-1} B^T\big(A^{-1/2} e_i^\varphi - A^{-1} B e_i^y\big)
      = e_i^y - Q_i^{-1} B^T A^{-1} B e_i^y + Q_i^{-1} B^T A^{-1/2} e_i^\varphi.

This shows that

    A^{-1/2} f_{i+1} = A^{1/2} e_{i+1}^x + A^{-1/2} B e_{i+1}^y + A^{-1/2}\varphi_{i+1}
      = A^{1/2}\big(A^{-1/2} e_i^\varphi - A^{-1} B e_i^y\big) + A^{-1/2} B\big(e_i^y - Q_i^{-1} B^T A^{-1} B e_i^y + Q_i^{-1} B^T A^{-1/2} e_i^\varphi\big) + A^{-1/2}\varphi_{i+1}
      = \big(I + A^{-1/2} B Q_i^{-1} B^T A^{-1/2}\big) e_i^\varphi - A^{-1/2} B Q_i^{-1} B^T A^{-1} B e_i^y + A^{-1/2}\varphi_{i+1},

and

    Q_i^{1/2} e_{i+1}^y = Q_i^{1/2} e_i^y - Q_i^{-1/2} B^T A^{-1} B e_i^y + Q_i^{-1/2} B^T A^{-1/2} e_i^\varphi
      = Q_i^{-1/2} B^T A^{-1/2} e_i^\varphi + \big(I - Q_i^{-1/2} B^T A^{-1} B Q_i^{-1/2}\big) Q_i^{1/2} e_i^y.


By a similar trick to [17], [49] and [50], we write

    W = Q_i^{-1/2} B^T A^{-1/2} = U\Sigma V^T        (3.15)

as a singular value decomposition, where U is an orthogonal m × m matrix and V is an orthogonal n × n matrix. Since B has full column rank, Σ has the form

    \Sigma = (\Sigma_0 \quad 0),

where \Sigma_0 = \mathrm{diag}\{\sigma_1, \sigma_2, \ldots, \sigma_m\} and \sigma_j > 0, j = 1, 2, \ldots, m, are the singular values of W. Then, for any fixed constant α ∈ (0, 1), we obtain

    \begin{pmatrix} \sqrt{\alpha}\,A^{-1/2} f_{i+1} \\ Q_i^{1/2} e_{i+1}^y \end{pmatrix}
    = \begin{pmatrix} \alpha(I + W^T W) & \sqrt{\alpha}\,W^T W W^T \\ \sqrt{\alpha}\,W & -(I - W W^T) \end{pmatrix}
      \begin{pmatrix} \frac{1}{\sqrt{\alpha}}\,e_i^\varphi \\ -Q_i^{1/2} e_i^y \end{pmatrix}
    + \begin{pmatrix} \sqrt{\alpha}\,A^{-1/2}\varphi_{i+1} \\ 0 \end{pmatrix}.        (3.16)

Using (3.15) we can get

    \begin{pmatrix} \alpha(I + W^T W) & \sqrt{\alpha}\,W^T W W^T \\ \sqrt{\alpha}\,W & -(I - W W^T) \end{pmatrix}
    = \begin{pmatrix} \alpha(I + V\Sigma^T\Sigma V^T) & \sqrt{\alpha}\,V\Sigma^T\Sigma\Sigma^T U^T \\ \sqrt{\alpha}\,U\Sigma V^T & -(I - U\Sigma\Sigma^T U^T) \end{pmatrix}
    = \begin{pmatrix} V & 0 \\ 0 & U \end{pmatrix}
      \begin{pmatrix} \alpha(I + \Sigma^T\Sigma) & \sqrt{\alpha}\,\Sigma^T\Sigma_0^2 \\ \sqrt{\alpha}\,\Sigma & -(I - \Sigma_0^2) \end{pmatrix}
      \begin{pmatrix} V^T & 0 \\ 0 & U^T \end{pmatrix}.

Combining this with (3.16), we have

    \begin{pmatrix} \sqrt{\alpha}\,V^T A^{-1/2} f_{i+1} \\ U^T Q_i^{1/2} e_{i+1}^y \end{pmatrix}
    = \begin{pmatrix} \alpha(I + \Sigma^T\Sigma) & \sqrt{\alpha}\,\Sigma^T\Sigma_0^2 \\ \sqrt{\alpha}\,\Sigma & -(I - \Sigma_0^2) \end{pmatrix}
      \begin{pmatrix} \frac{1}{\sqrt{\alpha}}\,V^T e_i^\varphi \\ -U^T Q_i^{1/2} e_i^y \end{pmatrix}
    + \begin{pmatrix} \sqrt{\alpha}\,V^T A^{-1/2}\varphi_{i+1} \\ 0 \end{pmatrix},

which is equivalent to

    \begin{pmatrix} \sqrt{\alpha}\,V^T A^{-1/2} f_{i+1} \\ \Sigma_0 U^T Q_i^{1/2} e_{i+1}^y \end{pmatrix}
    = \begin{pmatrix} \alpha(I + \Sigma^T\Sigma) & \sqrt{\alpha}\,\Sigma^T\Sigma_0 \\ \sqrt{\alpha}\,\Sigma_0\Sigma & -(I - \Sigma_0^2) \end{pmatrix}
      \begin{pmatrix} \frac{1}{\sqrt{\alpha}}\,V^T e_i^\varphi \\ -\Sigma_0 U^T Q_i^{1/2} e_i^y \end{pmatrix}
    + \begin{pmatrix} \sqrt{\alpha}\,V^T A^{-1/2}\varphi_{i+1} \\ 0 \end{pmatrix}.        (3.17)

Let

    T(\alpha) = \begin{pmatrix} \alpha(I + \Sigma^T\Sigma) & \sqrt{\alpha}\,\Sigma^T\Sigma_0 \\ \sqrt{\alpha}\,\Sigma_0\Sigma & -(I - \Sigma_0^2) \end{pmatrix}.        (3.18)

We will estimate the spectral radius of the symmetric matrix T(α) as follows.

Theorem 3.1. Let ω ∈ (0, 1) be a given parameter, γ < 1/2 be a fixed positive constant such that (2.6) holds, and r_ω < r_* be the maximal positive number such that (2.4) is satisfied. For any fixed constant α ∈ (0, 1), if

    |||E_i||| \le \min\Big\{\Big(\frac{\alpha c}{1+\alpha}\Big)^{1/2},\ \Big[\frac{\alpha c(1 - 2Lr_\omega)^2}{(1+\theta)[(2Lr_\omega + \omega)^2 + \theta]}\Big]^{1/2}\Big\}\,r_\omega

holds, then \rho(T(\alpha)) \le \hat{\rho}_i, where

    \hat{\rho}_i = \begin{cases} 1 - \dfrac{(1+\alpha)(1-\beta)}{2}\,\theta_i, & 0 < \theta_i < \dfrac{1-\alpha}{1+\alpha}, \\[2mm] \dfrac{\alpha - 1 + 2(1+\alpha)\theta_i + \sqrt{(\alpha - 1 + 2(1+\alpha)\theta_i)^2 + 4\alpha}}{2}, & \theta_i \ge \dfrac{1-\alpha}{1+\alpha}. \end{cases}        (3.19)
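The two branches of (3.19) can be evaluated directly. As a quick illustration (a minimal helper of our own, not part of the original paper), the following makes the dependence of the bound on the damping factor θ_i explicit:

```python
import math

def rho_hat(alpha, beta, theta):
    """Spectral-radius bound (3.19) for T(alpha); two regimes in theta."""
    if 0 < theta < (1 - alpha) / (1 + alpha):
        return 1 - 0.5 * (1 + alpha) * (1 - beta) * theta
    t = alpha - 1 + 2 * (1 + alpha) * theta
    return 0.5 * (t + math.sqrt(t * t + 4 * alpha))
```

For instance, rho_hat(0.5, 0.8, 0.3) evaluates the first branch, since (1 − α)/(1 + α) = 1/3 > 0.3; the two regimes meet at θ_i = (1 − α)/(1 + α).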


Proof. It is easy to see that

    \Sigma^T\Sigma_0 = \begin{pmatrix} \Sigma_0^2 \\ 0 \end{pmatrix}, \qquad \Sigma^T\Sigma = \begin{pmatrix} \Sigma_0^2 & 0 \\ 0 & 0 \end{pmatrix}.

Taking these into (3.18), we get

    T(\alpha) = \begin{pmatrix} \alpha(I + \Sigma_0^2) & 0 & \sqrt{\alpha}\,\Sigma_0^2 \\ 0 & \alpha I & 0 \\ \sqrt{\alpha}\,\Sigma_0^2 & 0 & -(I - \Sigma_0^2) \end{pmatrix},

which is similar to

    \hat{T}(\alpha) = \begin{pmatrix} \alpha I & 0 & 0 \\ 0 & \alpha(I + \Sigma_0^2) & \sqrt{\alpha}\,\Sigma_0^2 \\ 0 & \sqrt{\alpha}\,\Sigma_0^2 & -(I - \Sigma_0^2) \end{pmatrix}.

Hence \rho(T(\alpha)) = \rho(\hat{T}(\alpha)). It is easy to see that the eigenvalues of \hat{T}(\alpha) are α, with multiplicity n − m, and

    \frac{\alpha - 1 + (1+\alpha)\sigma_j^2 \pm \sqrt{[\alpha - 1 + (1+\alpha)\sigma_j^2]^2 + 4\alpha}}{2}, \quad j = 1, 2, \ldots, m.

By (3.15) we have

    Q_i^{-1/2} K Q_i^{-1/2} = Q_i^{-1/2} B^T A^{-1/2}\,A^{-1/2} B Q_i^{-1/2} = W W^T = U\Sigma V^T V\Sigma^T U^T = U\Sigma_0^2 U^T.        (3.20)

This together with Lemma 3.5 yields

    (1 - \beta)\theta_i \le \sigma_j^2 \le (1 + \beta)\theta_i, \quad j = 1, 2, \ldots, m.

Thus we have

    \rho(\hat{T}(\alpha)) \le \max\Big\{\alpha,\ \frac{|\alpha - 1 + (1+\alpha)(1 \pm \beta)\theta_i| + \sqrt{|\alpha - 1 + (1+\alpha)(1 \pm \beta)\theta_i|^2 + 4\alpha}}{2}\Big\}.

Since

    \alpha = \frac{\alpha - 1 + \sqrt{(\alpha - 1)^2 + 4\alpha}}{2}, \qquad \alpha - 1 < 0,

and the function z + \sqrt{z^2 + 4\alpha} is monotonically increasing on (−∞, +∞), we have

    \rho(\hat{T}(\alpha)) \le \max_{\pm}\,\frac{|\alpha - 1 + (1+\alpha)(1 \pm \beta)\theta_i| + \sqrt{|\alpha - 1 + (1+\alpha)(1 \pm \beta)\theta_i|^2 + 4\alpha}}{2}.

By some calculations, it follows that if θ_i ≥ (1 − α)/(1 + α), then

    \rho(\hat{T}(\alpha)) \le \frac{|\alpha - 1 + (1+\alpha)(1 + \beta)\theta_i| + \sqrt{|\alpha - 1 + (1+\alpha)(1 + \beta)\theta_i|^2 + 4\alpha}}{2},        (3.21)

and if 0 < θ_i < (1 − α)/(1 + α), then

    \rho(\hat{T}(\alpha)) \le \frac{|\alpha - 1 + (1+\alpha)(1 - \beta)\theta_i| + \sqrt{|\alpha - 1 + (1+\alpha)(1 - \beta)\theta_i|^2 + 4\alpha}}{2}.

Furthermore, if θ_i ≥ (1 − α)/(1 + α), we get

    \alpha - 1 + (1+\alpha)(1+\beta)\theta_i \ge \alpha - 1 + (1+\alpha)(1+\beta)\,\frac{1-\alpha}{1+\alpha} = \beta(1-\alpha) > 0

and

    \alpha - 1 + (1+\alpha)(1+\beta)\theta_i \le \alpha - 1 + 2(1+\alpha)\theta_i.

This along with (3.21) yields

    \rho(\hat{T}(\alpha)) \le \frac{\alpha - 1 + 2(1+\alpha)\theta_i + \sqrt{(\alpha - 1 + 2(1+\alpha)\theta_i)^2 + 4\alpha}}{2}.        (3.22)


When 0 < θ_i < (1 − α)/(1 + α), we have

    \alpha - 1 + (1+\alpha)(1-\beta)\theta_i < \alpha - 1 + (1+\alpha)(1-\beta)\,\frac{1-\alpha}{1+\alpha} = -\beta(1-\alpha) < 0,

and

    1 - \alpha - (1+\alpha)(1-\beta)\theta_i + \sqrt{\big(1 - \alpha - (1+\alpha)(1-\beta)\theta_i\big)^2 + 4\alpha} \le 2\Big[1 - \frac{(1+\alpha)(1-\beta)}{2}\,\theta_i\Big].

Therefore,

    \rho(\hat{T}(\alpha)) \le 1 - \frac{(1+\alpha)(1-\beta)}{2}\,\theta_i.

Noticing that \rho(T(\alpha)) = \rho(\hat{T}(\alpha)) and summing up the above, the result follows. □

For convenience, we introduce a few parameters which will be used to describe the convergence region and convergence rate of Algorithm 2.1. Let ω_1 < 1 be a fixed positive constant such that

    \frac{2\omega_1}{\big(\alpha\sqrt{c(1+\alpha)} + \omega_1(1+\alpha)\big)(1-\beta)} < \frac{[\sqrt{c}(1-\alpha) - \omega_1\sqrt{1+\alpha}\,](2\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha})}{2\sqrt{c}(1+\alpha)(\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha})}

holds. And let

    \underline{\theta} = \frac{2\omega_1}{\big(\alpha\sqrt{c(1+\alpha)} + \omega_1(1+\alpha)\big)(1-\beta)}, \qquad
    \overline{\theta} = \frac{[\sqrt{c}(1-\alpha) - \omega_1\sqrt{1+\alpha}\,](2\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha})}{2\sqrt{c}(1+\alpha)(\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha})},

    \varsigma_1 = \Big(1 + \omega_1\sqrt{\frac{\alpha+1}{\alpha^2 c}}\Big)\max_i\hat{\rho}_i, \qquad
    \varsigma_2 = \Big[\frac{(1+\theta)[(1 - 2\gamma + 3\omega_1)^2 + 9\theta]}{4c(1+\gamma)^2}\Big]^{1/2}, \qquad
    \omega_2 = \frac{1 - \varsigma_1}{\varsigma_2}.

Lemma 3.6. If the damping factor θ_i satisfies \underline{\theta} < \theta_i < \overline{\theta} for all i = 1, 2, ..., then

    \hat{\rho}_i < \frac{\alpha\sqrt{c}}{\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha}}

holds for i = 1, 2, ..., where \hat{\rho}_i is defined in (3.19).

Proof. If 0 < θ_i < (1 − α)/(1 + α), from (3.19) we know that

    \hat{\rho}_i < \frac{\alpha\sqrt{c}}{\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha}}

holds if and only if

    1 - \frac{(1+\alpha)(1-\beta)}{2}\,\theta_i < \frac{\alpha\sqrt{c}}{\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha}}

holds, which is equivalent to

    \theta_i > \frac{2\omega_1}{\big(\alpha\sqrt{c(1+\alpha)} + \omega_1(1+\alpha)\big)(1-\beta)} = \underline{\theta}.

And when θ_i ≥ (1 − α)/(1 + α), we need to find θ_i satisfying

    \frac{\alpha - 1 + 2(1+\alpha)\theta_i + \sqrt{(\alpha - 1 + 2(1+\alpha)\theta_i)^2 + 4\alpha}}{2} < \frac{\alpha\sqrt{c}}{\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha}}.        (3.23)

For convenience, let d = \frac{\alpha\sqrt{c}}{\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha}}. It is easy to see that

    d = \frac{1}{2}\Big[\frac{d^2 - \alpha}{d} + \sqrt{\Big(\frac{d^2 - \alpha}{d}\Big)^2 + 4\alpha}\Big].

As the function z + \sqrt{z^2 + 4\alpha} is monotonically increasing on (−∞, +∞), the inequality (3.23) is equivalent to the following inequality:

    \alpha - 1 + 2(1+\alpha)\theta_i < \frac{d^2 - \alpha}{d}.

By direct computation, it follows that

    \theta_i < \frac{(d - \alpha)(d + 1)}{2d(1+\alpha)} = \frac{[\sqrt{c}(1-\alpha) - \omega_1\sqrt{1+\alpha}\,](2\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha})}{2\sqrt{c}(1+\alpha)(\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha})} = \overline{\theta}.

Therefore, if \underline{\theta} < \theta_i < \overline{\theta}, then \hat{\rho}_i < \frac{\alpha\sqrt{c}}{\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha}} holds, which completes the proof. □

Theorem 3.2. Let α ∈ (0, 1) be any fixed constant, ω ∈ (0, min{ω_1, ω_2}) be a given parameter, γ < 1/2 be a fixed positive constant such that (2.6) holds, and r_ω < r_* be the maximal positive number such that (2.4) is satisfied. If the damping factor θ_i satisfies \underline{\theta} < \theta_i < \overline{\theta} for all i = 1, 2, ... and the initial guess {x_0, y_0} satisfies

    |||E_0||| \le \min\Big\{\Big(\frac{\alpha c}{1+\alpha}\Big)^{1/2},\ \Big[\frac{\alpha c(1 - 2Lr_\omega)^2}{(1+\theta)[(2Lr_\omega + \omega)^2 + \theta]}\Big]^{1/2}\Big\}\,r_\omega,

then Algorithm 2.1 converges, and the rate of convergence can be estimated by

    |||E_{i+1}||| \le \rho_i^*\,|||E_i|||, \quad i = 1, 2, \ldots,        (3.24)

where

    \rho_i^* = \hat{\varrho}_i + \bar{\varrho} < 1, \qquad
    \hat{\varrho}_i = \Big(1 + \omega\sqrt{\frac{\alpha+1}{\alpha^2 c}}\Big)\hat{\rho}_i, \qquad
    \bar{\varrho} = \omega\Big[\frac{(1+\theta)[(1 - 2\gamma + 3\omega)^2 + 9\theta]}{4c(1+\gamma)^2}\Big]^{1/2},        (3.25)

and \hat{\rho}_i is defined in (3.19).

Proof. Let {x, y} be the solution of system (1.1). First we prove

    |||E_{i+1}||| \le \rho_i^*\,|||E_i||| < |||E_i|||        (3.26)

by mathematical induction, starting with the verification for i = 0. Taking i = 0 in (3.17), we get

    \begin{pmatrix} \sqrt{\alpha}\,V^T A^{-1/2} f_1 \\ \Sigma_0 U^T Q_0^{1/2} e_1^y \end{pmatrix}
    = T(\alpha)\begin{pmatrix} \frac{1}{\sqrt{\alpha}}\,V^T e_0^\varphi \\ -\Sigma_0 U^T Q_0^{1/2} e_0^y \end{pmatrix}
    + \begin{pmatrix} \sqrt{\alpha}\,V^T A^{-1/2}\varphi_1 \\ 0 \end{pmatrix}.        (3.27)

From (3.20), we know that

    \|\Sigma_0 U^T Q_0^{1/2} e_1^y\|^2 = \langle Q_0^{1/2} U\Sigma_0^2 U^T Q_0^{1/2} e_1^y, e_1^y \rangle = \langle K e_1^y, e_1^y \rangle = \|e_1^y\|_K^2.

This, together with (3.27), the definition of E_i and the fact that V is an orthogonal matrix, yields

    |||E_1||| = \big(\|\sqrt{\alpha} f_1\|_{A^{-1}}^2 + \|e_1^y\|_K^2\big)^{1/2}
      = \big(\|\sqrt{\alpha}\,V^T A^{-1/2} f_1\|^2 + \|\Sigma_0 U^T Q_0^{1/2} e_1^y\|^2\big)^{1/2}
      \le \|T(\alpha)\|\Big(\frac{1}{\alpha}\|e_0^\varphi\|^2 + \|e_0^y\|_K^2\Big)^{1/2} + \|\sqrt{\alpha}\,A^{-1/2}\varphi_1\|.

As T(α) is symmetric, \|T(\alpha)\| = \rho(T(\alpha)). By Theorem 3.1 it follows that

    |||E_1||| \le \hat{\rho}_0\Big(\frac{1}{\alpha}\|e_0^\varphi\|^2 + \|e_0^y\|_K^2\Big)^{1/2} + \|\sqrt{\alpha}\,A^{-1/2}\varphi_1\|.        (3.28)

For any positive constant b, it follows from the definition of e_0^\varphi and \varphi_0 that

    \frac{1}{\alpha}\|e_0^\varphi\|^2 + \|e_0^y\|_K^2
      = \frac{1}{\alpha}\|A^{-1/2} f_0 - A^{-1/2}\varphi_0 - A^{1/2} A_0^{-1} f_0\|^2 + \|e_0^y\|_K^2
      = \frac{1}{\alpha}\|(A_0 - A)A_0^{-1} f_0 - \varphi_0\|_{A^{-1}}^2 + \|e_0^y\|_K^2
      \le \frac{1}{\alpha}\Big[(1 + b)\|(A_0 - A)A_0^{-1} f_0\|_{A^{-1}}^2 + \Big(1 + \frac{1}{b}\Big)\|\varphi_0\|_{A^{-1}}^2\Big] + \|e_0^y\|_K^2.

By (3.3), we get

    \|x - x_0\|_A \le \sqrt{\frac{1+\alpha}{\alpha c}}\,|||E_0||| \le r_\omega = \frac{2Lr_\omega}{2L}.

It follows with (2.4) and Lemma 3.1 that

    \|\varphi_0\|_{A^{-1}} = \|F(x) - F(x_0) - A e_0^x\|_{A^{-1}} \le \omega\,\|x - x_0\|_A        (3.29)

and

    \|(A_0 - A)A_0^{-1} f_0\|_{A^{-1}} \le 2Lr_\omega\,\|A_0^{-1} f_0\|_A = 2Lr_\omega\,\|A^{1/2} A_0^{-1} A^{1/2}\,A^{-1/2} f_0\|
      \le 2Lr_\omega\,\|A^{1/2} A_0^{-1} A^{1/2}\|\,\|f_0\|_{A^{-1}} \le \frac{2Lr_\omega}{1 - 2Lr_\omega}\,\|f_0\|_{A^{-1}} \le \alpha\,\|f_0\|_{A^{-1}},

where the last inequality holds because r_ω < α/(2L(1+α)). Then, using (3.29) and the second inequality in (3.7), we can get

    \frac{1}{\alpha}\|e_0^\varphi\|^2 + \|e_0^y\|_K^2
      \le (1 + b)\alpha\|f_0\|_{A^{-1}}^2 + \Big(1 + \frac{1}{b}\Big)\frac{\omega^2}{\alpha}\|x - x_0\|_A^2 + \|e_0^y\|_K^2
      \le (1 + b)\alpha\|f_0\|_{A^{-1}}^2 + \frac{(1 + b)\omega^2}{\alpha c b}\big(\|f_0\|_{A^{-1}}^2 + \|e_0^y\|_K^2 + 2\|f_0\|_{A^{-1}}\|e_0^y\|_K\big) + \|e_0^y\|_K^2
      \le (1 + b)\alpha\|f_0\|_{A^{-1}}^2 + \frac{(1 + b)\omega^2}{\alpha c b}\Big[(1+\alpha)\|f_0\|_{A^{-1}}^2 + \Big(1 + \frac{1}{\alpha}\Big)\|e_0^y\|_K^2\Big] + \|e_0^y\|_K^2
      = \Big[1 + b + \frac{(1 + b)\omega^2}{\alpha^2 c b} + \frac{(1 + b)\omega^2}{\alpha c b}\Big]\|\sqrt{\alpha} f_0\|_{A^{-1}}^2 + \Big[1 + \frac{(1 + b)\omega^2}{\alpha c b} + \frac{(1 + b)\omega^2}{\alpha^2 c b}\Big]\|e_0^y\|_K^2
      \le \Big[1 + b + \frac{(1 + b)\omega^2}{\alpha^2 c b} + \frac{(1 + b)\omega^2}{\alpha c b}\Big]\,|||E_0|||^2,

where we used again 2ab ≤ α a^2 + b^2/α. Let b^* > 0 be the minimal point of the function

    g(b) = 1 + b + \frac{(1 + b)\omega^2}{\alpha^2 c b} + \frac{(1 + b)\omega^2}{\alpha c b} = 1 + b + \frac{(1+\alpha)(1 + b)}{\alpha^2 c b}\,\omega^2.


By direct computation we have

    b^* = \omega\sqrt{\frac{\alpha+1}{\alpha^2 c}} \qquad and \qquad g(b^*) = \Big(1 + \omega\sqrt{\frac{\alpha+1}{\alpha^2 c}}\Big)^2.

This shows that

    \Big(\frac{1}{\alpha}\|e_0^\varphi\|^2 + \|e_0^y\|_K^2\Big)^{1/2} \le \Big(1 + \omega\sqrt{\frac{\alpha+1}{\alpha^2 c}}\Big)|||E_0|||.        (3.30)

On the other hand, by (3.4) and (3.5) we have

    \|x - x_1\|_A \le \Big[\frac{(1+\theta)[(2Lr_\omega + \omega)^2 + \theta]}{\alpha c(1 - 2Lr_\omega)^2}\Big]^{1/2}|||E_0||| \le r_\omega.

As r_ω < (1 − 2γ)/(6L), we have 2Lr_ω < (1 − 2γ)/3. This along with (2.4) implies that

    \|\sqrt{\alpha}\,A^{-1/2}\varphi_1\| = \sqrt{\alpha}\,\|F(x) - F(x_1) - A e_1^x\|_{A^{-1}} \le \sqrt{\alpha}\,\omega\,\|x - x_1\|_A
      \le \sqrt{\alpha}\,\omega\Big[\frac{(1+\theta)[(2Lr_\omega + \omega)^2 + \theta]}{\alpha c(1 - 2Lr_\omega)^2}\Big]^{1/2}|||E_0|||
      \le \omega\Big[\frac{(1+\theta)[(1 - 2\gamma + 3\omega)^2 + 9\theta]}{4c(1+\gamma)^2}\Big]^{1/2}|||E_0||| = \bar{\varrho}\,|||E_0|||.

This together with (3.28) and (3.30) yields

    |||E_1||| \le \Big[\Big(1 + \omega\sqrt{\frac{\alpha+1}{\alpha^2 c}}\Big)\hat{\rho}_0 + \bar{\varrho}\Big]|||E_0||| = \rho_0^*\,|||E_0|||.

Now we verify that \rho_0^* = \hat{\varrho}_0 + \bar{\varrho} < 1. By Lemma 3.6 and ω < ω_1, we get

    \hat{\varrho}_0 = \Big(1 + \omega\sqrt{\frac{\alpha+1}{\alpha^2 c}}\Big)\hat{\rho}_0 < \Big(1 + \omega_1\sqrt{\frac{\alpha+1}{\alpha^2 c}}\Big)\frac{\alpha\sqrt{c}}{\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha}} = 1,

and similarly

    \varsigma_1 = \Big(1 + \omega_1\sqrt{\frac{\alpha+1}{\alpha^2 c}}\Big)\max_i\hat{\rho}_i < \Big(1 + \omega_1\sqrt{\frac{\alpha+1}{\alpha^2 c}}\Big)\frac{\alpha\sqrt{c}}{\alpha\sqrt{c} + \omega_1\sqrt{1+\alpha}} = 1.

Thus ω_2 > 0, and, since ω < min{ω_1, ω_2},

    \bar{\varrho} = \omega\Big[\frac{(1+\theta)[(1 - 2\gamma + 3\omega)^2 + 9\theta]}{4c(1+\gamma)^2}\Big]^{1/2} < \omega_2\Big[\frac{(1+\theta)[(1 - 2\gamma + 3\omega_1)^2 + 9\theta]}{4c(1+\gamma)^2}\Big]^{1/2} = \frac{1 - \varsigma_1}{\varsigma_2}\,\varsigma_2 = 1 - \varsigma_1.

Thereby, since \hat{\varrho}_0 \le \varsigma_1, we get

    \rho_0^* = \hat{\varrho}_0 + \bar{\varrho} < \varsigma_1 + 1 - \varsigma_1 = 1.

Summing up the above, we have verified that (3.26) holds for i = 0. Now assume that (3.26) holds for i = k − 1 for some integer k > 1. Then, with relation (3.17) for i = k, one can follow exactly the same proof as for i = 0 to verify that (3.26) holds for i = k. This completes the proof of (3.26) by induction. □


Remark 3.1. As Theorem 3.2 shows, Algorithm 2.1 is locally convergent: the initial guess {x_0, y_0} is required to lie within a small neighborhood of {x, y}. In applications, the initial guess may be obtained by running one or two iterations of a globally convergent algorithm for (1.1), for example the simple perturbation method (see [37]), which has slow asymptotic convergence but usually makes rapid progress during the first few iterations.

Corollary 3.1. Assume the conditions of Theorem 3.2 are satisfied. Then the following estimates hold for i = 1, 2, ...:

    \|e_i^x\|_A \le \frac{\sqrt{\alpha} + 1}{\sqrt{\alpha}(1 - \omega)}\prod_{k=0}^{i-1}\rho_k^*\,|||E_0|||, \qquad \|e_i^y\|_K \le \prod_{k=0}^{i-1}\rho_k^*\,|||E_0|||.

Proof. It is easy to see that

    |||E_i||| \le \rho_{i-1}^*\,|||E_{i-1}||| \le \prod_{k=0}^{i-1}\rho_k^*\,|||E_0|||.

This shows that

    \sqrt{\alpha}\,\|f_i\|_{A^{-1}} \le |||E_i||| \le \prod_{k=0}^{i-1}\rho_k^*\,|||E_0||| \qquad and \qquad \|e_i^y\|_K \le |||E_i||| \le \prod_{k=0}^{i-1}\rho_k^*\,|||E_0|||.

This together with (3.14) yields

    \|e_i^x\|_A = \|A^{1/2} e_i^x\| = \|A^{-1/2} f_i - A^{-1/2} B e_i^y - A^{-1/2}\varphi_i\|
      \le \|f_i\|_{A^{-1}} + \|e_i^y\|_K + \|\varphi_i\|_{A^{-1}}
      \le \frac{1}{\sqrt{\alpha}}\prod_{k=0}^{i-1}\rho_k^*\,|||E_0||| + \prod_{k=0}^{i-1}\rho_k^*\,|||E_0||| + \omega\,\|e_i^x\|_A.

Hence we have

    (1 - \omega)\|e_i^x\|_A \le \Big(1 + \frac{1}{\sqrt{\alpha}}\Big)\prod_{k=0}^{i-1}\rho_k^*\,|||E_0|||,

which gives the result. □

Now we study the convergence of Algorithm 2.2. It follows from Lemma 3.4 and (2.7) that there is a symmetric and positive definite matrix \hat{H}_i such that \hat{H}_i^{-1} g_i = \mathcal{H}_i(g_i) and all the eigenvalues of the matrix \hat{H}_i^{-1} H_i lie in the interval [1 − δ_g, 1 + δ_g]. In the same way as in [37], we have

    \mathrm{cond}(\hat{H}_i^{-1} K_i) \le \frac{1 + \delta_g}{1 - \delta_g}\,\mathrm{cond}(\hat{A}_i^{-1} A_i).

Hence Algorithm 2.2 can be viewed as a variant of Algorithm 2.1 with \hat{K}_i replaced by \hat{H}_i. Then, using Theorem 3.1 and Corollary 3.1, we can establish the convergence theorem of Algorithm 2.2. To this end, we introduce the following parameters, analogous to those defined above:

    \tilde{\kappa} = \frac{1 + \delta_g}{1 - \delta_g}\,\sup_i\mathrm{cond}(\hat{A}_i^{-1} A_i), \qquad
    \tilde{\beta} = \sqrt{1 - \frac{4\tilde{\kappa}(1 - 2\delta_d)(1 - 2Lr_*)}{(\tilde{\kappa} + 1)^2(1 - \delta_d)^2(1 + 2Lr_*)}},

    \tilde{\rho}_i = \begin{cases} 1 - \dfrac{(1+\alpha)(1 - \tilde{\beta})}{2}\,\theta_i, & 0 < \theta_i < \dfrac{1-\alpha}{1+\alpha}, \\[2mm] \dfrac{\alpha - 1 + 2(1+\alpha)\theta_i + \sqrt{(\alpha - 1 + 2(1+\alpha)\theta_i)^2 + 4\alpha}}{2}, & \theta_i \ge \dfrac{1-\alpha}{1+\alpha}. \end{cases}

Let \tilde{\omega}_1 < 1 be a fixed positive constant such that

    \frac{2\tilde{\omega}_1}{\big(\alpha\sqrt{c(1+\alpha)} + \tilde{\omega}_1(1+\alpha)\big)(1 - \tilde{\beta})} < \frac{[\sqrt{c}(1-\alpha) - \tilde{\omega}_1\sqrt{1+\alpha}\,](2\alpha\sqrt{c} + \tilde{\omega}_1\sqrt{1+\alpha})}{2\sqrt{c}(1+\alpha)(\alpha\sqrt{c} + \tilde{\omega}_1\sqrt{1+\alpha})}

holds. And let

    \underline{\theta}^* = \frac{2\tilde{\omega}_1}{\big(\alpha\sqrt{c(1+\alpha)} + \tilde{\omega}_1(1+\alpha)\big)(1 - \tilde{\beta})}, \qquad
    \overline{\theta}^* = \frac{[\sqrt{c}(1-\alpha) - \tilde{\omega}_1\sqrt{1+\alpha}\,](2\alpha\sqrt{c} + \tilde{\omega}_1\sqrt{1+\alpha})}{2\sqrt{c}(1+\alpha)(\alpha\sqrt{c} + \tilde{\omega}_1\sqrt{1+\alpha})},

    \tilde{\varsigma}_1 = \Big(1 + \tilde{\omega}_1\sqrt{\frac{\alpha+1}{\alpha^2 c}}\Big)\max_i\tilde{\rho}_i, \qquad
    \tilde{\varsigma}_2 = \Big[\frac{(1+\theta)[(1 - 2\gamma + 3\tilde{\omega}_1)^2 + 9\theta]}{4c(1+\gamma)^2}\Big]^{1/2}, \qquad
    \tilde{\omega}_2 = \frac{1 - \tilde{\varsigma}_1}{\tilde{\varsigma}_2}.

Then the following theorem can be established by using Theorem 3.1 and Corollary 3.1.

Theorem 3.3. Let α ∈ (0, 1) be any fixed constant, ω ∈ (0, min{\tilde{\omega}_1, \tilde{\omega}_2}) be a given parameter, γ < 1/2 be a fixed positive constant such that (2.6) holds, and r_ω < r_* be the maximal positive number such that (2.4) is satisfied. If the damping factor θ_i satisfies \underline{\theta}^* < \theta_i < \overline{\theta}^* for all i = 1, 2, ... and the initial guess {x_0, y_0} satisfies

    |||E_0||| \le \min\Big\{\Big(\frac{\alpha c}{1+\alpha}\Big)^{1/2},\ \Big[\frac{\alpha c(1 - 2Lr_\omega)^2}{(1+\theta)[(2Lr_\omega + \omega)^2 + \theta]}\Big]^{1/2}\Big\}\,r_\omega,

then Algorithm 2.2 converges, and the rate of convergence can be estimated by

    |||E_{i+1}||| \le \tilde{\rho}_i^*\,|||E_i|||, \quad i = 1, 2, \ldots,        (3.31)

which implies, for i = 1, 2, ..., that

    \|e_i^x\|_A \le \frac{\sqrt{\alpha} + 1}{\sqrt{\alpha}(1 - \omega)}\prod_{k=0}^{i-1}\tilde{\rho}_k^*\,|||E_0|||, \qquad \|e_i^y\|_K \le \prod_{k=0}^{i-1}\tilde{\rho}_k^*\,|||E_0|||,

where

    \tilde{\rho}_i^* = \tilde{\varrho}_i + \bar{\varrho} < 1, \qquad
    \tilde{\varrho}_i = \Big(1 + \omega\sqrt{\frac{\alpha+1}{\alpha^2 c}}\Big)\tilde{\rho}_i, \qquad
    \bar{\varrho} = \omega\Big[\frac{(1+\theta)[(1 - 2\gamma + 3\omega)^2 + 9\theta]}{4c(1+\gamma)^2}\Big]^{1/2}.        (3.32)

Remark 3.2. From Theorem 3.3 we see that the convergence rate \tilde{\rho}_i^* depends only on δ_d, δ_g, θ_i and \tilde{\beta}, and does not depend directly on cond(\hat{K}_i^{-1} K_i). Thereby, Algorithm 2.2 can effectively treat the case cond(\hat{K}_i^{-1} K_i) ≫ cond(\hat{A}_i^{-1} A_i).

4. Numerical experiments

In this section we compare Algorithm 2.1 with Algorithm 2.2 for different problem dimensions and damping factors θ_i. All experiments were run on a PC with an Intel(R) Core(TM) i5-3470 CPU @ 3.20 GHz and 4.00 GB RAM, and all programs were implemented in MATLAB R2011a (7.12). We report the number of required iterations (Iter), the CPU time in seconds (CPU) when the process is stopped, and the relative error

    \varepsilon = \Big(\frac{\|f - F(x_i) - B y_i\|^2 + \|g - B^T x_i\|^2}{\|f\|^2 + \|g\|^2}\Big)^{1/2}.

In our implementation, we stop the algorithms when ε ≤ 10^{-5}.

Example 4.1 (see [37]). We consider the nonlinear saddle-point problem of the following form:



    F(\xi) = M\xi + \frac{1}{5}\Big(\frac{\xi_1}{1+\xi_1^2}, \frac{\xi_2}{1+\xi_2^2}, \cdots, \frac{\xi_n}{1+\xi_n^2}\Big)^T, \quad \forall \xi = (\xi_1, \xi_2, \ldots, \xi_n)^T \in R^n, \qquad
    B = \begin{pmatrix} 0 \\ 2I_m - T_m \end{pmatrix},

where n = 2m, I_m is the m × m identity matrix, and

    M = \begin{pmatrix} \frac{5}{2}I_m - \frac{1}{4}T_m & -I_m \\ -I_m & \frac{5}{2}I_m - \frac{1}{4}T_m \end{pmatrix}, \qquad T_m = \mathrm{tridiag}(1, 0, 1) \in R^{m\times m}.

The right-hand sides f and g in (1.1) are generated from system (1.1) with the exact solution

    x = (1, 1, \cdots, 1)^T, \qquad y = \Big(1, \frac{1}{2}, \cdots, \frac{1}{m}\Big)^T.

Table 1. Numerical results of Example 4.1 with m = 200.

            Algorithm 2.1                              Algorithm 2.2
θ_i         0.5          0.7          0.9              0.5          0.7          0.9
Iter        17           11           10               18           18           25
CPU         0.0017       0.0009       0.0009           0.0013       0.0014       0.0017
ε           7.1288e-006  9.7587e-006  2.9835e-006      9.0659e-006  9.5177e-006  9.1285e-006

Table 2. Numerical results of Example 4.1 with m = 400.

            Algorithm 2.1                              Algorithm 2.2
θ_i         0.5          0.7          0.9              0.5          0.7          0.9
Iter        17           11           9                18           17           24
CPU         0.0070       0.0061       0.0052           0.0052       0.0062       0.0092
ε           5.9435e-006  8.2437e-006  8.7481e-006      6.4499e-006  9.0326e-006  9.3199e-006

Table 3. Numerical results of Example 4.1 with m = 800.

            Algorithm 2.1                              Algorithm 2.2
θ_i         0.5          0.7          0.9              0.5          0.7          0.9
Iter        16           11           9                16           15           15
CPU         0.0284       0.0345       0.0286           0.0213       0.0230       0.0219
ε           9.9299e-006  6.9725e-006  6.9129e-006      8.0254e-006  8.8132e-006  8.2628e-006

Table 4. Numerical results of Example 4.1 with m = 1200.

            Algorithm 2.1                              Algorithm 2.2
θ_i         0.5          0.7          0.9              0.5          0.7          0.9
Iter        16           11           9                16           13           15
CPU         0.0558       0.0649       0.0760           0.0628       0.0629       0.0511
ε           9.0542e-006  6.3587e-006  6.0665e-006      6.5567e-006  9.9798e-006  6.7100e-006

Table 5. Numerical results of Example 4.1 with m = 1600.

            Algorithm 2.1                              Algorithm 2.2
θ_i         0.5          0.7          0.9              0.5          0.7          0.9
Iter        16           11           9                16           13           15
CPU         0.1046       0.1029       0.1074           0.1196       0.0954       0.0902
ε           8.4954e-006  5.9521e-006  5.5399e-006      5.6791e-006  8.6436e-006  5.8025e-006

Table 6. Numerical results of Example 4.1 with m = 2000.

            Algorithm 2.1                              Algorithm 2.2
θ_i         0.5          0.7          0.9              0.5          0.7          0.9
Iter        16           11           9                16           13           15
CPU         0.1650       0.1499       0.1563           0.1397       0.1614       0.1560
ε           8.0924e-006  5.6434e-006  5.1634e-006      5.0804e-006  7.7305e-006  5.1603e-006

It is easy to verify that F is strongly monotone and Lipschitz continuous, and the gradient matrix of F has the form

    \nabla F(\xi) = M + \frac{1}{5}\,\mathrm{diag}\Big(\frac{1-\xi_1^2}{(1+\xi_1^2)^2}, \frac{1-\xi_2^2}{(1+\xi_2^2)^2}, \cdots, \frac{1-\xi_n^2}{(1+\xi_n^2)^2}\Big).

Let A_i = ∇F(x_i). In this example we choose \hat{A}_i = I_n and \hat{K}_i = B^T B; then H_i = B^T B. We take the initial guess {x_0, y_0} = {0, 0}. In our experiments, we use the Gauss–Seidel iterative method to solve the equations A_{i+1}z = Bd_i and H_i z = g_i such that (2.6) and (2.7) are satisfied with γ = 1/4 and δ_g = 1/4, respectively. For the convenience of numerical tests, we replace the norms used in (2.6) and (2.7) by the Euclidean norms of the relative residuals. The numerical results for Example 4.1 are reported in Tables 1–6; a sketch of the experimental setup is given below.
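The following Python/NumPy sketch (our own minimal illustration under the stated assumptions, not the authors' MATLAB code) shows how the data of Example 4.1 and a Gauss–Seidel inner solver can be assembled; the relative-residual tolerance 1/4 mimics the conditions γ = δ_g = 1/4 described above.

```python
import numpy as np

def example_41(m):
    """Assemble F, grad_F, B, f, g of Example 4.1 with n = 2m."""
    n = 2 * m
    Im = np.eye(m)
    Tm = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)  # tridiag(1,0,1)
    M = np.block([[2.5 * Im - 0.25 * Tm, -Im],
                  [-Im, 2.5 * Im - 0.25 * Tm]])
    B = np.vstack([np.zeros((m, m)), 2 * Im - Tm])                 # B in R^{n x m}
    F = lambda xi: M @ xi + 0.2 * xi / (1 + xi**2)
    grad_F = lambda xi: M + 0.2 * np.diag((1 - xi**2) / (1 + xi**2)**2)
    # Right-hand sides generated from the exact solution.
    x = np.ones(n)
    y = 1.0 / np.arange(1, m + 1)
    return F, grad_F, B, F(x) + B @ y, B.T @ x

def gauss_seidel(A, b, rel_tol=0.25, maxit=1000):
    """Gauss-Seidel sweeps on A z = b until the relative residual
    drops below rel_tol (mimicking gamma = delta_g = 1/4)."""
    L = np.tril(A)                                  # lower triangle incl. diagonal
    z = np.zeros_like(b)
    nb = np.linalg.norm(b)
    for _ in range(maxit):
        z = z + np.linalg.solve(L, b - A @ z)       # one Gauss-Seidel sweep
        if nb == 0 or np.linalg.norm(b - A @ z) <= rel_tol * nb:
            break
    return z
```

These helpers can be combined with the newton_uzawa sketch from Section 2, building Khat_solve from \hat{K}_i = B^T B.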

Table 7. Numerical results of Example 4.2 with l = 8.

            Algorithm 2.1                              Algorithm 2.2
θ_i         0.5          0.7          0.9              0.5          0.7          0.9
Iter        19           14           15               24           21           25
CPU         0.0001       0.0001       0.0001           0.0002       0.0001       0.0002
ε           9.7583e-006  7.7616e-006  7.3532e-006      6.7986e-006  8.9856e-006  7.1585e-006

Table 8. Numerical results of Example 4.2 with l = 16.

            Algorithm 2.1                              Algorithm 2.2
θ_i         0.5          0.7          0.9              0.5          0.7          0.9
Iter        24           22           21               30           29           44
CPU         0.0024       0.0018       0.0018           0.0074       0.0064       0.0077
ε           8.1690e-006  8.6126e-006  6.2237e-006      7.9449e-006  7.7288e-006  6.1967e-006

Table 9. Numerical results of Example 4.2 with l = 32.

            Algorithm 2.1                              Algorithm 2.2
θ_i         0.5          0.7          0.9              0.5          0.7          0.9
Iter        39           33           25               39           40           72
CPU         0.0399       0.0335       0.0327           0.3533       0.7625       1.1536
ε           7.1492e-006  5.0471e-006  8.7970e-006      8.0473e-006  6.5183e-006  8.7325e-006

As can be seen from Tables 1–6, Algorithms 2.1 and 2.2 are both efficient for solving the nonlinear saddle-point problem (1.1), and as the dimension increases the CPU time of Algorithm 2.2 becomes less than that of Algorithm 2.1. Tables 1–6 also indicate that both the number of iterations and the CPU time of Algorithms 2.1 and 2.2 depend strongly on the damping factor θ_i: the iteration count of Algorithm 2.1 is smallest when θ_i = 0.9, whereas that of Algorithm 2.2 is smallest when θ_i = 0.7.

Example 4.2. We consider the nonlinear saddle-point problem of the following form:

    F(\xi) = M\xi + \frac{1}{\sqrt{2\pi}\,\sigma}\exp\Big(\frac{\xi}{2\sigma^2}\Big), \qquad
    B = \begin{pmatrix} I \otimes W \\ W \otimes I \end{pmatrix} \in R^{2l^2 \times l^2},

where

    M = \begin{pmatrix} I \otimes T + T \otimes I & 0 \\ 0 & I \otimes T + T \otimes I \end{pmatrix} \in R^{2l^2 \times 2l^2},

    T = \frac{1}{h^2}\cdot\mathrm{tridiag}(-1, 2, -1) \in R^{l\times l}, \qquad
    W = \frac{1}{h}\cdot\mathrm{tridiag}(-1, 1, 0) \in R^{l\times l}, \qquad
    h = \frac{1}{l+1}, \quad n = 2l^2, \quad m = l^2.

The right-hand sides f and g in (1.1) are generated from system (1.1) with the exact solution

    x = (1, 1, \ldots, 1)^T, \qquad y = (1, 1, \ldots, 1)^T.

It is easy to verify that F is strongly monotone and Lipschitz continuous, and the gradient matrix of F has the form

    \nabla F(\xi) = M + \frac{1}{2\sqrt{2\pi}\,\sigma^3}\,\mathrm{diag}\Big(\exp\Big(\frac{\xi_1}{2\sigma^2}\Big), \cdots, \exp\Big(\frac{\xi_{2l^2}}{2\sigma^2}\Big)\Big),

where ξ = (ξ_1, ξ_2, ..., ξ_n)^T ∈ R^{2l^2}. Let A_i = ∇F(x_i). In this example we choose σ = 2, \hat{A}_i = I_n and \hat{K}_i = I_m; then H_i = B^T B. We take the initial guess {x_0, y_0} = {0, 0}. In our experiments, we use the Gauss–Seidel iterative method to solve the equations A_{i+1}z = Bd_i and H_i z = g_i such that (2.6) and (2.7) are satisfied with γ = 1/4 and δ_g = 1/4, respectively. For the convenience of numerical tests, we replace the norms used in (2.6) and (2.7) by the Euclidean norms of the relative residuals.

From Tables 7–9 we see that the numerical results of Example 4.2 are affected more noticeably by the factor θ_i. Although we choose \hat{K}_i = I_m, Algorithm 2.1 is still quite efficient. A sketch of the matrix assembly for this example follows.
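For reference, here is a small Python/NumPy sketch (again our own illustration, not the paper's code) of how the Kronecker-product operators of Example 4.2 can be assembled:

```python
import numpy as np

def example_42(l, sigma=2.0):
    """Assemble M, B, F, grad_F of Example 4.2; n = 2*l**2, m = l**2."""
    h = 1.0 / (l + 1)
    I = np.eye(l)
    T = (2 * np.eye(l) - np.diag(np.ones(l - 1), 1)
         - np.diag(np.ones(l - 1), -1)) / h**2          # tridiag(-1,2,-1)/h^2
    W = (np.eye(l) - np.diag(np.ones(l - 1), -1)) / h   # tridiag(-1,1,0)/h
    Lap = np.kron(I, T) + np.kron(T, I)                 # 2-D Laplacian block
    M = np.block([[Lap, np.zeros_like(Lap)],
                  [np.zeros_like(Lap), Lap]])
    B = np.vstack([np.kron(I, W), np.kron(W, I)])       # B in R^{2l^2 x l^2}
    coef = 1.0 / (np.sqrt(2 * np.pi) * sigma)
    F = lambda xi: M @ xi + coef * np.exp(xi / (2 * sigma**2))
    grad_F = lambda xi: M + coef / (2 * sigma**2) * np.diag(np.exp(xi / (2 * sigma**2)))
    return M, B, F, grad_F
```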


5. Concluding remarks

In this paper we propose Algorithms 2.1 and 2.2 for solving the nonlinear saddle-point system (1.1). As is well known, the cost of the linear solves grows with the dimension, and equations (2.8) in Algorithm 2.1 and (2.11) in Algorithm 2.2 are sometimes not easy to solve exactly. Therefore, we can also introduce a mapping \mathcal{A}_i : R^n \to R^n satisfying

    \|\mathcal{A}_i(f_i) - A_i^{-1} f_i\|_{A_i} \le \gamma\,\|A_i^{-1} f_i\|_{A_i}

for some γ ∈ [0, 1). Then Algorithms 2.1 and 2.2 can be modified as follows.

Algorithm 5.1. Given {x_0, y_0} ∈ R^n × R^m, the sequence {x_i, y_i} ∈ R^n × R^m is defined for i = 1, 2, ... as follows:

Step 1. Compute f_i = f - F(x_i) - B y_i and s_i = \mathcal{A}_i(f_i). Then compute the relaxation parameter

    \nu_i = \begin{cases} \dfrac{\langle f_i, s_i \rangle}{\langle A_i s_i, s_i \rangle}, & s_i \ne 0, \\ 0, & s_i = 0. \end{cases}

Update

    x_{i+1} = x_i + \nu_i s_i.

Step 2. Compute g_i = B^T x_{i+1} - g and d_i = \hat{K}_i^{-1} g_i. Then compute the relaxation parameter

    \tau_i = \begin{cases} \dfrac{\langle g_i, d_i \rangle}{\langle \mathcal{A}_{i+1}(B d_i), B d_i \rangle}, & g_i \ne 0, \\ 0, & g_i = 0. \end{cases}

Update

    y_{i+1} = y_i + \theta_i \tau_i d_i,

where θ_i > 0 is a given damping factor.

Algorithm 5.2. Given {x_0, y_0} ∈ R^n × R^m, the sequence {x_i, y_i} ∈ R^n × R^m is defined for i = 1, 2, ... as follows:

Step 1. Compute f_i = f - F(x_i) - B y_i and s_i = \mathcal{A}_i(f_i). Then compute the relaxation parameter

    \nu_i = \begin{cases} \dfrac{\langle f_i, s_i \rangle}{\langle A_i s_i, s_i \rangle}, & s_i \ne 0, \\ 0, & s_i = 0. \end{cases}

Update

    x_{i+1} = x_i + \nu_i s_i.

Step 2. Compute g_i = B^T x_{i+1} - g and d_i = \mathcal{H}_i(g_i). Then compute the relaxation parameter

    \tau_i = \begin{cases} \dfrac{\langle g_i, d_i \rangle}{\langle \mathcal{A}_{i+1}(B d_i), B d_i \rangle}, & g_i \ne 0, \\ 0, & g_i = 0. \end{cases}

Update

    y_{i+1} = y_i + \theta_i \tau_i d_i,

where θ_i > 0 is a given damping factor.

However, the convergence analysis of the two algorithms above is difficult; this is left to future work.

References

[1] H.C. Elman, A. Ramage, D.J. Silvester, Algorithm 866: IFISS, a MATLAB toolbox for modelling incompressible flow, ACM Trans. Math. Software 33 (2007) 1–18.
[2] R. Glowinski, Numerical Methods for Nonlinear Variational Problems, Springer-Verlag, New York, 1984.
[3] H.C. Elman, Preconditioners for saddle point problems arising in computational fluid dynamics, Appl. Numer. Math. 43 (2002) 75–89.
[4] H.C. Elman, D.J. Silvester, A.J. Wathen, Performance and analysis of saddle point preconditioners for the discrete steady-state Navier–Stokes equations, Numer. Math. 90 (2002) 665–688.
[5] E. Haber, U.M. Ascher, D. Oldenburg, On optimization techniques for solving nonlinear inverse problems, Inverse Probl. 16 (2000) 1263–1280.
[6] F. Brezzi, M. Fortin, Mixed and Hybrid Finite Element Methods, Springer-Verlag, New York and London, 1991.
[7] I. Perugia, V. Simoncini, Block-diagonal and indefinite symmetric preconditioners for mixed finite element formulations, Numer. Linear Algebra Appl. 7 (2000) 585–616.
[8] G.E. Sartoris, A 3D rectangular mixed finite element method to solve the stationary semiconductor equations, SIAM J. Sci. Stat. Comput. 19 (1998) 387–403.
[9] S. Selberherr, Analysis and Simulation of Semiconductor Devices, Springer-Verlag, New York, 1984.
[10] A. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, 1996.
[11] J.H. Bramble, J.E. Pasciak, A.T. Vassilev, Analysis of the inexact Uzawa algorithm for saddle point problems, SIAM J. Numer. Anal. 34 (1997) 1072–1092.


[12] H.C. Elman, G.H. Golub, Inexact and preconditioned Uzawa algorithms for saddle point problems, SIAM J. Numer. Anal. 31 (1994) 1645–1661.
[13] M. Fortin, R. Glowinski, Augmented Lagrangian Methods: Applications to the Numerical Solution of Boundary-Value Problems, Studies in Mathematics and Its Applications, vol. 15, North-Holland, 1983.
[14] W. Zulehner, Analysis of iterative methods for saddle point problems: a unified approach, Math. Comp. 71 (2002) 479–505.
[15] G.-F. Zhang, J.-L. Yang, S.-S. Wang, On generalized parameterized inexact Uzawa method for a block two-by-two linear system, J. Comput. Appl. Math. 255 (2014) 193–207.
[16] M. Benzi, Solution of equality-constrained quadratic programming problems by a projection iterative method, Rend. Mat. Appl. 13 (1993) 275–296.
[17] R.E. Bank, B.D. Welfert, H. Yserentant, A class of iterative methods for solving saddle point problems, Numer. Math. 56 (1990) 645–666.
[18] E. de Sturler, J. Liesen, Block-Diagonal Preconditioners for Indefinite Linear Algebraic Systems, Report UIUCDCS-R-2002-2279, University of Illinois, Champaign, IL, 2002.
[19] A. Klawonn, Block-triangular preconditioners for saddle point problems with a penalty term, SIAM J. Sci. Comput. 19 (1998) 172–184.
[20] M.F. Murphy, G.H. Golub, A.J. Wathen, A note on preconditioning for indefinite linear systems, SIAM J. Sci. Comput. 21 (2000) 1969–1972.
[21] T. Rusten, R. Winther, A preconditioned iterative method for saddle point problems, SIAM J. Matrix Anal. Appl. 13 (1992) 887–904.
[22] M. Arioli, G. Manzini, A null space algorithm for mixed finite-element approximations of Darcy's equation, Comm. Numer. Meth. Eng. 18 (2002) 645–657.
[23] N.I.M. Gould, M.E. Hribar, J. Nocedal, On the solution of equality constrained quadratic programming problems arising in optimization, SIAM J. Sci. Comput. 23 (2001) 1376–1395.
[24] V. Sarin, A. Sameh, An efficient iterative method for the generalized Stokes problem, SIAM J. Sci. Comput. 19 (1998) 206–226.
[25] N. Dyn, W.E. Ferguson Jr., The numerical solution of equality constrained quadratic programming problems, Math. Comp. 41 (1983) 165–170.
[26] G.H. Golub, A.J. Wathen, An iteration for indefinite systems and its application to the Navier–Stokes equations, SIAM J. Sci. Comput. 19 (1998) 530–539.
[27] G.H. Golub, X. Wu, J.-Y. Yuan, SOR-like methods for augmented systems, BIT 41 (2001) 71–85.
[28] Z.-Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003) 603–626.
[29] Z.-Z. Bai, G.H. Golub, L.-Z. Lu, J.-F. Yin, Block triangular and skew-Hermitian splitting method for positive-definite linear systems, SIAM J. Sci. Comput. 26 (2005) 844–863.
[30] Z.-Z. Bai, G.H. Golub, J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math. 98 (2004) 1–32.
[31] J. Bramble, J. Pasciak, A preconditioning technique for indefinite systems resulting from mixed approximations of elliptic problems, Math. Comp. 50 (1988) 1–18.
[32] X. Chen, On preconditioned Uzawa methods and SOR methods for saddle-point problems, J. Comput. Appl. Math. 100 (1998) 207–224.
[33] A. Klawonn, An optimal preconditioner for a class of saddle point problems with a penalty term, SIAM J. Sci. Comput. 19 (1998) 540–552.
[34] W. Queck, The convergence factor of preconditioned algorithms of the Arrow–Hurwicz type, SIAM J. Numer. Anal. 26 (1989) 1016–1030.
[35] Q.-Y. Hu, J. Zou, An iterative method with variable relaxation parameters for saddle-point problems, SIAM J. Matrix Anal. Appl. 23 (2001) 317–338.
[36] Q.-Y. Hu, J. Zou, Two new variants of nonlinear inexact Uzawa algorithms for saddle-point problems, Numer. Math. 93 (2002) 333–359.
[37] Q.-Y. Hu, J. Zou, Nonlinear inexact Uzawa algorithms for linear and nonlinear saddle-point problems, SIAM J. Optim. 16 (2006) 798–825.
[38] R. Fletcher, Practical Methods of Optimization, 2nd ed., John Wiley, Chichester, 1987.
[39] P. Tseng, Applications of a splitting algorithm to decomposition in convex programming and variational inequalities, SIAM J. Control Optim. 29 (1991) 119–138.
[40] K.H. Chan, K. Zhang, J. Zou, G. Schubert, A non-linear, 3-D spherical α² dynamo using a finite element method, Phys. Earth Planet. Int. 128 (2001) 35–50.
[41] Z. Chen, Q. Du, J. Zou, Finite element methods with matching and nonmatching meshes for Maxwell equations with discontinuous coefficients, SIAM J. Numer. Anal. 37 (2000) 1542–1570.
[42] Z. Chen, J. Zou, An augmented Lagrangian method for identifying discontinuous parameters in elliptic systems, SIAM J. Control Optim. 37 (1999) 892–910.
[43] P.G. Ciarlet, Introduction to Numerical Linear Algebra and Optimization, Cambridge University Press, Cambridge, 1989.
[44] X. Chen, Global and superlinear convergence of inexact Uzawa methods for saddle point problems with nondifferentiable mappings, SIAM J. Numer. Anal. 35 (1998) 1130–1148.
[45] J.W. Schmidt, W. Hoyer, C. Haufe, Consistent approximations in Newton-type decomposition methods, Numer. Math. 47 (1985) 413–425.
[46] F.H. Clarke, Optimization and Nonsmooth Analysis, John Wiley, New York, 1983.
[47] L. Qi, J. Sun, A nonsmooth version of Newton's method, Math. Program. 58 (1993) 353–368.
[48] M. Wang, M. Wei, S. Hu, The extremal solution of the matrix equation X^s + A^* X^{-q} A = I, Appl. Math. Comput. 220 (2013) 193–199.
[49] Z. Tong, A. Sameh, On an iterative method for saddle point problems, Numer. Math. 79 (1998) 643–646.
[50] X.L. Cheng, On the nonlinear inexact Uzawa algorithm for saddle-point problems, SIAM J. Numer. Anal. 37 (2000) 1930–1934.
Zou, An iterative method with variable relaxation parameters for saddle-point problems, SIAM J. Matrix Anal. Appl. 23 (2001) 317–338. [36] Q.-Y. Hu, J. Zou, Two new variants of nonlinear inexact Uzawa algorithms for saddle-point problems, Numer. Math. 93 (2002) 333–359. [37] Q.-Y. Hu, J. Zou, Nonlinear inexact Uzawa algorithms for linear and nonlinear saddle-point problems, SIAM J. OPTIM. 16 (2006) 798–825. [38] R. Fletcher, Practical Methods of Optimization, 2nd ed., John Wiley, Chichester, 1987. [39] P. Tseng, Applications of a splitting algorithm to decomposition in convex programming and variational inequalities, SIAM J. Control Optim. 29 (1991) 119–138. [40] K.H. Chan, K. Zhang, J. Zou, G. schubert, a non-linear, 3-d spherical α 2 dynamo using a finite element method, Phys. Earth Planet. Int. 128 (2001) 35–50. [41] Z. Chen, Q. Du, J. Zou, Finite element methods with matching and nonmatching meshes for maxwell equations with discontinuous coefficients, SIAM J. Numer. Anal. 37 (2000) 1542–1570. [42] Z. Chen, J. Zou, An augmented lagrangian method for identifying discontinuous parameters in elliptic systems, SIAM J. Control Optim. 37 (1999) 892–910. [43] P.G. Ciarlet, Introduction to numerical linear algebra and optimization, Cambirdge University press, Cambridge, 1989. [44] X. Chen, Global and superlinear convergence of inexact Uzawa methods for saddle point problems with nondifferentiable mappings, SIAM J. Numer. Anal. 35 (1998) 1130–1148. [45] J.W. Schmidt, W. Hoyer, C. Haufe, Consistent approximations in newton-type decomposition methods, Numer. Math. 47 (1985) 413–425. [46] F.H. Clarke, Optimization and Nonsmooth Analysis, John Wiley, New York, 1983. [47] L. Qi, J. Sun, A nonsmooth version of newton’s method, Math. Program. 58 (1993) 353–368. [48] Minghui Wang, Musheng Wei, Shanrui Hu, The extremal solution of the matrix equation xs + a∗ x−q a = i, Appl. Math. Comput. 220 (2013) 193–199. [49] Z. Tong, A. Sameh, On an iterative method for saddle point problems, Numer. Math. 79 (1998) 643–646. [50] X.L. Cheng, On the nonlinear inexact Uzawa algorithm for saddle-point problems, SIAM J. Numer. Anal. 37 (2000) 1930–1934.