Acta Mathematica Scientia 2017,37B(2):342–354 http://actams.wipm.ac.cn

IMPROVED GRADIENT METHOD FOR MONOTONE AND LIPSCHITZ CONTINUOUS MAPPINGS IN BANACH SPACES∗

Kazuhide NAKAJO
Sundai Preparatory School, Surugadai, Kanda, Chiyoda-ku, Tokyo 101-8313, Japan
E-mail: [email protected]

Abstract  Let C be a nonempty closed convex subset of a 2-uniformly convex and uniformly smooth Banach space E and {An}_{n∈N} be a family of monotone and Lipschitz continuous mappings of C into E∗. In this article, we consider the improved gradient method by the hybrid method in mathematical programming [10] for solving the variational inequality problem for {An} and prove strong convergence theorems. We also obtain several results which improve well-known results in a real 2-uniformly convex and uniformly smooth Banach space and in a real Hilbert space.

Key words

Variational inequality problem; gradient method; monotone operators; 2-uniformly convex Banach space; hybrid method

2010 MR Subject Classification  49J40; 47J25; 47H05

1 Introduction

Throughout this article, let N and R be the set of all positive integers and the set of all real numbers, respectively, let E be a real Banach space with norm ‖·‖, and let E∗ be the dual of E. For x ∈ E and x∗ ∈ E∗, let ⟨x, x∗⟩ be the value of x∗ at x. Suppose that C is a nonempty closed convex subset of E and A is a monotone operator of C into E∗, that is, ⟨x − y, Ax − Ay⟩ ≥ 0 for all x, y ∈ C. Then, we consider the variational inequality problem [9, 18], that is, the problem of finding an element z ∈ C such that ⟨x − z, Az⟩ ≥ 0 (∀x ∈ C). The set of all solutions of the variational inequality problem for A is denoted by V I(C, A). Goldstein [8] (see [17]) introduced the projection method called the gradient projection method for solving the variational inequality problem. Its iterative scheme is x1 ∈ C, xn+1 = PC(xn − λAxn) (∀n ∈ N), where PC is the metric projection onto C and λ > 0. The projection method was studied by many researchers; see [32] for more details. Let A be a mapping of C into E∗. A is said to be Lipschitz continuous if there exists a constant L > 0 such that ‖Ax − Ay‖ ≤ L‖x − y‖

∗ Received June 5, 2015.
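Goldstein's gradient projection scheme is easy to run in a finite-dimensional setting. Below is a minimal illustrative sketch in R², with hypothetical choices that are ours and not from the paper: A(x) = x − b (inverse strongly monotone with α = 1) and C = [0, 1]², so that PC is coordinatewise clipping and the solution of V I(C, A) is PC(b).

```python
# Gradient projection method: x_{n+1} = P_C(x_n - lam * A(x_n)).
# Illustrative setup (not from the paper): A(x) = x - b, C = [0,1]^2.

def proj_box(x):
    """Metric projection onto C = [0,1]^2 (clip each coordinate)."""
    return tuple(min(1.0, max(0.0, t)) for t in x)

def A(x, b=(2.0, 0.5)):
    """A(x) = x - b: Lipschitz (L = 1), inverse strongly monotone (alpha = 1)."""
    return tuple(xi - bi for xi, bi in zip(x, b))

def gradient_projection(x0, lam=0.5, steps=60):
    # lam must lie in (0, 2*alpha) for convergence of this method
    x = x0
    for _ in range(steps):
        ax = A(x)
        x = proj_box(tuple(xi - lam * ai for xi, ai in zip(x, ax)))
    return x

z = gradient_projection((0.0, 0.0))
# z approximates the solution P_C(b) = (1.0, 0.5) of VI(C, A)
```

The returned point satisfies the variational inequality ⟨x − z, Az⟩ ≥ 0 for all x ∈ C, which for a box it suffices to check at the corners.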


holds for all x, y ∈ C, and A is called inverse strongly monotone [3, 6, 19, 22] if there exists a constant α > 0 such that ⟨x − y, Ax − Ay⟩ ≥ α‖Ax − Ay‖² holds for every x, y ∈ C. We know that an inverse strongly monotone operator is monotone and Lipschitz continuous. Let A be an inverse strongly monotone operator with a constant α > 0 of a nonempty closed convex subset C of a real Hilbert space H into H such that V I(C, A) ≠ ∅. Iiduka, Takahashi, and Toyoda [11] considered the modified gradient method by the hybrid method in mathematical programming [10] (see also [4, 23, 28]):

  x1 = x ∈ C,
  yn = PC(xn − λn Axn),
  Cn = {u ∈ C : ‖u − yn‖ ≤ ‖u − xn‖},
  Qn = {u ∈ C : ⟨xn − u, x − xn⟩ ≥ 0},
  xn+1 = P_{Cn∩Qn} x

for every n ∈ N, where 0 < a ≤ λn ≤ b < 2α (∀n ∈ N), and proved that {xn} converges strongly to z = P_{V I(C,A)} x; see also [24, 25]. In a 2-uniformly convex and uniformly smooth Banach space E, Iiduka and Takahashi [13] introduced the following iterative scheme for an inverse strongly monotone operator A with a constant α > 0 of E into E∗ such that A⁻¹0 ≠ ∅:

  x1 = x ∈ E,
  yn = J⁻¹(Jxn − λn Axn),
  Cn = {u ∈ E : φ(u, yn) ≤ φ(u, xn)},
  Qn = {u ∈ E : ⟨xn − u, Jx − Jxn⟩ ≥ 0},
  xn+1 = Π_{Cn∩Qn} x

for every n ∈ N, where J is the duality mapping of E, φ(x, y) = ‖x‖² − 2⟨x, Jy⟩ + ‖y‖² for every x, y ∈ E, and ΠC is the generalized projection of E onto a nonempty closed convex subset C of E. Then, they proved that if a ≤ λn ≤ αc1 for some a with 0 < a ≤ αc1, where c1 is the constant in Theorem 2.1, then {xn} converges strongly to Π_{A⁻¹0} x. Recently, in a 2-uniformly convex Banach space E whose norm is uniformly Gâteaux differentiable and for a family of mappings which extend inverse strongly monotone operators, Kimura and Nakajo [15] proved strong convergence by the hybrid method, which generalizes the result of [13]. On the other hand, let A be a monotone and Lipschitz continuous mapping with a constant L > 0 of a nonempty closed convex subset C of E into E∗ such that F = V I(C, A) ≠ ∅. In a finite dimensional Euclidean space Rⁿ, Korpelevich [16] introduced the following so-called extragradient method: x1 = x ∈ C, xn+1 = PC(xn − λA(PC(xn − λAxn))) (∀n ∈ N), where λ ∈ (0, 1/L). He showed that {xn} converges to an element of F. Popov [27] proposed the method called the modified extragradient method and proved a convergence theorem. In a real Hilbert space H, Nadezhkina, Nakajo, and Takahashi [21] considered the improved


extragradient method by the hybrid method:

  x1 = x ∈ C,
  yn = PC(xn − λn Axn),
  zn = αn xn + (1 − αn)PC(xn − λn Ayn),
  Cn = {u ∈ C : ‖u − zn‖ ≤ ‖u − xn‖},
  Qn = {u ∈ C : ⟨xn − u, x − xn⟩ ≥ 0},
  xn+1 = P_{Cn∩Qn} x

for every n ∈ N, where 0 < a ≤ αn ≤ b < 1 and 0 < c ≤ λn ≤ d < 1/L for all n ∈ N. They proved that {xn}, {yn}, and {zn} converge strongly to PF x. In this article, motivated by the results of [15, 21], we consider the improved gradient method (not the extragradient method) by the hybrid method in a 2-uniformly convex and uniformly smooth Banach space E for solving the variational inequality problem for a family of monotone and Lipschitz continuous mappings and prove strong convergence theorems. We also obtain several results which improve well-known results in a 2-uniformly convex and uniformly smooth Banach space and in a Hilbert space.
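The extra projection step in Korpelevich's method matters precisely when A is monotone but not inverse strongly monotone. A standard textbook illustration (ours, not from this paper) is the 90-degree rotation operator on R² with C = R² (so PC is the identity and the unique solution is 0): the plain gradient iteration diverges while the extragradient iteration converges.

```python
# Extragradient vs. plain gradient for the monotone, 1-Lipschitz
# rotation operator A(x1, x2) = (-x2, x1) on R^2 (C = R^2, solution: 0).

def A(x):
    return (-x[1], x[0])

def norm(x):
    return (x[0] ** 2 + x[1] ** 2) ** 0.5

def step_gradient(x, lam):
    ax = A(x)
    return (x[0] - lam * ax[0], x[1] - lam * ax[1])

def step_extragradient(x, lam):
    y = step_gradient(x, lam)   # predictor: y_n = x_n - lam * A(x_n)
    ay = A(y)
    return (x[0] - lam * ay[0], x[1] - lam * ay[1])   # corrector uses A(y_n)

lam = 0.5                       # lam in (0, 1/L) with L = 1
xg = xe = (1.0, 1.0)
for _ in range(100):
    xg = step_gradient(xg, lam)
    xe = step_extragradient(xe, lam)
```

For this operator each gradient step multiplies the norm by √(1 + λ²) > 1, while each extragradient step multiplies it by √(1 − λ² + λ⁴) < 1, so xg blows up and xe tends to 0.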

2 Preliminaries

Throughout this article, we write xn ⇀ x to indicate that a sequence {xn} converges weakly to x, and xn → x will symbolize strong convergence. We define a function δE : [0, 2] → [0, 1], called the modulus of convexity of E, as follows: δE(ε) = inf{1 − ‖x + y‖/2 : x, y ∈ E, ‖x‖ = 1, ‖y‖ = 1, ‖x − y‖ ≥ ε} for every ε ∈ [0, 2]. E is said to be uniformly convex if δE(ε) > 0 for each ε > 0. Let p > 1. E is called p-uniformly convex if there exists a constant c > 0 such that δE(ε) ≥ cε^p for every ε ∈ [0, 2]. It is obvious that a p-uniformly convex Banach space is uniformly convex. E is said to be strictly convex if ‖(x + y)/2‖ < 1 for all x, y ∈ E with ‖x‖ = ‖y‖ = 1 and x ≠ y. It is known that a uniformly convex Banach space is strictly convex and reflexive. The duality mapping J : E → 2^{E∗} of E is defined by J(x) = {f ∈ E∗ : ⟨x, f⟩ = ‖x‖² = ‖f‖²} for every x ∈ E. We know that if E is strictly convex and reflexive, then the duality mapping J of E is onto and one-to-one, and J⁻¹ : E∗ → 2^E is the duality mapping of E∗. E is called smooth if the limit

  lim_{t→0} (‖x + ty‖ − ‖x‖)/t    (2.1)

exists for every x, y ∈ S(E), where S(E) = {x ∈ E : ‖x‖ = 1}. The norm of E is called uniformly Gâteaux differentiable if for each y ∈ S(E), the limit (2.1) is attained uniformly for x ∈ S(E). E is said to be uniformly smooth if the limit (2.1) is attained uniformly for (x, y) in S(E) × S(E). It is known that E∗ is uniformly convex if and only if E is uniformly smooth. We know that the duality mapping J of E is single valued if and only if E is smooth. And it is known that if E is uniformly smooth, then the duality mapping J of E is uniformly continuous


on bounded subsets of E; see [29, 30] for more details. The following was proved by Xu [33]; see also [12, 13, 34].

Theorem 2.1  Let E be a smooth Banach space. Then, the following are equivalent.
(i) E is 2-uniformly convex.
(ii) There exists a constant c1 > 0 such that ‖x + y‖² ≥ ‖x‖² + 2⟨y, Jx⟩ + c1‖y‖² holds for each x, y ∈ E.

Remark 2.2  The duality mapping J of a real Hilbert space H is the identity mapping I. So, we can choose c1 = 1.

Let E be a smooth Banach space. The function φ : E × E → R is defined by φ(x, y) = ‖x‖² − 2⟨x, Jy⟩ + ‖y‖² for every x, y ∈ E. We have (‖x‖ − ‖y‖)² ≤ φ(x, y) ≤ (‖x‖ + ‖y‖)² for each x, y ∈ E and φ(z, x) + φ(x, y) = φ(z, y) + 2⟨x − z, Jx − Jy⟩ for all x, y, z ∈ E. And it is known that if E is strictly convex and smooth, then, for x, y ∈ E, φ(x, y) = 0 if and only if x = y; see also [20]. We get the following result by Theorem 2.1.

Lemma 2.3  Let E be a 2-uniformly convex and smooth Banach space. Then, for each x, y ∈ E, φ(x, y) ≥ c1‖x − y‖² and ⟨x − y, Jx − Jy⟩ ≥ c1‖x − y‖² hold, where c1 is the constant in Theorem 2.1.

Proof  Let x, y ∈ E. From Theorem 2.1, we have

  φ(x, y) = ‖x‖² − ‖y‖² − 2⟨x − y, Jy⟩ ≥ c1‖x − y‖²,

where c1 is the constant in Theorem 2.1. So, we get

  ⟨x − y, Jx − Jy⟩ = (1/2)(‖x‖² − 2⟨x, Jy⟩ + ‖y‖²) + (1/2)(‖y‖² − 2⟨y, Jx⟩ + ‖x‖²)
                  = (1/2)(φ(x, y) + φ(y, x))
                  ≥ c1‖x − y‖².  □

Let C be a nonempty closed convex subset of a strictly convex, reflexive and smooth Banach space E and let x ∈ E. Then, there exists a unique element x0 ∈ C such that φ(x0, x) = inf_{y∈C} φ(y, x). We denote x0 by ΠC x and call ΠC the generalized projection of E onto C; see [1, 2, 14]. In a real Hilbert space H, the generalized projection ΠC is equal to the metric projection of H onto C. We have the following well-known result [1, 2, 14] for the generalized projection.

Lemma 2.4  Let C be a nonempty convex subset of a smooth Banach space E, x ∈ E and x0 ∈ C. Then, φ(x0, x) = inf_{y∈C} φ(y, x) if and only if ⟨x0 − z, Jx − Jx0⟩ ≥ 0 for all z ∈ C.

Let C be a nonempty closed convex subset of a strictly convex and reflexive Banach space E and let x ∈ E. Then, there exists a unique element x0 ∈ C such that ‖x0 − x‖ = inf_{y∈C} ‖y − x‖. Putting x0 = PC(x), we call PC the metric projection of E onto C; see [7]. And we have the following result for the metric projection; see [29] for more details.
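In a real Hilbert space J = I, so φ(x, y) = ‖x − y‖², and both the three-point identity for φ and Lemma 2.3 (with c1 = 1) can be verified by direct computation. A small numerical sanity check in R² (illustrative only; the vectors are arbitrary):

```python
# Check, in a Hilbert space (J = I), that
#   phi(x, y) = |x|^2 - 2<x, y> + |y|^2 = |x - y|^2,
#   phi(z, x) + phi(x, y) = phi(z, y) + 2<x - z, x - y>,
# and Lemma 2.3 with c1 = 1 (here with equality).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def phi(x, y):
    return dot(x, x) - 2 * dot(x, y) + dot(y, y)

x, y, z = (1.0, 2.0), (-0.5, 3.0), (4.0, 0.25)
diff = tuple(a - b for a, b in zip(x, y))

lhs = phi(z, x) + phi(x, y)
rhs = phi(z, y) + 2 * dot(tuple(a - b for a, b in zip(x, z)), diff)
```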


Lemma 2.5  Let C be a nonempty closed convex subset of a strictly convex, reflexive and smooth Banach space E, x ∈ E and x0 ∈ C. Then, x0 = PC x if and only if ⟨x0 − z, J(x − x0)⟩ ≥ 0 for all z ∈ C.
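As a concrete instance of the notions in this section (a worked example of ours, not part of the paper): a real Hilbert space H is 2-uniformly convex, since the parallelogram law gives the modulus of convexity in closed form,

```latex
\delta_H(\varepsilon)
  = 1 - \sqrt{1 - \tfrac{\varepsilon^2}{4}}
  \;\ge\; \tfrac{\varepsilon^2}{8}
  \qquad (0 \le \varepsilon \le 2),
```

because for ‖x‖ = ‖y‖ = 1 with ‖x − y‖ ≥ ε the parallelogram law gives ‖x + y‖² = 4 − ‖x − y‖² ≤ 4 − ε², and √(1 − t) ≤ 1 − t/2. This is consistent with Remark 2.2, where one may take c1 = 1.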

3 Strong Convergence by the Metric Projection

Let C be a nonempty closed convex subset of E, I be a countable set, and {Ai}_{i∈I} be a family of mappings of C into E∗ satisfying the following:
(i) F = ∩_{i∈I} V I(C, Ai) ≠ ∅;
(ii) Ai is a monotone operator for every i ∈ I, that is, ⟨x − y, Ai x − Ai y⟩ ≥ 0 for all i ∈ I and x, y ∈ C;
(iii) Ai is Lipschitz continuous for each i ∈ I, that is, there exists a family {Li}_{i∈I} in (0, ∞) such that sup_{i∈I} Li < ∞ and ‖Ai x − Ai y‖ ≤ Li‖x − y‖ for every i ∈ I and x, y ∈ C;
(iv) for all z ∈ F, sup_{i∈I} ‖Ai z‖ < ∞.
By the condition (iii), the condition (iv) is equivalent to the condition (v): there exists a z ∈ F such that sup_{i∈I} ‖Ai z‖ < ∞. We know that F is closed and convex. In fact, let z1, z2 ∈ F, β ∈ (0, 1), and z = βz1 + (1 − β)z2. By the condition (ii) and z1, z2 ∈ V I(C, Ai) for all i ∈ I, we have ⟨x − z1, Ai x⟩ ≥ ⟨x − z1, Ai z1⟩ ≥ 0 and ⟨x − z2, Ai x⟩ ≥ ⟨x − z2, Ai z2⟩ ≥ 0 for every i ∈ I and x ∈ C. So, we get ⟨x − z, Ai x⟩ = β⟨x − z1, Ai x⟩ + (1 − β)⟨x − z2, Ai x⟩ ≥ 0 for each i ∈ I and x ∈ C. As Ai is hemicontinuous (that is, the real valued function t ↦ ⟨w, Ai(tv + (1 − t)u)⟩ is continuous on [0, 1] for all u, v ∈ C and w ∈ E) by the condition (iii), we get ⟨x − z, Ai z⟩ ≥ 0 for all i ∈ I and x ∈ C, that is, z ∈ V I(C, Ai) for each i ∈ I. Therefore, F is convex. Next, let {zm}_{m∈N} be a sequence in F such that zm → z. As we have ⟨x − zm, Ai x⟩ ≥ ⟨x − zm, Ai zm⟩ ≥ 0 for every i ∈ I, m ∈ N, and x ∈ C by the condition (ii) and zm ∈ V I(C, Ai) for all i ∈ I and m ∈ N, we get ⟨x − z, Ai x⟩ ≥ 0 for every i ∈ I and x ∈ C. As Ai is hemicontinuous, we get ⟨x − z, Ai z⟩ ≥ 0 for each i ∈ I and x ∈ C, that is, z ∈ V I(C, Ai) for all i ∈ I. Therefore, F is closed. Now, we propose the improved gradient method by the hybrid method which uses the metric projection and get the following new result in a 2-uniformly convex and uniformly smooth Banach space E.

Theorem 3.1  Let C be a nonempty closed convex subset of a 2-uniformly convex and uniformly smooth Banach space E, I be a countable set, and {Ai}_{i∈I} be a family of mappings of C into E∗ satisfying the conditions (i)–(iv). Let i be a mapping of N to I satisfying the condition (NST) [26] (that is, there exists a subsequence {nk} of {n}_{n∈N} such that for any i ∈ I, there is a constant Mi ∈ N with i ∈ {i(nk), i(nk + 1), · · · , i(nk + Mi − 1)} for all large enough k ∈ N) and {λn}_{n∈N} be a sequence in (0, ∞) which satisfies 0 < inf_{n∈N} λn ≤ sup_{n∈N} λn < ∞ and sup_{n∈N} λn Li(n) < c1, where c1 is the constant in Theorem 2.1. Let x ∈ C and {xn}_{n∈N} be a


sequence in C generated by

  x1 = x,
  yn = PC(xn − λn J⁻¹Ai(n) xn),
  Cn = {z ∈ C : ⟨yn − z, J(xn − yn − λn J⁻¹Ai(n) xn) + λn Ai(n) yn⟩ ≥ 0},
  Qn = {z ∈ C : ⟨xn − z, J(x − xn)⟩ ≥ 0},
  xn+1 = P_{Cn∩Qn} x

for each n ∈ N. Then, {xn} and {yn} converge strongly to PF x.
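A useful observation (ours, not stated in the paper): in the Hilbert-space case J = I with C = H, both constraint sets are halfspaces, which is what makes the scheme computable. Indeed, the defining inequality of Cn rewrites as

```latex
C_n = \{\, z : \langle y_n - z,\; x_n - y_n - \lambda_n A_{i(n)}x_n + \lambda_n A_{i(n)}y_n \rangle \ge 0 \,\}
    = \{\, z : \langle z, a_n \rangle \le \langle y_n, a_n \rangle \,\},
\qquad a_n = x_n - y_n - \lambda_n\bigl(A_{i(n)}x_n - A_{i(n)}y_n\bigr),
```

and Qn is the halfspace with normal x − xn passing through xn, so xn+1 can be obtained by a small quadratic program (the projection of x onto the intersection of two halfspaces).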

Proof  As E∗ is uniformly smooth, the duality mapping J⁻¹ of E∗ is single valued. Cn and Qn are closed and convex for every n ∈ N. Next, we show that F ⊂ Cn for all n ∈ N. Let z ∈ F; we have ⟨yn − z, Ai(n) yn⟩ ≥ ⟨yn − z, Ai(n) z⟩ ≥ 0 for each n ∈ N from the condition (ii). By Lemma 2.5, we get 0 ≤ ⟨yn − z, J(xn − yn − λn J⁻¹Ai(n) xn)⟩ and hence,

  0 ≤ ⟨yn − z, J(xn − yn − λn J⁻¹Ai(n) xn)⟩ + λn⟨yn − z, Ai(n) yn⟩
    = ⟨yn − z, J(xn − yn − λn J⁻¹Ai(n) xn) + λn Ai(n) yn⟩

for every n ∈ N, that is, F ⊂ Cn. From this, we get F ⊂ Cn ∩ Qn for every n ∈ N and {xn} is well-defined. In fact, x1 = x ∈ C is given and from Q1 = C, F ⊂ C1 ∩ Q1. Assume that xk is well-defined and F ⊂ Ck ∩ Qk for some k ∈ N. There exists a unique element xk+1 = P_{Ck∩Qk} x and we get ⟨xk+1 − z, J(x − xk+1)⟩ ≥ 0 for all z ∈ Ck ∩ Qk by Lemma 2.5. From F ⊂ Ck ∩ Qk, ⟨xk+1 − z, J(x − xk+1)⟩ ≥ 0 for every z ∈ F, that is, F ⊂ Qk+1. So, we obtain F ⊂ Ck+1 ∩ Qk+1. By mathematical induction, we get F ⊂ Cn ∩ Qn for every n ∈ N and {xn} is well-defined. By xn+1 = P_{Cn∩Qn} x and F ⊂ Cn ∩ Qn, we have

  ‖xn+1 − x‖ ≤ ‖PF x − x‖    (3.1)

for every n ∈ N, which implies that {xn} is bounded. Let z ∈ F. From the condition (iii), we have

  ‖Ai(n) xn‖ ≤ ‖Ai(n) xn − Ai(n) z‖ + ‖Ai(n) z‖ ≤ Li(n)‖xn − z‖ + sup_{i∈I} ‖Ai z‖ ≤ sup_{i∈I} Li ‖xn − z‖ + sup_{i∈I} ‖Ai z‖

for each n ∈ N. So, by sup_{i∈I} Li < ∞ in the condition (iii) and the condition (iv), {Ai(n) xn} is bounded. And it follows from

  ‖yn‖ ≤ ‖yn − (xn − λn J⁻¹Ai(n) xn)‖ + ‖xn − λn J⁻¹Ai(n) xn‖
       ≤ ‖xn − λn J⁻¹Ai(n) xn − z‖ + ‖xn − λn J⁻¹Ai(n) xn‖
       ≤ 2(‖xn‖ + λn‖Ai(n) xn‖) + ‖z‖

for every n ∈ N and sup_{n∈N} λn < ∞ that {yn} is bounded. From xn+1 ∈ Qn, ⟨xn − xn+1, J(x − xn)⟩ ≥ 0. By Theorem 2.1, we have

  ‖xn+1 − x‖² ≥ ‖xn − x‖² + 2⟨xn − xn+1, J(x − xn)⟩ + c1‖xn − xn+1‖² ≥ ‖xn − x‖² + c1‖xn − xn+1‖²

for all n ∈ N, which implies that lim_{n→∞} ‖xn − x‖ exists and

  lim_{n→∞} ‖xn − xn+1‖ = 0.    (3.2)


From the condition (iii), xn+1 ∈ Cn, and Lemma 2.3, we get

  0 ≤ ⟨yn − xn+1, J(xn − yn − λn J⁻¹Ai(n) xn) + λn Ai(n) yn⟩
    = ⟨yn − xn+1, J(xn − yn − λn J⁻¹Ai(n) xn) + λn Ai(n) xn⟩ − λn⟨yn − xn+1, Ai(n) xn − Ai(n) yn⟩
    = ⟨yn − xn, J(xn − yn − λn J⁻¹Ai(n) xn) + λn Ai(n) xn⟩ + ⟨xn − xn+1, J(xn − yn − λn J⁻¹Ai(n) xn) + λn Ai(n) xn⟩
      − λn⟨yn − xn, Ai(n) xn − Ai(n) yn⟩ − λn⟨xn − xn+1, Ai(n) xn − Ai(n) yn⟩
    ≤ −c1‖xn − yn‖² + ‖xn − xn+1‖·‖J(xn − yn − λn J⁻¹Ai(n) xn) + λn Ai(n) xn‖
      + λn‖xn − yn‖·‖Ai(n) xn − Ai(n) yn‖ + λn‖xn − xn+1‖·‖Ai(n) xn − Ai(n) yn‖
    ≤ (λn Li(n) − c1)‖xn − yn‖² + ‖xn − xn+1‖(‖J(xn − yn − λn J⁻¹Ai(n) xn) + λn Ai(n) xn‖ + λn Li(n)‖xn − yn‖)

for all n ∈ N. Since {xn}, {yn}, and {Ai(n) xn} are bounded, by (3.2) and sup_{n∈N} λn Li(n) < c1, we get

  lim_{n→∞} (c1 − λn Li(n))‖xn − yn‖² = 0,

that is,

  lim_{n→∞} ‖xn − yn‖ = 0.    (3.3)

By the condition (iii) and (3.3), we get lim_{n→∞} ‖Ai(n) xn − Ai(n) yn‖ = 0. As J⁻¹ is uniformly continuous on bounded subsets of E∗, we have lim_{n→∞} ‖J⁻¹Ai(n) xn − J⁻¹Ai(n) yn‖ = 0, which implies lim_{n→∞} ‖(xn − yn − λn J⁻¹Ai(n) xn) + λn J⁻¹Ai(n) yn‖ = 0. As J is uniformly continuous on bounded subsets of E,

  lim_{n→∞} ‖J(xn − yn − λn J⁻¹Ai(n) xn) + λn Ai(n) yn‖ = 0.    (3.4)

By ⟨yn − u, J(xn − yn − λn J⁻¹Ai(n) xn)⟩ ≥ 0 for all n ∈ N and u ∈ C, we obtain ⟨yn − u, J(xn − yn − λn J⁻¹Ai(n) xn) + λn Ai(n) yn⟩ − λn⟨yn − u, Ai(n) yn⟩ ≥ 0 for each n ∈ N and u ∈ C. As ⟨yn − u, Ai(n) yn⟩ ≥ ⟨yn − u, Ai(n) u⟩ for every u ∈ C by the condition (ii), we have

  ⟨yn − u, J(xn − yn − λn J⁻¹Ai(n) xn) + λn Ai(n) yn⟩ ≥ λn⟨yn − u, Ai(n) u⟩    (3.5)

for all n ∈ N and u ∈ C. By the condition (NST), there exists a subsequence {x_{nk}} of {xn} such that for any i ∈ I, there is a constant Mi ∈ N with i ∈ {i(nk), i(nk + 1), · · · , i(nk + Mi − 1)} for all large enough k ∈ N. Let x_{nk} ⇀ v and i ∈ I. There exists jk ∈ {0, 1, · · · , Mi − 1} such that i(nk + jk) = i for every large enough k ∈ N. We consider a subsequence of {nk + jk} for all k ∈ {k ∈ N : nk + jk < nk+1 + jk+1} and suppose that the subsequence is {nk + jk}. We have

  ‖x_{nk+jk} − x_{nk}‖ ≤ Σ_{l=nk}^{nk+Mi−1} ‖x_{l+1} − x_l‖


for all k ∈ N, which implies x_{nk+jk} ⇀ v by (3.2). By (3.5), we have

  ‖y_{nk+jk} − u‖·‖J(x_{nk+jk} − y_{nk+jk} − λ_{nk+jk} J⁻¹A_{i(nk+jk)} x_{nk+jk}) + λ_{nk+jk} A_{i(nk+jk)} y_{nk+jk}‖
    ≥ ⟨y_{nk+jk} − u, J(x_{nk+jk} − y_{nk+jk} − λ_{nk+jk} J⁻¹A_{i(nk+jk)} x_{nk+jk}) + λ_{nk+jk} A_{i(nk+jk)} y_{nk+jk}⟩
    ≥ λ_{nk+jk}⟨y_{nk+jk} − u, A_{i(nk+jk)} u⟩

for every k ∈ N and u ∈ C. We have A_{i(nk+jk)} = Ai for each k ∈ N, ‖x_{nk+jk} − y_{nk+jk}‖ → 0 and y_{nk+jk} ⇀ v by (3.3). As inf_{n∈N} λn > 0 and (3.4) holds,

  ⟨v − u, Ai u⟩ ≤ 0  (∀u ∈ C).

As Ai is hemicontinuous by the condition (iii), we get ⟨u − v, Ai v⟩ ≥ 0 (∀u ∈ C) for all i ∈ I. Therefore, v ∈ F. By weak lower semicontinuity of the norm and (3.1), we get

  ‖PF x − x‖ ≥ lim_{n→∞} ‖xn − x‖ = lim inf_{k→∞} ‖x_{nk} − x‖ ≥ ‖v − x‖,

which implies v = PF x and

  lim_{n→∞} ‖xn − x‖ = ‖PF x − x‖.    (3.6)

By xn+1 = P_{Cn∩Qn} x, F ⊂ Cn ∩ Qn, Theorem 2.1, and Lemma 2.5, we have

  0 ≥ ⟨xn+1 − PF x, J(xn+1 − x)⟩ ≥ (1/2)(‖x − xn+1‖² − ‖x − PF x‖² + c1‖xn+1 − PF x‖²).

From (3.6), we get lim_{n→∞} ‖xn+1 − PF x‖ = 0. And by (3.3), {yn} converges strongly to PF x.  □

Remark 3.2  When we consider convergence to a solution of a variational inequality problem in a real Banach space E, the generalized projection is usually used; see [12, 13, 15] and references therein. So, Theorem 3.1 is a new result which uses only the metric projection.

4 Strong Convergence by the Generalized Projection

We consider the improved gradient method by the hybrid method using the generalized projection and have the following new result in a 2-uniformly convex and uniformly smooth Banach space E.

Theorem 4.1  Assume that C, E, I, {Ai}_{i∈I}, i, {λn}_{n∈N}, and c1 are the same as in Theorem 3.1 and let x ∈ C and {xn}_{n∈N} be a sequence in C generated by

  x1 = x,
  yn = ΠC J⁻¹(Jxn − λn Ai(n) xn),
  Cn = {z ∈ C : ⟨yn − z, Jxn − Jyn⟩ − λn⟨yn − z, Ai(n) xn − Ai(n) yn⟩ ≥ 0},
  Qn = {z ∈ C : ⟨xn − z, Jx − Jxn⟩ ≥ 0},
  xn+1 = Π_{Cn∩Qn} x

for each n ∈ N. Then, {xn} and {yn} converge strongly to ΠF x.


Proof  Cn and Qn are closed and convex for every n ∈ N. We show that F ⊂ Cn for all n ∈ N. Let z ∈ F. By Lemma 2.4, ⟨yn − u, Jxn − Jyn − λn Ai(n) xn⟩ ≥ 0 for every n ∈ N and u ∈ C. So, from the condition (ii), we get

  ⟨yn − z, Jxn − Jyn⟩ ≥ λn⟨yn − z, Ai(n) xn⟩
    = λn⟨yn − z, Ai(n) xn − Ai(n) yn⟩ + λn⟨yn − z, Ai(n) yn⟩
    ≥ λn⟨yn − z, Ai(n) xn − Ai(n) yn⟩ + λn⟨yn − z, Ai(n) z⟩
    ≥ λn⟨yn − z, Ai(n) xn − Ai(n) yn⟩

for each n ∈ N and z ∈ F, which implies F ⊂ Cn for all n ∈ N. From this, we have F ⊂ Cn ∩ Qn for every n ∈ N and {xn} is well-defined. In fact, x1 = x ∈ C is given and from Q1 = C, F ⊂ C1 ∩ Q1. Assume that xk is well-defined and F ⊂ Ck ∩ Qk for some k ∈ N. There exists a unique element xk+1 = Π_{Ck∩Qk} x and we get ⟨xk+1 − z, Jx − Jxk+1⟩ ≥ 0 for all z ∈ Ck ∩ Qk by Lemma 2.4. From F ⊂ Ck ∩ Qk, ⟨xk+1 − z, Jx − Jxk+1⟩ ≥ 0 for every z ∈ F, that is, F ⊂ Qk+1. So, we obtain F ⊂ Ck+1 ∩ Qk+1. By mathematical induction, we get F ⊂ Cn ∩ Qn for every n ∈ N and {xn} is well-defined. By xn+1 = Π_{Cn∩Qn} x and F ⊂ Cn ∩ Qn, we have

  φ(xn+1, x) ≤ φ(ΠF x, x)    (4.1)

for all n ∈ N, which implies {xn} is bounded. As in the proof of Theorem 3.1, {Ai(n) xn} is bounded. Let z ∈ F. From φ(yn, J⁻¹(Jxn − λn Ai(n) xn)) ≤ φ(z, J⁻¹(Jxn − λn Ai(n) xn)), we have

  0 ≥ ‖yn‖² − 2⟨yn, Jxn − λn Ai(n) xn⟩ − ‖z‖² + 2⟨z, Jxn − λn Ai(n) xn⟩
    ≥ ‖yn‖² − 2‖yn‖·‖Jxn − λn Ai(n) xn‖ − ‖z‖² − 2‖z‖·‖Jxn − λn Ai(n) xn‖
    ≥ ‖yn‖² − 2‖yn‖·(‖xn‖ + λn‖Ai(n) xn‖) − ‖z‖² − 2‖z‖·(‖xn‖ + λn‖Ai(n) xn‖)

for every n ∈ N. As {xn} and {Ai(n) xn} are bounded, there exists a positive number M such that ‖yn‖² − M‖yn‖ − ‖z‖² − M‖z‖ ≤ 0 for all n ∈ N. So, {yn} is bounded. From xn+1 ∈ Qn, we have

  φ(xn+1, xn) + φ(xn, x) = φ(xn+1, x) + 2⟨xn − xn+1, Jxn − Jx⟩ ≤ φ(xn+1, x)

for each n ∈ N. So, lim_{n→∞} φ(xn, x) exists and

  lim_{n→∞} φ(xn+1, xn) = 0.    (4.2)

From Lemma 2.3, we get lim_{n→∞} ‖xn+1 − xn‖ = 0. From the condition (iii), xn+1 ∈ Cn, and Lemma 2.3, we get

  0 ≤ ⟨yn − xn+1, Jxn − Jyn⟩ − λn⟨yn − xn+1, Ai(n) xn − Ai(n) yn⟩
    = ⟨yn − xn, Jxn − Jyn⟩ + ⟨xn − xn+1, Jxn − Jyn⟩ − λn⟨yn − xn, Ai(n) xn − Ai(n) yn⟩ − λn⟨xn − xn+1, Ai(n) xn − Ai(n) yn⟩


    ≤ −c1‖xn − yn‖² + ‖xn − xn+1‖·‖Jxn − Jyn‖ + λn‖xn − yn‖·‖Ai(n) xn − Ai(n) yn‖ + λn‖xn − xn+1‖·‖Ai(n) xn − Ai(n) yn‖
    ≤ (λn Li(n) − c1)‖xn − yn‖² + ‖xn − xn+1‖(‖Jxn − Jyn‖ + λn Li(n)‖xn − yn‖)

for all n ∈ N. Since {xn} and {yn} are bounded, by (4.2) and sup_{n∈N} λn Li(n) < c1, we get

  lim_{n→∞} (c1 − λn Li(n))‖xn − yn‖² = 0,

which implies

  lim_{n→∞} ‖xn − yn‖ = 0.    (4.3)

By ⟨yn − u, Jxn − Jyn⟩ ≥ ⟨yn − u, λn Ai(n) xn⟩ and the condition (ii), we have

  ⟨yn − u, Jxn − Jyn⟩ − λn⟨yn − u, Ai(n) xn − Ai(n) yn⟩ ≥ λn⟨yn − u, Ai(n) yn⟩ ≥ λn⟨yn − u, Ai(n) u⟩    (4.4)

for every n ∈ N and u ∈ C. By the condition (NST), there exists a subsequence {x_{nk}} of {xn} such that for any i ∈ I, there is a constant Mi ∈ N with i ∈ {i(nk), i(nk + 1), · · · , i(nk + Mi − 1)} for all large enough k ∈ N. Let x_{nk} ⇀ v and i ∈ I. There exists jk ∈ {0, 1, · · · , Mi − 1} such that i(nk + jk) = i for every large enough k ∈ N. As in the proof of Theorem 3.1, x_{nk+jk} ⇀ v. By (4.3), y_{nk+jk} ⇀ v. Because J is uniformly continuous on bounded subsets of E, we get

  lim_{n→∞} ‖Jxn − Jyn‖ = 0.    (4.5)

From (4.4),

  ‖y_{nk+jk} − u‖(‖Jx_{nk+jk} − Jy_{nk+jk}‖ + λ_{nk+jk} L_{i(nk+jk)}‖x_{nk+jk} − y_{nk+jk}‖) ≥ λ_{nk+jk}⟨y_{nk+jk} − u, A_{i(nk+jk)} u⟩,

which implies

  ‖y_{nk+jk} − u‖(‖Jx_{nk+jk} − Jy_{nk+jk}‖ + λ_{nk+jk} Li‖x_{nk+jk} − y_{nk+jk}‖) ≥ λ_{nk+jk}⟨y_{nk+jk} − u, Ai u⟩

for all k ∈ N and u ∈ C. By (4.3) and (4.5), we obtain ⟨v − u, Ai u⟩ ≤ 0 (∀u ∈ C). As Ai is hemicontinuous by the condition (iii), we get ⟨u − v, Ai v⟩ ≥ 0 (∀u ∈ C) for every i ∈ I, that is, v ∈ F. As the norm of E is weakly lower semicontinuous, we get

  φ(v, x) = ‖v‖² − 2⟨v, Jx⟩ + ‖x‖² ≤ lim inf_{k→∞} (‖x_{nk}‖² − 2⟨x_{nk}, Jx⟩ + ‖x‖²) = lim inf_{k→∞} φ(x_{nk}, x) = lim_{n→∞} φ(xn, x) ≤ φ(ΠF x, x)

by (4.1), which implies v = ΠF x and

  lim_{n→∞} φ(xn, x) = φ(ΠF x, x).    (4.6)

By xn+1 = Π_{Cn∩Qn} x, F ⊂ Cn ∩ Qn, and Lemma 2.4, we have

  0 ≥ ⟨xn+1 − ΠF x, Jxn+1 − Jx⟩ = (1/2)(φ(ΠF x, xn+1) + φ(xn+1, x) − φ(ΠF x, x)),


which implies φ(ΠF x, x) − φ(xn+1, x) ≥ φ(ΠF x, xn+1) for all n ∈ N. From (4.6), we get lim_{n→∞} φ(ΠF x, xn+1) = 0. By Lemma 2.3, xn → ΠF x. And by (4.3), {yn} converges strongly to ΠF x.  □

5 Deduced Results

By Theorem 3.1, we have the following result for inverse strongly monotone operators using only the metric projection.

Theorem 5.1  Let C be a nonempty closed convex subset of a 2-uniformly convex and uniformly smooth Banach space E, I be a countable set, and {Ai}_{i∈I} be a family of inverse strongly monotone operators of C into E∗, that is, there exists {αi}_{i∈I} ⊂ (0, ∞) such that ⟨x − y, Ai x − Ai y⟩ ≥ αi‖Ai x − Ai y‖² for all i ∈ I and x, y ∈ C, such that F = ∩_{i∈I} V I(C, Ai) ≠ ∅, inf_{i∈I} αi > 0, and sup_{i∈I} ‖Ai z‖ < ∞ for every z ∈ F. Let i be a mapping of N to I satisfying the condition (NST) and {λn}_{n∈N} be a sequence in (0, ∞) which satisfies 0 < inf_{n∈N} λn ≤ sup_{n∈N} λn < ∞ and sup_{n∈N} λn/αi(n) < c1, where c1 is the constant in Theorem 2.1. Let x ∈ C and {xn}_{n∈N} be a sequence in C generated by

  x1 = x,
  yn = PC(xn − λn J⁻¹Ai(n) xn),
  Cn = {z ∈ C : ⟨yn − z, J(xn − yn − λn J⁻¹Ai(n) xn) + λn Ai(n) yn⟩ ≥ 0},
  Qn = {z ∈ C : ⟨xn − z, J(x − xn)⟩ ≥ 0},
  xn+1 = P_{Cn∩Qn} x

for each n ∈ N. Then, {xn} and {yn} converge strongly to PF x.

From Theorem 4.1, we get the following result for inverse strongly monotone operators.

Theorem 5.2  Assume that C, E, I, {Ai}_{i∈I}, i, {λn}_{n∈N}, and c1 are the same as in Theorem 5.1 and let x ∈ C and {xn}_{n∈N} be a sequence in C generated by

  x1 = x,
  yn = ΠC J⁻¹(Jxn − λn Ai(n) xn),
  Cn = {z ∈ C : ⟨yn − z, Jxn − Jyn⟩ − λn⟨yn − z, Ai(n) xn − Ai(n) yn⟩ ≥ 0},
  Qn = {z ∈ C : ⟨xn − z, Jx − Jxn⟩ ≥ 0},
  xn+1 = Π_{Cn∩Qn} x

for each n ∈ N. Then, {xn} and {yn} converge strongly to ΠF x.

In a real Hilbert space H, we have H∗ = H, J = I, c1 = 1, and ΠC = PC, where I is the identity mapping. So, from Theorems 3.1 and 4.1, we have the following result.

Theorem 5.3  Let C be a nonempty closed convex subset of a real Hilbert space H, I be a countable set, and {Ai}_{i∈I} be a family of mappings of C into H satisfying the conditions (i)–(iv). Let i be a mapping of N to I satisfying the condition (NST) and {λn}_{n∈N} be a sequence in


(0, ∞) which satisfies 0 < inf_{n∈N} λn ≤ sup_{n∈N} λn < ∞ and sup_{n∈N} λn Li(n) < 1. Let x ∈ C and {xn}_{n∈N} be a sequence in C generated by

  x1 = x,
  yn = PC(xn − λn Ai(n) xn),
  Cn = {z ∈ C : ⟨yn − z, xn − yn⟩ − λn⟨yn − z, Ai(n) xn − Ai(n) yn⟩ ≥ 0},
  Qn = {z ∈ C : ⟨xn − z, x − xn⟩ ≥ 0},
  xn+1 = P_{Cn∩Qn} x

for each n ∈ N. Then, {xn} and {yn} converge strongly to PF x.
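The Hilbert-space scheme of Theorem 5.3 is directly computable, since Cn and Qn are halfspaces. Below is a small sketch in R² under simplifying assumptions that are ours, not the paper's: a single operator A(x1, x2) = (−x2, x1) (monotone, L = 1, with F = V I(C, A) = {0}), C = H = R² (so PC is the identity), and λn ≡ 0.5 < 1/L; the projection onto Cn ∩ Qn is computed exactly by enumerating the possible active sets of the two halfspace constraints.

```python
# Hybrid gradient method of Theorem 5.3, run in H = R^2 with C = H.
# Illustrative choices (ours, not the paper's): A = 90-degree rotation,
# lambda_n = 0.5, starting point x = (1, 0), so P_F x = (0, 0).

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1])

def A(x):
    return (-x[1], x[0])   # monotone, 1-Lipschitz, A(0) = 0

def proj_two_halfspaces(x, h1, h2):
    """Projection of x onto {z : <z,a> <= b} for both (a, b) = h1, h2,
    by enumerating the possible active sets {}, {1}, {2}, {1,2}."""
    cands = [x]
    for a, b in (h1, h2):
        s = dot(a, a)
        if s > 0.0:
            t = (dot(a, x) - b) / s
            cands.append((x[0] - t * a[0], x[1] - t * a[1]))
    (a1, b1), (a2, b2) = h1, h2
    g11, g12, g22 = dot(a1, a1), dot(a1, a2), dot(a2, a2)
    det = g11 * g22 - g12 * g12
    if det > 0.0:  # both constraints active: solve the 2x2 Gram system
        r1, r2 = dot(a1, x) - b1, dot(a2, x) - b2
        m1, m2 = (g22 * r1 - g12 * r2) / det, (g11 * r2 - g12 * r1) / det
        cands.append((x[0] - m1 * a1[0] - m2 * a2[0],
                      x[1] - m1 * a1[1] - m2 * a2[1]))
    feas = [c for c in cands
            if dot(a1, c) <= b1 + 1e-8 and dot(a2, c) <= b2 + 1e-8]
    if not feas:           # numerical safeguard; the set always contains 0
        feas = cands
    return min(feas, key=lambda c: dot(sub(c, x), sub(c, x)))

lam, x, xn = 0.5, (1.0, 0.0), (1.0, 0.0)
dists = []                 # track ||x_n - x||, nondecreasing and <= ||P_F x - x||
for _ in range(60):
    ax = A(xn)
    yn = (xn[0] - lam * ax[0], xn[1] - lam * ax[1])   # y_n = P_C(x_n - lam A x_n)
    ay = A(yn)
    an = (xn[0] - yn[0] - lam * (ax[0] - ay[0]),      # C_n = {z : <z,a_n> <= <y_n,a_n>}
          xn[1] - yn[1] - lam * (ax[1] - ay[1]))
    qn = sub(x, xn)                                   # Q_n = {z : <z,q_n> <= <x_n,q_n>}
    xn = proj_two_halfspaces(x, (an, dot(an, yn)), (qn, dot(qn, xn)))
    dists.append(dot(sub(xn, x), sub(xn, x)) ** 0.5)
# x_n converges strongly to P_F x = (0, 0) by Theorem 5.3
```

The proof's two structural invariants are visible numerically: ‖xn − x‖ is nondecreasing (from xn+1 ∈ Qn) and bounded above by ‖PF x − x‖ (from F ⊂ Cn ∩ Qn).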

Remark 5.4  Let C be a nonempty closed convex subset of a real Hilbert space H, I be a countable set, and {Ti}_{i∈I} be a family of Lipschitz continuous pseudo-contractions [5] of C into itself, that is, ‖Ti x − Ti y‖² ≤ ‖x − y‖² + ‖(I − Ti)x − (I − Ti)y‖² holds for all i ∈ I and x, y ∈ C, such that sup_{i∈I} Li < ∞ and F = ∩_{i∈I} F(Ti) ≠ ∅, where Li is a Lipschitz constant of Ti and F(Ti) is the set of all fixed points of Ti for each i ∈ I. Then, let Ai = I − Ti for every i ∈ I. Expanding ‖Ti x − Ti y‖² = ‖x − y‖² − 2⟨x − y, Ai x − Ai y⟩ + ‖Ai x − Ai y‖² shows that the pseudo-contraction inequality is equivalent to ⟨x − y, Ai x − Ai y⟩ ≥ 0, so Ai is a monotone mapping of C into H with V I(C, Ai) = F(Ti) for all i ∈ I, and {Ai}_{i∈I} satisfies the conditions (i)–(iv). So, we can apply Theorem 5.3 to this family and get a result different from Zhou's [35].

References
[1] Alber Y I. Metric and generalized projection operators in Banach spaces: properties and applications//Kartsatos A G. Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Lecture Notes in Pure and Appl Math, Vol 178. New York: Dekker, 1996: 15–50
[2] Alber Y I, Reich S. An iterative method for solving a class of nonlinear operator equations in Banach spaces. Panamer Math J, 1994, 4: 39–54
[3] Baillon J B, Haddad G. Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Israel J Math, 1977, 26: 137–150
[4] Bauschke H H, Combettes P L. A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math Oper Res, 2001, 26: 248–264
[5] Browder F E, Petryshyn W V. Construction of fixed points of nonlinear mappings in Hilbert space. J Math Anal Appl, 1967, 20: 197–228
[6] Dunn J C. Convexity, monotonicity, and gradient processes in Hilbert space. J Math Anal Appl, 1976, 53: 145–158
[7] Goebel K, Reich S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Pure and Applied Math, 83. New York: Marcel Dekker, 1984
[8] Goldstein A A. Convex programming in Hilbert space. Bull Amer Math Soc, 1964, 70: 709–710
[9] Hartman P, Stampacchia G. On some nonlinear elliptic differential functional equations. Acta Math, 1966, 115: 153–188
[10] Haugazeau Y. Sur les inéquations variationnelles et la minimisation de fonctionnelles convexes. Thèse. Paris, France: Université de Paris, 1968
[11] Iiduka H, Takahashi W, Toyoda M. Approximation of solutions of variational inequalities for monotone mappings. Panamer Math J, 2004, 14: 49–61
[12] Iiduka H, Takahashi W. Weak convergence of a projection algorithm for variational inequalities in a Banach space. J Math Anal Appl, 2008, 339: 668–679
[13] Iiduka H, Takahashi W. Strong convergence studied by a hybrid type method for monotone operators in a Banach space. Nonlinear Anal, 2008, 68: 3679–3688


[14] Kamimura S, Takahashi W. Strong convergence of a proximal-type algorithm in a Banach space. SIAM J Optim, 2002, 13: 938–945
[15] Kimura Y, Nakajo K. Strong convergence to a solution of a variational inequality problem in Banach spaces. J Appl Math, 2014, Vol 2014, Article ID 346517, 10 pages. http://dx.doi.org/10.1155/2014/346517
[16] Korpelevich G M. The extragradient method for finding saddle points and other problems. Matecon, 1976, 12: 747–756
[17] Levitin E S, Polyak B T. Constrained minimization problems. USSR Comput Math Phys, 1966, 6: 1–50
[18] Lions J L, Stampacchia G. Variational inequalities. Comm Pure Appl Math, 1967, 20: 493–517
[19] Liu F, Nashed M Z. Regularization of nonlinear ill-posed variational inequalities and convergence rates. Set-Valued Anal, 1998, 6: 313–344
[20] Matsushita S, Takahashi W. A strong convergence theorem for relatively nonexpansive mappings in a Banach space. J Approx Theory, 2005, 134: 257–266
[21] Nadezhkina N, Nakajo K, Takahashi W. Applications of the extragradient method for solving the combined variational inequality-fixed point problem in real Hilbert space//Takahashi W, Tanaka T. Nonlinear Analysis and Convex Analysis. Yokohama: Yokohama Publishers, 2007: 399–416
[22] Nakajo K, Takahashi W. Strong and weak convergence theorems by an improved splitting method. Commun Appl Nonlinear Anal, 2002, 9: 99–107
[23] Nakajo K, Takahashi W. Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J Math Anal Appl, 2003, 279: 372–379
[24] Nakajo K, Shimoji K, Takahashi W. Strong convergence theorems by the hybrid method for families of nonexpansive mappings in Hilbert spaces. Taiwanese J Math, 2006, 10: 339–360
[25] Nakajo K, Shimoji K, Takahashi W. On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonlinear Anal, 2009, 71: 112–119
[26] Nakajo K, Shimoji K, Takahashi W. Approximations for nonlinear mappings by the hybrid method in Hilbert spaces. Nonlinear Anal, 2011, 74: 7025–7032
[27] Popov L D. Introduction to Theory, Solving Methods and Economical Applications of Complementarity Problems. Ekaterinburg, Russia: Ural State University, 2001 (Russian)
[28] Solodov M V, Svaiter B F. Forcing strong convergence of proximal point iterations in a Hilbert space. Math Programming, 2000, 87A: 189–202
[29] Takahashi W. Nonlinear Functional Analysis. Yokohama: Yokohama Publishers, 2000
[30] Takahashi W. Convex Analysis and Approximation of Fixed Points. Yokohama: Yokohama Publishers, 2000 (Japanese)
[31] Takahashi W, Toyoda M. Weak convergence theorems for nonexpansive mappings and monotone mappings. J Optim Theory Appl, 2003, 118: 417–428
[32] Xiu N, Zhang J. Some recent advances in projection-type methods for variational inequalities. J Comput Appl Math, 2003, 152: 559–585
[33] Xu H K. Inequalities in Banach spaces with applications. Nonlinear Anal, 1991, 16: 1127–1138
[34] Zălinescu C. On uniformly convex functions. J Math Anal Appl, 1983, 95: 344–374
[35] Zhou H. Strong convergence theorems for a family of Lipschitz quasi-pseudo-contractions in Hilbert spaces. Nonlinear Anal, 2009, 71: 120–125