Acta Mathematica Scientia 2016,36B(6):1619–1630 http://actams.wipm.ac.cn

A PROJECTION-TYPE ALGORITHM FOR SOLVING GENERALIZED MIXED VARIATIONAL INEQUALITIES∗

Kai TU¹,²†    Fuquan XIA¹

1. Department of Mathematics, Sichuan Normal University, Chengdu 610068, China
2. College of Applied Sciences, Beijing University of Technology, Beijing 100124, China
E-mail: kaitu− [email protected]; [email protected]

Abstract  We propose a projection-type algorithm for the generalized mixed variational inequality problem in Euclidean space R^n. We establish a convergence theorem for the proposed algorithm, provided the multi-valued mapping is continuous and f-pseudomonotone with nonempty compact convex values on dom(f), where f : R^n → R ∪ {+∞} is a proper function. The algorithm presented in this paper generalizes and improves some known algorithms in the literature. Preliminary computational experience is also reported.

Key words

projection-type algorithm; generalized mixed variational inequality; f -pseudomonotone mapping

2010 MR Subject Classification  90C25; 90C30; 90C33

1  Introduction

Let ⟨·, ·⟩ and ‖·‖ denote the usual inner product and norm in R^n, respectively. Let f : R^n → R ∪ {+∞} be a proper convex lower semicontinuous function and F : R^n → 2^{R^n} be a multi-valued mapping. In this paper, we consider the generalized mixed variational inequality problem, denoted by GMVI(F, f, dom(f)), which is defined as follows:

    Find x ∈ dom(f) and ξ ∈ F(x) such that ⟨ξ, y − x⟩ + f(y) − f(x) ≥ 0, ∀y ∈ dom(f),  (1.1)

where dom(f) = {x ∈ R^n : f(x) < +∞} denotes the effective domain of f. Let S be the solution set of problem (1.1). Problem (1.1) is encountered in many applications, in particular in mechanical problems and equilibrium problems, and hence its formulations and numerical methods have been widely studied; see [1–9]. On the other hand, the generalized mixed variational inequality problem GMVI(F, f, dom(f)) includes a large variety of problems as special instances. For example, if F is the subdifferential of a finite-valued convex continuous function ψ defined on R^n, then the GMVI(F, f,

∗ Received

July 16, 2015; revised March 22, 2016. This work was supported by the Scientific Research Foundation of Sichuan Normal University (20151602), National Natural Science Foundation of China (10671135, 61179033), and the Key Project of Chinese Ministry of Education (212147). † Corresponding author: Kai TU.


dom(f)) becomes the following unconstrained convex optimization problem:

    min_{x ∈ R^n} {f(x) + ψ(x)}.

We remark that if F is a single-valued mapping, then problem (1.1) is equivalent to the mixed variational inequality problem:

    Find x∗ ∈ dom(f) such that ⟨F(x∗), y − x∗⟩ + f(y) − f(x∗) ≥ 0, ∀y ∈ dom(f).  (1.2)

A number of papers in the literature have studied the theoretical properties of problem (1.2) and solution algorithms for it; see [10–15]. Recently, He [13] extended Algorithm 2.1 of [16] to solve the mixed variational inequality problem. The iteration sequence generated by the algorithm converges to a solution, provided F is f-pseudomonotone and continuous on dom(f), and f is Lipschitz continuous on dom(f). Furthermore, if f is the indicator function of a nonempty closed convex set K ⊂ R^n, that is,

    I_K(x) = 0 if x ∈ K, and I_K(x) = +∞ otherwise,

then problem (1.2) reduces to the variational inequality problem (in short, VI(F, K)): find x∗ ∈ K such that

    ⟨F(x∗), y − x∗⟩ ≥ 0, ∀y ∈ K.  (1.3)

Many algorithms for solving VI(F, K) are projection algorithms that employ projections onto the feasible set K of VI(F, K), or onto some related sets, in order to iteratively reach a solution; see [16–20] and the references therein. In particular, Solodov and Svaiter [16] suggested a new projection method, known as the double projection method, for solving problem (1.3). It consists of two steps. First, a hyperplane is constructed that strictly separates the current iterate from the solution set; the construction of this hyperplane requires an Armijo-type linesearch. Then the next iterate is produced by projecting the current iterate onto the intersection of the set K and the hyperplane. It was shown in [18] that Solodov and Svaiter's method obtains a longer stepsize and guarantees a larger decrease in the distance from the next iterate to the solution set; hence it improves on the method proposed by Iusem and Svaiter [19]. On the other hand, if f(x) = I_K(x) for all x ∈ R^n, then problem (1.1) collapses to: find x∗ ∈ K and ξ∗ ∈ F(x∗) such that

    ⟨ξ∗, y − x∗⟩ ≥ 0, ∀y ∈ K,  (1.4)

which is called the classical generalized variational inequality problem, denoted by GVI(F, K). Browder [21] introduced problem (1.4) and studied the existence of its solutions. Since then, the theory and algorithms of GVI(F, K) have been much studied in the literature; see [1, 17, 21–25] and the references therein. By using different Armijo-type linesearches and constructing suitable hyperplanes that strictly separate the current point xi from the solution set S, [24, 25] proposed double projection algorithms for generalized variational inequality problems. In these papers, the separating hyperplane determines the convergence speed of the sequence generated by the double projection algorithm. Inspired and motivated by the above results, we suggest a new projection-type algorithm to solve the generalized mixed variational inequality problem. In our algorithm, we suggest a new


Armijo-type linesearch procedure and construct a suitable hyperplane that strictly separates the current point xi from the solution set S. We also establish a convergence theorem for our algorithm under suitable conditions. The algorithm presented in this paper generalizes and improves Algorithm 2.1 of Solodov and Svaiter (SIAM J. Control Optim. 37(3): 765–776, 1999) and Algorithm 3.1 of He (Acta Math. Sci. 27A(2): 215–220, 2007). Note that if f(x) = I_K(x) for all x ∈ R^n, our algorithm is still different from Algorithm 1 of [24, 25]. Furthermore, we present some numerical tests (see Examples 4.1 and 4.2 below). Example 4.1 shows that our algorithm can be applied to solve problem (1.1); Example 4.2 shows that our algorithm can solve (1.4) (for the case that f(x) = I_K(x) for all x ∈ R^n). Moreover, compared with Algorithm 1 of [24, 25], our method obtains better numerical results.

2  Preliminaries

In this section, we list some well-known concepts and propositions.

Definition 2.1  Let f : R^n → R ∪ {+∞} be a proper convex function, and let K ⊂ dom(f) be a nonempty set. f is said to be δ-Lipschitz continuous on K (for some δ > 0) if

    |f(x) − f(y)| ≤ δ‖x − y‖, ∀x, y ∈ K.

Definition 2.2  Let f : R^n → R ∪ {+∞} be a proper convex function, let K ⊂ dom(f) be a nonempty, closed and convex set, and let F : K → 2^{R^n} be a multi-valued mapping. F is said to be
(i) lower semicontinuous at x ∈ K, if for any sequence {xk} ⊂ K converging to x and any y ∈ F(x), there exists a sequence yk ∈ F(xk) that converges to y;
(ii) upper semicontinuous at x ∈ K, if for every open set V containing F(x), there is an open set U containing x such that F(y) ⊂ V for all y ∈ K ∩ U;
(iii) continuous at x ∈ K, if it is both lower semicontinuous and upper semicontinuous at x;
(iv) pseudomonotone on K, if for any x, y ∈ K, u ∈ F(x), v ∈ F(y),
    ⟨u, y − x⟩ ≥ 0 ⇒ ⟨v, y − x⟩ ≥ 0;
(v) monotone on K, if for any x, y ∈ K, u ∈ F(x), v ∈ F(y),
    ⟨u − v, x − y⟩ ≥ 0;
(vi) f-pseudomonotone on K, if for any x, y ∈ K, u ∈ F(x), v ∈ F(y),
    ⟨u, y − x⟩ + f(y) − f(x) ≥ 0 ⇒ ⟨v, y − x⟩ + f(y) − f(x) ≥ 0.

Remark 2.3  (i) Obviously, a monotone mapping must be an f-pseudomonotone mapping. An f-pseudomonotone mapping is not necessarily monotone; see Example 3.2 below.
(ii) If f ≡ 0, then an f-pseudomonotone mapping reduces to a pseudomonotone mapping.
(iii) We would like to point out that the notion of f-pseudomonotonicity was used to study gap functions and global error bounds for generalized mixed variational inequalities in Hilbert spaces by Tang et al. [7], the F-complementarity problems in Banach spaces by Yin et al. [26], and the stability analysis for Minty mixed variational inequalities in reflexive Banach spaces by Zhong et al. [27].

Notice that the following Definition 2.4 and Lemma 2.5 are from [10].


Definition 2.4  For any maximal monotone operator A : R^n → 2^{R^n}, the resolvent operator associated with A of parameter λ is defined by

    J_A^λ(y) = (I + λA)^{−1}(y),

where λ > 0 is a constant and I denotes the identity operator.

Let f be a proper convex lower semicontinuous function; the subdifferential of f at x ∈ dom(f) is the set

    ∂f(x) = {x∗ ∈ R^n : f(y) − f(x) ≥ ⟨x∗, y − x⟩, ∀y ∈ R^n}.

From [10], we know that ∂f is a maximal monotone operator; hence for each λ > 0, J_f^λ := (I + λ∂f)^{−1} is single-valued and nonexpansive, i.e.,

    ‖J_f^λ(u) − J_f^λ(v)‖ ≤ ‖u − v‖, ∀u, v ∈ R^n.

The resolvent operator J_f^λ(·) has the following useful characterization.

Lemma 2.5  For a given v ∈ R^n and λ > 0, the inequality

    ⟨x − v, y − x⟩ + λf(y) − λf(x) ≥ 0, ∀y ∈ R^n,

holds if and only if x = J_f^λ(v).

Notice that if f is the indicator function of a closed convex set K ⊂ R^n, then the resolvent operator J_f^λ(·) reduces to the projection operator P_K(·), defined by

    P_K(v) = argmin{‖z − v‖ : z ∈ K}.

Let dist(x, K) denote the distance from x to the nonempty subset K, i.e., dist(x, K) = inf_{z∈K} ‖z − x‖.

The projection operator P_K(·) has been extensively studied; we list some of its properties as follows (see [28] for more details).

Lemma 2.6  Let K be a nonempty, closed and convex subset of R^n. Then for any x, y ∈ R^n and z ∈ K,
(1) dist(x, K) = ‖x − P_K(x)‖;
(2) ‖P_K(x) − z‖² ≤ ‖x − z‖² − ‖P_K(x) − x‖².

To simplify the notation, for any x ∈ R^n, ξ ∈ F(x) and λ > 0, we let

    r(x, λ, ξ) := x − J_f^λ(x − λξ).

The following proposition is simple.

Proposition 2.7  A point x∗ ∈ dom(f) solves GMVI(F, f, dom(f)) if and only if there exists ξ∗ ∈ F(x∗) such that r(x∗, λ, ξ∗) = 0 for any λ > 0.

Proposition 2.7 provides us with a stopping criterion in designing the algorithm.
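For intuition, in simple cases the resolvent is a cheap closed-form object. The sketch below is our own illustration (not from the paper): it computes J_f^λ for the one-dimensional function f(x) = x² + I_{[0,2]}(x) used in Example 3.2 later. Minimizing λx² + (1/2)(x − v)² over [0, 2] gives the closed form clamp(v/(1+2λ), 0, 2), and the snippet spot-checks the nonexpansiveness bound ‖J_f^λ(u) − J_f^λ(v)‖ ≤ ‖u − v‖.

```python
def resolvent(v, lam=1.0):
    """Resolvent J_f^lam(v) for f(x) = x^2 + indicator of [0, 2]:
    argmin_x lam*x^2 + 0.5*(x - v)^2 over [0, 2], solved in closed form."""
    x = v / (1.0 + 2.0 * lam)          # unconstrained minimizer
    return min(2.0, max(0.0, x))       # clamping = projection onto [0, 2]

# Nonexpansiveness check: |J(u) - J(v)| <= |u - v|
for u, v in [(-2.0, 3.0), (0.5, 1.5), (10.0, -10.0)]:
    assert abs(resolvent(u) - resolvent(v)) <= abs(u - v) + 1e-12
```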

3  The Algorithm and Convergence Analysis

In this section, we assume that the resolvent of f is easy to compute, and we propose our projection-type algorithm formally. Some related properties are presented. We prove the convergence theorem under the following conditions:

(C1) The solution set S of GMVI(F, f, dom(f)) is nonempty.
(C2) f is a proper convex function such that dom(f) is closed.
(C3) f is Lipschitz continuous on dom(f) with modulus β > 0.
(C4) F is a continuous and f-pseudomonotone mapping with nonempty compact convex values on dom(f).

Remark 3.1  (a) Clearly, assumption (C2) implies that dom(f) is a nonempty, closed and convex subset of R^n.
(b) When f is the indicator function of a closed convex set K, the above assumptions were used to construct double projection algorithms for generalized variational inequalities; see [24, 25].
(c) When F is single-valued, assumptions (C1)–(C4) were used to construct algorithms for solving problem (1.2) in [13, 15]. We also give the following Example 3.2, which satisfies all the conditions with F multi-valued.

Example 3.2  Let n = 1, and let f be defined by

    f(x) = x², if x ∈ [0, 2];  f(x) = +∞, otherwise.

Let F : R → 2^R be defined by

    F(x) = [0, 2 + x], if x ∈ [0, 2];  F(x) = ∅, otherwise.

Now we have the following conclusions:
(a) f is a proper convex function, and dom(f) = [0, 2].
(b) It is easy to see that x = 0 solves GMVI(F, f, dom(f)).
(c) Clearly, f is 4-Lipschitz continuous on dom(f).
(d) F is f-pseudomonotone on dom(f). In fact, let x, y ∈ [0, 2], u ∈ F(x) = [0, 2 + x], v ∈ F(y) = [0, 2 + y]. If ⟨u, y − x⟩ + f(y) − f(x) = (u + y + x)(y − x) ≥ 0, then y ≥ x, and hence ⟨v, y − x⟩ + f(y) − f(x) = (v + y + x)(y − x) ≥ 0.
(e) Clearly, F is a continuous mapping with nonempty compact convex values on dom(f).
(f) F is not monotone on dom(f). Let x = 1, y = 2, u = 3 ∈ F(x) and v = 2 ∈ F(y); it follows that ⟨u − v, x − y⟩ = (u − v)(x − y) = −1 < 0.

Algorithm 3.3  Choose x0 ∈ dom(f) and take two parameters σ ∈ (0, 1), γ ∈ (0, 1). Set i = 0.
Step 1  Take an arbitrary ξi ∈ F(xi), and compute x̃i = J_f^1(xi − ξi) and r(xi, 1, ξi). If r(xi, 1, ξi) = 0, stop; otherwise go to Step 2.
Step 2  Let ki be the smallest nonnegative integer satisfying

    sup_{y ∈ F(xi − γ^k r(xi,1,ξi))} ⟨y, r(xi, 1, ξi)⟩ + f(xi) − f(x̃i) ≥ σ‖r(xi, 1, ξi)‖².  (3.1)


Set ηi := γ^{ki}, zi = xi − ηi r(xi, 1, ξi), and take yi ∈ argmax{⟨y, r(xi, 1, ξi)⟩ : y ∈ F(zi)}.
Step 3  Compute the next iterate xi+1 = P_{Ci}(xi), where Ci := {x ∈ dom(f) : hi(x) ≤ 0} and

    hi(x) := ⟨yi, x − zi⟩ + f(x) − f(zi).  (3.2)

Let i := i + 1 and return to Step 1.

Remark 3.4  The computation of yi in Step 2 is implementable in some special cases. In particular, if F(xi − γ^k r(xi, 1, ξi)) is a polytope, then it has finitely many extreme points {ej}_{j=1}^m, and thus yi = argmax{⟨ej, r(xi, 1, ξi)⟩ : j = 1, 2, · · · , m}.

Remark 3.5  Algorithm 3.3 includes some algorithms as special cases:
(i) If F is a single-valued mapping and f is the indicator function of a nonempty closed convex set K, then Algorithm 3.3 becomes Algorithm 2.1 in [16].
(ii) If F is a single-valued mapping, then Algorithm 3.3 becomes Algorithm 3.1 in [13].

Remark 3.6  If f is the indicator function of a nonempty closed convex set K, Algorithm 3.3 can be used to solve the generalized variational inequality problem. Let us compare the above algorithm with Algorithm 1 in [24, 25].
(i) In our method, ξi can be taken arbitrarily. In [24], choosing ξi requires solving a single-valued variational inequality and is hence computationally expensive.
(ii) Compared with Algorithm 1 in [24, 25], we use a different Armijo-type linesearch process and a different hyperplane.

First, we show that Algorithm 3.3 is well defined, and hence that our algorithm is implementable, provided the resolvent of f can be computed exactly. Obviously, if r(xi, 1, ξi) = 0 for some i ≥ 0, then the algorithm terminates at a solution of problem GMVI(F, f, dom(f)). Therefore, from now on, we assume that r(xi, 1, ξi) ≠ 0 for all i ≥ 0.

Lemma 3.7  If r(xi, 1, ξi) ≠ 0, then there exists a nonnegative integer ki satisfying (3.1).

Proof

Suppose (3.1) is not satisfied for any integer k; that is,

    sup_{y ∈ F(xi − γ^k r(xi,1,ξi))} ⟨y, r(xi, 1, ξi)⟩ + f(xi) − f(x̃i) < σ‖r(xi, 1, ξi)‖², ∀k ≥ 1.  (3.3)

Applying Lemma 2.5 with λ := 1, v := xi − ξi, x := x̃i and y := xi, we get

    f(xi) − f(x̃i) + ⟨ξi, r(xi, 1, ξi)⟩ ≥ ‖r(xi, 1, ξi)‖².  (3.4)

We denote zi(k) := xi − γ^k r(xi, 1, ξi). Since F is lower semicontinuous, ξi ∈ F(xi), and zi(k) → xi as k → ∞, there exist vk ∈ F(zi(k)) such that lim_{k→∞} vk = ξi. By (3.3), we deduce that

    ⟨vk, r(xi, 1, ξi)⟩ + f(xi) − f(x̃i) < σ‖r(xi, 1, ξi)‖².  (3.5)

Letting k → ∞ in (3.5), we get

    ⟨ξi, r(xi, 1, ξi)⟩ + f(xi) − f(x̃i) ≤ σ‖r(xi, 1, ξi)‖².  (3.6)

Combining (3.4) and (3.6), we have σ ≥ 1. This is a contradiction, which completes the proof.  □

Now, we list some useful lemmas as follows.


Lemma 3.8 (see [13])  Let h : R^n → R be a real-valued function on R^n, and let K be the set {x ∈ R^n : h(x) ≤ 0}, with K ⊂ D ⊂ dom(f). If h is Lipschitz continuous on D with modulus θ > 0, then

    dist(x, K) ≥ θ^{−1} max{h(x), 0}, ∀x ∈ D.  (3.7)

Lemma 3.9  Let x∗ solve GMVI(F, f, dom(f)) and let the function hi be defined by (3.2). Then hi(xi) ≥ ηi σ‖r(xi, 1, ξi)‖² and hi(x∗) ≤ 0. In particular, if r(xi, 1, ξi) ≠ 0, then hi(xi) > 0.

Proof  It follows directly from the definition of hi(x) that

    hi(xi) = ⟨yi, xi − zi⟩ + f(xi) − f(zi)
           = ⟨yi, ηi r(xi, 1, ξi)⟩ + f(xi) − f(zi)
           ≥ ηi (⟨yi, r(xi, 1, ξi)⟩ + f(xi) − f(x̃i))
           ≥ ηi σ‖r(xi, 1, ξi)‖²,  (3.8)

where the first inequality follows from the convexity of f and the last one from inequality (3.1). If r(xi, 1, ξi) ≠ 0, it follows from (3.8) that hi(xi) > 0.

We next prove that hi(x∗) ≤ 0. Since x∗ ∈ S, there exists ξ∗ ∈ F(x∗) such that ⟨ξ∗, y − x∗⟩ + f(y) − f(x∗) ≥ 0 for all y ∈ dom(f). Taking y := zi in this inequality, we obtain

    ⟨ξ∗, zi − x∗⟩ + f(zi) − f(x∗) ≥ 0.  (3.9)

By the f-pseudomonotonicity of F, it follows from yi ∈ F(zi) that

    ⟨yi, zi − x∗⟩ + f(zi) − f(x∗) ≥ 0.  (3.10)

That is, hi(x∗) ≤ 0, which completes the proof and implies x∗ ∈ Ci.  □
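With the linesearch well defined (Lemma 3.7) and the separating property of hi established (Lemma 3.9), the whole iteration can be transcribed for the one-dimensional data of Example 3.2. The sketch below is our own illustration, not code from the paper: the closed forms for the resolvent, for the supremum in (3.1), and for the interval Ci are derived by hand for this specific f and F.

```python
import math

# Data of Example 3.2 (n = 1): f(x) = x^2 + indicator of [0, 2], F(x) = [0, 2 + x].

def J_f(v):
    # J_f^1(v) = argmin_x { x^2 + 0.5*(x - v)^2 : x in [0, 2] } = clamp(v/3, 0, 2)
    return min(2.0, max(0.0, v / 3.0))

def solve_gmvi(x, sigma=0.5, gamma=0.9, tol=1e-8, max_iter=100):
    """A hypothetical 1-D transcription of Algorithm 3.3 for Example 3.2."""
    f = lambda t: t * t
    for _ in range(max_iter):
        xi = 2.0 + x                     # Step 1: pick xi_i in F(x_i) (any element works)
        x_tilde = J_f(x - xi)            # x~_i = J_f^1(x_i - xi_i)
        r = x - x_tilde                  # residual r(x_i, 1, xi_i); here r >= 0
        if abs(r) <= tol:
            return x
        # Step 2: Armijo linesearch (3.1); sup_{y in [0, 2+z]} y*r = (2 + z)*r since r > 0
        k = 0
        while (2.0 + (x - gamma**k * r)) * r + f(x) - f(x_tilde) < sigma * r * r:
            k += 1
        z = x - gamma**k * r             # z_i stays in [0, 2]: convex combination of x_i, x~_i
        y = 2.0 + z                      # y_i = argmax_{y in F(z_i)} y*r
        # Step 3: C_i = {x in [0, 2] : y*(x - z) + x^2 - z^2 <= 0} is an interval whose
        # endpoints are roots of x^2 + y*x - (y*z + z^2) = 0; project x_i onto it.
        disc = math.sqrt(y * y + 4.0 * (y * z + z * z))
        lo, hi = max(0.0, (-y - disc) / 2.0), min(2.0, (-y + disc) / 2.0)
        x = min(hi, max(lo, x))
    return x

print(solve_gmvi(1.0))   # the iterates reach the solution x* = 0
```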

Theorem 3.10  Suppose that assumptions (C1)–(C4) hold. Then either Algorithm 3.3 terminates in a finite number of iterations, or it generates an infinite sequence {xi} converging to a solution of problem (1.1).

Proof  Let x∗ ∈ S. We assume that Algorithm 3.3 generates an infinite sequence {xi}; thus, for each i, r(xi, 1, ξi) ≠ 0. It follows that

    ‖xi+1 − x∗‖² ≤ ‖xi − x∗‖² − ‖xi − xi+1‖² = ‖xi − x∗‖² − dist²(xi, Ci),  (3.11)

where the inequality follows from x∗ ∈ Ci and Lemma 2.6(2), and the equality follows from Lemma 2.6(1). It follows from (3.11) that {‖xi − x∗‖} is a convergent sequence, i.e., there exists some δ ≥ 0 such that

    lim_{i→∞} ‖xi − x∗‖ = δ.  (3.12)

Therefore, {xi} is bounded and

    lim_{i→∞} dist(xi, Ci) = 0.  (3.13)

Since F is continuous with compact convex values, Proposition 3.11 of [29] implies that {F(xi) : i ∈ N} is a bounded set, and so the sequence {ξi} is bounded. Since J_f^1(·) is nonexpansive, we have

    ‖x̃i − J_f^1(x0)‖ = ‖J_f^1(xi − ξi) − J_f^1(x0)‖ ≤ ‖xi − ξi − x0‖.


By the above inequality, there exists M > 0 such that

    ‖x̃i‖ ≤ ‖x̃i − J_f^1(x0)‖ + ‖J_f^1(x0)‖ ≤ ‖xi − ξi − x0‖ + ‖J_f^1(x0)‖ ≤ M,  (3.14)

and hence the sequences {x̃i} and {r(xi, 1, ξi)} are bounded. Since zi = xi − ηi r(xi, 1, ξi), the sequence {zi} is bounded. Similarly, the sequence {yi} is bounded, so there exists τ ≥ 0 such that ‖yi‖ ≤ τ. Consequently, hi is Lipschitz continuous on dom(f) with modulus τ + β > 0. Noting that xi ∉ Ci, and applying Lemma 3.8, we have

    dist(xi, Ci) ≥ (τ + β)^{−1} hi(xi), ∀i.  (3.15)

It follows from Lemma 3.9 and (3.15) that

    dist(xi, Ci) ≥ (τ + β)^{−1} σ ηi ‖r(xi, 1, ξi)‖².  (3.16)

By (3.13) and (3.16), it follows that

    lim_{i→∞} ηi ‖r(xi, 1, ξi)‖² = 0.  (3.17)

To complete the proof of the theorem, we prove that there exists an accumulation point x̄ of the sequence {xi} such that x̄ ∈ S. Since the ηi are bounded, we may set α := inf_{i∈N} ηi; clearly 0 ≤ α ≤ ηi < 1 for all i.

If α > 0, it follows from (3.17) that

    lim_{i→∞} ‖r(xi, 1, ξi)‖ = 0.

Since r(·, 1, ·) is continuous and the sequences {xi} and {ξi} are bounded, there exists an accumulation point (x̄, ξ̄) of {(xi, ξi)} such that r(x̄, 1, ξ̄) = 0. Since F is upper semicontinuous with compact convex values, Proposition 3.7 of [29] implies that F is closed, and so ξ̄ ∈ F(x̄). That is, x̄ ∈ dom(f) is a solution of GMVI(F, f, dom(f)).

Suppose now that α = 0. Let (x̄, ξ̄) be an accumulation point of {(xi, ξi)}. Thus, there exists an index set I such that

    lim_{i(∈I)→∞} ηi = 0,  lim_{i(∈I)→∞} xi = x̄,  lim_{i(∈I)→∞} ξi = ξ̄.  (3.18)

It follows that

    lim_{i(∈I)→∞} (xi − γ^{−1} ηi r(xi, 1, ξi)) = x̄.  (3.19)

We also obtain that for large enough i ∈ I, ηi ∈ (0, γ/2), and hence γ^{−1}ηi ∈ (0, 1/2). Since xi − γ^{−1}ηi r(xi, 1, ξi) = (1 − γ^{−1}ηi)xi + γ^{−1}ηi x̃i, with xi ∈ dom(f) and x̃i ∈ dom(f), it follows from the convexity of dom(f) that xi − γ^{−1}ηi r(xi, 1, ξi) ∈ dom(f) for large enough i ∈ I. We denote ẑi := xi − γ^{−1}ηi r(xi, 1, ξi). Since dom(f) is closed, (3.19) implies that x̄ ∈ dom(f). Since F is lower semicontinuous, ξ̄ ∈ F(x̄) and ẑi → x̄ as i(∈I) → ∞, for each i ∈ I there is vi ∈ F(ẑi) such that lim_{i(∈I)→∞} vi = ξ̄. It follows that, for sufficiently large i ∈ I,

    σ‖r(xi, 1, ξi)‖² > sup_{y∈F(ẑi)} ⟨y, r(xi, 1, ξi)⟩ + f(xi) − f(x̃i)
                     ≥ ⟨vi, r(xi, 1, ξi)⟩ + f(xi) − f(x̃i)


                     = ⟨vi − ξi, r(xi, 1, ξi)⟩ + ⟨ξi, r(xi, 1, ξi)⟩ + f(xi) − f(x̃i)
                     ≥ ⟨vi − ξi, r(xi, 1, ξi)⟩ + ‖r(xi, 1, ξi)‖²,  (3.20)

where the first inequality follows from the construction of ki, the second from vi ∈ F(ẑi), and the last from (3.4). Letting i(∈I) → ∞ in (3.20), we obtain r(x̄, 1, ξ̄) = 0. The above two cases prove that there exists an accumulation point x̄ of the sequence {xi} such that

    x̄ ∈ dom(f),  r(x̄, 1, ξ̄) = 0,  ξ̄ ∈ F(x̄).

It follows that x̄ ∈ dom(f) is a solution of GMVI(F, f, dom(f)), i.e., x̄ ∈ S. It remains to show that the sequence {xi} has a unique cluster point. Replacing x∗ by x̄ in the preceding argument, we see that the sequence {‖xi − x̄‖} is nonincreasing and hence converges. Since x̄ is an accumulation point of {xi}, some subsequence of {‖xi − x̄‖} converges to zero. This shows that the whole sequence {‖xi − x̄‖} converges to zero; hence lim_{i→∞} xi = x̄.  □

4  Numerical Experiments

In this section, we present some numerical experiments for the proposed method. The MATLAB codes were run on a PC (with Intel(R) Core(TM) i7-5500U [email protected] (4 CPUs), 8192MB RAM) under MATLAB Version 7.12.0.635 (R2011a) Service Pack 1, which contains Optimization Toolbox version 6.0.

In Example 4.1, we use our algorithm to compute a solution of problem (1.1); the computational experiments show that our algorithm is valid and effective. In Example 4.2, we use our algorithm to compute a solution of problem (1.4). For n = 4, the GVI(F, K) used in Example 4.2 is the one used in [25]; thus, Example 4.2 is an extension of the example in [25]. The computational experiment shows that our algorithm is effective for solving GVI(F, K). Moreover, compared with Algorithm 1 in [24, 25], our method obtains better numerical results.

In Tables 1, 2 and 3, "Iter." denotes the number of iterations and "CPU" denotes the CPU time in seconds. We use "nf." for the total number of times that F is evaluated. The tolerance ǫ means that the procedure stops when ‖r(x, 1, ξ)‖ ≤ ǫ.

Example 4.1  Let n = 4,

    K := {x ∈ R^n : Σ_{i=1}^n xi = 0, |xi| ≤ 2, ∀i},

and let f be defined by

    f(x) = ‖x‖², if x ∈ K;  f(x) = +∞, otherwise.

Let F : R^n → 2^{R^n} be defined by

    F(x) = {(t, t + 2x2, t + 3x3, t + 4x4) : t ∈ [0, 1]}, if x ∈ K;  F(x) = ∅, otherwise.

It is easy to verify that all the assumptions in Theorem 3.10 are satisfied and that x = (0, 0, 0, 0) is a solution of GMVI(F, f, dom(f)).


Now we show that the resolvent of f can be computed. For any g ∈ R^n, let z = (I + ∂f)^{−1}(g). Then finding z is equivalent to solving the following separable convex program:

    min  Σ_{j=1}^4 θj(xj)
    s.t. Σ_{j=1}^4 xj = 0,  xj ∈ K0, j = 1, 2, 3, 4,  (4.1)

where θj(xj) = xj² + (1/2)(xj − gj)² and K0 = {v ∈ R : |v| ≤ 2}. Problem (4.1) can be considered as a special case of the multi-block convex minimization problem:

    min  Σ_{j=1}^N θ̃j(xj)
    s.t. Σ_{j=1}^N Aj xj = b,  xj ∈ Xj, j = 1, 2, · · · , N,  (4.2)

where Aj ∈ R^{p×nj}, b ∈ R^p, Σ_{j=1}^N nj = n, the Xj ⊂ R^{nj} are closed convex sets, and the θ̃j are closed convex functions. Since θj is strongly convex and its gradient is Lipschitz continuous, one can use the alternating direction method of multipliers (ADMM) to solve problem (4.1); see [30].

Alternatively, finding z is equivalent to solving the VI(G, K) with G(x) := 3x − g. Since G is strongly monotone with modulus 3 and K is a nonempty compact convex set, this variational inequality has a unique solution z. Thus, the point z can be computed by known algorithms for the classical variational inequality problem. We use Algorithm 2.1 of Solodov and Svaiter [16] to compute z in this example, and we choose σ = 0.5, γ = 0.9 for our Algorithm 3.3; the results are reported in Table 1.
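In this small example the reduction can even be short-circuited: completing the square in ‖x‖² + (1/2)‖x − g‖² shows that z = (I + ∂f)^{−1}(g) is simply P_K(g/3), and the projection onto K (a box intersected with a hyperplane) can be computed by bisection on the multiplier of the equality constraint. The sketch below is our own illustration (function names are ours), not the paper's MATLAB code.

```python
def proj_K(v, lo=-2.0, hi=2.0):
    """Project v onto K = {x : sum(x) = 0, lo <= x_i <= hi} by bisection on the
    Lagrange multiplier lam of the equality constraint: x_i = clamp(v_i - lam)."""
    a, b = min(v) - hi, max(v) - lo      # the clamped sum is n*hi at a and n*lo at b
    for _ in range(100):
        lam = 0.5 * (a + b)
        s = sum(min(hi, max(lo, vi - lam)) for vi in v)
        if s > 0.0:
            a = lam                      # sum too large: increase lam
        else:
            b = lam
    lam = 0.5 * (a + b)
    return [min(hi, max(lo, vi - lam)) for vi in v]

def resolvent_f(g):
    """z = (I + df)^{-1}(g) for f = ||.||^2 + indicator of K: completing the
    square in ||x||^2 + 0.5*||x - g||^2 gives z = P_K(g/3)."""
    return proj_K([gi / 3.0 for gi in g])
```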

Table 1  Result for Example 4.1 (Algorithm 3.3)

    Initial point x0        Tolerance ǫ   iter.(nf.)   CPU
    (0.1, 0.2, 0, −0.3)     10^−7         34(243)      15.60010
    (−0.5, 0, 0, 0.5)       10^−7         32(241)      13.44730
    (0, −0.5, 0, 0.5)       10^−7         34(212)      14.99170
    (0.1, 0.2, −0.3, 0)     10^−5         15(114)      4.75803
    (−0.5, 0, 0, 0.5)       10^−5         19(140)      5.44443
    (0, −0.5, 0, 0.5)       10^−5         15(120)      4.82043

Example 4.2  Let

    K := {x ∈ R^n_+ : Σ_{i=1}^n xi = 1},

and let F : K → 2^{R^n} be defined by

    F(x) := {(t, t + 2x2, · · · , t + nxn) : t ∈ [0, 1]}.

Let f(x) = I_K(x). It is easy to verify that all the assumptions in Theorem 3.10 are satisfied and that (1, 0, · · · , 0) is a solution of GVI(F, K). Notice that for n = 4, the GVI(F, K) used in Example 4.2 is the one used in [25]; thus, Example 4.2 is an extension of the example in [25]. We first let n = 4 (n denotes the dimension of the Euclidean space; see Table 2), and then n = 8 (see Table 3). The parameters of Algorithm 1 of [24, 25] for Example 4.2 are those proposed in [25]. We choose σ = 0.3 and γ = 0.7 for our Algorithm 3.3 in Tables 2 and 3.
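Here f = I_K, so the resolvent J_f^1 reduces to the Euclidean projection onto the unit simplex, for which a classical O(n log n) sort-and-threshold formula exists. The sketch below is our own illustration, not code from the paper.

```python
def project_simplex(v):
    """Euclidean projection of v onto the unit simplex {x : x_i >= 0, sum(x) = 1}
    via the classical sort-and-threshold formula."""
    u = sorted(v, reverse=True)
    cumsum, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumsum += ui
        if ui - (cumsum - 1.0) / i > 0:   # index i is still "active"
            theta = (cumsum - 1.0) / i
    return [max(vi - theta, 0.0) for vi in v]
```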

Table 2  Result for Example 4.2 with n = 4

                                     Algorithm 1 in [24]    Algorithm 1 in [25]    Algorithm 3.3
    Initial point x0   Tolerance ǫ   iter.(nf.)   CPU       iter.(nf.)   CPU       iter.(nf.)   CPU
    (0, 0, 0, 1)       10^−7         61(358)      0.71875   56(309)      0.29687   59(202)      0.31250
    (0, 0, 1, 0)       10^−7         79(544)      1.09375   54(286)      0.26563   40(81)       0.26563
    (0.5, 0, 0.5, 0)   10^−7         76(523)      0.85938   51(271)      0.28125   38(77)       0.26563
    (0, 0, 0, 1)       10^−5         43(250)      0.52125   41(234)      0.31250   42(142)      0.25000
    (0, 0, 1, 0)       10^−5         56(383)      0.82813   39(211)      0.23437   29(59)       0.21875
    (0.5, 0, 0.5, 0)   10^−5         53(362)      0.68750   37(201)      0.23438   27(55)       0.21875

Table 3  Result for Example 4.2 with n = 8

                                           Algorithm 1 in [24]    Algorithm 1 in [25]    Algorithm 3.3
    Initial point x0         Tolerance ǫ   iter.(nf.)   CPU       iter.(nf.)   CPU       iter.(nf.)   CPU
    (1/8)(1,1,1,1,1,1,1,1)   10^−7         97(621)      1.21875   101(667)     0.42188   92(364)      0.40625
    (0,0,0,0,0,0,0,1)        10^−7         204(1938)    2.82813   246(2526)    1.04688   175(869)     0.84375
    (1/10)(2,2,2,0,1,1,1,1)  10^−7         123(927)     1.68750   125(946)     0.51563   116(578)     0.51563
    (1/8)(1,1,1,1,1,1,1,1)   10^−5         68(432)      0.95313   71(467)      0.29688   65(256)      0.29688
    (0,0,0,0,0,0,0,1)        10^−5         144(1360)    2.15625   178(1834)    0.79688   125(619)     0.62500
    (1/10)(2,2,2,0,1,1,1,1)  10^−5         87(655)      1.29688   88(667)      0.35938   82(408)      0.34375

References

[1] Rockafellar R T. Monotone operators and the proximal point algorithm. SIAM J Control Optim, 1976, 14(5): 877–898
[2] Tseng P. Applications of a splitting algorithm to decomposition in convex programming and variational inequalities. SIAM J Control Optim, 1991, 29(1): 119–138
[3] Han W M, Reddy B D. On the finite element method for mixed variational inequalities arising in elastoplasticity. SIAM J Numer Anal, 1995, 32(6): 1778–1807
[4] Chinchuluun A, Pardalos P M, Migdalas A, et al. Pareto Optimality, Game Theory and Equilibria. Berlin: Springer, 2008
[5] Xia F Q, Huang N J, Liu Z B. A projected subgradient method for solving generalized mixed variational inequalities. Oper Res Lett, 2008, 36(5): 637–642
[6] Wu K Q, Huang N J. The generalized f-projection operator and set-valued variational inequalities in Banach spaces. Nonlinear Anal: TMA, 2009, 71(7): 2481–2490


[7] Tang G J, Huang N J. Gap functions and global error bounds for set-valued mixed variational inequalities. Taiwan J Math, 2013, 17(4): 1267–1286
[8] Tran D Q, Muu L D, Nguyen V H. Extragradient algorithms extended to equilibrium problems. Optim, 2008, 57(6): 749–779
[9] Dinh B V, Muu L D. A projection algorithm for solving pseudomonotone equilibrium problems and its application to a class of bilevel equilibria. Optim, 2013, 64(3): 559–575
[10] Brezis H. Operateurs Maximaux Monotones et Semi-Groupes de Contractions dans les Espaces de Hilbert. Amsterdam: North-Holland Publishing Company, 1973
[11] Bnouhachem A. A self-adaptive method for solving general mixed variational inequalities. J Math Anal Appl, 2005, 309(1): 136–150
[12] Zeng L C, Yao J C. Convergence analysis of a modified inexact implicit method for general monotone variational inequalities. Math Methods Oper Res, 2005, 62(2): 211–224
[13] He Y R. A new projection algorithm for mixed variational inequalities. Acta Math Sci, 2007, 27A(2): 215–220
[14] Xia F Q, Li T, Zou Y Z. A projection subgradient method for solving optimization with variational inequality constraints. Optim Lett, 2014, 8(1): 279–292
[15] Tang G J, Zhu M, Liu H W. A new extragradient-type method for mixed variational inequalities. Oper Res Lett, 2015, 43(6): 567–572
[16] Solodov M V, Svaiter B F. A new projection method for variational inequality problems. SIAM J Control Optim, 1999, 37(3): 765–776
[17] Facchinei F, Pang J S. Finite-Dimensional Variational Inequalities and Complementarity Problems. New York: Springer-Verlag, 2003
[18] Wang Y J, Xiu N H, Wang C Y. Unified framework of extragradient-type methods for pseudomonotone variational inequalities. J Optim Theory Appl, 2001, 111(3): 641–656
[19] Iusem A N, Svaiter B F. A variant of Korpelevich's method for variational inequalities with a new search strategy. Optim, 1997, 42(4): 309–321
[20] He Y R. A new double projection algorithm for variational inequalities. J Comput Appl Math, 2006, 185(1): 166–173
[21] Browder F E. Multi-valued monotone nonlinear mappings and duality mappings in Banach spaces. Trans Amer Math Soc, 1965, 118: 338–351
[22] Anh P N, Muu L D, Nguyen V H, Strodiot J J. Using the Banach contraction principle to implement the proximal point method for multivalued monotone variational inequalities. J Optim Theory Appl, 2005, 124(2): 285–306
[23] Xia F Q, Huang N J. A projection-proximal point algorithm for solving generalized variational inequalities. J Optim Theory Appl, 2011, 150(1): 98–117
[24] Li F L, He Y R. An algorithm for generalized variational inequality with pseudomonotone mapping. J Comput Appl Math, 2009, 228(1): 212–218
[25] Fang C J, He Y R. A double projection algorithm for multi-valued variational inequalities and a unified framework of the method. Appl Math Comput, 2011, 217(23): 9543–9551
[26] Yin H Y, Xu C X, Zhang Z X. The F-complementarity problem and its equivalence with the least element problem. Acta Math Sinica, 2001, 44(4): 679–686
[27] Zhong R Y, Huang N J. Stability analysis for Minty mixed variational inequality in reflexive Banach spaces. J Optim Theory Appl, 2010, 147(3): 454–472
[28] Polyak B T. Introduction to Optimization. New York: Optimization Software, 1987
[29] Aubin J P, Ekeland I. Applied Nonlinear Analysis. New York: John Wiley & Sons, 1984
[30] Lin T Y, Ma S Q, Zhang S Z. On the global linear convergence of the ADMM with multiblock variables. SIAM J Optim, 2015, 25(3): 1478–1497
J Comput Appl Math, 2006, 185(1): 166–173 [21] Browder F E. Multi-valued monotone nonlinear mapping and duality mappings in Banach space. Trans Amer Math Soc, 1965, 118: 338–351 [22] Anh P N, Muu L D, Nguyen V H, Strodiot J J. Using the Banach contraction principle to implement the proximal point method for multivalued monotone variational inequalities. J Optim Theory Appl, 2005, 124(2): 285–306 [23] Xia F Q, Huang N J. A projection-proximal point algorithm for solving generalized variational inequalities. J Optim Theory Appl, 2011, 150(1): 98–117 [24] Li F L, He Y R. An algorithm for generalized variational inequality with pseudomonotone mapping. J Comput Appl Math, 2009, 228(1): 212–218 [25] Fang C J, He Y R. A double projection algorithm for multi-valued variational inequslities and a unified framework of the method. Appl Math Comput, 2011, 217(23): 9543–9551 [26] Yin H Y, Xu C X, Zhang Z X. The F -complementarity problems and its equivalence with the least element problem. Acta Math Sinica, 2001, 44(4): 679–686 [27] Zhong R Y, Huang N J. Stability analysis for minty mixed variational inequality in reflexive Banach spaces. J Optim Theory Appl, 2010, 147(3): 454–472 [28] Polyak B T. Introduction to Optimization. New York: Optimization Software, 1987 [29] Aubin J P, Ekeland I. Applied Nonlinear Analysis. New York: John Wiley & Sons Incorporated, 1984 [30] Lin T Y, Ma S Q, Zhang S Z. On the global linear convergence of the ADMM with multi-block variables. SIAM J Optim, 2015, 25(3): 1478–1497