Journal of Computational and Applied Mathematics 228 (2009) 212–218

An algorithm for generalized variational inequality with pseudomonotone mapping

Fenglian Li, Yiran He*
Department of Mathematics, Sichuan Normal University, Chengdu, Sichuan 610068, China

Article history: Received 18 December 2007; received in revised form 7 September 2008.
Keywords: Generalized variational inequality; Pseudomonotone mapping; Multi-valued mapping

Abstract. We suggest an algorithm for a variational inequality with multi-valued mapping. The iteration sequence generated by the algorithm is proven to converge to a solution, provided the multi-valued mapping is continuous and pseudomonotone with nonempty compact convex values.

1. Introduction

This paper discusses an algorithm for the following generalized variational inequality: find x* ∈ C and ξ ∈ F(x*) such that

⟨ξ, y − x*⟩ ≥ 0,  for all y ∈ C,   (1.1)

where C is a nonempty closed convex set in R^n, F is a multi-valued mapping from C into R^n with nonempty values, and ⟨·, ·⟩ denotes the usual inner product in R^n.

Theory and algorithms for the generalized variational inequality have been studied extensively in the literature [1,3,6–8,13,24,25]. Browder [4] introduced the problem (1.1) and studied the existence of its solutions. Various algorithms for computing a solution of (1.1) have been proposed. The well-known proximal point algorithm [23] requires the multi-valued mapping F to be monotone. Relaxing the monotonicity assumption, Allevi et al. [1] proved that if the set C is a box and F is order monotone, then the proximal point algorithm still applies to problem (1.1). Konnov [13] studied a multiplier method for problem (1.1), assuming that F is monotone and that C is bounded and is the solution set of finitely many differentiable convex inequalities. Assuming that F is pseudomonotone, Konnov [11] described a combined relaxation method for solving (1.1), which has been further developed in [12,16–19]; see also [15,20]. When the mapping F is multi-valued, the combined relaxation method usually requires the set C to have an explicit expression and to satisfy the Slater constraint qualification; see [16,20]. For other related methods for the generalized variational inequality, see [3,14,25].

This paper suggests a new algorithm for the generalized variational inequality and proves global convergence of the generated iteration sequence, assuming that F is pseudomonotone in the sense of Karamardian. An obvious difference between our method and the combined relaxation method is that the former does not require C to have an explicit expression. In the case where F is single-valued, comparing Method 1.1 in [20] with the single-valued version of our Algorithm 1, the main difference lies in the line search rule generating the stepsize; compare expression (9) in [20] with expression (2.2) in this paper.

This work is partially supported by the National Natural Science Foundation of China (No. 10701059) and by the Sichuan Youth Science and Technology Foundation (No. 06ZQ026-013).
* Corresponding author. E-mail address: [email protected] (Y. He).

Let S be the solution set of (1.1), that is, the set of points x* ∈ C satisfying (1.1). Throughout this paper, we assume that the solution set S of problem (1.1) is nonempty and that F is continuous on C with nonempty compact convex values satisfying the following property:

⟨ζ, y − x⟩ ≥ 0,  for all y ∈ C, ζ ∈ F(y), and all x ∈ S.   (1.2)

The property (1.2) holds if F is pseudomonotone on C in the sense of Karamardian [10]. In particular, if F is monotone, then (1.2) holds.

The organization of this paper is as follows. In the next section, we recall the definition of a continuous multi-valued mapping, present the details of the algorithm, and prove several preliminary results; the convergence analysis is carried out in Section 3. A small example is tested in the last section.

2. Algorithm

Let us recall the definition of a continuous multi-valued mapping. F is said to be upper semicontinuous at x ∈ C if for every open set V containing F(x), there is an open set U containing x such that F(y) ⊂ V for all y ∈ C ∩ U. F is said to be lower semicontinuous at x ∈ C if for every y ∈ F(x) and every open set V containing y, there is an open set U containing x such that F(z) ∩ V ≠ ∅ for all z ∈ C ∩ U. F is said to be continuous at x ∈ C if it is both upper and lower semicontinuous at x, and continuous on C if it is continuous at every point of C. If F is single-valued, both upper and lower semicontinuity reduce to the usual continuity of F.

Now we state the algorithm; the algorithm and its convergence analysis are inspired by [9,26]. Let Π_C denote the projector onto C and let µ > 0 be a parameter.

Proposition 2.1. x ∈ C and ξ ∈ F(x) solve the problem (1.1) if and only if r_µ(x, ξ) := x − Π_C(x − µξ) = 0.

Algorithm 1. Choose x_0 ∈ C and three parameters σ > 0, µ ∈ (0, 1/σ) and γ ∈ (0, 1). Set i = 0.

Step 1. If r_µ(x_i, ξ) = 0 for some ξ ∈ F(x_i), stop; otherwise take ξ_i ∈ F(x_i) satisfying

⟨y − ξ_i, r_µ(x_i, ξ_i)⟩ ≥ 0,  for all y ∈ F(x_i).   (2.1)
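To make Proposition 2.1 and the stopping test of Step 1 concrete, the following Python sketch evaluates the residual r_µ(x, ξ) = x − Π_C(x − µξ) and checks whether its norm falls below a tolerance. It is only an illustration under stated assumptions: the projector proj_C, the vector xi (an element of F(x) chosen by the user), the parameter mu and the tolerance tol are placeholders, not data prescribed by the paper.

```python
import numpy as np

def residual(x, xi, proj_C, mu):
    """Natural residual r_mu(x, xi) = x - Pi_C(x - mu * xi) of Proposition 2.1."""
    return x - proj_C(x - mu * xi)

def step1_stop(x, xi, proj_C, mu, tol=1e-7):
    """Step 1 stopping test: accept x as a solution of (1.1) once ||r_mu(x, xi)|| <= tol."""
    return np.linalg.norm(residual(x, xi, proj_C, mu)) <= tol

# Illustration with C = R^n_+ (componentwise projection); xi stands for an element of F(x).
if __name__ == "__main__":
    proj_C = lambda z: np.maximum(z, 0.0)   # assumed feasible set: the nonnegative orthant
    x, xi, mu = np.array([0.5, 0.0]), np.array([0.0, 1.0]), 0.5
    print(residual(x, xi, proj_C, mu), step1_stop(x, xi, proj_C, mu))
```

Any closed convex C whose projection is computable (boxes, balls, the simplex of Example 4.1) can be plugged in through proj_C.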

Step 2. For every nonnegative integer k, let y_k := Π_{F(x_i − γ^k r_µ(x_i, ξ_i))}(ξ_i).

Step 3. Let k_i be the smallest nonnegative integer k satisfying

⟨y_k, r_µ(x_i, ξ_i)⟩ ≥ σ‖r_µ(x_i, ξ_i)‖².   (2.2)

Set z_i = x_i − η_i r_µ(x_i, ξ_i), where η_i = γ^{k_i}.

Step 4. Compute x_{i+1} := Π_{C_i}(x_i), where C_i := {v ∈ C : h_i(v) ≤ 0} and

h_i(v) := sup_{ξ∈F(z_i)} ⟨ξ, v − z_i⟩.   (2.3)
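The backtracking of Steps 2–3 requires, at each trial point, a single projection of ξ_i onto the set F(x_i − γ^k r_µ(x_i, ξ_i)). Below is a minimal Python sketch of this search; the oracle proj_onto_F(x, p), which returns the projection of p onto F(x), the parameters sigma and gamma, the stand-in vector r, and the safeguard max_backtracks are assumptions introduced for illustration and are not part of the paper.

```python
import numpy as np

def line_search(x, xi, r, proj_onto_F, sigma, gamma, max_backtracks=60):
    """Steps 2-3: find the smallest nonnegative k with <y_k, r> >= sigma * ||r||^2,
    where y_k is the projection of xi onto F(x - gamma**k * r); return eta_i, z_i, y_{k_i}."""
    rr = float(np.dot(r, r))
    for k in range(max_backtracks + 1):
        y_k = proj_onto_F(x - (gamma ** k) * r, xi)   # one projection per trial point
        if float(np.dot(y_k, r)) >= sigma * rr:
            eta = gamma ** k
            return eta, x - eta * r, y_k              # eta_i, z_i, accepted y_{k_i}
    raise RuntimeError("backtracking safeguard exceeded; check that r != 0")

# Illustrative call with an assumed single-valued F (projection onto a singleton is that point);
# here r is just an arbitrary stand-in for the residual r_mu(x, xi).
if __name__ == "__main__":
    F_single = lambda x: 2.0 * x + 1.0
    proj_onto_F = lambda x, p: F_single(x)
    x = np.array([1.0, 0.0]); xi = F_single(x); r = np.array([0.3, 0.1])
    print(line_search(x, xi, r, proj_onto_F, sigma=0.4, gamma=0.9))
```

Remark 2.2 below guarantees that this loop terminates whenever r_µ(x_i, ξ_i) ≠ 0, so the safeguard is purely defensive.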

Let i = i + 1 and return to Step 1.

We now show that Algorithm 1 is well defined and implementable.

Remark 2.1. A vector ξ_i satisfying (2.1) is actually a solution of a classical Hartman–Stampacchia variational inequality. Therefore ξ_i satisfying (2.1) does exist, because the mapping ξ ↦ r_µ(x_i, ξ) is continuous and F(x_i) is a nonempty compact convex set. Moreover, such a ξ_i can be computed by known algorithms for the classical variational inequality.

Remark 2.2. If r_µ(x_i, ξ_i) ≠ 0, then k satisfying (2.2) does exist. Indeed, in view of Lemma 2.1 below, lim_{k→∞} y_k = ξ_i. Therefore

lim_{k→∞} ⟨y_k, r_µ(x_i, ξ_i)⟩ = ⟨ξ_i, r_µ(x_i, ξ_i)⟩ ≥ µ⁻¹‖r_µ(x_i, ξ_i)‖² > σ‖r_µ(x_i, ξ_i)‖²,

where the first inequality is due to Lemma 2.3 and the second one is due to µ⁻¹ > σ and r_µ(x_i, ξ_i) ≠ 0.

Remark 2.3. The computation of x_{i+1} in Step 4 is implementable. In particular, if F(z_i) is a polytope, then it has finitely many extreme points, say {e_1, . . . , e_m}. Thus

h_i(v) := sup_{ξ∈F(z_i)} ⟨ξ, v − z_i⟩ = max_{1≤j≤m} ⟨e_j, v − z_i⟩.
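When F(z_i) is a polytope, as in Remark 2.3, h_i is the maximum of finitely many affine functions, so the cut C_i of Step 4 is described by finitely many linear inequalities intersected with C. The sketch below projects a point onto such a cut with a generic solver; the extreme points E, the choice of C as a box (handled through bound constraints) and the use of scipy's SLSQP routine are illustrative assumptions rather than the paper's prescription.

```python
import numpy as np
from scipy.optimize import minimize

def h_i(v, E, z):
    """h_i(v) = max_j <e_j, v - z_i> over the extreme points e_j of the polytope F(z_i)."""
    return max(float(np.dot(e, v - z)) for e in E)

def project_onto_cut(x, E, z, bounds):
    """Step 4: Euclidean projection of x onto C_i = {v in C : h_i(v) <= 0},
    with C taken here as a box described by `bounds`."""
    cons = [{"type": "ineq", "fun": (lambda v, e=e: -float(np.dot(e, v - z)))} for e in E]
    res = minimize(lambda v: 0.5 * float(np.dot(v - x, v - x)), x0=np.asarray(x, dtype=float),
                   jac=lambda v: v - x, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x

# Tiny illustration with two extreme points and C = [0, 1]^2.
E = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
z = np.array([0.6, 0.6])
x_next = project_onto_cut(np.array([0.9, 0.9]), E, z, bounds=[(0.0, 1.0)] * 2)
print(x_next, h_i(x_next, E, z))
```

For a general closed convex C, Step 4 amounts to a convex quadratic program over C with the additional linear constraints ⟨e_j, v − z_i⟩ ≤ 0.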


Lemma 2.1. The sequence {y_k} generated in Step 2 has the following properties:

y_k ∈ F(x_i − γ^k r_µ(x_i, ξ_i))  and  lim_{k→∞} y_k = ξ_i.   (2.4)

Proof. Since F is lower semicontinuous, ξ_i ∈ F(x_i), and x_i − γ^k r_µ(x_i, ξ_i) → x_i as k → ∞, for each k there is u_k ∈ F(x_i − γ^k r_µ(x_i, ξ_i)) such that lim_{k→∞} u_k = ξ_i. Since y_k = Π_{F(x_i − γ^k r_µ(x_i, ξ_i))}(ξ_i),

‖y_k − ξ_i‖ ≤ ‖u_k − ξ_i‖ → 0,  as k → ∞.

So lim_{k→∞} y_k = ξ_i. □

Lemma 2.2. The function h_i defined by (2.3) is Lipschitz on R^n.

Proof. Let u, v ∈ R^n. Since F(z_i) is compact, there is ξ_u ∈ F(z_i) such that h_i(u) = ⟨ξ_u, u − z_i⟩. Therefore

h_i(u) − h_i(v) ≤ ⟨ξ_u, u − z_i⟩ − ⟨ξ_u, v − z_i⟩ = ⟨ξ_u, u − v⟩ ≤ M‖u − v‖,

where M := sup_{ξ∈F(z_i)} ‖ξ‖ < ∞. Exchanging the roles of u and v yields that h_i is Lipschitz with modulus M. □

Lemma 2.3. For every x ∈ R^n and ξ ∈ F(x),

⟨ξ, r_µ(x, ξ)⟩ ≥ µ⁻¹‖r_µ(x, ξ)‖².   (2.5)

Proof. Note that r_µ(x, ξ) = x − Π_C(x − µξ). The variational property of the projection implies that

⟨x − Π_C(x − µξ) − µξ, y − Π_C(x − µξ)⟩ ≤ 0,  for all y ∈ C;

in particular, taking y = x gives ⟨r_µ(x, ξ) − µξ, r_µ(x, ξ)⟩ ≤ 0, which is the desired inequality. □

Lemma 2.4. Let x* solve the variational inequality (1.1) and let the function h_i be defined by (2.3). Then h_i(x_i) ≥ η_i σ‖r_µ(x_i, ξ_i)‖² and h_i(x*) ≤ 0. In particular, if r_µ(x_i, ξ_i) ≠ 0 then h_i(x_i) > 0.

Proof. Since z_i = x_i − η_i r_µ(x_i, ξ_i),

h_i(x_i) = sup_{ξ∈F(z_i)} ⟨ξ, x_i − z_i⟩ = η_i sup_{ξ∈F(z_i)} ⟨ξ, r_µ(x_i, ξ_i)⟩ ≥ η_i ⟨y_{k_i}, r_µ(x_i, ξ_i)⟩ ≥ η_i σ‖r_µ(x_i, ξ_i)‖²,

where the last inequality follows from (2.2). Since F satisfies the property (1.2), h_i(x*) ≤ 0. □

3. Convergence analysis

Theorem 3.1. If F : C → 2^{R^n} is continuous with nonempty compact convex values on C and the condition (1.2) holds, then either Algorithm 1 terminates in a finite number of iterations or it generates an infinite sequence {x_i} converging to a solution of (1.1).

Proof. Let x* be a solution of the variational inequality problem. By Lemma 2.4, x* ∈ C_i. We assume that Algorithm 1 generates an infinite sequence {x_i}; in particular, r_µ(x_i, ξ_i) ≠ 0 for every i. By Step 4, it follows from Lemma 2.4 in [9] that

‖x_{i+1} − x*‖² ≤ ‖x_i − x*‖² − ‖x_{i+1} − x_i‖² ≤ ‖x_i − x*‖² − dist²(x_i, C_i),   (3.1)

where the last inequality is due to x_{i+1} ∈ C_i. It follows that the sequence {‖x_i − x*‖²} is nonincreasing, and hence convergent. Therefore {x_i} is bounded and

lim_{i→∞} dist(x_i, C_i) = 0.   (3.2)

Since F is continuous with compact values, Proposition 3.11 in [2] implies that {F(x_i) : i ∈ N} is a bounded set, and so the sequences {ξ_i} and {z_i} are bounded. Thus the continuity of F implies that {F(z_i)} is a bounded set: for some M > 0,

sup_{ζ∈F(z_i)} ‖ζ‖ ≤ M,  for all i.   (3.3)

In view of Lemma 2.2, each function h_i is Lipschitz continuous on C with modulus M. Noting that x_i ∉ C_i and applying Lemma 2.3 in [9], we obtain that

dist(x_i, C_i) ≥ M⁻¹ h_i(x_i),  for all i.   (3.4)


It follows from (3.1), (3.4) and Lemma 2.4 that dist(x_i, C_i) ≥ M⁻¹ h_i(x_i) ≥ M⁻¹ σ η_i ‖r_µ(x_i, ξ_i)‖². Thus (3.2) implies that

lim_{i→∞} η_i ‖r_µ(x_i, ξ_i)‖² = 0.   (3.5)

If lim sup_{i→∞} η_i > 0, then we must have lim inf_{i→∞} ‖r_µ(x_i, ξ_i)‖ = 0. Since r_µ(·, ·) is continuous and the sequences {x_i} and {ξ_i} are bounded, there exists an accumulation point (x̄, ξ̄) of {(x_i, ξ_i)} such that r_µ(x̄, ξ̄) = 0. This implies that x̄ solves the variational inequality (1.1). Replacing x* by x̄ in the preceding argument, we obtain that the sequence {‖x_i − x̄‖} is nonincreasing and hence converges. Since x̄ is an accumulation point of {x_i}, some subsequence of {‖x_i − x̄‖} converges to zero. This shows that the whole sequence {‖x_i − x̄‖} converges to zero, and hence lim_{i→∞} x_i = x̄.

Suppose now that lim_{i→∞} η_i = 0. Let (x̄, ξ̄) be any accumulation point of {(x_i, ξ_i)}: there exists a subsequence {(x_{i_j}, ξ_{i_j})} converging to (x̄, ξ̄). By the construction of the algorithm, y_{k_{i_j}−1} ∈ F(x_{i_j} − γ^{k_{i_j}−1} r_µ(x_{i_j}, ξ_{i_j})) ≡ F(x_{i_j} − γ⁻¹η_{i_j} r_µ(x_{i_j}, ξ_{i_j})). Without loss of generality, we assume that y_{k_{i_j}−1} converges to ȳ ∈ F(x̄). By the choice of η_i, (2.2) implies that

σ‖r_µ(x_{i_j}, ξ_{i_j})‖² > ⟨y_{k_{i_j}−1}, r_µ(x_{i_j}, ξ_{i_j})⟩
  = ⟨y_{k_{i_j}−1} − ξ_{i_j}, r_µ(x_{i_j}, ξ_{i_j})⟩ + ⟨ξ_{i_j}, r_µ(x_{i_j}, ξ_{i_j})⟩
  ≥ ⟨y_{k_{i_j}−1} − ξ_{i_j}, r_µ(x_{i_j}, ξ_{i_j})⟩ + µ⁻¹‖r_µ(x_{i_j}, ξ_{i_j})‖²,  for all j,

where the last inequality holds as a consequence of Lemma 2.3. Letting j → ∞, we obtain that

σ‖r_µ(x̄, ξ̄)‖² ≥ ⟨ȳ − ξ̄, r_µ(x̄, ξ̄)⟩ + µ⁻¹‖r_µ(x̄, ξ̄)‖².   (3.6)

By virtue of (2.1),

inf_{y∈F(x_i)} ⟨y − ξ_i, r_µ(x_i, ξ_i)⟩ ≥ 0,  for all i,

which implies, F being continuous (see Proposition 3.23 in [2]), that

inf_{y∈F(x̄)} ⟨y − ξ̄, r_µ(x̄, ξ̄)⟩ ≥ 0.

Since ȳ ∈ F(x̄), it follows that

⟨ȳ − ξ̄, r_µ(x̄, ξ̄)⟩ ≥ 0.

This together with (3.6) yields that

σ‖r_µ(x̄, ξ̄)‖² ≥ µ⁻¹‖r_µ(x̄, ξ̄)‖².

Since µσ < 1, r_µ(x̄, ξ̄) = 0; that is, x̄ solves the variational inequality (1.1). Applying the same argument as in the previous case, we get that lim_{i→∞} x_i = x̄. □

Now we provide a result on the convergence rate of the iterative sequence generated by Algorithm 1. To establish this result, we need a certain error bound to hold locally (see (3.7) below). Error bounds form a large topic in mathematical programming; one can refer to the survey [21] for the roles played by error bounds in the convergence analysis of iterative algorithms, and more recent developments on this topic are included in Chapter 6 of [5]. For any λ > 0, define

P(λ) := {(x, ξ) ∈ C × R^n : ξ = Π_{F(x)}(ξ − r_µ(x, ξ)), ‖r_µ(x, ξ)‖ ≤ λ}.

We say that F is locally Lipschitz on C if for every x ∈ C there are L > 0 and a neighborhood U of x such that H(F(y), F(z)) ≤ L‖y − z‖ for all y, z ∈ U ∩ C, where H denotes the Hausdorff metric.

Theorem 3.2. In addition to the assumptions of Theorem 3.1, suppose that F is locally Lipschitz continuous on C and that there exist positive constants c and λ such that

dist(x, S) ≤ c‖r_µ(x, ξ)‖,  for all (x, ξ) ∈ P(λ).   (3.7)

Then there is a constant α > 0 such that, for sufficiently large i,

dist(x_i, S) ≤ 1/√(α i + dist⁻²(x_0, S)).


Proof. By the proof of Theorem 3.1, {x_i} and {ξ_i} are convergent sequences. Since y_{k_i−1} ∈ F(x_i − γ^{k_i−1} r_µ(x_i, ξ_i)) = F(x_i − γ⁻¹η_i r_µ(x_i, ξ_i)) and since η_i ≤ 1, there is δ > 0 such that

max{‖x_i‖, ‖x_i − γ⁻¹η_i r_µ(x_i, ξ_i)‖} ≤ δ,  for all i.

The local Lipschitz continuity of F implies that there is L > 0 such that F is Lipschitz on B(0, δ) with modulus L. Put η := min{1/2, L⁻¹γ(µ⁻¹ − σ)}. We first prove that η_i > η for all i. By the construction of η_i, we have η_i ∈ (0, 1]. If η_i = 1, then clearly η_i > 1/2 ≥ η. Now assume that η_i < 1. Since η_i = γ^{k_i}, the nonnegative integer k_i ≥ 1. Thus the construction of k_i implies that

⟨y_{k_i−1}, r_µ(x_i, ξ_i)⟩ < σ‖r_µ(x_i, ξ_i)‖².   (3.8)

Since y_{k_i−1} ∈ F(x_i − γ⁻¹η_i r_µ(x_i, ξ_i)) and F has compact values, the definition of the Hausdorff metric implies the existence of ζ_i ∈ F(x_i) such that

‖y_{k_i−1} − ζ_i‖ ≤ H(F(x_i − γ⁻¹η_i r_µ(x_i, ξ_i)), F(x_i)) ≤ Lγ⁻¹η_i‖r_µ(x_i, ξ_i)‖,   (3.9)

where the second inequality is due to the Lipschitz property of F verified at the beginning of this proof. By (3.8), (2.1) and (3.9),

σ‖r_µ(x_i, ξ_i)‖² > ⟨y_{k_i−1} − ζ_i, r_µ(x_i, ξ_i)⟩ + ⟨ζ_i − ξ_i, r_µ(x_i, ξ_i)⟩ + ⟨ξ_i, r_µ(x_i, ξ_i)⟩
  ≥ µ⁻¹‖r_µ(x_i, ξ_i)‖² − ‖y_{k_i−1} − ζ_i‖ ‖r_µ(x_i, ξ_i)‖
  ≥ µ⁻¹‖r_µ(x_i, ξ_i)‖² − Lγ⁻¹η_i‖r_µ(x_i, ξ_i)‖².

Therefore η_i > L⁻¹γ(µ⁻¹ − σ) ≥ η.

Let x* ∈ Π_S(x_i). By the proof of Theorem 3.1 and (3.7), we obtain that for sufficiently large i,

dist²(x_{i+1}, S) ≤ ‖x_{i+1} − x*‖² ≤ ‖x_i − x*‖² − M⁻²η_i²(µ⁻¹ − σ)²‖r_µ(x_i, ξ_i)‖⁴
  ≤ ‖x_i − x*‖² − M⁻²η²(µ⁻¹ − σ)²‖r_µ(x_i, ξ_i)‖⁴
  ≤ dist²(x_i, S) − M⁻²η²(µ⁻¹ − σ)²c⁻⁴ dist⁴(x_i, S).

Write α for M⁻²η²(µ⁻¹ − σ)²c⁻⁴. Applying Lemma 6 in Chapter 2 of [22], we have

dist(x_i, S) ≤ dist(x_0, S)/√(α i dist²(x_0, S) + 1) = 1/√(α i + dist⁻²(x_0, S)).

This completes the proof. □
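The estimate in Theorem 3.2 decays like O(1/√i). The snippet below merely evaluates its right-hand side for illustrative values of α and dist(x_0, S); these numbers are arbitrary placeholders, not quantities derived from the paper.

```python
import math

def rate_bound(i, alpha, d0):
    """Right-hand side of Theorem 3.2: 1 / sqrt(alpha * i + dist(x0, S) ** -2)."""
    return 1.0 / math.sqrt(alpha * i + d0 ** (-2))

# Illustrative decay with placeholder values alpha = 0.1 and dist(x0, S) = 1.
for i in (1, 10, 100, 1000):
    print(i, rate_bound(i, alpha=0.1, d0=1.0))
```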



Remark 3.1. Chapter 6 in [5] presents some results on the availability of (3.7) in the case where the mapping F is single-valued; see Proposition 6.2.1(b), Corollary 6.2.2 and Theorem 6.2.5 there. As a consequence of the following result, if F is strongly monotone and Lipschitz continuous on C, then (3.7) holds.

Proposition 3.1. Suppose that F : C → 2^{R^n} is strongly monotone and Lipschitz continuous: there are α > 0 and β > 0 such that for all x, y ∈ C,

⟨u − v, y − x⟩ ≥ α‖y − x‖²,  for all u ∈ F(y), v ∈ F(x),   (3.10)
H(F(x), F(y)) ≤ β‖x − y‖.   (3.11)

Then the problem (1.1) has a unique solution x* and there is a constant c > 0 such that for all x ∈ C and all ξ satisfying ξ = Π_{F(x)}(ξ − r_µ(x, ξ)),

‖x − x*‖ ≤ c‖r_µ(x, ξ)‖.

Proof. Let x* be the unique solution of (1.1). Then there exists some u* ∈ F(x*) such that

⟨u*, y − x*⟩ ≥ 0,  for all y ∈ C.   (3.12)

Fix any x ∈ C. The Lipschitz continuity of F yields that there exists ζ ∈ F(x) such that

‖ζ − u*‖ ≤ H(F(x), F(x*)) ≤ β‖x − x*‖.   (3.13)

Since x − r_µ(x, ξ) = Π_C(x − µξ) ∈ C, (3.12) yields that

⟨u*, x − r_µ(x, ξ) − x*⟩ ≥ 0.


Table 1
Example 4.1.

Initial point     Iter.   Tolerance   Output
(1, 0, 0)         28      10⁻⁷        (0, 0.00000010492621, 0.99999989507379)
(0, 1, 0)         23      10⁻⁷        (0, 0.00000011920929, 0.99999988079071)
(0.5, 0.5, 0)     26      10⁻⁷        (0, 0.00000007259976, 0.99999992740024)
(1, 0, 0)         21      10⁻⁵        (0, 0.00001319501520, 0.99998680498480)
(0, 1, 0)         17      10⁻⁵        (0, 0.00000762939453, 0.99999237060547)
(0.5, 0.5, 0)     19      10⁻⁵        (0, 0.00000932348858, 0.99999067651142)

In view of the variational property of the projection,

⟨r_µ(x, ξ) − µξ, x − r_µ(x, ξ) − x*⟩ ≥ 0.

By the last two expressions and (3.10), we have

α‖x − x*‖² ≤ ⟨ξ − u*, x − x*⟩
  ≤ ⟨r_µ(x, ξ), ξ − u*⟩ − (1/µ)‖r_µ(x, ξ)‖² + (1/µ)⟨r_µ(x, ξ), x − x*⟩
  ≤ ⟨r_µ(x, ξ), ζ − u*⟩ + (1/µ)⟨r_µ(x, ξ), x − x*⟩
  ≤ ‖r_µ(x, ξ)‖ (‖ζ − u*‖ + (1/µ)‖x − x*‖)
  ≤ (β + 1/µ)‖r_µ(x, ξ)‖ ‖x − x*‖,

where the third inequality is because ξ = Π_{F(x)}(ξ − r_µ(x, ξ)) and ζ ∈ F(x), and the last one is due to (3.13). Therefore

‖x − x*‖ ≤ ((µβ + 1)/(αµ)) ‖r_µ(x, ξ)‖.

This completes the proof. □

The following example shows that (3.7) may hold even if F is not strongly monotone.

Example 3.1. Let C := [−1, ∞) and F(x) := {x² + 1}. Then F is pseudomonotone but not monotone on C, and the solution set of (1.1) is S = {−1}. The condition (3.7) holds with c = max{1, 1/µ} and λ = 1.

4. Numerical test

In this section, we present a small numerical experiment for the algorithm. The MATLAB codes are run on a PC (with an Intel P4 CPU) under MATLAB Version 6.5.1.199709 (R13) Service Pack 1, which contains Optimization Toolbox Version 2.3.

Example 4.1. Let n = 3,

C := {x ∈ R^n_+ : Σ_{i=1}^{n} x_i = 1},

and let F : C → 2^{R^n} be defined by F(x) := {(t, t − x_1, t − x_2) : t ∈ [0, 1]}. Then the set C and the mapping F satisfy the assumptions of Theorem 3.1, and (0, 0, 1) is a solution of the generalized variational inequality. Since the set C has no interior point, C is not the feasible set of a real-valued convex inequality satisfying the Slater constraint qualification. We take σ = 0.4 and γ = 0.9. The tolerance ε means that the procedure stops when ‖r_µ(x, ξ)‖ ≤ ε; see Table 1.
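For readers who want to reproduce the setup of Example 4.1 outside MATLAB, the Python sketch below encodes the data of the example: the projection onto the unit simplex C (which the residual r_µ needs) and the segment F(x) through its two endpoints, and it checks numerically that x* = (0, 0, 1) satisfies (1.1) and Proposition 2.1. The value mu = 1.0 is an arbitrary admissible choice (µ ∈ (0, 1/σ)); the paper does not report the value of µ used in the experiment, and this transcription is not the authors' original code.

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection onto C = {x >= 0, x1 + ... + xn = 1} (standard sort-based formula)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def F_endpoints(x):
    """F(x) = {(t, t - x1, t - x2) : t in [0, 1]} is a segment; return its endpoints t = 0 and t = 1."""
    return np.array([0.0, -x[0], -x[1]]), np.array([1.0, 1.0 - x[0], 1.0 - x[1]])

def residual(x, xi, mu):
    """Natural residual r_mu(x, xi) of Proposition 2.1 for the simplex C."""
    return x - proj_simplex(x - mu * xi)

# Check that x* = (0, 0, 1) solves (1.1): <xi, y - x*> is linear in y, so it suffices to
# test the vertices of the simplex; also check that the residual vanishes (Proposition 2.1).
x_star = np.array([0.0, 0.0, 1.0])
vertices = np.eye(3)
a0, a1 = F_endpoints(x_star)
for t in (0.0, 0.5, 1.0):                        # xi = (1 - t) a0 + t a1 runs over F(x*)
    xi = (1.0 - t) * a0 + t * a1
    assert min(float(np.dot(xi, v - x_star)) for v in vertices) >= -1e-12
    assert np.linalg.norm(residual(x_star, xi, mu=1.0)) <= 1e-12   # mu = 1.0: arbitrary admissible value
print("x* = (0, 0, 1) passes the checks of Proposition 2.1 for Example 4.1.")
```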

Acknowledgements

We are grateful to the referees for valuable suggestions.

References

[1] E. Allevi, A. Gnudi, I.V. Konnov, Generalized variational inequalities, Math. Methods Oper. Res. 63 (3) (2006) 553–565.
[2] Jean-Pierre Aubin, Ivar Ekeland, Applied Nonlinear Analysis, John Wiley & Sons Inc, New York, 1984.


[3] A. Auslender, M. Teboulle, Lagrangian duality and related multiplier methods for variational inequality problems, SIAM J. Optim. 10 (4) (2000) 1097–1115.
[4] Felix E. Browder, Multi-valued monotone nonlinear mappings and duality mappings in Banach spaces, Trans. Amer. Math. Soc. 118 (1965) 338–351.
[5] F. Facchinei, Jong-Shi Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer-Verlag, New York, 2003.
[6] S.C. Fang, E.L. Peterson, Generalized variational inequalities, J. Optim. Theory Appl. 38 (3) (1982) 363–383.
[7] Masao Fukushima, The primal Douglas–Rachford splitting algorithm for a class of monotone mappings with application to the traffic equilibrium problem, Math. Program. 72 (1, Ser. A) (1996) 1–15.
[8] Yiran He, Stable pseudomonotone variational inequality in reflexive Banach spaces, J. Math. Anal. Appl. 330 (2007) 352–363.
[9] Yiran He, A new double projection algorithm for variational inequalities, J. Comput. Appl. Math. 185 (1) (2006) 166–173.
[10] S. Karamardian, Complementarity problems over cones with monotone and pseudomonotone maps, J. Optim. Theory Appl. 18 (4) (1976) 445–454.
[11] I.V. Konnov, On the rate of convergence of combined relaxation methods, Izv. Vyssh. Uchebn. Zaved. Mat. 37 (12) (1993) 89–92.
[12] I.V. Konnov, A class of combined iterative methods for solving variational inequalities, J. Optim. Theory Appl. 94 (3) (1997) 677–693.
[13] I.V. Konnov, The method of multipliers for nonlinearly constrained variational inequalities, Optimization 51 (6) (2002) 907–926.
[14] I.V. Konnov, Iterative algorithms for multi-valued inclusions with Z mappings, J. Comput. Appl. Math. 206 (2007) 358–365.
[15] Igor Konnov, Combined Relaxation Methods for Variational Inequalities, Springer-Verlag, Berlin, 2001.
[16] Igor V. Konnov, A combined relaxation method for variational inequalities with nonlinear constraints, Math. Program. 80 (2, Ser. A) (1998) 239–252.
[17] Igor V. Konnov, On the convergence of combined relaxation methods for variational inequalities, Optim. Methods Softw. 9 (1–3) (1998) 77–92.
[18] Igor V. Konnov, A combined relaxation method for a class of nonlinear variational inequalities, Optimization 51 (1) (2002) 127–143.
[19] Igor V. Konnov, A combined relaxation method for nonlinear variational inequalities, Optim. Methods Softw. 17 (2) (2002) 271–292.
[20] Igor V. Konnov, Combined relaxation methods for generalized monotone variational inequalities, in: Generalized Convexity and Related Topics, in: Lecture Notes in Econom. and Math. Systems, vol. 583, Springer, Berlin, 2007, pp. 3–31.
[21] Jong-Shi Pang, Error bounds in mathematical programming, Math. Program. 79 (1–3, Ser. B) (1997) 299–332.
[22] Boris T. Polyak, Introduction to Optimization, Optimization Software Inc. Publications Division, New York, 1987.
[23] R. Tyrrell Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim. 14 (5) (1976) 877–898.
[24] Romesh Saigal, Extension of the generalized complementarity problem, Math. Oper. Res. 1 (3) (1976) 260–266.
[25] Geneviève Salmon, Jean-Jacques Strodiot, Van Hien Nguyen, A bundle method for solving variational inequalities, SIAM J. Optim. 14 (3) (2003) 869–893.
[26] M.V. Solodov, B.F. Svaiter, A new projection method for variational inequality problems, SIAM J. Control Optim. 37 (3) (1999) 765–776.