A projected subgradient method for solving generalized mixed variational inequalities

Operations Research Letters 36 (2008) 637–642

Fu-quan Xia a,*, Nan-jing Huang b,c, Zhi-bin Liu d,c

a Department of Mathematics, Sichuan Normal University, Chengdu, Sichuan 610066, PR China
b Department of Mathematics, Sichuan University, Chengdu, Sichuan 610064, PR China
c State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation, Chengdu, 610500, PR China
d Department of Applied Mathematics, Southwest Petroleum University, Chengdu, 610500, PR China

Article history: Received 29 September 2007; Accepted 31 March 2008; Available online 8 May 2008

Keywords: Iterative scheme; Projected subgradient method; Set-valued mapping; Paramonotonicity; ε_k-subgradient

Abstract: We consider the projected subgradient method for solving generalized mixed variational inequalities. In each step, we choose an ε_k-subgradient u^k of the function f and w^k in a set-valued mapping T, followed by an orthogonal projection onto the feasible set. We prove that the sequence is weakly convergent. © 2008 Elsevier B.V. All rights reserved.

1. Introduction

Let X be a nonempty closed convex subset of a Hilbert space H. Let T : X → 2^H be a set-valued mapping and f : H → (−∞, +∞] be a lower semi-continuous (l.s.c.) proper convex function. We consider a generalized mixed variational inequality problem (in short, GMVIP): find x* ∈ X such that there exists w* ∈ T(x*) satisfying

⟨w*, y − x*⟩ + f(y) − f(x*) ≥ 0,  ∀y ∈ X.  (1.1)

The generalized mixed variational inequality problem (1.1) is encountered in many applications, in particular, in mechanical problems (see, e.g., [17]) and equilibrium problems (see, e.g., [5,11]). It is well known that problem (1.1) includes a large variety of problems as special instances. For example, if T is the subdifferential of a finite-valued convex continuous function ϕ defined on the Hilbert space H, then problem (1.1) reduces to the following nondifferentiable convex optimization problem:

min_{x∈X} {f(x) + ϕ(x)}.

Furthermore, if T is single-valued and f = 0, then problem (1.1) reduces to the following classical variational inequality problem: find x* ∈ X such that, for all y ∈ X,

⟨T(x*), y − x*⟩ ≥ 0.  (1.2)
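For intuition, the classical projection method mentioned in the next paragraph iterates x^{k+1} = P_X[x^k − γT(x^k)] for problem (1.2). The following minimal sketch is our own toy illustration, not from the paper: the affine monotone operator T(x) = Ax + b, the box constraint set X = [0, 1]^2 (whose projection is a componentwise clip), and the step size γ are all assumed data.

```python
import numpy as np

# Toy sketch of the classical projection method x^{k+1} = P_X[x^k - gamma*T(x^k)]
# for the VI (1.2), with an assumed affine monotone T(x) = A x + b and X a box.
def project_box(z, lo, hi):
    return np.clip(z, lo, hi)  # projection onto [lo, hi]^n is a componentwise clip

def projection_method(A, b, lo, hi, gamma=0.1, iters=2000):
    x = np.zeros_like(b)
    for _ in range(iters):
        x = project_box(x - gamma * (A @ x + b), lo, hi)
    return x

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite => strongly monotone
b = np.array([-1.0, -1.0])
x = projection_method(A, b, lo=0.0, hi=1.0)
# here the unconstrained solution of A x = -b lies inside the box, so x -> (1/3, 1/3)
```

For a strongly monotone Lipschitz T and a small enough step size γ, the map x ↦ P_X[x − γT(x)] is a contraction, which is why this simple scheme converges here.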

Many methods have been proposed to solve the classical variational inequality (1.2) in finite and infinite dimensional spaces. The simplest one among these is the projection method, which has been intensively studied by many authors (see, e.g., [8,13,20]). However, the classical projection method is not suitable for solving the generalized mixed variational inequality problem (1.1). Therefore, it is worth studying implementable methods for solving problem (1.1). Algorithms that can be applied to problem (1.1) or one of its variants are numerous (see, for example, [19]). For the case when T is maximal monotone, the most famous method is the proximal method (see, e.g., Rockafellar [18]). Splitting methods have also

∗ Corresponding author. E-mail address: [email protected] (F.-q. Xia). 0167-6377/$ – see front matter © 2008 Elsevier B.V. All rights reserved. doi:10.1016/j.orl.2008.03.007


been studied to solve problem (1.1). Here the set-valued mapping T and ∂(f + ψ_X) play separate roles. The simplest splitting method is the forward–backward scheme (see, e.g., Tseng [21]). Based on the so-called auxiliary problem principle, Cohen [4] developed a general algorithmic framework for solving problem (1.1) in a Hilbert space H. The corresponding method is a generalization of the forward–backward method. More precisely, let Ω be a strongly monotone and Lipschitz continuous auxiliary operator on H, and let {µ_k} be a sequence of positive real numbers. The problem considered at iteration k is as follows:

x^{k+1} ∈ [Ω + µ_k ∂(f + ψ_X)]^{−1} [Ω − µ_k T](x^k),

i.e.,

(AP_k)  choose w^k ∈ T(x^k) and find x^{k+1} ∈ X such that, for all x ∈ X,
        ⟨w^k + µ_k^{−1}[Ω(x^{k+1}) − Ω(x^k)], x − x^{k+1}⟩ + f(x) − f(x^{k+1}) ≥ 0,

where Ω is chosen as the gradient of some continuously differentiable and strongly convex function h with Lipschitz continuous gradient. In that case, the subproblem can be equivalently written in the following minimization form:

(AP_k)  x^{k+1} = argmin_{x∈X} {f(x) + ⟨w^k, x − x^k⟩ + µ_k^{−1}[h(x) − h(x^k) − ⟨∇h(x^k), x − x^k⟩]},  with w^k ∈ T(x^k).
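A concrete special case may help: when h(x) = ½‖x‖² (so Ω = ∇h is the identity) and f = 0, the minimization form of (AP_k) reduces to the projection x^{k+1} = P_X[x^k − µ_k w^k]. The sketch below checks this numerically on a box; the data (x^k, w^k, µ_k, X = [0, 1]²) are arbitrary choices of ours, not from the paper.

```python
import numpy as np

# With h(x) = 0.5*||x||^2 and f = 0, the (AP_k) subproblem
#   argmin_{x in X} <w^k, x - x^k> + (1/(2 mu)) ||x - x^k||^2
# has the closed form P_X[x^k - mu * w^k]. We compare that closed form with a
# brute-force grid minimization of the same objective over X = [0,1]^2.
def apk_objective(x, xk, wk, mu):
    # f = 0; h(x) - h(xk) - <grad h(xk), x - xk> = 0.5 * ||x - xk||^2
    return wk @ (x - xk) + 0.5 / mu * np.sum((x - xk) ** 2)

xk = np.array([0.8, 0.2])
wk = np.array([1.0, -2.0])
mu = 0.3

closed_form = np.clip(xk - mu * wk, 0.0, 1.0)   # P_X[xk - mu*wk] on X = [0,1]^2

grid = np.linspace(0.0, 1.0, 201)
pts = np.array([[a, b] for a in grid for b in grid])
best = pts[np.argmin([apk_objective(p, xk, wk, mu) for p in pts])]
# the brute-force minimizer agrees with the closed-form projection
```

This is exactly why the forward–backward scheme with this choice of h is called a projected (gradient-type) step.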

When f is a nonsmooth convex function, the subproblems (AP_k) may be very hard to solve. Thus, several authors have proposed approximating the function f by using subgradients of f to generate a sequence of more tractable convex functions (see, for example, [12,14,19]). On the other hand, Alber, Iusem and Solodov [2] considered the projected subgradient method for constrained convex optimization in a Hilbert space, consisting of a step in the direction opposite to an ε_k-subgradient of the objective at the current iterate, followed by an orthogonal projection onto the feasible set. They proved that the sequence generated in this way is weakly convergent to a minimizer if the optimization problem has solutions, and is unbounded otherwise. They also presented some convergence rate results.

Motivated and inspired by this line of research, in this paper we use ideas similar to those of Alber, Iusem and Solodov [2] to provide a new projected subgradient method for solving the generalized mixed variational inequality problem (1.1) in Hilbert spaces. We first show how to generate the sequence {x^k} and how to adapt the stopping criterion. We then prove that the sequence {x^k} generated by the algorithm is weakly convergent and that its weak limit point is a solution of problem (1.1), where the set-valued mapping T is paramonotone and the function f is nonsmooth and convex.

2. Preliminaries

For a convex function f : H → (−∞, +∞], dom f = {x ∈ H : f(x) < ∞} denotes its effective domain,

∂_ε f(x) = {p ∈ H : f(y) ≥ f(x) + ⟨p, y − x⟩ − ε, ∀y ∈ H}

denotes its ε-subdifferential and ∂f = ∂_0 f its subdifferential. Suppose that X ⊂ H is a nonempty closed convex subset and

dist(z, X) := inf_{x∈X} ‖z − x‖

is the distance from z to X. Let P_X[z] denote the projection of z onto X, that is, P_X[z] satisfies the condition

‖z − P_X[z]‖ = dist(z, X).

The following well-known properties of the projection operator will be used below.

Proposition 2.1 ([16]). Let X be a nonempty closed convex subset in H. Then the following properties hold:
(i) ⟨x − y, x − P_X[x]⟩ ≥ 0 for all x ∈ H and y ∈ X;
(ii) ‖P_X[x] − P_X[y]‖ ≤ ‖x − y‖ for all x, y ∈ H.

Definition 2.1. Let X be a nonempty subset of a Hilbert space H. A set-valued mapping T : X → 2^H is said to be:
(i) monotone if

⟨u − v, x − y⟩ ≥ 0,  ∀x, y ∈ X, u ∈ T(x), v ∈ T(y);

(ii) paramonotone if T is monotone and ⟨u − v, x − y⟩ = 0 with x, y ∈ X, u ∈ T(x), v ∈ T(y) implies u ∈ T(y), v ∈ T(x);
(iii) Lipschitz continuous on a subset B of X if there exists L > 0 such that

H(T(x), T(y)) ≤ L‖x − y‖,  ∀x, y ∈ B,

where H(·, ·) is the Hausdorff metric on the nonempty bounded closed subsets of H.

Remark 2.1. The notion of paramonotonicity was introduced in Bruck [3] and further studied in Iusem [9].

Lemma 2.1 ([9]). If T is paramonotone and x* is a solution of the generalized mixed variational inequality problem (1.1), then x̂ is a solution of problem (1.1) if and only if x̂ ∈ X and there exists ŵ ∈ T(x̂) such that

⟨ŵ, x* − x̂⟩ + f(x*) − f(x̂) ≥ 0.
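Proposition 2.1 can be sanity-checked numerically for a concrete X. The sketch below is our own illustration, using the box X = [0, 1]^n, whose projection is a componentwise clip, and random test points.

```python
import numpy as np

# Numerical check of Proposition 2.1 for X = [0,1]^3, where P_X is np.clip.
rng = np.random.default_rng(0)

def P(z):
    return np.clip(z, 0.0, 1.0)

for _ in range(1000):
    x = rng.normal(size=3) * 3
    z = rng.normal(size=3) * 3
    y = rng.uniform(size=3)                                 # an arbitrary point of X
    assert (x - y) @ (x - P(x)) >= -1e-12                   # Proposition 2.1(i)
    assert np.linalg.norm(P(x) - P(z)) <= np.linalg.norm(x - z) + 1e-12  # 2.1(ii)
```

Property (ii), nonexpansiveness, is the one the convergence proofs below lean on repeatedly (e.g. in the derivation of (3.5)).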


Definition 2.2. A set-valued mapping T is said to be weakly closed on X if x^k → x̄ (weakly), x^k ∈ X and w^k → w̄ (weakly), w^k ∈ T(x^k) imply that w̄ ∈ T(x̄).

Obviously, if T is weakly closed on X, then T(x) is a weakly closed subset of H for each x ∈ X. The following result is Proposition 3.6(c) of Salmon, Strodiot and Nguyen [19].

Lemma 2.2 ([19]). Let x* be a given point and f : H → (−∞, +∞] a proper convex lower semi-continuous function. If T is a monotone operator such that T is bounded on bounded subsets of X and weakly closed on X, then l(x) = inf_{w∈T(x)} ⟨w, x − x*⟩ + f(x) − f(x*) is weakly lower semi-continuous on X.

We also need the following lemma.

Lemma 2.3 ([2]). Let {α_k} and {β_k} be real sequences. Assume that α_k ≥ 0 for all k ≥ 0 with Σ_{k=0}^∞ α_k = ∞ and Σ_{k=0}^∞ α_k β_k < ∞. Suppose there exist k̃ ≥ 0 and θ > 0 such that β_k ≥ 0 for all k ≥ k̃ and β_{k+1} − β_k ≤ θα_k for all k. Then lim_{k→∞} β_k = 0.

The following definition originates in [7] and has been further elaborated in [10].

Definition 2.3. Let H be a Hilbert space and V a nonempty subset of H. A sequence {x^k} is said to be quasi-Fejér-convergent in V if, for any x̄ ∈ V, there exist k̄ ≥ 0 and a sequence {δ_k} ⊂ R_+ such that Σ_{k=0}^∞ δ_k < ∞ and ‖x^{k+1} − x̄‖² ≤ ‖x^k − x̄‖² + δ_k for all k ≥ k̄.

The following result is Proposition 1 of Alber, Iusem and Solodov [2].

Lemma 2.4. Let {x^k} be a quasi-Fejér-convergent sequence in V. Then:
1. {x^k} is bounded;
2. {‖x^k − x̄‖²} converges for all x̄ ∈ V;
3. if all weak accumulation points of {x^k} belong to V, then {x^k} is weakly convergent, i.e., it has a unique weak accumulation point.

3. Projected subgradient method

From now on, we adopt the following assumptions (A1)–(A4):

(A1) The solution set S of problem (1.1) is nonempty (see, for example, [1]).
(A2) f : H → (−∞, +∞] is a proper convex lower semi-continuous function with X ⊂ int(dom f) and ∂_ε f is bounded on any bounded subset of X.
(A3) T : X → 2^H is a paramonotone set-valued mapping with bounded convex values such that T is weakly closed on X and Lipschitz continuous on bounded subsets of X.
(A4) {α_k} is a sequence of nonnegative real numbers satisfying

Σ_{k=0}^∞ α_k = ∞,  (3.1)

Σ_{k=0}^∞ α_k² < ∞,  (3.2)

and {ε_k} is a nonincreasing sequence of nonnegative real numbers such that there exists µ > 0 satisfying

ε_k ≤ µα_k  (3.3)

for all k.

Remark 3.1. (1) Since f is a proper convex lower semi-continuous function, f is also weakly lower semi-continuous, and continuous over int(dom f) (see [6]).
(2) We know that a monotone operator is locally bounded at interior points of its domain (see [17]). Since int(dom f) = int(dom ∂f), we deduce that ∂f is locally bounded at any point of int(dom f) (see [17]). Hence, if H is finite dimensional, ∂f is always bounded on bounded subsets of X. This is not true in a general Hilbert space. However, a sufficient condition for ∂f to be bounded on bounded subsets of X is that |f| is bounded on bounded subsets of X (see [2]).
(3) Assumption (A3) is the same as in Theorem 3.12(a) of [19]. By assumption (A3) and Lemma 3.5 of [19], we know that T is bounded on any bounded subset of X.

Algorithm 3.1.
Step 0. (Initialization) Select initial x^0 ∈ X and w^0 ∈ T(x^0). Set k = 0.
Step 1. If 0 ∈ T(x^k) + ∂f(x^k), stop; else go to Step 2.
Step 2. Let u^k ∈ ∂_{ε_k} f(x^k), η_k = max{1, ‖w^k‖ + ‖u^k‖} and

x^{k+1} = P_X[x^k − (α_k/η_k)(w^k + u^k)]  (3.4)

with α_k, ε_k satisfying (3.1)–(3.3).
Step 3. Take w^{k+1} ∈ T(x^{k+1}) such that ‖w^{k+1} − w^k‖ ≤ (1 + 1/(k+1)) H(T(x^{k+1}), T(x^k)). Let k = k + 1 and return to Step 1.
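To make the scheme concrete, here is a finite-dimensional sketch of Algorithm 3.1 for a single-valued T, where Step 3 is simply w^{k+1} = T(x^{k+1}). Everything in it is our own toy data, not from the paper: T(x) = Ax + b with A symmetric positive semidefinite (hence paramonotone), f = ‖·‖₁ (nonsmooth convex), X = [−1, 1]², exact subgradients (ε_k = 0), and α_k = 1/(k+1), which satisfies (3.1)–(3.3).

```python
import numpy as np

# Toy instance of Algorithm 3.1 with single-valued T(x) = A x + b,
# f(x) = ||x||_1, X = [-1,1]^n, eps_k = 0 and alpha_k = 1/(k+1).
def solve_gmvi(A, b, n_iters=20000, lo=-1.0, hi=1.0):
    x = np.zeros(len(b))
    w = A @ x + b                       # Step 0: w^0 = T(x^0)
    for k in range(n_iters):
        u = np.sign(x)                  # Step 2: a subgradient of ||.||_1 at x
        eta = max(1.0, np.linalg.norm(w) + np.linalg.norm(u))
        alpha = 1.0 / (k + 1)
        x = np.clip(x - (alpha / eta) * (w + u), lo, hi)   # the projected step (3.4)
        w = A @ x + b                   # Step 3 (single-valued T, cf. Remark 3.3)
    return x

A = np.eye(2)
b = np.array([-2.0, 0.5])
x = solve_gmvi(A, b)
# For this data problem (1.1) is the optimality condition of
# min_{x in X} 0.5*||x||^2 + <b, x> + ||x||_1, whose solution is x* = (1, 0).
```

Note the normalization by η_k: the effective step length is at most α_k, which is what gives the key estimate (3.5) below.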


Remark 3.2. In a Hilbert space H, a set-valued mapping T has closed values if T has weakly closed convex values. By Assumption (A3), T has nonempty bounded closed values. Thus, by Nadler's Theorem [15], there exists w^{k+1} ∈ T(x^{k+1}) such that

‖w^{k+1} − w^k‖ ≤ (1 + 1/(k+1)) H(T(x^{k+1}), T(x^k)).

This shows that Algorithm 3.1 is well defined.

Remark 3.3. If T is a single-valued mapping, then we can compute w^{k+1} simply as w^{k+1} = T(x^{k+1}). However, we do not know how to compute w^{k+1} when T is a set-valued mapping. Therefore, Step 3 of Algorithm 3.1 is purely conceptual when T is a set-valued mapping.

Now we analyze the convergence of the sequence generated by Algorithm 3.1 in the Hilbert space H.

Theorem 3.1. Suppose that the sequence {x^k} generated by Algorithm 3.1 is finite. Then the last term is a solution of problem (1.1).

Proof. If the sequence is finite, then it must stop at Step 1 for some x^k. In this case, we have 0 ∈ T(x^k) + ∂f(x^k) and so there exists w ∈ T(x^k) such that −w ∈ ∂f(x^k). By the definition of the subdifferential of f,

⟨w, y − x^k⟩ + f(y) − f(x^k) ≥ 0,  ∀y ∈ X,

and so x^k is a solution of problem (1.1). This completes the proof. □

From now on we assume that the sequence {x^k} generated by Algorithm 3.1 is infinite. The ideas of the proof of the following Theorem 3.2 are adapted from Lemma 1 of Alber, Iusem and Solodov [2].

Theorem 3.2. Suppose that Assumptions (A1)–(A4) hold. Then the sequence {x^k} generated by Algorithm 3.1 is bounded.

Proof. Let x* ∈ S, z^k = x^k − (α_k/η_k)(w^k + u^k) and β_k = ⟨w^k, x^k − x*⟩ + f(x^k) − f(x*). It follows from (3.4) that x^k ∈ X for all k and so P_X[x^k] = x^k. By using Proposition 2.1(ii) and ‖w^k + u^k‖ ≤ η_k, we have

‖x^{k+1} − x^k‖ = ‖P_X[z^k] − P_X[x^k]‖ ≤ ‖z^k − x^k‖ = (α_k/η_k)‖w^k + u^k‖ ≤ α_k.  (3.5)

In the following chain of equalities and inequalities, where we establish an upper bound on (α_k/η_k)β_k, the equalities are trivial and the inequalities are justified immediately. It follows that

α_k² + ‖x^k − x*‖² − ‖x^{k+1} − x*‖²
  ≥ ‖x^{k+1} − x^k‖² + ‖x^k − x*‖² − ‖x^{k+1} − x*‖²  (by (3.5))
  = 2⟨x^k − x*, x^k − x^{k+1}⟩
  = 2⟨x^k − x*, x^k − z^k⟩ + 2⟨x^k − x*, z^k − x^{k+1}⟩
  = 2(α_k/η_k)⟨w^k + u^k, x^k − x*⟩ + 2⟨x^k − z^k, z^k − x^{k+1}⟩ + 2⟨z^k − x*, z^k − x^{k+1}⟩
  = 2(α_k/η_k)⟨w^k + u^k, x^k − x*⟩ + 2⟨x^k − z^k, z^k − x^{k+1}⟩ + 2⟨z^k − x*, z^k − P_X[z^k]⟩
  ≥ 2(α_k/η_k)⟨w^k + u^k, x^k − x*⟩ + 2⟨x^k − z^k, z^k − x^{k+1}⟩  (by Proposition 2.1(i))
  = 2(α_k/η_k)⟨w^k + u^k, x^k − x*⟩ + 2⟨x^k − z^k, z^k − x^k⟩ + 2⟨x^k − z^k, x^k − x^{k+1}⟩
  ≥ 2(α_k/η_k)⟨w^k + u^k, x^k − x*⟩ − 2‖x^k − z^k‖² − 2‖x^k − z^k‖‖x^k − x^{k+1}‖
  ≥ 2(α_k/η_k)⟨w^k + u^k, x^k − x*⟩ − 2(α_k²/η_k²)‖w^k + u^k‖² − 2(α_k²/η_k)‖w^k + u^k‖  (by (3.5))
  ≥ 2(α_k/η_k)⟨w^k + u^k, x^k − x*⟩ − 4α_k²  (since ‖w^k + u^k‖ ≤ η_k)
  ≥ 2(α_k/η_k)[⟨w^k, x^k − x*⟩ + f(x^k) − f(x*) − ε_k] − 4α_k²  (by the definition of ∂_{ε_k} f(x^k))
  = 2(α_k/η_k)β_k − 2(α_k/η_k)ε_k − 4α_k²
  ≥ 2(α_k/η_k)β_k − 2α_k ε_k − 4α_k²  (since η_k ≥ 1)
  ≥ 2(α_k/η_k)β_k − (2µ + 4)α_k²  (by (3.3)).  (3.6)

Since x* ∈ S, there exists w* ∈ T(x*) such that

⟨w*, y − x*⟩ + f(y) − f(x*) ≥ 0,  ∀y ∈ X.  (3.7)


Taking y = x^k in (3.7), we have

⟨w*, x^k − x*⟩ + f(x^k) − f(x*) ≥ 0.  (3.8)

It follows from the monotonicity of T that

⟨w^k, x^k − x*⟩ + f(x^k) − f(x*) ≥ 0,  ∀w^k ∈ T(x^k).  (3.9)

Therefore, β_k ≥ 0 and (3.6) implies that

0 ≤ 2(α_k/η_k)β_k ≤ ‖x^k − x*‖² − ‖x^{k+1} − x*‖² + (2µ + 5)α_k².  (3.10)

Now we prove that the sequence {x^k} is bounded. Let δ_k = (2µ + 5)α_k² and γ = Σ_{k=0}^∞ α_k². By (3.2), γ < +∞, so that Σ_{k=0}^∞ δ_k = (2µ + 5)γ < +∞. Since ‖x^{k+1} − x*‖² ≤ ‖x^k − x*‖² + δ_k for all k, we conclude that

‖x^{k+1} − x*‖² ≤ ‖x^0 − x*‖² + Σ_{j=0}^k δ_j ≤ ‖x^0 − x*‖² + Σ_{j=0}^∞ δ_j.

Thus, the sequence {x^k} is contained in a certain ball centered at x* and the boundedness of {x^k} follows. This completes the proof. □

Theorem 3.3. Let x* ∈ S be a solution of problem (1.1) and

β_k = ⟨w^k, x^k − x*⟩ + f(x^k) − f(x*).

If Assumptions (A1)–(A4) hold, then lim_{k→∞} β_k = 0.

Proof. By Theorem 3.2, the sequence {x^k} is bounded and there exists a constant λ > 0 such that ‖x^k‖ ≤ λ for all k. Let B be a bounded set containing {x^k} and ε̄ = sup{ε_k}. Then

u^k ∈ ∂_{ε_k} f(x^k) ⊂ ∂_{ε̄} f(x^k) ⊂ ∪_{y∈B} ∂_{ε̄} f(y).  (3.11)

By (3.11) and assumptions (A2) and (A3), we know that {u^k} and {w^k} are both bounded and so there exists ρ > 1 such that ‖u^k‖ + ‖w^k‖ ≤ ρ for all k. It follows that

η_k = max{1, ‖u^k‖ + ‖w^k‖} ≤ max{1, ρ} = ρ.

By (3.10),

0 ≤ (2/ρ)α_k β_k ≤ ‖x^k − x*‖² − ‖x^{k+1} − x*‖² + (2µ + 5)α_k².  (3.12)

Summing up (3.12), we get

(2/ρ) Σ_{k=0}^∞ α_k β_k ≤ ‖x^0 − x*‖² + (2µ + 5) Σ_{k=0}^∞ α_k².  (3.13)

It follows from (3.2) that

Σ_{k=0}^∞ α_k β_k < ∞.  (3.14)

Observe that

β_{k+1} − β_k = ⟨w^{k+1}, x^{k+1} − x*⟩ − ⟨w^k, x^k − x*⟩ + f(x^{k+1}) − f(x^k)
  = ⟨w^{k+1}, x^{k+1} − x^k⟩ + ⟨w^{k+1} − w^k, x^k − x*⟩ + f(x^{k+1}) − f(x^k)
  ≤ ⟨w^{k+1} + u^{k+1}, x^{k+1} − x^k⟩ + ⟨w^{k+1} − w^k, x^k − x*⟩ + ε_{k+1}
  ≤ (‖u^{k+1}‖ + ‖w^{k+1}‖)‖x^{k+1} − x^k‖ + ‖w^{k+1} − w^k‖‖x^k − x*‖ + ε_k
  ≤ ρ‖x^{k+1} − x^k‖ + (‖x^k‖ + ‖x*‖)(1 + 1/(k+1)) H(T(x^{k+1}), T(x^k)) + ε_k
  ≤ (ρ + 2Lλ + 2L‖x*‖)‖x^{k+1} − x^k‖ + ε_k
  ≤ (ρ + 2Lλ + 2L‖x*‖ + µ)α_k,  (3.15)

using the definition of ∂_{ε_{k+1}} f(x^{k+1}) in the first inequality, ε_{k+1} ≤ ε_k and the Cauchy–Schwarz inequality in the second one, the choice of w^{k+1} in the third one, the boundedness of {x^k} and the Lipschitz continuity of T in the fourth one, and (3.3) together with (3.5) in the fifth one. Let θ = ρ + 2Lλ + 2L‖x*‖ + µ. By (3.9), we have β_k ≥ 0 for all k. It follows from (3.1), (3.14), (3.15) and Lemma 2.3 that lim_{k→∞} β_k = 0. This completes the proof. □
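Theorem 3.3 can also be observed empirically. The sketch below tracks β_k along Algorithm 3.1 on a toy single-valued instance of ours (T(x) = x + b, f = ‖·‖₁, X = [−1, 1]², α_k = 1/(k+1)); the solution x* = (1, 0) used in β_k comes from the optimality condition of the equivalent minimization problem for this data, not from the paper.

```python
import numpy as np

# Track beta_k = <w^k, x^k - x*> + f(x^k) - f(x*) along Algorithm 3.1 on a toy
# problem: T(x) = x + b, f = ||.||_1, X = [-1,1]^2, assumed solution x* = (1, 0).
b = np.array([-2.0, 0.5])
x_star = np.array([1.0, 0.0])

def f(v):
    return np.abs(v).sum()

x = np.zeros(2)
w = x + b
betas = []
for k in range(20000):
    betas.append(w @ (x - x_star) + f(x) - f(x_star))
    u = np.sign(x)
    eta = max(1.0, np.linalg.norm(w) + np.linalg.norm(u))
    x = np.clip(x - (1.0 / (k + 1) / eta) * (w + u), -1.0, 1.0)
    w = x + b
# beta_k stays nonnegative, as (3.9) predicts, and decays toward 0
```

Nonnegativity of every β_k is exactly inequality (3.9), and the decay to 0 is the conclusion of Theorem 3.3.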

Theorem 3.4. Suppose that Assumptions (A1)–(A4) hold. Then every weak accumulation point of the sequence {x^k} generated by Algorithm 3.1 is a solution of problem (1.1).


Proof. Let x* ∈ S be a solution of problem (1.1). Then there exists w* ∈ T(x*) such that

⟨w*, x − x*⟩ + f(x) − f(x*) ≥ 0,  ∀x ∈ X.  (3.16)

Since T is monotone, it follows from (3.16) that

⟨w, x − x*⟩ + f(x) − f(x*) ≥ 0,  ∀w ∈ T(x), ∀x ∈ X,

and so

inf_{w∈T(x)} ⟨w, x − x*⟩ + f(x) − f(x*) ≥ 0,  ∀x ∈ X.  (3.17)

Let

l(x) = inf_{w∈T(x)} ⟨w, x − x*⟩ + f(x) − f(x*).

By (3.17), we have l(x) ≥ 0 for all x ∈ X. Now we prove that each weak accumulation point of {x^k} is a solution of problem (1.1). Let x̄ be a weak accumulation point of {x^k}, which exists by Theorem 3.2. Since X is a closed convex subset of H, it is weakly closed; as {x^k} ⊂ X, we get x̄ ∈ X and thus l(x̄) ≥ 0. Assume {x^{k_j}} is a subsequence of {x^k} whose weak limit is x̄. By the definition of β_k in Theorem 3.3, we have l(x^k) ≤ β_k. It follows from Lemma 2.2 that l(x) is weakly lower semi-continuous. By Theorem 3.3,

0 ≤ l(x̄) ≤ liminf_{j→∞} l(x^{k_j}) ≤ liminf_{j→∞} β_{k_j} = lim_{j→∞} β_{k_j} = 0  (3.18)

and so l(x̄) = 0. By the definition of the infimum, there exists a sequence {s^k} contained in T(x̄) such that, for all k ≥ 1,

0 ≤ ⟨s^k, x̄ − x*⟩ + f(x̄) − f(x*) < 1/k.

Since the subset T(x̄) is bounded and weakly closed, there exists a subsequence of {s^k} converging weakly to some s ∈ T(x̄). It follows that

0 ≤ ⟨s, x̄ − x*⟩ + f(x̄) − f(x*) ≤ 0,

and, since T is paramonotone, Lemma 2.1 implies that x̄ is a solution of problem (1.1). This completes the proof. □

Theorem 3.5. Suppose that Assumptions (A1)–(A4) hold. Then the sequence {x^k} generated by Algorithm 3.1 is weakly convergent.

Proof. For each x* ∈ S, by (3.10), we deduce that

‖x^{k+1} − x*‖² ≤ ‖x^k − x*‖² + (2µ + 5)α_k².

It follows from (3.2) and Definition 2.3 that the sequence {x^k} is quasi-Fejér-convergent in S. By Theorem 3.4 and Lemma 2.4(3), the sequence {x^k} is weakly convergent. This completes the proof. □

Acknowledgments

The authors are grateful to Professor P. Marcotte and the referee for valuable comments and suggestions. This work was supported by the National Natural Science Foundation of China (10671135), the Specialized Research Fund for the Doctoral Program of Higher Education (20060610005), the NSF of Sichuan Education Department of China (07ZB068) and the Open Fund (PLN0703) of the State Key Laboratory of Oil and Gas Reservoir Geology and Exploitation (Southwest Petroleum University).

References

[1] J.P. Aubin, I. Ekeland, Applied Nonlinear Analysis, Wiley, New York, 1984.
[2] Y.I. Alber, A.N. Iusem, M.V. Solodov, On the projected subgradient method for nonsmooth convex optimization in a Hilbert space, Math. Program. 81 (1998) 23–35.
[3] R.D. Bruck, An iterative solution of a variational inequality for certain monotone operators in Hilbert space, Bull. Amer. Math. Soc. 81 (1975) 890–892.
[4] G. Cohen, Auxiliary problem principle extended to variational inequalities, J. Optim. Theory Appl. 49 (1988) 325–333.
[5] G. Cohen, Nash equilibria: Gradient and decomposition algorithms, Large Scale Systems 12 (1987) 173–184.
[6] I. Ekeland, R. Temam, Convex Analysis and Variational Inequalities, North-Holland, Amsterdam, 1976.
[7] Y.M. Ermoliev, On the method of generalized stochastic gradients and quasi-Fejér sequences, Cybernetics 5 (1969) 208–220.
[8] F. Facchinei, J.S. Pang, Finite Dimensional Variational Inequalities and Complementarity Problems, Springer-Verlag, New York, 2003.
[9] A.N. Iusem, On some properties of paramonotone operators, J. Convex Anal. 5 (1998) 269–278.
[10] A.N. Iusem, B.F. Svaiter, M. Teboulle, Entropy-like proximal methods in convex programming, Math. Oper. Res. 19 (1994) 790–814.
[11] I. Konnov, A combined relaxation method for a class of nonlinear variational inequalities, Optimization 51 (2002) 127–143.
[12] B. Lemaire, Coupling optimization methods and variational convergence, in: K.H. Hoffmann, J.B. Hiriart-Urruty, C. Lemaréchal, J. Zowe (Eds.), Trends in Mathematical Optimization, in: Int. Ser. Numer. Math., Birkhäuser-Verlag, Basel, 1998, pp. 163–179.
[13] P. Marcotte, Application of Khobotov's algorithm to variational inequalities and network equilibrium problems, INFOR 29 (1991) 258–270.
[14] S. Makler-Scheimberg, V.H. Nguyen, J.J. Strodiot, Family of perturbation methods for variational inequalities, J. Optim. Theory Appl. 89 (1996) 423–452.
[15] S.B. Nadler, Multi-valued contraction mappings, Pacific J. Math. 30 (1969) 475–488.
[16] B.T. Polyak, Introduction to Optimization, Optimization Software, New York, 1987.
[17] P. Panagiotopoulos, G. Stavroulakis, New types of variational principles based on the notion of quasidifferentiability, Acta Mech. 94 (1994) 171–194.
[18] R.T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control Optim. 14 (1976) 877–898.
[19] G. Salmon, J.J. Strodiot, V.H. Nguyen, A bundle method for solving variational inequalities, SIAM J. Optim. 14 (2004) 869–893.
[20] M.V. Solodov, B.F. Svaiter, A new projection method for variational inequality problems, SIAM J. Control Optim. 37 (1999) 765–776.
[21] P. Tseng, Applications of a splitting algorithm to decomposition in convex programming and variational inequalities, SIAM J. Control Optim. 29 (1991) 119–138.