Journal of Mathematical Analysis and Applications 268, 334–343 (2002)
doi:10.1006/jmaa.2001.7896, available online at http://www.idealibrary.com
A Class of Projection Methods for General Variational Inequalities

Muhammad Aslam Noor
Etisalat College of Engineering, P.O. Box 980, Sharjah, United Arab Emirates
E-mail: [email protected]

and

Themistocles M. Rassias
Department of Mathematics, National Technical University of Athens, Zografou Campus, 15780 Athens, Greece
E-mail: [email protected]

Submitted by William F. Ames

Received November 27, 2001
In this paper, we consider and analyze a new class of projection methods for solving pseudomonotone general variational inequalities using the Wiener–Hopf equations technique. The modified methods converge for pseudomonotone operators. Our proof of convergence is very simple compared with those of other methods. The proposed methods include several known methods as special cases. © 2002 Elsevier Science (USA)
Key Words: variational inequalities; Wiener–Hopf equations; projection method; fixed point; convergence.
1. INTRODUCTION

The general variational inequality, introduced and studied by Noor [5] in 1988, is a useful and significant generalization of variational inequalities. It has been shown that a wide class of odd-order, nonsymmetric obstacle, unilateral, equilibrium, nonlinear nonconvex programming, and quasi-variational inequality problems can be studied in the general framework of general variational inequalities; see, for example, [6–9, 11–15, 26].
In recent years, several iterative projection-type methods have been suggested and analyzed by various authors by modifying the projection method. It has been shown that these modified methods converge for pseudomonotone operators, which improves the convergence criteria considerably. For recent advances in projection-type methods for solving variational inequalities and related complementarity problems, see Xiu and Zhang [25] and the references therein. Equally important is the area of the Wiener–Hopf equations, which has played an important and fundamental role in developing some powerful and efficient numerical techniques as well as in studying the sensitivity analysis of various classes of variational inequalities (inclusions). Recently, Noor et al. [15] suggested a modified projection-type method for solving variational inequalities by using the Wiener–Hopf equations technique. It has been shown that this modified method is as efficient and robust as the projection methods of Solodov and Svaiter [19] and Iusem and Svaiter [4]. We extend these ideas to suggest a new class of projection-type methods for solving general variational inequalities. Our proof of convergence is very simple compared with those of other methods. Since general variational inequalities include (quasi) variational inequalities, complementarity problems, tangent projection equations, and nonlinear optimization problems as special cases, the results obtained in this paper continue to hold for these problems.
2. PRELIMINARIES

Let $H$ be a real Hilbert space whose inner product and norm are denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively. Let $K$ be a closed convex set in $H$ and let $T, g : H \to H$ be nonlinear operators. We now consider the problem of finding $u \in H$, $g(u) \in K$ such that
$$\langle Tu,\, g(v) - g(u)\rangle \ge 0 \qquad \text{for all } g(v) \in K. \eqno(2.1)$$

Problem (2.1) is called the general variational inequality, which was introduced and studied by Noor [5] in 1988. It has been shown that a large class of unrelated odd-order and nonsymmetric obstacle, unilateral, contact, free, moving, and equilibrium problems arising in the regional, physical, mathematical, engineering, and applied sciences can be studied in the unified and general framework of the general variational inequalities (2.1); see [6–14] and the references therein.

For $g \equiv I$, where $I$ is the identity operator, problem (2.1) is equivalent to finding $u \in K$ such that
$$\langle Tu,\, v - u\rangle \ge 0 \qquad \text{for all } v \in K, \eqno(2.2)$$
which is known as the classical variational inequality, introduced and studied by Stampacchia [21] in 1964. For the recent state of the art, see [1–26] and the references therein. From now on, we assume that $g$ is onto $K$ unless otherwise specified.

If $N(u) = \{w \in H : \langle w,\, v - u\rangle \le 0 \text{ for all } v \in K\}$ is the normal cone to the convex set $K$ at $u$, then the general variational inequality (2.1) is equivalent to finding $u \in H$, $g(u) \in K$ such that
$$-Tu \in N(g(u)),$$
which is known as the general nonlinear equation. If $T^{tg}(u)$ is the projection of $-Tu$ at $g(u) \in K$, then it can be shown that the general variational inequality problem (2.1) is equivalent to finding $u \in H$, $g(u) \in K$ such that
$$T^{tg}(u) = 0,$$
which is known as the tangent projection equation; see [26]. This equivalence has been used to discuss the local convergence analysis of a wide class of iterative methods for solving general variational inequalities (2.1). If $K^* = \{u \in H : \langle u, v\rangle \ge 0 \text{ for all } v \in K\}$ is the polar (dual) cone of a convex cone $K$ in $H$, then problem (2.1) is equivalent to finding $u \in H$ such that
$$g(u) \in K, \qquad Tu \in K^*, \qquad \langle Tu,\, g(u)\rangle = 0, \eqno(2.3)$$
which is known as the general complementarity problem. For $g(u) = u - m(u)$, where $m$ is a point-to-point mapping, problem (2.3) is called the implicit (quasi) complementarity problem. If $g \equiv I$, then problem (2.3) is known as the generalized complementarity problem. Such problems have been studied extensively in the literature; see, for example, [2, 9, 12, 14, 16, 20].

We now recall the following well-known result and concepts.

Lemma 2.1. For a given $z \in H$, $u \in K$ satisfies the inequality
$$\langle u - z,\, v - u\rangle \ge 0 \qquad \text{for all } v \in K \eqno(2.4)$$
if and only if $u = P_K z$, where $P_K$ is the projection of $H$ onto $K$. Also, the projection operator $P_K$ is nonexpansive.

Related to the general variational inequalities, we now consider the problem of the Wiener–Hopf equations. To be more precise, let $Q_K = I - P_K$, where $I$ is the identity operator and $P_K$ is the projection of $H$ onto $K$.
For given nonlinear operators $T, g : H \to H$, consider the problem of finding $z \in H$ such that
$$\rho T g^{-1} P_K z + Q_K z = 0. \eqno(2.5)$$
Equations of the type (2.5) are called the general Wiener–Hopf equations, which were introduced and studied by Noor [6, 7]. For $g = I$, we obtain the original Wiener–Hopf equations, which were introduced and studied independently by Shi [18] and Robinson [17] in different settings. Using the projection operator technique, one can show that the variational inequalities are equivalent to the Wiener–Hopf equations. This equivalent alternative formulation has played a fundamental and important role in studying various aspects of variational inequalities. It has been shown that the Wiener–Hopf equations are more flexible and provide a unified framework for developing some efficient and powerful numerical techniques for solving variational inequalities and related optimization problems; see, for example, [8, 9, 11, 14, 17, 18] and the references therein.

Definition 2.1. For all $u, v \in H$, the operator $T : H \to H$ is said to be

(i) g-monotone if
$$\langle Tu - Tv,\, g(u) - g(v)\rangle \ge 0;$$

(ii) g-pseudomonotone if
$$\langle Tu,\, g(v) - g(u)\rangle \ge 0 \quad \text{implies} \quad \langle Tv,\, g(v) - g(u)\rangle \ge 0.$$
For g ≡ I, Definition 2.1 reduces to the usual definition of monotonicity and pseudomonotonicity of the operator T . Note that monotonicity implies pseudomonotonicity but the converse is not true; see [2, pp. 291–293].
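As a purely illustrative aside (not part of the original analysis), the following sketch checks the characterization (2.4) and the nonexpansiveness of $P_K$ from Lemma 2.1 in the simplest concrete setting, assuming $H = \mathbb{R}^n$ and $K$ the nonnegative orthant, so that $P_K$ is the componentwise positive part; all function names are ours, not from the paper.

```python
# Illustrative sketch only: H = R^n, K = nonnegative orthant, so P_K(z) = max(z, 0).
import numpy as np

def P_K(z):
    """Projection of z onto K = {x in R^n : x >= 0}."""
    return np.maximum(z, 0.0)

def Q_K(z):
    """Q_K = I - P_K, the operator appearing in the Wiener-Hopf equation (2.5)."""
    return z - P_K(z)

rng = np.random.default_rng(0)
z, z2 = rng.normal(size=4), rng.normal(size=4)
u = P_K(z)

# Lemma 2.1, inequality (2.4): <u - z, v - u> >= 0, checked on random points v in K.
for _ in range(100):
    v = np.abs(rng.normal(size=4))
    assert np.dot(u - z, v - u) >= -1e-12

# Nonexpansiveness of P_K: ||P_K z - P_K z2|| <= ||z - z2||.
assert np.linalg.norm(P_K(z) - P_K(z2)) <= np.linalg.norm(z - z2)
```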
3. MAIN RESULTS

In this section, we use the Wiener–Hopf equations technique to suggest and analyze a new class of iterative projection methods for solving the general variational inequalities (2.1). For this purpose, we need the following result, which can be proved by invoking Lemma 2.1.

Lemma 3.1 [5]. An element $u \in H$, $g(u) \in K$ is a solution of (2.1) if and only if $u \in H$ satisfies the relation
$$g(u) = P_K[g(u) - \rho Tu], \eqno(3.1)$$
where $\rho > 0$ is a constant and $g$ is onto $K$.
Lemma 3.1 implies that problems (2.1) and (3.1) are equivalent. This alternative formulation is very important from the numerical analysis point of view. This fixed-point formulation was used to suggest and analyze a number of iterative methods for general variational inequalities (2.1); see [6–14]. We use this alternative formulation to show that the general variational inequalities (2.1) are equivalent to the general Wiener–Hopf equations (2.5).

Lemma 3.2 [7]. The general variational inequality (2.1) has a unique solution $u \in H$, $g(u) \in K$ if and only if the general Wiener–Hopf equation (2.5) has a unique solution $z \in H$, where
$$g(u) = P_K z, \qquad z = g(u) - \rho Tu. \eqno(3.2)$$
We now define the residual vector $R(u)$ by the relation
$$R(u) = g(u) - P_K[g(u) - \rho Tu]. \eqno(3.3)$$

From Lemma 3.1, it is clear that $u \in H$, $g(u) \in K$ is a solution of (2.1) if and only if $u \in H$ is a zero of the equation $R(u) = 0$. Using Lemma 3.2, the general Wiener–Hopf equation (2.5) can be written as
$$g(u) - P_K[g(u) - \rho Tu] - \rho Tu + \rho T g^{-1} P_K[g(u) - \rho Tu] = R(u) - \rho Tu + \rho T g^{-1} P_K[g(u) - \rho Tu] = 0. \eqno(3.4)$$

Invoking Lemma 3.1, one can easily show that $u \in H$, $g(u) \in K$ is a solution of (2.1) if and only if $u \in H$, $g(u) \in K$ is a zero of equation (3.4). Now, for $\eta \in [0, 1]$, since $g(u) \in K$ and $P_K[g(u) - \rho Tu] \in K$, we have
$$g(w) = (1 - \eta)g(u) + \eta P_K[g(u) - \rho Tu] = g(u) - \eta R(u) \in K, \eqno(3.5)$$
since $K$ is a convex set. On the basis of the above observations, we can rewrite equation (3.4) as
$$g(u) = g(u) - \alpha d, \eqno(3.6)$$
where
$$d = \eta R(u) - \eta\rho Tu + \rho T g^{-1}[g(u) - \eta R(u)] \eqno(3.7)$$
and $\alpha$ is a positive stepsize. The fixed-point formulation (3.6) enables us to suggest and analyze a new class of modified projection methods for solving the general variational inequalities (2.1).
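As a small computational illustration (again, not part of the original analysis), the following sketch evaluates the residual (3.3) and the direction (3.7) under illustrative assumptions: $K$ is the nonnegative orthant in $\mathbb{R}^2$, $Tu = Au + b$ with $A$ positive definite, and $g(u) = 2u$, so that $g^{-1}(w) = w/2$; the data and names below are ours.

```python
# Illustrative sketch only; the data A, b and the map g below are assumptions, not from the paper.
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # positive definite, so T is g-monotone for g(u) = 2u
b = np.array([-1.0, 0.5])

T = lambda u: A @ u + b             # T(u) = A u + b
g = lambda u: 2.0 * u               # a simple invertible g
g_inv = lambda w: 0.5 * w           # g^{-1}
P_K = lambda z: np.maximum(z, 0.0)  # projection onto the nonnegative orthant

def R(u, rho):
    """Residual (3.3): R(u) = g(u) - P_K[g(u) - rho*T(u)]."""
    return g(u) - P_K(g(u) - rho * T(u))

def d(u, rho, eta):
    """Direction (3.7): d = eta*R(u) - eta*rho*T(u) + rho*T(g^{-1}[g(u) - eta*R(u)])."""
    Ru = R(u, rho)
    return eta * Ru - eta * rho * T(u) + rho * T(g_inv(g(u) - eta * Ru))

u0 = np.array([1.0, 1.0])
print(R(u0, rho=0.5))               # R vanishes exactly at a solution of (2.1)
print(d(u0, rho=0.5, eta=0.5))
```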
Algorithm 3.1. For a given $u_0 \in H$, compute the approximate solution $u_{n+1}$ by the iterative scheme
$$g(u_{n+1}) = P_K[g(u_n) - \alpha_n d_n], \qquad n = 0, 1, 2, \ldots, \eqno(3.8)$$
where $\eta_n = a^{m_k}$, with $a \in (0, 1)$ and $m_k$ the smallest nonnegative integer $m$ such that
$$\rho\langle Tu_n - T g^{-1}[g(u_n) - a^m R(u_n)],\, R(u_n)\rangle \le \sigma\|R(u_n)\|^2, \eqno(3.9)$$
$$\alpha_n = \frac{(1 - \sigma)\eta_n\|R(u_n)\|^2}{\|d_n\|^2}, \eqno(3.10)$$
$$d_n = \eta_n R(u_n) - \rho\eta_n Tu_n + \rho T g^{-1}[g(u_n) - \eta_n R(u_n)], \eqno(3.11)$$
and $\sigma \in (0, 1)$ is a constant.

We now discuss some special cases of Algorithm 3.1.

Case I. Note that, for $\eta_n = 1$, Algorithm 3.1 reduces to:
Algorithm 3.2. For a given $u_0 \in H$, compute the approximate solution $u_{n+1}$ by the iterative scheme
$$g(u_{n+1}) = P_K[g(u_n) - \alpha_n d_n], \qquad n = 0, 1, 2, \ldots,$$
where
$$\rho\langle Tu_n - T g^{-1} P_K[g(u_n) - \rho Tu_n],\, R(u_n)\rangle \le \sigma\|R(u_n)\|^2,$$
$$\alpha_n = \frac{(1 - \sigma)\|R(u_n)\|^2}{\|d_n\|^2},$$
$$d_n = R(u_n) - \rho Tu_n + \rho T g^{-1} P_K[g(u_n) - \rho Tu_n],$$
which appears to be a new one.

Case II. For $\eta_n = 0$, Algorithm 3.1 collapses to:
Algorithm 3.3. For a given $u_0 \in H$, compute $u_{n+1}$ by the iterative scheme
$$g(u_{n+1}) = P_K[g(u_n) - \rho Tu_n], \qquad n = 0, 1, 2, \ldots,$$
which is known as the projection method; see [5]. For the local convergence analysis of Algorithms 3.2 and 3.3, see [26].

Case III. For $g \equiv I$, where $I$ is the identity operator, we obtain the corresponding algorithms for the classical variational inequalities (2.2), which were considered and studied by Solodov and Tseng [20], He [3], and Noor et al. [15]. This shows that Algorithm 3.1 is a unifying one and includes various known algorithms as special cases.
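To make the steps of Algorithm 3.1 concrete, the following sketch implements the scheme (3.8) with the Armijo-type rule (3.9), the stepsize (3.10), and the direction (3.11) on a small illustrative problem. The problem data ($K$ the nonnegative orthant, $Tu = Au + b$, $g(u) = 2u$) and the parameter values are our assumptions, chosen only so that the sketch runs; they are not taken from the paper.

```python
# Illustrative sketch of Algorithm 3.1; problem data and parameters are assumptions.
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])           # positive definite, so T is g-monotone (hence g-pseudomonotone)
b = np.array([-1.0, 0.5])
T = lambda u: A @ u + b
g = lambda u: 2.0 * u                # invertible g with g^{-1}(w) = w / 2
g_inv = lambda w: 0.5 * w
P_K = lambda z: np.maximum(z, 0.0)   # K = nonnegative orthant

rho, sigma, a = 0.5, 0.5, 0.5        # rho > 0, sigma in (0, 1), a in (0, 1)

def R(u):
    """Residual (3.3)."""
    return g(u) - P_K(g(u) - rho * T(u))

u = np.array([1.0, 1.0])             # u_0
for n in range(100):
    Ru = R(u)
    if np.linalg.norm(Ru) < 1e-10:
        break
    # Armijo-type search (3.9): eta_n = a**m with m the smallest nonnegative integer such that
    # rho <T(u_n) - T(g^{-1}[g(u_n) - a**m R(u_n)]), R(u_n)> <= sigma ||R(u_n)||^2.
    eta = 1.0
    while rho * np.dot(T(u) - T(g_inv(g(u) - eta * Ru)), Ru) > sigma * np.dot(Ru, Ru):
        eta *= a
    # Direction (3.11) and stepsize (3.10).
    d = eta * Ru - rho * eta * T(u) + rho * T(g_inv(g(u) - eta * Ru))
    alpha = (1.0 - sigma) * eta * np.dot(Ru, Ru) / np.dot(d, d)
    # Update (3.8): g(u_{n+1}) = P_K[g(u_n) - alpha_n d_n], i.e. u_{n+1} = g^{-1}(...).
    u = g_inv(P_K(g(u) - alpha * d))

print(u, np.linalg.norm(R(u)))       # u approximates a solution of (2.1)
```

With $g \equiv I$ the same loop reduces to a projection-type method for the classical variational inequality (2.2), in line with Case III above.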
For the convergence analysis of Algorithm 3.1, we need the following result. The analysis is in the spirit of Sun [22] and Noor [13].

Lemma 3.3. Let $\bar u \in H$ be a solution of (2.1). If $T : H \to H$ is g-pseudomonotone, then
$$\langle g(u) - g(\bar u),\, d\rangle \ge (1 - \sigma)\eta\|R(u)\|^2 \qquad \text{for all } u \in H. \eqno(3.12)$$

Proof. Let $\bar u \in H$ be a solution of the general variational inequality (2.1). Then
$$\langle T\bar u,\, g(v) - g(\bar u)\rangle \ge 0 \qquad \text{for all } g(v) \in K$$
implies
$$\langle Tv,\, g(v) - g(\bar u)\rangle \ge 0, \eqno(3.13)$$
since the operator $T$ is g-pseudomonotone. Taking $g(v) = g(u) - \eta R(u)$ in (3.13), we have
$$\langle T g^{-1}[g(u) - \eta R(u)],\, g(u) - \eta R(u) - g(\bar u)\rangle \ge 0. \eqno(3.14)$$

Consider
$$\begin{aligned}
\langle g(u) - g(\bar u),\, \rho T g^{-1}[g(u) - \eta R(u)]\rangle
&= \rho\langle g(u) - [g(u) - \eta R(u)],\, T g^{-1}[g(u) - \eta R(u)]\rangle \\
&\quad + \rho\langle g(u) - \eta R(u) - g(\bar u),\, T g^{-1}[g(u) - \eta R(u)]\rangle \\
&\ge \rho\eta\langle R(u),\, T g^{-1}[g(u) - \eta R(u)]\rangle \qquad \text{(using (3.14))} \\
&= -\rho\eta\langle R(u),\, Tu - T g^{-1}[g(u) - \eta R(u)]\rangle + \rho\eta\langle R(u),\, Tu\rangle \\
&\ge -\sigma\eta\|R(u)\|^2 + \eta\rho\langle R(u),\, Tu\rangle,
\end{aligned} \eqno(3.15)$$
where we have used (3.9).

Setting $z = g(u) - \rho Tu$, $u = P_K[g(u) - \rho Tu]$, and $v = g(\bar u)$ in (2.4), we have
$$\langle P_K[g(u) - \rho Tu] - g(u) + \rho Tu,\, g(\bar u) - P_K[g(u) - \rho Tu]\rangle \ge 0,$$
which implies, using (3.3),
$$\langle -R(u) + \rho Tu,\, g(\bar u) - g(u) + R(u)\rangle \ge 0,$$
from which we obtain
$$\langle g(u) - g(\bar u),\, R(u) - \rho Tu\rangle \ge \|R(u)\|^2 - \rho\langle R(u),\, Tu\rangle. \eqno(3.16)$$

Multiplying (3.16) by $\eta$ and adding the resulting inequality to (3.15), we obtain
$$\langle g(u) - g(\bar u),\, \eta R(u) - \eta\rho Tu + \rho T g^{-1}[g(u) - \eta R(u)]\rangle \ge (1 - \sigma)\eta\|R(u)\|^2,$$
that is,
$$\langle g(u) - g(\bar u),\, d\rangle \ge (1 - \sigma)\eta\|R(u)\|^2,$$
the required result.
Lemma 3.4. Let $\bar u \in H$ be a solution of (2.1) and let $u_{n+1}$ be the approximate solution obtained from Algorithm 3.1. Then
$$\|g(u_{n+1}) - g(\bar u)\|^2 \le \|g(u_n) - g(\bar u)\|^2 - \frac{(1 - \sigma)^2\eta_n^2\|R(u_n)\|^4}{\|d_n\|^2}. \eqno(3.17)$$

Proof. From (3.8), (3.10), (3.12), and the nonexpansiveness of $P_K$ (note that $g(\bar u) \in K$, so $g(\bar u) = P_K g(\bar u)$), we have
$$\begin{aligned}
\|g(u_{n+1}) - g(\bar u)\|^2
&\le \|g(u_n) - g(\bar u) - \alpha_n d_n\|^2 \\
&\le \|g(u_n) - g(\bar u)\|^2 - 2\alpha_n\langle g(u_n) - g(\bar u),\, d_n\rangle + \alpha_n^2\|d_n\|^2 \\
&\le \|g(u_n) - g(\bar u)\|^2 - 2\alpha_n(1 - \sigma)\eta_n\|R(u_n)\|^2 + \alpha_n^2\|d_n\|^2 \\
&\le \|g(u_n) - g(\bar u)\|^2 - \frac{(1 - \sigma)^2\eta_n^2\|R(u_n)\|^4}{\|d_n\|^2},
\end{aligned}$$
the required result.

Theorem 3.1. Let $u_{n+1}$ be the approximate solution obtained from Algorithm 3.1 and let $\bar u \in H$ be the solution of (2.1). If $H$ is a finite-dimensional space and $g$ is injective, then $\lim_{n\to\infty} u_n = \bar u$.

Proof. Let $u^* \in H$ be a solution of (2.1). Then, from (3.17), it follows that the sequence $\{u_n\}$ is bounded and
$$\sum_{n=0}^{\infty} \frac{(1 - \sigma)^2\eta_n^2\|R(u_n)\|^4}{\|d_n\|^2} \le \|g(u_0) - g(u^*)\|^2,$$
which implies that either
$$\lim_{n\to\infty}\|R(u_n)\| = 0 \eqno(3.18)$$
or
$$\lim_{n\to\infty}\eta_n = 0. \eqno(3.19)$$
Assume that (3.18) holds. Let $\bar u$ be a cluster point of $\{u_n\}$ and let the subsequence $\{u_{n_i}\}$ of the sequence $\{u_n\}$ converge to $\bar u$. Since $R$ is continuous, it follows that $R(\bar u) = \lim_{i\to\infty} R(u_{n_i}) = 0$, which implies that $\bar u$ is a solution of (2.1), by invoking Lemma 3.1, and
$$\|g(u_{n+1}) - g(\bar u)\|^2 \le \|g(u_n) - g(\bar u)\|^2. \eqno(3.20)$$

Thus the sequence $\{u_n\}$ has exactly one cluster point and, consequently, $\lim_{n\to\infty} g(u_n) = g(\bar u)$. Since $g$ is injective, it follows that $\lim_{n\to\infty} u_n = \bar u \in H$, which satisfies the general variational inequality (2.1).

Assume now that (3.19) holds, that is, $\lim_{n\to\infty}\eta_n = 0$. Since $m_k$ is the smallest nonnegative integer for which (3.9) holds, the inequality (3.9) fails with $\eta_n/a$ in place of $\eta_n$; that is,
$$\sigma\|R(u_n)\|^2 < \rho\langle Tu_n - T g^{-1}[g(u_n) - (\eta_n/a)R(u_n)],\, R(u_n)\rangle. \eqno(3.21)$$
Let $\bar u$ be a cluster point of $\{u_n\}$ and let $\{u_{n_i}\}$ be the corresponding subsequence of $\{u_n\}$ converging to $\bar u$. Taking the limit in (3.21) and using $\lim_{n\to\infty}\eta_n = 0$, we have
$$\sigma\|R(\bar u)\|^2 \le 0,$$
which implies that $R(\bar u) = 0$; that is, $\bar u \in H$ is a solution of (2.1), by invoking Lemma 3.1, and (3.20) holds. Repeating the above arguments, we conclude that $\lim_{n\to\infty} u_n = \bar u$.

ACKNOWLEDGMENTS

The authors thank Professors Y. J. Wang and N. H. Xiu for their constructive and useful comments on an earlier version of this paper.
REFERENCES

1. D. P. Bertsekas and J. Tsitsiklis, "Parallel and Distributed Computation: Numerical Methods," Prentice Hall, Englewood Cliffs, NJ, 1989.
2. F. Giannessi and A. Maugeri, "Variational Inequalities and Network Equilibrium Problems," Plenum, New York, 1995.
3. B. S. He, A class of projection and contraction methods for variational inequalities, Appl. Math. Optim. 35 (1997), 69–76.
4. A. N. Iusem and B. F. Svaiter, A variant of Korpelevich's method for variational inequalities with a new search strategy, Optimization 42 (1997), 309–321.
5. M. Aslam Noor, General variational inequalities, Appl. Math. Lett. 1 (1988), 119–121.
6. M. Aslam Noor, Generalized Wiener–Hopf equations and nonlinear quasi variational inequalities, Panamer. Math. J. 2 (1992), 51–70.
7. M. Aslam Noor, Wiener–Hopf equations and variational inequalities, J. Optim. Theory Appl. 79 (1993), 197–206.
8. M. Aslam Noor, Some recent advances in variational inequalities I: Basic concepts, New Zealand J. Math. 26 (1997), 53–80.
9. M. Aslam Noor, Some recent advances in variational inequalities II: Other concepts, New Zealand J. Math. 26 (1997), 229–255.
10. M. Aslam Noor, An extragradient method for general monotone variational inequalities, Adv. Nonlinear Variational Inequalities 2 (1999), 25–31.
11. M. Aslam Noor, Wiener–Hopf equations techniques for variational inequalities, Korean J. Comput. Appl. Math. 7 (2000), 581–599.
12. M. Aslam Noor, New approximation schemes for general variational inequalities, J. Math. Anal. Appl. 251 (2000), 217–229.
13. M. Aslam Noor, Modified projection method for pseudomonotone variational inequalities, Appl. Math. Lett. 15 (2002), to appear.
14. M. Aslam Noor, K. Inayat Noor, and Th. M. Rassias, Some aspects of variational inequalities, J. Comput. Appl. Math. 47 (1993), 285–312.
15. M. Aslam Noor, Y. J. Wang, and N. H. Xiu, Some projection methods for variational inequalities, preprint.
16. M. Patriksson, "Nonlinear Programming and Variational Inequalities: A Unified Approach," Kluwer Academic, Dordrecht, 1998.
17. S. M. Robinson, Normal maps induced by linear transformations, Math. Oper. Res. 17 (1992), 691–714.
18. P. Shi, Equivalence of variational inequalities with Wiener–Hopf equations, Proc. Amer. Math. Soc. 111 (1991), 339–346.
19. M. V. Solodov and B. F. Svaiter, A new projection method for variational inequality problems, SIAM J. Control Optim. 37 (1999), 765–776.
20. M. V. Solodov and P. Tseng, Modified projection-type methods for monotone variational inequalities, SIAM J. Control Optim. 34 (1996), 1814–1830.
21. G. Stampacchia, Formes bilinéaires coercitives sur les ensembles convexes, C. R. Acad. Sci. Paris 258 (1964), 4413–4416.
22. D. Sun, A class of iterative methods for solving nonlinear projection equations, J. Optim. Theory Appl. 91 (1996), 123–140.
23. Y. J. Wang, N. H. Xiu, and C. Y. Wang, Unified framework of projection methods for pseudomonotone variational inequalities, J. Optim. Theory Appl. 111 (2001), 643–658.
24. Y. J. Wang, N. H. Xiu, and C. Y. Wang, A new version of extragradient projection method for variational inequalities, Comput. Math. Appl. 42 (2001), 969–979.
25. N. H. Xiu and J. Zhang, Some recent advances in projection-type methods for variational inequalities, preprint.
26. N. Xiu, J. Zhang, and M. Aslam Noor, Tangent projection equations and general variational equalities, J. Math. Anal. Appl. 258 (2001), 755–762.