A self-adaptive LQP method for constrained variational inequalities


Applied Mathematics and Computation 189 (2007) 1586–1600 www.elsevier.com/locate/amc

Xiao-Ling Fu a, Abdellah Bnouhachem b,*,1

a Department of Mathematics, Nanjing University, Nanjing 210093, PR China
b School of Management Science and Engineering, Nanjing University, Nanjing 210093, PR China

* Corresponding author. E-mail addresses: [email protected] (X.-L. Fu), [email protected] (A. Bnouhachem).
1 This author was supported by MOEC grant 20060284001 and by NSFC grants 70371019 and 10571083.

Abstract

In this paper, we present a logarithmic-quadratic proximal (LQP) type prediction–correction method for solving constrained variational inequalities VI(S, f), where S is a convex set with linear constraints. The computational load in each iteration is quite small. However, the number of iterations depends significantly on a parameter that balances the primal and dual variables. We then propose a self-adaptive prediction–correction method that adjusts this scalar parameter automatically. Under certain conditions, the global convergence of the proposed method is established. In order to demonstrate the efficiency of the proposed method, we provide numerical results for a convex nonlinear programming problem and for traffic equilibrium problems.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Proximal point algorithm; Variational inequality; Prediction–correction; Traffic equilibrium problems

1. Introduction

Let $A \in \mathbb{R}^{n\times m}$, $b \in \mathbb{R}^m$ and let $f$ be a continuous mapping from $\mathbb{R}^n$ into itself. In this paper we focus on the structured variational inequality: find $x^* \in S$ such that

    $(x - x^*)^T f(x^*) \ge 0 \quad \forall x \in S$,    (1.1)

with linear constraints

    $S = \{x \in \mathbb{R}^n \mid A^T x \le b,\ x \ge 0\}$.    (1.2)

By attaching the Lagrangian multiplier vector $y \in \mathbb{R}^m_+$ to the linear constraints $A^T x - b \le 0$, we obtain

    (VI(X, F))  Find $u^* \in X$ such that $(u - u^*)^T F(u^*) \ge 0 \quad \forall u \in X$,    (1.3)



where

    $u = \begin{pmatrix} x \\ y \end{pmatrix}$,  $F(u) = \begin{pmatrix} f(x) + Ay \\ -A^T x + b \end{pmatrix}$  and  $X = \mathbb{R}^n_+ \times \mathbb{R}^m_+$.    (1.4)

It is well known that solving VI(X, F) is equivalent to finding a root of the following maximal monotone operator:

    $T(u) := F(u) + N_X(u)$,    (1.5)

where $N_X(\cdot)$ is the normal cone operator to $X$, defined by

    $N_X(x) = \begin{cases} \{y : y^T(v - x) \le 0\ \ \forall v \in X\} & \text{if } x \in X, \\ \emptyset & \text{otherwise.} \end{cases}$

A classical method for finding a root of (1.5) is the proximal point algorithm (PPA), proposed first by Martinet; see, e.g., [8]:

    (PPA)  $0 \in \beta_k T(u) + (u - u^k)$.    (1.6)

Recently, a significant number of interior proximal methods have been developed by replacing $u - u^k$ with $\nabla_u D(u, u^k)$, where $D(\cdot,\cdot)$ is a distance-like function; see, e.g., [5,10,6,1]. The logarithmic-quadratic function was introduced by Auslender et al.; see, e.g., [1–4]. For a given $u^k \in \mathbb{R}^n_{++}(:= \mathrm{int}\,\mathbb{R}^n_+) \times \mathbb{R}^m_{++}(:= \mathrm{int}\,\mathbb{R}^m_+)$ and $\beta_k > 0$, when the LQP method is applied to solve VI(X, F), it takes the solution of the following inclusion as the new iterate $u^{k+1}$:

    (LQP)  $0 \in \beta_k T(u) + \nabla_u D(u, u^k)$,    (1.7)

where

    $\nabla_u D(u, u^k) = \begin{pmatrix} \nabla_x D(x, x^k) \\ \nabla_y D(y, y^k) \end{pmatrix} = \begin{pmatrix} (x - x^k) + \mu(x^k - X_k^2 x^{-1}) \\ \nu(y - y^k) + \nu\mu(y^k - Y_k^2 y^{-1}) \end{pmatrix}$,    (1.8)

$X_k = \mathrm{diag}(x_1^k, x_2^k, \ldots, x_n^k)$, $Y_k = \mathrm{diag}(y_1^k, y_2^k, \ldots, y_m^k)$, $x^{-1}$ ($y^{-1}$) is the $n$-vector ($m$-vector) whose $j$th element is $1/x_j$ ($1/y_j$), $\mu \in (0,1)$ is the logarithmic proximal parameter, $\nu > 0$, and

    $D(u, u^k) = \begin{cases} \frac{1}{2}\|x - x^k\|^2 + \mu\sum_{j=1}^{n}\big((x_j^k)^2\log\frac{x_j^k}{x_j} + x_j x_j^k - (x_j^k)^2\big) \\ \quad + \nu\Big[\frac{1}{2}\|y - y^k\|^2 + \mu\sum_{j=1}^{m}\big((y_j^k)^2\log\frac{y_j^k}{y_j} + y_j y_j^k - (y_j^k)^2\big)\Big] & \text{if } u \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++}, \\ +\infty & \text{otherwise.} \end{cases}$    (1.9)

It should be noted that $D(\cdot,\cdot)$ is a specific distance-like function, and $D(x, y) = 0$ if and only if $x = y$. The first term of $D(u, u^k)$ keeps the new iterate from being too far away from $u^k$, and the second term forces the new iterate $u^{k+1}$ to stay in $\mathbb{R}^n_{++} \times \mathbb{R}^m_{++}$. Since $u^{k+1} = (x^{k+1}, y^{k+1}) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++}$, the solution of (1.7) can be obtained by finding the positive solution of the following system:

    $\beta_k(f(x) + Ay) + x - (1-\mu)x^k - \mu X_k^2 x^{-1} = 0$,    (1.10a)
    $\beta_k(-A^T x + b) + \nu y - \nu(1-\mu)y^k - \nu\mu Y_k^2 y^{-1} = 0$.    (1.10b)

Note that (1.10a) and (1.10b) form a system of nonlinear equations which is not easy to solve. Moreover, x and y are coupled and have to be solved for simultaneously. Very recently, He et al. [7] proposed an LQP-based prediction–correction method for nonlinear complementarity problems. Inspired by the techniques developed by He et al. [7], we introduce an LQP-based prediction–correction method which does not suffer from the above difficulty and makes the LQP method more practical. Each iteration of the new method contains a prediction and a correction: the predictor is obtained via solving the LQP system approximately under a significantly relaxed accuracy criterion, and the new iterate is computed directly by an explicit formula derived from the


original LQP method. In addition, since a self-adaptive adjustment rule is necessary in practice, this paper describes a self-adaptive rule which provides a significant improvement in the iteration process.

We give some notation which is needed in the rest of the paper. Let $G$ denote a symmetric positive definite matrix. Then $\|v\|_G$ denotes $(v^T G v)^{1/2}$. Throughout this paper we make the following standard assumptions:

Assumption A:

A1. $f(x)$ is a continuous and monotone mapping with respect to $\mathbb{R}^n_+$, i.e.,

    $(x - \tilde{x})^T(f(x) - f(\tilde{x})) \ge 0 \quad \forall x, \tilde{x} \in \mathbb{R}^n_+$.

A2. The solution set of VI(X, F), denoted by $X^*$, is nonempty.

Note that $F(u)$ is monotone whenever $f(x)$ is monotone.

2. The proposed algorithm

In this section, we present our method for solving (1.1) and (1.2). Indeed, finding the root of the nonlinear equations (1.10a) and (1.10b) is not an easy task. To handle this we prefer solving (1.10a) and (1.10b) approximately rather than exactly. First, the variable x is replaced by the current $x^k$ in (1.10b) to obtain y, denoted by $\tilde{y}^k$. That is, we find $\tilde{y}^k \in \mathbb{R}^m_{++}$ such that

    $\beta_k(-A^T x^k + b) + \nu\tilde{y}^k - \nu(1-\mu)y^k - \nu\mu Y_k^2(\tilde{y}^k)^{-1} = 0$.    (2.1)

In order to use the new information as soon as possible, we do not use $y^k$ in (1.10a); instead we use this new $\tilde{y}^k$ in (1.10a) to get an approximation of x, denoted by $\tilde{x}^k$:

    $\beta_k(f(x^k) + A\tilde{y}^k) + \tilde{x}^k - (1-\mu)x^k - \mu X_k^2(\tilde{x}^k)^{-1} = 0$.    (2.2)

We thus obtain the predictor $\tilde{u}^k = (\tilde{x}^k, \tilde{y}^k)$, and then $u^{k+1}$ is obtained by correcting the predictor $\tilde{u}^k$ with a small computational load. We now describe the new method in detail.

Algorithm: A self-adaptive LQP method

Prediction step: For a given $u^k = (x^k, y^k) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++}$, $\beta_k > 0$, $\mu \in (0,1)$ and $\nu > 0$, the predictor $\tilde{u}^k = (\tilde{x}^k, \tilde{y}^k) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++}$ is obtained via solving the following system:

    (Prediction)
    $\beta_k(f(x) + Ay) + x - (1-\mu)x^k - \mu X_k^2 x^{-1} =: \xi_x^k \approx 0$,
    $\beta_k(-A^T x + b) + \nu y - \nu(1-\mu)y^k - \nu\mu Y_k^2 y^{-1} =: \xi_y^k \approx 0$,    (2.3)

where

    $\|G^{-1}\xi^k\|_G^2 \le \frac{1-\mu}{1+\mu}\,\eta^2\,\|u^k - \tilde{u}^k\|_G^2$,  $\eta \in (0,1)$,    (2.4)

    $\xi^k = \begin{pmatrix} \xi_x^k \\ \xi_y^k \end{pmatrix}$    (2.5)

and

    $G = \begin{pmatrix} (1+\mu)I_n & 0 \\ 0 & \nu(1+\mu)I_m \end{pmatrix}$.    (2.6)

Correction step: Take the new iterate $u^{k+1}(s)$, called the corrector, as the solution of the following system:

    (Correction)
    $s\beta_k(f(\tilde{x}^k) + A\tilde{y}^k) + x - (1-\mu)x^k - \mu X_k^2 x^{-1} = 0$,
    $s\beta_k(-A^T\tilde{x}^k + b) + \nu y - \nu(1-\mu)y^k - \mu\nu Y_k^2 y^{-1} = 0$,    (2.7)

where $s$ is a positive scalar. We will discuss how to choose $s$ in Section 3.


Adjusting $\nu$ to balance $\|\xi_x^k\|$ and $\|\xi_y^k\|/\sqrt{\nu}$:

    $\nu_{k+1} := \begin{cases} \nu_k \cdot 0.5 & \text{if } \|\xi_x^k\| > \|\xi_y^k\|/\sqrt{\nu}, \\ \nu_k \cdot 2 & \text{if } \|\xi_y^k\|/\sqrt{\nu} > \|\xi_x^k\|, \\ \nu_k & \text{otherwise.} \end{cases}$    (2.8)

The parameter $\nu$ plays an important role in the numerical experiments.

Remark 2.1. The main task of the prediction is to find an approximate solution of the following equations:

    $\beta_k(f(x) + A\tilde{y}^k) + x - (1-\mu)x^k - \mu X_k^2 x^{-1} = 0$,    (2.9a)
    $\beta_k(-A^T\tilde{x}^k + b) + \nu y - \nu(1-\mu)y^k - \mu\nu Y_k^2 y^{-1} = 0$.    (2.9b)

We can choose a suitable $\beta_k > 0$ and set the exact solution of

    $\beta_k(-A^T x^k + b) + \nu y - \nu(1-\mu)y^k - \mu\nu Y_k^2 y^{-1} = 0$,    (2.10)

denoted by $\tilde{y}^k$, as the approximate solution of (2.9b). Then set the exact solution of

    $\beta_k(f(x^k) + A\tilde{y}^k) + x - (1-\mu)x^k - \mu X_k^2 x^{-1} = 0$,    (2.11)

denoted by $\tilde{x}^k$, as the approximate solution of (2.9a). It follows from (2.3), (2.10) and (2.11) that

    $\xi^k = \begin{pmatrix} \xi_x^k \\ \xi_y^k \end{pmatrix} = \begin{pmatrix} \beta_k(f(\tilde{x}^k) - f(x^k)) \\ \beta_k(-A^T\tilde{x}^k + A^T x^k) \end{pmatrix} = \beta_k\begin{pmatrix} f(\tilde{x}^k) - f(x^k) \\ A^T(x^k - \tilde{x}^k) \end{pmatrix}$.

Note that the positive solutions of (2.10) and (2.11) can be obtained explicitly by

    $\tilde{y}_i^k = \Big(p_i^k + \sqrt{(p_i^k)^2 + 4\mu(y_i^k)^2}\Big)\big/2$,    (2.12)
    $\tilde{x}_i^k = \Big(s_i^k + \sqrt{(s_i^k)^2 + 4\mu(x_i^k)^2}\Big)\big/2$,    (2.13)

with

    $p^k = (1-\mu)y^k - \beta_k(-A^T x^k + b)/\nu$,  $s^k = (1-\mu)x^k - \beta_k(f(x^k) + A\tilde{y}^k)$.

It is easy to verify that $\tilde{y}^k > 0$ and $\tilde{x}^k > 0$ whenever $y^k > 0$ and $x^k > 0$. Criterion (2.4) can be satisfied by choosing a suitable $\beta_k$. For example, when $f$ is Lipschitz continuous on $\mathbb{R}^n_+$ with modulus $L$,

    $\|G^{-1}\xi^k\|_G^2 = \beta_k^2\Big(\frac{1}{1+\mu}\|f(x^k) - f(\tilde{x}^k)\|^2 + \frac{1}{\nu(1+\mu)}\|A^T(x^k - \tilde{x}^k)\|^2\Big)$
    $\le \beta_k^2\Big(\frac{1}{1+\mu}L^2 + \frac{1}{\nu(1+\mu)}\|A^T A\|\Big)\|x^k - \tilde{x}^k\|^2 \le \frac{\beta_k^2}{(1+\mu)^2}\Big(L^2 + \frac{1}{\nu}\|A^T A\|\Big)\|u^k - \tilde{u}^k\|_G^2$.

Therefore, criterion (2.4) is satisfied when $\beta_k \le \eta\sqrt{1-\mu}\Big/\sqrt{\tfrac{1}{1+\mu}\big(L^2 + \tfrac{1}{\nu}\|A^T A\|\big)}$.

Remark 2.2. In the correction step, $x^{k+1}(s)$ and $y^{k+1}(s)$ can be obtained independently by

    $x_i^{k+1}(s) = \Big(s_i^k(s) + \sqrt{(s_i^k(s))^2 + 4\mu(x_i^k)^2}\Big)\big/2$,    (2.14)
    $y_i^{k+1}(s) = \Big(p_i^k(s) + \sqrt{(p_i^k(s))^2 + 4\mu(y_i^k)^2}\Big)\big/2$,    (2.15)

where

    $s^k(s) = (1-\mu)x^k - s\beta_k(f(\tilde{x}^k) + A\tilde{y}^k)$,  $p^k(s) = (1-\mu)y^k - s\beta_k(-A^T\tilde{x}^k + b)/\nu$.    (2.16)
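The updates (2.12)–(2.16) all amount to taking the positive root of a componentwise quadratic $z^2 - p_i z - \mu(z_i^k)^2 = 0$. The following minimal NumPy sketch (function and variable names are ours, not the paper's) illustrates how the predictor and the corrector can be evaluated once $f$, $A$, $b$ and the parameters are available.

```python
import numpy as np

def positive_root(p, mu, z_old):
    """Componentwise positive root of z**2 - p*z - mu*z_old**2 = 0,
    cf. (2.12)-(2.15); it stays positive whenever z_old > 0."""
    return 0.5 * (p + np.sqrt(p**2 + 4.0 * mu * z_old**2))

def lqp_predictor(f, A, b, x, y, beta, mu, nu):
    """Predictor (2.10)-(2.13): first y-tilde from the current x, then x-tilde."""
    p = (1.0 - mu) * y - beta * (-A.T @ x + b) / nu      # cf. p^k
    y_tld = positive_root(p, mu, y)
    s = (1.0 - mu) * x - beta * (f(x) + A @ y_tld)       # cf. s^k
    x_tld = positive_root(s, mu, x)
    return x_tld, y_tld

def lqp_corrector(f, A, b, x, y, x_tld, y_tld, beta, mu, nu, s_step):
    """Corrector (2.14)-(2.16) with step length s_step."""
    s = (1.0 - mu) * x - s_step * beta * (f(x_tld) + A @ y_tld)
    p = (1.0 - mu) * y - s_step * beta * (-A.T @ x_tld + b) / nu
    return positive_root(s, mu, x), positive_root(p, mu, y)
```

Because each component solves a quadratic with a strictly positive discriminant, the iterates remain strictly positive, which is exactly the interior property noted below.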


Thus the computational load of the correction step is quite small. If $x^k > 0$ and $y^k > 0$, then it follows that $\tilde{x}^k > 0$, $\tilde{y}^k > 0$ and $x^{k+1} > 0$, $y^{k+1} > 0$. Therefore, the sequences $\{x^k\}$ and $\{y^k\}$ always lie in $\mathbb{R}^n_{++}$ and $\mathbb{R}^m_{++}$, respectively.

3. Contractive property of the generated sequence

The following lemma is inspired by Lemma 2 of [2]. For completeness, a proof is provided.

Lemma 3.1. For given $u^k = (x^k, y^k) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++}$ and $q = (q_x, q_y) \in \mathbb{R}^n \times \mathbb{R}^m$, let $u = (x, y)$ be the positive solution of the following equations:

    $q_x + x - (1-\mu)x^k - \mu X_k^2 x^{-1} = 0$,
    $q_y + \nu y - \nu(1-\mu)y^k - \nu\mu Y_k^2 y^{-1} = 0$.    (3.1)

Then for any $w \in X$, we have

    $(w - u)^T q \ge \frac{1}{2}\big(\|u - w\|_G^2 - \|u^k - w\|_G^2\big) + \frac{1-\mu}{2}\|x - x^k\|^2 + \frac{\nu(1-\mu)}{2}\|y - y^k\|^2$,    (3.2)

where $G$ is defined by (2.6).

Proof. Let $w = (w_x, w_y)$. We will show that

    $(w_x - x)^T q_x \ge \frac{1+\mu}{2}\big(\|x - w_x\|^2 - \|x^k - w_x\|^2\big) + \frac{1-\mu}{2}\|x - x^k\|^2$    (3.3)

and

    $(w_y - y)^T q_y \ge \frac{\nu(1+\mu)}{2}\big(\|y - w_y\|^2 - \|y^k - w_y\|^2\big) + \frac{\nu(1-\mu)}{2}\|y - y^k\|^2$.    (3.4)

Since $x > 0$, $x^k > 0$ and $w_x \ge 0$, we have

    $(w_x)_i (x_i^k)^2/x_i \ge (w_x)_i (2x_i^k - x_i)$.    (3.5)

It follows from (3.1) that

    $((w_x)_i - x_i)(q_x)_i = (x_i - (w_x)_i)\big(x_i - (1-\mu)x_i^k - \mu(x_i^k)^2/x_i\big)$
    $\ge (x_i)^2 - (1-\mu)x_i x_i^k - \mu(x_i^k)^2 - x_i(w_x)_i + (1-\mu)x_i^k(w_x)_i + \mu(w_x)_i(2x_i^k - x_i)$
    $= (x_i)^2 - (1-\mu)x_i x_i^k - \mu(x_i^k)^2 - (1+\mu)x_i(w_x)_i + (1+\mu)x_i^k(w_x)_i$
    $= \frac{1+\mu}{2}\big((x_i - (w_x)_i)^2 - (x_i^k - (w_x)_i)^2\big) + \frac{1-\mu}{2}(x_i - x_i^k)^2$.

Hence, (3.3) holds. Similarly, we can get (3.4). Adding (3.3) and (3.4), the proof is completed.  □
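As a quick sanity check of Lemma 3.1 (our own illustration, not part of the original paper), the inequality (3.2) can be verified numerically: draw random data, recover the positive solution of (3.1) via the root formula of Remark 2.1, and test (3.2) for random $w \ge 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, mu, nu = 5, 3, 0.01, 1.0

xk, yk = rng.uniform(0.5, 2.0, n), rng.uniform(0.5, 2.0, m)
qx, qy = rng.normal(size=n), rng.normal(size=m)

# Positive solution of (3.1): componentwise quadratic, cf. (2.12)-(2.13).
sx = (1 - mu) * xk - qx
x = 0.5 * (sx + np.sqrt(sx**2 + 4 * mu * xk**2))
py = (1 - mu) * yk - qy / nu
y = 0.5 * (py + np.sqrt(py**2 + 4 * mu * yk**2))

def G_norm2(dx, dy):                      # ||.||_G^2 with G from (2.6)
    return (1 + mu) * dx @ dx + nu * (1 + mu) * dy @ dy

for _ in range(1000):                     # test (3.2) for random w in X
    wx, wy = rng.uniform(0.0, 3.0, n), rng.uniform(0.0, 3.0, m)
    lhs = (wx - x) @ qx + (wy - y) @ qy
    rhs = 0.5 * (G_norm2(x - wx, y - wy) - G_norm2(xk - wx, yk - wy)) \
          + 0.5 * (1 - mu) * np.sum((x - xk) ** 2) \
          + 0.5 * nu * (1 - mu) * np.sum((y - yk) ** 2)
    assert lhs >= rhs - 1e-9
```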

We let

    $H(s) := \|u^k - u^*\|_G^2 - \|u^{k+1}(s) - u^*\|_G^2$,    (3.6)

which measures the progress obtained at the kth iteration. Note that the progress $H(s)$ is a function of the step length $s$ in the correction step. It is natural to maximize this function by choosing an optimal parameter $s$. The solution $u^*$ is unknown, so we cannot maximize $H(s)$ directly. The following results convert the task into maximizing a function $\Phi(\alpha)$ (to be defined in (3.22)) which does not involve the unknown solution $u^*$. First, we apply Lemma 3.1 to the prediction step.

Lemma 3.2. For given $u^k = (x^k, y^k) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++}$, let $\tilde{u}^k$ be the predictor produced by (2.3). For each $w = (w_x, w_y) \in X$ we have

    $(w - \tilde{u}^k)^T(\xi^k - \beta_k F(\tilde{u}^k)) \le \frac{1}{2}\big(\|u^k - w\|_G^2 - \|\tilde{u}^k - w\|_G^2\big) - \frac{1-\mu}{2}\|x^k - \tilde{x}^k\|^2 - \frac{\nu(1-\mu)}{2}\|y^k - \tilde{y}^k\|^2$.    (3.7)


Proof. We apply Lemma 3.1 to the prediction step (2.3). Note that

    $q_{\mathrm{pre}} := \begin{pmatrix} \beta_k(f(\tilde{x}^k) + A\tilde{y}^k) - \xi_x^k \\ \beta_k(-A^T\tilde{x}^k + b) - \xi_y^k \end{pmatrix} = \beta_k F(\tilde{u}^k) - \xi^k$.    (3.8)

By setting $q = q_{\mathrm{pre}}$ in (3.1) and $u = \tilde{u}^k$ in (3.2), it follows that

    $(w - \tilde{u}^k)^T(\xi^k - \beta_k F(\tilde{u}^k)) \le \frac{1}{2}\big(\|u^k - w\|_G^2 - \|\tilde{u}^k - w\|_G^2\big) - \frac{1-\mu}{2}\|x^k - \tilde{x}^k\|^2 - \frac{\nu(1-\mu)}{2}\|y^k - \tilde{y}^k\|^2$.    (3.9)

The assertion is proved.  □

Now, we apply Lemma 3.1 to the correction step.

Lemma 3.3. For given $u^k = (x^k, y^k) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++}$, let $\tilde{u}^k$ be the predictor produced by (2.3) and let $u^{k+1}(s)$ be the corrector (dependent on $s$) produced by (2.7). Then we have

    $H(s) \ge (1-\mu)\|x^k - x^{k+1}(s)\|^2 + \nu(1-\mu)\|y^k - y^{k+1}(s)\|^2 + 2s\beta_k F(\tilde{u}^k)^T(u^{k+1}(s) - \tilde{u}^k)$.    (3.10)

Proof. The proof is an application of Lemma 3.1 to the correction step (2.7). Note that

    $q_{\mathrm{cor}} := \begin{pmatrix} s\beta_k(f(\tilde{x}^k) + A\tilde{y}^k) \\ s\beta_k(-A^T\tilde{x}^k + b) \end{pmatrix} = s\beta_k F(\tilde{u}^k)$.    (3.11)

By setting $q = q_{\mathrm{cor}}$, $w = u^* \in X^*$ and $u = u^{k+1}(s)$ in (3.2), we get

    $(u^* - u^{k+1}(s))^T q_{\mathrm{cor}} \ge \frac{1}{2}\big(\|u^{k+1}(s) - u^*\|_G^2 - \|u^k - u^*\|_G^2\big) + \frac{1-\mu}{2}\|x^k - x^{k+1}(s)\|^2 + \frac{\nu(1-\mu)}{2}\|y^k - y^{k+1}(s)\|^2$.    (3.12)

It follows from the above inequality and the definition of $H(s)$ that

    $H(s) \ge (1-\mu)\|x^k - x^{k+1}(s)\|^2 + \nu(1-\mu)\|y^k - y^{k+1}(s)\|^2 + 2s\beta_k F(\tilde{u}^k)^T(u^{k+1}(s) - u^*)$
    $= (1-\mu)\|x^k - x^{k+1}(s)\|^2 + \nu(1-\mu)\|y^k - y^{k+1}(s)\|^2 + 2s\beta_k F(\tilde{u}^k)^T\{(u^{k+1}(s) - \tilde{u}^k) + (\tilde{u}^k - u^*)\}$
    $\ge (1-\mu)\|x^k - x^{k+1}(s)\|^2 + \nu(1-\mu)\|y^k - y^{k+1}(s)\|^2 + 2s\beta_k F(\tilde{u}^k)^T(u^{k+1}(s) - \tilde{u}^k)$.    (3.13)

The last inequality in (3.13) uses the fact that $F$ is monotone together with the inequality $(\tilde{u}^k - u^*)^T F(\tilde{u}^k) \ge (\tilde{u}^k - u^*)^T F(u^*) \ge 0$.  □

Now we deal with the last term of (3.10).

Lemma 3.4. For given $u^k = (x^k, y^k) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++}$, let $\tilde{u}^k$ be the predictor produced by (2.3) and let $u^{k+1}(s)$ be the corrector produced by (2.7). Then we have

    $(u^{k+1}(s) - \tilde{u}^k)^T\beta_k F(\tilde{u}^k) \ge (u^{k+1}(s) - \tilde{u}^k)^T G d^k - \mu\|x^k - \tilde{x}^k\|^2 - \nu\mu\|y^k - \tilde{y}^k\|^2$,    (3.14)

where

    $d^k = (u^k - \tilde{u}^k) + G^{-1}\xi^k$.    (3.15)

Proof. Since $u^{k+1}(s) \in X$, by substituting $w = u^{k+1}(s)$ in (3.7) we get

    $(u^{k+1}(s) - \tilde{u}^k)^T(\xi^k - \beta_k F(\tilde{u}^k)) \le \frac{1}{2}\big(\|u^k - u^{k+1}(s)\|_G^2 - \|\tilde{u}^k - u^{k+1}(s)\|_G^2\big) - \frac{1-\mu}{2}\|x^k - \tilde{x}^k\|^2 - \frac{\nu(1-\mu)}{2}\|y^k - \tilde{y}^k\|^2$.    (3.16)

We also use the following identity:

    $(u^{k+1}(s) - \tilde{u}^k)^T G(u^k - \tilde{u}^k) = \frac{1}{2}\big(\|\tilde{u}^k - u^{k+1}(s)\|_G^2 - \|u^k - u^{k+1}(s)\|_G^2\big) + \frac{1}{2}\|u^k - \tilde{u}^k\|_G^2$.    (3.17)


Adding (3.16) and (3.17), we obtain (using the definition of $G$ in (2.6))

    $(u^{k+1}(s) - \tilde{u}^k)^T\{G(u^k - \tilde{u}^k) + \xi^k - \beta_k F(\tilde{u}^k)\} \le \mu\|x^k - \tilde{x}^k\|^2 + \nu\mu\|y^k - \tilde{y}^k\|^2$,    (3.18)

which implies (3.14).  □

Consequently, we have the following theorem.

Theorem 3.1. For given $u^k = (x^k, y^k) \in \mathbb{R}^n_{++} \times \mathbb{R}^m_{++}$, let $\tilde{u}^k$ be the predictor produced by (2.3) and let $u^{k+1}(s)$ be the corrector produced by (2.7). Then we have

    $H(s) \ge \frac{1-\mu}{1+\mu}\|u^k - u^{k+1}(s)\|_G^2 + 2s(u^{k+1}(s) - \tilde{u}^k)^T G d^k - 2s\mu\|x^k - \tilde{x}^k\|^2 - 2s\nu\mu\|y^k - \tilde{y}^k\|^2$.    (3.19)

Proof. Combining the assertions of Lemmas 3.3 and 3.4, we get

    $H(s) \ge (1-\mu)\|x^k - x^{k+1}(s)\|^2 + \nu(1-\mu)\|y^k - y^{k+1}(s)\|^2 + 2s(u^{k+1}(s) - \tilde{u}^k)^T G d^k - 2s\mu\|x^k - \tilde{x}^k\|^2 - 2s\nu\mu\|y^k - \tilde{y}^k\|^2$.    (3.20)

For the first two terms of the right-hand side of (3.20), we observe that

    $(1-\mu)\|x^k - x^{k+1}(s)\|^2 + \nu(1-\mu)\|y^k - y^{k+1}(s)\|^2 = \frac{1-\mu}{1+\mu}\big((1+\mu)\|x^k - x^{k+1}(s)\|^2 + \nu(1+\mu)\|y^k - y^{k+1}(s)\|^2\big) = \frac{1-\mu}{1+\mu}\|u^k - u^{k+1}(s)\|_G^2$.

Substituting this into (3.20), we obtain the required result.  □

For convenience of analysis, we consider $s$ of the form $s = \frac{1-\mu}{1+\mu}\alpha$, where $\alpha$ is a positive scalar. The following theorem has already been studied in [7]. For the sake of completeness, and to convey an idea of the technique involved, we include its proof. It provides a lower bound for $H(\frac{1-\mu}{1+\mu}\alpha)$ which is a concave quadratic function of $\alpha$.

Theorem 3.2 [7]. Let $H(s)$ be defined by (3.6) and $d^k$ be defined by (3.15). If we take $s = \frac{1-\mu}{1+\mu}\alpha$ in (3.6), then for any $u^* \in X^*$ and $\alpha > 0$, we have

    $H\Big(\frac{1-\mu}{1+\mu}\alpha\Big) \ge \frac{1-\mu}{1+\mu}\Phi(\alpha)$,    (3.21)

where

    $\Phi(\alpha) := 2\alpha\varphi_k - \alpha^2\|d^k\|_G^2$    (3.22)

and

    $\varphi_k := \|x^k - \tilde{x}^k\|^2 + \nu\|y^k - \tilde{y}^k\|^2 + (u^k - \tilde{u}^k)^T\xi^k$.    (3.23)

Proof. Recall that

    $H\Big(\frac{1-\mu}{1+\mu}\alpha\Big) = \|u^k - u^*\|_G^2 - \Big\|u^{k+1}\Big(\frac{1-\mu}{1+\mu}\alpha\Big) - u^*\Big\|_G^2$.    (3.24)

Since $s = \frac{1-\mu}{1+\mu}\alpha$, it follows from (3.19) and (3.15) that

    $\frac{1+\mu}{1-\mu}H\Big(\frac{1-\mu}{1+\mu}\alpha\Big) \ge 2\alpha\Big(u^{k+1}\Big(\frac{1-\mu}{1+\mu}\alpha\Big) - u^k + (u^k - \tilde{u}^k)\Big)^T G d^k - 2\alpha\mu\|x^k - \tilde{x}^k\|^2 - 2\alpha\nu\mu\|y^k - \tilde{y}^k\|^2 + \Big\|u^k - u^{k+1}\Big(\frac{1-\mu}{1+\mu}\alpha\Big)\Big\|_G^2$
    $= 2\alpha(u^k - \tilde{u}^k)^T G d^k - 2\alpha\mu\|x^k - \tilde{x}^k\|^2 - 2\alpha\nu\mu\|y^k - \tilde{y}^k\|^2 - \alpha^2\|d^k\|_G^2 + \Big\|u^k - u^{k+1}\Big(\frac{1-\mu}{1+\mu}\alpha\Big) - \alpha d^k\Big\|_G^2$
    $\ge 2\alpha\big\{(u^k - \tilde{u}^k)^T G d^k - \mu\|x^k - \tilde{x}^k\|^2 - \nu\mu\|y^k - \tilde{y}^k\|^2\big\} - \alpha^2\|d^k\|_G^2$
    $= 2\alpha\big\{\|x^k - \tilde{x}^k\|^2 + \nu\|y^k - \tilde{y}^k\|^2 + (u^k - \tilde{u}^k)^T\xi^k\big\} - \alpha^2\|d^k\|_G^2 = 2\alpha\varphi_k - \alpha^2\|d^k\|_G^2$.    (3.25)

The assertion follows from (3.22) and (3.25) directly.  □

Note that $H(\frac{1-\mu}{1+\mu}\alpha)$ can be regarded as the progress made by $u^{k+1}(\frac{1-\mu}{1+\mu}\alpha)$, and $\frac{1-\mu}{1+\mu}\Phi(\alpha)$ is a lower bound for $H(\frac{1-\mu}{1+\mu}\alpha)$.

Therefore, it is natural to choose the $\alpha$ that maximizes $\Phi(\alpha)$. Since $\Phi(\alpha)$ is a concave quadratic function of $\alpha$, it reaches its maximum at

    $\alpha_k^* = \frac{\varphi_k}{\|d^k\|_G^2}$,    (3.26)

with

    $\Phi(\alpha_k^*) = \alpha_k^*\varphi_k$.    (3.27)

Under Condition (2.4) we have

    $2\varphi_k - \|d^k\|_G^2 = 2\|x^k - \tilde{x}^k\|^2 + 2\nu\|y^k - \tilde{y}^k\|^2 - \|u^k - \tilde{u}^k\|_G^2 - \|G^{-1}\xi^k\|_G^2$
    $= (1-\mu)\|x^k - \tilde{x}^k\|^2 + \nu(1-\mu)\|y^k - \tilde{y}^k\|^2 - \|G^{-1}\xi^k\|_G^2$
    $= \frac{1-\mu}{1+\mu}\big((1+\mu)\|x^k - \tilde{x}^k\|^2 + \nu(1+\mu)\|y^k - \tilde{y}^k\|^2\big) - \|G^{-1}\xi^k\|_G^2$
    $= \frac{1-\mu}{1+\mu}\|u^k - \tilde{u}^k\|_G^2 - \|G^{-1}\xi^k\|_G^2 \ge \frac{1-\mu}{1+\mu}(1-\eta^2)\|u^k - \tilde{u}^k\|_G^2$.    (3.28)

Therefore, it follows from (3.26) and (3.28) that

    $\alpha_k^* \ge \frac{1}{2}$.    (3.29)

Consequently, from (3.27), (3.28) and (3.29) we obtain

    $\Phi(\alpha_k^*) \ge \frac{(1-\eta^2)(1-\mu)}{4(1+\mu)}\|u^k - \tilde{u}^k\|_G^2$.    (3.30)

Based on numerical experiments, we prefer to multiply the optimal value $\alpha^*$ by a relaxation factor $\gamma \in [1, 2)$ (better when close to 2). Thus the correction step with the optimal $\alpha^*$ of the proposed method is to find the solution $u^{k+1}(\frac{1-\mu}{1+\mu}\gamma\alpha^*)$ of the following system of equations:

    $(\mathrm{P\!-\!C}_{\gamma\alpha^*})$
    $\frac{1-\mu}{1+\mu}\gamma\alpha^*\beta_k(f(\tilde{x}^k) + A\tilde{y}^k) + x - (1-\mu)x^k - \mu X_k^2 x^{-1} = 0$,
    $\frac{1-\mu}{1+\mu}\gamma\alpha^*\beta_k(-A^T\tilde{x}^k + b) + \nu y - \nu(1-\mu)y^k - \mu\nu Y_k^2 y^{-1} = 0$,    (3.31)

where

    $\alpha_k^* = \frac{\|x^k - \tilde{x}^k\|^2 + \nu\|y^k - \tilde{y}^k\|^2 + (u^k - \tilde{u}^k)^T\xi^k}{\|d^k\|_G^2}$.    (3.32)
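A minimal sketch (ours, not the paper's) of how the optimal step size $\alpha_k^*$ of (3.32) and the relaxed step used in the correction (3.31) can be computed from the prediction residuals; variable names are assumptions.

```python
import numpy as np

def step_length(dx, dy, xi_x, xi_y, mu, nu, gamma=1.95):
    """Optimal alpha_k^* of (3.32) and the relaxed correction step
    s = gamma * alpha_k^* * (1 - mu) / (1 + mu) used in (3.31)."""
    phi_k = dx @ dx + nu * (dy @ dy) + dx @ xi_x + dy @ xi_y   # cf. (3.23)
    # ||d^k||_G^2 with d^k = (u^k - u~^k) + G^{-1} xi^k and G from (2.6)
    d_x = dx + xi_x / (1.0 + mu)
    d_y = dy + xi_y / (nu * (1.0 + mu))
    dk_G2 = (1.0 + mu) * (d_x @ d_x) + nu * (1.0 + mu) * (d_y @ d_y)
    alpha_k = phi_k / dk_G2
    return gamma * alpha_k * (1.0 - mu) / (1.0 + mu)
```

The same quantities reappear in Step 4 of the self-adaptive implementation given in Section 5.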

4. Convergence of the proposed method

In this section, we establish the convergence of the proposed method.


Theorem 4.1. Let $u^{k+1}(\frac{1-\mu}{1+\mu}\gamma\alpha^*)$ be the solution of (3.31). Then for any $u^* \in X^*$ and $\gamma \in [1, 2)$, we have

    $\Big\|u^{k+1}\Big(\frac{1-\mu}{1+\mu}\gamma\alpha^*\Big) - u^*\Big\|_G^2 \le \|u^k - u^*\|_G^2 - \frac{\gamma(2-\gamma)(1-\eta^2)}{4}\,\frac{1-\mu}{1+\mu}\,\|u^k - \tilde{u}^k\|_G^2$.    (4.33)

Proof. Note that for $\gamma \in [1, 2)$ we obtain

    $\Phi(\gamma\alpha_k^*) = 2\gamma\alpha_k^*\varphi_k - (\gamma^2\alpha_k^*)(\alpha_k^*\|d^k\|_G^2) = (2\gamma\alpha_k^* - \gamma^2\alpha_k^*)\varphi_k = \gamma(2-\gamma)\Phi(\alpha_k^*)$.    (4.34)

It follows from Theorem 3.2 and (4.34) that

    $H\Big(\frac{1-\mu}{1+\mu}\gamma\alpha^*\Big) = \|u^k - u^*\|_G^2 - \Big\|u^{k+1}\Big(\frac{1-\mu}{1+\mu}\gamma\alpha^*\Big) - u^*\Big\|_G^2 \ge \frac{1-\mu}{1+\mu}\Phi(\gamma\alpha^*) = \frac{1-\mu}{1+\mu}\gamma(2-\gamma)\Phi(\alpha^*)$.    (4.35)

Then the assertion follows from (3.30) immediately.  □

It follows from Theorem 4.1 that there is a constant $c > 0$ such that

    $\|u^{k+1} - u^*\|_G^2 \le \|u^k - u^*\|_G^2 - c\|u^k - \tilde{u}^k\|_G^2 \quad \forall u^* \in X^*$.    (4.36)



Since (4.36) holds for any u 2 X , we have 2

2

2

½distðukþ1 ; X Þ 6 ½distðuk ; X Þ  ckuk  ~ uk kG ;

ð4:37Þ

where distðuk ; X Þ ¼ inffku  u kG ju 2 X g:

ð4:38Þ

Therefore, P  C cak belong to contractive methods because its new iterate is closer to the solution set X . To prove the convergence of the proposed method we need the following lemma, its proof is again an application of Lemma 3.2. h Lemma 4.1. For given uk ¼ ðxk ; y k Þ 2 Rnþþ  Rmþþ and bk > 0, let ~uk be obtained by (2.3), then for each u 2 X, we have   T  ð1 þ lÞx  ðlxk þ ~xk Þ x  ~xk ð4:39Þ ðu  ~ uk ÞT ðbk F ð~ u k Þ  nk Þ P : mð1 þ lÞy  mðly k þ ~y k Þ y  ~y k Proof. First, substituting w ¼ u in (3.7) we have 1 1l mð1  lÞ T 2 2 2 2 uk  ukG  kuk  ukG Þ þ kx  ~xk k þ ky  ~y k k : ðu  ~ uk Þ ðbk F ð~ uk  nk ÞÞ P ðk~ 2 2 2

ð4:40Þ

Therefore, for showing (4.39), we need only to prove 1þl 1l k ðk~xk  xk2  kxk  xk2 Þ þ kx  ~xk k2 ¼ ðxk  ~xk ÞT ðð1 þ lÞx  ðlxk þ ~xk ÞÞ 2 2

ð4:41Þ

mð1 þ lÞ mð1  lÞ k 2 2 2 T ðk~y k  yk  ky k  yk Þ þ ky  ~y k k ¼ mðy k  ~y k Þ ðð1 þ lÞy  ðly k þ ~y k ÞÞ: 2 2

ð4:42Þ

and

First, by a manipulation we get 1þl 1l k ðk~xk  xk2  kxk  xk2 Þ þ kx  ~xk k2 2 2 ¼ ð1 þ lÞxT xk  ð1 þ lÞxT~xk  ð1  lÞð~xk ÞT xk  lkxk k2 þ k~xk k2 ¼ ð1 þ lÞxT ðxk  ~xk Þ  ðxk  ~xk ÞT ðlxk þ ~xk Þ ¼ ðxk  ~xk ÞT ðð1 þ lÞx  ðlxk þ ~xk ÞÞ: and thus (4.41) holds. Similarly, we can prove (4.42). And the proof is completed. h

ð4:43Þ


Now, the convergence of the proposed method can be proved as follows.

Theorem 4.2. If $\inf_{k=0}^{\infty}\beta_k := \beta > 0$, then the sequence $\{u^k\}$ generated by the proposed method converges to some $u^{\infty}$ which is a solution of VI(X, F).

Proof. It follows from (4.36) that $\{u^k\}$ is a bounded sequence and

    $\lim_{k\to\infty}\|u^k - \tilde{u}^k\|_G = 0$.    (4.44)

Consequently, $\{\tilde{u}^k\}$ is also bounded. Since $\lim_{k\to\infty}\|u^k - \tilde{u}^k\|_G = 0$, $\|G^{-1}\xi^k\|_G < c_0\|u^k - \tilde{u}^k\|_G$ and $\beta_k \ge \beta > 0$, it follows from (4.39) that

    $\lim_{k\to\infty}(u - \tilde{u}^k)^T F(\tilde{u}^k) \ge 0 \quad \forall u \in X$.    (4.45)

Because $\{\tilde{u}^k\}$ is bounded, it has at least one cluster point. Let $u^{\infty}$ be a cluster point of $\{\tilde{u}^k\}$ and let the subsequence $\{\tilde{u}^{k_j}\}$ converge to $u^{\infty}$. It follows that

    $\lim_{j\to\infty}(u - \tilde{u}^{k_j})^T F(\tilde{u}^{k_j}) \ge 0 \quad \forall u \in X$,    (4.46)

and consequently

    $(u - u^{\infty})^T F(u^{\infty}) \ge 0 \quad \forall u \in X$.    (4.47)

This means that $u^{\infty}$ is a solution of VI(X, F). Note that the inequality (4.36) is true for all solution points of VI(X, F); hence we have

    $\|u^{k+1} - u^{\infty}\|_G^2 \le \|u^k - u^{\infty}\|_G^2 \quad \forall k \ge 0$.    (4.48)

Since $\tilde{u}^{k_j} \to u^{\infty}$ $(j \to \infty)$ and $u^k - \tilde{u}^k \to 0$ $(k \to \infty)$, for any given $\varepsilon > 0$ there exists an integer $k_l > 0$ such that

    $\|\tilde{u}^{k_l} - u^{\infty}\|_G < \varepsilon/2$  and  $\|\tilde{u}^{k_l} - u^{k_l}\|_G < \varepsilon/2$.    (4.49)

Therefore, for any $k \ge k_l$, it follows from (4.48) and (4.49) that

    $\|u^k - u^{\infty}\|_G \le \|u^{k_l} - u^{\infty}\|_G \le \|u^{k_l} - \tilde{u}^{k_l}\|_G + \|\tilde{u}^{k_l} - u^{\infty}\|_G \le \varepsilon$.    (4.50)

This implies that the sequence $\{u^k\}$ converges to $u^{\infty}$, which is a solution of VI(X, F).  □

5. A self-adaptive version

For given $x^k > 0$ and $\beta_k > 0$, the main task of the proposed method is to solve (2.3) such that (2.4) is satisfied. In particular, in some cases an acceptable approximate solution can be obtained directly if $\beta_k$ is suitably small. Note that

    $\|G^{-1}\xi^k\|_G^2 = \frac{1}{1+\mu}\|\xi_x^k\|^2 + \frac{1}{\nu(1+\mu)}\|\xi_y^k\|^2$.    (5.51)

In addition,

ð5:52Þ

where d kx ¼ xk  ~xk

and

d ky ¼ y k  ~y k :

ð5:53Þ

Therefore, Condition (2.4) can be written as rk :¼

mknkx k2 þ knky k2 mð1  l2 Þkd kx k2 þ m2 ð1  l2 Þkd ky k2

!1=2 6 g;

g 2 ð0; 1Þ:

ð5:54Þ

1596

X.-L. Fu, A. Bnouhachem / Applied Mathematics and Computation 189 (2007) 1586–1600

In this section, we suggest a self-adaptive procedure to find such a suitable small bk. If rk 6 g, the prediction ~ uk ¼ ð~xk ; ~y k Þ is accepted; otherwise, reduce the value of bk by bk :¼ bk  0:8=rk and repeat the procedure. Too small values of bk, however, usually lead to extremely slow convergence according to our numerical pffiffiffiffiffiffiffiffiffiffiffi k experiments. Thus it is necessary to avoid this situation. In addition, balance ðkn k= 1 þ lÞ and x pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ðknky k= mð1 þ lÞÞ via adjusting m is also necessary for practical computation. To do so, a strategy of enlarging bk and adjusting mk is proposed in the following algorithm. It is well known that solving VIðX; F Þ (1.1) and (1.2) is equivalent to finding a zero point of ! ! x  P Rnþ fx  ½f ðxÞ þ Ayg ex ðuÞ eðuÞ :¼ ¼ : ð5:55Þ ey ðuÞ y  P Rmþ fy  ½AT x þ bg So, we choose keðuk Þk1 6 e as the stop criterion. The implementation details of a self-adaptive version of the proposed method Step 0. Let b0 ¼ 1; c 2 ð0; 2Þ; gð:¼ 0:95Þ < 1; l ¼ 0:01; e ¼ 106 ; k ¼ 0; x0 > 0 and y 0 > 0. Step 1. If keðuk Þk1 6 e, then stop. Otherwise, go to Step 2. Step 2. (Prediction step) Produce the predictor ~ uk ¼ ð~xk ; ~y k Þ: Calculate f ðxk Þ and AT xk  b; T k k k (1) p :¼ ð1  lÞy  bk ðA x þ bÞ=mk ;  qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ~y ki :¼ pki þ ðpki Þ2 þ 4lðy ki Þ2 2; sk :¼ ð1  lÞxk  bk ðf ðxk Þ þ A~y k Þ;  qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ~xki :¼ ski þ ðski Þ2 þ 4lðxki Þ2 2; d kx :¼ xk  ~xk ;

d ky :¼ y k  ~y k ;

nkx :¼ bk ðf ð~xk Þ  f ðxk ÞÞ; nky :¼ bk AT ðxk  ~xk Þ; vffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi u u mk knkx k2 þ knky k2 : rk :¼ t mk ð1  l2 Þkd kx k2 þ m2k ð1  l2 Þkd ky k2 (2) If rk > g,then reduce bk by bk :¼ bk  0:8=rk and go to Step 1. Step 3. Adjust b and m for the next iteration if necessary: (1) Prepare an enlarged b for next iteration if rk is too small, ( bk  0:7=rk if rk 6 0:5; bkþ1 :¼ otherwise: bk (2) Adjust m for balancing the next iteration

mkþ1

8 m  0:5 > < k :¼ mk  2 > : mk ;

if

t1 > 4t2 ;

if

t2 > 4t1 ;

where

knkx k t1 :¼ pffiffiffiffiffiffiffiffiffiffiffi ; 1þl

otherwise;

knky k t2 :¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi: mð1 þ lÞ

Step 4. Calculate the step-size in the correction step (see (3.32) and (3.15)): T

ak ¼

T

ðd kx þ nkx Þ d kx þ ðmk d ky þ nky Þ d ky T

1

T

1

k ½ð1 þ lÞd kx þ nkx  ½d kx þ ð1 þ lÞ nkx  þ ½mk ð1 þ lÞd ky þ nky  ½d ky þ m1 k ð1 þ lÞ ny 

ak ¼ cak bk

1l : 1þl

;


Step 5. (Correction step) Calculate the new iterate $u^{k+1} = (x^{k+1}, y^{k+1})$:

    $s^k := (1-\mu)x^k - a_k(f(\tilde{x}^k) + A\tilde{y}^k)$;
    $x_i^{k+1} := \big(s_i^k + \sqrt{(s_i^k)^2 + 4\mu(x_i^k)^2}\big)/2$;
    $p^k := (1-\mu)y^k - a_k(-A^T\tilde{x}^k + b)/\nu_k$;
    $y_i^{k+1} := \big(p_i^k + \sqrt{(p_i^k)^2 + 4\mu(y_i^k)^2}\big)/2$;
    $k := k + 1$; go to Step 1.
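The following NumPy sketch (our own illustration, not the authors' code; the function names, the outer driver and the defaults are assumptions) assembles Steps 0–5 for a generic instance given by a callable f, a matrix A and a vector b; the projection residual implements (5.55).

```python
import numpy as np

def residual(f, A, b, x, y):
    """e(u) of (5.55); its infinity norm is the stopping measure."""
    ex = x - np.maximum(x - (f(x) + A @ y), 0.0)
    ey = y - np.maximum(y - (-A.T @ x + b), 0.0)
    return ex, ey

def self_adaptive_lqp(f, A, b, x0, y0, mu=0.01, gamma=1.95, eta=0.95,
                      eps=1e-6, beta=1.0, nu=1.0, max_iter=10000):
    x, y = x0.copy(), y0.copy()
    root = lambda p, z: 0.5 * (p + np.sqrt(p**2 + 4.0 * mu * z**2))
    for _ in range(max_iter):
        ex, ey = residual(f, A, b, x, y)                        # Step 1
        if max(np.abs(ex).max(), np.abs(ey).max()) <= eps:
            break
        while True:                                             # Step 2
            y_t = root((1 - mu) * y - beta * (-A.T @ x + b) / nu, y)
            x_t = root((1 - mu) * x - beta * (f(x) + A @ y_t), x)
            dx, dy = x - x_t, y - y_t
            xi_x = beta * (f(x_t) - f(x))
            xi_y = beta * (A.T @ dx)
            r = np.sqrt((nu * xi_x @ xi_x + xi_y @ xi_y) /
                        (nu * (1 - mu**2) * dx @ dx + nu**2 * (1 - mu**2) * dy @ dy))
            if r <= eta:
                break
            beta *= 0.8 / r
        beta_next = beta * 0.7 / r if r <= 0.5 else beta        # Step 3(1)
        t1 = np.linalg.norm(xi_x) / np.sqrt(1 + mu)             # Step 3(2)
        t2 = np.linalg.norm(xi_y) / np.sqrt(nu * (1 + mu))
        nu_next = 0.5 * nu if t1 > 4 * t2 else (2.0 * nu if t2 > 4 * t1 else nu)
        # Step 4: optimal step size (3.32) and relaxed step a_k
        phi = dx @ dx + nu * dy @ dy + dx @ xi_x + dy @ xi_y
        gx, gy = dx + xi_x / (1 + mu), dy + xi_y / (nu * (1 + mu))
        alpha = phi / ((1 + mu) * gx @ gx + nu * (1 + mu) * gy @ gy)
        a_k = gamma * alpha * beta * (1 - mu) / (1 + mu)
        # Step 5: correction, again via the explicit positive roots
        x = root((1 - mu) * x - a_k * (f(x_t) + A @ y_t), x)
        y = root((1 - mu) * y - a_k * (-A.T @ x_t + b) / nu, y)
        beta, nu = beta_next, nu_next
    return x, y
```

The defaults mirror the parameter choices reported for the traffic equilibrium tests in Section 6; for a concrete run one would plug in the problem-specific f, A and b.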

6. Numerical experiments

We apply the proposed method to the following examples to illustrate its advantages and efficiency. All codes were run on a P4-2.6G desktop computer.

6.1. A test problem of VI(S, f)

In order to verify the theoretical assertions, we consider the following problem, which was used in [9]: find $x^* \in S$ such that

    $(x - x^*)^T f(x^*) \ge 0 \quad \forall x \in S$,

where $f(x)$ is the nonlinear monotone mapping of the test problem in [9] (a $5\times 5$ linear part plus a separable nonlinear part and a constant vector) and

    $S = \{x \in \mathbb{R}^5 \mid Ax \le d,\ x_i \ge 0,\ i = 1, 2, \ldots, 5\}$,

in which $A \in \mathbb{R}^{4\times 5}$ and $d = (10.00, 37.84, 12.84, 20.88)^T$. The solution point of this problem is $x^* = (9.08, 4.84, 0.00, 0.00, 5.00)^T$.

We demonstrate the necessity of the LQP-PC method with the self-adaptive variable parameter $\nu$ (marked LQP-PC-V). For comparison purposes, we also coded the LQP-PC method with the fixed parameter $\nu = 1$ (marked LQP-PC-F). We used the following choice of parameters in both LQP-PC-V and LQP-PC-F: $\mu = 0.1$, $\gamma = 1.95$, $\eta = 0.95$. The parameter $\beta_0$ is initially chosen to be 1 and the starting point is $x^0 = (0, 0, 100, 0, 0)^T$. We report the number of iterations, the number of function evaluations and the CPU time for different $\epsilon$ in Table 1. The results indicate that adapting $\nu$ leads to a significant reduction in the number of iterations; LQP-PC-V represents a significant improvement in computational cost over LQP-PC-F.

6.2. Numerical experiments for traffic equilibrium problems

The test examples in this section arise from traffic equilibrium problems.


Table 1
Number of iterations and function evaluations for different $\epsilon$

                         No. of iterations         No. of f evaluations       CPU-time
                         LQP-PC-V   LQP-PC-F       LQP-PC-V   LQP-PC-F        LQP-PC-V   LQP-PC-F
  $\epsilon = 10^{-5}$      86         494            211        1255          0.047 s    0.282 s
  $\epsilon = 10^{-6}$     101         553            250        1389          0.047 s    0.282 s
  $\epsilon = 10^{-7}$     113         628            279        1572          0.062 s    0.297 s

6.2.1. Traffic equilibrium problems

We consider the network of [7] shown in Fig. 1, which consists of 25 nodes, 37 links and 6 O/D pairs. We apply the proposed method to this traffic network equilibrium problem with two modifications, one with link capacities and the other with both link capacities and lower bounds on the travel demands. We use the same notation as [7]. The traffic equilibrium problems can be described as follows:

    $(x - x^*)^T F(x^*) \ge 0 \quad \forall x \in S$,    (6.1)

where $F(x) = A\,t(A^T x) - B\,\lambda(B^T x)$. For the two modifications, $S$ has the following different forms:

• Traffic equilibrium problems with link capacity bounds,

    $S = \{x \in \mathbb{R}^n \mid A^T x \le b,\ x \ge 0\}$,  where $b$ is the given link capacity vector.    (6.2)

• Traffic equilibrium problems with link capacity bounds and demand lower bounds,

    $S = \{x \in \mathbb{R}^n \mid A^T x \le b,\ B^T x \ge d,\ x \ge 0\}$.    (6.3)

It is clear that all these traffic equilibrium problems are special cases of the structured variational inequality (1.3) and (1.4), and we apply the proposed method to solve them. In all test implementations we use the forms of the functions $t(f)$ and $\lambda(d)$ given in [7], and we take $u^0 = (x^0, y^0)$, where $y^0 = 0$ and each element of $x^0$ equals 1, $\mu = 0.01$, $\gamma = 1.95$, $\eta = 0.95$. For these test problems, the stopping criterion

    $\max\left\{\frac{\|e_x(u^k)\|_{\infty}}{\|e_x(u^0)\|_{\infty}},\ \|e_y(u^k)\|_{\infty}\right\} \le \epsilon$    (6.4)

for different $\epsilon$ is reasonable.

Fig. 1. A directed network with 25 nodes and 37 links.
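Since the link travel-cost function $t(\cdot)$ and the O/D disutility function $\lambda(\cdot)$ are taken from [7] and not reproduced here, the following sketch (our illustration, with hypothetical user-supplied callables t and lam) only shows how $F(x) = A\,t(A^Tx) - B\,\lambda(B^Tx)$ and the relative stopping measure (6.4) would be assembled for the solver of Section 5.

```python
import numpy as np

def make_traffic_F(A, B, t, lam):
    """F(x) = A t(A^T x) - B lam(B^T x), cf. (6.1); t and lam are the
    link travel-cost and O/D disutility functions of [7] (user-supplied)."""
    return lambda x: A @ t(A.T @ x) - B @ lam(B.T @ x)

def stop_measure(ex_k, ey_k, ex_0):
    """Relative stopping quantity of (6.4)."""
    return max(np.abs(ex_k).max() / np.abs(ex_0).max(), np.abs(ey_k).max())
```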


6.2.2. Problems with link capacity bounds

The constraint set of the problem with link capacity bounds is $S = \{x \in \mathbb{R}^n \mid A^T x \le b,\ x \ge 0\}$, where $b$ is a given capacity vector. We report the number of iterations, the number of function evaluations and the CPU time for different capacities and different $\epsilon$ in Table 2. As illustrated in Section 6.2.1, the output vector $x^*$ is the path flow, and the link flow vector is $A^T x^*$. In fact, $y^*$ in the output is the toll charge on the congested links. For the example with link capacity $b = 40$ we list the optimal link flow and the toll charge in Table 3. Indeed, the link toll charge is greater than zero if and only if the link flow reaches the capacity.

6.2.3. Problems with link capacity bounds and demand lower bounds

The constraint set of the problem in this subsection is $S = \{x \in \mathbb{R}^n \mid A^T x \le b,\ B^T x \ge d,\ x \ge 0\}$, where $b$ and $d$ are given vectors. In the test example we let each element of $b$ and $d$ equal 40 and 10, respectively. We report the number of iterations, the number of mapping evaluations and the CPU time for different $\epsilon$ in Table 4. The dual variable $y^*$ consists of two subvectors, $y_I^*$ (associated with the capacity constraints $A^T x \le b$) and $y_{II}^*$ (associated with the demand lower bounds $B^T x \ge d$). $y_I^*$ in the output is the toll charge on the congested links, while $y_{II}^*$ represents the subsidy on the O/D pairs. For the test example, we list the optimal link flow and the toll charge in Table 5. The optimal demand and the subsidy of each O/D pair are given in Table 6. The outputs coincide with the optimality conditions.

Table 2
Number of iterations and function evaluations for different $\epsilon$

  Link flow      No. of iterations                      No. of F evaluations                   CPU-time                 Scaling
  capacity       $10^{-5}$   $10^{-6}$   $10^{-7}$      $10^{-5}$   $10^{-6}$   $10^{-7}$      ($\epsilon = 10^{-6}$)   factor $\nu$
  b = 30           140         175         193            294         364         400            0.157 s                0.0156
  b = 40           224         254         283            475         540         602            0.187 s                0.0156
  b = $\infty$     311         397         501            643         819        1033            0.313 s                0.0156

Table 3
The optimal link flow and the toll charge on the links when b = 40

  Link  Flow   Charge   Link  Flow   Charge   Link  Flow   Charge   Link  Flow   Charge
  1     40.00    4.3    11     1.85    0      21    40.00    1.1    31    11.96    0
  2     38.15    0      12    11.96    0      22    40.00  136.6    32    40.00  164.2
  3     40.00  163.2    13    26.19    0      23    26.19    0      33    40.00  135.6
  4     13.81    0      14    13.81    0      24     0       0      34    26.19    0
  5      0       0      15     0       0      25     0       0      35    28.04    0
  6      0       0      16     0       0      26     0       0      36    40.00  301.3
  7      0       0      17     0       0      27     0       0      37     0       0
  8      0       0      18     0       0      28     0       0      –      –       –
  9      0       0      19     0       0      29    26.19    0      –      –       –
  10    40.00    1.1    20    40.00    1.8    30     1.85    0      –      –       –

Table 4
Number of iterations and function evaluations for different $\epsilon$

  Constraints                                          No. of iterations                      No. of F evaluations                   CPU-time                 Scaling
                                                       $10^{-5}$   $10^{-6}$   $10^{-7}$      $10^{-5}$   $10^{-6}$   $10^{-7}$      ($\epsilon = 10^{-7}$)   factor $\nu$
  $A^T x \le 40$ (link capacity), $B^T x \ge 10$         395         478         560            838        1012        1183            0.372 s                0.0156
  (demand lower bound)


Table 5
The optimal link flow and the toll charge on the links when b = 40

  Link  Flow   Charge   Link  Flow   Charge   Link  Flow   Charge   Link  Flow   Charge
  1     40.00   17.4    11    10.00    0      21    40.00    6.9    31    10.00    0
  2     40.00   12.6    12    10.00    0      22    40.00  306.5    32    34.65    0
  3     40.00  333.9    13    14.61    0      23    19.96    0      33    25.35    0
  4     25.39    0      14     8.69    0      24     3.30    0      34    23.14    0
  5     16.70    0      15     9.88    0      25    13.18    0      35    30.00    0
  6      6.82    0      16     0       0      26    13.18    0      36    40.00  294.0
  7      6.82    0      17     0       0      27    13.18    0      37    14.65    0
  8      6.82    0      18     6.82    0      28    20.00    0      –      –       –
  9      0       0      19     0       0      29    23.14    0      –      –       –
  10    40.00   25.6    20    36.86    0      30     6.86    0      –      –       –

Table 6
The optimal demand and the related subsidy

  (O, D) pair       (1, 20)   (1, 25)   (2, 20)   (3, 25)   (1, 24)   (11, 25)
  Optimal demand      10        10        10        10        60        20
  Subsidy            909.4     730.6     710.2      29.1       0         0

For all test problems, the solutions are obtained in a moderate number of iterations; thus the proposed method solves these problems efficiently. Theoretically, each iteration requires at least two evaluations of $f(x)$. In the numerical experiments, the number of evaluations of $f$ per iteration is approximately 2.25. Roughly speaking, the convergence speed of the proposed method is linear according to the numerical experiments.

References

[1] A. Auslender, M. Haddou, An interior proximal point method for convex linearly constrained problems and its extension to variational inequalities, Mathematical Programming 71 (1995) 77–100.
[2] A. Auslender, M. Teboulle, S. Ben-Tiba, A logarithmic-quadratic proximal method for variational inequalities, Computational Optimization and Applications 12 (1999) 31–40.
[3] A. Auslender, M. Teboulle, Lagrangian duality and related multiplier methods for variational inequality problems, SIAM Journal on Optimization 10 (4) (2000) 1097–1115.
[4] A. Auslender, M. Teboulle, A log-quadratic projection method for convex feasibility problems, in: D. Butnariu, Y. Censor, S. Reich (Eds.), Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications 8 (2001) 1–10.
[5] A. Ben-Tal, M. Zibulevsky, Penalty-barrier methods for convex programming problems, SIAM Journal on Optimization 7 (1997) 347–366.
[6] J. Eckstein, Approximate iterations in Bregman-function-based proximal algorithms, Mathematical Programming 83 (1998) 113–123.
[7] B.S. He, L.-Z. Liao, X.M. Yuan, A LQP based interior prediction–correction method for nonlinear complementarity problems, Journal of Computational Mathematics 24 (1) (2006) 33–44.
[8] B. Martinet, Régularisation d'inéquations variationnelles par approximations successives, Revue Française d'Informatique et de Recherche Opérationnelle 4 (1970) 154–159.
[9] K. Taji, M. Fukushima, T. Ibaraki, A globally convergent Newton method for solving strongly monotone variational inequalities, Mathematical Programming 58 (1993) 369–383.
[10] M. Teboulle, Convergence of proximal-like algorithms, SIAM Journal on Optimization 7 (1997) 1069–1083.