An inexact secant algorithm for large scale nonlinear systems of equalities and inequalities


Applied Mathematical Modelling 36 (2012) 3612–3620


Chao Gu (a,*), Detong Zhu (b)

(a) School of Math. and Info., Shanghai LiXin University of Commerce, Shanghai 201620, PR China
(b) Business College, Shanghai Normal University, Shanghai 200234, PR China

Article history: Received 28 January 2011; received in revised form 10 September 2011; accepted 31 October 2011; available online 7 November 2011.

Keywords: Nonlinear systems; Optimization; Nonmonotone technique; Line search; Filter; Convergence

Abstract. In this paper, an inexact secant algorithm combined with a nonmonotone technique and a filter is proposed for solving large scale nonlinear systems of equalities and inequalities. The systems are transformed into a continuous constrained optimization problem, which is solved by the inexact secant algorithm. Global convergence of the proposed algorithm is established under reasonable conditions. Numerical results validate the effectiveness of our approach. © 2011 Elsevier Inc. All rights reserved.

1. Introduction

A classical algorithm for solving systems of nonlinear equations is Newton's method. The method is attractive because it converges rapidly from any sufficiently good initial guess. However, solving a system of linear equations (the Newton equations) at each iteration can be expensive when the number of unknowns is large, and may not be justified when x_k is far from a solution. Therefore, Dembo et al. [1] considered a class of inexact Newton methods in which the linear systems are solved only approximately. Later, inexact quasi-Newton methods and inexact secant methods were presented for the solution of large scale nonlinear equations and nonlinear optimization [2,3]. Consider the following nonlinear system of equalities and inequalities

c_i(x) = 0,  i ∈ E,    (1.1a)
c_i(x) ≤ 0,  i ∈ I,    (1.1b)

where c_i(x): R^n → R, i ∈ E ∪ I, and E ∩ I = ∅. Systems of nonlinear equalities and inequalities appear in a wide variety of problems in applied mathematics. They play a central role in the model formulation, design and analysis of numerical techniques for problems arising in complementarity and variational inequalities. Many researchers have considered this problem, especially its numerical treatment. Dennis et al. [4] considered a trust region approach. Macconi et al. [5] discussed the problem with simple bounds and established the global and local convergence of two trust region methods. Smoothing Newton-like methods were studied by Yang et al. [6] and Zhang and Huang [7]. But none of these works considered an inexact algorithm for the large scale problem (1.1).

* Corresponding author. E-mail addresses: [email protected] (C. Gu), [email protected] (D. Zhu).
doi:10.1016/j.apm.2011.10.034


In this paper, we transform problem (1.1) into an equality constrained optimization problem, which is solved by an inexact secant algorithm with a nonmonotone technique and a filter. Filter methods, introduced by Fletcher and Leyffer [8] for nonlinear programming (NLP), offer an alternative to merit functions as a tool to guarantee global convergence of algorithms for nonlinear optimization. Filter strategies have attracted attention because of promising numerical results [8–14,18]. The nonmonotone technique introduced by Grippo et al. [15] is used to speed up convergence in some ill-conditioned cases. The paper is organized as follows. In Section 2, we propose an inexact secant algorithm; the global convergence of the algorithm is proved in Section 3; finally, we report some numerical experiments in Section 4.

Notation: ‖·‖ is the ordinary Euclidean norm throughout the paper. Let ∇f(x) denote the gradient of the function f(x) and ∇c(x) the Jacobian of the constraint c(x).

2. An inexact secant algorithm

Let C_E(x) be the vector function whose components are c_i(x) for i ∈ E, and C_I(x) the vector function whose components are c_i(x) for i ∈ I. Define a 0–1 diagonal indicator matrix W(x) ∈ R^{p×p} whose diagonal entries are

w_i(x) = 1,  if i ∈ I and c_i(x) ≥ 0;
w_i(x) = 0,  if i ∈ I and c_i(x) < 0.    (2.1)

Now, we define the function

Φ_I(x) = (1/2) C_I(x)^T W(x) C_I(x),    (2.2)

and problem (1.1) can be written as

min (1/2) C_I(x)^T W(x) C_I(x),    (2.3a)
subject to C_E(x) = 0.    (2.3b)

It is easy to see that the function t₊ = max{t, 0} is continuous and that (t₊)² satisfies

d/dt [ (1/2) t₊² ] = t₊.

Hence, (t₊)² is differentiable [4]. Thus, if each c_i(x), i ∈ I, is continuously differentiable,

∇Φ_I(x) = ∇C_I(x)^T W(x) C_I(x)    (2.4)
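The weighting (2.1), the function (2.2) and its gradient (2.4) are straightforward to evaluate numerically. The following minimal Python sketch illustrates them on a hypothetical two-constraint system (the constraint values, the Jacobian J_I, and the convention that J_I stores one constraint gradient per row are illustrative assumptions, not part of the paper):

```python
import numpy as np

def indicator_matrix(c_I):
    """Diagonal 0-1 matrix W(x) from (2.1): w_i = 1 iff c_i(x) >= 0."""
    return np.diag((c_I >= 0).astype(float))

def phi_I(c_I):
    """Phi_I(x) = 0.5 * C_I(x)^T W(x) C_I(x), cf. (2.2)."""
    W = indicator_matrix(c_I)
    return 0.5 * c_I @ W @ c_I

def grad_phi_I(c_I, J_I):
    """grad Phi_I(x) = J_I^T W(x) C_I(x), cf. (2.4); J_I is assumed to be
    the Jacobian of C_I at x with one constraint gradient per row."""
    W = indicator_matrix(c_I)
    return J_I.T @ (W @ c_I)

# toy inequality system: c_1 = x_1 - 1 <= 0, c_2 = x_2 + 2 <= 0
x = np.array([2.0, -3.0])
c = np.array([x[0] - 1.0, x[1] + 2.0])   # values (1.0, -1.0): only c_1 violated
J = np.eye(2)
val = phi_I(c)                           # only the violated constraint contributes
grad = grad_phi_I(c, J)
```

Only constraints with c_i(x) ≥ 0 enter Φ_I, so satisfied inequalities contribute neither to the objective nor to its gradient.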

is well defined and continuous. The inexact secant algorithm for (2.3) is given by the following iterative scheme.

Inexact secant algorithm: For k = 0 until convergence do

λ_{k+1} = -(∇C_E(x_k)^T ∇C_E(x_k))^{-1} ∇C_E(x_k)^T ∇Φ_I(x_k),    (2.5)
B_k w_k = -∇_x ℓ(x_k, λ_{k+1}) + r_k,    (2.6)
h_k = P(x_k, B_k) w_k,    (2.7)
v_k = -∇C_E(x_k)(∇C_E(x_k)^T ∇C_E(x_k))^{-1} C_E(x_k),    (2.8)
s_k = h_k + v_k,    (2.9)
x_{k+1} = x_k + s_k,    (2.10)

where B_k denotes an approximation of the Hessian ∇²_{xx} ℓ(x_k, λ_k) of the Lagrangian function

ℓ(x, λ) = Φ_I(x) + λ^T C_E(x),    (2.11)

P(x_k, B_k) = I - B_k^{-1} ∇C_E(x_k)(∇C_E(x_k)^T B_k^{-1} ∇C_E(x_k))^{-1} ∇C_E(x_k)^T is an oblique projection onto the null space N(A_k^T), and the vector r_k represents the residual, which satisfies

‖r_k‖ ≤ η_k min{‖∇_x ℓ(x_k, λ_{k+1})‖, ‖C_E(x_k)‖},    (2.12)

where the nonnegative forcing sequence {η_k} ⊂ [0, 1) is used to control the level of accuracy. As we approach the solution, the residual r_k goes to zero, meaning that the linear system is solved more and more exactly. Throughout the paper we use the BFGS secant update

B_{k+1}^{BFGS} = B_k + y_k y_k^T / (y_k^T s_k) - B_k s_k (B_k s_k)^T / (s_k^T B_k s_k),    (2.13)

where y_k = ∇_x ℓ(x_k + h_k, λ_{k+1}) - ∇_x ℓ(x_k, λ_{k+1}).
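Update (2.13) can be sketched as follows in Python. The skip when y_k^T s_k is not sufficiently positive is a common practical safeguard to preserve positive definiteness; it is an assumption of this sketch, not part of the paper's formula:

```python
import numpy as np

def bfgs_update(B, s, y, curvature_tol=1e-8):
    """BFGS secant update (2.13):
    B+ = B + y y^T / (y^T s) - (B s)(B s)^T / (s^T B s).
    The update is skipped (hypothetical safeguard) unless y^T s > 0."""
    ys = y @ s
    if ys <= curvature_tol * np.linalg.norm(s) * np.linalg.norm(y):
        return B                       # skip update, keep B positive definite
    Bs = B @ s
    return B + np.outer(y, y) / ys - np.outer(Bs, Bs) / (s @ Bs)

B = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([2.0, 0.0])               # y = grad_l(x + h) - grad_l(x)
B1 = bfgs_update(B, s, y)
```

Whenever the update is applied, the new matrix satisfies the secant condition B_{k+1} s_k = y_k by construction.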


To determine whether to accept or reject the trial point, the following merit function and infeasibility measure are employed:

ℓ(x, λ) = Φ_I(x) + λ(x)^T C_E(x)  and  h(x) = (1/2)‖C_E(x)‖².

During the line search, a trial point xk(ak,l) = xk + ak,lsk is acceptable if

h(x_k(α_{k,l})) ≤ (1 - γ_h) max_{0≤j≤m(k)-1} h(x_{k-j})    (2.14a)

or

ℓ(x_k(α_{k,l})) ≤ max_{0≤j≤m(k)-1} ℓ(x_{k-j}) - γ_ℓ h(x_k)    (2.14b)

holds for γ_h, γ_ℓ ∈ (0, 1), where m(0) = 0, 0 ≤ m(k) ≤ min{m(k-1) + 1, M}, and M is a positive constant. Let m_k(α) = α∇Φ_I(x_k)^T s_k - αλ_k^T C_E(x_k) + α(∇λ_k^T s_k)^T C_E(x_k), which is obtained by Taylor expansions of Φ_I(x), λ(x) and C_E(x) around x_k in the direction s_k. When the conditions

m_k(1) ≤ -ξ s_k^T B_k s_k  and  -m_k(α_{k,l}) > δ[h(x_k)]^{s_h}    (2.15)

hold, where ξ ∈ (0, 1), δ > 0 and s_h ∈ (0, 1), the trial point x_k(α_{k,l}) has to satisfy the nonmonotone Armijo condition

ℓ(x_k(α_{k,l})) ≤ max_{0≤j≤m(k)-1} ℓ(x_{k-j}) + η_ℓ m_k(α_{k,l})    (2.16)

instead of (2.14), where η_ℓ ∈ (0, 1/2). For the sake of simplified notation, we define the filter in this paper not as a list but as a set F_k containing all pairs (h, ℓ) that are prohibited in iteration k. We say that a trial point x_k(α_{k,l}) is acceptable to the filter if

(h(x_k(α_{k,l})), ℓ(x_k(α_{k,l}))) ∉ F_k.

At the beginning of the proposed algorithm, the filter is initialized to

F_0 = {(h, ℓ) ∈ R²: h ≥ 10⁴}.

If the accepted trial step size does not satisfy condition (2.16), the filter is augmented for the new iteration using the nonmonotone update formula

F_{k+1} = F_k ∪ {(h, ℓ) ∈ R²: h ≥ (1 - γ_h) max_{0≤j≤m(k)-1} h(x_{k-j}), ℓ ≥ max_{0≤j≤m(k)-1} ℓ(x_{k-j}) - γ_ℓ h(x_k)}.    (2.17)
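A set of the form F_k can be stored as a list of corner pairs: a trial pair is prohibited when it is dominated by some stored corner. The following Python sketch of the acceptance test and the nonmonotone augmentation (2.17) uses an illustrative class interface and parameter values (γ_h = γ_ℓ = 0.5, initial bound 10⁴) that are assumptions of the sketch:

```python
class Filter:
    """Filter F_k as a list of (h_bound, l_bound) corners: a pair (h, l)
    is prohibited when h >= h_bound and l >= l_bound for some corner."""
    def __init__(self, h_max=1e4):
        # initial filter: prohibit h >= h_max for any merit value l
        self.corners = [(h_max, float("-inf"))]

    def acceptable(self, h, l):
        # acceptable iff no stored corner dominates (h, l)
        return all(h < hb or l < lb for hb, lb in self.corners)

    def augment(self, h_hist, l_hist, h_k, gamma_h=0.5, gamma_l=0.5):
        """Nonmonotone update (2.17): block the region dominated by the
        worst recent h and l values, relaxed by gamma_h and gamma_l."""
        self.corners.append(((1 - gamma_h) * max(h_hist),
                             max(l_hist) - gamma_l * h_k))

F = Filter()
F.augment(h_hist=[2.0, 1.0], l_hist=[3.0, 4.0], h_k=1.0)
# now pairs with h >= 1.0 and l >= 3.5 are prohibited
```

The histories h_hist and l_hist stand for the last m(k) values of h and ℓ, so the blocked region is measured against the worst recent iterates rather than the current one.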

We do not augment the filter if (2.15) and (2.16) hold for the accepted step size. In order to detect the situation where no admissible step size can be found and the restoration phase has to be invoked, we define

α_k^min = η_α · min{ 1 - (1 - γ_h)h̄_k / h_k, (ℓ̄_k - γ_ℓ h_k - ℓ_k)/m_k(1), -δ[h_k]^{s_h} / m_k(1) },  if m_k(α) < -ξα s_k^T B_k s_k;
α_k^min = η_α · (1 - (1 - γ_h)h̄_k / h_k),  otherwise,    (2.18)

where η_α ∈ (0, 1], h̄_k = max_{0≤j≤m(k)-1} h(x_{k-j}) and ℓ̄_k = max_{0≤j≤m(k)-1} ℓ(x_{k-j}). The proposed algorithm switches to the feasibility restoration phase when α_{k,l} becomes smaller than α_k^min. The algorithm is described as follows.

Algorithm 1

Given: starting point x_0; F_0 = {(h, ℓ) ∈ R²: h ≥ 10⁴}; γ_h, γ_ℓ ∈ (0, 1); s_h ∈ (0, 1); ξ ∈ (0, 1); δ > 0; η_ℓ ∈ (0, 1/2); η_α ∈ (0, 1]; 0 < τ_1 ≤ τ_2 < 1; ε > 0; M > 0.

1. Compute Φ_I(x_k), ∇Φ_I(x_k), C_E(x_k), ∇C_E(x_k), h_k, B_k, λ_k. If Φ_I(x_k) + ‖C_E(x_k)‖ ≤ ε or ‖∇C_I(x_k)^T W(x_k)C_I(x_k) + ∇C_E(x_k)λ_k‖ + h(x_k) ≤ ε, stop.
2. Compute w_k such that

‖B_k w_k + ∇_x ℓ(x_k, λ_{k+1})‖ ≤ η_k min{‖∇_x ℓ(x_k, λ_{k+1})‖, ‖C_E(x_k)‖};

then obtain s_k = h_k + v_k from (2.7) and (2.8).

3. Set α_{k,0} = 1 and l ← 0. Compute α_k^min by (2.18).
4. If α_{k,l} < α_k^min, go to Step 11. Otherwise, compute x_k(α_{k,l}) = x_k + α_{k,l}s_k.
5. If (h(x_k(α_{k,l})), ℓ(x_k(α_{k,l}))) ∈ F_k, go to Step 9.
6. If (2.15) holds, go to Step 7. Otherwise, go to Step 8.
7. If (2.16) holds, accept x_{k+1} = x_k(α_{k,l}) and go to Step 10. Otherwise, go to Step 9.
8. If (2.14) holds, accept x_{k+1} = x_k(α_{k,l}), augment the filter using (2.17) and go to Step 10. Otherwise, go to Step 9.
9. Choose a new trial step size α_{k,l+1} ∈ [τ_1 α_{k,l}, τ_2 α_{k,l}], set l ← l + 1, and go back to Step 4.
10. Continue with the next iteration: set m(k + 1) = min{m(k) + 1, M}, k ← k + 1, and go back to Step 1.
11. Feasibility restoration phase: compute a new iterate x_{k+1} by decreasing the infeasibility measure h so that x_{k+1} satisfies the sufficient decrease condition (2.14a) and is acceptable to the filter. Go to Step 1.
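The inner loop of Steps 3–9 can be sketched as a backtracking search. The callback interface below is hypothetical: filt stands for F_k, switching for condition (2.15), armijo for the nonmonotone Armijo test (2.16), and sufficient_decrease for test (2.14); tau plays the role of a fixed choice in [τ_1, τ_2]:

```python
def backtracking_filter_search(x, s, alpha_min, h, l, filt,
                               switching, armijo, sufficient_decrease,
                               tau=0.5):
    """Sketch of Steps 3-9 of Algorithm 1 (hypothetical callbacks).
    Returns (x_next, augment_filter), or None to request the
    feasibility restoration phase (Step 11)."""
    alpha = 1.0                                    # Step 3
    while alpha >= alpha_min:                      # Step 4
        xt = x + alpha * s
        if filt.acceptable(h(xt), l(xt)):          # Step 5: filter test
            if switching(alpha):                   # Step 6: condition (2.15)
                if armijo(xt, alpha):              # Step 7: accept, no augmentation
                    return xt, False
            elif sufficient_decrease(xt):          # Step 8: accept and augment
                return xt, True
        alpha *= tau                               # Step 9: shrink trial step
    return None                                    # Step 11: restoration phase
```

The boolean returned alongside the accepted point records whether the filter must be augmented by (2.17), which happens exactly when acceptance came through the (2.14) branch.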

Algorithm 2 (Feasibility restoration phase). Consider the least-squares problem

min h(x) = (1/2)‖C_E(x)‖².
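A minimal Levenberg–Marquardt iteration for this least-squares problem might look as follows (a sketch only: the callbacks c_E and J_E, the damping schedule, and the toy system are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def levenberg_marquardt(c_E, J_E, x, mu=1e-3, tol=1e-10, max_iter=50):
    """Minimize h(x) = 0.5 * ||C_E(x)||^2 by damped Gauss-Newton steps.
    c_E, J_E are user-supplied residual and Jacobian callbacks."""
    for _ in range(max_iter):
        r, J = c_E(x), J_E(x)
        if 0.5 * r @ r <= tol:
            break
        # solve the damped normal equations (J^T J + mu I) d = -J^T r
        d = np.linalg.solve(J.T @ J + mu * np.eye(x.size), -J.T @ r)
        if 0.5 * np.sum(c_E(x + d) ** 2) < 0.5 * r @ r:
            x, mu = x + d, mu * 0.5        # successful step: relax damping
        else:
            mu *= 10.0                     # reject step: increase damping
    return x

# toy equality system C_E(x) = (x_1 - 1, x_2 + 2)
cE = lambda x: np.array([x[0] - 1.0, x[1] + 2.0])
JE = lambda x: np.eye(2)
x_feas = levenberg_marquardt(cE, JE, np.array([0.0, 0.0]))
```

In the full method the loop would additionally stop as soon as the iterate satisfies (2.14a) and is acceptable to the filter, rather than driving h(x) all the way to zero.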

We use the Levenberg–Marquardt method [16] to minimize the function h(x) until x_{k+1} satisfies the sufficient decrease condition (2.14a) and is acceptable to the filter F_k.

3. Global convergence

In the following, we denote by A the set of indices of those iterations in which the filter has been augmented.

Assumption G.
(G1) C_E(x), C_I(x) and λ(x) are continuously differentiable and bounded on R^n.
(G2) The iterates {x_k} remain in a compact subset S ⊂ R^n.
(G3) There exist two constants a and b with 0 < a ≤ b such that a‖p‖² ≤ p^T B_k p ≤ b‖p‖² for all k and all p ∈ R^n.

Lemma 1. If ‖h_k‖ = ‖C_E(x_k)‖ = 0, then x_k is a KKT point of (2.3).

Proof. See [17]. □

Lemma 2. Suppose Assumptions G hold. Then

h(x_k) = 0 implies m_k(α) ≤ -ξα s_k^T B_k s_k.    (3.1)

Proof. From the definition of P(x_k, B_k) we get

A_k^T h_k = A_k^T P(x_k, B_k) w_k = 0,

B_k h_k = B_k P(x_k, B_k) w_k = B_k P(x_k, B_k)[-B_k^{-1} ∇_x ℓ(x_k, λ_{k+1})] + B_k P(x_k, B_k) B_k^{-1} r_k
        = -∇Φ_I(x_k) + A_k (A_k^T B_k^{-1} A_k)^{-1} A_k^T B_k^{-1} ∇Φ_I(x_k) + B_k P(x_k, B_k) B_k^{-1} r_k.    (3.2)

If h(x_k) = 0, we have v_k = 0 and C_E(x_k) = 0, and hence r_k = 0 by (2.12). It follows from h(x_k) = 0, (3.2) and (2.12) that

m_k(α) = α∇Φ_I(x_k)^T s_k - αλ_k^T C_E(x_k) + α(∇λ_k^T s_k)^T C_E(x_k)    (3.3a)
       = -α h_k^T B_k h_k + α h_k^T B_k P(x_k, B_k) B_k^{-1} r_k + O(α‖C_E(x_k)‖)    (3.3b)
       = -α s_k^T B_k s_k ≤ -ξα s_k^T B_k s_k.  □    (3.3c)

Lemma 3. Suppose Assumptions G hold. Then no (h, ℓ)-pair corresponding to a feasible point is ever included in the filter, i.e.,

min{h: (h, ℓ) ∈ F_k} > 0.    (3.4)

Proof. The proof of (3.4) is by induction. Since F_0 = {(h, ℓ) ∈ R²: h ≥ 10⁴}, the claim is valid for k = 0.


Suppose that the claim is true for k. If h(x_k) > 0 and the filter is augmented in iteration k, then min{h: (h, ℓ) ∈ F_{k+1}} > 0 follows from the update rule (2.17) and max_{0≤j≤m(k)-1} h(x_{k-j}) ≥ h(x_k). If, on the other hand, h(x_k) = 0, we have m_k(α) ≤ -ξα s_k^T B_k s_k from (3.1) and -m_k(α_{k,l}) > δ[h(x_k)]^{s_h}, so that condition (2.15) holds. Then (2.16) must be satisfied, i.e., F_{k+1} = F_k. □

Remark 1. Unless the feasibility restoration phase terminates at a stationary point of h(x), it eventually produces a point acceptable to the filter.

Lemma 4. Suppose that Assumptions G hold and the filter is augmented infinitely often, i.e., |A| = ∞. Then

lim_{k→∞, k∈A} h(x_k) = 0.

Proof. The proof is by contradiction. Suppose that there exists an infinite subsequence {k_i} of A for which

h(x_{k_i}) ≥ ε    (3.5)

for some ε > 0. At each iteration k_i, the pair (h̄_{k_i}, ℓ̄_{k_i}) is added to the filter, which means that no other (h, ℓ)-pair can be added to the filter at a later stage within the square [h̄_{k_i} - γ_h ε, h̄_{k_i}] × [ℓ̄_{k_i} - γ_ℓ ε, ℓ̄_{k_i}], and the area of each of these squares is at least γ_h γ_ℓ ε². From Assumption G1 we have ℓ₁ ≤ ℓ_k ≤ ℓ₂ and 0 ≤ h_k ≤ h₁, where ℓ₁, ℓ₂ and h₁ are constants. Thus the (h, ℓ)-pairs associated with the filter are restricted to B = [0, h₁] × [ℓ₁, ℓ₂], whose area is clearly finite. Therefore B would be completely covered by at most a finite number of such squares, in contradiction to the infinite subsequence {k_i} satisfying (3.5). □

Theorem 1. Suppose Assumptions G hold and |A| = ∞. Then there exists a subsequence {k_i} ⊆ A such that

lim_{i→∞} ‖h_{k_i}‖ + ‖C_E(x_{k_i})‖ = 0.    (3.6)

Proof. By Assumption G2 and |A| = ∞, there exists an accumulation point x̄, i.e., lim_{i→∞} x_{k_i} = x̄, k_i ∈ A. It follows from Lemma 4 that

lim_{i→∞} ‖C_E(x_{k_i})‖ = 0.

If lim_{i→∞} ‖h_{k_i}‖ = 0, then x̄ is a KKT point. Otherwise, there exist a subsequence {x_{k_{i_j}}} of {x_{k_i}} and a constant ε > 0 so that for all k_{i_j}

‖h_{k_{i_j}}‖ ≥ ε.

By the choice of {x_{k_{i_j}}}, we have

k_{i_j} ∈ A for all k_{i_j}.    (3.7)

According to ‖h_{k_{i_j}}‖ ≥ ε, Assumption G3 and ξ ∈ (0, 1), we have

m_{k_{i_j}}(α) + ξα s_{k_{i_j}}^T B_{k_{i_j}} s_{k_{i_j}}
  = α∇Φ_I(x_{k_{i_j}})^T s_{k_{i_j}} - αλ_{k_{i_j}}^T C_E(x_{k_{i_j}}) + α(∇λ_{k_{i_j}}^T s_{k_{i_j}})^T C_E(x_{k_{i_j}}) + ξα s_{k_{i_j}}^T B_{k_{i_j}} s_{k_{i_j}}
  = -α h_{k_{i_j}}^T B_{k_{i_j}} h_{k_{i_j}} + ξα s_{k_{i_j}}^T B_{k_{i_j}} s_{k_{i_j}} + α h_{k_{i_j}}^T B_{k_{i_j}} P(x_{k_{i_j}}, B_{k_{i_j}}) B_{k_{i_j}}^{-1} r_{k_{i_j}} + O(‖C_E(x_{k_{i_j}})‖)
  = α(ξ - 1) h_{k_{i_j}}^T B_{k_{i_j}} h_{k_{i_j}} + O(‖C_E(x_{k_{i_j}})‖)
  ≤ αa(ξ - 1)‖h_{k_{i_j}}‖² + O(‖C_E(x_{k_{i_j}})‖)
  ≤ αa(ξ - 1)ε² + O(‖C_E(x_{k_{i_j}})‖).

Since ξ - 1 < 0, we have m_{k_{i_j}}(α) ≤ -ξα s_{k_{i_j}}^T B_{k_{i_j}} s_{k_{i_j}} for sufficiently large k_{i_j}. In a similar manner one can show that -m_{k_{i_j}}(α) ≥ δ[h(x_{k_{i_j}})]^{s_h} for sufficiently large k_{i_j}. This means that condition (2.15) is satisfied for sufficiently large k_{i_j}. Therefore, the reason for accepting the step size α must have been that α satisfies the nonmonotone Armijo condition (2.16) in Step 7. Consequently, the filter is not augmented in iteration k_{i_j}, which contradicts (3.7). □

Lemma 5. Suppose that Assumptions G hold and the filter is augmented finitely often, i.e., |A| < ∞. Then

lim_{k→∞} m_k(α_{k,l}) = 0.    (3.8)

Proof. By assumption there exists K ∈ N such that the filter is not augmented in any iteration k ≥ K, i.e., k ∉ A for all k ≥ K. Then, for all k ≥ K, both conditions (2.15) and (2.16) are satisfied for α_k.


Let l(k) be an integer such that

k - m(k) ≤ l(k) ≤ k,  ℓ(x_{l(k)}) = max_{0≤j≤m(k)-1} ℓ(x_{k-j}).    (3.9)

Taking into account that m(k + 1) ≤ m(k) + 1, one can write

ℓ(x_{l(k+1)}) = max_{0≤j≤m(k+1)-1} ℓ(x_{k+1-j}) ≤ max_{0≤j≤m(k)} ℓ(x_{k+1-j}) = max[ℓ(x_{l(k)}), ℓ(x_{k+1})] = ℓ(x_{l(k)}),

which implies that the sequence {ℓ(x_{l(k)})} is non-increasing. Moreover, we obtain from (2.16), for k ≥ K,

ℓ(x_{l(k)}) = ℓ(x_{l(k)-1} + α_{l(k)-1} s_{l(k)-1}) ≤ max_{0≤j≤m(l(k)-1)-1} ℓ(x_{l(k)-1-j}) + η_ℓ m_{l(k)-1}(α_{l(k)-1}) = ℓ(x_{l(l(k)-1)}) + η_ℓ m_{l(k)-1}(α_{l(k)-1}).

Since {ℓ(x_{l(k)})} is bounded below and non-increasing, it converges. Therefore

lim_{k→∞} m_{l(k)-1}(α_{l(k)-1}) = 0.

By (2.15) and Assumption G3 we have m_{l(k)-1}(α_{l(k)-1}) ≤ -ξα_{l(k)-1} s_{l(k)-1}^T B_{l(k)-1} s_{l(k)-1} ≤ -ξa α_{l(k)-1}‖s_{l(k)-1}‖², which implies

lim_{k→∞} α_{l(k)-1}‖s_{l(k)-1}‖ = 0.

Let l̂(k) = l(k + M + 2). Similarly to the proof in [15] one can show, by induction, that for any given j ≥ 1

lim_{k→∞} α_{l̂(k)-j}‖s_{l̂(k)-j}‖ = 0,  lim_{k→∞} ℓ(x_{l̂(k)-j}) = lim_{k→∞} ℓ(x_{l(k)}).    (3.10)

Now, for any k,

x_{k+1} = x_{l̂(k)} - Σ_{j=1}^{l̂(k)-k-1} α_{l̂(k)-j} s_{l̂(k)-j}.

Note that l̂(k) - k - 1 ≤ M + 1, so by (3.10) we have

lim_{k→∞} ‖x_{k+1} - x_{l̂(k)}‖ = 0.

Since {ℓ(x_{l(k)})} admits a limit, it follows from the continuity of ℓ that

lim_{k→∞} ℓ(x_k) = lim_{k→∞} ℓ(x_{l̂(k)}).

By (2.16), taking limits as k → ∞, we obtain lim_{k→∞} m_k(α_{k,l}) = 0. □

Theorem 2. Suppose that Assumptions G hold and |A| < ∞. Then

lim_{k→∞} ‖h_k‖ + ‖C_E(x_k)‖ = 0.

Proof. It follows from (2.15) and (3.8) that

lim_{k→∞} h(x_k) = 0    (3.11)

and

lim_{k→∞} α_k‖s_k‖ = 0.    (3.12)

Now let x̄ be any limit point of {x_k}. Then by (3.12) there are two cases.

Case 1. If lim_{k→∞} ‖s_k‖ = 0, then lim_{k→∞} ‖h_k‖ = 0 from (2.9), which implies that x̄ is a KKT point.

Case 2. There exists a subsequence {x_{k_i}} of {x_k} such that

lim_{i→∞} α_{k_i} = 0.

Thus, there exists K such that, for all k_i ≥ K, the step size α_{k_i}/τ_1 must have been rejected, either because it is not acceptable to the current filter, i.e.,

(h(x_{k_i}(α_{k_i}/τ_1)), ℓ(x_{k_i}(α_{k_i}/τ_1))) ∈ F_{k_i} = F_K,    (3.13)

or because it violates (2.16), i.e., it satisfies


ℓ(x_{k_i} + (α_{k_i}/τ_1) s_{k_i}) > max_{0≤j≤m(k_i)-1} ℓ(x_{k_i-j}) + η_ℓ m_{k_i}(α_{k_i}/τ_1).    (3.14)

In the following, we show that (3.13) cannot be true for sufficiently large k_i. Consider (3.13): from Lemma 3 we have min{h: (h, ℓ) ∈ F_K} > 0. By a second order Taylor expansion of h(x) and ‖s_k‖ ≤ M_s, we get

h(x_{k_i}(α_{k_i}/τ_1)) ≤ (1 - 2α_{k_i}/τ_1) h_{k_i} + C_h M_s² (α_{k_i}/τ_1)².

It follows from lim_{i→∞} α_{k_i}/τ_1 = 0, lim_{i→∞} h_{k_i} = 0 and Lemma 3 that, for sufficiently large k_i, x_{k_i}(α_{k_i}/τ_1) is accepted by the filter F_{k_i}. Therefore, for all k_i ≥ K,

ℓ(x_{k_i} + (α_{k_i}/τ_1) s_{k_i}) > max_{0≤j≤m(k_i)-1} ℓ(x_{k_i-j}) + η_ℓ m_{k_i}(α_{k_i}/τ_1) ≥ ℓ(x_{k_i}) + η_ℓ m_{k_i}(α_{k_i}/τ_1).

By the mean value theorem there exists ω_{k_i} ∈ (0, 1) such that

m_{k_i}(ω_{k_i} α_{k_i}/τ_1) ≥ η_ℓ m_{k_i}(α_{k_i}/τ_1).

Taking limits, we have

(1 - η_ℓ) m̄ ≥ 0.

Since 1 - η_ℓ > 0 and m_k(1) < 0, we obtain m̄ = 0. It follows from m_k(1) ≤ -ξ s_k^T B_k s_k that s̄ = 0. Then lim_{k→∞} ‖h_k‖ = 0 from (2.9), which implies that x̄ is a KKT point. □

4. Numerical experiments

In this section we present the numerical results of Algorithm 1, obtained on an HP personal computer (i5 CPU, 2 GB memory). The selected parameter values are ε = 10⁻⁶, γ_h = 0.5, γ_ℓ = 0.5, ξ = 0.5, δ = 1, s_h = 0.5, η_ℓ = 0.3, η_α = 0.5, τ_1 = τ_2 = 0.5, M = 5. The computation terminates when the stopping criterion Φ_I(x_k) + ‖C_E(x_k)‖ ≤ ε or ‖∇C_I(x_k)^T W(x_k)C_I(x_k) + ∇C_E(x_k)λ_k‖ + h(x_k) ≤ ε is satisfied. The forcing term is chosen as η_k = 1/(k + 1), k = 0, 1, 2, .... In the following, we mainly focus on the CPU time required by the exact and inexact algorithms for three large scale problems. NIT and NUI stand for the numbers of iterations and function evaluations, respectively; CPU time denotes the CPU time used when the iteration stops; error = Φ_I(x_k) + h(x_k).

Example 1

x_1(x_5 - 1) - 10x_5 = 0,
x_2(x_6 - 1) - 10x_6 = 0,
x_3(x_7 - 1) - 10x_7 = 0,
x_4(x_8 - 1) - 10x_8 = 0,
x_1 + x_2 + x_3 ≤ 0,
x_{n-2} + x_{n-1} + x_n ≤ 0.

Initial guess: x_0 = (0.1, 0.1, ..., 0.1)^T.

Example 2

-x_1 + 2x_2 - x_3 + (1/(2(n-1)²)) (x_2 + 2/(n-1) + 1)³ = 0,
-x_2 + 2x_3 - x_4 + (1/(2(n-1)²)) (x_3 + 3/(n-1) + 1)³ = 0,
x_1 + x_2 + x_n ≤ 0.

Initial guess: x_0 = (0.5, 0.5, ..., 0.5)^T.

Table 1
Numerical results for Example 1 using the secant algorithm and the inexact secant algorithm with M = 3.

Dim        Secant                                    Inexact secant
           NIT  NUI  CPU time    Error               NIT  NUI  CPU time    Error
n = 50     4    4    0.078001    4.8074e-16          4    4    0.1404      2.1532e-15
n = 100    4    4    0.1404      4.8074e-16          4    4    0.1716      2.1532e-15
n = 500    4    4    0.6708      4.8074e-16          4    4    0.65520     2.1532e-15
n = 1000   4    4    1.6692      4.8074e-16          4    4    1.3572      2.1532e-15
n = 2000   4    4    5.1168      4.8074e-16          4    4    3.1200      2.1532e-15
n = 5000   4    4    36.442      4.8074e-16          4    4    10.936      2.1532e-15

Table 2
Numerical results for Example 2 using the secant algorithm and the inexact secant algorithm with M = 3.

Dim        Secant                                    Inexact secant
           NIT  NUI  CPU time    Error               NIT  NUI  CPU time    Error
n = 50     3    4    0.04680     7.1995e-11          3    4    0.06248     7.1995e-11
n = 100    3    4    0.093601    9.6408e-13          3    4    0.12480     9.6408e-13
n = 500    3    4    0.42120     1.0066e-16          3    4    0.40560     4.7730e-16
n = 1000   3    4    1.0920      8.4320e-17          3    4    0.87261     7.6279e-16
n = 2000   2    3    2.1528      4.2274e-07          2    3    1.4352      4.2274e-07
n = 5000   2    3    14.165      6.7519e-08          2    3    5.4288      6.7519e-08

Table 3
Numerical results for Example 3 using the secant algorithm and the inexact secant algorithm with M = 3.

Dim        Secant                                    Inexact secant
           NIT  NUI  CPU time    Error               NIT  NUI  CPU time    Error
n = 50     7    9    0.1248      1.7764e-13          7    9    0.1560      1.7764e-13
n = 100    7    9    0.2184      1.7764e-13          7    9    0.2496      1.7764e-13
n = 500    7    9    1.4508      1.7764e-13          7    9    1.3260      1.7764e-13
n = 1000   7    9    4.7268      1.7764e-13          7    9    3.7740      1.7764e-13
n = 2000   7    9    21.123      1.7764e-13          7    9    14.617      1.7764e-13
n = 5000   7    9    237.36      1.7764e-13          7    9    150.73      1.7764e-13

Example 3

√(x_2 + x_{n-3}) - 8 = 0,
x_1 + x_{n-1} ≤ 0,
x_2 + x_{n-2} ≤ 0,
x_3 + x_{n-3} ≤ 0,
x_4 + x_{n-4} ≤ 0,
x_5 + x_{n-5} ≤ 0.

Initial guess: x_0 = (5, 5, ..., 5)^T.

Numerical results for Examples 1–3 using the secant algorithm and the inexact secant algorithm with M = 3 are shown in Tables 1–3. It is easy to see that the inexact algorithm performs much better than the exact one on the large scale problems 1–3. Moreover, when n is larger than 500, the larger the dimension, the greater the advantage. We also compare the nonmonotone inexact algorithm (M = 3) with the monotone one (M = 0) for Example 1.

Table 4
Numerical results for Example 1 using the inexact secant algorithm with M = 0 and M = 3.

Dim        M = 0                                     M = 3
           NIT  NUI  CPU time    Error               NIT  NUI  CPU time    Error
n = 50     4    16   0.2028      4.8074e-16          4    4    0.1404      2.1532e-15
n = 100    4    16   0.3432      4.8074e-16          4    4    0.1716      2.1532e-15
n = 500    4    16   1.1544      4.8074e-16          4    4    0.65520     2.1532e-15
n = 1000   4    16   2.3868      4.8074e-16          4    4    1.3572      2.1532e-15
n = 2000   4    16   5.0544      4.8074e-16          4    4    3.1200      2.1532e-15
n = 5000   4    16   16.349      4.8074e-16          4    4    10.936      2.1532e-15


Table 4 shows that the nonmonotone algorithm is better than the monotone one. The main reason is that the nonmonotone algorithm has more flexibility in the acceptance of the trial step and requires less computational cost than the monotone case. When M = 0, the exact secant algorithm reduces to a monotone line search filter secant algorithm, which is analyzed in [13]. That is to say, the inexact technique improves the performance of the traditional filter algorithm for large scale problems.

The new algorithm looks attractive for solving problems where the number of variables n is much larger than the number of constraints m. When the number of constraints approaches the number of variables, an alternative procedure that merits consideration is to solve both m × m linear systems inexactly and the n × n linear system exactly. In fact, one can even try to solve all three linear systems inexactly, but convergence then remains an open question. Convergence of these alternative approaches will be treated in a future report.

Acknowledgements

The authors gratefully acknowledge the partial support of the National Science Foundation Grant (10871130) of China, the Ph.D. Foundation Grant (20093127110005), the Shanghai Leading Academic Discipline Project (No. S30405), the Innovation Program of Shanghai Municipal Education Commission, and the Shanghai Finance Budget Project (1130IA15).

References

[1] R.S. Dembo, S.C. Eisenstat, T. Steihaug, Inexact Newton methods, SIAM J. Numer. Anal. 19 (1982) 400–408.
[2] E.G. Birgin, N. Krejić, J.M. Martinez, Globally convergent inexact quasi-Newton methods for solving nonlinear systems, Numer. Algorithms 32 (2003) 249–260.
[3] R. Fontecilla, Inexact secant methods for nonlinear constrained optimization, SIAM J. Numer. Anal. 27 (1990) 154–165.
[4] J.E. Dennis, M. El-Alem, K. Williamson, A trust-region approach to nonlinear systems of equalities and inequalities, SIAM J. Optim. 9 (1999) 291–315.
[5] M. Macconi, B. Morini, M. Porcelli, Trust-region quadratic methods for nonlinear systems of mixed equalities and inequalities, Appl. Numer. Math. 59 (2009) 859–876.
[6] L. Yang, Y. Chen, X. Tong, Smoothing Newton-like method for the solution of nonlinear systems of equalities and inequalities, Numer. Math. Theor. Meth. Appl. 2 (2009) 224–236.
[7] Y. Zhang, Z. Huang, A nonmonotone smoothing-type algorithm for solving a system of equalities and inequalities, J. Comput. Appl. Math. 233 (2010) 2312–2321.
[8] R. Fletcher, S. Leyffer, Nonlinear programming without a penalty function, Math. Program. 91 (2002) 239–269.
[9] C. Chin, A. Rashid, K. Nor, Global and local convergence of a filter line search method for nonlinear programming, Optim. Methods Softw. 22 (2007) 365–390.
[10] R. Fletcher, S. Leyffer, P.L. Toint, On the global convergence of a filter-SQP algorithm, SIAM J. Optim. 13 (2002) 44–59.
[11] R. Fletcher, N.I.M. Gould, S. Leyffer, P.L. Toint, A. Wächter, Global convergence of a trust-region SQP-filter algorithm for general nonlinear programming, SIAM J. Optim. 13 (2002) 635–659.
[12] C. Gu, D. Zhu, A filter interior-point algorithm with projected Hessian updating for nonlinear optimization, J. Appl. Math. Comput. 29 (2009) 67–80.
[13] C. Gu, D. Zhu, A secant algorithm with line search filter method for nonlinear optimization, Appl. Math. Modell. 35 (2011) 879–894.
[14] K. Su, Z. Yu, A modified SQP method with nonmonotone technique and its global convergence, Comput. Math. Appl. 57 (2009) 240–247.
[15] L. Grippo, F. Lampariello, S. Lucidi, A nonmonotone line search technique for Newton's method, SIAM J. Numer. Anal. 23 (1986) 707–716.
[16] J. Nocedal, S. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.
[17] J. Zhang, D. Zhu, A projective quasi-Newton method for nonlinear optimization, J. Comput. Appl. Math. 53 (1994) 291–307.
[18] A. Wächter, L.T. Biegler, Line search filter methods for nonlinear programming: motivation and global convergence, SIAM J. Optim. 16 (2005) 1–31.