Newton waveform relaxation method for solving algebraic nonlinear equations

Applied Mathematics and Computation 201 (2008) 553–560

Shulin Wu a,*, Chengming Huang a, Yong Liu b

a Department of Mathematics, Huazhong University of Science and Technology, Wuhan 430074, PR China
b Department of Aeronautic Instruments and Electronic Engineering, The First Aeronautic Institute of Air Force, Xinyang 464000, PR China

Abstract

We introduce a new notion for solving algebraic nonlinear equations, derived from the well-known waveform relaxation iterative method, which is well suited to coupling with different numerical methods for nonlinear equations, such as the classical Newton's method, quasi-Newton methods, the Conjugate-Gradient method, etc. We show in this paper that the algorithm obtained by coupling with the classical Newton's method has an essential capability for parallel computation and converges globally. Numerical results validate the theoretical analysis very well.

© 2007 Elsevier Inc. All rights reserved.

Keywords: Newton's method; Global convergence; Nonlinear equations; Parallel computation; Waveform relaxation

1. Introduction

1.1. Waveform relaxation method

Let us begin with the waveform relaxation method, which is an important and effective numerical method for large-scale differential systems [4–11]. Consider the following system:

    y'(t) = g(t, y(t)),  t \ge a;    y(a) = \eta,    (1.1)

where y \in R^n and g : R \times R^n \to R^n. The key idea of the waveform relaxation method for system (1.1) is to choose some function G : R \times R^n \times R^n \to R^n and then solve the following equations successively:

    (y^{k+1})'(t) = G(t, y^k(t), y^{k+1}(t)),  t \ge a;    y^{k+1}(a) = \eta,    (1.2)

This work was supported by the NSF of China (No. 10671078) and by the Program for NCET of the State Education Ministry of China.
* Corresponding author. E-mail addresses: [email protected] (S. Wu), [email protected] (C. Huang), [email protected] (Y. Liu).

doi:10.1016/j.amc.2007.12.044


where y^0(t) is a given initial approximation of y(t), the solution of (1.1). We then obtain y^{k+1}(t) from y^k(t) by solving (1.2), and the corresponding process is called continuous-time waveform relaxation iteration. The function G, called the splitting function, is chosen so as to decouple system (1.1) into easily solvable independent subsystems, which may then be solved separately. The function G is minimally assumed to satisfy a consistency condition, which ensures that the solution of (1.1) is a fixed point of (1.2), i.e.,

    G(t, v(t), v(t)) = g(t, v(t))

for any function v(t) \in R^n.
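As a minimal illustration (not taken from the paper), the sketch below applies a Gauss–Jacobi-type waveform relaxation to a small linear test system y'(t) = Ay(t): the splitting G(t, y^k, y^{k+1}) = D y^{k+1} + (A - D) y^k, with D the diagonal of A, satisfies the consistency condition and decouples the components, so each scalar equation could be integrated in parallel. The test matrix, step size, and forward-Euler integrator are illustrative assumptions.

```python
import numpy as np

# Illustrative 2x2 linear test system y'(t) = A y(t), y(0) = eta.
A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
eta = np.array([1.0, 1.0])
T, N = 1.0, 100                      # time window [0, T], N Euler steps
h = T / N

def wr_jacobi(A, eta, sweeps=20):
    """Gauss-Jacobi waveform relaxation: the splitting
    G(t, y_old, y_new) = D y_new + (A - D) y_old satisfies
    G(t, v, v) = A v and decouples the new components."""
    D = np.diag(np.diag(A))
    R = A - D
    y = np.tile(eta, (N + 1, 1))     # y^0: constant initial waveform
    for _ in range(sweeps):
        y_new = np.empty_like(y)
        y_new[0] = eta
        for i in range(N):
            # forward Euler on the decoupled iteration equations (1.2)
            y_new[i + 1] = y_new[i] + h * (D @ y_new[i] + R @ y[i])
        y = y_new
    return y

print("y(T) after relaxation:", wr_jacobi(A, eta)[-1])
```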

1.2. New notion for algebraic nonlinear equations

Assume we have to solve the nonlinear equations

    f(x) = 0,    (1.3)

where f : D \subseteq R^n \to R^n. Provided we have a starting guess x^0 of the unknown solution x^* at hand, then, in analogy with the continuous-time waveform relaxation iteration, we choose a function F : D \times D \to R^n which satisfies

    F(x, x) = f(x)    (1.4)

for any x \in R^n, and then solve the following equations to pass from x^k to x^{k+1}:

    F(x^k, x^{k+1}) = 0.    (1.5)

If x^k \to \hat{x} as k \to +\infty, then by the consistency condition (1.4) we know that \hat{x} is a solution of the nonlinear equations (1.3). This framework is well suited to coupling with different numerical methods for solving (1.5), such as the classical Newton's method, quasi-Newton methods, the Conjugate-Gradient method, etc. In this paper, we use the classical Newton's method to solve (1.5). This leads to the following algorithm, written compactly as:

    for k = 0, 1, ... with a given initial approximation x_0 of x^{k+1}:
        for m = 0, 1, ..., M - 1:
            F_2(x^k, x_m) \Delta x_m = -F(x^k, x_m);  x_{m+1} = x_m + \Delta x_m;    (1.6)
        end
        x^{k+1} = x_M;
    end

where, here and below,

    F_2(x, y) = \partial F(x, z)/\partial z |_{z=y}.    (1.7)

If we set x_0 = x^k and M = 1 in (1.6), then by the consistency condition (1.4), algorithm (1.6) is equivalent to

    F_2(x^k, x^k) \Delta x^k = -f(x^k),  x^{k+1} = x^k + \Delta x^k,  k = 0, 1, ...,    (1.8)

and we call method (1.8) the Newton waveform relaxation method. At first glance the iterative formula (1.8) is very similar to the classical Newton's method for (1.3), written as

    f'(x^k) \Delta x^k = -f(x^k),  x^{k+1} = x^k + \Delta x^k,  k = 0, 1, ...,    (1.9)

provided f is at least once continuously differentiable. Newton's method is an important and basic method [12], which converges quadratically but only locally.
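A minimal sketch (our own, not the authors' code) of algorithm (1.6): the splitting function F, its partial Jacobian F_2 from (1.7), and the starting point are supplied by the caller, and M = 1 with the inner iteration started at x^k recovers method (1.8). When F_2 is diagonal or block diagonal, the linear solve decouples and can run in parallel.

```python
import numpy as np

def newton_wr(F, F2, x0, M=1, max_iter=1000, tol=1e-12):
    """Algorithm (1.6): outer relaxation sweeps on F(x^k, .) = 0,
    each solved approximately by M inner Newton steps.  With M = 1
    and inner start x_0 = x^k this reduces to method (1.8)."""
    xk = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        xm = xk.copy()                    # inner start x_0 = x^k
        for _ in range(M):
            # F_2(x^k, x_m) dx = -F(x^k, x_m); for (block) diagonal
            # F_2 this solve splits into independent small systems.
            dx = np.linalg.solve(F2(xk, xm), -F(xk, xm))
            xm = xm + dx
        xk = xm
        # by consistency (1.4), F(x, x) = f(x) is the true residual
        if np.linalg.norm(F(xk, xk), np.inf) <= tol:
            return xk, k + 1
    return xk, max_iter
```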


To learn the key ideas of Newton's method from a historical perspective, as well as modern investigations and applications, we refer to the survey paper [13] and the references therein. But at a second glance, we notice that the splitting function F satisfying (1.4) can be chosen quite freely. A special choice of F can make F_2(x, x) a diagonal or block-diagonal matrix that is invertible on R^n. Therefore, the iterative method (1.8) can be processed simultaneously and stably on a multiprocessor computer.

The organization of this paper is as follows: Section 2 provides the main result and its proof. Section 3 presents two numerical examples that validate our theoretical analysis.

2. Convergence analysis

For the moment, we give an affine covariant Lipschitz condition which is sufficient to guarantee the convergence of the method (1.8); other conditions may be investigated in future work.

Lemma 2.1. Let F : D \times D \to R^n be a continuous mapping, with D \subseteq R^n open and convex, satisfying the consistency condition (1.4). Suppose that F(x, y) is Fréchet differentiable with respect to the second variable y and that the matrix F_2(x, y) is invertible for all x, y \in D. Assume that

    \|F_2^{-1}(x, x)(F(y, y) - F(x, y))\| \le \alpha \|x - y\|,  x, y \in D,    (2.1)

    \|F_2^{-1}(x, x)(F_2(x, x) - F_2(x, y))\| \le \beta \|x - y\|,  x, y \in D,    (2.2)

    \|x^0 - x^*\| = \epsilon_0,  \rho = \alpha + (1/2)\beta\epsilon_0 < 1,  B(x^*, r) \subseteq D,  r = \|x^0 - x^*\|,    (2.3)

where B(x^*, r) is the ball around x^* with radius r. Then the sequence {x^k} obtained from the method (1.8) is well defined, remains in the open ball B(x^*, r), and converges to x^* with F(x^*, x^*) = 0 (i.e., f(x^*) = 0).

Proof. Set e_k = x^k - x^* and let t \in [0, 1]. Then by (1.8) and condition (2.1) we have

    \|e_k + t\Delta x^k\| = \|e_k + t F_2^{-1}(x^k, x^k)(f(x^*) - f(x^k))\|
      = \|(1-t)e_k + t F_2^{-1}(x^k, x^k)(f(x^*) - f(x^k) + F_2(x^k, x^k)e_k)\|
      \le (1-t)\|e_k\| + t\|F_2^{-1}(x^k, x^k)(F(x^*, x^*) - F(x^k, x^*))\|
          + t\|F_2^{-1}(x^k, x^k)(F(x^k, x^*) - F(x^k, x^k) + F_2(x^k, x^k)e_k)\|
      \le t\|F_2^{-1}(x^k, x^k) \int_0^1 (F_2(x^k, x^k) - F_2(x^k, x^* + s e_k)) e_k \, ds\| + (1-t)\|e_k\| + t\alpha\|e_k\|.

Applying condition (2.2), we arrive at

    \|e_k + t\Delta x^k\| \le (1-t)\|e_k\| + t(\alpha\|e_k\| + (1/2)\beta\|e_k\|^2).    (2.4)

Setting t = 1, it follows that

    \|e_{k+1}\| \le \alpha\|e_k\| + (1/2)\beta\|e_k\|^2.    (2.5)

Setting \epsilon_k = \|e_k\|, we rewrite (2.5) as

    \epsilon_{k+1} \le \epsilon_k(\alpha + (1/2)\beta\epsilon_k).    (2.6)

Since \rho = \alpha + (1/2)\beta\epsilon_0 < 1, a simple induction from (2.5) and (2.6) gives

    \epsilon_k < \epsilon_{k-1} < \cdots < \epsilon_0,


and

    \epsilon_{k+1}/\epsilon_k \le \alpha + (1/2)\beta\epsilon_k < \alpha + (1/2)\beta\epsilon_0 = \rho.

Therefore \epsilon_k \le \rho^k \epsilon_0, from which

    \lim_{k \to +\infty} \epsilon_k = 0.

Since \rho < 1 and \epsilon_k \le \epsilon_0 for k \ge 0, by (2.4) we have

    \|x^k + t\Delta x^k - x^*\| \le (1-t)\epsilon_k + t\epsilon_k(\alpha + (1/2)\beta\epsilon_0) \le \epsilon_k \le \epsilon_0 = \|x^0 - x^*\|.

From this, any statement x^k + t\Delta x^k \notin B(x^*, \|x^0 - x^*\|) would lead to a contradiction. □
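As an illustrative check of the proof's geometric bound (with made-up values of \alpha, \beta, \epsilon_0 chosen so that \rho < 1; none of these numbers come from the paper), the worst case of recursion (2.6) indeed stays below the envelope \rho^k \epsilon_0:

```python
alpha, beta, eps0 = 0.6, 0.5, 1.0           # hypothetical values, rho < 1
rho = alpha + 0.5 * beta * eps0             # here rho = 0.85
eps = eps0
for k in range(1, 11):
    eps = eps * (alpha + 0.5 * beta * eps)  # worst case of recursion (2.6)
    assert eps <= rho**k * eps0             # geometric bound from the proof
    print(f"k={k:2d}  eps_k={eps:.3e}  rho^k*eps0={rho**k * eps0:.3e}")
```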

By Lemma 2.1, we obtain the global convergence of the method (1.8) as follows:

Theorem 2.1. If the conditions on the splitting function F in Lemma 2.1 are fulfilled and, moreover, \alpha < 1 in (2.1) and \beta = 0 in (2.2), i.e., the Jacobian matrix F_2(x, y) is independent of the second variable y, then the iterative method (1.8) converges globally.

3. Numerical results

In this section, we give two examples to illustrate the global convergence and the considerable computational efficiency of the method (1.8).

Example 1. Consider the following equations:

    f(x) = \begin{pmatrix} 10^9 \arctan(x_1/10^9) + e^{\sin(x_2)} + 3 \\ x_1 + x_2 - \sin(3x_1) \end{pmatrix} = 0.    (3.1)

Let

    F(x^k, x^{k+1}) = \begin{pmatrix} 10^9 \arctan(x_{1,k}/10^9) + [a(x_{1,k+1} - x_{1,k}) + 1] e^{\sin(x_{2,k})} + 3 \\ x_{1,k} + b x_{2,k+1} + (1 - b) x_{2,k} - \sin(3 x_{1,k}) \end{pmatrix},    (3.2)

and it is clear that F(x, x) = f(x), where a, b \in R and a, b \ne 0. A routine calculation yields

    F_2(x^k, x) = \begin{pmatrix} a e^{\sin(x_{2,k})} & 0 \\ 0 & b \end{pmatrix},    (3.3)

and thus the Lipschitz constant \beta = 0 in this example. It is clear from (3.3) that the matrix F_2(x, x) is invertible for any x \in R^2 since a, b \ne 0, whereas the matrix f'(x) does not have this property. By (3.2) and (3.3), we get

    \|F_2^{-1}(x^k, x^k)(F(x^k, x^{k+1}) - F(x^{k+1}, x^{k+1}))\|_\infty
      = \left\| \begin{pmatrix} \frac{10^9[\arctan(x_{1,k+1}/10^9) - \arctan(x_{1,k}/10^9)] + e^{\sin(x_{2,k+1})} - e^{\sin(x_{2,k})} - a(x_{1,k+1} - x_{1,k}) e^{\sin(x_{2,k})}}{a e^{\sin(x_{2,k})}} \\ \frac{x_{1,k+1} - x_{1,k} + (1 - b)(x_{2,k+1} - x_{2,k}) + \sin(3x_{1,k}) - \sin(3x_{1,k+1})}{b} \end{pmatrix} \right\|_\infty.


By Lagrange's mean value theorem it follows that

    10^9[\arctan(x_{1,k+1}/10^9) - \arctan(x_{1,k}/10^9)] = c_1(x_{1,k}, x_{1,k+1})(x_{1,k+1} - x_{1,k}),
    e^{\sin(x_{2,k+1})} - e^{\sin(x_{2,k})} = e^{c_2} \cos(c_3)(x_{2,k+1} - x_{2,k}),
    \sin(3x_{1,k}) - \sin(3x_{1,k+1}) = -3\cos(c_4)(x_{1,k+1} - x_{1,k}),

where 0 < c_1(x_{1,k}, x_{1,k+1}) \le 1 and -1 \le c_2 \le 1. Hence, we have

    \|F_2^{-1}(x^k, x^k)(F(x^k, x^{k+1}) - F(x^{k+1}, x^{k+1}))\|_\infty
      = \left\| \begin{pmatrix} \frac{[c_1(x_{1,k}, x_{1,k+1}) - a e^{\sin(x_{2,k})}](x_{1,k+1} - x_{1,k}) + e^{c_2}\cos(c_3)(x_{2,k+1} - x_{2,k})}{a e^{\sin(x_{2,k})}} \\ \frac{(1 - 3\cos(c_4))(x_{1,k+1} - x_{1,k}) + (1 - b)(x_{2,k+1} - x_{2,k})}{b} \end{pmatrix} \right\|_\infty
      \le \alpha(a, b) \|x^{k+1} - x^k\|_\infty.

By Theorem 2.1 we know that the sequence {x^k} is contained in the ball B(x^*, r), r = \|x^0 - x^*\|_\infty. Therefore |x_{1,k}| < \infty, which gives that there exists c > 0 such that

    c \le c_1(x_{1,k}, x_{1,k+1}) \le 1.

And thus we get

    \alpha(a, b) = \max\left\{ \max\left\{ 1 - \frac{c}{ae}, \frac{e^2}{a} \right\}, \max\left\{ \frac{b-1}{b}, \frac{4}{b} \right\} \right\},    (3.4)

provided a > e^2 and b > 4. We arbitrarily chose a = e^2 + 1, b = 5 and a = e^2 + 5, b = 10 to satisfy \alpha(a, b) < 1. By (3.4) we have \alpha(e^2 + 1, 5) = 1 - c/(e^3 + e) and \alpha(e^2 + 5, 10) = 1 - c/(e^3 + 5e). Hence \alpha(e^2 + 1, 5) < \alpha(e^2 + 5, 10), which implies that the convergence of the method (1.8) with a = e^2 + 1, b = 5 will be faster than in the case a = e^2 + 5, b = 10. This theoretical analysis predicts the actual numerical results very well (see Fig. 1; the average number of iterations needed to obtain \|f(x^k)\|_\infty \le 10^{-15} is about 500 and 800 for a = e^2 + 1, b = 5 and a = e^2 + 5, b = 10, respectively).

We implement the classical Newton's method (1.9) and our new method (1.8) with ten initial approximations x^0, generated randomly by computer. The decay of the residuals \|f(x^k)\|_\infty is shown in Fig. 1.

[Fig. 1. Residual \|f(x^k)\|_\infty versus iteration number k for a = e^2 + 1, b = 5 (left) and a = e^2 + 5, b = 10 (right). Dotted lines: classical Newton's method; solid lines: method (1.8).]
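The experiments above can be reproduced with a sketch like the following (our own minimal implementation, not the authors' code): method (1.8) applied to (3.1) with the splitting (3.2), whose diagonal Jacobian (3.3) makes each update a pair of independent scalar solves.

```python
import numpy as np

def f(x):
    """Test problem (3.1)."""
    return np.array([
        1e9 * np.arctan(x[0] / 1e9) + np.exp(np.sin(x[1])) + 3.0,
        x[0] + x[1] - np.sin(3.0 * x[0]),
    ])

def newton_wr_step(x, a, b):
    """One step of method (1.8) for splitting (3.2):
    F_2(x, x) = diag(a*exp(sin(x_2)), b) by (3.3), so the two
    components decouple and could be updated in parallel."""
    d = np.array([a * np.exp(np.sin(x[1])), b])   # diagonal of F_2(x, x)
    return x - f(x) / d                           # x + dx with F_2 dx = -f(x)

a, b = np.exp(2) + 1.0, 5.0
x = np.random.randn(2) * 10.0       # random initial approximation
for k in range(2000):
    x = newton_wr_step(x, a, b)
    if np.linalg.norm(f(x), np.inf) <= 1e-12:
        print(f"converged in {k + 1} iterations: x = {x}")
        break
```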


To obtain \|f(x^k)\|_\infty \le 10^{-12}, the minimal iteration number k and the corresponding computation time t are listed in Tables 1 and 2 for each initial approximation x^0; in each table, "–" denotes non-convergence. From Fig. 1 and Tables 1 and 2 we see clearly that the new method (1.8) is insensitive to the initial approximation x^0.

Table 1
a = e^2 + 1, b = 5

Classical Newton's method        New method (1.8)
Number: k     Time: t (s)        Number: k     Time: t (s)
–             –                  509           0.2656
–             –                  475           0.2500
–             –                  –             –
–             –                  460           0.2315

Table 2
a = e^2 + 5, b = 10

Classical Newton's method        New method (1.8)
Number: k     Time: t (s)        Number: k     Time: t (s)
–             –                  845           0.4688
–             –                  831           0.4375
–             –                  –             –
–             –                  798           0.4371

Example 2. Consider the following nonlinear equations [1–3]:

    f(x) = x - \phi(x) = 0,    (3.5)

with

    \phi_i(x) = e^{\cos(i \sum_{j=1}^{10} x_j)},  i = 1, ..., 10.

For these nonlinear equations, we chose the following splitting function:

    F(x^k, x^{k+1}) = a x^{k+1} + (1 - a) x^k - \psi(x^k, x^{k+1}),    (3.6)

with

    \psi_i(x^k, x^{k+1}) = [b(x_{i,k+1} - x_{i,k}) + 1] e^{\cos(i \sum_{j=1}^{10} x_{j,k})},  i = 1, ..., 10,    (3.7)

and it is clear that F(x, x) = f(x) holds for any x \in R^{10}. A routine calculation yields

    F_2(x^k, x) = \begin{pmatrix} a - b e^{\cos(\sum_{j=1}^{10} x_{j,k})} & & \\ & \ddots & \\ & & a - b e^{\cos(10 \sum_{j=1}^{10} x_{j,k})} \end{pmatrix},

a diagonal matrix with entries a - b e^{\cos(i \sum_{j=1}^{10} x_{j,k})}, i = 1, ..., 10. From this we know that the matrix F_2(x, x) is invertible for properly chosen parameters a, b, and the method (1.8) converges globally provided \alpha(a, b) < 1.
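As with Example 1, here is a minimal sketch (ours, not the authors') of method (1.8) for the splitting (3.6) and (3.7); all ten components share the scalar \sum_j x_{j,k} and update independently.

```python
import numpy as np

n = 10
i_idx = np.arange(1, n + 1)                  # i = 1, ..., 10

def f(x):
    """Test problem (3.5): f_i(x) = x_i - exp(cos(i * sum(x)))."""
    return x - np.exp(np.cos(i_idx * np.sum(x)))

def newton_wr_step(x, a, b):
    """One step of method (1.8) for splitting (3.6)-(3.7):
    F_2(x, x) = diag(a - b*exp(cos(i * sum(x)))), a decoupled update."""
    d = a - b * np.exp(np.cos(i_idx * np.sum(x)))   # diagonal of F_2(x, x)
    return x - f(x) / d

a, b = 7.5, 0.2
x = np.random.randn(n)                       # random initial approximation
for k in range(2000):
    x = newton_wr_step(x, a, b)
    if np.linalg.norm(f(x), np.inf) <= 1e-12:
        print(f"converged in {k + 1} iterations")
        break
```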

[Fig. 2. Residual \|f(x^k)\|_\infty versus iteration number k for a = 7.5, b = 0.2 (left) and a = 8, b = 0.25 (right). Dotted lines: classical Newton's method; solid lines: method (1.8).]

However, it is difficult to obtain an instructive expression for \alpha(a, b) under the splitting (3.6) and (3.7). For the moment, we tentatively chose a = 7.5, b = 0.2 and a = 8, b = 0.25, and implemented the method (1.8) and the classical Newton's method (1.9) with ten randomly chosen initial approximations x^0. The decay of the residuals \|f(x^k)\|_\infty is shown in Fig. 2. To obtain \|f(x^k)\|_\infty \le 10^{-12}, the minimal iteration number k and the corresponding computation time t are listed in Tables 3 and 4 for each initial approximation x^0.

It is clear that, for the choices a = 7.5, b = 0.2 and a = 8, b = 0.25, the convergence of method (1.8) is insensitive to the initial approximation x^0. From the third and fourth rows of Table 3, we see that even though the iteration count of the method (1.8) is larger than that of the classical Newton's method (1.9), the method (1.8) requires much less computational time. Moreover, our computations show that the average computational time per iteration of the method (1.9) is about five times that of the method (1.8).

Table 3
a = 7.5, b = 0.2

Classical Newton's method        New method (1.8)
Number: k     Time: t (s)        Number: k     Time: t (s)
323           1.4688             417           0.2656
134           0.4969             343           0.2500
–             –                  425           0.2833
–             –                  –             –
–             –                  342           0.2344

Table 4
a = 8, b = 0.25

Classical Newton's method        New method (1.8)
Number: k     Time: t (s)        Number: k     Time: t (s)
–             –                  373           0.2556
–             –                  363           0.2500
–             –                  370           0.2513
–             –                  –             –
–             –                  376           0.2556


4. Conclusions

We have suggested a new notion for solving algebraic nonlinear equations, based on the waveform relaxation method. The key idea is to choose a splitting function F(x, y) which satisfies F(x, x) = f(x) for any x \in R^n. Coupled with the classical Newton's method, this notion leads to the algorithms (1.6) and (1.8). With a suitable choice of F, the method (1.8) has the advantages of global convergence, low storage, and parallel computation.

Acknowledgements

The authors would like to thank Prof. Shi Baochang and Dr. Lu Zhihong from the Department of Mathematics of Huazhong University of Science and Technology. This paper was much improved thanks to their many comments and suggestions.

References

[1] E. Allgower, K. Georg, Simplicial and continuation methods for approximating fixed points and solutions to systems of equations, SIAM Rev. 22 (1980) 1–58.
[2] P. Deuflhard, Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms, Springer, Berlin, 2004, pp. 236–237.
[3] K. Georg, On tracing an implicitly defined curve by quasi-Newton steps and calculating bifurcation by local perturbations, SIAM J. Sci. Statist. Comput. 2 (1981) 35–50.
[4] K.J. in 't Hout, On the convergence of waveform relaxation methods for stiff nonlinear ordinary differential equations, Appl. Numer. Math. 18 (1995) 175–190.
[5] E. Lelarasmee, A.E. Ruehli, A.L. Sangiovanni-Vincentelli, The waveform relaxation method for time-domain analysis of large scale integrated circuits, IEEE Trans. CAD IC Syst. 1 (1982) 131–145.
[6] U. Miekkala, O. Nevanlinna, Convergence of dynamic iteration methods for initial value problems, SIAM J. Sci. Statist. Comput. 8 (1987) 459–482.
[7] U. Miekkala, O. Nevanlinna, Sets of convergence and stability regions, BIT 27 (1987) 557–584.
[8] U. Miekkala, Dynamic iteration methods applied to linear DAE systems, J. Comput. Appl. Math. 25 (1989) 131–151.
[9] O. Nevanlinna, Remarks on Picard–Lindelöf iteration, Part I, BIT 29 (1989) 328–346.
[10] O. Nevanlinna, Remarks on Picard–Lindelöf iteration, Part II, BIT 29 (1989) 535–562.
[11] O. Nevanlinna, Linear acceleration of Picard–Lindelöf iteration, Numer. Math. 57 (1990) 147–156.
[12] A.M. Ostrowski, Solution of Equations in Euclidean and Banach Spaces, third ed., Academic Press, New York, 1973.
[13] B.T. Polyak, Newton's method and its use in optimization, Eur. J. Oper. Res. 181 (2007) 1086–1096.