The extrapolation of Numerov's scheme for nonlinear two-point boundary value problems

Yuan-Ming Wang a,b

a Department of Mathematics, East China Normal University, Shanghai 200062, People's Republic of China
b Division of Computational Science, E-Institute of Shanghai Universities, Shanghai Normal University, Shanghai 200234, People's Republic of China

Applied Numerical Mathematics 57 (2007) 253–269
www.elsevier.com/locate/apnum
Available online 18 April 2006

Abstract

This paper is concerned with the extrapolation algorithm of Numerov's scheme for semilinear and strongly nonlinear two-point boundary value problems. The asymptotic error expansion of the solution of Numerov's scheme is obtained. Based on the asymptotic error expansion, Richardson's extrapolation is constructed, and so the accuracy of the numerical solution is greatly increased. Numerical results are presented to demonstrate the efficiency of the extrapolation algorithm.
© 2006 IMACS. Published by Elsevier B.V. All rights reserved.

Keywords: Nonlinear two-point boundary value problem; Numerov's method; Asymptotic error expansion; Richardson's extrapolation

1. Introduction

In studying problems arising in electromagnetism, biology, astronomy, boundary layers and other topics, we often meet the following semilinear two-point boundary value problem (see, e.g., [3,5,22,19]):

    \begin{cases} -\dfrac{d^2 u}{dx^2} + f(x, u(x)) = 0, & 0 < x < 1,\\ u(0) = u_0, \quad u(1) = u_1, \end{cases}    (1.1)

where u_0, u_1 are two constants, and the function f(\cdot, u) is, in general, nonlinear in u. As we know, it is difficult to give the analytical solution of problem (1.1), even if the function f(\cdot, u) is linear in u. Therefore, numerical methods are of considerable practical interest. There are many numerical methods for problem (1.1) (see, e.g., [19,15–17,11,13,31]). One of the best known is due to Numerov (cf. [3,16,20]). Let N be a positive integer, and construct the net

    \omega_h = \{x_i : x_i = ih,\ i = 0, 1, \ldots, N\}, \qquad h = 1/N.    (1.2)

This work was supported in part by the National Natural Science Foundation of China No. 10571059, E-Institutes of Shanghai Municipal Education Commission No. E03004, Shanghai Priority Academic Discipline, and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry. E-mail address: [email protected] (Y.-M. Wang).

0168-9274/$30.00 © 2006 IMACS. Published by Elsevier B.V. All rights reserved. doi:10.1016/j.apnum.2006.02.009

Then Numerov's scheme for (1.1) reads

    \begin{cases} -u_h(x_{i-1}) + 2u_h(x_i) - u_h(x_{i+1}) + \dfrac{h^2}{12}\big(f(x_{i-1}, u_h(x_{i-1})) + 10 f(x_i, u_h(x_i)) + f(x_{i+1}, u_h(x_{i+1}))\big) = 0, & 1 \le i \le N-1,\\ u_h(0) = u_0, \quad u_h(1) = u_1. \end{cases}    (1.3)

Because Numerov's method possesses fourth-order accuracy, it has attracted considerable attention in recent years and has been extensively used in practical computations (cf. [3,13,2,10,30,26,27,29]). The survey paper [4] summarizes many important results on recent developments of Numerov's method.

Suppose that we are required to find the steady temperature distribution in a rod. If the ends of the rod are kept at given temperatures and the thermal conductivity depends on the temperature, we solve the following strongly nonlinear two-point boundary value problem:

    \begin{cases} -\dfrac{d}{dx}\Big(k(u)\dfrac{du}{dx}\Big) + f(x, u(x)) = 0, & 0 < x < 1,\\ u(0) = u_0, \quad u(1) = u_1, \end{cases}    (1.4)

where the function k(u), which represents the thermal conductivity, is assumed to satisfy the basic hypothesis (H) in Section 3. In general, k(u) is nonlinear in u. Such a problem also arises in the theories of diffusion, chemical kinetics, etc. (see, e.g., [1,6–8,24,25]). In a recent article [28], the author showed that, by a proper transformation, Numerov's method is also valid for the above strongly nonlinear problem.

The Richardson extrapolation technique is an efficient approach to increasing the accuracy of a numerical solution. Its success relies on the existence of an asymptotic error expansion of the numerical solution. The monograph [18] presents a systematic treatment of the application of the extrapolation technique to classical finite difference schemes. The extrapolation technique for Numerov's method has been used for the solution of Sturm–Liouville problems (see [23]). For the general nonlinear Numerov scheme (1.3), however, no extrapolation result seems to be available in the literature. The aim of this paper, therefore, is to establish an asymptotic error expansion of the solution of Numerov's scheme (1.3) for the nonlinear problems (1.1) and (1.4), and to construct Richardson's extrapolation by using this asymptotic error expansion.
The outline of the paper is as follows. In Section 2, we establish an asymptotic error expansion for the solution of (1.3) and construct Richardson's extrapolation algorithm for (1.1). Section 3 is devoted to the strongly nonlinear problem (1.4); it is shown that Richardson's extrapolation algorithm is also valid for (1.4). In Section 4, we give some numerical results which demonstrate the efficiency of the extrapolation algorithm.

2. Asymptotic error expansion and Richardson's extrapolation for (1.1)

In this section we establish an asymptotic error expansion for the solution of (1.3) and construct Richardson's extrapolation algorithm for the numerical solution of (1.1).

2.1. Asymptotic error expansion

To establish an asymptotic error expansion for the solution of (1.3), we first introduce some lemmas. Let J = (J_{i,j}) and B = (B_{i,j}) denote the symmetric tridiagonal matrices of order N-1 with the elements

    J_{i,i} = 2, \qquad J_{i,i-1} = J_{i,i+1} = -1, \qquad 1 \le i \le N-1,
    B_{i,i} = \frac{5}{6}, \qquad B_{i,i-1} = B_{i,i+1} = \frac{1}{12}, \qquad 1 \le i \le N-1.

It can be checked that J^{-1} = (J^{-1}_{i,j}) exists and

    J^{-1}_{i,j} = \begin{cases} (N-j)i/N, & i \le j,\\ (N-i)j/N, & i > j. \end{cases}
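The closed form of J^{-1} is easy to check numerically; the following short sketch (an illustration of ours, not from the paper) compares it with a directly computed inverse for a small N.

```python
import numpy as np

N = 9
# second-difference matrix J of order N - 1
J = 2.0 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)
Jinv = np.linalg.inv(J)
# closed form stated above: (N-j)i/N for i <= j, (N-i)j/N for i > j
closed = np.array([[(N - j) * i / N if i <= j else (N - i) * j / N
                    for j in range(1, N)] for i in range(1, N)])
dev = np.max(np.abs(Jinv - closed))            # vanishes up to round-off
```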

For two constants \bar M and M satisfying \bar M \ge M > -\pi^2, we define

    h(M, \bar M) = \begin{cases} \sqrt{12/\bar M}, & M > -8,\ \bar M > 0,\\ 1, & M > -8,\ \bar M \le 0,\\ \min\Big\{\sqrt{12/\bar M},\ \sqrt{\frac{12}{\pi^2}\big(1 + \frac{M}{\pi^2}\big)}\Big\}, & M \le -8,\ \bar M > 0,\\ \sqrt{\frac{12}{\pi^2}\big(1 + \frac{M}{\pi^2}\big)}, & M \le -8,\ \bar M \le 0. \end{cases}    (2.1)

Then we have the following results.

Lemma 2.1. (See Lemmas 2.2 and 2.3 of [2].) The following inequality holds:

    \sum_{j=1}^{N-1} J^{-1}_{i,j} \le \min\Big\{\frac{N^2}{2\pi}\sin\frac{i\pi}{N},\ \frac{N^2}{8}\Big\}, \qquad i = 1, 2, \ldots, N-1.

Lemma 2.2. (See Lemma 2.4 of [2].) The following equality holds:

    \sum_{j=1}^{N-1} J^{-1}_{i,j}\Big(\sin\frac{(j-1)\pi}{N} + 10\sin\frac{j\pi}{N} + \sin\frac{(j+1)\pi}{N}\Big) = \frac{5 + \cos\frac{\pi}{N}}{2\sin^2\frac{\pi}{2N}}\,\sin\frac{i\pi}{N}, \qquad i = 1, 2, \ldots, N-1.
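The identity of Lemma 2.2 reflects the fact that the vector (\sin(j\pi/N)) is an eigenvector of J with eigenvalue 4\sin^2(\pi/2N); it can be checked numerically, as in the following sketch of ours:

```python
import numpy as np

N = 12
J = 2.0 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)
s = np.sin(np.arange(N + 1) * np.pi / N)       # s[j] = sin(j*pi/N)
combo = s[0:N-1] + 10.0 * s[1:N] + s[2:N+1]    # the bracket, j = 1, ..., N-1
lhs = np.linalg.solve(J, combo)                # sum_j J^{-1}_{ij} * combo_j
rhs = (5.0 + np.cos(np.pi / N)) / (2.0 * np.sin(np.pi / (2 * N))**2) * s[1:N]
dev = np.max(np.abs(lhs - rhs))
```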

Lemma 2.3. (See Lemma 3.1 of [30].) Let M = diag(M_1, \ldots, M_{N-1}) be a diagonal matrix and let M and \bar M be two constants such that

    -\pi^2 < M \le M_i \le \bar M, \qquad i = 1, 2, \ldots, N-1.    (2.2)

Then (J + h^2 BM)^{-1} exists and is nonnegative provided that h < h(M, \bar M).

Lemma 2.4. Let M, \bar M and M_i (i = 0, 1, \ldots, N) be given constants such that (2.2) holds. Assume that z_i (i = 0, 1, \ldots, N) satisfy

    \begin{cases} -z_{i-1} + 2z_i - z_{i+1} + \frac{h^2}{12}(M_{i-1}z_{i-1} + 10M_i z_i + M_{i+1}z_{i+1}) = g_i, & i = 1, 2, \ldots, N-1,\\ z_0 = z_N = 0. \end{cases}    (2.3)

Then when h < h(M, \bar M),

    \max_i |z_i| \le \begin{cases} (\max_i |g_i|/8)\, h^{-2}, & M \ge 0,\\ (\max_i |g_i|/(8 + M))\, h^{-2}, & -8 < M < 0,\\ (\max_i |g_i|/(2\pi(1-\theta)))\, h^{-2}, & -\pi^2 < M \le -8,\ h \le H, \end{cases}    (2.4)

where H < h(M, \bar M) and

    \theta = -\frac{M}{\pi^2(1 - \pi^2 H^2/12)} < 1.

Proof. Define M = diag(M_1, \ldots, M_{N-1}) and G = (g_1, \ldots, g_{N-1})^T. Then (2.3) may be expressed in the form

    (J + h^2 BM) Z = G,    (2.5)

where Z = (z_1, \ldots, z_{N-1})^T and the matrices J and B are the same as before. Notice by Lemma 2.3 that the inverse (J + h^2 BM)^{-1} exists and is nonnegative. For proving (2.4) we distinguish three cases.

(i) M \ge 0. In this case, J + h^2 BM \ge J. By the nonnegativity of J^{-1} and (J + h^2 BM)^{-1}, we have 0 \le (J + h^2 BM)^{-1} \le J^{-1} (see [21]). Clearly \|B\|_\infty = 1, and by Lemma 2.1,

    \|J^{-1}\|_\infty = \max_{1 \le i \le N-1} \sum_{j=1}^{N-1} J^{-1}_{i,j} \le \frac{N^2}{8} = \frac{h^{-2}}{8}.

Thus \|(J + h^2 BM)^{-1}\|_\infty \le \|J^{-1}\|_\infty \le h^{-2}/8, and by (2.5),

    \|Z\|_\infty \le \frac{1}{8}\|G\|_\infty h^{-2}.

The desired result (2.4) follows.

(ii) -8 < M < 0. For this case, we define

    M^+ = diag(M_1^+, \ldots, M_{N-1}^+), \qquad M_i^+ = \max\{M_i, 0\}, \qquad M^- = M - M^+.

Let \tilde J = J + h^2 BM^+. Then by Lemma 2.3, \tilde J^{-1} exists and is nonnegative. Since \tilde J \ge J, we have \|\tilde J^{-1}\|_\infty \le \|J^{-1}\|_\infty \le N^2/8, which implies that \|h^2 \tilde J^{-1} BM^-\|_\infty \le -M/8 < 1. Hence

    \big\|\big(I + h^2 \tilde J^{-1} BM^-\big)^{-1}\big\|_\infty \le \frac{1}{1 - \|h^2 \tilde J^{-1} BM^-\|_\infty} \le \frac{8}{8 + M}.

Finally we have

    \big\|\big(J + h^2 BM\big)^{-1}\big\|_\infty = \big\|\big(I + h^2 \tilde J^{-1} BM^-\big)^{-1} \tilde J^{-1}\big\|_\infty \le \frac{N^2}{8 + M} = \frac{h^{-2}}{8 + M},

from which and (2.5) we arrive at the conclusion (2.4).

(iii) -\pi^2 < M \le -8 and h \le H. In this case, we write (2.5) as \tilde J Z + h^2 BM^- Z = G, where \tilde J and M^- are the same as before. Since 0 \le \tilde J^{-1} \le J^{-1}, we have from the above relation that for i = 1, 2, \ldots, N-1,

    |z_i| \le \sum_{j=1}^{N-1} J^{-1}_{i,j} |g_j| - \frac{M h^2}{12} \sum_{j=1}^{N-1} J^{-1}_{i,j}\big(|z_{j-1}| + 10|z_j| + |z_{j+1}|\big).

Since |z_j| \le \big(\max_m |z_m|/\sin\frac{m\pi}{N}\big)\sin\frac{j\pi}{N}, this implies that

    |z_i| \le \|G\|_\infty \sum_{j=1}^{N-1} J^{-1}_{i,j} - \frac{M h^2}{12} \max_m \frac{|z_m|}{\sin\frac{m\pi}{N}} \sum_{j=1}^{N-1} J^{-1}_{i,j}\Big(\sin\frac{(j-1)\pi}{N} + 10\sin\frac{j\pi}{N} + \sin\frac{(j+1)\pi}{N}\Big).

Dividing by \sin\frac{i\pi}{N} and applying Lemmas 2.1 and 2.2,

    \frac{|z_i|}{\sin\frac{i\pi}{N}} \le \frac{\|G\|_\infty N^2}{2\pi} - M h^2\,\frac{5 + \cos\frac{\pi}{N}}{24 \sin^2\frac{\pi}{2N}}\, \max_m \frac{|z_m|}{\sin\frac{m\pi}{N}}.

Since h \le H and \sin^2 x \ge x^2(1 - x^2/3) for x \ge 0, we have

    \max_i \frac{|z_i|}{\sin\frac{i\pi}{N}} \le \frac{\|G\|_\infty N^2}{2\pi} - \frac{M}{\pi^2(1 - \pi^2 h^2/12)} \max_i \frac{|z_i|}{\sin\frac{i\pi}{N}} \le \frac{\|G\|_\infty N^2}{2\pi} + \theta \max_i \frac{|z_i|}{\sin\frac{i\pi}{N}}.

Therefore,

    \max_i \frac{|z_i|}{\sin\frac{i\pi}{N}} \le \frac{\|G\|_\infty}{2\pi(1-\theta)}\, h^{-2},

from which, since \sin\frac{i\pi}{N} \le 1, we obtain (2.4). \square

For simplicity, we let

    u^{(k)}(x) = \frac{d^k u}{dx^k}(x), \qquad f_u(x, u) = \frac{\partial f}{\partial u}(x, u), \qquad f_u^{(k)}(x, u) = \frac{\partial^k f}{\partial u^k}(x, u), \qquad k \ge 1.

Also, we denote by O(h^k) a function satisfying |O(h^k)| \le c h^k, where c is a positive constant independent of h and may not be the same at different occurrences. Now we give the main result of this subsection.

Theorem 2.1. Let u(x) and u_h(x_i) be the solutions of (1.1) and (1.3), and assume that there exists an open interval \Lambda in R such that u(x) \in \Lambda for all x \in [0,1] and u_h(x_i) \in \Lambda for each i. Let us also assume that u(x) \in C^{r+2}[0,1], f(x,v) \in C^r((0,1) \times \Lambda), where r \ge 4, and

    -\pi^2 < M \le f_u(x, v) \le \bar M, \qquad x \in (0,1),\ v \in \Lambda.    (2.6)

Then there exist functions u_j(x) which belong to C^{r+2-2j}[0,1] and do not depend on h such that for sufficiently small h,

    u_h(x_i) = u(x_i) + \sum_{j=2}^{l} u_j(x_i) h^{2j} + O(h^r), \qquad x_i \in \omega_h,    (2.7)

where l = [(r-1)/2].

Proof. Let u_0(x) = u(x) and u_1(x) \equiv 0 on [0,1]. Define l-1 functions u_j(x) (j = 2, 3, \ldots, l) on [0,1] from

    \begin{cases} -u_j^{(2)}(x) + f_u(x, u(x))\, u_j(x)\\ \quad = -\frac{1}{6(2j)!} f^{(2j)}(x, u(x)) + \sum_{k=1}^{j} \frac{2}{(2k+2)!} u_{j-k}^{(2k+2)}(x)\\ \qquad - \sum_{p=2}^{j} \frac{1}{p!} f_u^{(p)}(x, u(x)) \sum_{t_1+\cdots+t_p=j,\ t_k \ge 1} \prod_{k=1}^{p} u_{t_k}(x) - \sum_{k=0}^{j-2} \frac{1}{6(2k+2)!} g_{j-k-1}^{(2k+2)}(x), \qquad 0 < x < 1,\\ u_j(0) = u_j(1) = 0, \end{cases}    (2.8)

where f^{(2j)}(x, u(x)) denotes the (2j)th total derivative of f(x, u(x)) with respect to x, and

    g_j(x) = \sum_{p=1}^{j} \frac{1}{p!} f_u^{(p)}(x, u(x)) \sum_{t_1+\cdots+t_p=j,\ t_k \ge 1} \prod_{k=1}^{p} u_{t_k}(x), \qquad 0 < x < 1.

We first show that all l-1 functions u_j(x) are well-defined. Eq. (2.8) for u_2(x) has the simple form:

    -u_2^{(2)}(x) + f_u(x, u(x))\, u_2(x) = -\frac{1}{144} f^{(4)}(x, u(x)) + \frac{1}{360} u^{(6)}(x).    (2.9)

The right-hand side of Eq. (2.9) contains only the function u(x) and is r-4 times continuously differentiable. Since the equation is linear and its coefficients are r-1 times continuously differentiable, there is a unique solution in C^{r-2}[0,1] to (2.9) under the condition (2.6) (see [17,10]). This proves that u_2(x) is well-defined and belongs to C^{r-2}[0,1]. Now we assume that the functions u_k(x) (k = 2, \ldots, j-1) with u_k(x) \in C^{r+2-2k}[0,1] are well-defined by (2.8). We see from (2.8) that the right-hand side of the equation does not contain functions u_k(x) whose index k is more than j-1, and is r-2j times continuously differentiable. Moreover, the left-hand side of the equation is linear with respect to u_j(x) and its coefficients are r-1 times continuously differentiable. Therefore there is a unique solution in C^{r+2-2j}[0,1] to (2.8) under the condition (2.6) (see [17,10]). This shows that u_j(x) is well-defined and belongs to C^{r+2-2j}[0,1]. Thus, all l-1 functions u_j(x) are well-defined and belong to C^{r+2-2j}[0,1] (j = 2, \ldots, l).

Define

    v(x) = \sum_{j=0}^{l} u_j(x) h^{2j}, \qquad 0 \le x \le 1.

We note that for sufficiently small h, v(x_i) \in \Lambda because u(x_i) \in \Lambda. By Taylor expansion,

    f(x_i, v(x_i)) = f(x_i, u(x_i)) + \sum_{p=1}^{l} \frac{1}{p!} f_u^{(p)}(x_i, u(x_i)) \Big(\sum_{j=1}^{l} u_j(x_i) h^{2j}\Big)^p + O(h^r)
                  = f(x_i, u(x_i)) + \sum_{p=1}^{l} \frac{1}{p!} f_u^{(p)}(x_i, u(x_i)) \sum_{j=p}^{l} h^{2j} \sum_{t_1+\cdots+t_p=j,\ t_k \ge 1} \prod_{k=1}^{p} u_{t_k}(x_i) + O(h^r)
                  = f(x_i, u(x_i)) + \sum_{j=1}^{l} h^{2j} g_j(x_i) + O(h^r).

Then we have

    \frac{h^2}{12}\big(f(x_{i-1}, v(x_{i-1})) + 10 f(x_i, v(x_i)) + f(x_{i+1}, v(x_{i+1}))\big)
    = \frac{h^2}{12}\big(f(x_{i-1}, u(x_{i-1})) + 10 f(x_i, u(x_i)) + f(x_{i+1}, u(x_{i+1}))\big)
      + \frac{h^2}{12} \sum_{j=1}^{l} h^{2j}\big(g_j(x_{i-1}) + 10 g_j(x_i) + g_j(x_{i+1})\big) + O(h^{r+2}).    (2.10)

Again by Taylor expansion,

    \frac{h^2}{12}\big(f(x_{i-1}, u(x_{i-1})) + 10 f(x_i, u(x_i)) + f(x_{i+1}, u(x_{i+1}))\big)
    = f(x_i, u(x_i)) h^2 + \frac{h^2}{12}\big(f(x_{i-1}, u(x_{i-1})) - 2 f(x_i, u(x_i)) + f(x_{i+1}, u(x_{i+1}))\big)
    = f(x_i, u(x_i)) h^2 + \frac{1}{6} \sum_{j=0}^{l-1} \frac{h^{2j+4}}{(2j+2)!} f^{(2j+2)}(x_i, u(x_i)) + O(h^{r+2})
    = f(x_i, u(x_i)) h^2 + \sum_{j=1}^{l} \frac{h^{2j+2}}{6(2j)!} f^{(2j)}(x_i, u(x_i)) + O(h^{r+2}).

In the same manner,

    \frac{h^2}{12} \sum_{j=1}^{l} h^{2j}\big(g_j(x_{i-1}) + 10 g_j(x_i) + g_j(x_{i+1})\big)
    = \sum_{j=1}^{l} h^{2j+2} g_j(x_i) + \frac{h^2}{12} \sum_{j=1}^{l} h^{2j}\big(g_j(x_{i-1}) - 2 g_j(x_i) + g_j(x_{i+1})\big)
    = \sum_{j=1}^{l} h^{2j+2} g_j(x_i) + \frac{1}{6} \sum_{j=1}^{l} \sum_{k=0}^{l-j} \frac{h^{2(k+j)+4}}{(2k+2)!}\, g_j^{(2k+2)}(x_i) + O(h^{r+2})
    = \sum_{j=1}^{l} h^{2j+2} g_j(x_i) + \sum_{j=2}^{l} h^{2j+2} \sum_{k=0}^{j-2} \frac{1}{6(2k+2)!}\, g_{j-k-1}^{(2k+2)}(x_i) + O(h^{r+2}).

Substituting the above relations into (2.10), we obtain

    \frac{h^2}{12}\big(f(x_{i-1}, v(x_{i-1})) + 10 f(x_i, v(x_i)) + f(x_{i+1}, v(x_{i+1}))\big)
    = f(x_i, u(x_i)) h^2 + \sum_{j=1}^{l} \frac{h^{2j+2}}{6(2j)!} f^{(2j)}(x_i, u(x_i)) + \sum_{j=1}^{l} h^{2j+2} g_j(x_i)
      + \sum_{j=2}^{l} h^{2j+2} \sum_{k=0}^{j-2} \frac{1}{6(2k+2)!}\, g_{j-k-1}^{(2k+2)}(x_i) + O(h^{r+2}).    (2.11)

Again by Taylor expansion,

    -v(x_{i-1}) + 2v(x_i) - v(x_{i+1}) = -2 \sum_{j=0}^{l} h^{2j} \sum_{k=0}^{l-j} \frac{h^{2k+2}}{(2k+2)!}\, u_j^{(2k+2)}(x_i) + O(h^{r+2})
    = -2 \sum_{j=0}^{l} h^{2j+2} \sum_{k=0}^{j} \frac{1}{(2k+2)!}\, u_{j-k}^{(2k+2)}(x_i) + O(h^{r+2}).

We thus have

    -v(x_{i-1}) + 2v(x_i) - v(x_{i+1}) + \frac{h^2}{12}\big(f(x_{i-1}, v(x_{i-1})) + 10 f(x_i, v(x_i)) + f(x_{i+1}, v(x_{i+1}))\big)
    = \big(-u_0^{(2)}(x_i) + f(x_i, u(x_i))\big) h^2 - u_1^{(2)}(x_i) h^4 - \frac{1}{12}\big(u_0^{(4)}(x_i) - f^{(2)}(x_i, u(x_i))\big) h^4 + g_1(x_i) h^4
      - \sum_{j=2}^{l} h^{2j+2} \Big( \sum_{k=0}^{j} \frac{2}{(2k+2)!}\, u_{j-k}^{(2k+2)}(x_i) - \frac{1}{6(2j)!} f^{(2j)}(x_i, u(x_i)) - g_j(x_i) - \sum_{k=0}^{j-2} \frac{1}{6(2k+2)!}\, g_{j-k-1}^{(2k+2)}(x_i) \Big) + O(h^{r+2}).

Since u_0(x) = u(x), u_1(x) \equiv 0 and u_j(x) (j = 2, \ldots, l) are defined by (2.8), all terms in h^{2j+2} (j = 0, 1, \ldots, l) cancel out. Thus

    \begin{cases} -v(x_{i-1}) + 2v(x_i) - v(x_{i+1}) + \frac{h^2}{12}\big(f(x_{i-1}, v(x_{i-1})) + 10 f(x_i, v(x_i)) + f(x_{i+1}, v(x_{i+1}))\big) = O(h^{r+2}), & 1 \le i \le N-1,\\ v(0) = u_0, \qquad v(1) = u_1. \end{cases}    (2.12)

Now by (1.3) and (2.12), the error e_i = u_h(x_i) - v(x_i) satisfies

    \begin{cases} -e_{i-1} + 2e_i - e_{i+1} + \frac{h^2}{12}(M_{i-1}e_{i-1} + 10 M_i e_i + M_{i+1}e_{i+1}) = O(h^{r+2}), & 1 \le i \le N-1,\\ e_0 = e_N = 0, \end{cases}    (2.13)

where M_i = f_u(x_i, \xi_i) and \xi_i lies between u_h(x_i) and v(x_i). Since u(x_i) \in \Lambda for each i, we have that for sufficiently small h, v(x_i) \in \Lambda. This ensures that \xi_i \in \Lambda, and thus by (2.6) the M_i satisfy condition (2.2). Finally, by Lemma 2.4, \max_i |e_i| = O(h^r), and we arrive at the conclusion (2.7). \square

An immediate consequence of Theorem 2.1 is the convergence of the solution of Numerov's scheme (1.3).

Corollary 2.1. Let u(x) and u_h(x_i) be the solutions of (1.1) and (1.3), and assume that there exists an open interval \Lambda in R such that u(x) \in \Lambda for all x \in [0,1] and u_h(x_i) \in \Lambda for each i. Let us also assume that u(x) \in C^6[0,1], f(x,v) \in C^4((0,1) \times \Lambda) and

    -\pi^2 < M \le f_u(x, v) \le \bar M, \qquad x \in (0,1),\ v \in \Lambda.

Then

    \max_{x_i \in \omega_h} |u_h(x_i) - u(x_i)| \le c_1 h^4,

where c_1 is a positive constant independent of h.

Remark 2.1. Corollary 2.1 shows that when h \to 0, the solution u_h(x_i) of (1.3) converges to the solution u(x_i) of (1.1) with the convergence order O(h^4) in the L^\infty-norm.

2.2. Richardson's extrapolation algorithm

An important application of the asymptotic error expansion (2.7) is the construction of Richardson's extrapolation.

Lemma 2.5. Let \mu_k (k = 1, 2, \ldots, l) be distinct positive constants. Then there exists a unique solution to the system of linear equations

    \begin{cases} \sum_{k=1}^{l} \gamma_k = 1,\\ \sum_{k=1}^{l} \mu_k^{-2j} \gamma_k = 0, & j = 2, 3, \ldots, l, \end{cases}    (2.14)

and this unique solution is given by

    \gamma_k = \mu_k^{2l} \Big(\sum_{s=1}^{l} \mu_s^2\Big)^{-1} \prod_{s=1}^{k-1} \frac{1}{\mu_k^2 - \mu_s^2} \prod_{s=k+1}^{l} \frac{1}{\mu_k^2 - \mu_s^2}, \qquad k = 1, 2, \ldots, l.    (2.15)

Proof. The proof is standard and is omitted. \square

Let \mu_k (k = 1, 2, \ldots, l) be distinct positive integers. For each positive integer N_k = \mu_k N, we construct the net \omega_{h_k} in [0,1] (as in (1.2)) with mesh size h_k = 1/N_k (= h/\mu_k), where h = 1/N. Also let u_{h_k}(x_i) be the solution of (1.3) on the net \omega_{h_k}. Then all the solutions u_{h_k}(x_i) are defined on the net \omega_h with mesh size h. We now construct Richardson's extrapolation algorithm as follows.

Richardson's Extrapolation Algorithm:

Step 1. Compute

    \gamma_k = \mu_k^{2l} \Big(\sum_{s=1}^{l} \mu_s^2\Big)^{-1} \prod_{s=1}^{k-1} \frac{1}{\mu_k^2 - \mu_s^2} \prod_{s=k+1}^{l} \frac{1}{\mu_k^2 - \mu_s^2}, \qquad k = 1, 2, \ldots, l.    (2.16)

Step 2. Compute the extrapolation solution

    U_h(x_i) = \sum_{k=1}^{l} \gamma_k u_{h_k}(x_i), \qquad x_i \in \omega_h.    (2.17)

For the extrapolation solution U_h(x_i) we have the following error estimate.
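Step 1 can be carried out in exact rational arithmetic. The sketch below (the function name gamma_weights is ours) evaluates (2.16) and confirms that the resulting weights satisfy the defining system (2.14).

```python
from fractions import Fraction

def gamma_weights(mu):
    """Weights gamma_k of Step 1, Eq. (2.16), for distinct positive integers mu."""
    l = len(mu)
    S = sum(m * m for m in mu)                 # sum_{s=1}^{l} mu_s^2
    g = []
    for k in range(l):
        val = Fraction(mu[k]**(2 * l), S)
        for s in range(l):
            if s != k:
                val *= Fraction(1, mu[k]**2 - mu[s]**2)
        g.append(val)
    return g

w2 = gamma_weights([1, 2])                     # the pair (-1/15, 16/15)
w3 = gamma_weights([1, 2, 3])                  # the l = 3 weights for mu_k = k
```

For l = 2 and \mu = (1, 2) this gives U_h = -\frac{1}{15} u_h + \frac{16}{15} u_{h/2} on the coarse net, the familiar two-net combination.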

Theorem 2.2. Let the conditions of Theorem 2.1 be satisfied. Then for sufficiently small h,

    \max_{x_i \in \omega_h} |U_h(x_i) - u(x_i)| \le c_2 h^r,    (2.18)

where c_2 is a positive constant independent of h.

Proof. By Theorem 2.1 we have the asymptotic error expansions

    u_{h_k}(x_i) = u(x_i) + \sum_{j=2}^{l} u_j(x_i) h_k^{2j} + O(h_k^r), \qquad x_i \in \omega_{h_k}, \quad k = 1, 2, \ldots, l.

Summing these expansions with the weights \gamma_k, we have

    U_h(x_i) = \sum_{k=1}^{l} \gamma_k u(x_i) + \sum_{j=2}^{l} \Big(\sum_{k=1}^{l} h_k^{2j} \gamma_k\Big) u_j(x_i) + O(h^r), \qquad x_i \in \omega_h.

By Lemma 2.5,

    \sum_{k=1}^{l} \gamma_k = 1, \qquad \sum_{k=1}^{l} \mu_k^{-2j} \gamma_k = 0, \quad j = 2, 3, \ldots, l,

which is equivalent to

    \sum_{k=1}^{l} \gamma_k = 1, \qquad \sum_{k=1}^{l} h_k^{2j} \gamma_k = 0, \quad j = 2, 3, \ldots, l.

By the above relations we obtain

    U_h(x_i) = u(x_i) + O(h^r), \qquad x_i \in \omega_h.

This proves (2.18). \square

The estimate (2.18) shows that the Richardson extrapolation solution U_h(x_i) converges to the solution u(x_i) of (1.1) with the convergence order O(h^r) in the L^\infty-norm, and so it has a higher accuracy than each u_{h_k}(x_i).

Let us consider two methods for choosing the positive integers \mu_k in the above algorithm. In the first method we take \mu_k = k (k = 1, 2, \ldots, l). For these integers the weights \gamma_k defined by (2.16) reduce to

    \gamma_k = \frac{(-1)^{l-k}\, 12\, k^{2l+2}}{(2l^3 + 3l^2 + l)\,(l-k)!\,(l+k)!}, \qquad k = 1, 2, \ldots, l.    (2.19)

These weights \gamma_k for several values of l are given in Table 1. Secondly, we choose \mu_k = 2^{k-1} (k = 1, 2, \ldots, l). In this case the weights \gamma_k in (2.16) take the form

    \gamma_k = \frac{3 \times 2^{(k-1)(k+2)}}{4^l - 1} \prod_{s=1}^{k-1} \frac{1}{4^s - 1} \prod_{s=1}^{l-k} \frac{1}{1 - 4^s}, \qquad k = 1, 2, \ldots, l.    (2.20)

Table 1
The weights \gamma_k with \mu_k = k

l    \gamma_1       \gamma_2       \gamma_3        \gamma_4          \gamma_5
2    -1/15          16/15
3    1/336          -32/105        729/560
4    -1/10800       32/675         -2187/2800      8192/4725
5    1/475200       -256/51975     59049/246400    -262144/155925    1953125/798336

In Table 2, we list the weights \gamma_k for several l.
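Both closed forms can be spot-checked in exact arithmetic against the tabulated values; the helper names in the following sketch are ours, not the paper's.

```python
from fractions import Fraction
from math import factorial

def gamma_consecutive(l, k):
    """Closed form (2.19) for mu_k = k."""
    return Fraction((-1)**(l - k) * 12 * k**(2 * l + 2),
                    (2 * l**3 + 3 * l**2 + l) * factorial(l - k) * factorial(l + k))

def gamma_dyadic(l, k):
    """Closed form (2.20) for mu_k = 2^(k-1)."""
    val = Fraction(3 * 2**((k - 1) * (k + 2)), 4**l - 1)
    for s in range(1, k):
        val *= Fraction(1, 4**s - 1)
    for s in range(1, l - k + 1):
        val *= Fraction(1, 1 - 4**s)
    return val

row1 = [gamma_consecutive(3, k) for k in (1, 2, 3)]   # l = 3 row of Table 1
row2 = [gamma_dyadic(3, k) for k in (1, 2, 3)]        # l = 3 row of Table 2
```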

Table 2
The weights \gamma_k with \mu_k = 2^{k-1}

l    \gamma_1         \gamma_2        \gamma_3        \gamma_4           \gamma_5
2    -1/15            16/15
3    1/945            -16/189         1024/945
4    -1/240975        16/11475        -1024/11475     262144/240975
5    1/246517425      -16/2900205     1024/690525     -262144/2900205    268435456/246517425

Note that for large l, one can compute the extrapolation solution U_h(x_i) in (2.17) by the following modified Neville algorithm, without first computing the weights \gamma_k.

Modified Neville Algorithm:

Step 1. For k = 1, 2, \ldots, l, we let

    T_k^{(0)}(x_i) = u_{h_k}(x_i), \qquad x_i \in \omega_{h_k}.    (2.21)

Step 2. For j = 1, 2, \ldots, l-1 and k = 1, 2, \ldots, l-j, we compute T_k^{(j)}(x_i) on the net \omega_h by the recurrence formula

    T_k^{(j)}(x_i) = \Big[ \Big(\sum_{s=k}^{j+k-1} \frac{\mu_s^2}{\mu_{j+k}^2}\Big) T_k^{(j-1)}(x_i) - \Big(\sum_{s=k+1}^{j+k} \frac{\mu_s^2}{\mu_k^2}\Big) T_{k+1}^{(j-1)}(x_i) \Big] \Big/ \Big[ \sum_{s=k}^{j+k} \mu_s^2 \Big(\frac{1}{\mu_{j+k}^2} - \frac{1}{\mu_k^2}\Big) \Big], \qquad x_i \in \omega_h.    (2.22)

The following theorem shows that T_1^{(l-1)}(x_i) from the above algorithm coincides with the extrapolation solution U_h(x_i) in (2.17).

Theorem 2.3. Let T_k^{(j)}(x_i) be determined by the modified Neville algorithm, and let \gamma_k be defined by (2.16). Then we have

    T_1^{(l-1)}(x_i) = \sum_{k=1}^{l} \gamma_k u_{h_k}(x_i), \qquad x_i \in \omega_h.    (2.23)

Proof. Consider the case l = 2. It follows from (2.16) that

    \gamma_1 = \mu_1^4/(\mu_1^4 - \mu_2^4), \qquad \gamma_2 = -\mu_2^4/(\mu_1^4 - \mu_2^4).

Since by (2.22),

    T_1^{(1)} = \big(\mu_1^4/(\mu_1^4 - \mu_2^4)\big) T_1^{(0)} - \big(\mu_2^4/(\mu_1^4 - \mu_2^4)\big) T_2^{(0)},

the result (2.23) with l = 2 follows. Assume that the statement (2.23) holds for l - 1. Then we have

    T_1^{(l-2)}(x_i) = \sum_{k=1}^{l-1} \alpha_k u_{h_k}(x_i), \qquad T_2^{(l-2)}(x_i) = \sum_{k=1}^{l-1} \beta_k u_{h_{k+1}}(x_i), \qquad x_i \in \omega_h,

where the coefficients \alpha_k and \beta_k are given by

    \alpha_k = \mu_k^{2(l-1)} \Big(\sum_{s=1}^{l-1} \mu_s^2\Big)^{-1} \prod_{s=1}^{k-1} \frac{1}{\mu_k^2 - \mu_s^2} \prod_{s=k+1}^{l-1} \frac{1}{\mu_k^2 - \mu_s^2}, \qquad k = 1, 2, \ldots, l-1,
    \beta_k = \mu_{k+1}^{2(l-1)} \Big(\sum_{s=1}^{l-1} \mu_{s+1}^2\Big)^{-1} \prod_{s=1}^{k-1} \frac{1}{\mu_{k+1}^2 - \mu_{s+1}^2} \prod_{s=k+1}^{l-1} \frac{1}{\mu_{k+1}^2 - \mu_{s+1}^2}, \qquad k = 1, 2, \ldots, l-1.    (2.24)

By (2.22) we have that for x_i \in \omega_h,

    T_1^{(l-1)}(x_i) = \Big[ \Big(\sum_{s=1}^{l-1} \frac{\mu_s^2}{\mu_l^2}\Big) T_1^{(l-2)}(x_i) - \Big(\sum_{s=2}^{l} \frac{\mu_s^2}{\mu_1^2}\Big) T_2^{(l-2)}(x_i) \Big] \Big/ \Big[ \sum_{s=1}^{l} \mu_s^2 \Big(\frac{1}{\mu_l^2} - \frac{1}{\mu_1^2}\Big) \Big]
                   = \sum_{k=1}^{l} \rho_k u_{h_k}(x_i),

where

    \rho_1 = \Big(\sum_{s=1}^{l-1} \frac{\mu_s^2}{\mu_l^2}\Big) \alpha_1 \Big/ \Big[ \sum_{s=1}^{l} \mu_s^2 \Big(\frac{1}{\mu_l^2} - \frac{1}{\mu_1^2}\Big) \Big], \qquad \rho_l = -\Big(\sum_{s=2}^{l} \frac{\mu_s^2}{\mu_1^2}\Big) \beta_{l-1} \Big/ \Big[ \sum_{s=1}^{l} \mu_s^2 \Big(\frac{1}{\mu_l^2} - \frac{1}{\mu_1^2}\Big) \Big],
    \rho_k = \Big[ \Big(\sum_{s=1}^{l-1} \frac{\mu_s^2}{\mu_l^2}\Big) \alpha_k - \Big(\sum_{s=2}^{l} \frac{\mu_s^2}{\mu_1^2}\Big) \beta_{k-1} \Big] \Big/ \Big[ \sum_{s=1}^{l} \mu_s^2 \Big(\frac{1}{\mu_l^2} - \frac{1}{\mu_1^2}\Big) \Big], \qquad k = 2, 3, \ldots, l-1.

It suffices to show \rho_k = \gamma_k for all k = 1, 2, \ldots, l. Using (2.24) we have

    \rho_1 = \frac{1}{\mu_l^2}\Big(\sum_{s=1}^{l-1} \mu_s^2\Big) \cdot \mu_1^{2(l-1)} \Big(\sum_{s=1}^{l-1} \mu_s^2\Big)^{-1} \prod_{s=2}^{l-1} \frac{1}{\mu_1^2 - \mu_s^2} \cdot \frac{\mu_1^2 \mu_l^2}{\big(\sum_{s=1}^{l} \mu_s^2\big)(\mu_1^2 - \mu_l^2)}
           = \mu_1^{2l} \Big(\sum_{s=1}^{l} \mu_s^2\Big)^{-1} \prod_{s=2}^{l} \frac{1}{\mu_1^2 - \mu_s^2} = \gamma_1.

The same reasoning gives \rho_l = \gamma_l. Again by (2.24), for k = 2, 3, \ldots, l-1,

    \rho_k = \frac{\mu_k^{2(l-1)} \mu_1^2 \mu_l^2}{\big(\sum_{s=1}^{l} \mu_s^2\big)(\mu_1^2 - \mu_l^2)} \Big[ \frac{1}{\mu_l^2} \prod_{s=1}^{k-1} \frac{1}{\mu_k^2 - \mu_s^2} \prod_{s=k+1}^{l-1} \frac{1}{\mu_k^2 - \mu_s^2} - \frac{1}{\mu_1^2} \prod_{s=2}^{k-1} \frac{1}{\mu_k^2 - \mu_s^2} \prod_{s=k+1}^{l} \frac{1}{\mu_k^2 - \mu_s^2} \Big]
           = \mu_k^{2l} \Big(\sum_{s=1}^{l} \mu_s^2\Big)^{-1} \prod_{s=1}^{k-1} \frac{1}{\mu_k^2 - \mu_s^2} \prod_{s=k+1}^{l} \frac{1}{\mu_k^2 - \mu_s^2} = \gamma_k.

The theorem is proved. \square
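The recurrence (2.21)–(2.22) is short to implement. The following sketch of ours (0-based indices, exact rational arithmetic) checks the claim of Theorem 2.3 on arbitrary sample data for l = 3, \mu_k = k, using the l = 3 weights from Table 1.

```python
from fractions import Fraction

def neville(values, mu):
    """Modified Neville recurrence (2.21)-(2.22).  Indices are 0-based here,
    so T[k] after the j-th sweep is the paper's T_{k+1}^{(j)}."""
    l = len(mu)
    T = [Fraction(v) for v in values]
    for j in range(1, l):
        Tnew = []
        for k in range(l - j):
            A = sum(Fraction(mu[s]**2, mu[j + k]**2) for s in range(k, j + k))
            B = sum(Fraction(mu[s]**2, mu[k]**2) for s in range(k + 1, j + k + 1))
            Tnew.append((A * T[k] - B * T[k + 1]) / (A - B))
        T = Tnew
    return T[0]

# Theorem 2.3: the recurrence reproduces the weighted sum (2.17)
mu = [1, 2, 3]
gamma = [Fraction(1, 336), Fraction(-32, 105), Fraction(729, 560)]  # Table 1, l = 3
vals = [Fraction(3), Fraction(1), Fraction(4)]                      # arbitrary data
direct = sum(g * v for g, v in zip(gamma, vals))
via_neville = neville(vals, mu)
```

Because the arithmetic is exact, the two results agree identically, not merely to rounding error.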

Remark 2.2. The numbers in Table 2 can also be derived from formula 7.2.12 of [14]. It is possible to obtain the asymptotic error expansion (2.7) by using the theory of B-series for Numerov-type methods (see [12,9]). It is not difficult to show that the asymptotic error expansion (2.7) and the extrapolation algorithm are also valid for Numerov's method for coupled systems of nonlinear two-point boundary value problems (see [29,4,32] for Numerov's method for coupled systems).

3. Asymptotic error expansion and Richardson's extrapolation for (1.4)

In this section we show that the extrapolation algorithm of the previous section is also valid for the strongly nonlinear problem (1.4). Define I to be an interval in R such that u_0, u_1 \in I, and assume that the nonlinear function k(u) in (1.4) satisfies the following basic hypothesis:

(H) The function k(u) is continuously differentiable for all u \in I, and there exist positive constants k_0 and k_1 such that k_0 \le k(u) \le k_1 for all u \in I.

Let \delta \in I. We introduce the transformation T:

    w = T(u) = \int_\delta^u k(s)\, ds, \qquad \forall u \in I.    (3.1)

It is clear from hypothesis (H) that T is a strictly monotone increasing function of u on I, and therefore the inverse T^{-1} exists. Using the inverse transformation u = T^{-1}(w), we transform problem (1.4) into the problem

    \begin{cases} -\dfrac{d^2 w}{dx^2} + g(x, w(x)) = 0, & 0 < x < 1,\\ w(0) = T(u_0), \qquad w(1) = T(u_1), \end{cases}    (3.2)

where g(x, w(x)) = f(x, T^{-1}(w(x))). Clearly, if u is a solution of (1.4) such that u(x) \in I for each x, then w(x) = T(u(x)) is a solution of (3.2), and vice versa. Numerov's scheme for (3.2) is as follows:

    \begin{cases} -w_h(x_{i-1}) + 2w_h(x_i) - w_h(x_{i+1}) + \frac{h^2}{12}\big(g(x_{i-1}, w_h(x_{i-1})) + 10 g(x_i, w_h(x_i)) + g(x_{i+1}, w_h(x_{i+1}))\big) = 0, & 1 \le i \le N-1,\\ w_h(0) = T(u_0), \qquad w_h(1) = T(u_1). \end{cases}    (3.3)

Making use of the transformation T, we have the following result (see [28]):

Lemma 3.1. Let hypothesis (H) hold, and let w_h(x_i) be the solution of (3.3) on \omega_h such that w_h(x_i) \in T(I) for each i. Then there exists a finite difference solution to the problem (1.4) on \omega_h, and this solution is given by u_h(x_i) = T^{-1}(w_h(x_i)). Moreover, the finite difference solution u_h(x_i) has the same accuracy as the finite difference solution w_h(x_i).

Our main result in this section is given in the following theorem.

Theorem 3.1. Let hypothesis (H) hold, and let w(x) and w_h(x_i) be the respective solutions of (3.2) and (3.3) such that w(x) \in T(I), w_h(x_i) \in T(I) for each x and i. Assume that there exists an open interval \Lambda in R such that w(x), w_h(x_i) \in \Lambda for each x and i. Also assume that k(u) \in C^{r+1}(I), w(x) \in C^{r+2}[0,1], T^{-1}(w(x)) \in C^{r+2}[0,1], g(x,v) \in C^r((0,1) \times \Lambda), where r \ge 4, and

    -\pi^2 < M \le g_w(x, v) \le \bar M, \qquad x \in (0,1),\ v \in \Lambda.    (3.4)

Then u(x) = T^{-1}(w(x)) is a solution of (1.4) and u_h(x_i) = T^{-1}(w_h(x_i)) is a finite difference solution of it. Moreover, there exist functions v_j(x) which belong to C^{r+2-2j}[0,1] and do not depend on h such that for sufficiently small h,

    u_h(x_i) = u(x_i) + \sum_{j=2}^{l} v_j(x_i) h^{2j} + O(h^r), \qquad x_i \in \omega_h,    (3.5)

where l = [(r-1)/2].

Proof. Let w_0(x) = w(x) and w_1(x) \equiv 0 on [0,1]. Define l-1 functions w_j(x) (j = 2, 3, \ldots, l) on [0,1] from

    \begin{cases} -w_j^{(2)}(x) + g_w(x, w(x))\, w_j(x)\\ \quad = -\frac{1}{6(2j)!} g^{(2j)}(x, w(x)) + \sum_{k=1}^{j} \frac{2}{(2k+2)!} w_{j-k}^{(2k+2)}(x)\\ \qquad - \sum_{p=2}^{j} \frac{1}{p!} g_w^{(p)}(x, w(x)) \sum_{t_1+\cdots+t_p=j,\ t_k \ge 1} \prod_{k=1}^{p} w_{t_k}(x) - \sum_{k=0}^{j-2} \frac{1}{6(2k+2)!} \tilde g_{j-k-1}^{(2k+2)}(x), \qquad 0 < x < 1,\\ w_j(0) = w_j(1) = 0, \end{cases}    (3.6)

where

    \tilde g_j(x) = \sum_{p=1}^{j} \frac{1}{p!} g_w^{(p)}(x, w(x)) \sum_{t_1+\cdots+t_p=j,\ t_k \ge 1} \prod_{k=1}^{p} w_{t_k}(x), \qquad 0 < x < 1.

All the above l-1 functions w_j(x) are well-defined and belong to C^{r+2-2j}[0,1] (see the proof of Theorem 2.1). Moreover, we have from Theorem 2.1 that for sufficiently small h,

    w_h(x_i) = w(x_i) + \sum_{j=2}^{l} w_j(x_i) h^{2j} + O(h^r), \qquad x_i \in \omega_h,

that is,

    T(u_h(x_i)) = T(u(x_i)) + \sum_{j=2}^{l} w_j(x_i) h^{2j} + O(h^r), \qquad x_i \in \omega_h.

By the definition of T,

    \int_{u(x_i)}^{u_h(x_i)} k(s)\, ds = \sum_{j=2}^{l} w_j(x_i) h^{2j} + O(h^r), \qquad x_i \in \omega_h.    (3.7)

Since by Lemma 3.1 and Corollary 2.1, u_h(x_i) - u(x_i) = O(h^4) for sufficiently small h, we have

    \int_{u(x_i)}^{u_h(x_i)} k(s)\, ds = k(u(x_i))\big(u_h(x_i) - u(x_i)\big) + O(h^8).

By (3.7) and the above relation,

    u_h(x_i) = u(x_i) + \sum_{j=2}^{l} \frac{w_j(x_i)}{k(u(x_i))}\, h^{2j} + O(h^8) + O(h^r), \qquad x_i \in \omega_h,    (3.8)

and so if r \le 8, the desired result (3.5) follows. If r > 8, we have from (3.8) that

    u_h(x_i) = u(x_i) + \sum_{j=2}^{l} \frac{w_j(x_i)}{k(u(x_i))}\, h^{2j} + O(h^8), \qquad x_i \in \omega_h.

Using this relation we obtain

    \big(u_h(x_i) - u(x_i)\big)^2 = \Big(\sum_{j=2}^{l} \frac{w_j(x_i)}{k(u(x_i))}\, h^{2j}\Big)^2 + O(h^{12}), \qquad \big(u_h(x_i) - u(x_i)\big)^3 = O(h^{12}).

Therefore, for sufficiently small h,

    \int_{u(x_i)}^{u_h(x_i)} k(s)\, ds = k(u(x_i))\big(u_h(x_i) - u(x_i)\big) + \frac{1}{2} k^{(1)}(u(x_i)) \Big(\sum_{j=2}^{l} \frac{w_j(x_i)}{k(u(x_i))}\, h^{2j}\Big)^2 + O(h^{12})
    = k(u(x_i))\big(u_h(x_i) - u(x_i)\big) + \sum_{j=4}^{l} v_j(x_i) h^{2j} + O(h^{12}) + O(h^r),

where

    v_j(x) = \frac{1}{2} k^{(1)}(u(x)) \sum_{t_1+t_2=j,\ t_1 \ge 1,\ t_2 \ge 1} \prod_{p=1}^{2} \frac{w_{t_p}(x)}{k(u(x))} \in C^{r+2-2j}[0,1], \qquad j = 4, \ldots, l.

This relation and (3.7) imply that

    u_h(x_i) = u(x_i) + \sum_{j=2}^{l} \frac{w_j(x_i)}{k(u(x_i))}\, h^{2j} - \sum_{j=4}^{l} \frac{v_j(x_i)}{k(u(x_i))}\, h^{2j} + O(h^{12}) + O(h^r), \qquad x_i \in \omega_h,

and so if r \le 12, we obtain the result (3.5). A continuation of the above process shows that the result (3.5) holds for any r \ge 4. \square

As before, we can construct the Richardson extrapolation algorithm for problem (1.4). The asymptotic expansion (3.5) then ensures that the Richardson extrapolation solution U_h(x_i) defined by (2.17) converges to the solution u(x_i) of (1.4) with the convergence order O(h^r) in the L^\infty-norm. Thus Richardson's extrapolation is also valid for the strongly nonlinear problem (1.4).

Remark 3.1. The main requirement of the above results for the strongly nonlinear problem (1.4) is the construction of the inverse T^{-1}. This construction may be complicated due to the nonlinearity of the function k(u). Nevertheless, it can still be implemented well in some specific problems. For a detailed discussion and some examples, we refer to [28].

4. Numerical results

In this section we give some numerical results to demonstrate the efficiency of the Richardson extrapolation algorithm.

Example 1. We first consider the logistic problem

    \begin{cases} -\dfrac{d^2 u}{dx^2} = \sigma u(1-u) + q(x), & 0 < x < 1,\\ u(0) = u(1) = 0, \end{cases}    (4.1)

where \sigma is a positive constant and q(x) is a given function. The function q is introduced in (4.1) in order to construct an analytical solution against which the numerical solution can be compared. Indeed, if q(x) is taken as

    q(x) = (\pi^2 - \sigma)\sin(\pi x) + \sigma \sin^2(\pi x),    (4.2)

then the solution of (4.1) is given by u(x) = \sin(\pi x).

Now we use the scheme (1.3) to solve (4.1) numerically. For a positive integer N we construct the net \omega_h with mesh size h = 1/N, and let u_h(x_i) be the solution of (1.3) on \omega_h. In the computation, the monotone iteration of [27] is applied to obtain the solution u_h(x_i) of the nonlinear scheme (1.3), and the tolerance \varepsilon = 10^{-10} is used in the iterations. Denote by error(N) the maximum absolute error of the solution u_h(x_i):

    error(N) = \max_{x_i \in \omega_h} |u_h(x_i) - u(x_i)|.    (4.3)

The value of error(N) for \sigma = 15 is graphed in Fig. 1. Then we construct the extrapolation solution U_h(x_i) by the Richardson extrapolation algorithm with mesh sizes in the ratios 1:2 and 1:2:4, respectively. The corresponding maximum absolute errors, denoted by error^{(2)}(N) and error^{(3)}(N), where N is the number of mesh points used in the extrapolation, are also sketched in Fig. 1. It is seen that Richardson's extrapolation gives much better results. To demonstrate the convergence orders of the difference solution u_h(x_i) and the extrapolation solution U_h(x_i), we compute

Y.-M. Wang / Applied Numerical Mathematics 57 (2007) 253–269

267

Fig. 1. Maximal absolute errors of approximation solutions for (4.1), (4.2). Note. (1): the error of the difference solution uh (xi ); (2): the error of the extrapolation solution on two nets with mesh sizes in a ratio of 1 : 2; (3): the error of the extrapolation solution on three nets with mesh sizes in a ratio of 1 : 2 : 4. Table 3 Convergence orders of approximation solutions for (4.1), (4.2) N

order(N )

order(2) (N )

4.0121 4.0058 4.0016 4.0004

5.0621 5.8633 5.9727

4.0153 4.0060 4.0016 4.0004

5.3595 5.8945 5.9765

order(3) (N )

N

order(N )

order(2) (N )

order(3) (N )

8.2656 8.2269

10 20 40 80

3.9653 4.0039 4.0010 4.0003

5.4915 5.9170 5.9814

8.1659 8.0101

8.2407 8.1029

10 20 40 80

3.9580 4.0040 4.0010 4.0003

5.6228 5.9352 5.9921

8.0824 8.8762

σ = 15 8 16 32 64 σ = π2 8 16 32 64

Note. order(N ): the convergence order of the difference solution uh (xi ); order(2) (N ): the convergence order of the extrapolation solution on two nets with mesh sizes in a ratio of 1 : 2; order(3) (N ): the convergence order of the extrapolation solution on three nets with mesh sizes in a ratio of 1 : 2 : 4.

   order(N) = log2( error(N/2) / error(N) ),
   order(2)(N) = log2( error(2)(N/2) / error(2)(N) ),
   order(3)(N) = log2( error(3)(N/2) / error(3)(N) ).            (4.4)
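The weights behind the observed orders can be made explicit. Since the asymptotic expansion of Numerov's scheme proceeds in the powers h⁴, h⁶, h⁸, the two-net combination (16 u_{h/2} − u_h)/15 cancels the h⁴ term, and a subsequent three-net step (64 V_{h/2} − V_h)/63 cancels the h⁶ term. The sketch below checks this algebra on a synthetic "difference solution" with a prescribed expansion; the coefficient functions cos(πx), x(1 − x), x are ours, chosen purely for illustration.

```python
import numpy as np

def richardson(coarse, fine, p):
    """One extrapolation step on two nets with mesh ratio 1:2; 'fine' is
    restricted to the coarse net and the O(h^p) error term is cancelled."""
    return (2**p * fine[::2] - coarse) / (2**p - 1)

def model_solution(N):
    """Synthetic 'difference solution' with the assumed expansion
    u_h = u + h^4 g1 + h^6 g2 + h^8 g3, mimicking Numerov's scheme."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    return (np.sin(np.pi * x) + h**4 * np.cos(np.pi * x)
            + h**6 * x * (1.0 - x) + h**8 * x)

def max_err(N, levels):
    """Maximum error on the net of size N after 'levels' extrapolation steps."""
    sols = [model_solution(N * 2**k) for k in range(levels + 1)]
    for p in (4, 6, 8)[:levels]:           # cancel h^4, then h^6
        sols = [richardson(sols[k], sols[k + 1], p) for k in range(len(sols) - 1)]
    x = np.linspace(0.0, 1.0, N + 1)
    return np.max(np.abs(sols[0] - np.sin(np.pi * x)))

orders = [np.log2(max_err(8, m) / max_err(16, m)) for m in (0, 1, 2)]
# the three entries approach 4, 6 and 8, matching the columns of Table 3
```

With no extrapolation the error decays like h⁴; one step on two nets raises this to h⁶ and two steps on three nets to h⁸, exactly the pattern of order(N), order(2)(N) and order(3)(N).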

These values for several N are listed in Table 3. We see from the table that the extrapolation solution Uh(xi) is sixth-order accurate when two nets are used in the extrapolation, and eighth-order accurate when three nets are used. These results coincide with our theoretical analysis.

Example 2. Our second example is devoted to the strongly nonlinear problem (1.4). Consider the following nonhomogeneous problem:


Fig. 2. Maximal absolute errors of approximation solutions for (4.5), (4.7). Note. (1): the error of the difference solution uh (xi ); (2): the error of the extrapolation solution on two nets with mesh sizes in a ratio of 1 : 2; (3): the error of the extrapolation solution on three nets with mesh sizes in a ratio of 1 : 2 : 4.



   −(d/dx)( a e^{bu+c} du/dx ) + f(x, u) = 0,   0 < x < 1,
   u(0) = u(1) = 0.5,                                            (4.5)

where a, b are two positive constants and c is an arbitrary constant. Obviously, this problem has the form (1.4) with u0 = u1 = 0.5 and k(u) = a e^{bu+c}. It is easily shown that the function k(u) satisfies hypothesis (H) with I = [0, σ], where σ is an arbitrary positive constant. Let δ = 0.5. Then the transformation T in (3.1) is defined by

   w = T(u) = ∫_{0.5}^{u} a e^{bs+c} ds = (a e^c / b)( e^{bu} − e^{b/2} ),

and its inverse is clearly given by

   u = T^{−1}(w) = (1/b) ln( (b / (a e^c)) w + e^{b/2} ).        (4.6)

Assume that the function f(x, u) is defined by

   f(x, u) = ( a b³ (1 − 2x)² − 2ab ) e^{bu+c}.                  (4.7)

Then u(x) = bx(1 − x) + 0.5 is the analytic solution of (4.5). To solve (4.5) numerically, we let wh(xi) be the solution of (3.3) with mesh size h = 1/N, where g(x, w) = f(x, T⁻¹(w)) with T⁻¹ and f(x, u) given by (4.6) and (4.7), respectively. By Lemma 3.1, uh(xi) = T⁻¹(wh(xi)) is a finite difference solution of (4.5). As in the first example, we use the monotone iteration of [27] to compute the solution wh(xi), with tolerance ε = 10⁻¹⁰ in the iterations. The maximum absolute errors error(N) of uh(xi) for (c, a, b) = (0, 1, 2) are sketched in Fig. 2. After obtaining the solution uh(xi), we construct the extrapolation solution Uh(xi) by Richardson's Extrapolation Algorithm with mesh sizes in the ratios 1 : 2 and 1 : 2 : 4, respectively. The corresponding maximum absolute errors error(2)(N) and error(3)(N) are also sketched in Fig. 2. We see again that Richardson's extrapolation gives more accurate numerical results. In Table 4, we give the convergence orders order(N), order(2)(N) and order(3)(N) of the difference solution uh(xi) and the extrapolation solution Uh(xi), defined as in (4.4). The numerical results show sixth-order convergence of the extrapolation solution Uh(xi) when two nets are used in the extrapolation, and eighth-order convergence when three nets are used.
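The transformation and its inverse are easy to check numerically. The sketch below implements T, T⁻¹ and f for the (c, a, b) = (0, 1, 2) case of Table 4 and verifies, by a central finite difference on the flux (our own check, not part of the paper's algorithm), that u(x) = bx(1 − x) + 0.5 satisfies (4.5).

```python
import numpy as np

a, b, c = 1.0, 2.0, 0.0          # the (c, a, b) = (0, 1, 2) case of Table 4

def T(u):
    """Transformation (3.1) with delta = 0.5: w = (a e^c / b)(e^{bu} - e^{b/2})."""
    return a * np.exp(c) / b * (np.exp(b * u) - np.exp(b / 2))

def T_inv(w):
    """Inverse transformation (4.6)."""
    return np.log(b / (a * np.exp(c)) * w + np.exp(b / 2)) / b

def f(x, u):
    """Nonlinear term (4.7)."""
    return (a * b**3 * (1.0 - 2.0 * x)**2 - 2.0 * a * b) * np.exp(b * u + c)

def exact(x):
    """Analytic solution u(x) = b x(1 - x) + 0.5."""
    return b * x * (1.0 - x) + 0.5

def flux(x):
    """k(u) du/dx = a e^{bu+c} * b(1 - 2x) evaluated along the exact solution."""
    return a * np.exp(b * exact(x) + c) * b * (1.0 - 2.0 * x)

x = np.linspace(0.1, 0.9, 81)
# T_inv really inverts T on the relevant range of u
assert np.allclose(T_inv(T(exact(x))), exact(x))
# residual of (4.5): -(k(u) u')' + f(x, u), with the flux differentiated numerically
d = 1e-5
residual = -(flux(x + d) - flux(x - d)) / (2.0 * d) + f(x, exact(x))
assert np.max(np.abs(residual)) < 1e-5
```

The residual vanishes to the accuracy of the difference quotient, confirming that (4.7) is exactly the flux derivative produced by u(x) = bx(1 − x) + 0.5 in (4.5).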


Table 4
Convergence orders of approximation solutions for (4.5), (4.7)

(c, a, b) = (0, 1, 2)
N     order(N)   order(2)(N)   order(3)(N)      N     order(N)   order(2)(N)   order(3)(N)
8     4.1164                                    10    3.9928
16    4.0258     6.2451                         20    4.0163     6.0286
32    4.0062     6.0622        8.1328           40    4.0040     6.0400        7.8332
64    4.0016     6.0146        8.1347           80    4.0010     6.0077        8.4000

(c, a, b) = (−0.5, 0.4, 1.8)
8     4.1013                                    10    3.9867
16    4.0227     6.2163                         20    4.0144     6.0264
32    4.0055     6.0530        8.1650           40    4.0035     6.0338        7.9141
64    4.0014     6.0115        8.2361           80    4.0009     6.0049        8.7997

Note. order(N): the convergence order of the difference solution uh(xi); order(2)(N): the convergence order of the extrapolation solution on two nets with mesh sizes in a ratio of 1 : 2; order(3)(N): the convergence order of the extrapolation solution on three nets with mesh sizes in a ratio of 1 : 2 : 4.

References

[1] E. Adams, W.F. Ames, On contracting interval iteration for nonlinear problems in Rⁿ: Part II: Applications, Nonlinear Anal. 5 (1981) 525–542.
[2] R.P. Agarwal, On Numerov's method for solving two point boundary value problems, Utilitas Math. 28 (1985) 159–174.
[3] R.P. Agarwal, Difference Equations and Inequalities, Marcel Dekker, New York, 1992.
[4] R.P. Agarwal, Y.-M. Wang, Some recent developments of the Numerov's method, Computers Math. Appl. 42 (3–5) (2001) 561–592.
[5] W.F. Ames, Nonlinear Partial Differential Equations in Engineering, Academic Press, New York, 1972.
[6] W.F. Ames, Numerical Methods for Partial Differential Equations, Academic Press, New York, 1977.
[7] R. Aris, The Mathematical Theory of Diffusion and Reaction in Permeable Catalysts, Clarendon Press, Oxford, 1975.
[8] D.G. Aronson, H.F. Weinberger, Nonlinear diffusion in population genetics, combustion, and nerve propagation, in: Partial Differential Equations and Related Topics, Lecture Notes in Mathematics, vol. 446, Springer, New York, 1975, pp. 5–49.
[9] R.P.K. Chan, P. Leone, A. Tsai, Order conditions and symmetry for two-step hybrid methods, Int. J. Computer Math. 81 (2004) 1519–1536.
[10] M.M. Chawla, P.N. Shivakumar, Numerov's method for non-linear two-point boundary value problems, Int. J. Computer Math. 17 (1985) 167–176.
[11] M.M. Chawla, R. Subramanian, A new fourth order cubic spline method for nonlinear two-point boundary value problems, Int. J. Computer Math. 22 (1987) 321–341.
[12] J.P. Coleman, Order conditions for a class of two-step methods for y″ = f(x, y), IMA J. Numer. Anal. 23 (2003) 197–220.
[13] L. Collatz, The Numerical Treatment of Differential Equations, third ed., Springer, Berlin, 1960.
[14] G. Dahlquist, A. Björck, Numerical Methods, Prentice-Hall, Englewood Cliffs, NJ, 1974.
[15] B.-Y. Guo, J.J.H. Miller, Iterative and Petrov–Galerkin methods for solving a system of one-dimensional nonlinear elliptic equations, Math. Comp. 58 (1992) 531–547.
[16] P. Henrici, Discrete Variable Methods in Ordinary Differential Equations, John Wiley, New York, 1962.
[17] M. Lees, Discrete methods for nonlinear two-point boundary value problems, in: J.H. Bramble (Ed.), Numerical Solution of Partial Differential Equations, Academic Press, New York, 1966, pp. 59–72.
[18] G.I. Marchuk, V.V. Shaidurov, Difference Methods and Their Extrapolations, Springer, New York, 1983.
[19] K.W. Morton, Numerical Solution of Convection–Diffusion Problems, Chapman & Hall, London, UK, 1996.
[20] B.V. Numerov, A method of extrapolation of perturbations, Roy. Ast. Soc. Monthly Notices 84 (1924) 592–601.
[21] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1976.
[22] C.V. Pao, Nonlinear Parabolic and Elliptic Equations, Plenum Press, New York, 1992.
[23] J.D. Pryce, Numerical Solutions of Sturm–Liouville Problems, Clarendon Press, Oxford, 1993.
[24] J. Wang, Monotone method for diffusion equations with nonlinear diffusion coefficients, Nonlinear Anal. 34 (1998) 113–142.
[25] J. Wang, C.V. Pao, Finite difference reaction–diffusion equations with nonlinear diffusion coefficients, Numer. Math. 85 (2000) 485–502.
[26] Y.-M. Wang, Monotone methods for a boundary value problem of second-order discrete equation, Computers Math. Appl. 36 (6) (1998) 77–92.
[27] Y.-M. Wang, Accelerated monotone iterative methods for a boundary value problem of second-order discrete equations, Computers Math. Appl. 39 (3/4) (2000) 85–94.
[28] Y.-M. Wang, Numerov's method for strongly nonlinear two-point boundary value problems, Computers Math. Appl. 45 (2003) 759–763.
[29] Y.-M. Wang, R.P. Agarwal, Monotone methods for solving a boundary value problem of second order discrete system, Mathematical Problems in Engineering: Theory, Methods and Applications 5 (1999) 291–315.
[30] Y.-M. Wang, B.-Y. Guo, On Numerov scheme for nonlinear two-points boundary value problem, J. Comput. Math. 16 (1998) 345–366.
[31] Y.-M. Wang, B.-Y. Guo, Petrov–Galerkin methods for nonlinear systems without monotonicity, Appl. Numer. Math. 36 (2001) 57–78.
[32] Y.-M. Wang, B.-Y. Guo, Monotone finite difference schemes for nonlinear systems with mixed quasi-monotonicity, J. Math. Anal. Appl. 267 (2002) 599–625.