Applied Numerical Mathematics 57 (2007) 1081–1096 www.elsevier.com/locate/apnum
Monotone iterative technique for numerical solutions of fourth-order nonlinear elliptic boundary value problems ✩

Yuan-Ming Wang a,b

a Department of Mathematics, East China Normal University, Shanghai 200062, People's Republic of China
b Division of Computational Science, E-Institute of Shanghai Universities, Shanghai Normal University, Shanghai 200234, People's Republic of China

Available online 14 November 2006
Abstract

This paper is concerned with finite difference solutions of a class of fourth-order nonlinear elliptic boundary value problems. The nonlinear function is not necessarily monotone. A new monotone iterative technique is developed, and three basic monotone iterative processes for the finite difference system are constructed. Several theoretical comparison results among the various monotone sequences are given. A simple and easily verified condition is obtained to guarantee a geometric convergence of the iterations. Numerical results for a model problem with a known analytical solution are given.
© 2006 IMACS. Published by Elsevier B.V. All rights reserved.

Keywords: Fourth-order elliptic equations; Finite difference systems; Monotone iterations; Upper and lower solutions; Rate of convergence
1. Introduction

Boundary value problems for fourth-order differential equations have received considerable attention in the literature, and most of the discussions are devoted to the existence, uniqueness, and multiplicity of solutions of the following two-point boundary value problem:
u^(iv) = f(x, u, u''),  0 < x < 1,
u(0) = u(1) = 0,  u''(0) = u''(1) = 0,   (1.1)
where f(x, u, u'') is, in general, a nonlinear function of u and u'' (cf. [1,2,7,11,12,16,18,19,28,32,36]). The above problem describes the static deflection of an elastic bending beam (with hinged ends) under a possibly nonlinear loading (cf. [14,31]). It also describes the steady state of a prototype equation for phase transitions in condensed matter systems (cf. [13,33]), and is useful in studying travelling waves in a suspension bridge (cf. [15,20]). In recent

✩ This work was supported in part by the National Natural Science Foundation of China No. 10571059, E-Institutes of Shanghai Municipal Education Commission No. E03004, Shanghai Priority Academic Discipline, and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
E-mail address: [email protected] (Y.-M. Wang). 1 Corresponding address.
0168-9274/$30.00 © 2006 IMACS. Published by Elsevier B.V. All rights reserved. doi:10.1016/j.apnum.2006.10.001
years, attention has been given to the following fourth-order elliptic boundary value problem in a multidimensional domain and with a more general boundary condition:

Δ(k(x)Δu) = f(x, u, Δu),  x ∈ Ω,
B[u] = g(x),  B[kΔu] = g*(x),  x ∈ ∂Ω,   (1.2)

where Ω is a smooth bounded connected domain in R^n with boundary ∂Ω, Δ is the Laplace operator, and B[w] ≡ α0 ∂w/∂ν + β0(x)w with ∂/∂ν denoting the outward normal derivative on ∂Ω (cf. [9,20,21,23,24,27,30]). It is assumed that f(x, u, v), g(x), g*(x), and β0(x) are continuous functions in their respective domains, k(x) is a strictly positive C²-function on Ω̄ ≡ Ω ∪ ∂Ω, and either α0 = 0, β0(x) ≡ 1 (Dirichlet boundary condition) or α0 = 1, β0(x) ≥ 0 (Neumann or Robin boundary condition). A physical interpretation of (1.2) for the case n = 2 is that it governs the static deflection of a plate under a lateral loading. Here k(x) is the stiffness of the plate, g(x) and g*(x) are possible boundary sources, and f(x, u, Δu) is the loading function, which may depend on the deflection and the curvature of the plate (cf. [31]). Most discussions in the literature for (1.2) are again concerned with the existence, uniqueness, and multiplicity of solutions (cf. [9,20,21,23,30]). On the other hand, there are also a few papers devoted to numerical methods for the computation of the solution, but mostly for specific problems and linear equations (cf. [4,8,17]). For the general nonlinear problem (1.2), a finite difference–monotone iterative method is given in [24], where problem (1.2) is discretized by the finite difference method and three pointwise monotone iterative schemes are given for the corresponding nonlinear finite difference system. Another approach is given in [27], where two types of block monotone iterations, called block Jacobi and block Gauss–Seidel monotone iterations, are presented for the computation of solutions of the finite difference system.
These block monotone iterations improve the rate of convergence of the pointwise monotone iterations given in [24] and can be easily computed by well-known computational algorithms for linear algebraic systems such as the Thomas algorithm (cf. [3,6]). However, the monotone convergence of the iterations in these works requires the monotonicity of the nonlinear function f(·, u, v) in u. In this paper, we investigate further the case where the nonlinear function f(·, u, v) is not necessarily monotone in u. By formulating problem (1.2) as a coupled system of two second-order elliptic equations, we discretize the corresponding nonlinear equations into a system of nonlinear algebraic equations by the finite difference method. Our specific goal is to develop some pointwise monotone iterative schemes for the corresponding nonlinear finite difference system without any monotonicity requirement on the function f(·, u, v), including some comparisons and estimates for the rate of convergence of the iterations. The removal of the monotonicity requirement on f(·, u, v) leads to a general computational algorithm for the numerical solutions of problem (1.2). Block Jacobi and block Gauss–Seidel monotone iterations for a nonmonotone function f(·, u, v) can be developed similarly. The outline of the paper is as follows. In Section 2, we discretize the elliptic boundary value problem (1.2) into a coupled system of nonlinear finite difference equations. In Section 3, we develop a new monotone iterative technique for the computation of the finite difference solutions by the method of upper and lower solutions for a nonmonotone function f(·, u, v). Three basic pointwise monotone iterative schemes are constructed, and the monotone convergence of the iterations to a unique finite difference solution is proved. Section 4 is devoted to the rate of convergence of the iterations.
We give several theoretical comparison results among the various monotone sequences, and obtain a simple and easily verified condition that guarantees a geometrically fast rate of convergence. In Section 5, we present some numerical results for a model problem with a known analytical solution. These numerical results demonstrate the theoretical analysis and compare well with the known analytical solution. The final section contains some concluding remarks.

2. The finite difference system

To obtain a finite difference approximation for the boundary value problem (1.2) we let v = −kΔu and transform problem (1.2) into the coupled system of second-order elliptic equations

−Δu = v/k,  −Δv = f(x, u, −v/k),  x ∈ Ω,
B[u] = g^(1)(x),  B[v] = g^(2)(x),  x ∈ ∂Ω,   (2.1)

where g^(1)(x) = g(x) and g^(2)(x) = −g*(x). It is obvious that u is a solution of (1.2) if and only if (u, v) is a solution of (2.1).
Let hp be the spatial increment in the xp-direction and let xi be a mesh point in Ω. Define

ui = u(xi),  vi = v(xi),  ki = k(xi),  Fi(ui, vi) = f(xi, u(xi), −v(xi)/k(xi)).   (2.2)
Let N be the total number of mesh points at which the solution (ui, vi) is to be computed, and let

U = (u1, u2, …, uN)^T,  V = (v1, v2, …, vN)^T,
F(U, V) = (F1(u1, v1), F2(u2, v2), …, FN(uN, vN))^T.   (2.3)
Then, by using the standard second-order central difference approximation for the operator Δ and a suitable approximation for the boundary operator B, we obtain a finite difference approximation of (2.1) in the vector form

AU = BV + G^(1),
AV = F(U, V) + G^(2),   (2.4)

where A is an N × N matrix, B = diag(k1^(−1), k2^(−1), …, kN^(−1)), and G^(j) (j = 1, 2) is associated with the boundary function g^(j) (j = 1, 2). For a detailed formulation of the system (2.4), see [3,10,24–26]. Throughout the paper we impose the following basic hypothesis on A:
(H1) The matrix A ≡ (a_{j,k}) is irreducible, and

a_{j,j} > 0,  a_{j,k} ≤ 0 (j ≠ k),  Σ_{k=1}^{N} a_{j,k} ≥ 0,  j = 1, 2, …, N.   (2.5)
It can be shown that the property (2.5) can always be satisfied (cf. [3,10,29,34]). On the other hand, the connectedness assumption on Ω ensures that A is irreducible (cf. [29,34]). To demonstrate these properties, we consider problem (1.2) with the Dirichlet boundary condition in a two-dimensional rectangular domain Ω. In this case, A is a block tridiagonal matrix of the form

A = tridiag(−Cj, Aj, −Cj),   (2.6)

where for each j, Aj is a tridiagonal matrix with diagonal elements 2(h1² + h2²)/(h1²h2²) and upper and lower off-diagonal elements −1/h1², and Cj = (1/h2²)I (I denotes the identity matrix) (see [27,34]). Clearly, A is irreducible and the property (2.5) is satisfied.

A direct consequence of hypothesis (H1) is that for any nonnegative diagonal matrix Θ ≠ 0 (the null matrix), the matrix A + Θ is a nonsingular M-matrix and (A + Θ)^(−1) > 0 (see [5,10,22,34]). In particular, if the strict inequality in the last relation of (2.5) holds for at least one j (corresponding to the Dirichlet or Robin boundary condition), then A^(−1) > 0. This property implies that the smallest eigenvalue λ0 of A is real and positive, and its corresponding eigenvector may be chosen to be positive (see [5,34]). Otherwise, if equality in the last relation of (2.5) holds for all j (corresponding to the pure Neumann boundary condition), then the matrix A is singular and the smallest eigenvalue is λ0 = 0. In each case the smallest eigenvalue λ0 of A is nonnegative and its corresponding eigenvector may be chosen to be positive. A further consequence of hypothesis (H1) is given as follows.

Lemma 2.1. (See [35].) Let hypothesis (H1) hold and let Θ = diag(θ1, …, θN) be a diagonal matrix with min_i θi > −λ0. Then the matrix A + Θ is a nonsingular M-matrix, or equivalently, the inverse (A + Θ)^(−1) exists and is nonnegative.

To develop a new monotone iterative scheme for system (2.4) in which F(U, V) is not necessarily monotone in U, we use the method of upper and lower solutions, whose definition does not depend on the monotonicity of F(U, V). In what follows, (Ũ, Ṽ) denotes an upper pair and (Û, V̂) a lower pair.

Definition 2.1. A pair of vectors (Ũ, Ṽ) and (Û, V̂) in R^N × R^N are called coupled upper and lower solutions of (2.4) if (Ũ, Ṽ) ≥ (Û, V̂) and

AŨ ≥ BṼ + G^(1),  AṼ ≥ F(U, Ṽ) + G^(2),
AÛ ≤ BV̂ + G^(1),  AV̂ ≤ F(U, V̂) + G^(2),  for all Û ≤ U ≤ Ũ.   (2.7)
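The structure of A in (2.6) and the M-matrix properties above can be checked concretely. The sketch below (an illustrative check, not from the paper; the grid size and helper names are assumptions) builds the 5-point Laplacian on a uniform square grid with Dirichlet boundary conditions via Kronecker products and verifies the sign pattern (2.5), the positivity of A^(−1), and the smallest eigenvalue:

```python
import numpy as np

def laplacian_2d(n1, n2, h1, h2):
    # 5-point Laplacian with Dirichlet BCs, natural row ordering (hypothetical helper)
    def lap1d(n, h):
        T = (2.0 / h**2) * np.eye(n)
        T -= (1.0 / h**2) * (np.eye(n, k=1) + np.eye(n, k=-1))
        return T
    return np.kron(np.eye(n2), lap1d(n1, h1)) + np.kron(lap1d(n2, h2), np.eye(n1))

n = 4; h = 1.0 / (n + 1)
A = laplacian_2d(n, n, h, h)
# sign pattern (2.5): positive diagonal, nonpositive off-diagonal, nonnegative row sums
off = A - np.diag(np.diag(A))
assert np.all(np.diag(A) > 0) and np.all(off <= 0)
assert np.all(A.sum(axis=1) >= -1e-10)
# Dirichlet case: A is a nonsingular M-matrix with strictly positive inverse
assert np.all(np.linalg.inv(A) > 0)
# smallest eigenvalue is real and positive; here it equals 8 sin^2(pi h/2)/h^2
lam0 = np.linalg.eigvalsh(A).min()
assert abs(lam0 - 8 * np.sin(np.pi * h / 2)**2 / h**2) < 1e-8
```

The Kronecker-product construction reproduces the block tridiagonal form (2.6): each diagonal block Aj is the 1D operator plus the 2/h2² shift, and the coupling blocks are (1/h2²)I.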
In the above definition, inequalities between vectors are understood componentwise. For any W in R^N, we denote by wi the ith component of W. Given a pair of coupled upper and lower solutions (Ũ, Ṽ) and (Û, V̂) we define the sectors

S = {(U, V) ∈ R^N × R^N; (Û, V̂) ≤ (U, V) ≤ (Ũ, Ṽ)},
Si = {(ui, vi) ∈ R × R; (ûi, v̂i) ≤ (ui, vi) ≤ (ũi, ṽi)},   (2.8)

and make the following basic hypothesis:

(H2) (i) There exists a diagonal matrix Γ such that

F(U, V) − F(U, V′) ≥ −Γ(V − V′)  for all (Û, V̂) ≤ (U, V′) ≤ (U, V) ≤ (Ũ, Ṽ),   (2.9)

where Γ > −λ0 I if λ0 > 0, and Γ ≥ 0 with Γ ≠ 0 if λ0 = 0.
(ii) Let Θ be a nonnegative diagonal matrix such that Θ ≠ 0 when λ0 = 0.
We note from hypothesis (H1) and Lemma 2.1 that the inverses (A + Θ)^(−1) and (A + Γ)^(−1) exist and are nonnegative. This property leads to the following existence result for (2.4) for a nonmonotone function F (see [25]).

Theorem 2.1. Let (Ũ, Ṽ) and (Û, V̂) be a pair of coupled upper and lower solutions of (2.4), and let hypotheses (H1) and (H2) hold. Then system (2.4) has at least one solution (U*, V*) in S.

3. Monotone iterative schemes

If F(U, V) is monotone in U, the solution (U*, V*) of (2.4) can be computed by the monotone iterative schemes in [24]. To compute the solution (U*, V*) for a nonmonotone function F(U, V), we develop here a new monotone iterative technique. Specifically, using the coupled upper and lower solutions (Ũ, Ṽ) and (Û, V̂) as the initial iterations, we construct two sequences {(Ū^(m), V̄^(m))} and {(U̲^(m), V̲^(m))} from the following iterative scheme:

(A + Θ*)Ū^(m) = Θ*Ū^(m−1) + BV̄^(m−1) + G^(1),
(A + Γ*)V̄^(m) = Γ*V̄^(m−1) + max_{U∈S^(m−1)} F(U, V̄^(m−1)) + G^(2),
(A + Θ*)U̲^(m) = Θ*U̲^(m−1) + BV̲^(m−1) + G^(1),
(A + Γ*)V̲^(m) = Γ*V̲^(m−1) + min_{U∈S^(m−1)} F(U, V̲^(m−1)) + G^(2),   (3.1)

where Θ* and Γ* are two diagonal matrices specified later, and

S^(m) = {U ∈ R^N; U̲^(m) ≤ U ≤ Ū^(m)}.   (3.2)

In the above iterative scheme, the maximum and the minimum of a vector function are taken componentwise. The following lemma shows that the sequences {(Ū^(m), V̄^(m))} and {(U̲^(m), V̲^(m))} are well-defined.

Lemma 3.1. Let (Ũ, Ṽ) and (Û, V̂) be a pair of coupled upper and lower solutions of (2.4), and let hypotheses (H1) and (H2) hold. Then the sequences {(Ū^(m), V̄^(m))}, {(U̲^(m), V̲^(m))} and the set S^(m), given by (3.1) with (Θ*, Γ*) = (Θ, Γ) and by (3.2), respectively, are all well-defined and possess the property (Ū^(m), V̄^(m)) ≥ (U̲^(m), V̲^(m)) for every m = 1, 2, ….

Proof. Since (Ū^(0), V̄^(0)) = (Ũ, Ṽ), (U̲^(0), V̲^(0)) = (Û, V̂) and (Ũ, Ṽ) ≥ (Û, V̂), the set S^(0) is well-defined. Hence the right-hand side of (3.1) is known when m = 1, and the first iterates (Ū^(1), V̄^(1)) and (U̲^(1), V̲^(1)) exist uniquely due to the existence of (A + Θ)^(−1) and (A + Γ)^(−1). By hypothesis (H2),

Γ V̄^(0) + max_{U∈S^(0)} F(U, V̄^(0)) ≥ Γ V̲^(0) + min_{U∈S^(0)} F(U, V̲^(0)).

We have from (3.1) that (A + Θ)(Ū^(1) − U̲^(1)) ≥ 0 and (A + Γ)(V̄^(1) − V̲^(1)) ≥ 0. It follows from the nonnegativity of (A + Θ)^(−1) and (A + Γ)^(−1) that (Ū^(1), V̄^(1)) ≥ (U̲^(1), V̲^(1)), and therefore the set S^(1) is well-defined. The conclusion of the lemma follows by an induction argument. □
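For a concrete picture, one step of scheme (3.1) can be sketched as follows. The componentwise max/min over the sector S^(m−1) is approximated here by a coarse scan of each interval (an illustrative shortcut, not from the paper; all helper names and the toy problem are assumptions):

```python
import numpy as np

def monotone_step(A, B, G1, G2, F, Theta, Gamma, ub, vb, lu, lv, scans=41):
    """One step of scheme (3.1).

    (ub, vb) and (lu, lv) are the current upper and lower pairs; F(u, v, i)
    is the i-th component of the nonlinearity.  The componentwise max/min
    over S^(m-1) is approximated by scanning each interval [lu_i, ub_i].
    """
    N = len(ub)
    def sector_extreme(v, kind):
        out = np.empty(N)
        for i in range(N):
            grid = np.linspace(lu[i], ub[i], scans)
            vals = [F(u, v[i], i) for u in grid]
            out[i] = max(vals) if kind == 'max' else min(vals)
        return out
    ub_new = np.linalg.solve(A + Theta, Theta @ ub + B @ vb + G1)
    vb_new = np.linalg.solve(A + Gamma, Gamma @ vb + sector_extreme(vb, 'max') + G2)
    lu_new = np.linalg.solve(A + Theta, Theta @ lu + B @ lv + G1)
    lv_new = np.linalg.solve(A + Gamma, Gamma @ lv + sector_extreme(lv, 'min') + G2)
    return ub_new, vb_new, lu_new, lv_new

# Toy check: 1D Laplacian, k = 1, F(u, v) = sin(u) + 2 (nonmonotone in u).
n = 10; h = 1.0 / (n + 1)
A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
B = np.eye(n); Z = np.zeros(n)
Vt = np.linalg.solve(A, 3*np.ones(n))   # upper pair: A V~ = 3E >= max F
Ut = np.linalg.solve(A, Vt)             # A U~ = B V~; lower pair is (0, 0)
ub, vb, lu, lv = monotone_step(A, B, Z, Z, lambda u, v, i: np.sin(u) + 2,
                               np.zeros((n, n)), np.zeros((n, n)),
                               Ut, Vt, np.zeros(n), np.zeros(n))
assert np.all(ub >= lu) and np.all(vb >= lv)                 # ordering (Lemma 3.1)
assert np.all(ub <= Ut + 1e-9) and np.all(vb <= Vt + 1e-9)   # monotone decrease
```

Here Θ = Γ = 0 is admissible because λ0 > 0 for the Dirichlet problem and ∂F/∂v = 0; in practice the sector extremes are found analytically whenever F is piecewise monotone in u (as in the example of Section 5).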
We next show that the sequences {(Ū^(m), V̄^(m))} and {(U̲^(m), V̲^(m))} converge monotonically from above and below, respectively, to limits (Ū, V̄) and (U̲, V̲) that satisfy (Ū, V̄) ≥ (U̲, V̲) and

AŪ = BV̄ + G^(1),  AV̄ = max_{U̲≤U≤Ū} F(U, V̄) + G^(2),
AU̲ = BV̲ + G^(1),  AV̲ = min_{U̲≤U≤Ū} F(U, V̲) + G^(2).   (3.3)

Theorem 3.1. Let the conditions in Lemma 3.1 hold. Then the sequences {(Ū^(m), V̄^(m))} and {(U̲^(m), V̲^(m))} from (3.1) with (Θ*, Γ*) = (Θ, Γ) converge monotonically from above and below, respectively, to limits (Ū, V̄) and (U̲, V̲) that satisfy (3.3). Moreover,

(U̲^(m−1), V̲^(m−1)) ≤ (U̲^(m), V̲^(m)) ≤ (U̲, V̲) ≤ (Ū, V̄) ≤ (Ū^(m), V̄^(m)) ≤ (Ū^(m−1), V̄^(m−1))   (3.4)

for all m = 1, 2, …, and any solution (U, V) of the system (2.4) in S satisfies (U̲, V̲) ≤ (U, V) ≤ (Ū, V̄).

Proof. We have from (2.7) and (3.1) that

(A + Θ)(Ū^(0) − Ū^(1)) ≥ 0,  (A + Γ)(V̄^(0) − V̄^(1)) ≥ 0,
(A + Θ)(U̲^(1) − U̲^(0)) ≥ 0,  (A + Γ)(V̲^(1) − V̲^(0)) ≥ 0.

By Lemma 3.1 and the nonnegativity of (A + Θ)^(−1) and (A + Γ)^(−1), we obtain that the monotone property

(U̲^(m−1), V̲^(m−1)) ≤ (U̲^(m), V̲^(m)) ≤ (Ū^(m), V̄^(m)) ≤ (Ū^(m−1), V̄^(m−1))   (3.5)

holds for m = 1. Assume that (3.5) holds for some m = m0 ≥ 1. Then by hypothesis (H2),

Γ V̄^(m0−1) + max_{U∈S^(m0−1)} F(U, V̄^(m0−1)) ≥ Γ V̄^(m0) + max_{U∈S^(m0)} F(U, V̄^(m0)),
Γ V̲^(m0) + min_{U∈S^(m0)} F(U, V̲^(m0)) ≥ Γ V̲^(m0−1) + min_{U∈S^(m0−1)} F(U, V̲^(m0−1)).   (3.6)

Using this relation and (3.5) with m = m0, we have that (3.5) holds also for m = m0 + 1. The monotone property (3.5) for all m follows from the principle of induction. In view of this monotone property the limits

lim_{m→∞} (Ū^(m), V̄^(m)) = (Ū, V̄),  lim_{m→∞} (U̲^(m), V̲^(m)) = (U̲, V̲)   (3.7)

exist and satisfy the relation (3.4). Letting m → ∞ in (3.1) shows that the limits (Ū, V̄) and (U̲, V̲) satisfy (3.3) (see Appendix A). Let (U, V) be any solution of (2.4) in S. Then (U̲^(0), V̲^(0)) ≤ (U, V) ≤ (Ū^(0), V̄^(0)). Assume that

(U̲^(m), V̲^(m)) ≤ (U, V) ≤ (Ū^(m), V̄^(m))   (3.8)

holds for some m = m0 ≥ 0. Then by hypothesis (H2),

Γ V̄^(m0) + max_{U∈S^(m0)} F(U, V̄^(m0)) ≥ Γ V + F(U, V) ≥ Γ V̲^(m0) + min_{U∈S^(m0)} F(U, V̲^(m0)).

This leads to (U̲^(m0+1), V̲^(m0+1)) ≤ (U, V) ≤ (Ū^(m0+1), V̄^(m0+1)). Finally, by the principle of induction, the relation (3.8) holds for all m ≥ 0. Letting m → ∞ in (3.8) gives (U̲, V̲) ≤ (U, V) ≤ (Ū, V̄). □

To ensure that (Ū, V̄) = (U̲, V̲) is the unique solution of (2.4) in S, we assume that F(U, V) is a C¹-function of (U, V), and define

Mu = max_i max{∂Fi/∂u (u, v); (u, v) ∈ Si},  Mv = max_i max{∂Fi/∂v (u, v); (u, v) ∈ Si},
m_u^± = min_i min{±∂Fi/∂u (u, v); (u, v) ∈ Si},  m_u = max{0, m_u^+, m_u^−},
m_v = min_i min{∂Fi/∂v (u, v); (u, v) ∈ Si},  k̄ ≡ max_i ki^(−1),  k̲ ≡ min_i ki^(−1).   (3.9)
The following theorem gives an existence–uniqueness result as well as a computational algorithm for (2.4) for a nonmonotone function F.

Theorem 3.2. Let the conditions in Lemma 3.1 hold, and let λ0 be the smallest eigenvalue of A. If, in addition, either

λ0(λ0 − Mv) > k̄ Mu  or  λ0(λ0 − m_v) < k̲ m_u,   (3.10)

then the sequences {(Ū^(m), V̄^(m))} and {(U̲^(m), V̲^(m))} given by (3.1) with (Θ*, Γ*) = (Θ, Γ) converge monotonically from above and below, respectively, to a unique solution (U*, V*) of (2.4) in S. Moreover, the relation (3.4) holds with (Ū, V̄) = (U̲, V̲) = (U*, V*).

Proof. The proof differs slightly from that in [24] for a monotone function, and we give it as follows. It suffices to show that (Ū, V̄) = (U̲, V̲), where (Ū, V̄) and (U̲, V̲) are the limits in (3.7). Let W = Ū − U̲ and Z = V̄ − V̲. Then W ≥ 0, Z ≥ 0 and, by (3.3),

AW = BZ,  AZ = max_{U̲≤U≤Ū} F(U, V̄) − min_{U̲≤U≤Ū} F(U, V̲).   (3.11)

Using the mean-value theorem we have

AZ ≥ F(Ū, V̄) − F(U̲, V̲) ≥ m_u^+ W + m_v Z,
AZ ≥ F(U̲, V̄) − F(Ū, V̲) ≥ m_u^− W + m_v Z,
AZ ≥ F(Ū, V̄) − F(Ū, V̲) ≥ m_v Z,
AZ ≤ Mu W + Mv Z.   (3.12)

This leads to

AW = BZ,  m_u W + m_v Z ≤ AZ ≤ Mu W + Mv Z.   (3.13)

Since λ0 is also the smallest eigenvalue of A^T, hypothesis (H1) ensures that there exists a positive eigenvector Φ of A^T corresponding to λ0 (cf. [5,34]). Multiplying the relations in (3.13) by Φ^T and using Φ^T A = λ0 Φ^T yields

λ0 Φ^T W = Φ^T BZ,  m_u Φ^T W + m_v Φ^T Z ≤ λ0 Φ^T Z ≤ Mu Φ^T W + Mv Φ^T Z.   (3.14)

A similar reasoning as that in [24] gives (W, Z) = (0, 0), which implies (Ū, V̄) = (U̲, V̲). □
Remark 3.1. If F(U, V) is monotone in U, the uniqueness condition (3.10) coincides with that in [24].

The iterative scheme (3.1) can be improved by Gauss–Seidel and Jacobi methods. In the meantime, the set S^(m−1) in (3.1) can be replaced by S^(m). Specifically, by writing the matrix A in the split form A = D − U − L, where D, −U and −L are the diagonal, upper-off-diagonal and lower-off-diagonal matrices of A, respectively, we have the following three basic iterative schemes.

(a) Picard iteration:

(A + Θ*)Ū^(m) = Θ*Ū^(m−1) + BV̄^(m−1) + G^(1),
(A + Θ*)U̲^(m) = Θ*U̲^(m−1) + BV̲^(m−1) + G^(1),
(A + Γ*)V̄^(m) = Γ*V̄^(m−1) + max_{U∈S^(m)} F(U, V̄^(m−1)) + G^(2),
(A + Γ*)V̲^(m) = Γ*V̲^(m−1) + min_{U∈S^(m)} F(U, V̲^(m−1)) + G^(2),   (3.15)
(b) Gauss–Seidel iteration:

(D − L + Θ*)Ū^(m) = (U + Θ*)Ū^(m−1) + BV̄^(m−1) + G^(1),
(D − L + Θ*)U̲^(m) = (U + Θ*)U̲^(m−1) + BV̲^(m−1) + G^(1),
(D − L + Γ*)V̄^(m) = (U + Γ*)V̄^(m−1) + max_{U∈S^(m)} F(U, V̄^(m−1)) + G^(2),
(D − L + Γ*)V̲^(m) = (U + Γ*)V̲^(m−1) + min_{U∈S^(m)} F(U, V̲^(m−1)) + G^(2),   (3.16)
(c) Jacobi iteration:

(D + Θ*)Ū^(m) = (U + L + Θ*)Ū^(m−1) + BV̄^(m−1) + G^(1),
(D + Θ*)U̲^(m) = (U + L + Θ*)U̲^(m−1) + BV̲^(m−1) + G^(1),
(D + Γ*)V̄^(m) = (U + L + Γ*)V̄^(m−1) + max_{U∈S^(m)} F(U, V̄^(m−1)) + G^(2),
(D + Γ*)V̲^(m) = (U + L + Γ*)V̲^(m−1) + min_{U∈S^(m)} F(U, V̲^(m−1)) + G^(2),   (3.17)

where the initial iteration is (Ū^(0), V̄^(0), U̲^(0), V̲^(0)) = (Ũ, Ṽ, Û, V̂), Θ* and Γ* are two diagonal matrices specified later, and S^(m) is defined by (3.2).

Theorem 3.3. Let the conditions in Theorem 3.2 hold. Then the sequences {(Ū^(m), V̄^(m))} and {(U̲^(m), V̲^(m))} from any one of the iteration processes (3.15), (3.16) and (3.17) with (Θ*, Γ*) = (Θ, Γ) are all well-defined and converge monotonically from above and below, respectively, to the unique solution (U*, V*) of (2.4) in S. Moreover, the relation (3.4) holds with (Ū, V̄) = (U̲, V̲) = (U*, V*).

Proof. In view of (D − L + Θ)^(−1) ≥ 0, (D − L + Γ)^(−1) ≥ 0, (D + Θ)^(−1) ≥ 0 and (D + Γ)^(−1) ≥ 0, the proof follows from arguments similar to those in the proofs of Lemma 3.1 and Theorems 3.1 and 3.2. □

Remark 3.2. As compared with the iteration (3.1), we replace the set S^(m−1) by S^(m) in the iterations (3.15)–(3.17). In the next section, we will see that this replacement leads to a faster rate of convergence.

Remark 3.3. Since we adopt the local extreme values on the right-hand side of the iterations (3.1), (3.15), (3.16) and (3.17), we obtain the monotone convergence of the iterations even if the function F(U, V) is not monotone in U.

4. Rate of convergence

In this section, we investigate the rate of convergence of the iterations. Specifically, we give some comparison results among the various monotone sequences, and obtain a simple condition for the geometric convergence of the iterations.

4.1. Comparison of monotone sequences

Our first comparison result is given as follows.

Theorem 4.1. Let the conditions in Lemma 3.1 hold, and let Θ′ and Γ′ be two diagonal matrices satisfying (Θ′, Γ′) ≥ (Θ, Γ). Denote by {(Ū^(m), V̄^(m), U̲^(m), V̲^(m))} and {(Ū′^(m), V̄′^(m), U̲′^(m), V̲′^(m))} the sequences from each of (3.1), (3.15), (3.16) and (3.17) with (Θ*, Γ*) = (Θ, Γ) and (Θ*, Γ*) = (Θ′, Γ′), respectively, where (Ū^(0), V̄^(0)) = (Ū′^(0), V̄′^(0)) = (Ũ, Ṽ) and (U̲^(0), V̲^(0)) = (U̲′^(0), V̲′^(0)) = (Û, V̂). Then for all m = 1, 2, …,

(Ū^(m), V̄^(m)) ≤ (Ū′^(m), V̄′^(m)),  (U̲^(m), V̲^(m)) ≥ (U̲′^(m), V̲′^(m)).   (4.1)

Proof. We only prove the theorem for the Picard iteration (3.15). The proof for the other iterations is similar. Let W̄^(1) = Ū′^(1) − Ū^(1) and W̲^(1) = U̲^(1) − U̲′^(1). Then by (3.15) and the monotone property of the sequences,

(A + Θ)W̄^(1) = (Θ′ − Θ)(Ũ − Ū′^(1)) ≥ 0,  (A + Θ)W̲^(1) = (Θ′ − Θ)(U̲′^(1) − Û) ≥ 0.
This implies that Ū^(1) ≤ Ū′^(1) and U̲^(1) ≥ U̲′^(1). Similarly, V̄^(1) ≤ V̄′^(1) and V̲^(1) ≥ V̲′^(1), which proves (4.1) for m = 1. The relation (4.1) for every m follows from the principle of induction. □

The comparison result (4.1) shows that with the same initial iterations, which are coupled upper and lower solutions, the rate of convergence of the iterations depends on the choice of the matrix pair (Θ*, Γ*): the smaller (Θ*, Γ*) is, the faster the convergence.

Theorem 4.2. Let the conditions in Lemma 3.1 hold. Denote by {(Ū_P^(m), V̄_P^(m), U̲_P^(m), V̲_P^(m))}, {(Ū_G^(m), V̄_G^(m), U̲_G^(m), V̲_G^(m))} and {(Ū_J^(m), V̄_J^(m), U̲_J^(m), V̲_J^(m))} the sequences from (3.15), (3.16) and (3.17) with (Θ*, Γ*) = (Θ, Γ), respectively, where (Ū_P^(0), V̄_P^(0)) = (Ū_G^(0), V̄_G^(0)) = (Ū_J^(0), V̄_J^(0)) = (Ũ, Ṽ) and (U̲_P^(0), V̲_P^(0)) = (U̲_G^(0), V̲_G^(0)) = (U̲_J^(0), V̲_J^(0)) = (Û, V̂). Then for all m = 1, 2, …,

(Ū_P^(m), V̄_P^(m)) ≤ (Ū_G^(m), V̄_G^(m)) ≤ (Ū_J^(m), V̄_J^(m)),
(U̲_P^(m), V̲_P^(m)) ≥ (U̲_G^(m), V̲_G^(m)) ≥ (U̲_J^(m), V̲_J^(m)).   (4.2)

Proof. Using the maximum–minimum property of the nonlinear functions on the right-hand side of the iterations, the proof is similar to that in [24] for a monotone function F. □

The result in Theorem 4.2 states that with the same initial iterations, which are coupled upper and lower solutions, the sequence of Picard iterations converges faster than the sequence of Gauss–Seidel iterations, which in turn converges faster than the sequence of Jacobi iterations. However, the computation of the sequences by the Gauss–Seidel and Jacobi iterations is more straightforward.

Theorem 4.3. Let the conditions in Lemma 3.1 hold. Denote by {(Ū^(m), V̄^(m), U̲^(m), V̲^(m))} and {(Ū′^(m), V̄′^(m), U̲′^(m), V̲′^(m))} the sequences from (3.15) and (3.1) with (Θ*, Γ*) = (Θ, Γ), respectively, where (Ū^(0), V̄^(0)) = (Ū′^(0), V̄′^(0)) = (Ũ, Ṽ) and (U̲^(0), V̲^(0)) = (U̲′^(0), V̲′^(0)) = (Û, V̂). Then for all m = 1, 2, …,

(Ū^(m), V̄^(m)) ≤ (Ū′^(m), V̄′^(m)),  (U̲^(m), V̲^(m)) ≥ (U̲′^(m), V̲′^(m)).   (4.3)

Proof. Denote by S′^(m) the set S^(m) for the sequence {(Ū′^(m), V̄′^(m), U̲′^(m), V̲′^(m))}. Since the initial iterations are the same, we have from (3.1) and (3.15) that (Ū^(1), U̲^(1)) = (Ū′^(1), U̲′^(1)). On the other hand,

max_{U∈S^(1)} F(U, Ṽ) ≤ max_{U∈S′^(0)} F(U, Ṽ),  min_{U∈S^(1)} F(U, V̂) ≥ min_{U∈S′^(0)} F(U, V̂).   (4.4)

This implies that (A + Γ)(V̄′^(1) − V̄^(1)) ≥ 0 and (A + Γ)(V̲^(1) − V̲′^(1)) ≥ 0. Thus V̄^(1) ≤ V̄′^(1) and V̲^(1) ≥ V̲′^(1), which proves (4.3) for m = 1. The relation (4.3) for all m follows from an induction argument. □

Theorem 4.3 shows that the replacement of S^(m−1) by S^(m) leads to a faster rate of convergence.

4.2. Geometric convergence

Although the convergence of the iterations (3.1) and (3.15)–(3.17) is guaranteed by Theorems 3.2 and 3.3, the explicit rate of convergence is not known in general. We now give a simple and easily verified condition that guarantees a geometrically fast rate of convergence.

Lemma 4.1. (See [5].) Let A* be a nonsingular M-matrix. Then there exists a positive diagonal matrix D̃ such that the matrix D̃A*D̃^(−1) is strictly diagonally dominant.

Theorem 4.4. Let the conditions in Theorem 3.2 be satisfied. Denote by {(Ū^(m), V̄^(m))} and {(U̲^(m), V̲^(m))} the sequences from any one of (3.1), (3.15), (3.16) and (3.17) with (Θ*, Γ*) = (Θ, Γ), and denote by (U*, V*) the unique solution of (2.4) in S. Let γ = max(‖Γ‖∞, ‖Θ‖∞), and let

(Ē_1^(m), Ē_2^(m), E̲_1^(m), E̲_2^(m)) = (Ū^(m) − U*, V̄^(m) − V*, U* − U̲^(m), V* − V̲^(m)).
If, in addition,

max(2Mu, Mv + k̄) < λ0,   (4.5)

where Mu, Mv and k̄ are defined by (3.9) and λ0 is the smallest eigenvalue of A, then there exists a positive diagonal matrix D̃ = diag(d1, …, dN), independent of m, such that

‖Ē_1^(m) + Ē_2^(m) + E̲_1^(m) + E̲_2^(m)‖∞ ≤ (max_i di / min_i di) ρ^m ‖Ē_1^(0) + Ē_2^(0) + E̲_1^(0) + E̲_2^(0)‖∞   (4.6)

for all m = 1, 2, …, where

ρ = ‖D̃(Ã + γI)^(−1)[B̃ + (γ + max(2Mu, Mv + k̄))I]D̃^(−1)‖∞ < 1   (4.7)

with

Ã = A,  B̃ = 0,  for the Picard iterations (3.1) and (3.15),
Ã = D − L,  B̃ = U,  for the Gauss–Seidel iteration (3.16),
Ã = D,  B̃ = L + U,  for the Jacobi iteration (3.17).   (4.8)

Proof. We first note by the monotone property (3.4) that Ē_i^(m) ≥ 0 and E̲_i^(m) ≥ 0 for i = 1, 2. In view of the comparison result in Theorem 4.1 it suffices to prove the conclusion for the case (Θ*, Γ*) = (γI, γI). In this case, the iterations (3.15), (3.16) and (3.17) can be written in the uniform form

(Ã + γI)Ū^(m) = (B̃ + γI)Ū^(m−1) + BV̄^(m−1) + G^(1),
(Ã + γI)U̲^(m) = (B̃ + γI)U̲^(m−1) + BV̲^(m−1) + G^(1),
(Ã + γI)V̄^(m) = (B̃ + γI)V̄^(m−1) + max_{U∈S^(m)} F(U, V̄^(m−1)) + G^(2),
(Ã + γI)V̲^(m) = (B̃ + γI)V̲^(m−1) + min_{U∈S^(m)} F(U, V̲^(m−1)) + G^(2).   (4.9)

By (4.9), (2.4) and the mean-value theorem, there exist two intermediate vectors Ξ̄^(m) and Ξ̲^(m) between U̲^(m) and Ū^(m) such that

(Ã + γI)Ē_1^(m) = (B̃ + γI)Ē_1^(m−1) + BĒ_2^(m−1),
(Ã + γI)E̲_1^(m) = (B̃ + γI)E̲_1^(m−1) + BE̲_2^(m−1),
(Ã + γI)Ē_2^(m) = (B̃ + γI)Ē_2^(m−1) + F(Ξ̄^(m), V̄^(m−1)) − F(U*, V*),
(Ã + γI)E̲_2^(m) = (B̃ + γI)E̲_2^(m−1) + F(U*, V*) − F(Ξ̲^(m), V̲^(m−1)).   (4.10)

Let E^(m) = Ē_1^(m) + Ē_2^(m) + E̲_1^(m) + E̲_2^(m). Using the notations in (3.9) and the monotone property of the sequences we obtain

0 ≤ E^(m) ≤ (Ã + γI)^(−1)[B̃ + (γ + max(2Mu, Mv + k̄))I]E^(m−1).   (4.11)

The above relation is also true for the iteration (3.1). Since max(2Mu, Mv + k̄) < λ0, we have from Lemma 2.1 that the matrix A − max(2Mu, Mv + k̄)I is a nonsingular M-matrix. By Lemma 4.1, there exists a positive diagonal matrix D̃ = diag(d1, …, dN) such that D̃(A − max(2Mu, Mv + k̄)I)D̃^(−1) is strictly diagonally dominant. Thus we have

D̃[B̃ + (γ + max(2Mu, Mv + k̄))I]D̃^(−1)E < D̃(Ã + γI)D̃^(−1)E,

where E = (1, 1, …, 1)^T. Since (Ã + γI)^(−1) ≥ 0 and B̃ + (γ + max(2Mu, Mv + k̄))I ≥ 0, we have

ρ = ‖D̃(Ã + γI)^(−1)[B̃ + (γ + max(2Mu, Mv + k̄))I]D̃^(−1)‖∞ < 1.

Further, by (4.11), ‖D̃E^(m)‖∞ ≤ ρ‖D̃E^(m−1)‖∞. This implies ‖D̃E^(m)‖∞ ≤ ρ^m ‖D̃E^(0)‖∞, which proves (4.6). □
The estimate (4.6) shows that the iterations (3.1) and (3.15)–(3.17) converge at least as rapidly as a geometric progression with the ratio ρ given in (4.7).
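The ratio ρ in (4.7)–(4.8) can be evaluated directly once bounds Mu, Mv, k̄ are available. The sketch below (illustrative: a small 1D Laplacian stands in for A, and the Lipschitz bounds are assumed values, not derived from any particular F) computes ρ for the three splittings, with D̃ chosen as in Remark 4.1, and confirms ρ < 1 under condition (4.5):

```python
import numpy as np

# 1D Dirichlet Laplacian as a small stand-in for A (hypothetical test case)
n = 20; h = 1.0 / (n + 1)
A = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
D = np.diag(np.diag(A)); L = -np.tril(A, -1); U = -np.triu(A, 1)

lam0 = np.linalg.eigvalsh(A).min()
Mu, Mv, kbar = 1.0, 0.5, 1.0           # assumed bounds from (3.9)
gamma = 0.5                            # gamma = max(||Gamma||_inf, ||Theta||_inf)
c = gamma + max(2*Mu, Mv + kbar)
assert max(2*Mu, Mv + kbar) < lam0     # condition (4.5)

# D-tilde as in Remark 4.1: diag(1/x_i) with x = (A - max(2Mu, Mv+kbar) I)^{-1} E
x = np.linalg.solve(A - max(2*Mu, Mv + kbar)*np.eye(n), np.ones(n))
Dt, Dti = np.diag(1.0/x), np.diag(x)

def rho(At, Bt):
    M = Dt @ np.linalg.solve(At + gamma*np.eye(n), Bt + c*np.eye(n)) @ Dti
    return np.linalg.norm(M, np.inf)

rho_P = rho(A, np.zeros((n, n)))       # Picard:       A~ = A,     B~ = 0
rho_G = rho(D - L, U)                  # Gauss-Seidel: A~ = D - L, B~ = U
rho_J = rho(D, L + U)                  # Jacobi:       A~ = D,     B~ = L + U
assert rho_P < 1 and rho_G < 1 and rho_J < 1
```

With this choice of D̃ the inequality D̃[B̃ + cI]D̃^(−1)E < D̃(Ã + γI)D̃^(−1)E in the proof reduces to B̃x + cx < (Ã + γI)x, which follows from (A − max(2Mu, Mv + k̄)I)x = E > 0; hence each ρ is strictly below 1.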
Remark 4.1. The positive diagonal matrix D̃ in the above theorem may be taken as D̃ = diag(1/x1, …, 1/xN), where xi > 0 is the ith component of X = (A − max(2Mu, Mv + k̄)I)^(−1)E.

Remark 4.2. The relation (4.5) gives a simple and easily verified condition to guarantee a geometric rate of convergence of the monotone iterations. Our numerical experiments in the next section show that it is only sufficient. An improvement of this condition would be interesting both theoretically and computationally.

5. Numerical results

To apply the monotone iterative schemes given in Section 3 it is necessary to find a pair of coupled upper and lower solutions. The construction of these functions depends mainly on the function F(U, V). In this section, we consider an example where F(U, V) does not possess any monotone property in U. This example illustrates some basic techniques for the construction of coupled upper and lower solutions. In the meantime we present some numerical results which demonstrate the theoretical analysis and compare well with the known analytical solution. Consider the boundary value problem

Δ²u = σ(x, y) Δu/(1 + u) + q(x, y),  (x, y) ∈ Ω,
u = 0,  Δu = 0,  (x, y) ∈ ∂Ω,   (5.1)

where Ω = {(x, y); 0 < x < 1, 0 < y < 1}, σ(x, y) is a sign-changing continuous function and q(x, y) is a nonnegative continuous function. Clearly, the problem (5.1) is a special case of (1.2) with

f(x, y, u, v) = σ(x, y) v/(1 + u) + q(x, y),  g(x, y) = g*(x, y) = 0.   (5.2)

To obtain an explicit analytical solution of (5.1), we choose

q(x, y) = 2π²κ (2π² + σ(x, y)/(1 + κ sin(πx) sin(πy))) sin(πx) sin(πy),   (5.3)

where κ > 0 is an arbitrary constant. It is easy to check that for any κ > 0 the function u(x, y) = κ sin(πx) sin(πy) is a solution of (5.1), and q(x, y) ≥ 0 if σ(x, y) ≥ −2π² in Ω̄. For the problem (5.1) the corresponding finite difference system (2.4) reduces to the form

AU = V,
AV = F(U, V),   (5.4)

where A is the same as in (2.6) and F(U, V) is defined by (2.3) and (5.2).
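As an end-to-end illustration, the following sketch solves (5.4) for κ = 1 and σ = cos(πx)cos(πy) by the Picard-type scheme (3.15) with (Θ*, Γ*) = (0, σ̄I), using the upper and lower pair constructed from the uncoupled linear system (5.5) described next. The grid size, tolerance, and iteration cap are illustrative choices; since F here is monotone in u for fixed v, the sector max/min are taken at interval endpoints:

```python
import numpy as np

n = 9; h = 1.0 / (n + 1)                            # h1 = h2 = 1/10 (illustrative)
T = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
A = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))   # 5-point Laplacian as in (2.6)

xs = np.arange(1, n + 1) * h
X, Y = np.meshgrid(xs, xs, indexing='ij')
x, y = X.ravel(), Y.ravel()
s = np.sin(np.pi*x) * np.sin(np.pi*y)               # true solution for kappa = 1
sig = np.cos(np.pi*x) * np.cos(np.pi*y)
q = 2*np.pi**2 * (2*np.pi**2 + sig/(1 + s)) * s     # source term (5.3)

def F(U, V):
    # F_i(u_i, v_i) = f(x_i, u_i, -v_i) = -sigma_i v_i/(1 + u_i) + q_i  (k = 1)
    return -sig*V/(1 + U) + q

sbar, delta = 1.0, 2*np.pi**2 * (2*np.pi**2 + 1)
E = np.ones(n*n)
Zt = np.linalg.solve(A - sbar*np.eye(n*n), delta*E) # uncoupled system (5.5)
Wt = np.linalg.solve(A, Zt)                         # upper pair (W~, Z~); lower is (0, 0)

ub, vb = Wt.copy(), Zt.copy()
lu, lv = np.zeros(n*n), np.zeros(n*n)
AG = A + sbar*np.eye(n*n)                           # A + Gamma*, Gamma* = sbar*I
for m in range(500):
    ub = np.linalg.solve(A, vb)                     # Theta* = 0, B = I, G = 0
    lu = np.linalg.solve(A, lv)
    # F is monotone in u for fixed v (derivative sigma*v/(1+u)^2 keeps one sign),
    # so the sector max/min over S^(m) are attained at an interval endpoint:
    Fmax = np.maximum(F(ub, vb), F(lu, vb))
    Fmin = np.minimum(F(ub, lv), F(lu, lv))
    vb = np.linalg.solve(AG, sbar*vb + Fmax)
    lv = np.linalg.solve(AG, sbar*lv + Fmin)
    gap = np.abs(ub - lu).max() + np.abs(vb - lv).max()
    if gap < 1e-6:                                  # termination criterion (5.6)
        break

center = (n*n) // 2                                 # mesh point (0.5, 0.5)
assert gap < 1e-5                                   # upper/lower sequences met
assert np.all(ub >= lu - 1e-10)                     # monotone ordering preserved
assert abs(ub[center] - 1.0) < 0.05                 # near true value u(0.5, 0.5) = 1
```

The converged value at the center is close to the true solution u(0.5, 0.5) = 1, with the gap accounted for by the O(h²) discretization error.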
Since σ(x, y) is a sign-changing function, the monotone property of the function F(U, V) in U is always destroyed. To find a pair of coupled upper and lower solutions of (5.4) we consider the linear (uncoupled) system

AZ̃ = σ̄Z̃ + δE,  AW̃ = Z̃,   (5.5)

where σ̄ and δ are sufficiently large so that |σ(x, y)| ≤ σ̄ and q(x, y) ≤ δ in Ω̄, and E = (1, 1, …, 1)^T. By Lemma 2.1, the solution (W̃, Z̃) of (5.5) exists uniquely and (W̃, Z̃) ≥ (0, 0) if σ̄ < λ0, where λ0 is the smallest eigenvalue of A (it can be shown that λ0 = 8 sin²(πh/2)/h² ≤ 2π² if h1 = h2 = h ≤ 1/2). It is easy to verify from F(U, V) ≤ σ̄V + δE for all (U, V) ≥ (0, 0) that the pair (Ũ, Ṽ) = (W̃, Z̃) and (Û, V̂) = (0, 0) are coupled upper and lower solutions of (5.4) if σ̄ < λ0. Since ∂Fj/∂v = −σ/(1 + u) ≥ −σ̄ for all u ≥ 0, the matrices (Θ*, Γ*) in the iteration processes (3.1), (3.15), (3.16) and (3.17) may be chosen as (Θ*, Γ*) = (0, σ̄I). Let κ = 1 and σ(x, y) = cos(πx) cos(πy), and take σ̄ = 1 and δ = 2π²κ(2π² + 1). Using (Ū^(0), V̄^(0)) = (W̃, Z̃) and (U̲^(0), V̲^(0)) = (0, 0) we compute the corresponding sequences {(Ū^(m), V̄^(m))} and {(U̲^(m), V̲^(m))} from any one of the Picard method (3.15), the Gauss–Seidel method (3.16) and the Jacobi method (3.17) for different mesh sizes h1 and h2. The termination criterion of the iterations is

‖Ū^(m) − U̲^(m)‖∞ + ‖V̄^(m) − V̲^(m)‖∞ < ε   (5.6)

for various ε > 0. In all the computations the monotone property of the sequences was observed. Numerical results of the sequences {Ū^(m)} and {U̲^(m)} at (xi, yj) = (0.5, 0.5) for the case h1 = h2 = 1/20 are plotted in Fig. 1. As
Fig. 1. The monotone property of {Ū(m), Û(m)} at (0.5, 0.5) by different methods (left: Ū(m); right: Û(m)).
Fig. 2. The numerical solution U ∗ and the true analytical solution u.
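The construction of the pair (W̃, Z̃) from (5.5) can be sketched in a few lines. The snippet below (Python with NumPy; it assumes A is the standard five-point discrete Laplacian on the unit square, which matches the eigenvalue formula quoted above) solves the two linear systems and confirms the nonnegativity claim for σ̄ < λ0.

```python
import numpy as np

# Build the five-point Laplacian A on an n-by-n interior grid (assumed form of (2.6)),
# solve the uncoupled system (5.5), and check (W~, Z~) >= (0, 0).
n = 9                         # interior points per direction, h = 1/(n + 1)
h = 1.0 / (n + 1)
I = np.eye(n)
T = 4 * I - np.eye(n, k=1) - np.eye(n, k=-1)
S = np.eye(n, k=1) + np.eye(n, k=-1)
A = (np.kron(I, T) - np.kron(S, I)) / h**2

# Smallest eigenvalue of A: lambda_0 = 8*sin^2(pi*h/2)/h^2.
lam0 = 8 * np.sin(np.pi * h / 2) ** 2 / h**2
assert np.isclose(lam0, np.linalg.eigvalsh(A).min())

# Values from the text: sigma_bar = 1, delta = 2*pi^2*kappa*(2*pi^2 + 1), kappa = 1.
sigma_bar = 1.0
delta = 2 * np.pi**2 * (2 * np.pi**2 + 1)
E = np.ones(n * n)
Z = np.linalg.solve(A - sigma_bar * np.eye(n * n), delta * E)   # A Z = sigma_bar Z + delta E
W = np.linalg.solve(A, Z)                                        # A W = Z
print(Z.min() > 0 and W.min() > 0)   # sigma_bar < lambda_0 gives (W~, Z~) >= (0, 0)
```

Since σ̄ = 1 is well below λ0 ≈ 19.6 at h = 1/10, A − σ̄I remains a nonsingular M-matrix, which is why both solves return componentwise positive vectors.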
Table 1
The computed solution U* and the true analytical solution u at yj = 0.5

(a) Picard method
  h       (0.1, 0.5)   (0.2, 0.5)   (0.3, 0.5)   (0.4, 0.5)   (0.5, 0.5)
  1/10    0.31414644   0.59754204   0.82244605   0.96684332   1.01659923
  1/20    0.31029066   0.59020792   0.81235150   0.95497647   1.00412168
  1/40    0.30933488   0.58838990   0.80984922   0.95203485   1.00102868

(b) Gauss–Seidel method
  1/10    0.31414651   0.59754216   0.82244621   0.96684350   1.01659941
  1/20    0.31029072   0.59020803   0.81235166   0.95497665   1.00412187
  1/40    0.30933494   0.58839001   0.80984937   0.95203503   1.00102887

(c) Jacobi method
  1/10    0.31414649   0.59754214   0.82244620   0.96684349   1.01659940
  1/20    0.31029072   0.59020802   0.81235165   0.95497664   1.00412186
  1/40    0.30933493   0.58839001   0.80984937   0.95203503   1.00102887

True sol. 0.30901699   0.58778525   0.80901699   0.95105652   1
expected from our theoretical analysis in Theorem 3.3, the sequence {Ū(m)} is monotone nonincreasing while the sequence {Û(m)} is monotone nondecreasing. In addition, the relation (4.2) holds for every m. In the numerical computations we find that the sequences {(Ū(m), V̄(m))} and {(Û(m), V̂(m))} tend to the same limit (U*, V*) as m → ∞, which indicates that (U*, V*) is the unique solution of (5.4) in S = {(U, V); (0, 0) ≤ (U, V) ≤ (W̃, Z̃)}. We take (U(m*), V(m*)) as the computed solution (U*, V*), where m* is the number of iterations required for the tolerance ε = 10^{-6}. Numerical results for U* at yj = 0.5 and various values of xi for the case
Fig. 3. The sequence {Ū(m)} at (0.5, 0.5) for different (Θ*, Γ*).
Fig. 4. The errors e(m) for κ = 0.1.

Table 2
The number of iterations for different methods and (Θ*, Γ*)

                  (Θ* = 2I, Γ* = 3I)              (Θ* = 8I, Γ* = 27I)
Method            h = 1/10  h = 1/20  h = 1/30    h = 1/10  h = 1/20  h = 1/30
Picard                   7         7         7          24        24        24
Gauss–Seidel           137       547      1229         152       560      1243
Jacobi                 271      1089      2454         285      1103      2468
h1 = h2 = h with h = 1/10, 1/20, and 1/40 are listed in Table 1. Also included in the table is the true analytical solution u of (5.1). A graphical presentation of U* (computed by the Picard method with h1 = h2 = 1/40) and u is given in Fig. 2. It is seen that the computed solution agrees closely with the true analytical solution.

For the purpose of comparison, we compute the sequences {(Ū(m), V̄(m))} and {(Û(m), V̂(m))} from any one of the Picard method (3.15), the Gauss–Seidel method (3.16) and the Jacobi method (3.17) for different matrix pairs (Θ*, Γ*). Numerical results for {Ū(m)} at (xi, yj) = (0.5, 0.5) with (Θ*, Γ*) = (2I, 3I) and (Θ*, Γ*) = (8I, 27I) are sketched in Fig. 3 (h1 = h2 = 1/20). We see from this figure that the comparison result (4.1) holds for every m. In Table 2 we list the numbers of iterations required by the Picard, Gauss–Seidel and Jacobi methods for the matrix pairs (Θ*, Γ*) = (2I, 3I) and (Θ*, Γ*) = (8I, 27I) and for the mesh sizes h1 = h2 = h with h = 1/10, 1/20, 1/30, where the tolerance is ε = 10^{-4}. The numerical results demonstrate that the Picard method converges fastest, and that the Gauss–Seidel method converges about twice as fast as the Jacobi method.

To demonstrate the geometric convergence of the iterations we compute the errors

  e(m) = ‖Ū(m) − U*‖∞ + ‖V̄(m) − V*‖∞ + ‖Û(m) − U*‖∞ + ‖V̂(m) − V*‖∞   (5.7)

and the ratios r(m) = e(m)/e(m − 1)
Fig. 5. The ratios r(m) for κ = 0.1.
Fig. 6. The errors e(m) for κ = 0.4.
Fig. 7. The ratios r(m) for κ = 0.4.

Table 3
The values of ρ for κ = 0.1

Method                  ρ
Picard method           0.049901
Gauss–Seidel method     0.983352
Jacobi method           0.991732
for different values of κ, where (Θ*, Γ*) = (0, σ̄I), h1 = h2 = 1/20 and the tolerance is ε = 10^{-13}.

Let κ = 0.1. In this case, the conditions (3.10) and (4.5) are both satisfied. In Figs. 4 and 5 we present the errors e(m) and the ratios r(m) for the Picard, Gauss–Seidel and Jacobi methods, respectively. Our numerical results show that there exists a positive constant ρ < 1 such that

  e(m) ≤ ρ e(m − 1) ≤ · · · ≤ ρ^m e(1).                              (5.8)

This implies that the errors e(m) decrease as rapidly as a geometric progression with ratio ρ. The values of ρ for the different iterative methods are listed in Table 3.
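The ratios r(m) and the bound (5.8) are easy to monitor in practice. The following sketch (plain Python, with a synthetic error history standing in for the computed e(m)) extracts ρ as the largest observed ratio and checks the geometric bound; the decay rate 0.05 mimics the Picard value in Table 3 and is illustrative only.

```python
# Monitor geometric decay as in (5.8) from an error history e(1), e(2), ...
# errors[i] stands for e(i + 1); in practice these come from (5.7).
errors = [0.05 ** m for m in range(1, 9)]                     # synthetic e(1)..e(8)
ratios = [errors[m] / errors[m - 1] for m in range(1, len(errors))]
rho = max(ratios)                                             # estimated geometric ratio

# Check e(m) <= rho**(m-1) * e(1) for every recorded m (small slack for rounding).
ok = all(errors[i] <= rho ** i * errors[0] * (1 + 1e-12)
         for i in range(len(errors)))
print(rho, ok)
```

A ρ well below 1 confirms geometric convergence; values of ρ near 1, as for the Gauss–Seidel and Jacobi methods in Table 3, indicate much slower (though still geometric) decay.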
Table 4
The values of ρ for κ = 0.4

Method                  ρ
Picard method           0.050663
Gauss–Seidel method     0.983356
Jacobi method           0.991732
Next we choose κ = 0.4. In this situation the condition (3.10) still holds, but the condition (4.5) is not satisfied. The errors e(m) for the Picard, Gauss–Seidel and Jacobi iterations are shown in Fig. 6, and the ratios r(m) are given in Fig. 7. The values of ρ satisfying (5.8) for the different methods are listed in Table 4. We observe from these numerical results that the iterations still converge geometrically. This confirms that the condition (4.5) is only a sufficient condition for a geometric rate of convergence.

6. Some concluding remarks

In this paper, we extend the pointwise monotone iterative schemes given in [24] for monotone functions to nonmonotone functions. This extension enlarges the applicability of the monotone iterative method to a larger class of fourth-order boundary value problems of the form (1.2). In addition, we compare and analyse the rates of convergence of the iterations. The block Jacobi and block Gauss–Seidel iterative schemes in [27] for monotone functions can be extended similarly. It should be pointed out that since the iterative schemes proposed here require the computation of the maximum and minimum values of the nonlinear function F(U, V) with respect to U, they are recommended only when the nonlinear function F(U, V) in the problem is truly nonmonotone in U. In this situation, the maximum and minimum values can be obtained by considering the system

  ∂Fi/∂ui = 0,   i = 1, 2, . . . , N.
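For the particular f in (5.2) the stationarity system above has no interior solutions: ∂f/∂u = −σv/(1 + u)² vanishes only when σv = 0, so the componentwise extrema lie at the endpoints of the interval [ûi, ūi]. A minimal sketch (plain Python; the interval and coefficient values are illustrative only):

```python
# Componentwise max/min in u of f(u) = sigma*v/(1 + u) + q over [u_lo, u_hi].
# df/du = -sigma*v/(1 + u)**2 never vanishes unless sigma*v = 0, so for this f
# it suffices to compare the two endpoint values.
def f(u, sigma, v, q):
    return sigma * v / (1 + u) + q

def extrema_in_u(u_lo, u_hi, sigma, v, q):
    a, b = f(u_lo, sigma, v, q), f(u_hi, sigma, v, q)
    return max(a, b), min(a, b)

fmax, fmin = extrema_in_u(0.0, 1.0, -0.5, 2.0, 3.0)
print(fmax, fmin)  # 2.5 2.0
```

For a general nonmonotone f one would instead solve ∂Fi/∂ui = 0 numerically on each interval and compare the interior critical values with the endpoint values.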
In our numerical applications, the maximum and minimum values of the nonlinear function are computed by a Matlab subroutine. An interesting open problem is: under the conditions of Theorem 3.2 alone, can one prove that the monotone sequences converge geometrically with an explicit rate of convergence?

Acknowledgements

The author would like to thank the referee for valuable comments and suggestions which improved the presentation of the paper.

Appendix A

To prove that the limits (Ū, V̄) and (Û, V̂) in (3.7) satisfy (3.3), it suffices to show that

  lim_{m→∞} [ΓV̄(m) + max_{U∈S(m)} F(U, V̄(m))] = ΓV̄ + max_{Û≤U≤Ū} F(U, V̄),
  lim_{m→∞} [ΓV̂(m) + min_{U∈S(m)} F(U, V̂(m))] = ΓV̂ + min_{Û≤U≤Ū} F(U, V̂).   (A.1)

Let

  M = max_{Û≤U≤Ū} F(U, V̄),   Mm = max_{U∈S(m)} F(U, V̄(m)).                 (A.2)

The continuity of the function F(U, V) implies that M and Mm are both well defined, and by hypothesis (H2), ΓV̄(m) + Mm ≥ ΓV̄ + M. By the continuity of F(U, V) in S, for arbitrary ε > 0 there exists a positive number δ such that whenever (U1, V1), (U2, V2) ∈ S and ‖U1 − U2‖∞ + ‖V1 − V2‖∞ < δ,

  ‖F(U1, V1) − F(U2, V2)‖∞ < ε/2.                                          (A.3)

For the above δ > 0 and ε > 0, there exists a positive integer m0 such that when m ≥ m0,

  ‖Ū(m) − Ū‖∞ < δ/2,  ‖Û(m) − Û‖∞ < δ/2,  ‖V̄(m) − V̄‖∞ < δ/2,  ‖Γ‖∞ ‖V̄(m) − V̄‖∞ < ε/2.   (A.4)

Let U(m) ∈ S(m) be such that Mm = F(U(m), V̄(m)), and let ūi, ûi and ui(m) denote the respective ith components of Ū, Û and U(m). Then by (A.4),

  ûi − δ/2 ≤ ui(m) ≤ ūi + δ/2   (i = 1, 2, . . . , N), whenever m ≥ m0.      (A.5)

Define

  wi(m) = ûi,     if ûi − δ/2 ≤ ui(m) ≤ ûi,
          ui(m),  if ûi ≤ ui(m) ≤ ūi,
          ūi,     if ūi ≤ ui(m) ≤ ūi + δ/2,

for i = 1, 2, . . . , N, whenever m ≥ m0, and let W(m) = (w1(m), w2(m), . . . , wN(m))^T. Then Û ≤ W(m) ≤ Ū and

  ‖U(m) − W(m)‖∞ ≤ δ/2, whenever m ≥ m0.

Thus F(W(m), V̄) ≤ M, which implies that

  0 ≤ ΓV̄(m) + Mm − ΓV̄ − M ≤ Γ(V̄(m) − V̄) + F(U(m), V̄(m)) − F(W(m), V̄).

Moreover, by (A.3),

  ‖F(U(m), V̄(m)) − F(W(m), V̄)‖∞ < ε/2, whenever m ≥ m0.

Finally, when m ≥ m0,

  ‖ΓV̄(m) + Mm − ΓV̄ − M‖∞ ≤ ‖Γ‖∞ ‖V̄(m) − V̄‖∞ + ‖F(U(m), V̄(m)) − F(W(m), V̄)‖∞ < ε.

This proves the first equality in (A.1). The proof of the second equality is similar.

References
[1] A.R. Aftabizadeh, Existence and uniqueness theorems for fourth-order boundary value problems, J. Math. Anal. Appl. 116 (1986) 416–426.
[2] R. Agarwal, On fourth-order boundary value problems arising in beam analysis, Differ. Integral Equations 2 (1989) 91–110.
[3] W.F. Ames, Numerical Methods for Partial Differential Equations, third ed., Academic Press, San Diego, 1992.
[4] I. Babuska, J. Osborn, J. Pitkäranta, Analysis of mixed methods using mesh-dependent norms, Math. Comp. 35 (1980) 1039–1062.
[5] A. Berman, R. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1979.
[6] J.F. Botha, G.F. Pinder, Fundamental Concepts in Numerical Solution of Differential Equations, John Wiley, New York, 1983.
[7] A. Cabada, The method of lower and upper solutions for second, third, fourth and higher order boundary value problems, J. Math. Anal. Appl. 185 (1994) 302–320.
[8] P.G. Ciarlet, P.A. Raviart, A mixed finite element method for the biharmonic equation, in: C. de Boor (Ed.), Mathematical Aspects of Finite Elements in Partial Differential Equations, Academic Press, New York, 1974, pp. 125–145.
[9] Q.H. Choi, T. Jung, A fourth order nonlinear elliptic equation with jumping nonlinearity, Houston J. Math. 24 (1998) 735–756.
[10] L. Collatz, Funktionalanalysis und Numerische Mathematik, Springer, Berlin, 1964.
[11] C. De Coster, C. Fabry, F. Munyamarere, Nonresonance conditions for fourth-order nonlinear boundary value problems, Int. J. Math. Math. Sci. 17 (1994) 725–740.
[12] M.A. Del Pino, R.F. Manasevich, Existence for a fourth-order nonlinear boundary problem under a two-parameter nonresonance condition, Proc. Amer. Math. Soc. 112 (1991) 81–86.
[13] G. Grinstein, A. Luther, Application of the renormalization group to phase transitions in disordered systems, Phys. Rev. B 13 (1976) 1329–1343.
[14] C.P. Gupta, Existence and uniqueness theorem for the bending of an elastic beam equation, Appl. Anal. 26 (1988) 289–304.
[15] A.C. Lazer, P.J. McKenna, Large-amplitude periodic oscillations in suspension bridges: some new connections with nonlinear analysis, SIAM Review 32 (1990) 537–578.
[16] A.C. Lazer, P.J. McKenna, Global bifurcation and a theorem of Tarantello, J. Math. Anal. Appl. 181 (1994) 648–655.
[17] J. Li, Full-order convergence of a mixed finite element method for fourth-order elliptic equations, J. Math. Anal. Appl. 230 (1999) 329–349.
[18] R.Y. Ma, H.Y. Wang, On the existence of positive solutions of fourth-order ordinary differential equation, Appl. Anal. 59 (1995) 225–231.
[19] R.Y. Ma, J.H. Zhang, S.M. Fu, The method of lower and upper solutions for fourth-order two-point boundary value problems, J. Math. Anal. Appl. 216 (1997) 416–422.
[20] A.M. Micheletti, A. Pistoia, Multiplicity results for a fourth-order semilinear problem, Nonlinear Anal. 31 (1998) 895–908.
[21] A.M. Micheletti, A. Pistoia, Nontrivial solutions for some fourth-order semilinear elliptic problems, Nonlinear Anal. 34 (1998) 509–523.
[22] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[23] C.V. Pao, On fourth-order elliptic boundary value problems, Proc. Amer. Math. Soc. 128 (2000) 1023–1030.
[24] C.V. Pao, Numerical methods for fourth order nonlinear elliptic boundary value problems, Numer. Methods Partial Differential Equations 17 (2001) 347–368.
[25] C.V. Pao, Finite difference reaction–diffusion systems with coupled boundary conditions and time delays, J. Math. Anal. Appl. 272 (2002) 407–434.
[26] C.V. Pao, Numerical analysis of coupled systems of nonlinear parabolic equations, SIAM J. Numer. Anal. 36 (1999) 393–416.
[27] C.V. Pao, X. Lu, Block monotone iterations for numerical solutions of fourth order nonlinear elliptic boundary value problems, SIAM J. Sci. Comput. 25 (2003) 164–185.
[28] J. Schröder, Fourth-order two-point boundary value problems: estimates by two-sided bounds, Nonlinear Anal. 8 (1984) 107–114.
[29] J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer, New York, 1992.
[30] G. Tarantello, A note on a semilinear elliptic problem, Differ. Integral Equations 5 (1992) 561–565.
[31] S.P. Timoshenko, J.M. Gere, Theory of Elastic Stability, McGraw-Hill, New York, 1961.
[32] R.A. Usmani, A uniqueness theorem for a boundary value problem, Proc. Amer. Math. Soc. 77 (1979) 320–335.
[33] D. Uzunov, Theory of Critical Phenomena, World Scientific, Singapore, 1993.
[34] R.S. Varga, Matrix Iterative Analysis, Prentice-Hall, Englewood Cliffs, NJ, 1962.
[35] Y.-M. Wang, On accelerated monotone iterations for numerical solutions of semilinear elliptic boundary value problems, Appl. Math. Lett. 18 (2005) 749–755.
[36] Y. Yang, Fourth-order two-point boundary value problem, Proc. Amer. Math. Soc. 104 (1988) 175–180.