One-point pseudospectral collocation for the one-dimensional Bratu equation


Applied Mathematics and Computation 217 (2011) 5553–5565


John P. Boyd, Department of Atmospheric, Oceanic and Space Science and Laboratory for Scientific Computation, University of Michigan, 2455 Hayward Avenue, Ann Arbor, MI 48109, United States


Keywords: Chebyshev polynomial; Pseudospectral; Collocation; Orthogonal collocation; Bratu equation

Abstract

The one-dimensional planar Bratu problem is u_xx + λ exp(u) = 0 subject to u(±1) = 0. Because there is an analytical solution, this problem has been widely used to test numerical and perturbative schemes. We show that over the entire lower branch, and most of the upper branch, the solution is well approximated by a parabola, u(x) ≈ u₀(1 − x²), where u₀ is determined by collocation at a single point x = ξ. The collocation equation can be solved explicitly in terms of the Lambert W-function as u(0) ≈ −W(−λ(1 − ξ²)/2)/(1 − ξ²), where both real-valued branches of the W-function yield good approximations to the two branches of the Bratu function. We carefully analyze the consequences of the choice of ξ. We also analyze the rate of convergence of a series of even Chebyshev polynomials which extends the one-point approximation to arbitrary accuracy. The Bratu function is so smooth that it is actually poor for comparing methods, because even a bad, inefficient algorithm is successful. It is, however, a solution so smooth that a numerical scheme (the collocation or pseudospectral method) yields an explicit, analytical approximation. We also fill some gaps in the theory of the Bratu equation. We prove that the general solution can be written in terms of a single, parameter-free function β(x) without knowledge of the explicit solution. The analytical solution can only be evaluated by solving a transcendental eigenrelation whose solution is not known explicitly. We give three overlapping perturbative approximations to the eigenrelation, allowing the analytical solution to be easily evaluated throughout the entire parameter space. © 2011 Elsevier Inc. All rights reserved.

1. Introduction

The one-dimensional Bratu problem has a long history. Bratu's own article appeared in 1914 [9]; generalizations are sometimes called the "Liouville–Gelfand" or "Liouville–Gelfand–Bratu" problem in honor of Gelfand [15] and the nineteenth century work of the great French mathematician Liouville. In recent years, it has been a popular testbed for numerical and perturbation methods [1,12,16,17,20,21,22,26,27]. In this note, we apply very low order spectral methods. Our purpose is to illustrate three themes. First, Chebyshev collocation is not only a numerical method; sometimes it can be applied to generate accurate, explicit, analytic approximations [4]. Second, the Bratu function is so well approximated by a parabola that it is a poor test for numerical methods, because any scheme is accurate for such a smooth function. Third, we fill in some theoretical gaps that still exist for this heavily-studied problem.

E-mail addresses: [email protected], [email protected]. URL: http://www.engin.umich.edu/~jpboyd/. doi:10.1016/j.amc.2010.12.029


The differential equation is

u_xx + λ exp(u) = 0,  (1)

where λ > 0 is a constant parameter. The boundary conditions are

u(±1) = 0.  (2)

The exact solution is

u(x; λ) = log{ z² sech²( z √(λ/2) x ) },  (3)

where the boundary conditions require that z(λ) is the solution of the transcendental equation

z = cosh( √(λ/2) z ).  (4)

This transcendental equation has two real branches, so u(x; λ) has two branches also. The two branches meet at the limit point

λ_limit = 0.8784576797812903015,  z_limit = 1.8101705806989772753.  (5)

Some authors pose the problem on x ∈ [0, 1] instead of [−1, 1]; the two forms are related by u*(x) = u(2x − 1) and λ* = 4λ. In the next section, we derive the one-point collocation approximation and analyze its error. The following section describes how Chebyshev series can be applied to compute solutions of arbitrary accuracy, and derives the rate of convergence. In Section 4, we fill gaps in the theory of the Bratu equation. We show that the solution for general λ can be written in terms of a symmetric, parameter-free function through simple scale analysis, without knowledge of the explicit solution. We also derive three perturbative approximations to the width parameter z(λ). In the appendices, we give respectively a Chebyshev series for z(λ), a description of Newton's iteration for computing z(λ), and lastly new asymptotics for the "minus" branch of the Lambert W-function.

2. One-point polynomial collocation

The simplest polynomial which satisfies the boundary conditions is

u₂(x) = u₀(1 − x²),  (6)

where u₀ is a constant. Substituting this approximation into the equation defines the residual function

R(x; u₀) ≡ u₂,xx + λ exp(u₂) = −2u₀ + λ exp( u₀(1 − x²) ).  (7)

The pseudospectral method, also known as the "collocation" method, "method of selected points", or "orthogonal collocation", determines the unknown coefficients of the approximation by demanding that the residual function be the interpolant of zero. In this case, where there is only a single degree of freedom, u₀, this requires that R(x; u₀) be zero at a single interpolation point, x = ξ, which determines u₀ as the solution of the transcendental equation

−2u₀ + λ exp( u₀(1 − ξ²) ) = 0.  (8)

Remarkably, this collocation equation can be solved for a general collocation point ξ:

u₀ = −W( −λ(1 − ξ²)/2 ) / (1 − ξ²),  (9)

where W denotes a branch of the Lambert W-function. W has two real-valued branches, denoted W₀(x) and W₋₁(x); these generate approximations to the lower and upper branches, respectively, of the two-real-branched Bratu function. The Lambert W-function is a heavily studied transcendental that is available in Maple and other systems [2,11,5].

Where should this interpolation point be? The obvious choice is ξ = 0, the center of the domain. However, the approximation is symmetric about x = 0. If we collocate at a point x = ξ other than the origin, then R(ξ; u₀) = 0 implies that R(−ξ; u₀) = 0, too. Thus, a nonzero interpolation point in fact constrains the residual function at two points. Two possible choices are (i) one of the points of a two-point Chebyshev interior grid, ξ = ±0.707, and (ii) one of the interior points of the four-point Chebyshev–Lobatto grid, ξ = ±0.5. However, Bruce Finlayson, whose books [13,14] contain many examples of low order pseudospectral applications, notes that it is not efficient to choose the collocation point close to the boundary, because the approximation is already strongly constrained by the boundary conditions. He suggests that for low order collocation, it is more efficient to use the roots of the Gegenbauer polynomial of order one and degree two, because these are farther from the boundary than the points of a Chebyshev grid. Boyd [3] shows that Finlayson's recommendation, ξ = √(1/5) = 0.44721, is effective for the two-dimensional Bratu problem.

One disadvantage of all approximation methods is that the limit point of the approximation usually differs from the exact limit point. In particular, the limit point of the Lambert W-function occurs at an argument of −exp(−1), which implies


λ_limit^(1-pt) = 2 exp(−1) / (1 − ξ²).  (10)

The obvious choice of collocation point, ξ = 0, is rather bad in that it predicts a limit point which is about 16% too small. The Chebyshev choice, ξ = √(1/2) ≈ 0.707, is literally off the chart in Fig. 1, predicting a λ_limit 60% too large. Finlayson's Gegenbauer polynomial root, ξ = 0.447, misses the exact limit point by only about 4.6%. The "optimum" collocation point, defined to be that for which the approximate and exact limit points are the same, is ξ_optimum = 0.4030414705. Fig. 2 shows that the one-point approximation is very accurate on the lower branch, but less so on the upper branch. Even there, the collocation approximation is accurate to within about 20%, even for small λ. Table 1 shows the errors for optimum collocation. The errors for Finlayson's choice of collocation point rise sharply near the limit point. This is not peculiar to this problem, but rather is a fundamental difficulty of approximating branches near a limit point. Because the limit point of the approximation is usually at least a little different from the true limit point, there is always a small neighborhood of the limit point

Fig. 1. The ratio of the limit point as given by one-point collocation to the exact value λ_limit = 0.8784576797812903015, plotted against the collocation point ξ.
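Formulas (9) and (10) are one-liners with a library Lambert W routine. The sketch below is illustrative only; it assumes scipy is available, and the function names are ours, not the paper's:

```python
import numpy as np
from scipy.special import lambertw

def u0_one_point(lam, xi=0.403, branch=0):
    """One-point collocation amplitude, Eq. (9): u0 = -W(-lam*(1-xi^2)/2)/(1-xi^2).

    branch=0 (W_0) approximates the lower Bratu branch,
    branch=-1 (W_{-1}) the upper branch.
    """
    a = 1.0 - xi**2
    return float(np.real(-lambertw(-lam * a / 2.0, branch) / a))

def limit_point_one_point(xi):
    """Predicted limit point, Eq. (10): 2*exp(-1)/(1 - xi^2)."""
    return 2.0 * np.exp(-1.0) / (1.0 - xi**2)
```

With ξ = 0 this predicts 2/e ≈ 0.736, about 16% below the exact λ_limit; with the optimum ξ = 0.4030414705 the exact limit point is reproduced by construction.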

Fig. 2. Errors in the L∞ norm (i.e., maximum pointwise errors on x ∈ [−1, 1]), turned into relative errors by dividing by u(0; λ), for various λ ∈ [0, λ_limit ≈ 0.8785], for the optimal choice of collocation point (ξ = 0.403, solid) and Finlayson's choice (ξ = 0.447, dashed).

Table 1
Errors in optimum collocation (ξ = 0.403).

Max. relative error in u(0) on lower branch: 0.0060
Max. relative error in u(0) on upper branch: 0.096
Max. relative L∞ error on lower branch: 0.049
Max. relative L∞ error on upper branch: 0.22

where the error is large or even undefined, in the sense that the approximation may yield solutions for λ > λ_limit where the differential equation has no real-valued solutions.

Fig. 3 shows the errors on the lower branch in the ξ–λ plane. As already noted, the errors rise sharply as the limit point is approached (top of plot). The error contours rise steeply (implying more accuracy for higher λ) near ξ = 0.403, the optimum collocation point. However, for small λ, the error is small regardless of the choice of the collocation point. The lower Bratu branch is, numerically speaking, as easy as blowing down a house of playing cards.

3. High order extension: Chebyshev pseudospectral

To go beyond a one-point approximation, the best strategy is to apply a series of Chebyshev polynomials in a form that both explicitly imposes the boundary conditions and respects the symmetry of the solution with respect to x = 0:

u(x; λ) ≈ (1 − x²) Σ_{n=0}^{N} a_n T_{2n}(x) = Σ_{n=0}^{N+1} b_n T_{2n}(x),  (11)

where T_n(x) ≡ cos( n arccos(x) ) is the nth degree Chebyshev polynomial. The a_n are the Chebyshev coefficients of u(x; λ)/(1 − x²), the unknowns of the computation, while the b_n are the coefficients of u(x; λ) itself, which are more convenient for theory. If N is odd, the (N + 1) interior collocation points are

x_j = cos( jπ/(2N + 3) ),  j = 1, …, N + 1,  (12)

which all lie on half the domain, x ∈ [0, 1], because of the symmetry about the origin which is assumed in the Chebyshev series. Beginning with a first guess u^(0)(x; λ) — the one-point collocation solution is good enough everywhere except perhaps very close to the limit point — one can repeatedly solve a linear equation until convergence:

δ^(k)_xx + λ exp(u^(k)) δ^(k) = u^(k)_xx + λ exp(u^(k)),  u^(k+1) = u^(k) − δ^(k).  (13)

This is the differential equation equivalent of Newton’s iteration for transcendental equations; it is known variously as the ‘‘Newton–Kantorovich iteration’’ or ‘‘quasi-linearization’’ and has been reinvented many times (Appendix D of [6]).
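The Newton–Kantorovich iteration with a Chebyshev basis is easy to prototype. The sketch below, assuming numpy, uses the standard Chebyshev–Lobatto differentiation matrix (rather than the interior grid (12)) purely for brevity, and applies Newton's iteration to the collocated system — algebraically the same update as (13). All names here are ours:

```python
import numpy as np

def cheb(N):
    """Chebyshev-Lobatto nodes x_j = cos(pi*j/N) and differentiation matrix D."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T + np.eye(N + 1)      # guard the diagonal against 0/0
    D = np.outer(c, 1.0 / c) / dX
    D -= np.diag(D.sum(axis=1))       # negative-sum trick for diagonal entries
    return D, x

def bratu_lower(lam, N=32, tol=1e-12, maxit=50):
    """Solve u_xx + lam*exp(u) = 0, u(+-1) = 0 (lower branch) by Newton."""
    D, x = cheb(N)
    D2 = D @ D
    u = 0.5 * (1.0 - x**2)            # parabolic first guess, cf. Eq. (6)
    inner = slice(1, N)               # boundary rows stay pinned at zero
    for _ in range(maxit):
        F = D2 @ u + lam * np.exp(u)  # nonlinear residual
        J = D2 + np.diag(lam * np.exp(u))
        du = np.zeros_like(u)
        du[inner] = np.linalg.solve(J[inner, inner], F[inner])
        u -= du
        if np.max(np.abs(du)) < tol:
            break
    return x, u
```

Each pass solves a linear problem with operator ∂_xx + λ exp(u^(k)) applied to the correction, which is exactly the quasi-linearization described above.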

Fig. 3. Contours of the base-10 logarithm of the errors in the L∞ norm (lower branch only) of the one-point collocation approximation in the ξ–λ plane, where the horizontal axis is the location of the collocation point and λ is the parameter of Bratu's equation. The error norms were turned into relative errors by dividing by u(0; λ).


To smoothly move around the limit point from the lower branch to the upper branch, one may employ pseudoarclength continuation [18,6,26]. It has been known for a century that the rate of convergence of a Chebyshev series is controlled by the singularities in the complex plane of the function being approximated. The singularities of

u(x; λ) = 2 log{ z sech( z √(λ/2) x ) },  (14)

are logarithmic branch points on the imaginary axis. The rate of convergence is controlled by the complex conjugate pair located at

x_sing = ± iπ / ( z √(2λ) ).  (15)

The Chebyshev coefficients b_n (recalling that b_n is the coefficient of T_{2n}(x)) fall as

b_n ~ constant · exp(−qn)/n,  n ≫ 1,  (16)

where the "asymptotic rate of convergence" is

q = 2 | Im{ arccos(x_sing) } |.  (17)

The algebraic factor of 1/n is determined by the logarithmic nature of the singularity; the exponential factor is entirely controlled by the location of the singularities as in (17). Each term in the series is smaller than its predecessor by a factor of exp(q). Thus at the limit point, where q ≈ 2.17, each term is smaller than its predecessor by a factor of 8.75: only twelve terms give better than 10⁻¹⁰ error, as illustrated in Fig. 5. However, although this is hard to see in Fig. 4, the rate of convergence logarithmically goes to zero as λ → 0 on the upper branch.

4. Theorems and the master function

4.1. Theorems

In some ways it is unfortunate that an explicit solution is known for the Bratu problem. It is unfortunate because the explicit solution obscures the fact that for many nonlinear problems, it is possible to deduce many properties directly from the differential equation even in the absence of an explicit solution. Indeed, when an analytical solution is found, such property theorems are often the route to discovery. For the Bratu problem, such a priori properties include the following.

Fig. 4. Left: asymptotic rate of convergence q versus λ; each term in the Chebyshev series is smaller than its predecessor by a factor of exp(q). Right: absolute value of x_sing versus λ. On the lower branch, the rate of convergence is much better (larger q) than on the upper branch, because the singularities are farther from the real axis (larger |x_sing|); the lower branch (solid line) is therefore on top in both panels.
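The curves of Fig. 4 are easy to reproduce from (15) and (17). A sketch assuming numpy, evaluated here at the limit point using the values quoted in (5):

```python
import numpy as np

def convergence_rate(lam, z):
    """q = 2*|Im arccos(x_sing)|, with x_sing = i*pi/(z*sqrt(2*lam)); Eqs. (15), (17)."""
    x_sing = 1j * np.pi / (z * np.sqrt(2.0 * lam))
    return 2.0 * abs(np.imag(np.arccos(x_sing)))   # numpy arccos accepts complex input

# limit-point values from Eq. (5)
lam_limit = 0.8784576797812903015
z_limit = 1.8101705806989772753
q_limit = convergence_rate(lam_limit, z_limit)
```

At the limit point this gives q ≈ 2.17, i.e. each Chebyshev coefficient is smaller than its predecessor by exp(q) ≈ 8.75, as quoted in the text.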


Fig. 5. Solid: even degree Chebyshev coefficients b_n of the Bratu solution at the limit point (lower branch, λ = 0.87846). The dashed curve is exp(−qn)/(1 + n), illustrating the close agreement between Chebyshev theory and the numerically-calculated coefficients.

Theorem 1.

1. If v(x) is a solution to

v_xx + exp(v) = 0,  (18)

then w(z; λ, σ) ≡ v( √(λσ) z )/σ is a solution to

w_zz + λ exp(σw) = 0.  (19)

2. If v(x; A) is the solution to v_xx + exp(v) = 0 with v(0) = A, v_x(0) = 0, then it may be written as

v(x; A) = A + β( exp(A/2) x ),  (20)

where β(x) is the parameter-free solution to

β_xx + exp(β) = 0,  β(0) = β_x(0) = 0.  (21)

3. Choose the origin of the coordinate system so that x = 0 is a minimum or maximum of the Bratu function, as may always be done without loss of generality because the coefficients of the differential equation are independent of x. Then β(x) is symmetric about the origin, that is,

β(−x) = β(x) ∀x.  (22)

4. The curvature β_xx ≤ 0 ∀x.

5. β ≤ 0 ∀x.

6. β(x) ≈ C − D|x|,  |x| ≫ 1,  (23)

for some constant C and some positive constant D.

Proof. The first proposition is proved by substituting w into (19) and canceling common factors of the parameters. Similarly, the second proposition is proved by substituting the assumed form for v into the differential equation and canceling the factors in A. To prove the third proposition, note that one can always decompose an arbitrary function into its parts which are symmetric and antisymmetric with respect to the origin by means of

u(x) ≡ S(x) + A(x),  S(x) ≡ (1/2)( u(x) + u(−x) ),  A(x) ≡ (1/2)( u(x) − u(−x) ).  (24)

The Bratu equation can then be written without approximation as the coupled system

S_xx + λ exp(S) cosh(A) = 0,  A_xx + λ exp(S) sinh(A) = 0.  (25)


The condition that the origin is a minimum or maximum requires that S_x(0) = A_x(0) = 0. Because A(x) is an antisymmetric function, A(0) = 0, since this is the only way the antisymmetry condition A(−x) = −A(x) can be satisfied as x → 0. It is then obvious from the differential equation satisfied by the antisymmetric component, combined with the fact that both initial conditions for A(x) are zero, that A(x) ≡ 0 everywhere. This implies that u(x) must be a symmetric function. The negative semi-definiteness of the curvature follows immediately from writing the Bratu equation as

β_xx = −exp( β(x) ),  (26)

which shows that β_xx must be non-positive, because the exponential function is everywhere non-negative. A function which is everywhere concave downward, symmetric about the origin, and with a root at the origin must be negative semi-definite. As β(x) curves downward with increasing |x|, (26) shows that |β_xx| must decay exponentially fast as the argument of the exponential in the Bratu equation, β, becomes more and more negative. The solution to β_xx ≈ 0 is obviously linear, justifying the asymptotic approximation (23). □

Since both initial conditions are known at the origin, it is trivial to compute β(x) by an initial value method, marching rightward from x = 0; the solution for negative x is just the reflection of this about the origin.

4.2. Rational Chebyshev expansion for β(x)

For the Bratu problem, an explicit analytical form for β is available: β(x) = −2 log( cosh(x/√2) ). For other problems, however, it may be possible to deduce a similar "master" function without being able to compute it except numerically. In this spirit, we ask: even though β(x) is unbounded, with explicitly

β(x) ≈ log(4) − √2 |x|,  |x| ≫ 1,  (27)

can a spectral series be obtained? The answer is yes, and the expansion shows that β(x) could be computed by a spectral method, applied to the differential equation, if an explicit form were unavailable. The first trick is to use special basis functions to accommodate the polynomial unboundedness, as described in great generality in [7]. One can then subtract these from β to obtain a bounded function β̃(x), which can be expanded as a series of rational Chebyshev functions [6,7]:

β̃(x) ≡ β(x) + √2 x erf(x),  so that  β(x) ≈ Σ_{n=0}^{N} a_n TB_{2n}(x; L) − √2 x erf(x).  (28)

The factor √2 x erf(x) is an analytic function with the correct asymptotic behavior (x erf(x) ~ |x|, |x| ≫ 1). The rational Chebyshev functions are

TB_{2n}(x; L) ≡ cos( 2n arccot(x/L) ),  (29)

where L > 0 is a user-choosable "map parameter". Note that |TB_{2n}(x; L)| ≤ 1 ∀x, n; thus, the error in a truncated series is bounded by the sum of the absolute values of all neglected coefficients. Fig. 6 shows that it is possible to obtain about ten decimal places of accuracy with only {TB₀(x; L), TB₂(x; L), …, TB₅₀(x; L)}. Once the "master function" β(x) is known on the infinite domain, it is then straightforward to apply it to solve the Bratu equation on a finite domain,

u_xx + λ exp(u) = 0,  u(±1) = 0,  (30)

for arbitrary λ and both branches. It is only a matter of adding a constant to β(x), in combination with appropriate rescaling of its amplitude and width as expressed by the first two propositions of the theorem.

4.3. Asymptotics of the exact solution on the upper branch

When λ ≪ 1 on the upper branch,

u(x; λ) = 2 log(z) − 2 log( cosh( √(λ/2) z x ) )  (31)
        ≈ log(4z²) − √(2λ) z |x|,  |x| ≫ 1/z.  (32)

In this same regime, it is shown in [8] that it is convenient to express the parameter z that appears in the exact solution in terms of a new parameter r via

z ≡ cosh(r).  (33)

The eigenrelation becomes

√(λ/8) = r exp(−r) / ( 1 + exp(−2r) ),  r ∈ [0, ∞).  (34)


Fig. 6. Absolute values of the TB coefficients of β̃(x) = −2 log( cosh(x/√2) ) + √2 x erf(x): the first twenty-five even coefficients of the rational Chebyshev series for the bounded function β̃, for map parameters L = 2, 4, 6, 8, 10, 12. The horizontal axis is degree and thus extends to N = 50. The rate of convergence is not wildly sensitive to the map parameter; L = 4 is the best choice for this example.
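As remarked after the proof of Theorem 1, β(x) can be computed by marching from x = 0 even when no closed form is known. A sketch assuming scipy; the comparison against β(x) = −2 log cosh(x/√2) is possible here only because the Bratu master function happens to be known explicitly:

```python
import numpy as np
from scipy.integrate import solve_ivp

def beta_march(x_max=10.0):
    """March beta'' = -exp(beta), beta(0) = beta'(0) = 0, rightward from x = 0."""
    rhs = lambda x, y: [y[1], -np.exp(y[0])]   # y = [beta, beta_x], cf. Eq. (21)
    return solve_ivp(rhs, (0.0, x_max), [0.0, 0.0],
                     dense_output=True, rtol=1e-11, atol=1e-12)

sol = beta_march()
xs = np.linspace(0.0, 10.0, 201)
beta_exact = -2.0 * np.log(np.cosh(xs / np.sqrt(2.0)))
err = float(np.max(np.abs(sol.sol(xs)[0] - beta_exact)))
```

By proposition 3 of the theorem, β(−x) = β(x), so the rightward march suffices for the whole line.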

Neglecting exp(−2r) compared to one in the denominator, as is true for small λ, gives an equation that is solved exactly by the Lambert W-function:

r ≈ −W₋₁( −√(λ/8) )  (35)
  ≈ −log( √(λ/8) ) + log( −log( √(λ/8) ) ),  λ ≪ 1,  (36)

where we employed the asymptotics of the "minus" branch of the W-function. The approximation of r by the Lambert W-function ("r_W" in the table) is quite accurate over most of the upper branch, as shown in Table 2: less than 1% relative error for λ < 0.45. In contrast, the simpler log/log–log approximation is only accurate for very small λ. This implies

z(λ) ≈ −log(λ/8) / √(2λ),  λ ≪ 1.  (37)

The thickness of the interior layer where u(x; λ) has non-negligible curvature is thus proportional to the logarithmic factor of λ. This heuristically explains the slower and slower Chebyshev series convergence as λ ≪ 1 on the upper branch: the series must resolve an ever-narrowing region of high curvature as λ → 0.

4.4. Perturbative approximation to z for small λ (lower branch)

The Bratu eigenrelation may be written

λ = 2 arccosh²(z)/z².  (38)

If we expand z(λ) as a power series in λ, substitute into the eigenrelation, expand the eigenrelation in powers of λ, and force the residual to vanish order-by-order, we obtain

z = 1 + (1/4)λ + (13/96)λ² + (541/5760)λ³ + (9509/129024)λ⁴ + ….  (39)

This is helpful in initializing Newton's iteration on the lower branch for small and moderate λ, being more accurate than the limit point expansion (to the order shown below) for λ < 0.6.

Table 2
Parameters on the upper branch.

λ      r_loglog   r_W        r          z
0.01   4.55       4.939587   4.939523   69.9
0.1    2.98       3.4209     3.4194     15.3
0.2    2.45       2.9139     2.9094     9.20
0.5    1.71       2.153      2.127      4.25
0.8    1.29       1.65       1.53       2.42
0.87   1.21       1.54       1.30       1.97
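The entries of Table 2 labelled r_W come straight from (35); with a library Lambert W this is a one-liner. A sketch assuming scipy (function names are ours):

```python
import numpy as np
from scipy.special import lambertw

def r_upper_W(lam):
    """Upper-branch approximation, Eq. (35): r = -W_{-1}(-sqrt(lam/8))."""
    return float(np.real(-lambertw(-np.sqrt(lam / 8.0), -1)))

def z_upper_W(lam):
    """Width parameter z = cosh(r), Eq. (33)."""
    return float(np.cosh(r_upper_W(lam)))
```

For λ = 0.1 this reproduces r_W = 3.4209 of Table 2; the error grows only near the limit point, where the neglected exp(−2r) term in (34) matters.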


4.5. Expansion for both branches about the limit point

In the neighborhood of the limit point, one can write formally

λ − λ_limit = 2 arccosh²(z)/z² − λ_limit  (40)
            = 2 arccosh²(y + z_limit)/(y + z_limit)² − λ_limit  (41)
            ≈ −0.3858439470 y² + 0.4488787094 y³ − 0.3833915842 y⁴ + …,  (42)

where we have made the change of variable y ≡ z − z_limit, and where λ_limit and z_limit are the values of λ and z at the limit point,

λ_limit = 0.8784576788,  z_limit = 1.810170580698.  (43)

To lowest order in y, we find

y ≡ z − z_limit ≈ ±1.609882406 √(λ_limit − λ).  (44)

Substituting into the eigenrelation a formal series for z(λ) — once for each sign in (44) — in powers of the variable

r ≡ √(λ_limit − λ),  (45)

and matching powers of r gives

z ≈ 1.810170581 ± 1.609882406 λ̃^(1/2) + 1.507563549 λ̃ ± 1.456445498 λ̃^(3/2) + 1.435317944 λ̃² ± 1.433667611 λ̃^(5/2) + 1.445853195 λ̃³,  (46)

where the expansion variable is

λ̃ ≡ λ_limit − λ = 0.8784576788 − λ,  (47)

and where the upper signs give the upper branch; the same series, taking the lower signs, gives the lower branch.

5. Comparisons with other methods

It is difficult to perform sensible comparisons with the vast body of existing work on the Bratu problem. It is silly to compare very complicated perturbative calculations with the simple numerical approximation given here. Nevertheless, some previous work merits comment.

In Section 2.3 of his review, He applies a variational method to the Bratu problem [17]. The "variational method" in this instance means that he defines a one-degree-of-freedom approximation as a parabola and determines its coefficient by demanding that a weighted average (integral) of the residual on the interval be zero, instead of collocating the residual. The good news is that this approach bypasses the vexing choice of a collocation point, and furthermore the integral can be analytically evaluated. The bad news is that the integral yields an error function, so the equation for u(0) cannot be solved analytically. With numerical rootfinding, He gets the bifurcation point to 1.57% error, better than ours, but at the sacrifice of an explicit solution.

Aregbesola [1] employs polynomial collocation. However, he does not employ symmetry; his approximation with two points at ξ = ±1/3 is in fact equivalent to our one-point collocation scheme. Without exploitation of symmetry, Aregbesola was forced to solve a pair of equations for the two coefficients numerically. His basis functions are built up from two-point Taylor series, which is a much less robust and also more complicated approach than the Chebyshev basis described above.

Hassan and Erturk [16] performed a Taylor expansion about one endpoint, retaining the slope at the endpoint as a parameter, and then calculated it by solving the polynomial equation that results from imposing the other boundary condition.
This fails everywhere on the upper branch, and on part of the lower branch, because the logarithmic singularities of u(x; λ) shrink the radius of convergence so that it does not include the other endpoint, which happens whenever |x_sing| < √3. In contrast, the Chebyshev series is always convergent over the whole interval, so long as the function u(x; λ) is not singular on the real interval x ∈ [−1, 1] itself (Chapter 2, [6]). Other methods like Adomian decomposition [27,26,12] implicitly use Taylor expansions and are subject to the same difficulty.

6. Summary

The following empirical rule is sound advice to number-crunchers and arithmurgists.

Proposition 1 (Parabola/Sine Rule). If a solution is well-approximated by a parabola or a sine function, or otherwise is very, very smooth, then it is a poor example for comparing and evaluating numerical methods, because even a dreadful, awful numerical method will work tolerably well for such a "nice" function.


As an editor of a computational journal for fifteen years, I cannot begin to count the number of papers whose authors have sabotaged their work by violating this precept. The future belongs to numerical and perturbative schemes that can handle brutal, difficult problems such as Troesch's equation [19,10], or the ensemble of challenging problems in [23–25]. Who cares if a method can accurately solve a problem whose solution is a parabola or a sine wave? Lewis Richardson was solving partial differential equations by hand a century ago. Sir Richard Southwell and his indefatigable calculator, Grace Vaisey, solved PDE after PDE with paper and pencil in the early twentieth century. They would have sneered at a one-dimensional ODE whose solution is very nearly a parabola. Success-for-the-smooth leaves us in total ignorance about success-for-the-hard.

The Bratu solution is so smooth that over the entire lower branch, it is well-approximated by a parabola. This means that it is actually a terrible illustration of general numerical techniques. We have revisited this problem to show that it is useful mostly in a completely different sense: to illustrate the theme of Chapter 20 of the author's book [6], of his article [4], and of Finlayson's work [13,14]: that pseudospectral methods can be used analytically for very smooth solutions. The rules for analytical approximation are somewhat different from those for high order numerical approximation: the best collocation point, when there is only one, is not a point on the usual Chebyshev grid. We have further shown that it is possible to reduce the general problem on the finite domain to the computation of a single, parameter-free symmetric function on an infinite interval. The best numerical method is not the blind application of a number-crunching technique, but rather a numerical strategy developed after close theoretical analysis of the problem. Good numerics is not independent of the physics, but rather flows from the physics.
Acknowledgments

This work was supported by the National Science Foundation through grants OCE 0451951 and ATM 0723440. I thank the reviewers and the editor-in-chief for their helpful comments and references to Troesch's problem and other nonlinear ODEs that are better testbeds than the Bratu problem.

Appendix A. Chebyshev expansion for the Bratu dispersion relation

The function z(λ) cannot be accurately expanded directly as a Chebyshev series because of the logarithmic singularities on the upper branch. Boyd [8] showed that it is possible to expand a dependent quantity, r(λ), from which an approximation can be obtained in the form

Table 3
Chebyshev coefficients for r_Cheb(m). Read across rows: the first row is a0, a1, a2, the second row a3, a4, a5, and so on; each of the three blocks below is one flattened column of the table, so the first block lists a0, a3, a6, …, the second a1, a4, a7, …, and the third a2, a5, a8, ….

0.74045952017122 0.53987162496955 0.24398611080787 0.02838306892083 0.06858520186835 0.06239294099808 0.01937944774297 0.00844798166647 0.01159835822230 0.00468121938588 0.00008779242049 0.00087326935646 0.00032500748452 0.00003667202496 2.48396452e05 2.82578672e05 9.18943448e06 5.82721936e07 1.02770104e06 8.98760305e07 1.01960974e07 1.056332828e08 6.268478590e08 1.8984976803e08 5.3362017097e09 4.0581748441e09 3.4980373961e09 6.7732035053e11 8.0174086761e10 1.0461828126e09 5.5383823482e10 5.7064476816e10 6.0211397587e10 5.7182158499e10

0.31369470338778 0.20371681487980 0.22403187029337 0.11049296267040 0.00047644472859 0.04283337485235 0.03034272773816 0.00602667058337 0.00534222232856 0.00469998032420 0.00134434502264 0.00013777427141 0.00017210414676 0.00001670449309 4.13689713e06 1.80681259e05 9.81729595e06 2.01296275e06 1.23183079e06 1.14548167e06 4.194454865e07 1.311822365e07 1.2124586907e07 6.6734000393e08 2.1708893360e08 1.5157048780e08 1.0291208671e08 3.9780586012e09 2.1933516158e09 1.5040812177e09 5.5550035077e10 1.9658708378e10 7.7211678492e11

0.76813593962155 0.10377852157648 0.10468953058419 0.11818701239755 0.04927171732195 0.00890124745652 0.02394500558253 0.01298155008367 0.00113367092892 0.00253381777603 0.00150098452585 0.00028029783579 0.00002036437205 0.00001783385921 2.02448273e05 1.46310201e06 4.46285187e06 6.14910587e07 1.89123876e07 6.00471715e07 2.926139890e07 7.149602807e08 7.2063547682e08 5.1275745982e08 1.7414374616e08 1.0528387046e08 8.3270826272e09 3.7411661668e09 2.0449381510e09 1.6586879913e09 1.0111687174e09 7.1685995095e10 6.3982892182e10


m ≡ √(λ/8),  (48)

z = cosh( r(m) ),  (49)



rðmÞ ¼ r Cheb ðmÞ  W ðmÞHð1  m=0:15; L ¼ 4Þ; n  pffiffiffiffiffiffiffiffiffiffiffiffiffiffio Hðx; LÞ  ð1=2Þ 1 þ erf Lx= 1  x2 ; X rCheb ðmÞ ¼ aj cos ðjt½x Þ;

ð50Þ ð51Þ ð52Þ

j¼0

8 qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pffiffiffiffiffiffiffiffi9 < 0:33137170967459079  k=8= ; t ¼ arccos  : ; 0:57564894655909

ð53Þ

where in the last line the plus sign generates the upper branch and the minus sign generates the lower branch. The coefficients a_j are given in Table 3.
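For concreteness, a series of the form (52) is evaluated as a plain cosine sum. Below is a minimal Python sketch; the function name and the coefficients are illustrative placeholders, not the Table 3 values (whose signs are not reproduced here):

```python
import math

def eval_cos_series(a, t):
    """Evaluate sum_{j=0}^{N} a_j * cos(j*t), the form of Eq. (52);
    a = (a_0, a_1, ..., a_N)."""
    return sum(aj * math.cos(j * t) for j, aj in enumerate(a))

# Hypothetical coefficients for illustration only:
a = [0.74, 0.31, 0.77, 0.54]
t = math.acos(0.25)           # t = arccos(x) for some x in [-1, 1]
value = eval_cos_series(a, t)
```

Since cos(j arccos x) = T_j(x), this is an ordinary Chebyshev sum; for the 100-term Table 3 series, Clenshaw's recurrence would be the numerically preferable way to sum it.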

Fig. 7. Thick solid: exact z(λ) on the lower branch. Dashed: limit point series up to and including r⁶. Dotted: small-λ series up to and including λ⁴.

Fig. 8. Thick solid: exact z(λ) on the upper branch. Dashed: limit point series up to and including r⁶. Dotted: log/log-log approximation. Dash-dot: Lambert W approximation. The W-function approximation is almost indistinguishable from the true z(λ) over the entire upper branch except very near the limit point.


Alternatively, one may use Newton's iteration as described in the next section.

Appendix B. Newton's iteration

The dispersion relation can be written

λ = 2 arccosh²(z) / z².                                            (54)

This form is graphically convenient because the inverse may be plotted by calculating λ for many values of z and then simply reversing the axes so that λ is the horizontal axis. Newton's iteration is

z^{n+1} = z^n − [ 2 arccosh²(z^n)/(z^n)² − λ ] / { 4 [ arccosh(z^n)/(z^n)² ] [ 1/√((z^n)² − 1) − arccosh(z^n)/z^n ] },   (55)
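As an illustrative sketch (the function name `bratu_z`, the starting guess, and the stopping rule are my own choices, not from the paper), iteration (55) can be coded directly:

```python
import math

def bratu_z(lam, z0, tol=1e-12, maxit=50):
    """Newton iteration (55) for the dispersion relation (54):
    solve lam = 2*arccosh(z)**2 / z**2 for z > 1.
    The starting guess z0 selects the branch."""
    z = z0
    for _ in range(maxit):
        ac = math.acosh(z)
        f = 2.0 * ac ** 2 / z ** 2 - lam
        fp = 4.0 * (ac / z ** 2) * (1.0 / math.sqrt(z ** 2 - 1.0) - ac / z)
        step = f / fp
        z -= step
        if abs(step) < tol:
            break
    return z

# Consistency check: pick z, compute lambda from (54), then recover z.
z_true = 1.5
lam = 2.0 * math.acosh(z_true) ** 2 / z_true ** 2
z_rec = bratu_z(lam, z0=1.4)
```

In practice the starting guess z0 would come from the small-λ, limit point, or Lambert W approximations compared in Figs. 7 and 8.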

In (55), the superscript n denotes the iteration level. To initialize the iteration, we use the approximations derived earlier. These are compared in Figs. 7 and 8 for the lower and upper branches, respectively. To the orders shown, the small-λ series (39) is best on the lower branch for λ ∈ [0, 0.6], while the Lambert W-function approximation, z ≈ cosh{ W(−√(λ/8)) }, is best on the same range for the upper branch. The limit point series (46), taking the upper signs for the upper branch and the lower signs for the lower branch, is best for λ ∈ [0.6, λ_limit].

Appendix C. Lambert W-function

The lower branch of the Lambert W-function can be evaluated by using an approximation from [2]:

W(x) ≈ −1 − r − (2/M₁) { 1 − 1/[ 1 + M₁ √(r/2) / ( 1 + M₂ r exp( M₃ √r ) ) ] },   (56)

where

r ≡ −1 − log(−x),   M₁ = 0.3361,   M₂ = −0.0042,   M₃ = −0.0201.   (57)

This is accurate to about 0.025%. Higher accuracy can be obtained by performing the Newton iteration (same for either branch)

W^{n+1} = W^n − ( W^n − x exp(−W^n) ) / ( 1 + W^n ),               (58)

until convergence. For small negative argument x,

W ≈ log(−x) − log(−log(−x)),   |x| ≪ 1,   sign(x) < 0.             (59)
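Putting (56)-(58) together, here is a hedged Python sketch (the function name and the tolerances are my own; the signs of M₂ and M₃ are taken as negative, consistent with [2]):

```python
import math

def lambert_w_lower(x, tol=1e-14, maxit=50):
    """W_{-1}(x) for x in (-1/e, 0): seed with the Barry et al.
    approximation (56)-(57), then refine with Newton iteration (58)."""
    M1, M2, M3 = 0.3361, -0.0042, -0.0201
    r = -1.0 - math.log(-x)                         # Eq. (57)
    w = -1.0 - r - (2.0 / M1) * (
        1.0 - 1.0 / (1.0 + (M1 * math.sqrt(r / 2.0))
                     / (1.0 + M2 * r * math.exp(M3 * math.sqrt(r)))))
    for _ in range(maxit):                          # Eq. (58)
        step = (w - x * math.exp(-w)) / (1.0 + w)
        w -= step
        if abs(step) < tol:
            break
    return w

x = -0.1
w = lambert_w_lower(x)
resid = w * math.exp(w) - x   # vanishes at the true W(x)
```

The residual check w·exp(w) − x is the defining relation of the W-function, so it verifies the result independently of the seed formula.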

References

[1] Y.A.S. Aregbesola, Numerical solution of Bratu problem using the method of weighted residual, Electron. J. South. Afr. Math. Sci. 3 (2003) 1-7.
[2] D.A. Barry, J.Y. Parlange, L. Li, H. Prommer, C.J. Cunningham, F. Stagnitti, Analytical approximations for real values of the Lambert W-function, Math. Comput. Simulat. 53 (2000) 95-103.
[3] J.P. Boyd, An analytical and numerical study of the two-dimensional Bratu equation, J. Sci. Comput. 1 (1985) 183-206.
[4] J.P. Boyd, Chebyshev and Legendre spectral methods in algebraic manipulation languages, J. Symb. Comput. 16 (1993) 377-399.
[5] J.P. Boyd, Global approximations to the principal real-valued branch of the Lambert W-function, Appl. Math. Lett. 11 (1998) 27-31; errata: in Eq. (4), 11/36 should be √2·11/36.
[6] J.P. Boyd, Chebyshev and Fourier Spectral Methods, second ed., Dover, Mineola, New York, 2001.
[7] J.P. Boyd, Rational Chebyshev spectral methods for unbounded solutions on an infinite interval using polynomial-growth special basis functions, Comput. Math. Appl. 41 (2001) 1293-1315.
[8] J.P. Boyd, Chebyshev polynomial expansions for simultaneous approximation of two branches of a function with application to the one-dimensional Bratu equation, Appl. Math. Comput. 143 (2002) 189-200.
[9] G. Bratu, Sur les équations intégrales non linéaires, Bull. Soc. Math. France 42 (1914) 113-142.
[10] S.-H. Chang, A variational iteration method for solving Troesch's problem, J. Comput. Appl. Math. 234 (2010) 3043-3047.
[11] R.M. Corless, G.H. Gonnet, D.E.G. Hare, D.J. Jeffrey, D.E. Knuth, On the Lambert W function, Adv. Comput. Math. 5 (1996) 329-359.
[12] E. Deeba, S. Khuri, S. Xie, An algorithm for solving boundary value problems, J. Comput. Phys. 159 (2000) 125-138.
[13] B.A. Finlayson, The Method of Weighted Residuals and Variational Principles, Academic, New York, 1973, p. 412.
[14] B.A. Finlayson, Orthogonal collocation in chemical-reaction engineering, Catal. Rev. Sci. Eng. 10 (1974) 69-138.
[15] I.M. Gelfand, Some problems in the theory of quasi-linear equations, Amer. Math. Soc. Transl. Ser. 2 29 (1963) 295-381.
[16] I. Hassan, V. Erturk, Applying differential transformation method to the one-dimensional planar Bratu problem, Int. J. Contemp. Math. Sci. 2 (2007) 1493-1504.
[17] J.H. He, Some asymptotic methods for strongly nonlinear equations, Int. J. Mod. Phys. B 20 (2006) 1141-1199.
[18] H.B. Keller, Numerical Methods for Two-Point Boundary-Value Problems, Dover, New York, 1992.
[19] S.A. Khuri, A numerical algorithm for solving Troesch's problem, Int. J. Comput. Math. 80 (2003) 493-498.
[20] S.A. Khuri, A new approach to Bratu's problem, Appl. Math. Comput. 147 (2004) 131-136.

[21] S. Li, S. Liao, An analytic approach to solve multiple solutions of a strongly nonlinear problem, Appl. Math. Comput. 169 (2005) 854-865.
[22] A. Mounim, B. de Dormale, From the fitting techniques to accurate schemes for the Liouville-Bratu-Gelfand problem, Numer. Meth. Part. Diff. Eq. 22 (2006) 761-775.
[23] M.R. Scott, On the conversion of boundary-value problems into stable initial-value problems via several invariant imbedding algorithms, in: A.K. Aziz (Ed.), Numerical Solutions of Boundary Value Problems for Ordinary Differential Equations, Academic Press, New York, 1975, pp. 89-146.
[24] M.R. Scott, H.A. Watts, A systematized collection of codes for solving two-point boundary-value problems, in: L. Lapidus, W.E. Schiesser (Eds.), Numerical Methods for Differential Systems, Academic Press, New York, 1976.
[25] M.R. Scott, H.A. Watts, Computational solution of linear two-point boundary value problems via orthonormalization, SIAM J. Numer. Anal. 14 (1977) 40-70.
[26] M. Syam, A. Hamdan, An efficient method for solving Bratu equations, Appl. Math. Comput. 176 (2006) 704-713.
[27] A.M. Wazwaz, Adomian decomposition method for a reliable treatment of the Bratu-type equations, Appl. Math. Comput. 166 (2005) 652-663.