Parameter estimation by spectral approximation

Applied Mathematics and Computation 104 (1999) 1-14
www.elsevier.nl/locate/amc

Ki Hyun Cha (a), Ben-yu Guo (a,b,*), Yong Hoon Kwon (a,1)

(a) Department of Mathematics, Pohang University, Pohang 790-784, South Korea
(b) Department of Mathematics, Shanghai University, Jiading, Shanghai, People's Republic of China

Abstract

In this paper, Chebyshev and Legendre approximations are proposed for estimating parameters in differential equations; they are easy to implement. Convergence and spectral accuracy are proved, even without some of the conditions imposed in other papers. The numerical results show the advantages of this new approach. © 1999 Elsevier Science Inc. All rights reserved.

AMS Classification: 34A55; 65L10; 65L99

Keywords: Parameter estimation; Spectral approximation; Convergence and spectral accuracy

1. Introduction

In studying many problems arising in science and engineering, we need to estimate parameters in governing differential equations, using data obtained from experimental observations. Some theoretical results for such problems can be found in the literature, e.g., see [1-4]. There is also a good deal of work concerning their numerical approximation, e.g., see [5-10]. One of the usual numerical methods is the optimization method, as in [5]: we first guess the unknown parameters in some way, and then construct a sequence of undetermined parameters by minimizing certain cost functions. Under

* Corresponding author. Fax: 86 21 5952 9932; e-mail: [email protected].
1 The research of this author was supported in part by the Korean Ministry of Education BSRI98-1430, the Graduate School for Environmental Science and Technology, and the Non-directed Research Fund, Korean Research Foundation.

0096-3003/99/$ - see front matter © 1999 Elsevier Science Inc. All rights reserved.
PII: S0096-3003(98)10055-3

some conditions, the sequence converges with the exact parameters as its limit. The main disadvantage of this approach is the costly computation. Another method is the so-called direct method. In this case, we approximate the differential equations with the unknown parameter directly, and then solve the resulting problems using the given data. Both the finite difference method and the finite element method have been used in this field, e.g., see [6-8]. Some other methods can be found in [9,10].

As we know, the spectral method has developed rapidly in the past two decades, see [11-14]. It has been widely used for the numerical solution of forward problems of differential equations. Its main merit is high accuracy. But, to our knowledge, it has never been applied to any inverse problem. The question is whether spectral approximation is also available for some inverse problems. The purpose of this paper is to answer this question.

For simplicity, let $\Lambda = (-1, 1)$ and consider the following model problem

$$ -\frac{d}{dx}\Big( a(x)\,\frac{du}{dx}(x) \Big) = f(x), \quad x \in \Lambda, \qquad u(-1) = u_0, \quad u(1) = u_1, \qquad (1) $$

where $a(x) \ge a_{\min} > 0$. If $a(x) \in L^\infty(\Lambda)$ and $f(x) \in L^2(\Lambda)$, then (1) has a unique solution $u(x) \in H^1(\Lambda)$, and $a(x)(du/dx)(x) \in H^1(\Lambda)$. Furthermore, if, in addition, $a(x)$ is uniformly Lipschitz continuous, then $u(x) \in H^2(\Lambda)$ (see [15,16]). Now we measure the values of $u(x)$ at certain points in $\Lambda$, and want to determine the corresponding values of the unknown parameter $a(x)$.

Recently Guo, Cha and Kwon [17] developed a mixed spectral-pseudospectral method based on the weak formulation of Eq. (1). In this paper we consider the strong formulation of Eq. (1) and provide another algorithm. The main tricks in this paper are fitting $u(x)$ by Chebyshev and Legendre interpolations, approximating $a(x)(du/dx)(x)$ by a simple and accurate algorithm, and then estimating $a(x)$ pointwise. The main advantages of this new approach are the following. First of all, it is very easy to implement, and so saves a great deal of work. Next, the algorithm converges almost everywhere, even without the condition that for a certain positive constant $c$,

$$ \inf_{x \in \Lambda} \max\left( \left|\frac{du}{dx}(x)\right|, \left|\frac{d^2 u}{dx^2}(x)\right| \right) \ge c. \qquad (2) $$

This condition is usually imposed for the convergence of other direct methods, see [3,10]. Finally, the numerical estimate of $a(x)$ has spectral accuracy.

The outline of this paper is as follows. The Chebyshev approximation is proposed in Section 2, and the Legendre approximation is provided in Section 3. The almost everywhere convergence of both algorithms is proved in these two sections. If, in addition, condition (2) holds, then uniform convergence


follows. The final section is devoted to numerical results, which coincide with the theoretical analysis. The main idea of this paper is also applicable to other inverse problems.

2. Chebyshev approximation

Let $\omega(x)$ be a non-negative, continuous and integrable real-valued function on $\Lambda$. The weighted space $L^2_\omega(\Lambda)$ is defined by

$$ L^2_\omega(\Lambda) = \{ v : \|v\|_{L^2_\omega(\Lambda)} < \infty \}, $$

equipped with the norm

$$ \|v\|_{L^2_\omega} = \Big( \int_\Lambda |v(x)|^2 \omega(x)\,dx \Big)^{1/2}. $$

For any non-negative integer $m$, the Hilbert space $H^m_\omega(\Lambda)$ is defined as

$$ H^m_\omega(\Lambda) = \{ v : \partial_x^k v \in L^2_\omega(\Lambda),\ 0 \le k \le m \}. $$

For any real $r \ge 0$, $H^r_\omega(\Lambda)$ is defined by space interpolation, with the norm $\|\cdot\|_{r,\omega}$. If $\omega(x) \equiv 1$, then we denote $L^2_\omega(\Lambda)$, $H^r_\omega(\Lambda)$, $\|\cdot\|_{L^2_\omega}$ and $\|\cdot\|_{r,\omega}$ by $L^2(\Lambda)$, $H^r(\Lambda)$, $\|\cdot\|$ and $\|\cdot\|_r$, respectively.

In this section, we consider the Chebyshev approximation. Let $\omega(x) = (1-x^2)^{-1/2}$ and let $N$ be any positive integer. Let $x_j$ be the Chebyshev-Gauss-Lobatto interpolation points, i.e.,

$$ x_j = \cos\frac{\pi(N-j)}{N}, \quad 0 \le j \le N, $$

while $\omega_j$ stands for the corresponding weights, i.e.,

$$ \omega_0 = \omega_N = \frac{\pi}{2N}, \qquad \omega_j = \frac{\pi}{N}, \quad 1 \le j \le N-1. $$

Let $T_l(x)$ be the Chebyshev polynomial of the first kind of degree $l$, i.e., $T_l(x) = \cos(l \arccos x)$, $l = 0, 1, 2, \ldots$ For any function $v \in C(\bar\Lambda)$, the Chebyshev interpolant of $v(x)$ is

$$ I_N v(x) = \sum_{l=0}^{N} \tilde v_l T_l(x), $$

where

$$ \tilde v_l = \frac{1}{c_l} \sum_{j=0}^{N} v(x_j) T_l(x_j)\,\omega_j, \qquad c_0 = c_N = \pi, \quad c_l = \frac{\pi}{2}, \ 1 \le l \le N-1. $$
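As an illustration (ours, not part of the paper), the nodes, weights and discrete coefficients above take only a few lines of NumPy; the function names `cgl_points` and `cheb_coeffs` are our own.

```python
import numpy as np

def cgl_points(N):
    """Chebyshev-Gauss-Lobatto points x_j = cos(pi*(N - j)/N), so x_0 = -1, x_N = 1."""
    j = np.arange(N + 1)
    return np.cos(np.pi * (N - j) / N)

def cheb_coeffs(v):
    """Discrete Chebyshev coefficients of I_N v from values v_j at the CGL points."""
    N = len(v) - 1
    x = cgl_points(N)
    w = np.full(N + 1, np.pi / N)            # quadrature weights omega_j
    w[0] = w[-1] = np.pi / (2 * N)
    T = np.cos(np.outer(np.arange(N + 1), np.arccos(np.clip(x, -1.0, 1.0))))  # T_l(x_j)
    c = np.full(N + 1, np.pi / 2)            # normalization constants c_l
    c[0] = c[-1] = np.pi
    return (T * w) @ v / c

# the interpolant reproduces the sampled values at the nodes
N = 12
x = cgl_points(N)
v = np.exp(x)
coef = cheb_coeffs(v)
T = np.cos(np.outer(np.arange(N + 1), np.arccos(np.clip(x, -1.0, 1.0))))
err = np.max(np.abs(coef @ T - v))
```

Because the discrete transform uses the Gauss-Lobatto quadrature, $I_N v$ reproduces $v$ exactly at the nodes, which the final check confirms to rounding error.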


We suppose that the value of $a(x)$ at a certain point is known. For simplicity, let $a_i = a(x_i)$ and assume that $a(x_0) = a_0$ is known. Usually one approximates the values $a_i$ directly, e.g., see [5-10]. But we now introduce $b(x) = a(x)(du/dx)(x)$, and shall first approximate $b_i = b(x_i)$ by $B_i$; then we approximate $a_i$ by $A_i$. To do this, let

$$ F_i = \int_{x_i}^{x_{i+1}} f(x)\,dx. \qquad (3) $$

Then integrating Eq. (1) yields

$$ b_{i+1} = b_i - F_i, \quad 0 \le i \le N-1, \qquad b_0 = a_0 \frac{du}{dx}(-1). \qquad (4) $$

Hereafter

$$ \frac{du}{dx}(x_i) = \frac{du}{dx}\Big|_{x = x_i}, \quad 0 \le i \le N, \ \text{etc.} $$

Set

$$ G_i = \int_{x_i}^{x_{i+1}} I_N f(x)\,dx. \qquad (5) $$

Then the set of $B_i$ is defined by

$$ B_{i+1} = B_i - G_i, \quad 0 \le i \le N-1, \qquad B_0 = a_0 \frac{d}{dx} I_N u(-1), \qquad (6) $$

while $A_0 = a_0$ and, for $1 \le i \le N$,

$$ A_i = \begin{cases} B_i \big(\tfrac{d}{dx} I_N u(x_i)\big)^{-1}, & \text{if } \tfrac{d}{dx} I_N u(x_i) \ne 0, \\ A_{i-1}, & \text{otherwise.} \end{cases} \qquad (7) $$
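A minimal NumPy sketch (ours) of scheme (3)-(7): values of $u$ and $f$ at the CGL points go in, pointwise estimates $A_i$ come out. $G_i$ is computed exactly by integrating the Chebyshev series of $I_N f$ term by term via `chebint`; the test problem at the end is our own, not one of the paper's examples.

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb

def cgl_points(N):
    # Chebyshev-Gauss-Lobatto points x_j = cos(pi*(N - j)/N), x_0 = -1, x_N = 1
    j = np.arange(N + 1)
    return np.cos(np.pi * (N - j) / N)

def cheb_coeffs(v):
    # discrete Chebyshev coefficients of I_N v from values at the CGL points
    N = len(v) - 1
    x = cgl_points(N)
    w = np.full(N + 1, np.pi / N); w[0] = w[-1] = np.pi / (2 * N)
    T = np.cos(np.outer(np.arange(N + 1), np.arccos(np.clip(x, -1.0, 1.0))))
    c = np.full(N + 1, np.pi / 2); c[0] = c[-1] = np.pi
    return (T * w) @ v / c

def estimate_a(u_vals, f_vals, a0):
    # scheme (3)-(7): pointwise estimates A_i of a(x_i) from samples of u and f
    N = len(u_vals) - 1
    x = cgl_points(N)
    du = Cheb.chebval(x, Cheb.chebder(cheb_coeffs(u_vals)))  # (d/dx I_N u)(x_i)
    P = Cheb.chebval(x, Cheb.chebint(cheb_coeffs(f_vals)))   # antiderivative of I_N f
    G = P[1:] - P[:-1]                 # G_i, evaluated exactly via the series
    B = np.empty(N + 1)
    B[0] = a0 * du[0]                  # (6): B_0 = a_0 (d/dx) I_N u(-1)
    B[1:] = B[0] - np.cumsum(G)        # B_{i+1} = B_i - G_i
    A = np.empty(N + 1)
    A[0] = a0                          # (7)
    for i in range(1, N + 1):
        A[i] = B[i] / du[i] if du[i] != 0 else A[i - 1]
    return x, A

# hypothetical smooth test case: a(x) = 1 + 0.5x, u(x) = exp(x),
# so f = -(a u')' = -(1.5 + 0.5x) exp(x) and a0 = a(-1) = 0.5
N = 16
xs = cgl_points(N)
_, A = estimate_a(np.exp(xs), -(1.5 + 0.5 * xs) * np.exp(xs), 0.5)
err = np.max(np.abs(A - (1 + 0.5 * xs)))
```

For this analytic test case the recovered $A_i$ agree with $a(x_i)$ to near machine precision already at $N = 16$, in line with the spectral accuracy claimed below.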

Remark 2.1. Since $I_N f(x)$ is an algebraic polynomial of degree $N$, the values of $G_i$ can be evaluated exactly. So the errors $|B_i - b_i|$ depend only on the differences between $u(x)$, $f(x)$ and their interpolants. The smoother the functions, the smaller the errors. In particular, for analytic functions $u(x)$ and $f(x)$, the errors decay faster than $\alpha N^{-\beta}$, $\alpha$ and $\beta$ being any positive constants. Thus we can approximate $b(x_i)$ with spectral accuracy. In fact, this is one of the advantages of the new approach (6) and (7).

Remark 2.2. Let $\Lambda^* = \Lambda \cap \{x : (du/dx)(x) \ne 0\}$, and $\tilde\Lambda = \Lambda - \Lambda^*$. If the measure of $\tilde\Lambda$ is positive, then for all $x_1, x_2 \in \tilde\Lambda$,

$$ \int_{x_1}^{x_2} f(x)\,dx = 0. $$


It means that $f(x) \equiv 0$ in $\tilde\Lambda$. Thus we can predict this case and deal with it in advance. This fact allows us to assume that measure$(\Lambda^*)$ = measure$(\Lambda)$. If $x_i \in \Lambda^*$, then for smooth $u(x)$ and large $N$, $(d/dx)I_N u(x_i) \ne 0$, and thus $A_i$ is determined by (7) uniquely. If $x_i \in \tilde\Lambda$, then it is an isolated zero of $(du/dx)(x)$, and so $a_i$ is arbitrary. Hence algorithms (6) and (7) are well-defined for all $x_i$, $0 \le i \le N$. This is another advantage of this approach.

We now analyze the errors. In the sequel, we denote by $c$ a generic positive constant independent of any function and of $N$. Let $C_{a,r}$ be a positive constant depending only on the derivatives of $a(x)$ of orders up to $r$.

Lemma 2.1. If $u \in H^{3/2}(\Lambda)$ and $f \in L^2(\Lambda)$, then

$$ |b_i| \le c(a_0 + 1)(\|u\|_{3/2} + \|f\|), \quad 0 \le i \le N. $$

Proof. We have from Eq. (4) that

$$ |b_i| \le |b_0| + \sum_{j=0}^{i-1} |F_j| \le |a_0| \left|\frac{du}{dx}(x_0)\right| + \sum_{j=0}^{i-1} |F_j|. $$

By the trace theorem,

$$ \left|\frac{du}{dx}(x_0)\right| \le c \|u\|_{3/2}. $$

On the other hand, by (3) and the Hölder inequality,

$$ \sum_{j=0}^{i-1} |F_j| \le \sum_{j=0}^{i-1} \left( \int_{x_j}^{x_{j+1}} |f(x)|^2\,dx \right)^{1/2} (x_{j+1} - x_j)^{1/2} \le \sqrt{2}\,\|f\|. $$

So the conclusion comes immediately. □

Lemma 2.2. If $f \in H^r_\omega(\Lambda)$ with $r > 1/2$, then

$$ \sum_{i=0}^{N-1} |G_i - F_i| \le c N^{-r} \|f\|_{r,\omega}. $$

Proof. We have from (3) and (5) and the Hölder inequality that

$$ \sum_{i=0}^{N-1} |G_i - F_i| = \sum_{i=0}^{N-1} \left| \int_{x_i}^{x_{i+1}} (I_N f(x) - f(x))\,dx \right| \le \sum_{i=0}^{N-1} \left( \int_{x_i}^{x_{i+1}} (I_N f(x) - f(x))^2\,dx \right)^{1/2} (x_{i+1} - x_i)^{1/2} \le \sqrt{2}\,\|I_N f - f\| \le \sqrt{2}\,\|I_N f - f\|_\omega. $$

Moreover, for any $v \in H^r_\omega(\Lambda)$, $r > 1/2$ and $0 \le l \le r$ (see [18]),


$$ \|I_N v - v\|_{l,\omega} \le c N^{2l-r} \|v\|_{r,\omega}. \qquad (8) $$

It implies that $\|I_N f - f\|_\omega \le c N^{-r} \|f\|_{r,\omega}$, which completes the proof. □

Lemma 2.3. If $u \in H^{r+2}_\omega(\Lambda)$ with $r > 1$, then

$$ |B_0 - b_0| \le c N^{1-r} \|u\|_{r+2,\omega}. $$

Proof. By (4) and (6) and the trace theorem,

$$ |B_0 - b_0| = \left| a_0 \frac{d}{dx} I_N u(x_0) - a_0 \frac{du}{dx}(x_0) \right| \le c \|I_N u - u\|_{3/2}. $$

Furthermore, (8) leads to

$$ \|I_N u - u\|_{3/2} \le c \|I_N u - u\|_{3/2,\omega} \le c N^{1-r} \|u\|_{r+2,\omega}. $$

The proof is complete. □

We now estimate $|B_i - b_i|$.

Theorem 2.1. If $u \in H^{r+2}_\omega(\Lambda)$, $f \in H^r_\omega(\Lambda)$ and $r > 1$, then

$$ |B_i - b_i| \le c N^{1-r} \|u\|_{r+2,\omega} + c N^{-r} \|f\|_{r,\omega}, \quad 0 \le i \le N. $$

Proof. Lemma 2.3 implies the desired result for $i = 0$. Now assume $i \ge 1$. By (4) and (6),

$$ B_i - b_i = B_0 - b_0 - \sum_{j=0}^{i-1} (G_j - F_j). \qquad (9) $$

By using Lemmas 2.2 and 2.3, we deduce that

$$ |B_i - b_i| \le |B_0 - b_0| + \sum_{j=0}^{N-1} |G_j - F_j| \le c N^{1-r} \|u\|_{r+2,\omega} + c N^{-r} \|f\|_{r,\omega}. \qquad \Box $$

Remark 2.3. If $a \in C^{r+1}(\bar\Lambda)$ and $f \in H^r_\omega(\Lambda)$, then we can verify that $\|u\|_{r+2,\omega} \le C_{a,r+1} \|f\|_{r,\omega}$, and so

$$ |B_i - b_i| \le C_{a,r+1} N^{1-r} \|f\|_{r,\omega}. $$

We next estimate $|A_i - a_i|$.


Theorem 2.2. If $u \in H^{r+2}_\omega(\Lambda)$, $f \in H^r_\omega(\Lambda)$ and $r > 1$, then for any $\delta > 1/2$,

$$ |A_i - a_i| \le c(a_0 + 1) N^{2\delta - r} (\|u\|_{r+2,\omega} + \|f\|_{r,\omega} + 1)^2 \left|\frac{du}{dx}(x_i)\right|^{-1} \left( \left|\frac{du}{dx}(x_i)\right| - c N^{2\delta - r} \|u\|_{r+2,\omega} \right)^{-1}, \quad 0 \le i \le N. $$

Proof. For the sake of simplicity, let

$$ D(x) = \left|\frac{d}{dx} I_N u(x)\right| \left|\frac{du}{dx}(x)\right|, \qquad E(x) = \left|\frac{d}{dx} I_N u(x) - \frac{du}{dx}(x)\right|. $$

In particular, $D_i = D(x_i)$ and $E_i = E(x_i)$. Moreover, set

$$ J_i = \left|\frac{d}{dx} I_N u(x_i)\right|^{-1} |B_i - b_i|, \qquad K_i = |b_i|\, E_i\, D_i^{-1}. $$

Then by (7),

$$ |A_i - a_i| = \left| B_i \Big(\frac{d}{dx} I_N u(x_i)\Big)^{-1} - b_i \Big(\frac{du}{dx}(x_i)\Big)^{-1} \right| = D_i^{-1} \left| B_i \frac{du}{dx}(x_i) - b_i \frac{d}{dx} I_N u(x_i) \right| \le D_i^{-1} \left( \left| B_i \frac{du}{dx}(x_i) - b_i \frac{du}{dx}(x_i) \right| + |b_i|\, E_i \right) = J_i + K_i. \qquad (10) $$

By the trace theorem, for any $\delta > 1/2$,

$$ E_i \le \|E\|_{L^\infty(\Lambda)} \le c \|E\|_{1+\delta}. $$

Using (8) again yields

$$ E_i \le c \|E\|_{1+\delta,\omega} \le c N^{2\delta - r} \|u\|_{r+2,\omega}. \qquad (11) $$

As we know, for any constants $c_1$ and $c_2$, $|c_1 + c_2| \ge ||c_1| - |c_2||$. Thus we get from (11) that

$$ \left|\frac{d}{dx} I_N u(x_i)\right| \ge \left|\frac{du}{dx}(x_i)\right| - \left|\frac{du}{dx}(x_i) - \frac{d}{dx} I_N u(x_i)\right| \ge \left|\frac{du}{dx}(x_i)\right| - c N^{2\delta - r} \|u\|_{r+2,\omega}. \qquad (12) $$

By using Theorem 2.1 and (12), we know that for large $N$,

$$ J_i \le c N^{1-r} (\|u\|_{r+2,\omega} + \|f\|_{r,\omega}) \left( \left|\frac{du}{dx}(x_i)\right| - c N^{2\delta - r} \|u\|_{r+2,\omega} \right)^{-1}. $$


Clearly

$$ D_i \ge \left|\frac{du}{dx}(x_i)\right| \left( \left|\frac{du}{dx}(x_i)\right| - c N^{2\delta - r} \|u\|_{r+2,\omega} \right). $$

By the above inequality, Lemma 2.1 and (11), we assert that

$$ K_i \le c(a_0 + 1) N^{2\delta - r} \|u\|_{r+2,\omega} (\|u\|_{3/2} + \|f\|) \left|\frac{du}{dx}(x_i)\right|^{-1} \left( \left|\frac{du}{dx}(x_i)\right| - c N^{2\delta - r} \|u\|_{r+2,\omega} \right)^{-1}. $$

The proof is complete. □

Remark 2.4. If in addition $a \in C^{r+1}(\bar\Lambda)$, then $\|u\|_{r+2,\omega} \le C_{a,r+1} \|f\|_{r,\omega}$. Therefore

$$ |A_i - a_i| \le C_{a,r+1} N^{2\delta - r} (\|f\|_{r,\omega} + 1)^2 \left|\frac{du}{dx}(x_i)\right|^{-1} \left( \left|\frac{du}{dx}(x_i)\right| - c N^{2\delta - r} \|f\|_{r,\omega} \right)^{-1}. $$

Remark 2.5. In Theorem 2.2, $\delta$ is an arbitrary constant bigger than $1/2$. So if $N$ is large enough, then

$$ \left|\frac{du}{dx}(x_i)\right| - c N^{2\delta - r} \|u\|_{r+2,\omega} \ge c \left|\frac{du}{dx}(x_i)\right| > 0. $$

So Theorem 2.2 and Remark 2.2 indicate that this algorithm converges almost everywhere. But in many papers, convergence is verified under condition (2), as in [3,10]. Theorem 2.2 also shows the spectral accuracy of algorithms (6) and (7).

Remark 2.6. If (2) holds, then we obtain uniform convergence from Theorem 2.2.

3. Legendre approximation

In this section, we discuss the Legendre approximation. The Legendre polynomial of degree $l$ is defined by

$$ L_l(x) = \frac{(-1)^l}{2^l l!} \frac{d^l}{dx^l} (1 - x^2)^l, \quad l = 0, 1, 2, \ldots $$

Let $x_j$ be the Legendre-Gauss-Lobatto interpolation points, i.e.,

$$ x_0 = -1, \quad x_N = 1, \quad x_j = \text{the roots of } \frac{d}{dx} L_N(x), \ 1 \le j \le N-1, $$

while $\omega_j$ stands for the corresponding weights, i.e.,

$$ \omega_j = \frac{2}{N(N+1)} (L_N(x_j))^{-2}, \quad 0 \le j \le N. $$

The Legendre interpolant of a function $v(x) \in C(\bar\Lambda)$ is

$$ I_N v(x) = \sum_{l=0}^{N} \tilde v_l L_l(x), \qquad (13) $$

where

$$ \tilde v_l = \frac{1}{c_l} \sum_{j=0}^{N} v(x_j) L_l(x_j)\,\omega_j, \qquad c_N = \frac{2}{N}, \quad c_l = \frac{2}{2l+1}, \ 0 \le l \le N-1. $$

We use the notations $b(x)$, $a_i$, $b_i$ and $F_i$ as in Section 2. $G_i$, $B_i$ and $A_i$ are also given by (5)-(7), with the Legendre interpolation $I_N$ defined by (13). This approach has the same advantages as shown in Remarks 2.1 and 2.2. We now turn to the error analysis.
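The LGL nodes, weights and discrete Legendre transform above can be sketched as follows (our own illustration; the interior nodes are obtained numerically as the roots of $L_N'$).

```python
import numpy as np
from numpy.polynomial import legendre as Leg

def lgl_points_weights(N):
    """LGL nodes (endpoints plus the roots of L_N') and weights 2/(N(N+1)) L_N(x_j)^{-2}."""
    cN = np.zeros(N + 1); cN[N] = 1.0                  # coefficient vector of L_N
    interior = np.sort(Leg.legroots(Leg.legder(cN)))   # roots of d/dx L_N
    x = np.concatenate(([-1.0], interior, [1.0]))
    w = 2.0 / (N * (N + 1) * Leg.legval(x, cN) ** 2)
    return x, w

def leg_coeffs(v):
    """Discrete Legendre coefficients of I_N v from values at the LGL points, as in (13)."""
    N = len(v) - 1
    x, w = lgl_points_weights(N)
    L = np.stack([Leg.legval(x, np.eye(N + 1)[l]) for l in range(N + 1)])  # L_l(x_j)
    c = 2.0 / (2.0 * np.arange(N + 1) + 1.0)
    c[N] = 2.0 / N                                     # note c_N = 2/N, not 2/(2N+1)
    return (L * w) @ v / c

N = 8
x, w = lgl_points_weights(N)
coef = leg_coeffs(np.exp(x))
err = np.max(np.abs(Leg.legval(x, coef) - np.exp(x)))  # interpolation at the nodes
```

The weights sum to 2 and integrate polynomials of degree up to $2N-1$ exactly, and the discrete transform reproduces the sampled values at the nodes to rounding error.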

Lemma 3.1. If $f \in H^r(\Lambda)$ with $r > 1/2$, then

$$ \sum_{i=0}^{N-1} |G_i - F_i| \le c N^{-r} \|f\|_r. $$

Proof. We know that for any $v \in H^r(\Lambda)$, $r > 1/2$ and $0 \le l \le r$ (see [18]),

$$ \|I_N v - v\|_l \le c N^{2l - r + 1/2} \|v\|_r. \qquad (14) $$

So the conclusion follows from an argument as in the proof of Lemma 2.2. □

Lemma 3.2. If $u \in H^{r+2}(\Lambda)$ with $r > 3/2$, then

$$ |B_0 - b_0| \le c N^{(3/2) - r} \|u\|_{r+2}. $$

Proof. The conclusion follows from (14) and an argument as in the proof of Lemma 2.3. □

Theorem 3.1. If $u \in H^{r+2}(\Lambda)$, $f \in H^r(\Lambda)$ and $r > 3/2$, then

$$ |B_i - b_i| \le c N^{(3/2) - r} \|u\|_{r+2} + c N^{-r} \|f\|_r, \quad 0 \le i \le N. $$

Proof. Lemma 3.2 implies the desired result for $i = 0$. Moreover, (9) is also valid with the Legendre interpolation $I_N$. By Lemma 3.1,

$$ \sum_{j=0}^{N-1} |G_j - F_j| \le c N^{-r} \|f\|_r, $$

which with Lemma 3.2 completes the proof. □


Remark 3.1. If in addition $a \in C^{r+1}(\bar\Lambda)$, then $|B_i - b_i| \le C_{a,r+1} N^{(3/2) - r} \|f\|_r$.

Theorem 3.2. If $u \in H^{r+2}(\Lambda)$, $f \in H^r(\Lambda)$ and $r > 3/2$, then for any $\delta > 1/2$,

$$ |A_i - a_i| \le c(a_0 + 1) N^{2\delta + (1/2) - r} (\|u\|_{r+2} + \|f\|_r + 1)^2 \left|\frac{du}{dx}(x_i)\right|^{-1} \left( \left|\frac{du}{dx}(x_i)\right| - c N^{2\delta + (1/2) - r} \|u\|_{r+2} \right)^{-1}, \quad 0 \le i \le N. $$

Proof. (10) is also valid. Moreover, by (14),

$$ E_i \le c N^{2\delta + (1/2) - r} \|u\|_{r+2}. $$

Thus the conclusion follows from an argument as in the proof of Theorem 2.2. □

Remark 3.2. If in addition $a \in C^{r+1}(\bar\Lambda)$, then

$$ |A_i - a_i| \le C_{a,r+1} N^{2\delta + (1/2) - r} (\|f\|_r + 1)^2 \left|\frac{du}{dx}(x_i)\right|^{-1} \left( \left|\frac{du}{dx}(x_i)\right| - C_{a,r+1} N^{2\delta + (1/2) - r} \|f\|_r \right)^{-1}, \quad 0 \le i \le N. $$

Remark 3.3. Theorem 3.2 also implies almost everywhere convergence, spectral accuracy and, under condition (2), uniform convergence.

4. Numerical results

In this section, we present some numerical results.

Example 4.1. Consider the problem

$$ -\frac{d}{dx}\left( \frac{1}{\cos(\pi(x+1)) + 1.5}\,\frac{du}{dx}(x) \right) = -2x, \quad x \in \Lambda, \qquad u(-1) = -\left( \frac{2}{\pi^2} + \frac{7}{2} \right), \quad u(1) = \frac{2}{\pi^2} + \frac{7}{2}. $$

The unique solution is

$$ u(x) = \frac{1}{\pi^3}\{\pi^2 (x^2 + 2) - 2\} \sin(\pi(x+1)) + \frac{2x}{\pi^2} \cos(\pi(x+1)) + \frac{x^3}{2} + 3x. $$
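Since the problem data above are reconstructed from a damaged scan, a quick finite-difference sanity check (ours, not part of the paper) confirms that the stated $u$ satisfies the equation with $f(x) = -2x$ and matches the boundary values.

```python
import numpy as np

# Example 4.1 data: a(x) = 1/(cos(pi(x+1)) + 1.5) and the stated exact solution
a = lambda x: 1.0 / (np.cos(np.pi * (x + 1)) + 1.5)
u = lambda x: ((np.pi**2 * (x**2 + 2) - 2) * np.sin(np.pi * (x + 1)) / np.pi**3
               + 2 * x * np.cos(np.pi * (x + 1)) / np.pi**2 + x**3 / 2 + 3 * x)

# central-difference residual of -(a u')' - (-2x); should be near zero
h = 1e-5
x = np.linspace(-0.9, 0.9, 7)
au = lambda y: a(y) * (u(y + h) - u(y - h)) / (2 * h)   # b(x) = a(x) u'(x)
residual = -(au(x + h) - au(x - h)) / (2 * h) + 2 * x
err = np.max(np.abs(residual))
```

The residual vanishes up to finite-difference error; note also that $b(x) = a(x)u'(x) = x^2 + 2$ here, so $u'$ never vanishes and condition (2) is comfortably satisfied.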


We use the Chebyshev approximation to solve this problem numerically. It can be checked that all hypotheses of Theorem 2.2 are satisfied. Table 1 shows the maximum errors $E(N)$ between the exact values $a_i$ and the numerical approximations $A_i$ at the Chebyshev interpolation points. Fig. 1 shows the curves of $a(x)$ and its numerical approximation: the solid line presents the exact solution and the dotted line the numerical solution. They indicate that this method has very high accuracy, even if $N$ is not very large. They also show that the errors decay very fast as $N$ increases.

Table 1. Errors $E(N)$

N       4                 8                 16                32
E(N)    3.3543 × 10^{-1}  2.3175 × 10^{-3}  3.7404 × 10^{-9}  < 1.0000 × 10^{-16}

Fig. 1. Chebyshev approximation.

Example 4.2. Consider the problem

$$ -\frac{d}{dx}\left( (\cos(\pi(x+1)) + 2)\,\frac{du}{dx}(x) \right) = 2\pi \sin(\pi(x+1)), \quad x \in \Lambda, \qquad u(-1) = -2, \quad u(1) = 2. $$

Fig. 2 shows the exact solution and the numerical solution as in Fig. 1. It shows convergence again.


Fig. 2. Chebyshev approximation.
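A decay pattern like that reported in Table 1 can be reproduced on a synthetic smooth problem of our own (not one of the paper's examples); the helpers below implement scheme (3)-(7) of Section 2 so the snippet is self-contained.

```python
import numpy as np
from numpy.polynomial import chebyshev as Cheb

def cgl_points(N):
    j = np.arange(N + 1)
    return np.cos(np.pi * (N - j) / N)       # CGL points, x_0 = -1, x_N = 1

def cheb_coeffs(v):
    # discrete Chebyshev coefficients of I_N v at the CGL points
    N = len(v) - 1
    x = cgl_points(N)
    w = np.full(N + 1, np.pi / N); w[0] = w[-1] = np.pi / (2 * N)
    T = np.cos(np.outer(np.arange(N + 1), np.arccos(np.clip(x, -1.0, 1.0))))
    c = np.full(N + 1, np.pi / 2); c[0] = c[-1] = np.pi
    return (T * w) @ v / c

def estimate_a(u_vals, f_vals, a0):
    # scheme (3)-(7) of Section 2
    N = len(u_vals) - 1
    x = cgl_points(N)
    du = Cheb.chebval(x, Cheb.chebder(cheb_coeffs(u_vals)))
    P = Cheb.chebval(x, Cheb.chebint(cheb_coeffs(f_vals)))
    B = np.empty(N + 1); B[0] = a0 * du[0]
    B[1:] = B[0] - np.cumsum(P[1:] - P[:-1])
    A = np.empty(N + 1); A[0] = a0
    for i in range(1, N + 1):
        A[i] = B[i] / du[i] if du[i] != 0 else A[i - 1]
    return x, A

# synthetic problem: a(x) = 2 + sin(x), u(x) = exp(x), so f = -(a u')'
a_ex = lambda x: 2.0 + np.sin(x)
f_ex = lambda x: -(np.cos(x) + 2.0 + np.sin(x)) * np.exp(x)
errs = []
for N in (4, 8, 16):
    x = cgl_points(N)
    _, A = estimate_a(np.exp(x), f_ex(x), a_ex(-1.0))
    errs.append(np.max(np.abs(A - a_ex(x))))
```

For analytic data the maximum error drops by several orders of magnitude each time $N$ doubles, the same qualitative behaviour as in Table 1.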

Example 4.3. We consider the following problem

$$ -\frac{d}{dx}\left( a(x)\,\frac{du}{dx}(x) \right) = f(x), \quad x \in \Lambda, \qquad u(-1) = -2, \quad u(1) = 1, $$

where

$$ a(x) = \begin{cases} 10\cos(\pi(x+1)) + 20, & \text{if } x \in [-1, 0], \\ 5\cos(\pi(x+1)) + 25, & \text{if } x \in (0, 1], \end{cases} \qquad f(x) = \begin{cases} 20\pi\sin(\pi(x+1)), & \text{if } x \in [-1, 0], \\ 5\pi\sin(\pi(x+1)), & \text{if } x \in (0, 1]. \end{cases} $$

In this case, the conductivity is discontinuous, and the unique solution is

$$ u(x) = \begin{cases} 2x, & \text{if } x \in [-1, 0], \\ x, & \text{if } x \in (0, 1]. \end{cases} $$

We first divide the whole interval into two subintervals at the point where the discontinuity occurs. Then we transform each subinterval to the standard interval $\Lambda$. Finally, we estimate the parameter on each subinterval as in the previous examples. Fig. 3 shows the exact solution and its numerical solution as in Fig. 1. It tells us that the method proposed in this paper is also suited to discontinuous conductivity.

Fig. 3. Chebyshev approximation.

References

[1] W.W.-G. Yeh, Review of parameter identification problems in groundwater hydrology: The inverse problem, Water Resour. Res. 22 (1986) 95-108.
[2] R. Acar, Identification of the coefficient in elliptic equations, SIAM J. Control Optim. 31 (1993) 1221-1244.
[3] G.R. Richter, An inverse problem for the steady state diffusion equation, SIAM J. Appl. Math. 41 (1981) 210-221.
[4] E. Vainikko, K. Kunisch, Identifiability of the transmissivity coefficient in an elliptic boundary value problem, Z. Anal. Anwend. 12 (1993) 327-341.
[5] H.T. Banks, K. Kunisch, Estimation Techniques for Distributed Parameter Systems, Birkhäuser, Boston, 1989.
[6] E.O. Frind, G.F. Pinder, Galerkin solution of the inverse problem for aquifer transmissivity, Water Resour. Res. 9 (1973) 1397-1410.
[7] D.A. Nutbrown, Identification of parameters in a linear equation of groundwater flow, Water Resour. Res. 11 (1975) 581-588.
[8] G.R. Richter, Numerical identification of a spatially varying diffusion coefficient, Math. Comp. 36 (1981) 375-385.
[9] E. Vainikko, G. Vainikko, Some numerical schemes for identification of the filtration coefficient, Acta Comm. Univ. Tartuensis 937 (1992) 90-102.
[10] C.K. Cho, B.Y. Guo, Y.H. Kwon, A new approach for numerical identification of conductivity, Appl. Math. Comput. 100 (1999) 265-283.
[11] D. Gottlieb, S.A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications, SIAM-CBMS, SIAM, Philadelphia, 1977.
[12] C. Canuto, M.Y. Hussaini, A. Quarteroni, T.A. Zang, Spectral Methods in Fluid Dynamics, Springer, Berlin, 1988.
[13] C. Bernardi, Y. Maday, Approximations Spectrales de Problèmes aux Limites Elliptiques, Springer, Berlin, 1992.
[14] B. Guo, Spectral Methods and Their Applications, World Scientific, Singapore, 1998.
[15] D. Gilbarg, N. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer, Berlin, 1983.
[16] P.H. Rabinowitz, A minimax principle and applications to elliptic partial differential equations, in: J.M. Chadam (Ed.), Lecture Notes in Mathematics 648, Springer, Berlin, 1978, pp. 97-115.

[17] B. Guo, K.-H. Cha, Y.H. Kwon, Parameter identification by mixed spectral-pseudospectral approximations, submitted.
[18] C. Canuto, A. Quarteroni, Approximation results for orthogonal polynomials in Sobolev spaces, Math. Comp. 38 (1982) 67-86.