Mathematical and Computer Modelling 29 (1999) 1-18
Numerical Solutions with A Priori Error Bounds for Coupled Self-Adjoint Time Dependent Partial Differential Systems

E. PONSODA AND L. JÓDAR
Departamento de Matemática Aplicada, Universidad Politécnica de Valencia
P.O. Box 22012, Valencia, Spain

(Received April 1998; accepted November 1998)

Abstract—This paper is concerned with the construction of accurate continuous numerical solutions for partial self-adjoint differential systems of the type (P(t)u_t)_t = Q(t)u_{xx}, u(0,t) = u(d,t) = 0, u(x,0) = f(x), u_t(x,0) = g(x), 0 ≤ x ≤ d, t ≥ 0, where P(t), Q(t) are positive definite ℝ^{r×r}-valued functions such that P'(t) and Q'(t) are simultaneously semidefinite (positive or negative) for all t ≥ 0. First, an exact theoretical series solution of the problem is obtained using a separation of variables technique. After an appropriate truncation strategy and the numerical solution of certain matrix differential initial value problems, the following question is addressed: given T > 0 and an admissible error ε > 0, how can one construct a continuous numerical solution whose error with respect to the exact series solution is smaller than ε, uniformly in D(T) = {(x,t); 0 ≤ x ≤ d, 0 ≤ t ≤ T}? Uniqueness of solutions is also studied.

© 1999 Elsevier Science Ltd. All rights reserved.

Keywords—Analytic-numerical solution, Coupled self-adjoint partial differential system, Mixed problem, Accurate solution, Multistep method.

This work has been supported by the Spanish D.G.I.C.Y.T. Grant PB96-1321-CO202 and the Generalitat Valenciana Grants GV-C-CN-1005796 and GV-CB-12-63.

1. INTRODUCTION

Coupled time dependent partial differential equations are frequent in many different fields, and their numerical treatment is usually performed using discrete or semidiscrete methods [1-3], for which a priori error bounds in terms of the data are unusual, as is the knowledge of exact solutions. Sometimes the complexity of the problem under consideration rules out the possibility of computing with a priori error bounds, but it is also true that inaccurate methods are sometimes used in situations where a better answer is available. Coupled self-adjoint partial differential equations of the type

(P(t)u_t)_t = Q(t)u_{xx},    0 < x < d,  t > 0,        (1.1)

are frequent in the study of wave propagation in ferrite materials [4]. In the evaluation of microwave heating processes, the constant coefficient model (P(t) = P, Q(t) = Q) often leads to misleading results due to the complexity of the field distribution within the oven and the variation of the dielectric properties of the material with temperature, moisture content, density, and other parameters, see [5, Chapter 3; 6]. Electromagnetic processing of homogeneous materials at high power densities, or more precisely the study of microwave drying processes in thick and/or hygroscopic materials, leads to models described by coupled time dependent equations of type (1.1), see [7, Chapter 10]. Such equations also appear in the analysis of multimode microwave applicators [8]. In this paper, we consider coupled equations of type (1.1) together with the mixed conditions

u(0,t) = u(d,t) = 0,    t > 0,        (1.2)

u(x,0) = f(x),    0 ≤ x ≤ d,        (1.3)

u_t(x,0) = g(x),    0 ≤ x ≤ d,        (1.4)

where u = (u_1, …, u_r)^T is a vector in ℝ^r, and P(t), Q(t) are ℝ^{r×r}-valued symmetric twice continuously differentiable functions such that

P(t) and Q(t) are positive definite for all t ≥ 0,        (1.5)

and

-P'(t) and Q'(t) are both positive semidefinite or both negative semidefinite for all t ≥ 0.        (1.6)

The functions f(x), g(x) are ℝ^r-valued, where

f(x) is three times differentiable and f^{(3)}(x) is piecewise continuous in [0,d], with f(0) = f(d) = f^{(2)}(0) = f^{(2)}(d) = 0,        (1.7)

g(x) is twice differentiable and g^{(2)}(x) is piecewise continuous in [0,d], with g(0) = g(d) = 0.        (1.8)

Standard techniques to solve such problems are those based on finite difference or finite element methods [1-3,8]. An exact solution is useful to check the correctness of the model as well as its variation with the data. It is well known that the separation of variables method is efficient for constructing series solutions of mixed partial differential systems with constant coefficients. For the time dependent case, however, separation of variables is usually disregarded because of the lack of knowledge about the solutions of the underlying separated ordinary differential equations associated with the problem. In spite of this, although the exact solutions of the associated separated ordinary differential equations are not known in an explicit way, the growth of such solutions and of their derivatives can be used to construct an exact series solution of the problem. This has recently been shown for the case of a single time dependent partial differential equation in [9], and in [10] for the matrix case with P(t) = I for all t ≥ 0.

The organization of the paper is as follows. In Section 2, the uniqueness of solutions of problem (1.1)-(1.4) is studied. Section 3 deals with the boundedness of solutions of the matrix differential equation

(P(t)Y')' + λ²Q(t)Y = 0,    t ≥ 0,        (1.9)

where λ > 0 is a real parameter and P(t), Q(t) are positive definite continuously differentiable ℝ^{r×r}-valued functions. In Section 4, an exact series solution of problem (1.1)-(1.4) is constructed using a matrix separation of variables technique. In Section 5, the following question is addressed: given an admissible error ε > 0 and a bounded domain D(T) = [0,d] × [0,T], T > 0, how can one construct a finite continuous numerical solution whose error with respect to the infinite series solution is uniformly smaller than ε in D(T)? First, the infinite series is truncated in an appropriate way. Then, using one-step matrix methods, discrete numerical solutions of equation (1.9) for appropriate values of λ are constructed. Finally, using linear B-spline matrix functions, the discrete numerical values are interpolated according to the required accuracy. An illustrative example is given.

If A is a matrix in ℝ^{r×r}, its Frobenius norm is defined by ‖A‖_F = (Σ_{i,j} |a_{ij}|²)^{1/2}, and its 2-norm, denoted by ‖A‖, is defined by

‖A‖ = sup { ‖Az‖_2 / ‖z‖_2 ;  z ≠ 0 },

where for a vector z ∈ ℝ^r, ‖z‖_2 = (z^T z)^{1/2} is the usual Euclidean norm of z. By [11, p. 57], it follows that

‖A‖ ≤ ‖A‖_F ≤ √r ‖A‖,    max_{i,j} |a_{ij}| ≤ ‖A‖ ≤ r max_{i,j} |a_{ij}|.        (1.10)

If A is a symmetric matrix in ℝ^{r×r} and its real eigenvalues λ_1, λ_2, …, λ_r are ordered so that λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_r, it follows that [12, p. 21]

ρ(A) = max { |λ_i| ;  1 ≤ i ≤ r } = ‖A‖,        (1.11)

and for every vector z in ℝ^r,

λ_1 ‖z‖_2² ≤ z^T A z ≤ λ_r ‖z‖_2².        (1.12)

We recall that a symmetric matrix A in ℝ^{r×r} is said to be positive definite if (Az, z) > 0 for every nonzero vector z in ℝ^r. If A is positive definite, all its eigenvalues are positive [13]. If (Az, z) ≥ 0 for all z in ℝ^r, then A is said to be positive semidefinite. A is said to be negative definite (semidefinite) if -A is positive definite (semidefinite). If A is positive definite (semidefinite), we denote A > 0 (A ≥ 0). If A ≥ 0, then by [13, pp. 800, 801], it follows that

there exists only one positive semidefinite square root matrix A^{1/2}, and A^{1/2} commutes with A.        (1.13)

If A > 0 and B ≥ 0, by (1.13) we can write

A^{-1} B A^{-1} = A^{-1} B^{1/2} B^{1/2} A^{-1} = M M^T ≥ 0,    M = A^{-1} B^{1/2}.        (1.14)

Furthermore, if B > 0, then A^{-1} B A^{-1} > 0 because M = A^{-1} B^{1/2} is invertible. If A > 0 and σ(A) is the set of all the eigenvalues of A, then we denote λ_min(A) = min{λ; λ ∈ σ(A)} and λ_max(A) = max{λ; λ ∈ σ(A)}. If A > 0, then A^{-1} > 0 and σ(A^{-1}) = {λ^{-1}; λ ∈ σ(A)}. By (1.11) one gets

λ_min(A) = min{λ; λ ∈ σ(A)} = ( max{ω; ω ∈ σ(A^{-1})} )^{-1} = ‖A^{-1}‖^{-1}.

Thus, if A > 0, it follows that

λ_min(A) = ‖A^{-1}‖^{-1},    λ_max(A) = ‖A‖.        (1.15)

If A < 0, then σ(A) is contained in the negative real line and by (1.15) one gets

λ_min(A) = -λ_max(-A) = -‖A‖,        (1.16)

and furthermore, if A < 0, then

λ_max(A) = -λ_min(-A) = -‖A^{-1}‖^{-1}.        (1.17)
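The spectral identities (1.10)-(1.17) are used repeatedly below to turn quadratic-form estimates into norm bounds. The following short NumPy sketch (illustrative only, not part of the original paper; the test matrix is an assumption) checks (1.10), (1.12), and (1.15) on a randomly generated symmetric positive definite matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 4
B = rng.standard_normal((r, r))
A = B @ B.T + r * np.eye(r)           # symmetric positive definite test matrix

two_norm = np.linalg.norm(A, 2)       # ||A||
fro_norm = np.linalg.norm(A, 'fro')   # ||A||_F
eigs = np.linalg.eigvalsh(A)          # ordered eigenvalues, lambda_1 <= ... <= lambda_r

# (1.10): ||A|| <= ||A||_F <= sqrt(r) ||A||
assert two_norm <= fro_norm <= np.sqrt(r) * two_norm + 1e-12

# (1.12): lambda_1 ||z||^2 <= z^T A z <= lambda_r ||z||^2 for a random z
z = rng.standard_normal(r)
quad = z @ A @ z
assert eigs[0] * (z @ z) - 1e-9 <= quad <= eigs[-1] * (z @ z) + 1e-9

# (1.15): lambda_min(A) = ||A^{-1}||^{-1}, lambda_max(A) = ||A||
assert np.isclose(eigs[0], 1.0 / np.linalg.norm(np.linalg.inv(A), 2))
assert np.isclose(eigs[-1], two_norm)
print("spectral identities (1.10), (1.12), (1.15) verified numerically")
```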

2. UNIQUENESS OF SOLUTIONS

In this section, the uniqueness of solutions of problem (1.1)-(1.4) under hypotheses (1.5) and (1.6) is studied. Let u(x,t) and v(x,t) be two solutions of problem (1.1)-(1.4), and let

z(x,t) = u(x,t) - v(x,t).        (2.1)

Then z(x,t) verifies (1.1) and

z(0,t) = z(d,t) = 0,    t > 0,        (2.2)

z(x,0) = z_t(x,0) = 0,    0 ≤ x ≤ d.        (2.3)

CASE 1. -P'(t) and Q'(t) are negative semidefinite matrices in ℝ^{r×r}.

Let us consider the function

E(t) = ∫_0^d { z_x^T(x,t) Q(t) z_x(x,t) + z_t^T(x,t) P(t) z_t(x,t) } dx.        (2.4)

Since P(t) and Q(t) are both positive definite matrices, then

E(t) ≥ 0,    ∀ t ≥ 0.        (2.5)

From (2.4) it follows that

E'(t) = ∫_0^d { ( z_x^T(x,t) Q(t) z_x(x,t) )' + ( z_t^T(x,t) P(t) z_t(x,t) )' } dx.        (2.6)

Taking into account that Q(t) = Q(t)^T, it follows that

( z_x^T(x,t) Q(t) z_x(x,t) )' = z_x^T(x,t) Q'(t) z_x(x,t) + 2 z_x^T(x,t) Q(t) z_{xt}(x,t).        (2.7)

Since P(t) = P(t)^T, in an analogous way one gets

( z_t^T(x,t) P(t) z_t(x,t) )' = z_t^T(x,t) P'(t) z_t(x,t) + 2 z_t^T(x,t) P(t) z_{tt}(x,t).        (2.8)

Adding and subtracting z_t^T(x,t) P'(t) z_t(x,t) in (2.8), one gets

( z_t^T(x,t) P(t) z_t(x,t) )' = 2 z_t^T(x,t) ( P'(t) z_t(x,t) + P(t) z_{tt}(x,t) ) - z_t^T(x,t) P'(t) z_t(x,t)
    = 2 z_t^T(x,t) Q(t) z_{xx}(x,t) - z_t^T(x,t) P'(t) z_t(x,t).        (2.9)

Substituting (2.7) and (2.9) in (2.6), it follows that

E'(t) = ∫_0^d { z_x^T(x,t) Q'(t) z_x(x,t) - z_t^T(x,t) P'(t) z_t(x,t) + 2 z_x^T(x,t) Q(t) z_{xt}(x,t) + 2 z_t^T(x,t) Q(t) z_{xx}(x,t) } dx.        (2.10)

Integrating by parts the last term of (2.10), and using (2.2),(2.3), one gets

∫_0^d z_t^T(x,t) Q(t) z_{xx}(x,t) dx = [ z_t^T(x,t) Q(t) z_x(x,t) ]_{x=0}^{x=d} - ∫_0^d z_{xt}^T(x,t) Q(t) z_x(x,t) dx = - ∫_0^d z_x^T(x,t) Q(t) z_{xt}(x,t) dx.

Then, expression (2.10) takes the form

E'(t) = ∫_0^d { z_x^T(x,t) Q'(t) z_x(x,t) - z_t^T(x,t) P'(t) z_t(x,t) } dx.        (2.11)

Since -P'(t) and Q'(t) are negative semidefinite matrices in ℝ^{r×r}, it follows that

E'(t) ≤ 0,    ∀ t ≥ 0.        (2.12)

By (2.5) and (2.12), E(t) is nonnegative and decreasing, and by (2.3), one concludes E(0) = 0. Hence

E(t) = 0,    ∀ t ≥ 0.

Since P(t) and Q(t) are positive definite and the integrand of (2.4) is continuous and nonnegative, one gets

z_x(x,t) = z_t(x,t) = 0.        (2.13)

From (2.2), (2.3), and (2.13) it follows that
z(x,t) = 0.

CASE 2. -P'(t) and Q'(t) are positive semidefinite matrices in ℝ^{r×r}.

Let us consider the function

Ẽ(t) = ∫_0^d { z_x^T(x,t) P(t) z_x(x,t) + ( P(t) z_t(x,t) )^T Q^{-1}(t) ( P(t) z_t(x,t) ) } dx.        (2.14)

Since P(t) and Q(t) are both positive definite matrices, it follows that

Ẽ(t) ≥ 0,    ∀ t ≥ 0.        (2.15)

By (2.14) one gets

Ẽ'(t) = ∫_0^d { ( z_x^T(x,t) P(t) z_x(x,t) )' + [ ( P(t) z_t(x,t) )^T Q^{-1}(t) ( P(t) z_t(x,t) ) ]' } dx.        (2.16)

The first term of the right-hand side of (2.16) takes the form

∫_0^d ( z_x^T(x,t) P(t) z_x(x,t) )' dx = ∫_0^d { 2 z_x^T(x,t) P(t) z_{xt}(x,t) + z_x^T(x,t) P'(t) z_x(x,t) } dx
    = [ 2 z_x^T(x,t) P(t) z_t(x,t) ]_{x=0}^{x=d} - 2 ∫_0^d z_{xx}^T(x,t) P(t) z_t(x,t) dx + ∫_0^d z_x^T(x,t) P'(t) z_x(x,t) dx,        (2.17)

where the first expression of (2.17) is zero by (2.2),(2.3). The second term in (2.16) can be written in the form

∫_0^d [ ( P(t) z_t(x,t) )^T Q^{-1}(t) ( P(t) z_t(x,t) ) ]' dx
    = ∫_0^d { 2 ( ( P(t) z_t(x,t) )_t )^T Q^{-1}(t) ( P(t) z_t(x,t) ) - ( P(t) z_t(x,t) )^T Q^{-1}(t) Q'(t) Q^{-1}(t) ( P(t) z_t(x,t) ) } dx.        (2.18)

By (2.17) and (2.18) it follows that

Ẽ'(t) = ∫_0^d { z_x^T(x,t) P'(t) z_x(x,t) - 2 z_{xx}^T(x,t) P(t) z_t(x,t) + 2 ( ( P(t) z_t(x,t) )_t )^T Q^{-1}(t) ( P(t) z_t(x,t) )
    - ( P(t) z_t(x,t) )^T Q^{-1}(t) Q'(t) Q^{-1}(t) ( P(t) z_t(x,t) ) } dx.        (2.19)

From (1.1), we have

z_{xx}(x,t) = Q^{-1}(t) ( P(t) z_t(x,t) )_t.        (2.20)

Then, substituting (2.20) in (2.19), the second and third terms cancel and one gets

Ẽ'(t) = ∫_0^d { z_x^T(x,t) P'(t) z_x(x,t) - ( P(t) z_t(x,t) )^T Q^{-1}(t) Q'(t) Q^{-1}(t) ( P(t) z_t(x,t) ) } dx.

Since -P'(t) and Q'(t) are positive semidefinite, it follows that

Ẽ'(t) ≤ 0,    ∀ t ≥ 0,

and in an analogous way to Case 1, we conclude that z(x,t) = 0. Thus, the following result has been established.

THEOREM 2.1. Under hypotheses (1.5),(1.6), problem (1.1)-(1.4) has at most one twice continuously differentiable solution.
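In a discretized setting, the energy functional E(t) of (2.4) can be monitored directly: if a numerical solution is available on a space grid, the integral is just a quadrature of pointwise quadratic forms. The sketch below (not part of the original paper; grid, data, and the trapezoidal quadrature rule are illustrative assumptions) evaluates E(t) for sampled values of z_x and z_t.

```python
import numpy as np

def energy(zx, zt, Q, P, x):
    """Discrete analogue of E(t) in (2.4).

    zx, zt : arrays of shape (len(x), r) with samples of z_x(., t) and z_t(., t)
    Q, P   : symmetric positive definite (r, r) matrices Q(t), P(t)
    x      : increasing grid on [0, d]
    """
    integrand = np.einsum('ij,jk,ik->i', zx, Q, zx) + np.einsum('ij,jk,ik->i', zt, P, zt)
    return np.trapz(integrand, x)

# toy check: with z_x = z_t = 0 the energy vanishes, consistent with (2.3)-(2.5)
x = np.linspace(0.0, 1.0, 101)
r = 2
zeros = np.zeros((x.size, r))
print(energy(zeros, zeros, np.eye(r), np.eye(r), x))   # 0.0
```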

3. ON THE MATRIX EQUATION (P(t)Y')' + λ²Q(t)Y = 0

In this section, we address some important properties of equation (1.9) that will play an important role in the construction of exact and approximate solutions of problem (1.1)-(1.4). Throughout this section, P(t) and Q(t) are differentiable positive definite matrix functions for t ≥ 0 and λ is a positive real number. The next definition is an extension of the one given in [14].

DEFINITION 3.1. Let {Y_1, Y_2} be a pair of ℝ^{r×r}-valued solutions of equation (1.9). We say that {Y_1, Y_2} is a fundamental set of solutions of (1.9) if, for every ℝ^r-valued solution y of (1.9), there exist vectors c_1 and c_2 in ℝ^r, uniquely determined by y, such that

y(t) = Y_1(t) c_1 + Y_2(t) c_2,    t ≥ 0.        (3.1)

LEMMA 3.1. If {Y_1, Y_2} is a pair of ℝ^{r×r}-solutions of (1.9) such that the ℝ^{2r×2r} matrix

S = [ Y_1(0)  Y_2(0) ; Y_1'(0)  Y_2'(0) ]        (3.2)

is invertible, then {Y_1, Y_2} is a fundamental set of solutions of (1.9).

PROOF. First of all, note that considering the transformation

Z(t) = P(t) Y'(t),    V(t) = [ Y(t) ; Z(t) ],        (3.3)

equation (1.9) is equivalent to the extended first-order system

J V'(t) = A(t,λ) V(t),    J = [ 0  -I ; I  0 ],    A(t,λ) = [ λ²Q(t)  0 ; 0  P^{-1}(t) ].        (3.4)

Since J^{-1} = [ 0  I ; -I  0 ] = J^T, the above system is equivalent to

V'(t) = J^{-1} A(t,λ) V(t) = M(t,λ) V(t),    M(t,λ) = [ 0  P^{-1}(t) ; -λ²Q(t)  0 ].        (3.5)

Hence, given initial conditions for equation (1.9) or (3.5), both problems have only one solution, see [15, p. 259; 16]. Let y(t) be an ℝ^r-solution of (1.9) satisfying y(0) = α, y'(0) = β. For any vectors c_1 and c_2 in ℝ^r, the function

s(t) = Y_1(t) c_1 + Y_2(t) c_2        (3.6)

defines an ℝ^r-solution of (1.9). If we impose on s(t) that s(0) = α, s'(0) = β, one obtains the algebraic system

[ Y_1(0)  Y_2(0) ; Y_1'(0)  Y_2'(0) ] [ c_1 ; c_2 ] = [ α ; β ].        (3.7)

By hypothesis (3.2), system (3.7) is uniquely solvable and

[ c_1 ; c_2 ] = S^{-1} [ α ; β ].        (3.8)

Taking c_1 and c_2 given by (3.8), the vector function s(t) defined by (3.6) for these values satisfies the same initial conditions as y(t). By the uniqueness it follows that s(t) coincides with y(t). Thus the result is established. ∎

EXAMPLE 3.1. Let V(t) and W(t) be ℝ^{2r×r}-solutions of equation (3.4) satisfying the initial conditions V(0) = [ I_r ; 0 ] and W(0) = [ 0 ; P(0) ], respectively, where P(0) is invertible. Let Y(t) and Ỹ(t) be defined by

Y(t) = [ I_r  0 ] V(t),    Ỹ(t) = [ I_r  0 ] W(t);

then Y(0) = I_r, Ỹ(0) = 0, and by (3.3)-(3.5) one gets

Y'(t) = [ I_r  0 ] V'(t) = [ 0  P^{-1}(t) ] V(t),    Ỹ'(t) = [ I_r  0 ] W'(t) = [ 0  P^{-1}(t) ] W(t).

Hence,

Y'(0) = 0,    Ỹ'(0) = P^{-1}(0) P(0) = I_r,

and by Lemma 3.1, the pair {Y(t), Ỹ(t)} is a fundamental set of solutions of (1.9). In particular, the ℝ^r-solution of the problem

(P(t) y')' + λ²Q(t) y = 0,    y(0) = α,    y'(0) = β,    t ≥ 0,

is given by y(t) = Y(t) α + Ỹ(t) β.
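Example 3.1 reduces the construction of the fundamental set {Y(t), Ỹ(t)} to integrating the first-order system (3.5) from the two block initial conditions. The following sketch is illustrative only: the coefficient data, the value of λ, and the use of scipy.integrate.solve_ivp are assumptions, not part of the paper. It builds M(t,λ) and integrates V and W.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fundamental_set(P, Pinv, Q, lam, r, t_eval):
    """Approximate Y(t), Ytilde(t) of Example 3.1 by integrating V' = M(t, lam) V."""
    def M(t):
        top = np.hstack([np.zeros((r, r)), Pinv(t)])
        bot = np.hstack([-lam**2 * Q(t), np.zeros((r, r))])
        return np.vstack([top, bot])

    def rhs(t, v):                        # v holds a (2r, r) matrix, flattened row-major
        V = v.reshape(2 * r, r)
        return (M(t) @ V).ravel()

    V0 = np.vstack([np.eye(r), np.zeros((r, r))])        # V(0) = [I; 0]
    W0 = np.vstack([np.zeros((r, r)), P(0.0)])           # W(0) = [0; P(0)]
    solV = solve_ivp(rhs, (t_eval[0], t_eval[-1]), V0.ravel(), t_eval=t_eval, rtol=1e-9, atol=1e-12)
    solW = solve_ivp(rhs, (t_eval[0], t_eval[-1]), W0.ravel(), t_eval=t_eval, rtol=1e-9, atol=1e-12)
    Y = solV.y.T.reshape(-1, 2 * r, r)[:, :r, :]          # top block of V(t)
    Yt = solW.y.T.reshape(-1, 2 * r, r)[:, :r, :]         # top block of W(t)
    return Y, Yt

# toy data: r = 2, constant coefficients (a special case of (1.5),(1.6))
r = 2
P = lambda t: np.array([[2.0, 1.0], [1.0, 2.0]])
Pinv = lambda t: np.linalg.inv(P(t))
Q = lambda t: np.eye(2)
Y, Yt = fundamental_set(P, Pinv, Q, lam=np.pi, r=r, t_eval=np.linspace(0.0, 1.0, 11))
print(Y[0])    # approximately the identity, consistent with Y(0) = I
```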

THEOREM 3.1. Let λ > 0 and let

A(t,λ) = [ λ²Q(t)  0 ; 0  P^{-1}(t) ],    J = [ 0  -I ; I  0 ]

be matrices in ℝ^{2r×2r}, where P(t), Q(t) are positive definite matrices in ℝ^{r×r}. Let F and G be the real valued functions defined by

F(v,t) = v^T J^T A^{-1}(t,λ) J v,    G(v,t) = v^T A(t,λ) v,    v ∈ ℝ^{2r},  t ≥ 0,        (3.9)

and let v_i = [ y_i ; z_i ], y_i ∈ ℝ^r, z_i ∈ ℝ^r, 1 ≤ i ≤ r, be a column of a solution of the matrix differential equation (3.4). Then it follows that:

(i) F(v,t) ≥ 0 and G(v,t) ≥ 0, for all v ∈ ℝ^{2r}, t ≥ 0;
(ii) if P(t), Q(t) are differentiable functions and -P'(t), Q'(t) are both positive semidefinite, then F(v_i(t),t) ≤ F(v_i(0),0), for all t ≥ 0, 1 ≤ i ≤ r;
(iii) if -P'(t), Q'(t) are both negative semidefinite, then G(v_i(t),t) ≤ G(v_i(0),0), for all t ≥ 0, 1 ≤ i ≤ r.

PROOF.

(i) Note that A(t,λ) and A^{-1}(t,λ) are positive definite matrices in ℝ^{2r×2r}. Hence G(v,t) ≥ 0 for all v ∈ ℝ^{2r}, t ≥ 0. Furthermore, for v ∈ ℝ^{2r} one gets

F(v,t) = (Jv)^T A^{-1}(t,λ) (Jv) = ξ^T A^{-1}(t,λ) ξ ≥ 0,    ξ = Jv,  t ≥ 0.

(ii) Let v_i(t) = [ y_i(t) ; z_i(t) ] be the ith column vector of a solution V(t) of problem (3.4) for 1 ≤ i ≤ r. Considering F(v_i(t),t) as a function of t, by (3.4) it follows that

F'(v_i(t),t) = d/dt ( F(v_i(t),t) )
    = (v_i'(t))^T J^T A^{-1}(t,λ) J v_i(t) + v_i^T(t) J^T A^{-1}(t,λ) J v_i'(t) + v_i^T(t) J^T ( d/dt A^{-1}(t,λ) ) J v_i(t).

Since v_i'(t) = J^T A(t,λ) v_i(t) by (3.5) and J J^T = I_{2r}, the first two terms reduce to v_i^T(t) ( J + J^T ) v_i(t) = 0. Using d/dt A^{-1}(t,λ) = -A^{-1}(t,λ) ( d/dt A(t,λ) ) A^{-1}(t,λ), one gets

F'(v_i(t),t) = - ( J v_i(t) )^T A^{-1}(t,λ) ( d/dt A(t,λ) ) A^{-1}(t,λ) ( J v_i(t) ).

By the hypothesis, d/dt A(t,λ) = [ λ²Q'(t)  0 ; 0  -P^{-1}(t)P'(t)P^{-1}(t) ] is positive semidefinite, because Q'(t) ≥ 0 and -P'(t) ≥ 0, and hence A^{-1}(t,λ) ( d/dt A(t,λ) ) A^{-1}(t,λ) ≥ 0 by (1.14). Therefore

F'(v_i(t),t) ≤ 0,    t ≥ 0.

Thus, F(v_i(t),t) is a decreasing function of t for 1 ≤ i ≤ r, and in particular F(v_i(t),t) ≤ F(v_i(0),0), for all t ≥ 0. This proves (ii).

(iii) Let v_i(t) be as in part (ii). By the definition of G, hypothesis (1.5), and the fact that J + J^T = 0, it follows that

G'(v_i(t),t) = d/dt ( G(v_i(t),t) ) = (v_i'(t))^T A(t,λ) v_i(t) + v_i^T(t) A(t,λ) v_i'(t) + v_i^T(t) ( d/dt A(t,λ) ) v_i(t)
    = v_i^T(t) A(t,λ) ( J + J^T ) A(t,λ) v_i(t) + v_i^T(t) ( d/dt A(t,λ) ) v_i(t)
    = v_i^T(t) ( d/dt A(t,λ) ) v_i(t) ≤ 0,

because now Q'(t) ≤ 0 and -P'(t) ≤ 0 imply d/dt A(t,λ) ≤ 0. Hence part (iii) is proved. ∎
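The monotonicity statements (ii) and (iii) of Theorem 3.1 are what later turn into the uniform bounds of Theorems 3.2 and 3.3. The following sketch (an illustrative check under assumed coefficient data, not the authors' code) integrates one column of (3.4) and evaluates F(v(t),t) and G(v(t),t) along the trajectory, so their monotone behaviour can be inspected; the chosen P, Q satisfy -P' ≥ 0, Q' ≥ 0, so F should be nonincreasing here.

```python
import numpy as np
from scipy.integrate import solve_ivp

r, lam = 2, np.pi
P = lambda t: np.array([[3.0 - np.arctan(t), 1.0], [1.0, 1.0]])   # -P'(t) >= 0
Q = lambda t: np.array([[2.0 + np.arctan(t), 1.0], [1.0, 1.0]])   #  Q'(t) >= 0
A = lambda t: np.block([[lam**2 * Q(t), np.zeros((r, r))],
                        [np.zeros((r, r)), np.linalg.inv(P(t))]])
J = np.block([[np.zeros((r, r)), -np.eye(r)], [np.eye(r), np.zeros((r, r))]])
M = lambda t: J.T @ A(t)                                           # M(t, lam) of (3.5)

rhs = lambda t, v: M(t) @ v
v0 = np.concatenate([np.eye(r)[:, 0], np.zeros(r)])                # first column of V(0) = [I; 0]
ts = np.linspace(0.0, 2.0, 9)
sol = solve_ivp(rhs, (0.0, 2.0), v0, t_eval=ts, rtol=1e-10, atol=1e-12)

for t, v in zip(sol.t, sol.y.T):
    F = (J @ v) @ np.linalg.inv(A(t)) @ (J @ v)    # F(v, t) of (3.9)
    G = v @ A(t) @ v                               # G(v, t) of (3.9)
    print(f"t = {t:4.2f}   F = {F:.6f}   G = {G:.6f}")
```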

THEOREM 3.2. Let V(t) and W(t) be the solutions of system (3.4) satisfying the initial conditions V(0) = [ I ; 0 ], W(0) = [ 0 ; P(0) ], and let Y(t) = [ I_r  0 ] V(t), Ỹ(t) = [ I_r  0 ] W(t). Under the hypothesis of Theorem 3.1(ii) it follows that

‖Y(t)‖ ≤ ( r ‖P^{-1}(t)‖ ‖P(0)‖ )^{1/2},    ‖Y'(t)‖ ≤ λ ( r ‖P^{-1}(t)‖² ‖P(0)‖ ‖Q(t)‖ )^{1/2},    t ≥ 0,        (3.10)

‖Ỹ(t)‖ ≤ λ^{-1} ( r ‖P(0)‖² ‖P^{-1}(t)‖ ‖Q^{-1}(0)‖ )^{1/2},    ‖Ỹ'(t)‖ ≤ ( r ‖Q^{-1}(0)‖ ‖Q(t)‖ ‖P^{-1}(t)‖² ‖P(0)‖² )^{1/2},    t ≥ 0.        (3.11)

PROOF. Let v_i(t) be the ith column vector of V(t), expressed in the form v_i = [ y_i ; z_i ], where y_i, z_i are vectors in ℝ^r for 1 ≤ i ≤ r. By the hypothesis of Theorem 3.1(ii) and (1.12), it follows that

F(v_i(t),t) ≤ F(v_i(0),0) = ( J v_i(0) )^T A^{-1}(0,λ) ( J v_i(0) ) = [ -z_i^T(0)  y_i^T(0) ] A^{-1}(0,λ) [ -z_i(0) ; y_i(0) ]
    = λ^{-2} z_i^T(0) Q^{-1}(0) z_i(0) + y_i^T(0) P(0) y_i(0) = y_i^T(0) P(0) y_i(0) ≤ ‖y_i(0)‖_2² ‖P(0)‖ = ‖P(0)‖,        (3.12)

because z_i(0) = 0 and ‖y_i(0)‖_2 = 1. By (1.12) and (1.15) one gets

F(v_i(t),t) = λ^{-2} z_i^T(t) Q^{-1}(t) z_i(t) + y_i^T(t) P(t) y_i(t) ≥ y_i^T(t) P(t) y_i(t) ≥ ‖y_i(t)‖_2² λ_min(P(t)) = ‖y_i(t)‖_2² ‖P^{-1}(t)‖^{-1},        (3.13)

and by (3.12) and (3.13),

‖y_i(t)‖_2² ≤ ‖P^{-1}(t)‖ ‖P(0)‖,    t ≥ 0,  1 ≤ i ≤ r.        (3.14)

Note also that

F(v_i(t),t) ≥ λ^{-2} z_i^T(t) Q^{-1}(t) z_i(t),        (3.15)

and by (1.15),

λ^{-2} ‖Q(t)‖^{-1} ‖z_i(t)‖_2² ≤ λ^{-2} z_i^T(t) Q^{-1}(t) z_i(t),    1 ≤ i ≤ r.        (3.16)

By (3.12), (3.15), and (3.16) one gets

‖z_i(t)‖_2² ≤ λ² ‖P(0)‖ ‖Q(t)‖,    t ≥ 0,  1 ≤ i ≤ r.        (3.17)

Premultiplying system (3.4) by J^{-1} = J^T, the system can be written in the equivalent form (3.5). This means that the vector v_i(t) satisfies

y_i'(t) = P^{-1}(t) z_i(t),    z_i'(t) = -λ² Q(t) y_i(t).        (3.18)

Hence, y_i'(t) = P^{-1}(t) z_i(t), and by (3.17) one gets

‖y_i'(t)‖_2² ≤ λ² ‖P^{-1}(t)‖² ‖P(0)‖ ‖Q(t)‖,    1 ≤ i ≤ r.        (3.19)

By (3.14) and (3.19), Y(t) = [ I_r  0 ] V(t) satisfies

‖Y(t)‖_F² ≤ r ‖P^{-1}(t)‖ ‖P(0)‖,    ‖Y'(t)‖_F² ≤ r λ² ‖P^{-1}(t)‖² ‖P(0)‖ ‖Q(t)‖,    t ≥ 0.        (3.20)

By (1.10) and (3.20) one gets (3.10). Let w_i be the ith column vector of W(t), written in the form w_i = [ ỹ_i ; z̃_i ], where ỹ_i, z̃_i are vectors in ℝ^r for 1 ≤ i ≤ r. Taking into account (1.12) and (1.15), by Theorem 3.1(ii) it follows that

F(w_i(t),t) ≤ F(w_i(0),0) = ( J w_i(0) )^T A^{-1}(0,λ) ( J w_i(0) ) = [ -z̃_i^T(0)  ỹ_i^T(0) ] A^{-1}(0,λ) [ -z̃_i(0) ; ỹ_i(0) ]
    = λ^{-2} z̃_i^T(0) Q^{-1}(0) z̃_i(0) + ỹ_i^T(0) P(0) ỹ_i(0) = λ^{-2} z̃_i^T(0) Q^{-1}(0) z̃_i(0) ≤ λ^{-2} ‖z̃_i(0)‖_2² ‖Q^{-1}(0)‖ ≤ λ^{-2} ‖P(0)‖² ‖Q^{-1}(0)‖,        (3.21)

because ỹ_i(0) = 0 and z̃_i(0) is the ith column of P(0). By (1.12) and (1.15) one gets

F(w_i(t),t) = λ^{-2} z̃_i^T(t) Q^{-1}(t) z̃_i(t) + ỹ_i^T(t) P(t) ỹ_i(t) ≥ λ^{-2} z̃_i^T(t) Q^{-1}(t) z̃_i(t) ≥ λ^{-2} ‖z̃_i(t)‖_2² ‖Q(t)‖^{-1},
F(w_i(t),t) ≥ ỹ_i^T(t) P(t) ỹ_i(t) ≥ ‖ỹ_i(t)‖_2² ‖P^{-1}(t)‖^{-1},    1 ≤ i ≤ r,  t ≥ 0.        (3.22)

By (3.21) and (3.22) it follows that

‖ỹ_i(t)‖_2² ≤ λ^{-2} ‖P(0)‖² ‖Q^{-1}(0)‖ ‖P^{-1}(t)‖,    ‖z̃_i(t)‖_2² ≤ ‖P(0)‖² ‖Q^{-1}(0)‖ ‖Q(t)‖,    1 ≤ i ≤ r,  t ≥ 0.        (3.23)

Since ỹ_i'(t) = P^{-1}(t) z̃_i(t), by (3.23) one gets

‖ỹ_i'(t)‖_2² ≤ ‖P^{-1}(t)‖² ‖P(0)‖² ‖Q^{-1}(0)‖ ‖Q(t)‖,    1 ≤ i ≤ r,  t ≥ 0,        (3.24)

and by (3.23) and (3.24) one concludes that Ỹ(t) = [ I_r  0 ] W(t) satisfies

‖Ỹ(t)‖_F² ≤ r λ^{-2} ‖P(0)‖² ‖Q^{-1}(0)‖ ‖P^{-1}(t)‖,    ‖Ỹ'(t)‖_F² ≤ r ‖P^{-1}(t)‖² ‖P(0)‖² ‖Q^{-1}(0)‖ ‖Q(t)‖,    t ≥ 0.

This proves inequalities (3.11). ∎

THEOREM 3.3. Let V(t) and W(t) be the solutions of system (3.4) satisfying the initial conditions V(0) = [ I ; 0 ], W(0) = [ 0 ; P(0) ], and let Y(t) = [ I_r  0 ] V(t), Ỹ(t) = [ I_r  0 ] W(t). Under the hypothesis of Theorem 3.1(iii) it follows that

‖Y(t)‖ ≤ ( r ‖Q(0)‖ ‖Q^{-1}(t)‖ )^{1/2},    ‖Y'(t)‖ ≤ λ ( r ‖P^{-1}(t)‖² ‖P(t)‖ ‖Q(0)‖ )^{1/2},    t ≥ 0,        (3.25)

‖Ỹ(t)‖ ≤ λ^{-1} ( r ‖P(0)‖ ‖Q^{-1}(t)‖ )^{1/2},    ‖Ỹ'(t)‖ ≤ ( r ‖P(0)‖ ‖P(t)‖ ‖P^{-1}(t)‖² )^{1/2},    t ≥ 0.        (3.26)

PROOF. Let v_i(t) = [ y_i(t) ; z_i(t) ] be as in Theorem 3.2. By the hypothesis of Theorem 3.1(iii) and taking into account (1.15), it follows that

G(v_i(t),t) ≤ G(v_i(0),0) = v_i^T(0) A(0,λ) v_i(0) = λ² y_i^T(0) Q(0) y_i(0) + z_i^T(0) P^{-1}(0) z_i(0)
    = λ² y_i^T(0) Q(0) y_i(0) ≤ λ² ‖y_i(0)‖_2² ‖Q(0)‖ = λ² ‖Q(0)‖,        (3.27)

G(v_i(t),t) = λ² y_i^T(t) Q(t) y_i(t) + z_i^T(t) P^{-1}(t) z_i(t) ≥ λ² ‖y_i(t)‖_2² ‖Q^{-1}(t)‖^{-1},
G(v_i(t),t) ≥ z_i^T(t) P^{-1}(t) z_i(t) ≥ ‖z_i(t)‖_2² ‖P(t)‖^{-1}.

By (3.27) one gets

‖y_i(t)‖_2² ≤ ‖Q(0)‖ ‖Q^{-1}(t)‖,    ‖z_i(t)‖_2² ≤ λ² ‖Q(0)‖ ‖P(t)‖.        (3.28)

Since y_i'(t) = P^{-1}(t) z_i(t) by (3.18), it follows that

‖y_i'(t)‖_2² ≤ λ² ‖Q(0)‖ ‖P^{-1}(t)‖² ‖P(t)‖,    t ≥ 0,

and hence

‖Y(t)‖_F² ≤ r ‖Q(0)‖ ‖Q^{-1}(t)‖,    ‖Y'(t)‖_F² ≤ r λ² ‖Q(0)‖ ‖P^{-1}(t)‖² ‖P(t)‖,    t ≥ 0.

This proves (3.25). Let w_i(t) = [ ỹ_i(t) ; z̃_i(t) ] be like in the proof of Theorem 3.2, and note that by Theorem 3.1(iii) one gets

G(w_i(t),t) ≤ G(w_i(0),0) = λ² ỹ_i^T(0) Q(0) ỹ_i(0) + z̃_i^T(0) P^{-1}(0) z̃_i(0) = z̃_i^T(0) P^{-1}(0) z̃_i(0) ≤ ‖P(0)‖,        (3.29)

because ỹ_i(0) = 0 and z̃_i(0) is the ith column of P(0), so that z̃_i^T(0) P^{-1}(0) z̃_i(0) = e_i^T P(0) e_i ≤ ‖P(0)‖. Moreover,

G(w_i(t),t) = λ² ỹ_i^T(t) Q(t) ỹ_i(t) + z̃_i^T(t) P^{-1}(t) z̃_i(t) ≥ λ² ‖ỹ_i(t)‖_2² ‖Q^{-1}(t)‖^{-1},
G(w_i(t),t) ≥ z̃_i^T(t) P^{-1}(t) z̃_i(t) ≥ ‖z̃_i(t)‖_2² ‖P(t)‖^{-1},    1 ≤ i ≤ r.

By (3.29) it follows that

‖ỹ_i(t)‖_2² ≤ λ^{-2} ‖Q^{-1}(t)‖ ‖P(0)‖,    ‖z̃_i(t)‖_2² ≤ ‖P(0)‖ ‖P(t)‖,    1 ≤ i ≤ r.        (3.30)

Taking into account that ỹ_i'(t) = P^{-1}(t) z̃_i(t), by (3.30) one gets

‖ỹ_i'(t)‖_2² ≤ ‖P^{-1}(t)‖² ‖P(0)‖ ‖P(t)‖,    1 ≤ i ≤ r,        (3.31)

and by (3.30) and (3.31) it follows that

‖Ỹ(t)‖_F² ≤ r λ^{-2} ‖Q^{-1}(t)‖ ‖P(0)‖,    ‖Ỹ'(t)‖_F² ≤ r ‖P(0)‖ ‖P(t)‖ ‖P^{-1}(t)‖²,    t ≥ 0.        (3.32)

This proves inequalities (3.26). ∎

4. EXACT SERIES SOLUTION

By applying the separation of variables technique, we seek solutions of problem (1.1)-(1.3) of the form

u(x,t) = Y(t) X(x),    Y(t) ∈ ℝ^r,  X(x) ∈ ℝ,        (4.1)

where X(x) and Y(t) verify

X'' + λ² X = 0,    0 < x < d,    X(0) = X(d) = 0,        (4.2)

(P(t) Y')' + λ² Q(t) Y = 0,    t ≥ 0.        (4.3)

The eigenvalues of problem (4.2) are λ_n² = (nπ/d)², n ≥ 1, and the corresponding eigenfunctions are

X_n(x) = sin( nπx/d ),    n ≥ 1.        (4.4)

Let {Y_n(t), Ỹ_n(t)} be the fundamental set of solutions of the matrix differential equation

(P(t) Y')' + (nπ/d)² Q(t) Y = 0,        (4.5)

satisfying

Y_n(0) = I,    Y_n'(0) = 0,    Ỹ_n(0) = 0,    Ỹ_n'(0) = I,    n ≥ 1,        (4.6)

whose existence is guaranteed by Example 3.1. By Theorems 3.2 and 3.3, given T > 0, there exist constants M and N such that

‖Y_n(t)‖ ≤ M,    ‖Y_n'(t)‖ ≤ (nπ/d) M,    ‖Ỹ_n(t)‖ ≤ (d/(nπ)) N ( ‖P^{-1}(0)‖ ‖Q(0)‖^{-1} )^{1/2},    ‖Ỹ_n'(t)‖ ≤ N ( ‖P^{-1}(0)‖ ‖Q(0)‖^{-1} )^{1/2},    0 ≤ t ≤ T,  n ≥ 1.        (4.7)

For n ≥ 1 and t ≥ 0,

y_n(t) = Y_n(t) a_n + Ỹ_n(t) b_n,    a_n = (2/d) ∫_0^d f(x) sin( nπx/d ) dx,    b_n = (2/d) ∫_0^d g(x) sin( nπx/d ) dx,        (4.8)

is the ℝ^r-solution of the problem

(P(t) y_n')' + (nπ/d)² Q(t) y_n = 0,    y_n(0) = a_n,    y_n'(0) = b_n,    n ≥ 1.        (4.9)
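The coefficients a_n and b_n of (4.8) are ordinary Fourier sine coefficients of the vector valued data f and g on [0,d], so in practice they can be approximated componentwise by quadrature. The sketch below is illustrative only; the data functions and the composite-trapezoid quadrature are assumptions, not prescribed by the paper.

```python
import numpy as np

def sine_coefficients(func, d, n_max, n_quad=2000):
    """Approximate c_n = (2/d) * int_0^d func(x) sin(n*pi*x/d) dx, n = 1..n_max.

    func : callable returning an array of shape (r,) for scalar x
    Returns an array of shape (n_max, r).
    """
    x = np.linspace(0.0, d, n_quad + 1)
    fx = np.array([func(xi) for xi in x])               # (n_quad+1, r)
    coeffs = []
    for n in range(1, n_max + 1):
        weight = np.sin(n * np.pi * x / d)[:, None]
        coeffs.append((2.0 / d) * np.trapz(fx * weight, x, axis=0))
    return np.array(coeffs)

# toy data compatible with (1.7),(1.8): vanishing boundary values on [0, d]
d = 1.0
f = lambda x: np.array([np.sin(np.pi * x / d), np.sin(2 * np.pi * x / d)])
g = lambda x: np.array([x * (d - x), 0.0])
a = sine_coefficients(f, d, n_max=8)    # a_n of (4.8)
b = sine_coefficients(g, d, n_max=8)    # b_n of (4.8)
print(a[0], b[0])
```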

Hence, by (4.8) and (4.9),

u_n(x,t) = y_n(t) sin( nπx/d ) = { Y_n(t) a_n + Ỹ_n(t) b_n } sin( nπx/d )        (4.10)

satisfies (1.1) and (1.2) for all n ≥ 1. Under hypotheses (1.7) and (1.8), the sine Fourier coefficients a_n and b_n satisfy, see [17, p. 71],

Σ_{n≥1} n² ‖a_n‖ < +∞,        (4.11)

Σ_{n≥1} n ‖b_n‖ < +∞,        (4.12)

see also [18, pp. 95-99; 19; 20, pp. 38-41]. Note that by (4.7), (4.11), (4.12), the series

u(x,t) = Σ_{n≥1} u_n(x,t) = Σ_{n≥1} { Y_n(t) a_n + Ỹ_n(t) b_n } sin( nπx/d )        (4.13)

is uniformly convergent in the bounded domain D(T) = {(x,t); 0 ≤ x ≤ d, 0 ≤ t ≤ T}. Thus u(x,t) defines a continuous function. Under hypotheses (1.7),(1.8), u(x,t) satisfies the initial conditions (1.3) and (1.4), see [20, p. 46]. Furthermore, by (4.7), (4.11), (4.12), the series

Σ_{n≥1} (nπ/d)² { Y_n(t) a_n + Ỹ_n(t) b_n } sin( nπx/d ),    Σ_{n≥1} { Y_n'(t) a_n + Ỹ_n'(t) b_n } sin( nπx/d )

are also uniformly convergent in D(T). Hence, by the derivation theorem of functional series [21, p. 403], u(x,t) is termwise partially differentiable, twice with respect to x and once with respect to t, for (x,t) in D(T), and

P(t) u_t(x,t) = Σ_{n≥1} P(t) { Y_n'(t) a_n + Ỹ_n'(t) b_n } sin( nπx/d ) = Σ_{n≥1} P(t) y_n'(t) sin( nπx/d ),        (4.14)

u_{xx}(x,t) = - Σ_{n≥1} (nπ/d)² { Y_n(t) a_n + Ỹ_n(t) b_n } sin( nπx/d ).        (4.15)

Note that by (4.9) one gets ( P(t) y_n'(t) )' = -(nπ/d)² Q(t) y_n(t), and by (4.7), (4.11), (4.12), the series

Σ_{n≥1} ( P(t) y_n'(t) )' sin( nπx/d ) = - Σ_{n≥1} (nπ/d)² Q(t) y_n(t) sin( nπx/d )

is uniformly convergent in D(T). By the derivation theorem of functional series, the series (4.14) is termwise differentiable with respect to t, and

( P(t) u_t(x,t) )_t = - Σ_{n≥1} (nπ/d)² Q(t) y_n(t) sin( nπx/d ),    (x,t) ∈ D(T).        (4.16)

By (4.15) and (4.16) it follows that u(x,t) defined by (4.13) is a rigorous solution of problem (1.1)-(1.4). Summarizing, the following result has been established.

THEOREM 4.1. Let P(t), Q(t) be positive definite ℝ^{r×r}-valued continuously differentiable functions satisfying hypotheses (1.5),(1.6), and let f(x), g(x) be ℝ^r-valued functions satisfying (1.7) and (1.8). If a_n, b_n are defined by (4.8), then u(x,t) defined by (4.13) is a solution of problem (1.1)-(1.4).

REMARK. The series solution u(x,t) provided by Theorem 4.1 presents two computational difficulties. First, the infiniteness of the series, and second, the fundamental set of solutions {Y_n(t), Ỹ_n(t)} of equation (4.5) satisfying (4.6) is not known in an explicit computable way. These drawbacks are treated in the next section.
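Once a_n, b_n and the fundamental matrices Y_n, Ỹ_n are available (exactly or numerically), the truncated version of (4.13) is a plain finite sum. The sketch below is illustrative only: the callables `Yn` and `Ytn` are assumed to return the r×r fundamental matrices (for instance obtained as in the sketch after Example 3.1), and the dummy data correspond to the constant-coefficient case P = Q = I on [0,1].

```python
import numpy as np

def truncated_series(x, t, a, b, Yn, Ytn, d):
    """Partial sum of (4.13): sum_{n=1}^{n0-1} (Y_n(t) a_n + Ytilde_n(t) b_n) sin(n pi x / d).

    a, b    : arrays of shape (n0 - 1, r) holding the coefficients of (4.8)
    Yn, Ytn : callables Yn(n, t), Ytn(n, t) returning (r, r) matrices (assumed available)
    """
    r = a.shape[1]
    u = np.zeros(r)
    for k in range(a.shape[0]):
        n = k + 1
        u += (Yn(n, t) @ a[k] + Ytn(n, t) @ b[k]) * np.sin(n * np.pi * x / d)
    return u

# dummy fundamental matrices for P = Q = I, d = 1: Y_n = cos(n pi t) I, Ytilde_n = sin(n pi t)/(n pi) I
r = 2
a = np.zeros((5, r)); a[0, 0] = 1.0
b = np.zeros((5, r))
Yn  = lambda n, t: np.eye(r) * np.cos(n * np.pi * t)
Ytn = lambda n, t: np.eye(r) * np.sin(n * np.pi * t) / (n * np.pi)
print(truncated_series(0.5, 0.3, a, b, Yn, Ytn, d=1.0))
```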

5. CONTINUOUS NUMERICAL SOLUTIONS

This section deals with the construction of continuous numerical solutions of problem (1.1)-(1.4) with a prefixed accuracy in the domain D(T). The approach is based on two steps. The first is the truncation of the infinite series u(x,t) given by (4.13). Once the infinite series has been truncated according to a prefixed accuracy, we construct numerical approximations of y_n(t) defined by (4.8) for a finite number of values n = 1, …, n_0 - 1, n_0 being the starting truncation index.

Let ε and T be positive numbers. By (4.7) and (4.11), the tail of the series (4.13) admits a bound of the form (5.1). By [22, p. 40], see also [9], one gets the estimate (5.2), and by (5.1),(5.2), taking n_0 to be the first positive integer satisfying the resulting inequality (5.3), it follows that

‖ u(x,t) - Σ_{n=1}^{n_0-1} u_n(x,t) ‖ < ε/2,    (x,t) ∈ D(T).        (5.4)

Suppose that P(t) and Q(t) are twice continuously differentiable ℝ^{r×r}-valued functions, apart from the hypotheses of the previous sections. Note that Y_n(t) = [ I_r  0 ] V_n(t) and Ỹ_n(t) = [ I_r  0 ] W_n(t), where

V_n'(t) = M(t,n) V_n(t),    W_n'(t) = M(t,n) W_n(t),    0 ≤ t ≤ T,    M(t,n) = [ 0  P^{-1}(t) ; -(nπ/d)² Q(t)  0 ],        (5.5)

in accordance with Example 3.1. Let k_{i,n} be the constants defined by

k_{i,n} = max { ‖ M^{(i)}(t,n) ‖ ;  0 ≤ t ≤ T },    0 ≤ i ≤ 2,  1 ≤ n ≤ n_0 - 1.        (5.6)

Computing the successive derivatives of V_n(t), it follows that

V_n^{(2)}(t) = { M'(t,n) + M²(t,n) } V_n(t),
V_n^{(3)}(t) = { M³(t,n) + M(t,n) M'(t,n) + 2 M'(t,n) M(t,n) + M^{(2)}(t,n) } V_n(t),        (5.7)

and by [16, p. 114] and (5.7),

‖V_n(t)‖ ≤ exp(T k_{0,n}),    ‖V_n^{(2)}(t)‖ ≤ ( k_{1,n} + k_{0,n}² ) exp(T k_{0,n}),
‖V_n^{(3)}(t)‖ ≤ ( k_{2,n} + 3 k_{0,n} k_{1,n} + k_{0,n}³ ) exp(T k_{0,n}),    0 ≤ t ≤ T.        (5.8)

In an analogous way, W_n(t) satisfies

‖W_n(t)‖ ≤ ‖W_n(0)‖ exp(T k_{0,n}),    ‖W_n^{(2)}(t)‖ ≤ ( k_{1,n} + k_{0,n}² ) ‖W_n(0)‖ exp(T k_{0,n}),
‖W_n^{(3)}(t)‖ ≤ ( k_{2,n} + 3 k_{0,n} k_{1,n} + k_{0,n}³ ) ‖W_n(0)‖ exp(T k_{0,n}),    0 ≤ t ≤ T,  1 ≤ n ≤ n_0 - 1.        (5.9)

In accordance with [23, Section 2; 24], we now construct discrete numerical approximations of V_n(t) and W_n(t) of the form, see [24],

V_{n,0} = [ I_r ; 0 ],    V_{n,m} = Π_{j=0}^{m-1} { [ I_{2r} - (h/2) M(t_{m-j},n) ]^{-1} [ I_{2r} + (h/2) M(t_{m-j-1},n) ] } V_{n,0},

W_{n,0} = [ 0 ; P(0) ],    W_{n,m} = Π_{j=0}^{m-1} { [ I_{2r} - (h/2) M(t_{m-j},n) ]^{-1} [ I_{2r} + (h/2) M(t_{m-j-1},n) ] } W_{n,0},        (5.10)

Nh = T,    h < k_{0,n}^{-1},    t_j = jh,    1 ≤ m ≤ N.

If we denote by

e_{n,m} = V_n(t_m) - V_{n,m},    f_{n,m} = W_n(t_m) - W_{n,m},    1 ≤ n ≤ n_0 - 1,        (5.11)

the global discretization errors at t_m = mh of the approximations V_{n,m} and W_{n,m}, by Theorem 2 of [24] it follows that

‖e_{n,m}‖ ≤ h² T exp(3 T k_{0,n}) { k_{0,n}³ + 3 k_{0,n} k_{1,n} + k_{2,n} } = h² E_n,
‖f_{n,m}‖ ≤ h² T exp(3 T k_{0,n}) { k_{0,n}³ + 3 k_{0,n} k_{1,n} + k_{2,n} } ‖P(0)‖ = h² F_n,        (5.12)
Nh = T,    1 ≤ m ≤ N,    h < k_{0,n}^{-1}.

Hence, the ℝ^r vector sequence defined by

t_{n,m} = [ I_r  0 ] { V_{n,m} a_n + W_{n,m} b_n } = Y_{n,m} a_n + Ỹ_{n,m} b_n,    1 ≤ m ≤ N,  1 ≤ n ≤ n_0 - 1,        (5.13)

satisfies

‖ t_{n,m} - y_n(t_m) ‖ ≤ h² ( E_n ‖a_n‖_2 + F_n ‖b_n‖_2 ),    1 ≤ m ≤ N,  1 ≤ n ≤ n_0 - 1.        (5.14)

Now we construct a continuous numerical approximation of y_n(t) using linear B-spline functions interpolating the values {t_{n,m}}_{m=0}^N. Let bl_m(t) be defined by

bl_m(t) = h^{-1}(t - t_m),  t_m ≤ t < t_{m+1};    bl_m(t) = h^{-1}(t_{m+2} - t),  t_{m+1} ≤ t < t_{m+2},

with t_{m+1} - t_m = h and bl_m(t) = 0 for t < t_m and t ≥ t_{m+2}. In addition, bl_m(t) ≥ 0 and bl_m(t) + bl_{m-1}(t) = 1 for every t in [t_m, t_{m+1}], see [25, p. 247]. Let us consider the linear B-spline matrix functions interpolating the theoretical values Y_n(t_m) as well as the numerical approximations {Y_{n,m}}_{m=0}^N, defined by

S(Y_n,t) = Σ_{m=-1}^{N-1} bl_m(t) Y_n(t_{m+1}),    T(Y_n,t) = Σ_{m=-1}^{N-1} bl_m(t) Y_{n,m+1}.        (5.15)

For 0 ≤ t ≤ T, 1 ≤ n ≤ n_0 - 1, it follows that (see [24, p. 69])

‖ S(Y_n,t) - T(Y_n,t) ‖ ≤ max_{0≤m≤N} ‖ Y_n(t_m) - Y_{n,m} ‖ = max_{0≤m≤N} ‖ e_{n,m} ‖ ≤ h² E_n.        (5.16)

In an analogous way, denoting

S(Ỹ_n,t) = Σ_{m=-1}^{N-1} bl_m(t) Ỹ_n(t_{m+1}),    T(Ỹ_n,t) = Σ_{m=-1}^{N-1} bl_m(t) Ỹ_{n,m+1},        (5.17)

one gets

‖ S(Ỹ_n,t) - T(Ỹ_n,t) ‖ ≤ max_{0≤m≤N} ‖ f_{n,m} ‖ ≤ h² F_n.        (5.18)

By (5.12), (5.13), (5.16), and (5.18), the vector functions

U(x,t,n_0-1) = Σ_{n=1}^{n_0-1} { S(Y_n,t) a_n + S(Ỹ_n,t) b_n } sin( nπx/d ),        (5.19)

Ũ(x,t,n_0-1) = Σ_{n=1}^{n_0-1} { T(Y_n,t) a_n + T(Ỹ_n,t) b_n } sin( nπx/d ) = Σ_{n=1}^{n_0-1} T(t_n,t) sin( nπx/d ),        (5.20)

satisfy

‖ U(x,t,n_0-1) - Ũ(x,t,n_0-1) ‖ ≤ h² Σ_{n=1}^{n_0-1} ( E_n ‖a_n‖_2 + F_n ‖b_n‖_2 ),    (x,t) ∈ D(T).        (5.21)

Furthermore, by [24, p. 69; 25, p. 257], (5.8), and (5.9), it follows that

‖ Y_n(t) - S(Y_n,t) ‖ ≤ (h²/8) ( k_{1,n} + k_{0,n}² ) exp(T k_{0,n}),
‖ Ỹ_n(t) - S(Ỹ_n,t) ‖ ≤ (h²/8) ( k_{1,n} + k_{0,n}² ) ‖P(0)‖ exp(T k_{0,n}),    0 ≤ t ≤ T,  1 ≤ n ≤ n_0 - 1.        (5.22)

Denoting

u(x,t,n_0-1) = Σ_{n=1}^{n_0-1} { Y_n(t) a_n + Ỹ_n(t) b_n } sin( nπx/d ),    0 ≤ x ≤ d,  0 ≤ t ≤ T,        (5.23)

by (5.22) it follows that

‖ u(x,t,n_0-1) - U(x,t,n_0-1) ‖ ≤ (h²/8) Σ_{n=1}^{n_0-1} ( k_{1,n} + k_{0,n}² ) exp(T k_{0,n}) ( ‖a_n‖ + ‖P(0)‖ ‖b_n‖ ),        (5.24)

and by (5.21) and (5.24) it follows that

‖ u(x,t,n_0-1) - Ũ(x,t,n_0-1) ‖ ≤ h² Σ_{n=1}^{n_0-1} { [ E_n + (1/8)( k_{1,n} + k_{0,n}² ) exp(T k_{0,n}) ] ‖a_n‖
    + [ F_n + (1/8)( k_{1,n} + k_{0,n}² ) ‖P(0)‖ exp(T k_{0,n}) ] ‖b_n‖ } = h² γ_{n_0-1},    (x,t) ∈ D(T).        (5.25)

Hence, taking h small enough so that

h < ( ε / (2 γ_{n_0-1}) )^{1/2},    h < min { k_{0,n}^{-1} ;  1 ≤ n ≤ n_0 - 1 },        (5.26)

where γ_{n_0-1} is defined by (5.25), it follows that

‖ u(x,t,n_0-1) - Ũ(x,t,n_0-1) ‖ < ε/2,    (x,t) ∈ D(T).        (5.27)

By (5.4) and (5.27) one gets

‖ u(x,t) - Ũ(x,t,n_0-1) ‖ < ε,    (x,t) ∈ D(T).        (5.28)

Summarizing, the following result, which provides a constructive procedure for obtaining accurate numerical solutions of problem (1.1)-(1.4), has been established.
THEOREM 5.1. Suppose that, apart from the hypotheses of Theorem 4.1, P(t) and Q(t) are twice continuously differentiable in [0,T], where T > 0. Given ε > 0, let n_0 be given by (5.3), k_{i,n} be defined by (5.6) for 0 ≤ i ≤ 2, 1 ≤ n ≤ n_0 - 1, and γ_{n_0-1} by (5.25). Let h > 0 be small enough so that it satisfies (5.26), and let N be a positive integer with Nh = T. Let {V_{n,m}}_{m=0}^N, {W_{n,m}}_{m=0}^N be the matrix sequences defined by (5.10) for 1 ≤ n ≤ n_0 - 1, and let t_n(t) be the vector B-spline function interpolating the values

t_{n,m} = [ I_r  0 ] { V_{n,m} a_n + W_{n,m} b_n },    1 ≤ n ≤ n_0 - 1,  0 ≤ m ≤ N,

t_n(t) = Σ_{m=-1}^{N-1} bl_m(t) t_{n,m+1} = T(t_n,t).

Then

Ũ(x,t,n_0-1) = Σ_{n=1}^{n_0-1} t_n(t) sin( nπx/d )

is an approximate solution of problem (1.1)-(1.4) satisfying (5.28).
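Theorem 5.1 is constructive: for each n < n_0 one propagates V_{n,m}, W_{n,m} with the one-step scheme (5.10), forms t_{n,m} as in (5.13), and joins the nodes by linear interpolation as in (5.15). The sketch below is illustrative only; the step count, the coefficients a_1, b_1, and the use of np.interp as a stand-in for the B-spline notation are assumptions, while P and Q are taken from Example 5.1.

```python
import numpy as np

def one_step_factor(M, t_next, t_prev, h, dim):
    """One factor of the product in (5.10): [I - (h/2)M(t_next)]^{-1} [I + (h/2)M(t_prev)]."""
    I = np.eye(dim)
    return np.linalg.solve(I - 0.5 * h * M(t_next), I + 0.5 * h * M(t_prev))

def discrete_fundamental(M, r, h, N, P0):
    """V_{n,m}, W_{n,m} of (5.10) for m = 0..N (n is fixed through the callable M)."""
    V = [np.vstack([np.eye(r), np.zeros((r, r))])]
    W = [np.vstack([np.zeros((r, r)), P0])]
    for m in range(1, N + 1):
        A = one_step_factor(M, m * h, (m - 1) * h, h, 2 * r)
        V.append(A @ V[-1])
        W.append(A @ W[-1])
    return np.array(V), np.array(W)

def spline_t_n(V, W, a_n, b_n, h):
    """Nodal values t_{n,m} of (5.13) plus a piecewise-linear evaluator (cf. (5.15))."""
    nodes = np.array([Vm[: a_n.size] @ a_n + Wm[: a_n.size] @ b_n for Vm, Wm in zip(V, W)])
    grid = h * np.arange(nodes.shape[0])
    return lambda t: np.array([np.interp(t, grid, nodes[:, j]) for j in range(nodes.shape[1])])

# assumed data: r = 2, the matrices of Example 5.1, n = 1, d = T = 1
r, n, d, T, N = 2, 1, 1.0, 1.0, 200
h = T / N
P = lambda t: np.array([[3.0 - np.arctan(t), 1.0], [1.0, 1.0]])
Q = lambda t: np.array([[2.0 + np.arctan(t), 1.0], [1.0, 1.0]])
M = lambda t: np.block([[np.zeros((r, r)), np.linalg.inv(P(t))],
                        [-(n * np.pi / d) ** 2 * Q(t), np.zeros((r, r))]])
V, W = discrete_fundamental(M, r, h, N, P(0.0))
t1 = spline_t_n(V, W, a_n=np.array([1.0, 0.0]), b_n=np.array([0.0, 0.5]), h=h)
print(t1(0.0), t1(0.37))     # t_1(t) at t = 0 and between grid nodes
```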

EXAMPLE 5.1. Let us consider problem (1.1)-(1.4) with

P(t) = [ 3 - arctan t   1 ; 1   1 ],    Q(t) = [ 2 + arctan t   1 ; 1   1 ].

P(t) and Q(t) are symmetric and positive definite for t ≥ 0. Furthermore, the inverse matrices of P(t) and Q(t) are given by

P(t)^{-1} = (1 / (2 - arctan t)) [ 1   -1 ; -1   3 - arctan t ],    Q(t)^{-1} = (1 / (1 + arctan t)) [ 1   -1 ; -1   2 + arctan t ].

Taking derivatives in P(t) and Q(t) it follows that

P'(t) = [ -1/(1+t²)   0 ; 0   0 ],    Q'(t) = [ 1/(1+t²)   0 ; 0   0 ].

Note that -P'(t) and Q'(t) are simultaneously positive semidefinite matrices. Then, by Theorems 2.1 and 4.1, problem (1.1)-(1.4) has a unique solution defined by (4.13).
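The hypotheses used in Example 5.1 can be checked mechanically. The short sketch below (illustrative, not from the paper) samples P(t), Q(t) over a grid and verifies positive definiteness of P, Q and positive semidefiniteness of -P'(t), Q'(t) through their eigenvalues.

```python
import numpy as np

P  = lambda t: np.array([[3.0 - np.arctan(t), 1.0], [1.0, 1.0]])
Q  = lambda t: np.array([[2.0 + np.arctan(t), 1.0], [1.0, 1.0]])
dP = lambda t: np.array([[-1.0 / (1.0 + t**2), 0.0], [0.0, 0.0]])
dQ = lambda t: np.array([[ 1.0 / (1.0 + t**2), 0.0], [0.0, 0.0]])

for t in np.linspace(0.0, 10.0, 101):
    assert np.linalg.eigvalsh(P(t)).min() > 0          # (1.5): P(t) > 0
    assert np.linalg.eigvalsh(Q(t)).min() > 0          # (1.5): Q(t) > 0
    assert np.linalg.eigvalsh(-dP(t)).min() >= -1e-12  # (1.6): -P'(t) >= 0
    assert np.linalg.eigvalsh(dQ(t)).min() >= -1e-12   # (1.6):  Q'(t) >= 0
print("hypotheses (1.5),(1.6) hold on the sampled grid")
```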

REFERENCES

1. W.F. Ames, Numerical Methods for Partial Differential Equations, Academic Press, New York, (1977).
2. G.D. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods, Oxford Univ. Press, Oxford, (1993).
3. B. Szabó and I. Babuška, Finite Element Analysis, John Wiley & Sons, New York, (1996).
4. D.M. Pozar, Microwave Engineering, Addison-Wesley, New York, (1995).
5. A.C. Metaxas and R.J. Meredith, Industrial Microwave Heating, Peter Peregrinus, London, (1983).
6. G. Harvey, Microwave Engineering, Academic Press, New York, (1963).
7. G. Roussy and J.A. Pearcy, Foundations and Industrial Applications of Microwaves and Radio Frequency Fields, John Wiley, New York, (1995).
8. D.C. Dibben and R. Metaxas, Time domain finite element analysis of multimode microwave applicators, IEEE Transactions on Magnetics 32 (3), 942-945, (1996).
9. P. Almenar, L. Jódar and J.A. Martín, Mixed problems for the time-dependent telegraph equation: Continuous numerical solutions with a priori error bounds, Mathl. Comput. Modelling 25 (11), 31-44, (1997).
10. L. Jódar, P. Almenar and D. Goberna, Exact and approximate solutions with a priori error bounds for systems of time dependent wave equations, Mathl. Comput. Modelling 26 (7), 11-2, (1997).
11. G.H. Golub and C.F. Van Loan, Matrix Computations, Johns Hopkins Univ. Press, Baltimore, MD, (1989).
12. J.M. Ortega, Numerical Analysis, A Second Course, Academic Press, New York, (1972).
13. R. Godement, Cours d'Algèbre, Hermann, Paris, (1967).
14. L. Jódar, Explicit solutions for second order operator differential equations with two boundary value conditions, Linear Algebra Appl. 103, 73-86, (1988).
15. W.T. Reid, Sturmian Theory for Ordinary Differential Equations, Springer, Berlin, (1980).
16. T.M. Flett, Differential Analysis, Cambridge Univ. Press, Cambridge, (1980).
17. A. Zygmund, Trigonometric Series, 2 Volumes, Cambridge Univ. Press, Cambridge, (1977).
18. A.N. Tikhonov and A.A. Samarskii, Equations of Mathematical Physics, Dover, New York, (1990).
19. V.I. Smirnov, A Course of Higher Mathematics, Volume II, Pergamon Press, New York, (1964).
20. G.B. Folland, Fourier Analysis and Its Applications, Wadsworth & Brooks, Pacific Grove, CA, (1992).
21. T.M. Apostol, Mathematical Analysis, Addison-Wesley, Reading, MA, (1957).
22. N.M. Temme, Special Functions, John Wiley & Sons, New York, (1996).
23. L. Jódar and E. Ponsoda, Computing continuous numerical solutions of matrix differential equations, Computers Math. Appl. 29 (8), 63-71, (1995).
24. L. Jódar and E. Ponsoda, Non-autonomous Riccati-type matrix differential equations: Existence interval, construction of continuous numerical solutions and error bounds, IMA J. Numer. Anal. 15, 61-74, (1995).
25. G. Hämmerlin and K.H. Hoffmann, Numerical Mathematics, Springer, Berlin, (1991).