The exact solution of nonlinear age-structured population model

Nonlinear Analysis: Real World Applications 8 (2007) 1096–1112

Minggen Cui*, Zhong Chen
Department of Mathematics, Harbin Institute of Technology at Weihai, Shandong 264209, PR China
Received 3 July 2005; accepted 12 June 2006
*Corresponding author. E-mail address: [email protected] (M. Cui).
Supported by the Harbin Institute of Technology (at Weihai, Shandong) Science Foundation, HIT(WH), 2006, Y200511.
doi:10.1016/j.nonrwa.2006.06.004

Abstract

In this paper, the representation of the exact solution of the nonlinear age-structured population model is obtained. Based on this, an effective numerical algorithm for computing an approximate solution of the population model is given. The final numerical experiment shows that our method is effective.
© 2006 Elsevier Ltd. All rights reserved.

Keywords: Age-structured population model; Nonlinear; Reproducing kernel

1. Introduction

In this paper, we discuss how to solve the nonlinear age-structured population model [6]

(I)
\[
\begin{cases}
\dfrac{\partial p(t,x)}{\partial t} + \dfrac{\partial p(t,x)}{\partial x} = -(d_1(x) + d_2(x)P(t))\,p(t,x), & t \ge 0,\ 0 \le x < A,\\[4pt]
t = 0:\quad p(0,x) = p_0(x), & 0 \le x < A,\\[4pt]
x = 0:\quad p(t,0) = \displaystyle\int_a^A [b_1(\xi) - b_2(\xi)P(t)]\,p(t,\xi)\,d\xi, & t \ge 0,\\[4pt]
P(t) = \displaystyle\int_0^A p(t,x)\,dx, & t \ge 0,
\end{cases}
\]
where $t$ and $x$ denote time and age, respectively, $P(t)$ denotes the total population at time $t$, and $p(t,x)$ is the age-specific density of individuals of age $x$ at time $t$, which means that $\int_a^{a+\Delta a} p(t,x)\,dx$ gives the number of individuals whose age lies between $a$ and $a+\Delta a$ at time $t$. Here $d_1(x)$ is the natural death rate (without competition), $d_2(x)P(t)$ is the increase of the death rate due to competition, $b_1(x)$ is the natural fertility (without competition), $b_2(x)P(t)$ is the decrease of fertility due to competition, $a$ denotes the lowest age at which an individual can bear offspring, and $A$ is the maximum age that an individual of the population may reach.
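Before turning to the reproducing-kernel machinery developed in this paper, it may help to see model (I) acted out numerically. The sketch below is ours, not the method proposed here: it advances (I) with a first-order upwind step in age and an explicit Euler step in time; the grid, the quadrature rule, and the name `a_idx` (the grid index of the lowest bearing age $a$) are illustrative assumptions.

```python
import numpy as np

def integrate(f, x):
    # trapezoidal rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x) / 2.0))

def step_model_I(p, x, a_idx, d1, d2, b1, b2, dt):
    """One explicit upwind step for model (I) (illustration only; needs dt <= dx)."""
    dx = x[1] - x[0]
    P = integrate(p, x)                      # total population P(t)
    new_p = p.copy()
    # transport in age plus the death term (d1(x) + d2(x) P(t)) p
    new_p[1:] = p[1:] - dt / dx * (p[1:] - p[:-1]) - dt * (d1[1:] + d2[1:] * P) * p[1:]
    # renewal (birth) condition at age 0
    new_p[0] = integrate(((b1 - b2 * P) * p)[a_idx:], x[a_idx:])
    return new_p
```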





In general, model (I) can be written as

(1.1)
\[
\begin{cases}
\dfrac{\partial p(t,x)}{\partial t} + \dfrac{\partial p(t,x)}{\partial x} = -\mu(x, I_\mu(t), t)\,p(t,x), & t \ge 0,\ 0 \le x < A,\\[4pt]
t = 0:\quad p(0,x) = p_0(x), & 0 \le x < A,\\[4pt]
x = 0:\quad p(t,0) = \displaystyle\int_0^A \beta(\xi, I_\beta(t), t)\,p(t,\xi)\,d\xi, & t \ge 0,
\end{cases}
\]
where $\beta$ and $\mu$ denote fertility and death rate, respectively, $I_\beta(t) = \int_0^A \gamma_\beta(x)p(t,x)\,dx$, $I_\mu(t) = \int_0^A \gamma_\mu(x)p(t,x)\,dx$, and $\gamma_\beta(x), \gamma_\mu(x)$ are weight functions. If we let $\gamma_\beta(x) \equiv \gamma_\mu(x) \equiv 1$, $\mu(x,z,t) = d_1(x) + d_2(x)z$ and $\beta(x,z,t) = b_1(x) - b_2(x)z$, then Eq. (1.1) becomes population model (I).

In 1995, for the case $\gamma_\beta(x) = \gamma_\mu(x) \equiv 1$, Abia and López-Marcos proposed a difference scheme based on the Runge–Kutta method for solving Eq. (1.1) (see [1]). In 1997, for the case $\gamma_\beta(x) = \gamma_\mu(x) \equiv 1$ with $\beta, \mu$ independent of $t$, Iannelli and Kim introduced a spline algorithm for solving Eq. (1.1) (see [7]). Furthermore, in 1999, Abia and López-Marcos formulated a second-order finite difference scheme for solving Eq. (1.1) (see [2]).

In this paper, a representation of the exact solution of the age-structured population model (I) is given. Based on this, we propose an effective numerical algorithm for computing an approximate solution of population model (I), and the effectiveness of our method is shown by a numerical experiment.

First, the definitions of several reproducing kernel spaces needed in this paper are introduced. In this section we assume $\Omega = [a,b] \times [c,d]$. The reproducing kernel space $W_2^1[a,b]$ is defined [4] by

(1.2)
\[
W_2^1[a,b] \overset{\text{def}}{=} \{u(x) \mid u(x) \text{ is an absolutely continuous real-valued function on } [a,b] \text{ and } u'(x) \in L^2[a,b]\},
\]

and the inner product and the norm of $W_2^1[a,b]$ are given by
\[
(u,v)_{W_2^1[a,b]} = \int_a^b \bigl(u(x)v(x) + u'(x)v'(x)\bigr)\,dx \quad \forall u,v \in W_2^1[a,b],
\]
\[
\|u\|_{W_2^1[a,b]} = \sqrt{(u,u)_{W_2^1[a,b]}} \quad \forall u \in W_2^1[a,b].
\]
It is proved [4] that $W_2^1[a,b]$ is a Hilbert space with the reproducing kernel

(1.3)
\[
R_\xi^{ab}(x) = \frac{1}{2\sinh(b-a)}\bigl[\cosh(x+\xi-a-b) + \cosh(|x-\xi|-b+a)\bigr].
\]
Namely, for each $u \in W_2^1[a,b]$ and a fixed $\xi \in [a,b]$, it holds that $(u(x), R_\xi^{ab}(x))_{x,\,W_2^1[a,b]} = u(\xi)$.
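As a quick sanity check of (1.3) (ours, not part of the paper; the grid, the test function, and the finite-difference derivatives are crude numerical stand-ins for the exact inner product), one can verify the reproducing property $(u, R_\xi^{ab})_{W_2^1[a,b]} = u(\xi)$ approximately:

```python
import numpy as np

a, b = 0.0, 1.0

def R(x, xi):
    # reproducing kernel (1.3) of W_2^1[a, b]
    return (np.cosh(x + xi - a - b) + np.cosh(np.abs(x - xi) - b + a)) / (2.0 * np.sinh(b - a))

def inner_W21(u, v, x):
    # (u, v) = \int_a^b (u v + u' v') dx, approximated on the grid x
    du, dv = np.gradient(u, x), np.gradient(v, x)
    f = u * v + du * dv
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x) / 2.0))

x = np.linspace(a, b, 4001)
u = np.sin(2.0 * x) + x ** 2          # arbitrary smooth test function
xi = 0.3
print(inner_W21(u, R(x, xi), x))      # should be close to u(xi)
print(np.sin(2.0 * xi) + xi ** 2)
```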

The two-variable reproducing kernel space can be defined by [10]

(1.4)
\[
W(\Omega) \overset{\text{def}}{=} \{u(x,y) \mid u(x,y) \text{ is a two-variable absolutely continuous real-valued function on } \Omega \text{ and } u_x, u_y, u_{xy} \in L^2(\Omega)\},
\]
endowed with the inner product and the norm
\[
(u,v)_W = \iint_\Omega (uv + u_x v_x + u_y v_y + u_{xy} v_{xy})\,d\Omega \quad \forall u,v \in W(\Omega),
\]
\[
\|u\|_W = \sqrt{(u,u)_W} \quad \forall u \in W(\Omega).
\]
It is proved [10] that $W(\Omega)$ is a Hilbert space with the reproducing kernel $R_s^{ab}(x)R_z^{cd}(y)$, where $(s,z), (x,y) \in \Omega$ and $R_s^{ab}(x)$, $R_z^{cd}(y)$ are the reproducing kernels of $W_2^1[a,b]$ and $W_2^1[c,d]$, respectively. On the other hand, Ref. [3] points


out that $W_2^1[a,b] \otimes W_2^1[c,d]$ is a reproducing kernel space with the reproducing kernel $R_s^{ab}(x)R_z^{cd}(y)$. Therefore

(1.5)
\[
W_2^1[a,b] \otimes W_2^1[c,d] = W(\Omega)
\]
and

(1) For each $u \in W_2^1[a,b]$, $v \in W_2^1[c,d]$, it holds that $u(x)v(y) \in W(\Omega)$.
(2) For each $u_1, v_1 \in W_2^1[a,b]$, $u_2, v_2 \in W_2^1[c,d]$, it holds that $(u_1(x)u_2(y), v_1(x)v_2(y))_W = (u_1, v_1)_{W_2^1[a,b]}(u_2, v_2)_{W_2^1[c,d]}$.

Similar to $W_2^1[a,b]$, we can prove that

(1.6)
\[
W_{2,0}^2[a,b] \overset{\text{def}}{=} \{u(x) \mid u(a) = 0,\ u'(x) \text{ is an absolutely continuous real-valued function on } [a,b] \text{ and } u''(x) \in L^2[a,b]\},
\]

(1.7)
\[
W_2^2[c,d] \overset{\text{def}}{=} \{u(x) \mid u'(x) \text{ is an absolutely continuous real-valued function on } [c,d] \text{ and } u''(x) \in L^2[c,d]\}
\]
are both reproducing kernel spaces, with reproducing kernels $\bar{K}_\xi^{ab}(x)$ and $K_\eta^{cd}(y)$, respectively, where $\xi, x \in [a,b]$ and $\eta, y \in [c,d]$ (see the Appendix for the expressions of $\bar{K}_\xi^{ab}(x)$ and $K_\eta^{cd}(y)$). The inner products and the norms of $W_{2,0}^2[a,b]$ and $W_2^2[c,d]$ are, respectively, defined by
\[
(u,v)_{W_{2,0}^2[a,b]} = \int_a^b \bigl(u(x)v(x) + 2u'(x)v'(x) + u''(x)v''(x)\bigr)\,dx \quad \forall u,v \in W_{2,0}^2[a,b],
\]
\[
\|u\|_{W_{2,0}^2[a,b]} = \sqrt{(u,u)_{W_{2,0}^2[a,b]}} \quad \forall u \in W_{2,0}^2[a,b],
\]
\[
(u,v)_{W_2^2[c,d]} = \int_c^d \bigl(u(x)v(x) + 2u'(x)v'(x) + u''(x)v''(x)\bigr)\,dx \quad \forall u,v \in W_2^2[c,d],
\]
\[
\|u\|_{W_2^2[c,d]} = \sqrt{(u,u)_{W_2^2[c,d]}} \quad \forall u \in W_2^2[c,d].
\]

Similar to $W(\Omega)$, we can define the space $W_0(\Omega)$:

(1.8)
\[
W_0(\Omega) \overset{\text{def}}{=} \{u(x,y) \mid u(a,y) = 0 \text{ for } c \le y \le d;\ u_x(x,y), u_y(x,y), u_{xy}(x,y) \text{ are all two-variable absolutely continuous real-valued functions and } u_{xx}, u_{yy}, u_{xxy}, u_{xyy}, u_{xxyy} \in L^2(\Omega)\}.
\]
It is a reproducing kernel space. The inner product and the norm are defined by
\[
(u,v)_{W_0} = \iint_\Omega \bigl(uv + 2(u_x v_x + u_y v_y) + (u_{xx}v_{xx} + u_{yy}v_{yy}) + 2(u_{xxy}v_{xxy} + u_{xyy}v_{xyy}) + u_{xxyy}v_{xxyy}\bigr)\,dx\,dy \quad \forall u,v \in W_0(\Omega),
\]
\[
\|u\|_{W_0} = \sqrt{(u,u)_{W_0}} \quad \forall u \in W_0(\Omega),
\]
and the expression of its reproducing kernel is $K_{M'}(M) = \bar{K}_\xi^{ab}(x) \cdot K_\eta^{cd}(y)$, where $M = (x,y)$, $M' = (\xi,\eta) \in \Omega$ and $\bar{K}_\xi^{ab}(x)$, $K_\eta^{cd}(y)$ are the reproducing kernels of $W_{2,0}^2[a,b]$ and $W_2^2[c,d]$, respectively. Therefore

(1.9)
\[
W_0(\Omega) = W_{2,0}^2[a,b] \otimes W_2^2[c,d].
\]

We need several properties of two-variable absolutely continuous functions.


Set

(1.10)
\[
W_1(\Omega) = \{u(x,y) \mid u(x,y) \text{ is a two-variable absolutely continuous real-valued function defined on } \Omega\};
\]
the following properties hold.

Property 1. Suppose $u \in W_1(\Omega)$. Then for every fixed $t \in [a,b]$, as a function of $x$, $u(t,x)$ is absolutely continuous on the interval $[c,d]$. Similarly, for every fixed $x \in [c,d]$, as a function of $t$, $u(t,x)$ is absolutely continuous on the interval $[a,b]$.

Property 2. Assume that the real-valued function $p(t,x)$ is continuous on the domain $\Omega$. Then:
(1) if for each $t \in [a,b]$, as a function of $x$, $p(t,x)$ is absolutely continuous on $[c,d]$ and $p_x(t,x) \in L^2(\Omega)$, then $q_1(t,x) = \int_a^t p(t_1,x)\,dt_1 \in W_1(\Omega)$;
(2) if for each $x \in [c,d]$, as a function of $t$, $p(t,x)$ is absolutely continuous on $[a,b]$ and $p_t(t,x) \in L^2(\Omega)$, then $q_2(t,x) = \int_c^x p(t,x_1)\,dx_1 \in W_1(\Omega)$.

Property 3. Suppose that, as a function of $t$, $p(t,x)$ is absolutely continuous on $[a,b]$ and $\partial p(t,x)/\partial t \in L^2(\Omega)$. Let $\alpha, \beta \in [c,d]$ with $\alpha < \beta$. Then the real-valued function $q(t) = \int_\alpha^\beta p(t,x)\,dx$ is absolutely continuous on $[a,b]$.

Property 4. Suppose that $p(t,x) \in W_1(\Omega)$, $(\partial^2/\partial t\,\partial x)p(t,x), (\partial/\partial x)p(t,x) \in L^2(\Omega)$ and $d \in C^{(1)}[c,d]$. Then $d(x)p(t,x) \in W_1(\Omega)$.

2. Solving the population model can be turned into solving operator equation (IV)

In this paper, we assume that:

(H1) The zero-order compatibility condition is valid, that is,

(2.1)
\[
p_0(0) = \int_a^A [b_1(\xi) - b_2(\xi)P(0)]\,p_0(\xi)\,d\xi.
\]

(H2) The first-order compatibility condition holds, that is,
\[
p_0'(0) + \lambda(0,0)p_0(0) = \int_a^A [b_1(\xi) - b_2(\xi)P(0)] \cdot \Bigl(\frac{\partial p(0,\xi)}{\partial \xi} + \lambda(0,\xi)p_0(\xi)\Bigr)\,d\xi + \int_a^A b_2(\xi)p_0(\xi)\,d\xi \cdot P'(t-x)\big|_{x=t},
\]
where $\lambda(t,x) = d_1(x) + d_2(x)P(t)$, $P'(t-x)|_{x=t} = p_0(0) - \int_0^A \lambda(0,\xi)p_0(\xi)\,d\xi$, and $P(0) = \int_0^A p_0(x)\,dx = C$ (a constant).

(H3) On the interval $[0,A]$, $p_0(x)$ is properly smooth with $0 \le p_0(x) \le \gamma$, and $p_0(x) \to 0 = p_0(A)$ as $x \to A-0$.

(H4) On the interval $[0,A]$, $d_1(x) \ge 0$ and $0 \le d_2(x) \le \delta$ are properly smooth, and it holds that $d_1(x) \to +\infty$ and $\int_0^x d_1(\xi)\,d\xi \to \infty$ as $x \to A-0$.

(H5) On the interval $[0,A]$, $0 \le b_1(x) \le C_1$ and $0 \le b_2(x) \le C_2$ are properly smooth.

(H6) There exists $B$ with $A-1 \le B < A$ such that $d_1, d_2 \in C^{(1)}[0,B]$, where $C^{(1)}[0,B]$ denotes the set of all functions with a continuous first-order derivative.

In the above assumptions, $\gamma$, $\delta$, $C_1$, $C_2$ are all positive constants. If assumptions (H1)–(H5) hold, the author of [6] has proved a theorem on the existence and uniqueness of the global solution of Eq. (I). From the process of his proof we know that Eq. (I) still has a unique global solution when we restrict $t \in [0,n]$, where $n \in R^+$.


In this paper, due to the needs of our method, we also assume that (H6) is valid. Furthermore, we assume that under assumptions (H1)–(H6),

(II)
\[
\begin{cases}
\dfrac{\partial p(t,x)}{\partial t} + \dfrac{\partial p(t,x)}{\partial x} = -(d_1(x) + d_2(x)P(t))\,p(t,x), & 0 \le t \le n,\ 0 \le x < B,\\[4pt]
t = 0:\quad p(0,x) = p_0(x), & 0 \le x < B,\\[4pt]
x = 0:\quad p(t,0) = \displaystyle\int_a^B [b_1(\xi) - b_2(\xi)P(t)]\,p(t,\xi)\,d\xi, & 0 \le t \le n,\\[4pt]
P(t) = \displaystyle\int_0^B p(t,x)\,dx
\end{cases}
\]
still has a unique global solution, where $n \in R^+$. In the remainder of this section, we consider how to solve Eq. (II).

2.1. Solving Eq. (II) can be turned into solving Eq. (IV)

Let $p_1(t,x) = p(t,x) - p_0(x)$. Write

(2.2)
\[
P_1(t) = \int_0^B p_1(t,x)\,dx, \qquad \theta_0 = \int_0^B p_0(x)\,dx.
\]

Then Eq. (II) can be equivalently turned into

(III)

(2.3)
\[
\frac{\partial p_1}{\partial t} + \frac{\partial p_1}{\partial x} = -(d_1(x) + d_2(x)P(t))\,(p_1 + p_0(x)) - p_0'(x), \quad 0 \le t \le n,\ 0 \le x < B,
\]

(2.4)
\[
t = 0:\quad p_1(0,x) = 0, \quad 0 \le x < B,
\]

(2.5)
\[
x = 0:\quad p_1(t,0) = -p_0(0) + \int_a^B (b_1(\xi) - b_2(\xi)P(t)) \cdot (p_1(t,\xi) + p_0(\xi))\,d\xi, \quad 0 \le t \le n,
\]

(2.6)
\[
P(t) = \int_0^B p(t,x)\,dx.
\]

Integrating both sides of (2.3) with respect to $x$ from $0$ to $x_1$, we get

(2.7)
\[
\int_0^{x_1} \frac{\partial p_1(t,x)}{\partial t}\,dx + p_1(t,x_1) - p_1(t,0) = -\int_0^{x_1} (d_1(x) + d_2(x)P(t))\,(p_1 + p_0(x))\,dx - p_0(x_1) + p_0(0).
\]

Put

(2.8)
\[
b(x) = b_1(x) - \theta_0 b_2(x), \qquad \theta_1 = \int_a^B b(x)p_0(x)\,dx, \qquad \theta_2 = \int_a^B b_2(x)p_0(x)\,dx;
\]
then by (2.5), we have

(2.9)
\[
\begin{aligned}
p_1(t,0) &= -p_0(0) + \int_a^B [b_1(\xi) - b_2(\xi)(P_1(t)+\theta_0)] \cdot (p_1(t,\xi) + p_0(\xi))\,d\xi \\
&= -p_0(0) + \int_a^B [b(\xi) - b_2(\xi)P_1(t)] \cdot (p_1(t,\xi) + p_0(\xi))\,d\xi \\
&= -p_0(0) + \int_a^B b(\xi)p_1(t,\xi)\,d\xi + \int_a^B b(\xi)p_0(\xi)\,d\xi - P_1(t)\int_a^B b_2(\xi)p_0(\xi)\,d\xi - P_1(t)\int_a^B b_2(\xi)p_1(t,\xi)\,d\xi \\
&= -p_0(0) + \theta_1 + \int_a^B b(\xi)p_1(t,\xi)\,d\xi - \theta_2 P_1(t) - P_1(t)\int_a^B b_2(\xi)p_1(t,\xi)\,d\xi.
\end{aligned}
\]


Set

(2.10)
\[
d(x) = d_1(x) + \theta_0 d_2(x), \qquad f_1(x_1) = \int_0^{x_1} d(x)p_0(x)\,dx;
\]
then

(2.11)
\[
\begin{aligned}
\int_0^{x_1} [d_1(x) + d_2(x)P(t)]\,(p_1 + p_0(x))\,dx
&= \int_0^{x_1} [d_1(x) + d_2(x)(P_1(t)+\theta_0)]\,(p_1 + p_0(x))\,dx \\
&= \int_0^{x_1} [d(x) + d_2(x)P_1(t)]\,(p_1 + p_0(x))\,dx \\
&= f_1(x_1) + \int_0^{x_1} d(x)p_1(t,x)\,dx + P_1(t)\int_0^{x_1} d_2(x)p_1(t,x)\,dx + P_1(t)\int_0^{x_1} d_2(x)p_0(x)\,dx.
\end{aligned}
\]
Substituting (2.9) and (2.11) into (2.7) and rearranging, we obtain

(2.12)
\[
\int_0^{x_1} \frac{\partial p_1(t,x)}{\partial t}\,dx
+ P_1(t)\Bigl(\int_a^B b_2(\xi)p_1(t,\xi)\,d\xi + \int_0^{x_1} d_2(x)p_1(t,x)\,dx\Bigr)
+ p_1(t,x_1)
+ P_1(t)\Bigl(\theta_2 + \int_0^{x_1} d_2(x)p_0(x)\,dx\Bigr)
- \int_a^B b(\xi)p_1(t,\xi)\,d\xi
+ \int_0^{x_1} d(x)p_1(t,x)\,dx
= \theta_1 - p_0(x_1) - f_1(x_1),
\]

where $P_1(t)$, $\theta_0$, $\theta_1$, $\theta_2$, $b(x)$, $d(x)$, $f_1(x_1)$ are, respectively, defined in (2.2), (2.8), (2.10). Define the operators

(2.13)
\[
(Ap_1)(t,x_1) = \int_0^{x_1} \frac{\partial p_1(t,x)}{\partial t}\,dx + p_1(t,x_1),
\]

(2.14)
\[
(Bp_1)(t,x_1) = P_1(t) = \int_0^B p_1(t,x)\,dx,
\]

(2.15)
\[
(B_1 p_1)(t,x_1) = (Bp_1)(t,x_1) \cdot \Bigl(\theta_2 + \int_0^{x_1} d_2(x)p_0(x)\,dx\Bigr),
\]

(2.16)
\[
(C_1 p_1)(t,x_1) = \int_a^B b_2(\xi)p_1(t,\xi)\,d\xi,
\]

(2.17)
\[
(C_2 p_1)(t,x_1) = \int_0^{x_1} d_2(x)p_1(t,x)\,dx,
\]

(2.18)
\[
(Cp_1)(t,x_1) = (C_1 p_1)(t,x_1) + (C_2 p_1)(t,x_1),
\]

(2.19)
\[
(D_1 p_1)(t,x_1) = \int_a^B b(\xi)p_1(t,\xi)\,d\xi,
\]

(2.20)
\[
(D_2 p_1)(t,x_1) = \int_0^{x_1} d(x)p_1(t,x)\,dx,
\]

(2.21)
\[
(Dp_1)(t,x_1) = -(D_1 p_1)(t,x_1) + (D_2 p_1)(t,x_1),
\]

(2.22)
\[
f(t,x_1) = \theta_1 - f_1(x_1) - p_0(x_1),
\]


then (2.12) becomes

(2.23)
\[
Bp_1 \cdot Cp_1 + (A + B_1 + D)p_1 = f(t,x_1).
\]
Furthermore, let

(2.24)
\[
(Fp_1)(t,x_1) = (Ap_1)(t,x_1) + (B_1 p_1)(t,x_1) + (Dp_1)(t,x_1);
\]
then (2.12) can be written as

(2.25)
\[
Bp_1 \cdot Cp_1 + Fp_1 = f(t,x_1).
\]
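To see how the operators (2.13)–(2.22) act in practice, the following sketch (ours, not from the paper; the grid, the trapezoidal quadrature, and the sample data $p_0, d_1, d_2, b_1, b_2$ are illustrative assumptions) evaluates the residual of (2.25) for a gridded function $p_1(t,x)$ and could be used to check a candidate solution.

```python
import numpy as np

def trapz(f, x):
    # simple trapezoidal rule, kept explicit for clarity
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x) / 2.0))

def residual_25(p1, t_grid, x_grid, p0, d1, d2, b1, b2, a_idx):
    """Evaluate Bp1*Cp1 + Fp1 - f on the grid, following (2.13)-(2.25).

    p1 has shape (len(t_grid), len(x_grid)); a_idx is the index of the lowest
    bearing age a in x_grid.  All quadratures are trapezoidal (an assumption)."""
    theta0 = trapz(p0, x_grid)
    b = b1 - theta0 * b2                       # (2.8)
    d = d1 + theta0 * d2                       # (2.10)
    theta1 = trapz((b * p0)[a_idx:], x_grid[a_idx:])
    theta2 = trapz((b2 * p0)[a_idx:], x_grid[a_idx:])
    p1_t = np.gradient(p1, t_grid, axis=0)     # approximate dp1/dt
    res = np.empty_like(p1)
    for i, t in enumerate(t_grid):
        P1 = trapz(p1[i], x_grid)              # (2.14)
        for j, x1 in enumerate(x_grid):
            A  = trapz(p1_t[i, :j + 1], x_grid[:j + 1]) + p1[i, j]          # (2.13)
            B1 = P1 * (theta2 + trapz((d2 * p0)[:j + 1], x_grid[:j + 1]))   # (2.15)
            C  = trapz((b2 * p1[i])[a_idx:], x_grid[a_idx:]) \
                 + trapz((d2 * p1[i])[:j + 1], x_grid[:j + 1])              # (2.16)-(2.18)
            D  = -trapz((b * p1[i])[a_idx:], x_grid[a_idx:]) \
                 + trapz((d * p1[i])[:j + 1], x_grid[:j + 1])               # (2.19)-(2.21)
            f1 = trapz((d * p0)[:j + 1], x_grid[:j + 1])                    # (2.10)
            f  = theta1 - f1 - p0[j]                                        # (2.22)
            res[i, j] = P1 * C + (A + B1 + D) - f                           # (2.25)
    return res
```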

Theorem 2.1. Eq. (II) is equivalent to

(IV)

(2.26)
\[
Bp_1 \cdot Cp_1 + Fp_1 = f(t,x_1), \quad 0 \le t \le n,\ 0 \le x_1 \le B,
\]

(2.27)
\[
p_1(0,x_1) = 0, \quad 0 \le x_1 \le B.
\]

Proof. Since Eq. (II) is equivalent to Eq. (III), it is enough to prove that Eq. (III) is equivalent to Eq. (IV). From the above reasoning it follows that (III) ⇒ (IV). Next we prove (IV) ⇒ (III). In fact, Eq. (2.26) is another form of Eq. (2.12). Differentiating both sides of (2.12) with respect to $x_1$, we get (2.3). Subsequently, integrating both sides of (2.3) with respect to $x$ from $0$ to $x_1$ gives (2.7). Comparing (2.7) with (2.12), one gets (2.5). Hence, (2.3) and (2.5) are deduced from (2.26). This completes (IV) ⇒ (III). □

2.2. The boundedness of the operators

In this section, we prove that the operators $A, B, B_1, C_1, C_2, C, D_1, D_2, D$ defined in (2.13)–(2.21) are bounded linear operators from $W_0(\Omega)$ to $W(\Omega)$. Using the Hölder inequality, we can easily obtain the following lemma.

Lemma 2.1. Suppose that $p(t,x) \in L^2(\Omega)$; then
\[
q(t,x_1) = \int_0^{x_1} p(t,x)\,dx \in L^2(\Omega).
\]

Theorem 2.2. The operator $A$ defined in (2.13) is a bounded linear operator from $W_0(\Omega)$ to $W(\Omega)$.

Proof. We first prove that if $p \in W_0(\Omega)$, then $Ap \in W(\Omega)$. Since $p \in W_0(\Omega) \subset W(\Omega)$, it is enough to prove that $q(t,x_1) = \int_0^{x_1} (\partial p(t,x)/\partial t)\,dx \in W(\Omega)$. In fact, from $p \in W_0(\Omega)$ it follows that $\partial p(t,x)/\partial t$ is absolutely continuous on the domain $\Omega$ and $\partial^2 p(t,x)/\partial t^2 \in L^2(\Omega)$. Using Properties 1 and 2 we get that $q(t,x_1)$ is absolutely continuous on $\Omega$. On the other hand, because $\partial q(t,x_1)/\partial x_1 = \partial p(t,x_1)/\partial t \in L^2(\Omega)$, $\partial q(t,x_1)/\partial t = \int_0^{x_1} (\partial^2 p(t,x)/\partial t^2)\,dx \in L^2(\Omega)$ (by Lemma 2.1), and $\partial^2 q(t,x_1)/\partial t\,\partial x_1 = \partial^2 p(t,x_1)/\partial t^2 \in L^2(\Omega)$, it holds that $q(t,x_1) \in W(\Omega)$.

Next, we prove that $A$ defined by (2.13) is a bounded linear operator from $W_0(\Omega)$ to $W(\Omega)$. Obviously, $A$ is linear. Below, we show that $A$ is bounded. Denote by $I$ the embedding operator $W_0(\Omega) \to W(\Omega)$ and define the operator $A_1$ as follows:

(2.28)
\[
(A_1 p)(t,x_1) = \int_0^{x_1} \frac{\partial p(t,x)}{\partial t}\,dx \quad \forall p \in W_0(\Omega);
\]
then $A = I + A_1$. The boundedness of $I$ follows from

(2.29)
\[
\|Ip\|_W = \|p\|_W \le \|p\|_{W_0} \quad \forall p \in W_0(\Omega).
\]


The boundedness of $A_1$ can be proved as follows. For any $p \in W_0(\Omega)$, the Hölder inequality gives
\[
(A_1 p)^2(t,x_1) = \Bigl(\int_0^{x_1} p_t(t,x)\,dx\Bigr)^2 \le \int_0^{x_1} 1^2\,dx \cdot \int_0^{x_1} (p_t(t,x))^2\,dx \le B \int_0^B (p_t(t,x))^2\,dx;
\]
therefore,

(2.30)
\[
\int_0^n \int_0^B (A_1 p)^2(t,x_1)\,dx_1\,dt \le B \int_0^n \int_0^B \int_0^B (p_t(t,x))^2\,dx\,dt\,dx_1 \le B^2 \int_0^n \int_0^B (p_t(t,x))^2\,dx\,dt \le B^2 \|p\|_{W_0}^2.
\]
Similarly,
\[
\iint_\Omega \Bigl(\frac{\partial}{\partial t}(A_1 p)(t,x_1)\Bigr)^2 d\Omega \le B^2 \|p\|_{W_0}^2, \qquad
\iint_\Omega \Bigl(\frac{\partial}{\partial x_1}(A_1 p)(t,x_1)\Bigr)^2 d\Omega \le \|p\|_{W_0}^2, \qquad
\iint_\Omega \Bigl(\frac{\partial^2}{\partial t\,\partial x_1}(A_1 p)(t,x_1)\Bigr)^2 d\Omega \le \|p\|_{W_0}^2.
\]
So

(2.31)
\[
\|A_1 p\|_W^2 = \iint_\Omega \Bigl[(A_1 p)^2 + \Bigl(\frac{\partial A_1 p}{\partial x_1}\Bigr)^2 + \Bigl(\frac{\partial A_1 p}{\partial t}\Bigr)^2 + \Bigl(\frac{\partial^2 A_1 p}{\partial t\,\partial x_1}\Bigr)^2\Bigr] d\Omega \le 2(B^2 + 1)\|p\|_{W_0}^2.
\]
This finishes the proof. □

By Properties 3 and 4, it can be proved similarly that $C_2$, $D_2$ are both bounded linear operators from $W_0(\Omega)$ to $W(\Omega)$. Imitating the proof of Theorem 2.2, one can also prove the following two theorems.

Theorem 2.3. Suppose that $b \in C[0,B]$, $\alpha, \beta \in [0,B]$, $\alpha < \beta$. Define the operator

(2.32)
\[
(Kp)(t,x) = \int_\alpha^\beta b(\xi)p(t,\xi)\,d\xi \quad \forall p \in W_0(\Omega);
\]
then $K$ is a bounded linear operator from $W_0(\Omega)$ to $W(\Omega)$.

Corollary 2.1. The operators $B$, $C_1$, $D_1$ defined in (2.14), (2.16), and (2.19) are all bounded linear operators from $W_0(\Omega)$ to $W(\Omega)$.

Theorem 2.4. The operator $B_1$ defined in (2.15) is a bounded linear operator from $W_0(\Omega)$ to $W(\Omega)$.

3. The exact solution of Eq. (IV)

From (2.22) it follows that $f(t,x_1)$ can be regarded as a function in $W(\Omega)$. By the definition of $W_0(\Omega)$, we know that solving Eq. (IV) is equivalent to solving the operator equation

(3.1)
\[
Bp_1 \cdot Cp_1 + Fp_1 = f(t,x_1), \quad p_1 \in W_0(\Omega),\ f(t,x_1) \in W(\Omega),
\]
where $\Omega = [0,n] \times [0,B]$, $n, B \in R^+$; $B, C, F$, defined in (2.14), (2.18), (2.24), respectively, are bounded linear operators from $W_0(\Omega)$ to $W(\Omega)$ (as proved in Section 2.2), and Eq. (3.1) has a unique solution (see Ref. [6]).


For the convenience of discussion, we introduce the following definition.

Definition 3.1. Let $M, M' \in \Omega$. We call a function $u(M,M')$ separable if there exists a function $v(M)$ such that $u(M,M') = v(M)v(M')$.

3.1. Solving Eq. (3.1) can be turned into finding the separable solution of Eq. (3.6)

Assume that the value of the exact solution $p_1(t,x_1)$ of Eq. (3.1) at some point $M_0 \in \Omega$ is known, that is,

(3.2)
\[
p_1(M_0) = C_0 \ne 0.
\]

Let $M, M', V, V', W, W' \in \Omega = [0,n] \times [0,B]$, $M = (x,y)$, $M' = (\xi,\eta)$. We need the following reproducing kernel spaces: $W_2^1[0,n]$, $W_2^1[0,B]$, $W_{2,0}^2[0,n]$, $W_2^2[0,B]$, defined by (1.2), (1.6), (1.7), respectively, are one-dimensional reproducing kernel spaces with reproducing kernels $R_x^{0n}(\xi)$, $R_y^{0B}(\xi)$, $\bar{K}_x^{0n}(\xi)$, $K_y^{0B}(\xi)$, respectively. $W(\Omega)$ $(= W_2^1[0,n] \otimes W_2^1[0,B])$ and $W_0(\Omega)$ $(= W_{2,0}^2[0,n] \otimes W_2^2[0,B])$, defined in (1.4) and (1.8), are both two-dimensional reproducing kernel spaces with reproducing kernels $R_M^{nB}(M')$ and $K_M^{nB}(M')$, respectively, where $R_M^{nB}(M') = R_x^{0n}(\xi)R_y^{0B}(\eta)$ and $K_M^{nB}(M') = \bar{K}_x^{0n}(\xi)K_y^{0B}(\eta)$. Define the operator $L : W_0(\Omega^2) = W_0(\Omega) \otimes W_0(\Omega) \to W(\Omega)$ as follows:

(3.3)
\[
(Lu)(M) = (Ku)(M) + (Hu)(M) \quad \forall u \in W_0(\Omega^2),
\]
where

(3.4)
\[
(Ku)(M) = \bigl(u(V,V'),\ [B_W^* R_M^{nB}(W)](V) \cdot [C_{W'}^* R_M^{nB}(W')](V')\bigr)_{(V,V'),\,W_0(\Omega^2)} \quad \forall u \in W_0(\Omega^2),
\]

(3.5)
\[
(Hu)(M) = \Bigl(u(V,V'),\ \frac{1}{C_0} K_{M_0}^{nB}(V) \cdot [F_{W'}^* R_M^{nB}(W')](V')\Bigr)_{(V,V'),\,W_0(\Omega^2)} \quad \forall u \in W_0(\Omega^2),
\]
where $B^*, C^*, F^*$ are the conjugate operators corresponding to $B, C, F$, respectively, and the expression $[B_W^* R_M^{nB}(W)](V)$ indicates that the operator $B^*$ is applied to $R_M^{nB}(W)$ as a function of $W$ and that the result is a function of $V$. Next, we give the main theorem of this subsection.

Theorem 3.1. Suppose that $M', M'' \in \Omega$ and $p_1(M_0) = C_0 \ne 0$. Then $p_1(M)$ is a solution of Eq. (3.1) if and only if $p_1(M')p_1(M'')$ is a solution of the operator equation

(3.6)
\[
Lu = f, \quad u \in W_0(\Omega^2),\ f \in W(\Omega).
\]

Proof. From (3.3)–(3.5), $p_1(M_0) = C_0$, and the equality $(u, B^*v) = (Bu, v)$, it follows that

(3.7)
\[
[L\,p_1(M')p_1(M'')](M) = (Bp_1)(M) \cdot (Cp_1)(M) + (Fp_1)(M).
\]
Necessity: If $p_1(M)$ is a solution of (3.1), then

(3.8)
\[
(Bp_1)(M) \cdot (Cp_1)(M) + (Fp_1)(M) = f(M).
\]
By (3.7) and (3.8), it follows that $p_1(M')p_1(M'')$ is a solution of Eq. (3.6).
Sufficiency: By (3.6) and (3.7), (3.8) is obtained. This means that $p_1(M)$ is a solution of Eq. (3.1). □

From Theorem 3.1, solving Eq. (3.1) is equivalent to finding the separable solution of the linear operator equation (3.6) which satisfies $p_1(M_0) = C_0$ (this solution is unique since Eq. (3.1) has a unique solution). So in the following, we first give the representation of all solutions of the linear operator equation $Lu = f$ and then find the separable solution among them.


3.2. The analytic representation of all solutions of Lu = f

Similarly to Ref. [9], we can prove that $L$ is a bounded linear operator from $W_0(\Omega^2) = W_0(\Omega) \otimes W_0(\Omega)$ to $W(\Omega)$. Refs. [9,5] give the structure of the solution space of $Lu = f$ as $\{u_0 + v \mid v \in (L^* W(\Omega))^\perp\}$. Using the equality $(L^* W(\Omega))^\perp = N(L) = \{v \mid Lv = 0,\ v \in W_0(\Omega^2)\}$, we rephrase the result of Refs. [9,5] as follows.

Theorem 3.2. Let $\{M_i\}_{i=1}^\infty$ be a dense subset of $\Omega$. For every $i = 1, 2, \ldots$, put $\varphi_i(M) = R_{M_i}^{nB}(M)$, $\psi_i(M',M'') = (L^* \varphi_i)(M',M'')$, where $L^*$ denotes the conjugate operator of $L$. Let $\{\tilde{\psi}_i\}_{i=1}^\infty$ be the complete orthonormal system obtained from $\{\psi_i\}_{i=1}^\infty$ by the Gram–Schmidt process, namely,

(3.9)
\[
\tilde{\psi}_i(M',M'') = \sum_{k=1}^{i} \beta_{ik}\,\psi_k(M',M''), \quad \beta_{ii} > 0,\ i = 1, 2, \ldots;
\]
then

(3.10)
\[
u_0(M',M'') = \sum_{i=1}^{\infty} \sum_{k=1}^{i} \beta_{ik} f(M_k)\,\tilde{\psi}_i(M',M'')
\]
is the minimal norm solution of $Lu = f$, and the solution space of $Lu = f$ can be written as

(3.11)
\[
u_0 + N(L) = u_0 + \{v \mid Lv = 0,\ v \in W_0(\Omega^2)\}.
\]

Remark 3.1. In Refs. [9,5], the authors did not give a representation of $N(L)$ or $(L^* W(\Omega))^\perp$.

Remark 3.2. If we obtain a complete orthonormal system $\tilde{r}_1, \tilde{r}_2, \ldots, \tilde{r}_n, \ldots$ of the null space $N(L)$, then the solution set of $Lu = f$ can be written as

(3.12)
\[
\Bigl\{u_0 + \sum_{i=1}^{\infty} \alpha_i \tilde{r}_i \ \Big|\ \text{real sequence } \{\alpha_i\}_{i=1}^\infty \in l^2\Bigr\}.
\]

Note that $N(L) = (L^* W(\Omega))^\perp$, namely $N(L) \oplus L^* W(\Omega) = W_0(\Omega^2)$; hence if the sequence $r_1, r_2, \ldots, r_n, \ldots$ is a complete system of $W_0(\Omega^2)$, then the sequence $\psi_1, \psi_2, \ldots, \psi_n, \ldots, r_1, r_2, \ldots, r_m, \ldots$ is also a complete system of $W_0(\Omega^2)$. Denote by
\[
\tilde{\psi}_1, \tilde{\psi}_2, \ldots, \tilde{\psi}_n, \ldots, \tilde{r}_1, \tilde{r}_2, \ldots, \tilde{r}_m, \ldots
\]
the result obtained from it by the Gram–Schmidt process. Then $\{\tilde{\psi}_i\}_{i=1}^\infty$ is a complete orthonormal set of $L^* W(\Omega)$ and $\{\tilde{r}_i\}_{i=1}^\infty$ is a complete orthonormal set of $N(L)$. From the process of Gram–Schmidt orthonormalization, the following lemma holds obviously.

Lemma 3.1. Let $\{r_i\}_{i=1}^\infty$ be a complete system of $W_0(\Omega^2)$, let $\{v_i\}_{i=1}^\infty$ be a complete system of $L^* W(\Omega)$, and $v_1 \ne 0$. If the sequence

$\tilde{v}_1, \tilde{v}_2, \ldots, \tilde{v}_n, \ldots, \tilde{r}_1, \tilde{r}_2, \ldots, \tilde{r}_m, \ldots$ is the complete orthonormal system of $W_0(\Omega^2)$ obtained from $v_1, v_2, \ldots, v_n, \ldots, r_1, r_2, \ldots, r_m, \ldots$


by the Gram–Schmidt process, namely,

(3.13)
\[
\tilde{v}_1 = \frac{v_1}{\|v_1\|_{W_0(\Omega^2)}},
\]
\[
\tilde{v}_{k+1} = \frac{v_{k+1} - \sum_{i=1}^{k}(v_{k+1}, \tilde{v}_i)\tilde{v}_i}{\bigl\|v_{k+1} - \sum_{i=1}^{k}(v_{k+1}, \tilde{v}_i)\tilde{v}_i\bigr\|_{W_0(\Omega^2)}}, \quad k = 1, 2, \ldots,
\]
\[
\tilde{r}_1 = \frac{r_1 - \sum_{i=1}^{\infty}(r_1, \tilde{v}_i)\tilde{v}_i}{\bigl\|r_1 - \sum_{i=1}^{\infty}(r_1, \tilde{v}_i)\tilde{v}_i\bigr\|_{W_0(\Omega^2)}},
\]
\[
\tilde{r}_{k+1} = \frac{r_{k+1} - \sum_{i=1}^{\infty}(r_{k+1}, \tilde{v}_i)\tilde{v}_i - \sum_{i=1}^{k}(r_{k+1}, \tilde{r}_i)\tilde{r}_i}{\bigl\|r_{k+1} - \sum_{i=1}^{\infty}(r_{k+1}, \tilde{v}_i)\tilde{v}_i - \sum_{i=1}^{k}(r_{k+1}, \tilde{r}_i)\tilde{r}_i\bigr\|_{W_0(\Omega^2)}}, \quad k = 1, 2, \ldots
\]
(For the second formula of (3.13) we follow this rule: if for some positive integer $k$, $v_{k+1} - \sum_{i=1}^{k}(v_{k+1}, \tilde{v}_i)\tilde{v}_i = 0$, then we delete $v_{k+1}$ from the sequence $\tilde{v}_1, \ldots, \tilde{v}_k, v_{k+1}, v_{k+2}, \ldots, r_1, r_2, \ldots$, rewrite $v_{k+2}, v_{k+3}, \ldots, r_1, r_2, \ldots$ as the new sequence $v_{k+1}, v_{k+2}, \ldots, r_1, r_2, \ldots$, and then again compute $\tilde{v}_{k+1}$ by the second formula of (3.13) from this new sequence. We repeat the same process until $v_{k+1} - \sum_{i=1}^{k}(v_{k+1}, \tilde{v}_i)\tilde{v}_i \ne 0$. For the last two formulas of (3.13) we follow the same rule.) Then $\{\tilde{r}_i\}_{i=1}^\infty$ is a complete orthonormal system of $N(L)$.
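As an illustration of the orthonormalization (3.13) and of its deletion rule, here is a small finite-dimensional sketch (ours; plain NumPy vectors stand in for the functions $v_i, r_i$ and the Euclidean inner product stands in for $(\cdot,\cdot)_{W_0(\Omega^2)}$): vectors whose residual after projection is numerically zero are dropped, exactly as the rule prescribes.

```python
import numpy as np

def gram_schmidt_with_deletion(vecs, tol=1e-12):
    """Orthonormalize `vecs` in the given order, dropping any vector whose
    residual after projecting out the previously accepted ones is ~0
    (the deletion rule attached to (3.13))."""
    basis = []
    for v in vecs:
        w = v.astype(float).copy()
        for e in basis:
            w -= np.dot(w, e) * e        # subtract the projection (v, e)e
        norm = np.linalg.norm(w)
        if norm > tol:                   # keep only genuinely new directions
            basis.append(w / norm)
        # else: delete this vector and move on, as in the rule
    return basis

# toy usage: the third vector depends on the first two and is deleted
vs = [np.array([1.0, 0.0, 0.0]),
      np.array([1.0, 1.0, 0.0]),
      np.array([3.0, 2.0, 0.0]),
      np.array([0.0, 0.0, 2.0])]
print(len(gram_schmidt_with_deletion(vs)))   # -> 3
```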

Remark. Write
\[
\tilde{r}_1^{(n)} = \frac{r_1 - \sum_{i=1}^{n}(r_1, \tilde{v}_i)\tilde{v}_i}{\bigl\|r_1 - \sum_{i=1}^{n}(r_1, \tilde{v}_i)\tilde{v}_i\bigr\|},
\qquad
\tilde{r}_{k+1}^{(n)} = \frac{r_{k+1} - \sum_{i=1}^{n}(r_{k+1}, \tilde{v}_i)\tilde{v}_i - \sum_{i=1}^{k}(r_{k+1}, \tilde{r}_i^{(n)})\tilde{r}_i^{(n)}}{\bigl\|r_{k+1} - \sum_{i=1}^{n}(r_{k+1}, \tilde{v}_i)\tilde{v}_i - \sum_{i=1}^{k}(r_{k+1}, \tilde{r}_i^{(n)})\tilde{r}_i^{(n)}\bigr\|}, \quad k = 1, 2, \ldots;
\]
then it is not difficult to prove that for every $i = 1, 2, \ldots$ we have
\[
\|\tilde{r}_i^{(n)} - \tilde{r}_i\|_{W_0(\Omega^2)} \to 0 \quad \text{as } n \to \infty.
\]

Theorem 3.3. Let $\{M_i\}_{i=1}^\infty$ be a dense subset of $\Omega$. In Lemma 3.1, take $v_i = \psi_i$ and $\{r_i\}_{i=1}^\infty = \{K_{M_i}(M') \cdot K_{M_j}(M'')\}_{i,j=1}^\infty$, namely, arrange $\{K_{M_i}(M') \cdot K_{M_j}(M'')\}_{i,j=1}^\infty$ in a proper order and regard it as $\{r_i\}_{i=1}^\infty$. Then $\{\tilde{r}_i\}_{i=1}^\infty$ defined in (3.13) is a complete orthonormal set of $N(L)$, where $K_M(M')$ denotes the reproducing kernel of $W_0(\Omega)$.

Proof. We only need to show that $\{K_{M_i}(M') \cdot K_{M_j}(M'')\}_{i,j=1}^\infty$ is complete in $W_0(\Omega^2)$. Let $\phi \in W_0(\Omega^2)$ satisfy, for every $i, j = 1, 2, \ldots$, $(\phi(M',M''),\ K_{M_i}(M') \cdot K_{M_j}(M'')) = 0$. Since $K_{M_i}(M') \cdot K_{M_j}(M'')$ is the reproducing kernel of $W_0(\Omega^2)$, $\phi(M_i, M_j) = 0$. From the density of the set $\{M_i\}$ in $\Omega$, namely the density of the set $\{(M_i, M_j)\}_{i,j=1}^\infty$ in $\Omega^2$, it follows that $\phi \equiv 0$ on $\Omega^2$. This completes the proof. □

3.3. The representation of the exact solution of Eq. (3.1)

In this section, we consider how to find the separable solution in the function set (3.12). We first give a lemma.

Lemma 3.2. Suppose $q(M,M') \in W_0(\Omega^2)$; then the following two propositions are equivalent:

Proposition A: There exists a function $v(M)$ such that $q(M,M') = v(M)v(M')$ and $v(M_0) = C_0 \ne 0$.

Proposition B: $q(M,M') \ne 0$ and $q(M,M') = (1/C_0^2)\,q(M,M_0) \cdot q(M',M_0)$.


Since Eq. (3.1) has a unique solution, which satisfies $p_1(M_0) = C_0$, Eq. (3.6) has a unique separable solution satisfying the same condition. By Lemma 3.2 and (3.12), we obtain the following method of finding the exact solution of Eq. (3.1).

Method: Set

(3.12*)
\[
q(M,M') = u_0 + \sum_{i=1}^{\infty} \alpha_i \tilde{r}_i,
\]
where $u_0$ is defined in (3.10) and $\tilde{r}_i$ is defined by Theorem 3.3. The real sequence $\{\alpha_i\}_{i=1}^\infty \in l^2$ is unknown. Let $C_0$ be a nonzero known number. Seek $q(M,M')$ such that

(3.14)
\[
0 \ne q(M,M') = \frac{1}{C_0^2}\,q(M,M_0) \cdot q(M',M_0)
\]
(suppose that (3.14) has a unique solution), or equivalently find $\{\alpha_i\}_{i=1}^\infty$ such that
\[
\Bigl\|q(M,M') - \frac{1}{C_0^2}\,q(M,M_0) \cdot q(M',M_0)\Bigr\|_{W_0(\Omega^2)} = 0;
\]
then from Theorem 3.1 it follows that $q(M,M_0)/C_0$ is the exact solution of Eq. (3.1).

3.4. The numerical algorithm for computing an ε-approximate solution of Eq. (3.1)

In this section, we give a numerical algorithm for finding an approximate solution of (3.14). We first introduce the concept of an ε-approximate solution.

Definition 3.2. Suppose that $\varepsilon > 0$. We call $u(M)$ an ε-approximate solution of Eq. (3.1) if

(3.15)
\[
\|Bu \cdot Cu + Fu - f\|_{W(\Omega)} \le \varepsilon.
\]

Theorem 3.4. Suppose that $q(M',M'')$ is of the form (3.12*) and $q(M_0,M_0) = C_0^2$. Put

(3.16)
\[
\varepsilon_1(M',M'') = q(M',M'') - \frac{1}{C_0^2}\,q(M',M_0) \cdot q(M'',M_0).
\]
If for some $\varepsilon > 0$, $\|\varepsilon_1(M',M'')\|_{W_0(\Omega^2)} < \varepsilon$, then $q(M',M_0)/C_0$ is an $\varepsilon \cdot \|L\|$-approximate solution of Eq. (3.1).

Proof. Since $q(M',M'')$ is of the form (3.12*), we obtain $Lq = f$. Furthermore, from $q(M_0,M_0) = C_0^2$ it follows that
\[
L\varepsilon_1 = Lq - \frac{1}{C_0^2}\,L[q(M',M_0) \cdot q(M'',M_0)]
= f - B_{M'}\frac{q(M',M_0)}{C_0} \cdot C_{M''}\frac{q(M'',M_0)}{C_0} - F_{M'}\frac{q(M',M_0)}{C_0}.
\]
If we put $u(M) = q(M',M_0)/C_0$, then $\|L\varepsilon_1\|_W = \|f - (Bu \cdot Cu + Fu)\|_W \le \|L\| \cdot \varepsilon$. This means that $u(M) = q(M',M_0)/C_0$ is an $\varepsilon \cdot \|L\|$-approximate solution of Eq. (3.1). □

In order to give the numerical algorithm for solving Eq. (3.1), we need the following further analysis:

(1) The choice of $r_i$: Take the first $n$ points $\{M_i\}$ belonging to $\Omega$ and put
\[
r_{lm} = K_{M_l}^{nB}(V') \cdot K_{M_m}^{nB}(V''), \quad l, m = 1, 2, \ldots, n,
\]
and arrange them by the diagonal rule, namely $r_{11}, r_{12}, r_{21}, r_{31}, r_{22}, r_{13}, r_{14}, r_{23}, \ldots, r_{nn}$; then regard this as the sequence $\{r_i\}_{i=1}^{n^2}$, where $K_M^{nB}(V')$ is the reproducing kernel of $W_0(\Omega)$.


(2) The choice of the unknown numbers: Take $\{\zeta_i = q(M_i, M_0)/C_0\}_{i=1}^{n}$ as the unknown numbers, where $q(M',M'')$ has the form (3.12*); then from (3.14) it follows that $q(M_i, M_j) = \zeta_i \zeta_j$. Moreover, $\alpha_i$ can be expressed in terms of $\{\zeta_i\}_{i=1}^{n}$ as follows:

(3.17)
\[
\begin{aligned}
\alpha_i &= (q(M',M''), \tilde{r}_i) = \Bigl(q(M',M''),\ \sum_{k=1}^{\infty} \beta_{ik}\psi_k + \sum_{k=1}^{i} c_{ik} r_k\Bigr) \\
&= \sum_{k=1}^{\infty} \beta_{ik}\,(q(M',M''), L^*\varphi_k) + \sum_{k=1}^{i} c_{ik}\,(q(M',M''), r_k) \\
&= \sum_{k=1}^{\infty} \beta_{ik}\,\bigl((Lq)(M), R_{M_k}^{nB}(M)\bigr) + \sum_{k=1}^{i} c_{ik}\,\bigl(q(M',M''),\ K_{M_{k_1}}^{nB}(M')K_{M_{k_2}}^{nB}(M'')\bigr) \\
&= \sum_{k=1}^{\infty} \beta_{ik} f(M_k) + \sum_{k=1}^{i} c_{ik}\,q(M_{k_1}, M_{k_2}) \\
&= \sum_{k=1}^{\infty} \beta_{ik} f(M_k) + \sum_{k=1}^{i} c_{ik}\,\zeta_{k_1}\zeta_{k_2},
\end{aligned}
\]

where the index $k$ in the second sum corresponds to the ordered pair $(M_{k_1}, M_{k_2})$.

Summing up the analyses of (1) and (2), we obtain the following algorithm:

(A1) Take $\{Q_i\}_{i=1}^{N_1}$ belonging to $\Omega$; the choice of $N_1$ follows (A3).

(A2) For every $i = 1, 2, \ldots, N_1$, put $\varphi_i(M) = R_{Q_i}^{nB}(M)$, $\psi_i(M',M'') = (L^*\varphi_i)(M',M'')$. Let $\{\tilde{\psi}_i\}_{i=1}^{N_1}$ be the orthonormal system defined on $\Omega^2$ obtained from $\{\psi_i\}_{i=1}^{N_1}$ by the Gram–Schmidt process, namely,

(3.18)
\[
\tilde{\psi}_i(M',M'') = \sum_{k=1}^{i} \beta_{ik}\,\psi_k(M',M''), \quad \beta_{ii} > 0,\ i = 1, 2, \ldots, N_1.
\]

(A3) Noting that

(3.19)
\[
u_{0,N_1}(M',M'') = \sum_{i=1}^{N_1} \sum_{k=1}^{i} \beta_{ik} f(Q_k)\,\tilde{\psi}_i(M',M''),
\]
we obtain the approximation $u_{0,N_1}$ of $u_0$ defined by (3.10), where $N_1$ is selected such that $\max_{M',M'' \in \Omega} |u_{0,N_1}(M',M'') - u_{0,N_1-1}(M',M'')|$ is very small.

(A4) Take $\{M_i\}_{i=1}^{N_2}$ belonging to $\Omega$ and put $M_1 = M_0$ (see (3.24) for the purpose of this). Let $r_{lm} = K_{M_l}^{nB}(V') \cdot K_{M_m}^{nB}(V'')$, $l, m = 1, \ldots, N_2$, and arrange $\{r_{lm}\}_{l,m=1}^{N_2}$ according to the diagonal rule, namely $r_{11}, r_{12}, r_{21}, r_{31}, r_{22}, r_{13}, r_{14}, \ldots, r_{N_2 N_2}$; then regard this as the sequence $\{r_i\}_{i=1}^{N_2^2}$, and let $\{\tilde{r}_i\}_{i=1}^{N_2^2}$ be the orthonormal system defined on $\Omega^2$ obtained from $\tilde{\psi}_1, \tilde{\psi}_2, \ldots, \tilde{\psi}_{N_1}, r_1, r_2, \ldots, r_{N_2^2}$ by the Gram–Schmidt process, namely,

(3.20)
\[
\tilde{r}_i = \sum_{k=1}^{N_1} \beta_{ik}\,\psi_k + \sum_{k=1}^{i} c_{ik} r_k.
\]

(A5) Take $\{\zeta_i = q(M_i, M_0)/C_0\}_{i=1}^{N_2}$ as the unknown numbers; from (3.17) we have

(3.21)
\[
\alpha_i = (q(M',M''), \tilde{r}_i) = \sum_{k=1}^{N_1} \beta_{ik} f(Q_k) + \sum_{k=1}^{i} c_{ik}\,\zeta_{k_1}\zeta_{k_2},
\]
where $k$ corresponds to the ordered pair $(M_{k_1}, M_{k_2})$.


(A6) Let

(3.22)
\[
q(M',M'') = u_{0,N_1}(M',M'') + \sum_{i=1}^{N_2^2} \alpha_i \tilde{r}_i(M',M''),
\]

(3.23)
\[
\varepsilon_1(M',M'') = q(M',M'') - \frac{1}{C_0^2}\,q(M',M_0) \cdot q(M'',M_0).
\]
Solve for $\{\zeta_i\}_{i=1}^{N_2}$ by

(3.24)
\[
\min\Bigl\{\|\varepsilon_1(M',M'')\|_{W_0(\Omega^2)}\ \Big|\ \zeta_i \in R,\ i = 1, 2, \ldots, N_2,\ \zeta_1 = \frac{q(M_0,M_0)}{C_0} = C_0\Bigr\}
\]
(we put $\zeta_1 = C_0$ in order to satisfy Theorem 3.4), where the definition of the norm of $W_0(\Omega^2)$ is given in the Appendix.

(A7) Substitute $\{\zeta_i\}_{i=1}^{N_2}$ obtained in (A6) into (3.22). Then by Theorem 3.4 we know that $q(M',M_0)/C_0$ is an ε-approximate solution of Eq. (3.1).
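Step (A6) is a minimization in the $N_2 - 1$ free unknowns $\zeta_2, \ldots, \zeta_{N_2}$ ($\zeta_1$ is pinned to $C_0$). The following sketch (ours; `eps1_norm` is an assumed user-supplied callable that evaluates the $W_0(\Omega^2)$ norm of (3.23) for a given ζ vector, e.g. on a quadrature grid) shows one way to drive such a solve with SciPy.

```python
import numpy as np
from scipy.optimize import minimize

def solve_A6(C0, N2, eps1_norm):
    """Minimize ||eps1||_{W0(Omega^2)} over zeta_2..zeta_{N2} with zeta_1 = C0,
    as in (3.24).  `eps1_norm(zeta)` must return the norm of (3.23) for the
    full vector zeta (a user-supplied evaluation of (3.22)-(3.23))."""
    def objective(free):
        zeta = np.concatenate(([C0], free))   # enforce zeta_1 = C0
        return eps1_norm(zeta)
    x0 = np.full(N2 - 1, C0)                  # crude initial guess
    result = minimize(objective, x0, method="Nelder-Mead")
    return np.concatenate(([C0], result.x)), result.fun
```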

4. Example

Using the algorithm given in Section 3.4, we solve Eqs. (4.1). The following example (4.1) is taken from [8] as a numerical experiment.

Example. Suppose that $p = p(t,x)$ denotes the age-specific density of individuals of age $x$ at time $t$. Solve the population equations

(4.1)
\[
\begin{cases}
\dfrac{\partial p}{\partial t} + \dfrac{\partial p}{\partial x} = -P(t) \cdot p(t,x), & t \ge 0,\ 0 \le x < A,\\[4pt]
p(0,x) = \dfrac{e^{-x}}{2}, & 0 \le x < A,\\[4pt]
p(t,0) = P(t), & t \ge 0,\\[4pt]
P(t) = \displaystyle\int_0^A p(t,x)\,dx, & t \ge 0,
\end{cases}
\]
where $A = +\infty$. It is easy to verify that $p(t,x) = e^{-x}/(1+e^{-t})$, $t \ge 0$, $x \ge 0$, is the exact solution of Eqs. (4.1).

Solution: Take $\Omega = [0,1] \times [0,10]$, which represents (1 unit of time) × (10 units of age). The numerical results are listed in Table 1. From Table 1 we know that the algorithm provided in this paper is effective.
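The closed form $p(t,x) = e^{-x}/(1+e^{-t})$ can be checked directly; the short sketch below (ours, not from the paper) verifies the PDE of (4.1) by central differences and reproduces a few entries of the exact-solution column of Table 1.

```python
import numpy as np

def p_exact(t, x):
    # exact solution of (4.1)
    return np.exp(-x) / (1.0 + np.exp(-t))

def P_total(t):
    # P(t) = \int_0^infinity e^{-x} dx / (1 + e^{-t}) = 1 / (1 + e^{-t})
    return 1.0 / (1.0 + np.exp(-t))

# check the PDE p_t + p_x = -P(t) p at a sample point by finite differences
t0, x0, h = 0.4, 2.0, 1e-6
p_t = (p_exact(t0 + h, x0) - p_exact(t0 - h, x0)) / (2 * h)
p_x = (p_exact(t0, x0 + h) - p_exact(t0, x0 - h)) / (2 * h)
print(p_t + p_x, -P_total(t0) * p_exact(t0, x0))   # the two should agree

# a few entries of the "exact solution" column of Table 1
for t, x in [(0.2, 0.0), (0.4, 2.0), (1.0, 3.0)]:
    print(t, x, p_exact(t, x))
```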


Table 1

t      x       Exact solution p(t,x)   Approximate solution p1(t,x)   Error p(t,x) − p1(t,x)

0.00   0.00    0.5                     0.5                            0
0.20   0.00    0.549834                0.523672                       2.6162E−02
0.40   0.00    0.598688                0.554367                       4.4321E−02
0.60   0.00    0.645656                0.596953                       4.4703E−02
0.80   0.00    0.689974                0.653438                       3.6536E−02
1.00   0.00    0.731059                0.719825                       1.1234E−02

0.00   1.00    0.18394                 0.18394                        0
0.20   1.00    0.202273                0.195806                       6.467E−03
0.40   1.00    0.220245                0.208099                       1.2146E−02
0.60   1.00    0.237524                0.22346                        1.4064E−02
0.80   1.00    0.253827                0.24594                        7.887E−03
1.00   1.00    0.268941                0.274462                       −5.552E−03

0.00   2.00    0.0676676               0.0676676                      0
0.20   2.00    0.074419                0.0688321                      5.5869E−03
0.40   2.00    0.0810236               0.0714055                      9.6181E−03
0.60   2.00    0.0873801               0.0770204                      1.03597E−02
0.80   2.00    0.0933779               0.087962                       5.4159E−03
1.00   2.00    0.0989390               0.0923431                      6.5159E−03

0.00   3.00    0.0248935               0.0248935                      0
0.20   3.00    0.0273767               0.0229024                      4.4722E−03
0.40   3.00    0.0298069               0.0222456                      7.3509E−03
0.60   3.00    0.0321453               0.0239349                      8.2104E−03
0.80   3.00    0.0343518               0.0279446                      6.4072E−03
1.00   3.00    0.0363973               0.0333908                      2.9965E−03

0.00   4.00    0.00915782              0.00915782                     0
0.20   4.00    0.0100706               0.00824638                     1.8242E−03
0.40   4.00    0.0109653               0.0076789                      3.2864E−03
0.60   4.00    0.0118256               0.00805066                     3.7749E−03
0.80   4.00    0.0126373               0.00947644                     3.1609E−03
1.00   4.00    0.0133898               0.0115372                      1.8526E−03

0.00   5.00    0.00336897              0.00336897                     0
0.20   5.00    0.00370475              0.00403647                     −3.3172E−04
0.40   5.00    0.00403393              0.00428807                     −2.5414E−04
0.60   5.00    0.0043504               0.00448504                     −1.3464E−04
0.80   5.00    0.00464901              0.005010781                    −3.5880E−04
1.00   5.00    0.00492583              0.00579783                     −8.7200E−04

0.00   6.00    0.00123938              0.00123938                     0
0.20   6.00    0.0013629               0.000471366                    8.9154E−04
0.40   6.00    0.001484                −0.000839367                   2.32337E−03
0.60   6.00    0.00160024              −0.00236794                    3.85194E−03
0.80   6.00    0.00171028              −0.00367041                    5.38069E−03
1.00   6.00    0.0018211               −0.00478155                    6.49366E−03

0.00   8.00    0.000167731             0.000167731                    0
0.20   8.00    0.000184449             −0.000274359                   4.58849E−03
0.40   8.00    0.000200837             −0.000611792                   6.31876E−03
0.60   8.00    0.000216594             −0.00611792                    9.81151E−03
0.80   8.00    0.000231461             −0.0129138                     1.30453E−02
1.00   8.00    0.000245343             −0.0157958                     1.6041E−02

0.00   10.00   0.0000227               0.0000227                      0
0.20   10.00   0.0000249642            −0.000189344                   2.14306E−04
0.40   10.00   0.0000271804            −0.00111487                    1.14205E−03
0.60   10.00   0.0000271804            −0.00249723                    2.52654E−03
0.80   10.00   0.0000293128            −0.00386516                    3.89648E−03
1.00   10.00   0.00003319              −0.00571415                    5.20734E−03


Appendix A

A.1. The representation of $\{\psi_i(M',M'')\}_{i=1}^\infty$

Assume that $M', M'', N', N'', M \in \Omega$. Let $\Omega_1 = [0,n] \times [0,B]$. Denote by $K_M^{nB}(N)$ the reproducing kernel of $W_0(\Omega_1)$. Then
\[
\begin{aligned}
\psi_i(M',M'') &= \bigl(\psi_i(N',N''),\ K_{M'}^{nB}(N') \cdot K_{M''}^{nB}(N'')\bigr)_{(N',N''),\,W_0(\Omega_1^2)} \\
&= \bigl((L^*\varphi_i)(N',N''),\ K_{M'}^{nB}(N') \cdot K_{M''}^{nB}(N'')\bigr)_{(N',N''),\,W_0(\Omega_1^2)} \\
&= \bigl(\varphi_i(M),\ L_{N'N''}[K_{M'}^{nB}(N') \cdot K_{M''}^{nB}(N'')](M)\bigr)_{M,\,W(\Omega_1)} \\
&= L_{N'N''}[K_{M'}^{nB}(N') \cdot K_{M''}^{nB}(N'')](M_i),
\end{aligned}
\]
where the expression $L_{N'N''}[K_{M'}^{nB}(N') \cdot K_{M''}^{nB}(N'')](M)$ indicates that the operator $L$ is applied to $K_{M'}^{nB}(N') \cdot K_{M''}^{nB}(N'')$ as a function of $N', N''$ and that the resulting function is considered as a function of $M$. Substituting $K_{M'}^{nB}(N') \cdot K_{M''}^{nB}(N'')$ into (3.3)–(3.5) and using the equality $(u, B^*v) = (Bu, v)$, we get
\[
\psi_i(M',M'') = [B_V K_{M'}^{nB}(V)](M_i) \cdot [C_{V'} K_{M''}^{nB}(V')](M_i) + \frac{1}{C_0} K_{M_0}^{nB}(M') \cdot [F_{V'} K_{M''}^{nB}(V')](M_i).
\]

A.2. Expressions of the reproducing kernels of $W_{2,0}^2[a,b]$ and $W_2^2[c,d]$

Denote by $\bar{K}_x^{ab}(\xi)$ and $K_y^{cd}(\xi)$, respectively, the reproducing kernels of $W_{2,0}^2[a,b]$ and $W_2^2[c,d]$, where $x, \xi \in [a,b]$ and $y, \xi \in [c,d]$. Computing with Mathematica 4.2, we obtain the expressions of $\bar{K}_x^{ab}(\xi)$ and $K_y^{cd}(\xi)$ as follows:
\[
\bar{K}_x^{ab}(\xi) =
\begin{cases}
[a_1(x) + a_2(x)\xi]e^{\xi} + [a_3(x) + a_4(x)\xi]e^{-\xi}, & \xi \le x,\\
[b_1(x) + b_2(x)\xi]e^{\xi} + [b_3(x) + b_4(x)\xi]e^{-\xi}, & \xi > x,
\end{cases}
\]
where
\[
a_1(x) = b_1(x) + \tfrac{1}{4}(1+x)e^{-x}, \quad a_2(x) = b_2(x) - \tfrac{1}{4}e^{-x}, \quad a_3(x) = b_3(x) + \tfrac{1}{4}(-1+x)e^{x}, \quad a_4(x) = b_4(x) - \tfrac{1}{4}e^{x},
\]
while
\[
\begin{aligned}
b_1(x) = \frac{1}{\Gamma_1}\{ &[-3e^{4a} + (5 - 2a + 2b - 4ab + 2b^2)e^{2(a+b)}]e^{-x} + [(3 + 6a)e^{2a} - (5 + 2b + 2b^2)e^{2b}]e^{x} \\
&+ [-3e^{4a} + (1 + 2b)e^{2(a+b)}]x e^{-x} + [-3e^{2a} + (1 + 2b)e^{2b}]x e^{x}\},\\
b_2(x) = \frac{1}{\Gamma_1}\{ &[3e^{4a} - (1 - 4a + 2b)e^{2(a+b)}]e^{-x} + [-3e^{2a} + (1 + 2b)e^{2b}]e^{x} - 2e^{2(a+b)} x e^{-x} - 2e^{2b} x e^{x}\},\\
b_3(x) = \frac{1}{\Gamma_1}\{ &[(-5 + 2b - 2b^2)e^{4a+2b} + (3 - 6a)e^{2a+4b}]e^{-x} + [(5 + 2a - 2b - 4ab + 2b^2)e^{2(a+b)} - 3e^{4b}]e^{x} \\
&+ [(-1 + 2b)e^{4a+2b} + 3e^{2a+4b}]x e^{-x} + [(-1 + 2b)e^{2(a+b)} + 3e^{4b}]x e^{x}\},\\
b_4(x) = \frac{1}{\Gamma_1}\{ &[(-1 + 2b)e^{4a+2b} + 3e^{2a+4b}]e^{-x} + [(1 + 4a - 2b)e^{2(a+b)} - 3e^{4b}]e^{x} - 2e^{4a+2b} x e^{-x} - 2e^{2(a+b)} x e^{x}\},
\end{aligned}
\]
with $\Gamma_1 = 4[3(e^{4a} - e^{4b}) + 4(a - b)e^{2(a+b)}]$. Similarly,
\[
K_y^{cd}(\xi) =
\begin{cases}
[c_1(y) + c_2(y)\xi]e^{\xi} + [c_3(y) + c_4(y)\xi]e^{-\xi}, & \xi \le y,\\
[d_1(y) + d_2(y)\xi]e^{\xi} + [d_3(y) + d_4(y)\xi]e^{-\xi}, & \xi > y,
\end{cases}
\]
where
\[
c_1(y) = d_1(y) + \tfrac{1}{4}(1+y)e^{-y}, \quad c_2(y) = d_2(y) - \tfrac{1}{4}e^{-y},
\]


\[
c_3(y) = d_3(y) + \tfrac{1}{4}(-1+y)e^{y}, \quad c_4(y) = d_4(y) - \tfrac{1}{4}e^{y},
\]
while
\[
\begin{aligned}
d_1(y) = \frac{1}{\Gamma}\{ &[-9e^{4c} + (9 - 10c + 2c^2 + 10d - 4cd + 4c^2 d + 2d^2 - 4cd^2)e^{2(c+d)}]e^{-y} + [-(15 + 6c + 6c^2)e^{2c} + (15 + 6d + 6d^2)e^{2d}]e^{y} \\
&+ [-9e^{4c} + (9 - 2c + 2d - 4cd + 4d^2)e^{2(c+d)}]y e^{-y} + [(3 + 6c)e^{2c} - (3 + 6d)e^{2d}]y e^{y}\},\\
d_2(y) = \frac{1}{\Gamma}\{ &[9e^{4c} + (-9 + 2c - 4c^2 - 2d + 4cd)e^{2(c+d)}]e^{-y} + [(3 + 6c)e^{2c} - (3 + 6d)e^{2d}]e^{y} + [4(c - d)e^{2(c+d)}]y e^{-y} + [6(e^{-2c} + e^{2d})]y e^{y}\},\\
d_3(y) = \frac{1}{\Gamma}\{ &[-(15 - 6d + 6d^2)e^{4c+2d} + (15 - 6c + 6c^2)e^{2c+4d}]e^{-y} + [(-9 - 10c - 2c^2 + 10d + 4cd + 4c^2 d - 2d^2 - 4cd^2)e^{2(c+d)} + 9e^{4d}]e^{y} \\
&+ [(-3 + 6d)e^{4c+2d} + (3 - 6c)e^{2c+4d}]y e^{-y} + [(9 + 2c - 2d - 4cd + 4d^2)e^{2(c+d)} - 9e^{4d}]y e^{y}\},\\
d_4(y) = \frac{1}{\Gamma}\{ &[(-3 + 6d)e^{4c+2d} + (3 - 6c)e^{2c+4d}]e^{-y} + [(-9 - 2c - 4c^2 + 2d + 4cd)e^{2(c+d)} + 9e^{4d}]e^{y} \\
&+ [6(e^{-2c} + e^{2d})e^{2(c+d)}]y e^{-y} + [4(c - d)e^{2(c+d)}]y e^{y}\},
\end{aligned}
\]

where $\Gamma = 4[9(e^{4c} + e^{4d}) - 2(9 + 2c^2 - 4cd + 2d^2)e^{2(c+d)}]$.

A.3. The definition of $W_0(\Omega^2)$ and its inner product

Suppose that $\Omega = [a,b] \times [c,d]$. Then
\[
W_0(\Omega) \overset{\text{def}}{=} W_{2,0}^2[a,b] \otimes W_2^2[c,d], \qquad W_0(\Omega^2) \overset{\text{def}}{=} W_0(\Omega) \otimes W_0(\Omega).
\]
Let $A = \{\gamma = (\gamma_1, \gamma_2, \gamma_3, \gamma_4) \mid \gamma_i \in \{0,1,2\},\ i = 1,2,3,4\}$. For any $\gamma = (\gamma_1, \gamma_2, \gamma_3, \gamma_4) \in A$ and $u(x_1,x_2,x_3,x_4) \in W_0(\Omega^2)$, write $\partial_i^{\gamma_i} = \partial^{\gamma_i}/\partial x_i^{\gamma_i}$ and $\partial^{(\gamma)} = \partial_1^{\gamma_1}\partial_2^{\gamma_2}\partial_3^{\gamma_3}\partial_4^{\gamma_4}$, where $\partial_i^{(0)} u = u$, $i = 1,2,3,4$. Furthermore, denote by $l$ the number of indices $i$ with $\gamma_i = 1$. Then the inner product of $W_0(\Omega^2)$ can be defined as

(A.3)
\[
(u,v) = \sum_{\gamma \in A} 2^{l} \int_{\Omega^2} \partial^{(\gamma)} u \cdot \partial^{(\gamma)} v\,d\Omega^2 \quad \forall u, v \in W_0(\Omega^2).
\]
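To make the weighting in (A.3) concrete, a tiny enumeration (ours, not from the paper) lists the multi-indices $\gamma \in A$ together with the weight $2^l$, where $l$ is the number of components equal to 1; these are the coefficients that would multiply each mixed-derivative term $\partial^{(\gamma)}u \cdot \partial^{(\gamma)}v$ in a quadrature of (A.3).

```python
from itertools import product

# gamma_i in {0, 1, 2} selects the order of differentiation in each of the
# four variables; the weight is 2**l with l = #{i : gamma_i == 1}, as in (A.3)
weights = {g: 2 ** sum(1 for gi in g if gi == 1)
           for g in product((0, 1, 2), repeat=4)}
print(len(weights))           # 81 terms
print(weights[(1, 0, 2, 1)])  # -> 4
```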

References

[1] L.M. Abia, J.C. López-Marcos, Runge–Kutta methods for age-structured population models, Appl. Numer. Math. 17 (1995) 1–17.
[2] L.M. Abia, J.C. López-Marcos, On the numerical integration of non-local terms for age-structured population models, Math. Biosci. 157 (1999) 147–167.
[3] N. Aronszajn, Theory of reproducing kernels, Trans. Am. Math. Soc. 68 (1950) 337–404.
[4] M. Cui, Z. Deng, On the best operator of interpolation, J. Math. Numer. Sin. 8 (2) (1986) 209–216.
[5] M. Cui, Y. Yan, The representation of the solution of a kind of operator equation Au = f, Numer. Math. J. Chin. Univ. 1 (1995) 82–86.
[6] J. Huang, The existence and uniqueness of global solutions for Verhulst's partial differential equation on the population mathematical model, J. Yunnan Normal Univ. 19 (6) (1999) 28–39.
[7] M. Iannelli, M.-Y. Kim, Splitting method for the numerical approximation of some models of age-structured population dynamics and epidemiology, Appl. Math. Comput. 87 (1997) 69–93.
[8] M.Y. Kim, E.J. Park, An upwind scheme for a nonlinear model in age-structured population dynamics, Comput. Math. Appl. 30 (8) (1995) 5–17.
[9] C.-L. Li, M.-G. Cui, The exact solution for solving a class of nonlinear operator equations in the reproducing kernel space, Appl. Math. Comput. 143 (2–3) (2003) 393–399.
[10] S. Wen, M. Cui, Best approximate interpolation operator in space W, J. Comput. Math. Appl. 3 (1997).