Asymptotically stable positive periodic solutions for parabolic systems with temperature feedback


Nonlinear Analysis 41 (2000) 73 – 95

www.elsevier.nl/locate/na

Anthony W. Leung a,∗, Beatriz Villa b

a Department of Mathematical Sciences, University of Cincinnati, Old Chemistry Bdg (ML 0025), Cincinnati, OH 45221-0001, USA
b Department of Mathematics, Universidad Nacional, Bogotá, Colombia

Received 11 October 1997; accepted 15 May 1998

Keywords: Periodic systems; Bifurcations; Asymptotic stability; Parabolic partial differential equations; Nonlinear feedback; Reactor dynamics

1. Introduction

We consider the time T-periodic problem

∂u/∂t − Δu = λ H(x, t, u_m(x, t)) u(x, t)   for all (x, t) ∈ Ω × (0, ∞),
u(x, t + T) = u(x, t),
u_i ≡ 0 on ∂Ω × (0, ∞),   i = 1, ..., m,      (1.1)

where Ω is a bounded domain in R^N, N ≥ 1, with boundary ∂Ω of class C^{2+α} for some α ∈ (0, 1), H is an m × m matrix function, u = col(u_1, ..., u_m), and T > 0. We will obtain a positive solution of Eq. (1.1), for a certain value of λ, and consider the stability of this periodic solution as a solution of the nonlinear parabolic system

∂u/∂t − Δu = λ H(x, t, u_m(x, t)) u(x, t),   (x, t) ∈ Ω × (0, ∞),
u(x, 0) = u_0,   x ∈ Ω,
u_i ≡ 0 on ∂Ω × (0, ∞),   i = 1, ..., m.      (1.2)

The system (1.1) models a physical problem in which the reaction rates H of the components depend on temperature, which is denoted by u_m.

This research is partially supported by Colciencias.



Corresponding author. Tel.: +1-513-556-4067; fax: +1-513-556-3417. E-mail address: [email protected] (A. W. Leung).


The system is nonlinear, and becomes linear if H is independent of u_m. However, in most physical problems the temperature should be introduced as a separate variable which changes with time and space. Consequently, system (1.1) should be applicable to the study of many physical problems. Examples of such temperature-dependent coefficient models can be found in [8, 13, 16]. System (1.1) may be called a temperature feedback model. An application of the theory developed here to reactor dynamics is given in the last section.

The parameter λ in Eq. (1.1) can arise in the study of reaction-diffusion in media of various sizes. It appears if we scale the space variable so that the equations become a system on a fixed space domain with the scaling parameter λ as in Eq. (1.1) (see e.g. [14] for details).

To clarify notation, the symbols u, v, u^s, etc. always denote m × 1 column vectors and H, H^0, etc. denote m × m matrices. When we write u ∈ X or H ∈ X (where X is a function space), we mean that all components of u or H belong to X. In order to have enough smoothness for the problem we assume some or all of the following smoothness hypotheses throughout the paper:

(H1) H(·, ·, ρ) ∈ C^{α,α/2}(Ω̄ × [0, T]) is T-periodic in t, with Hölder norm uniformly bounded for ρ in bounded subsets of R, for each component;
(H2) H_ρ := (∂H_ij(·, ·, ρ)/∂ρ) ∈ C^{α,α/2}(Ω̄ × [0, T]), with Hölder norm uniformly bounded for ρ in bounded subsets of R, for each component;
(H3) ∂²H_ij(x, t, ·)/∂ρ² is continuous in some neighborhood V of zero, uniformly in Ω̄ × R, for i, j ∈ J.

Here, J denotes the set of integers {1, ..., m}. For convenience, we let E and F be the Banach spaces of functions which are T-periodic in t, defined by

E := {u ∈ C^{2+α,1+α/2}(Ω̄ × R): u(x, t + T) = u(x, t), u ≡ 0 on ∂Ω × R},
‖u‖_E = max{‖u_i‖_{C^{2+α,1+α/2}(Ω̄×[0,T])}: i = 1, ..., m};

F := {u ∈ C^{α,α/2}(Ω̄ × R): u(x, t + T) = u(x, t)},
‖u‖_F = max{‖u_i‖_{C^{α,α/2}(Ω̄×[0,T])}: i = 1, ..., m};

F_1 := {u ∈ C^{1,1}(Ω̄ × R): u(x, t + T) = u(x, t), u ≡ 0 on ∂Ω × R},
‖u‖_{F_1} = max{‖u_i‖_{C^{1,1}(Ω̄×[0,T])}: i = 1, ..., m}.

In Section 3, we will be concerned with the bifurcation of positive periodic solutions from the trivial solution as the parameter λ changes. Conditions will be expressed in terms of H(x, t, 0) and its associated linear eigenvalue problems. In Section 4, we find additional conditions on H(x, t, 0) and (∂H/∂ρ)(x, t, 0) so that the T-periodic solution is asymptotically stable. In Section 5, we will apply the theories in Sections 3 and 4 to the study of reactor dynamics. Analogous theory can be readily developed for steady-state solutions in autonomous parabolic systems. Note also that essential assumptions on H(x, t, ρ) are made only near ρ = 0. In view of the general hypotheses made in Sections 3 and 4 on H(x, t, 0), we can


see that Theorems 3.1 and 4.3 are readily applicable to many physical, chemical or biological problems. Theorems 3.1 and 4.3 are the main results of this article. They extend the results described in [9, Sections III 26–27], from scalar equations with concave or convex nonlinearities to systems of the special feedback form. Theorem 3.1 gives sufficient conditions on the linearized system and its adjoint at temperature u_m = 0 for bifurcation to occur. Theorem 4.3 states that if, further, the off-diagonal coefficients are all nonnegative at temperature u_m = 0, and all the coefficients are decreasing functions of temperature there, then the positive periodic bifurcating solution is asymptotically stable. These theorems are convenient for application to physical problems with temperature feedback, as indicated in Section 5.

2. Preliminaries

In this section, we clarify some preliminary material which is needed in the main Sections 3–5. In the following lemma we study the evolution system "generated" on the Banach space X := (L^p(Ω))^m, p > 2N, by the operator M_0(t) := A + B̂(t), defined on X with domain D(M_0(t)) = D(A) = (W^{2,p}(Ω) ∩ W_0^{1,p}(Ω))^m. Here A is the "diagonal" operator with m components A := diag(Δ, ..., Δ), and B̂(t) denotes the multiplication operator induced on X by the m × m matrix function B(x, t) := [b_ij(x, t)]_{i,j=1}^m. We denote by ‖·‖_p the norm on X defined by

‖f‖_p = max{ (∫_Ω |f_i(x)|^p dx)^{1/p} : i = 1, ..., m },

for all f = (f_1, ..., f_m) ∈ (L^p(Ω))^m. For convenience we denote J = {1, ..., m}, ‖·‖ := ‖·‖_p on X, and I: X → X is the identity operator. We use the standard notation R(λ; C) for the operator (λI − C)^{-1}, and if Y and Z are Banach spaces, L(Y, Z) denotes the set of bounded linear maps from Y into Z, L(Y) := L(Y, Y).

Lemma 2.1. Suppose B̂(t) satisfies the following condition:

B̂(t) is uniformly bounded in L(X) for all t ∈ [0, ∞) and satisfies the Hölder type condition ‖B̂(t) − B̂(s)‖_{L(X)} ≤ C_1 |t − s|^θ,      (2.1)

for some θ ∈ (0, 1) and C_1 > 0. Then M_0 generates an evolution system on X for the problem

du/dt − (A + B̂(t))u = 0,   t > s,
u(s) = u_0,   u_0 ∈ X.      (2.2)

The proof of the lemma follows readily from Theorem 5.6.1 in [15]. It can also be obtained from more general results in [2]. For completeness, a short proof is included in the appendix.
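To make the object of Lemma 2.1 concrete, the following sketch (an illustration of ours, not taken from the paper: the grid, the coefficient b(x, t) and the time-stepping scheme are all invented) approximates the evolution system U(t, s) of du/dt − (A + B̂(t))u = 0 for a single equation (m = 1) on Ω = (0, 1) by implicit-Euler time stepping of a finite-difference discretization.

```python
# Minimal sketch (ours, not from the paper): a discrete analogue of the
# evolution system U(t, s) of Lemma 2.1 for a single equation (m = 1) on
# Omega = (0, 1), built by implicit-Euler time stepping.
import numpy as np

n, T = 50, 1.0                          # interior grid points, period
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
# discrete Dirichlet Laplacian (the operator A on X)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

def B(t):
    """Multiplication operator induced by a smooth, T-periodic b(x, t)."""
    return np.diag(1.0 + 0.5 * np.cos(2 * np.pi * t / T) * np.sin(np.pi * x))

def U(t, s, steps=200):
    """Approximate U(t, s): propagate the identity matrix from time s to t."""
    M = np.eye(n)
    dt = (t - s) / steps
    for k in range(steps):
        tk = s + (k + 1) * dt
        # implicit Euler: (I - dt*(A + B(tk))) u_{k+1} = u_k
        M = np.linalg.solve(np.eye(n) - dt * (A + B(tk)), M)
    return M

period_map = U(T, 0.0)
print("spectral radius of U(T, 0):", max(abs(np.linalg.eigvals(period_map))))
```

The matrix returned for t = T, s = 0 is a discrete counterpart of the period map U(T, 0), whose spectral radius is the quantity used repeatedly in Sections 3 and 4.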


In what follows we denote by L_c the operator L_c := diag(∂/∂t − Δ + c_1, ..., ∂/∂t − Δ + c_m), where each c_i(x, t) is a bounded, T-periodic in t, function in C^{α,α/2}(Ω̄ × R), 0 < α < 1, i ∈ J.

Lemma 2.2 (Comparison). Suppose that u, v ∈ (C^{2,1}(Ω × R))^m ∩ (C^{1,0}(Ω̄ × R))^m, u ≢ 0, and v_i ≢ 0, v_i ≥ 0 on Ω × R, for all i ∈ J, satisfy

L_c[u] = Pu   on Ω × R,
u(x, t + T) = u(x, t)   on Ω × R,
u ≡ 0   on ∂Ω × R,      (2.3)

L_c[v] ≥ Qv   on Ω × R,
v(x, t + T) = v(x, t)   on Ω × R,
v ≡ 0   on ∂Ω × R,      (2.4)

where P := [p_ij] and Q := [q_ij] are m × m matrices such that p_ij(x, t) and q_ij(x, t) are bounded functions, T-periodic in t, on Ω̄ × R. Moreover, p_ij and q_ij are nonnegative for i ≠ j, and q_ij ≥ p_ij in Ω̄ × R, for all i, j ∈ J. Then there exist δ ∈ R and k ∈ J such that v_k ≡ δu_k, v_j − δu_j ≥ 0 and p_kj = q_kj on Ω × R, for all j ∈ J.

Proof. Let K > 0 be a positive constant such that p_ii + K > 0, q_ii + K > 0 on Ω̄ × R, for all i ∈ J. Therefore, we have

(L_c)_i(v_i) + Kv_i ≥ (q_ii + K)v_i + Σ_{j=1, j≠i}^m q_ij v_j ≥ 0   in Ω × R.

Thus the Maximum Principle for parabolic equations implies that, for all i ∈ J, we have v_i > 0 in Ω × R and the outward normal derivative at any (x̄, t̄) ∈ ∂Ω × R satisfies (∂v_i/∂ν)(x̄, t̄) < 0.

First, we assume that some component u_l of u takes a positive value at some point in Ω × R. Since u ≡ 0 on ∂Ω, we can readily obtain some constant a > 0 such that v_j − au_j > 0 on Ω × R for all j ∈ J. Let δ_j = sup{a ∈ R: v_j − au_j > 0 on Ω × R} for those j such that δ_j can be finitely defined; define δ_j = ∞ if u_j ≤ 0 everywhere in Ω × R. Let δ_k be the minimum of all the δ_j's. Thus 0 < δ_k < ∞ and v_i − δ_k u_i ≥ 0 on Ω × R, i ∈ J. Moreover, since we have

((L_c)_k + K)(v_k − δ_k u_k) ≥ (K + p_kk)(v_k − δ_k u_k) + Σ_{i=1, i≠k}^m p_ki(v_i − δ_k u_i) + Σ_{i=1}^m (q_ki − p_ki)v_i ≥ 0   in Ω × R,      (2.5)

the Maximum Principle implies that the function φ := v_k − δ_k u_k either satisfies (i) φ ≡ 0 in Ω × R, or (ii) φ > 0 and ∂φ/∂ν ≤ −ε < 0. In case (ii), δ_k cannot be the supremum as


defined above. Hence, we must have φ ≡ 0 in Ω × R. Then Eq. (2.5) further implies that we must have q_ki = p_ki in Ω × R for all i ∈ J.

If u_j ≤ 0 in Ω × R for all j ∈ J, replace u by −u and follow the same proof as in the paragraph above. The conclusions of the lemma still hold because δ can be of any sign.

3. Bifurcation of the feedback model

In this section we study the feedback model (1.1). For convenience, we use the following notation: H(x, t, ρ) := [H_ij(x, t, ρ)]_{i,j=1}^m is an m × m matrix for (x, t, ρ) ∈ Ω̄ × R × R, and H^0 := [H_ij(·, ·, 0)]. In order to provide an ordering in our Banach spaces of functions we use the cone P := {u = col(u_1, ..., u_m): u_i ∈ C(Ω̄ × R), u_i(x, t) ≥ 0 for all (x, t) ∈ Ω̄ × R, i = 1, ..., m}. In this entire section, we will always assume that H(x, t, ρ) satisfies the smoothness properties (H1) and (H2). Throughout this paper we will use the symbol L for the diagonal operator

L := diag(∂/∂t − Δ, ..., ∂/∂t − Δ),

and L^{-1}f, for f ∈ F or f ∈ F_1, denotes the function w ∈ E such that Lw = f in Ω × R [6]. We define the adjoint L* of L by L* := diag(−∂/∂t − Δ, ..., −∂/∂t − Δ) and denote by (H^0)^T the transpose matrix of H^0. Our main assumption is:

(P1) (a) There exists a unique positive eigenvalue λ̂_0, with corresponding eigenfunction v^0 in E, for the problem

L[v^0] = λ̂_0 H^0 v^0 in Ω × R,   v^0 = 0 on ∂Ω × R,      (3.1)

having the property that each component of v^0 satisfies v_i^0 > 0 in Ω × R, ∂v_i^0/∂ν < 0 on ∂Ω × R for i ∈ J. Furthermore, 1/λ̂_0 is a geometrically simple eigenvalue of the operator L^{-1}H^0: F → F.
(b) Moreover, there exists v̂^0 ∈ E such that L*v̂^0 = λ̂_0 (H^0)^T v̂^0 in Ω × R, v̂_i^0 ≥ 0 for i = 1, ..., m, and ∫_{Ω×[0,T]} H^0 v^0 · v̂^0 dx dt > 0.

Applying L^{-1} to Eq. (1.1), it can be written as

u − λL^{-1}[H(·, u_m)u] = 0.      (3.2)

If we define the operator F: R^+ × F_1 → F_1 by F(λ, u) = u − λL^{-1}H(·, u_m)u for all (λ, u) ∈ R^+ × F_1, we can write Eq. (3.2) as F(λ, u) = 0. Defining L_0, L_1: F_1 → F_1 by L_0 = I − λ̂_0 L^{-1}H^0, L_1 = −L^{-1}H^0, and G: R^+ × F_1 → F_1 by G(λ, u) = −λL^{-1}[H(·, u_m) − H^0]u, Eq. (3.2) becomes

F(λ, u) ≡ L_0 u + (λ − λ̂_0)L_1 u + G(λ, u) = 0,   for (λ, u) ∈ R^+ × F_1.
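Hypothesis (P1) singles out the value λ̂_0 at which the linear problem (3.1) has a positive T-periodic solution, i.e. at which 1/λ̂_0 is the principal eigenvalue of L^{-1}H^0. Numerically this is the λ at which the period map of ∂v/∂t = Δv + λH^0(x, t)v reaches spectral radius one. The sketch below (a scalar illustration of ours, m = 1; the grid, the choice of H^0 and the bisection bracket are invented, not taken from the paper) locates it by bisection.

```python
# Illustrative sketch (not from the paper): locate the bifurcation value
# lambda_0 of (P1) for a scalar example, as the lambda at which the period map
# of  dv/dt = Laplacian v + lambda * H0(x,t) v  has spectral radius 1
# (equivalently, 1/lambda_0 is an eigenvalue of L^{-1} H^0).
import numpy as np

n, T, steps = 60, 1.0, 400
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
lap = (np.diag(-2*np.ones(n)) + np.diag(np.ones(n-1), 1)
       + np.diag(np.ones(n-1), -1)) / h**2

def H0(t):                        # a positive, T-periodic coefficient at u_m = 0
    return np.diag((1.0 + 0.3*np.sin(2*np.pi*t/T)) * np.ones(n))

def period_map_radius(lam):
    U = np.eye(n)
    dt = T / steps
    for k in range(steps):
        tk = (k + 1) * dt
        U = np.linalg.solve(np.eye(n) - dt*(lap + lam*H0(tk)), U)
    return max(abs(np.linalg.eigvals(U)))

lo, hi = 0.1, 50.0                # bracket: radius < 1 at lo, > 1 at hi
for _ in range(60):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if period_map_radius(mid) < 1.0 else (lo, mid)
print("approximate bifurcation value lambda_0 ≈", 0.5*(lo + hi))
```

As a sanity check for this kind of computation, replacing the time-dependent coefficient by H^0 ≡ 1 should return roughly π² ≈ 9.87, the classical principal Dirichlet eigenvalue of −Δ on (0, 1).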


We denote by ⟨⟨u, v⟩⟩ the integral

⟨⟨u, v⟩⟩ = ∫_0^T ∫_Ω u · v dx dt,

where u = col(u_1, ..., u_m), v = col(v_1, ..., v_m): Ω̄ × [0, T] → R^m are sufficiently regular functions, and u · v := Σ_{i=1}^m u_i v_i is the dot product. The null space and range of a linear mapping A are denoted by N(A) and R(A). The following theorem is the main bifurcation result of this section. It gives sufficient conditions on the linearized system and its adjoint at temperature u_m = 0 to ensure the bifurcation of a positive periodic solution. (Note hypothesis (P1) (a), (b).)

Theorem 3.1. Assume the matrix H satisfies hypothesis (P1) and smoothness conditions (H1)–(H3). Then (λ̂_0, 0) is a bifurcation point for the problem (3.2) (or, equivalently, for Eq. (1.1)). More precisely, there exist an interval [0, δ̄], δ̄ > 0, and a C^1-curve (λ(s), φ(s)): [0, δ̄] → R × F_1, such that λ(0) = λ̂_0, φ(0) = 0, and

L[u^s] − λ(s)H(·, u_m^s)u^s = 0,      (3.3)

where u^s = col(u_1^s, ..., u_m^s), u^s = s(v^0 + φ(s)), and v^0 is described in part (a) of (P1).

This theorem is proved by first proving the following two lemmas and then applying a bifurcation theorem in [3].

Lemma 3.1. Assume H satisfies (P1), (H1) and (H2). Then (i) N(L_0) is a one-dimensional subspace of F_1 spanned by v^0. (ii) dim(F_1/R(L_0)) = 1. (iii) L_1 v^0 ∉ R(L_0) and (−L^{-1})v^0 ∉ R(L_0).

Proof. (i) is given in (P1). In order to prove (ii), we first show that 1/λ̂_0 is an algebraically simple eigenvalue of L^{-1}H^0. In fact, if there exists w ∈ F_1 such that ((1/λ̂_0)I − L^{-1}H^0)²w = 0, then there exists k* ∈ R such that ((1/λ̂_0)I − L^{-1}H^0)w = k*v^0. Therefore

(1/λ̂_0)Lw = H^0 w + λ̂_0 k* H^0 v^0   in Ω × R.      (3.4)

Multiplying both sides of (3.4) by v̂^0 and integrating over Ω × [0, T], we obtain λ̂_0 k* ⟨⟨H^0 v^0, v̂^0⟩⟩ = 0. Hence, we have k* = 0 and w ∈ N((1/λ̂_0)I − L^{-1}H^0). From this we can use the theory of compact operators to obtain the decomposition

F_1 = N(L_0) ⊕ R(L_0)      (3.5)

([4, Ch. 2, Theorem 8.9]). Since dim(N(L_0)) = 1, Eq. (3.5) implies (ii).


Finally, from Eq. (3.5) we obtain (iii), since L_1 v^0 = −λ̂_0^{-1} v^0 ∈ N(L_0) implies the first part, and ⟨⟨v^0, v̂^0⟩⟩ ≠ 0 implies the second part.

In what follows, we use some Schauder theory estimates; specifically, the existence of a positive constant K(α) > 0 for each α ∈ (0, 1) such that, for all u ∈ C(Ω̄ × [0, T]),

‖L^{-1}u‖_∞ ≤ ‖L^{-1}u‖_{C^{1+α,α/2}(Ω̄×[0,T])} ≤ K(α)‖u‖_∞   for α ∈ (0, 1).

Direct calculations give the following result.

Lemma 3.2. Assume H satisfies (P1), (H1) and (H2). Then (i) the Fréchet derivatives D_2G, D_1G, D_12G exist and are continuous on R × F_1. Moreover, we have

D_2G(λ, u)w = −λL^{-1}[H(·, u_m)w + H_ρ(·, u_m)u w_m − H^0 w]      (3.6)

for all (λ, u) ∈ R^+ × F_1, w ∈ F_1, where H_ρ(·, u_m) is the matrix function (H_ρ(·, u_m))_ij := ∂H_ij(·, u_m)/∂ρ. (ii) ‖G(λ, u)‖_{F_1}/‖u‖_{F_1} → 0 as ‖u‖_{F_1} → 0, uniformly for λ near λ̂_0. (iii) If we further assume (H3), then F ∈ C²(R × F_1; F_1). (For more details of the proofs of analogous results, see [14].)

To complete the proof of Theorem 3.1, we use Eq. (3.6) to obtain the relations L_0 = D_2F(λ̂_0, 0), L_1 = D_12F(λ̂_0, 0), G(λ, 0) = D_2G(λ, 0) = D_12G(λ, 0) = 0. Consequently, by (P1) and Lemmas 3.1 and 3.2, we can apply a bifurcation theorem of Crandall and Rabinowitz [3] to the equation F(λ, u) = 0 to obtain a C^1-curve (λ(s), φ(s)) of solutions as described in the statement of Theorem 3.1.

4. Linearized stability and asymptotic stability

In this section we investigate the stability of the periodic solution u^s given in Eq. (3.3) as a solution of the initial boundary value problem (1.2). We will consider the linearized system related to Eq. (3.3),

∂u/∂t − Δu − λ(s)H(x, t, u_m^s)u − λ(s)u_m H_ρ(x, t, u_m^s)u^s = 0,   (x, t) ∈ Ω × [0, ∞),
u(x, 0) = u_0(x),
u|_{∂Ω × R} ≡ 0,      (4.1)

and study the spectrum of the evolution system associated with it. (Here, H_ρ(x, t, u_m) is the matrix function described in Lemma 3.2 above.) Throughout this section, we will always assume that H(x, t, ρ) satisfies the smoothness properties (H1)–(H3) in Section 1. We will prove a sequence of theorems and lemmas leading to the main asymptotic stability Theorem 4.3. Under the assumptions of Theorem 3.1, Theorem 4.3 asserts


that if we further assume that the off-diagonal entries of H are nonnegative at u_m = 0 and all entries of H are decreasing functions of u_m at u_m = 0, then the positive periodic bifurcating solution is asymptotically stable.

We now clarify an additional property. An m × m matrix function Q = [q_ij] in Ω̄ × [0, ∞) is said to satisfy property (P2) if

(P2) q_ij ≥ 0 in Ω̄ × [0, ∞) for all i, j ∈ J, i ≠ j.

Throughout this section, let B(x, t) = [b_ij] denote a matrix function with entries in F_1. We define the operators M, M_2: E → F by

M := ∂/∂t − (Δ + B(x, t))   and   M_2 := ∂/∂t − (Δ + B(x, t) − bI),

where b > 0 satisfies

Σ_{j=1}^m b_ij(x, t) − b < 0   for all (x, t) ∈ Ω̄ × R, i = 1, ..., m.

In what follows, we shall be mainly concerned with the study of the location of the eigenvalues of M. In our analysis we use the main spectral properties of compact positive operators. This study can be done via the use of the associated evolution systems. Observe that the operator B̂(τ) induced on X by multiplication by B(·, τ) satisfies condition (2.1). Let U(t, s) denote the evolution system generated on X by the family of operators {A + B̂(τ) − bI}_{τ≥0}. It is a known fact that U(t, s): X → (D(A), ‖·‖_1) is continuous (here ‖u‖_1 := ‖Au‖_X). Hence, as a consequence of the Sobolev imbedding theory, U(T, 0): X → X is a compact operator. An important result on the eigenvalues of M_2 and M is as follows.

Theorem 4.1. Suppose B(x, t) satisfies (P2), and b, M_2 are defined as above. Then (i) the operator M_2 has an inverse, and if we consider M_2^{-1}: F → F, it is a compact bounded linear operator. (ii) There exists an eigenvalue μ_1 of M_2, with corresponding eigenfunction in P. Here μ_1 is the reciprocal of the spectral radius of M_2^{-1}. (iii) The number σ_1 := μ_1 − b is an eigenvalue of M with eigenfunction in P. All eigenvalues σ of M satisfy Re σ ≥ σ_1.

Proof. Part (i). From (P2) and the choice of b, one can use the Maximum Principle to show that M_2 is a continuous and one-to-one linear operator. To prove the surjectivity of M_2, let K: F → F be defined by Ku = L^{-1}(Bu − bu) for all u ∈ F. Since the operator L^{-1}: F → E is continuous and the imbedding E → F is compact, K is compact. Observe that the equation M_2u = f is equivalent to (I − K)u = L^{-1}f for all f ∈ F. Thus, since I − K: F → F is injective, the Fredholm alternative for compact linear operators implies that I − K is surjective. This proves that M_2 is surjective. Therefore it follows from the Open Mapping Theorem that the inverse M_2^{-1}: F → E is a bounded linear operator. Since the inclusion i: E → F is compact, the operator M_2^{-1}: F → F is compact.


Parts (ii) and (iii). Since M_2^{-1}: F → F is a compact and positive bounded linear operator, its spectrum σ(M_2^{-1}) consists of a sequence of real and complex eigenvalues {γ_n}_{n=1}^∞, and zero is the only possible accumulation point of σ(M_2^{-1}). Moreover, according to the Krein–Rutman Theorem [4, Theorem 6.19.2], the spectral radius of M_2^{-1}, Spr(M_2^{-1}), is an eigenvalue of M_2^{-1} with corresponding eigenfunction in P. We denote Spr(M_2^{-1}) = γ_1 and let μ_n = 1/γ_n, for every positive integer n, represent the eigenvalues of M_2. Thus μ_1 is an eigenvalue of M_2 with corresponding eigenfunction in P.

Observe that if μ is an eigenvalue of M_2 with eigenfunction φ(x, t), then φ(·, jT) is an eigenfunction of the operator U(jT, 0) corresponding to the eigenvalue e^{−μjT}, for every positive integer j. This can be seen by verifying that the function z(x, t) := e^{−μt}φ(x, t) satisfies the problem

du/dt − (A + B̂(t) − bI)u(t) = 0,   t > 0,
u(0) = φ(·, 0).

In other words, we have z(·, t) = U(t, 0)φ(·, 0) on Ω̄ for t ≥ 0, and consequently e^{−μjT}φ(·, jT) = z(·, jT) = U(jT, 0)φ(·, 0) = U(jT, 0)φ(·, jT). Note that, conversely, if w_0 is an eigenfunction of U(T, 0) corresponding to e^{−μT} ∈ R, the T-periodic function w(x, t) := e^{μt}U(t, 0)w_0(x) is an eigenfunction of M_2 with eigenvalue μ. Therefore, by the Krein–Rutman Theorem, e^{−μ_1 T} = Spr(U(T, 0)). From the definition of spectral radius, we obtain e^{−Re(μ_n)T} = |e^{−μ_n T}| ≤ e^{−μ_1 T} for all n ≥ 1. This means that Re(μ_n) − b ≥ μ_1 − b, which proves (ii) and (iii).
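The correspondence between eigenvalues μ of the periodic operator and eigenvalues e^{−μT} of the period map, used twice in the proof above, can be seen in the simplest possible setting: a scalar equation with no space variable, where both quantities are explicit. The toy computation below is ours (the coefficient c(t) is invented) and only illustrates the identity.

```python
# Illustrative toy check (not from the paper): mu is a periodic eigenvalue of
# d/dt - c(t) exactly when e^{-mu T} is an eigenvalue of the period map
# U(T, 0).  For the scalar example c(t) = -2 + cos(2*pi*t), T = 1, both sides
# equal e^{-2}.
import numpy as np

T, steps = 1.0, 100000
dt = T / steps
t = (np.arange(steps) + 0.5) * dt
c = -2.0 + np.cos(2*np.pi*t)

U_T0 = np.exp(np.sum(c) * dt)      # period map of du/dt = c(t) u
mu = -np.mean(c)                    # principal periodic eigenvalue of d/dt - c(t)
print(U_T0, np.exp(-mu*T))          # both are approximately e^{-2}
```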


For s fixed, we define the operator F_s: E ⊆ F → F by

F_s u = L[D_2F(λ(s), u^s)u] = Lu − λ(s)H(·, u_m^s)u − λ(s)(H_ρ(·, u_m^s)u^s)u_m.

For convenience, we let σ_s denote the point spectrum of F_s. In order to obtain stability we will need the following additional hypothesis:

(P3) For those i where v̂_i^0 ≢ 0, assume that ∂H_ij(x, t, 0)/∂ρ ≤ 0 in Ω̄ × R for all j ∈ J; moreover, there exist some (x̄, t̄) ∈ Ω × R and (i, j) where ∂H_ij(x̄, t̄, 0)/∂ρ < 0 and v̂_i^0(x̄, t̄) > 0.

(Here, v̂_i^0 is defined in (P1) (b).) The following theorem is basic for the purpose of this section.

Theorem 4.2. Assume H satisfies (P1)–(P3) at ρ = 0. There exist a number δ* ∈ (0, δ̄), where δ̄ is described in Theorem 3.1, and a positive function ε_1(s), for s ∈ (0, δ*), such that if β is an eigenvalue of F_s, then Re β ≥ ε_1(s).

This theorem is proved by means of a sequence of lemmas. The following lemma is a consequence of Lemmas 3.1 and 1.3 in [3].

Lemma 4.1. There exists ε_0 > 0 such that, for all (λ, u) ∈ G_0 := {(λ, u) ∈ R^+ × F_1: ‖D_2F(λ, u) − L_0‖_{L(F_1)} < ε_0}, there is a unique real number r(λ, u), satisfying |r(λ, u)| < ε_0, such that D_2F(λ, u) − r(λ, u)(−L^{-1}) is singular. Moreover, r(λ, u) is a (−L^{-1})-simple eigenvalue of D_2F(λ, u), and the maps r: (λ, u) → r(λ, u) and w: (λ, u) → w(λ, u) are smooth in G_0, where w(λ, u) is the corresponding eigenfunction of r(λ, u).

As a consequence of this lemma, there exist δ_1 ∈ (0, δ̄) and two functions

(κ(·), z(·)): (λ̂_0 − δ_1, λ̂_0 + δ_1) → R × F_1,   (γ(·), ω(·)): [0, δ_1] → R × F_1,      (4.2)

with (κ(λ̂_0), z(λ̂_0)) = (γ(0), ω(0)) = (0, v^0), such that

−L D_2F(λ, 0)z(λ) = κ(λ)z(λ),
−L D_2F(λ(s), s(v^0 + φ(s)))ω(s) = γ(s)ω(s) = −F_s ω(s),      (4.3)

where κ(λ) = r(λ, 0), γ(s) = r(λ(s), s(v^0 + φ(s))). It will be proved that, in fact, for sufficiently small s, γ(s) < 0 and the real parts of all the eigenvalues of F_s are greater than or equal to −γ(s).

Lemma 4.2. The function λ(s) satisfies λ'(0) > 0.

Proof. Theorem 3.1 asserts that λ'(0) exists. Eq. (3.2) implies that s(v^0 + φ(s)) is in E. From Eq. (3.3) we obtain

∂/∂t (s(v^0 + φ(s))) − Δ(s(v^0 + φ(s))) = λ(s)H(·, s(v^0 + φ(s))_m) s(v^0 + φ(s)).

Dividing by s, differentiating with respect to s and setting s = 0, we obtain

∂φ'(0)/∂t − Δφ'(0) = λ'(0)H(·, 0)v^0 + λ̂_0 v_m^0 H_ρ(·, 0)v^0 + λ̂_0 H(·, 0)φ'(0).

Multiplying by (v̂^0)^T, integrating over Ω × [0, T] and using (P1), we obtain

⟨⟨v̂^0, L[φ'(0)]⟩⟩ = ⟨⟨v̂^0, λ'(0)H(·, 0)v^0 + λ̂_0 v_m^0 H_ρ(·, 0)v^0 + λ̂_0 H(·, 0)φ'(0)⟩⟩.

From (P1) part (b), we also have

⟨⟨L*v̂^0, φ'(0)⟩⟩ = λ̂_0 ⟨⟨v̂^0, H(·, 0)φ'(0)⟩⟩.

From the last two equalities we obtain, using (P3),

λ'(0) = −λ̂_0 ⟨⟨v̂^0, H_ρ(·, 0)v_m^0 v^0⟩⟩ / ⟨⟨v̂^0, H(·, 0)v^0⟩⟩ > 0.


Lemma 4.3. The function κ(λ), given in Eq. (4.3), satisfies κ'(λ̂_0) > 0.

Proof. From Eq. (4.3) we obtain

(−L)D_2F(λ, 0)z(λ) = −L(z(λ)) + λH(·, 0)z(λ) = κ(λ)z(λ),   λ ∈ (λ̂_0 − δ_1, λ̂_0 + δ_1).

Since κ(λ̂_0) = 0, multiplying by (v̂^0)^T, integrating over Ω × [0, T] and using (P1), we find

⟨⟨−v̂^0, L[z(λ)]⟩⟩ + λ⟨⟨v̂^0, H(·, 0)z(λ)⟩⟩ = (κ(λ) − κ(λ̂_0))⟨⟨v̂^0, z(λ)⟩⟩,
−λ̂_0⟨⟨v̂^0, H(·, 0)z(λ)⟩⟩ + λ⟨⟨v̂^0, H(·, 0)z(λ)⟩⟩ = (κ(λ) − κ(λ̂_0))⟨⟨v̂^0, z(λ)⟩⟩.

Therefore

(κ(λ) − κ(λ̂_0))/(λ − λ̂_0) = ⟨⟨v̂^0, H(·, 0)z(λ)⟩⟩ / ⟨⟨v̂^0, z(λ)⟩⟩.

Taking the limit as λ tends to λ̂_0, from (P1) we obtain the result.

Since lim_{s→0} −sλ'(s)κ'(λ̂_0)/γ(s) = 1 [3], from Lemmas 4.1 and 4.2 we obtain the following lemma.

Lemma 4.4. There exists δ_2 ∈ (0, δ_1) such that γ(s) < 0 for all s ∈ (0, δ_2). (We can assume that u^s = s(v^0 + φ(s)) > 0 on Ω × R for all s ∈ (0, δ_2).) (For more details, see [14].)

In what follows we consider the point spectrum of the operator F_s. Note that F_s = ∂/∂t − (Δ + λ̂_0 H^0 + B_s), where

B_s[·] = −λ̂_0 H(·, 0)[·] + [λ(s)H(·, u_m^s)[·] + λ(s)(H_ρ(·, u_m^s)u^s)[·]_m].

Let us denote by B̂(t) and B̂_s(t) the multiplication operators induced on X by the m × m matrices λ̂_0 H^0 and B_s, respectively. From Lemma 2.1, the smoothness of H^0 and B_s implies that there exists an evolution system associated with the evolution equations

du/dt − (A + B̂(t) + B̂_s(t))u(t) = 0   on [0, 2T],
u(0) = u_0,      (4.4)

for s ∈ (0, δ_2) fixed. This evolution system will be denoted by U_s.

Proof of Theorem 4.2. Note that if β is an eigenvalue of F_s, then β + 2πni/T is also an eigenvalue of F_s for every integer n. As a consequence of Lemmas 4.1 and 4.4, we obtain positive numbers ε_0 and δ_2 such that in the set {β ∈ C: |β| < ε_0} there is a unique eigenvalue of F_s, namely −γ(s) > 0, for all s < δ_2. (We may assume that ε_0 < 2π/T.) We first prove that there exists ω̃ > 0 such that Re(β_s) > −ω̃ for every eigenvalue β_s of F_s.


Let Ũ(t, τ) be the evolution system associated with the evolution equation

du/dt − (A + B̂(t))u(t) = 0   on [0, 2T],   u(0) = u_0,

for u_0 ∈ X. It is well known that if u is a solution of Eq. (4.4), then u satisfies the equation

u(t) = U_s(t, 0)u_0 = Ũ(t, 0)u_0 + ∫_0^t Ũ(t, τ)B̂_s(τ)u(τ) dτ.

From this, if u_0 ∈ X, we obtain by the Gronwall inequality that ‖U_s(t, 0)u_0‖_X = ‖u(t)‖_X ≤ Ke^{ωt}‖u_0‖_X for some constants K and ω independent of s, and so

Spr(U_s(t, 0)) ≤ ‖U_s(t, 0)‖_{L(X)} ≤ Ke^{ωt}   for all s ∈ (0, δ_2).

If β_s is an eigenvalue of F_s, then e^{−β_s T} is an eigenvalue of U_s(T, 0) (see the proof of Theorem 4.1). Therefore

|e^{−β_s T}| = e^{−Re(β_s)T} ≤ Spr(U_s(T, 0)) ≤ Ke^{ωT}   and   Re β_s > −(ω + (ln K)/T) := −ω̃.      (4.5)

We will construct a set

D = {β ∈ C: −ω̃ ≤ Re β ≤ r, 0 ≤ Im β ≤ 2π/T},      (4.6)

in which it is impossible to find an eigenvalue of F_s, for some r > 0 and s > 0 sufficiently small. This will imply that all the eigenvalues of F_s satisfy Re β > r. In fact, if β* is an eigenvalue of F_s and Re(β*) ≤ r, then by Eq. (4.5), Re(β*) > −ω̃, and β* belongs to the complement of D. This implies that −ω̃ ≤ Re β* ≤ r and either Im(β*) > 2π/T or Im(β*) < 0. However, there exists an integer n such that β* + 2πni/T belongs to D. Since e^{−(β* + 2πni/T)T} = e^{−β*T} is an eigenvalue of U_s(T, 0), we find that β* + 2πni/T ∈ D is also an eigenvalue of F_s. This contradiction proves that all the eigenvalues of F_s belong to the set {β ∈ C: Re β > r}.

In order to improve the information about the location of the point spectrum of F_s, we consider the operator

T_β := I + λ̂_0(−L^{-1})H^0 + β(−L^{-1})   on X̂,   β ∈ C,

where X̂ denotes the set of functions in (L^p(Ω × [0, 2T]))^m which are T-periodic in t. (See [7] for the extension of the theories in [6] to L^p.) Since T_β(u) = 0 iff βu = Mu, where M := ∂/∂t − (Δ + λ̂_0 H^0), the Fredholm alternative asserts the existence of T_β^{-1} ∈ L(X̂) for all β in the complement of the point spectrum σ_p(M) of M. Also observe that if F_s u* = β*u*, and T_{β*}^{-1} is defined, after simple calculations we obtain

T_{β*}(u*) = L^{-1}[L(u*) − λ̂_0 H^0 u*] + β*(−L^{-1})u* = L^{-1}[F_s + B_s]u* + β*(−L^{-1})u* = L^{-1}[F_s u* − β*u*] + L^{-1}[B_s u*].


Consequently,

u* = T_{β*}^{-1}(L^{-1}[B_s u*]).      (4.7)

Observe that (P1) asserts that the number zero is an eigenvalue of M = ∂/∂t − (Δ + λ̂_0 H^0), with an eigenfunction positive in each component. In the case B(x, t) = λ̂_0 H^0 in Theorem 4.1, the comparison Lemma 2.2 implies that 0 = μ_1 − b. Thus, part (iii) of Theorem 4.1 asserts that Re(β) ≥ 0 for every eigenvalue β of M, and if β is an eigenvalue of M different from 2πni/T for every integer n, then Re(β) > 0. Also, since zero is the only possible accumulation point of the eigenvalues of Ũ(T, 0), there exists a real number r_1 > 0 such that all eigenvalues of M belong to the complement of the compact set

D_1 = {β ∈ C: −ε_0/2 ≤ Re(β) ≤ r_1, ε_0/2 ≤ Im(β) ≤ 2π/T − ε_0/2} ∪ {β ∈ C: −ω̃ ≤ Re(β) ≤ −ε_0/2, 0 ≤ Im(β) ≤ 2π/T}.

Since the eigenvalues of M belong to the complement of D_1, there exists a positive real number k > 0 such that ‖T_β^{-1}‖ ≤ k for all β ∈ D_1. Note that ‖B_s‖_{L(X̂)} → 0 as s → 0. This and the last inequality imply that there exists 0 < δ* < δ_2 such that ‖T_β^{-1}(−L^{-1}B_s)‖ < 1 for all 0 < s ≤ δ* and β ∈ D_1. It follows from the last inequality and Eq. (4.7) that all of the eigenvalues of F_s belong to the complement of D_1, for all 0 < s ≤ δ*. Let 0 < s ≤ δ*, and 0 < r < min{−γ(s), r_1}. Let D_2 denote the set

D_2 = {β ∈ C: either |β| < ε_0 or |β − 2πi/T| < ε_0} ∩ {β ∈ C: Re(β) ≤ r}.

Therefore, by the arguments given at the beginning of the proof of this theorem concerning the eigenvalues near zero, we conclude that all of the eigenvalues of F_s belong to the complement of D_1 ∪ D_2. Since D_1 ∪ D_2 contains the set D given in (4.6), we have completed the proof of the theorem.

Remark 4.1. So far we have only been considering X := (L^p(Ω))^m, 2p > N, in our previous discussions. However, in the following stability results, we may also let X = C := {u: u ∈ (C(Ω̄))^m, u = 0 on ∂Ω}, and A = diag(Δ, ..., Δ) is an operator on C with domain D(A) = {u ∈ (W^{2,p}(Ω))^m for all p, Δu ∈ (C(Ω̄))^m, u = 0 and Δu = 0 on ∂Ω}. The operator A is an infinitesimal generator of an analytic semigroup on C (cf. [15, p. 217]). If we let M_0(t) := A + B̂(t) be defined on X = C with domain D(M_0(t)) = D(A), where B̂(t) denotes the multiplication operator induced on X by an m × m matrix function B(x, t), the statement and proof of Lemma 2.1 are true verbatim with X = C.

Let Y_1 and Y_2 be Banach spaces as follows: Y_1 = {u: u ∈ (L^p(Ω))^m} for p large enough such that N < 2p, and Y_2 = {u: u ∈ (C(Ω̄))^m, u = 0 on ∂Ω}. Let A_1 be the Δ operator on Y_1 with domain D(A_1) = {u ∈ Y_1: u ∈ (W^{2,p}(Ω) ∩ W_0^{1,p}(Ω))^m}, and A_2 be the Δ operator on Y_2 with domain D(A_2) as described in Remark 4.1 for D(A) above.


For u = col(u_1, ..., u_m) and f(t, u) = λH(·, t, u_m)u, we can consider the following nonlinear initial-boundary value problem, for each i = 1, 2, corresponding to Eq. (1.2):

du/dt − A_i u = f(t, u(t))   for t ≥ 0,   u(0) = u_0,      (4.8)

with u(t) ∈ D(A_i), t > 0, respectively, for i = 1, 2. Here we suppress writing the dependence of f on λ(s), since it is fixed for some s ∈ (0, δ*).

Definition 4.1. A solution of Eq. (4.8) in Y_i is a function u ∈ C([0, ∞); Y_i) ∩ C^1((0, ∞); Y_i) with u(0) = u_0, u(t) ∈ D(A_i) for t > 0, and u(t) satisfies Eq. (4.8) for all t > 0.

Let us recall that for fixed s, we denote by U_s the evolution system associated with the evolution equation (4.4). This evolution system is generated by {A + B̃_s(t)}, t ≥ 0, where B̃_s is the operator defined on Y_i, i = 1, 2, by

[B̃_s(t)z(t)](x) := λ(s)H(x, t, u_m^s(x, t))[z(t)(x)] + λ(s)H_ρ(x, t, u_m^s(x, t))u^s[z(t)(x)]_m.

For the case of solutions in X = Y_1, we will use the subspaces X^α, α > 0, where X^α is the Banach space (D((−A_1)^α), ‖·‖_α), D((−A_1)^α) is the domain of the fractional power of the operator (−A_1), and ‖u‖_α = ‖(−A_1)^α u‖_{Y_1}. In the case N/(2p) < α < 1, there exists a constant C(α) > 0 such that ‖u‖_∞ ≤ C(α)‖u‖_α for all u ∈ X^α (see [15]). We can then readily verify that for the function f described above, hypotheses (H1)–(H3) lead to the following Lipschitz properties in Y_1 and Y_2: there exist C̃ > 0, R > 0, 0 < θ < 1 such that

‖f(s, u) − f(t, w)‖_{L^p} ≤ C̃(|s − t|^θ + ‖u − w‖_α)   for all u, w ∈ X^α with ‖u‖_α, ‖w‖_α ≤ R, s, t > 0,      (4.9)

and

‖f(s, u) − f(t, w)‖_∞ ≤ C̃(|s − t|^θ + ‖u − w‖_∞)   for all u, w ∈ Y_2 with ‖u‖_∞, ‖w‖_∞ ≤ R, s, t > 0.      (4.10)

Here C̃ depends on R and α. From the Lipschitz properties (4.9) and (4.10) we obtain local existence for solutions of Eq. (4.8) in Y_1, Y_2 respectively (see e.g. [15]). From hypotheses (H1)–(H3) and the fact that α ∈ (N/(2p), 1) we can further deduce that

‖f(s, w) − f(t, u) − df_u(w − u)‖_{L^p} = o(‖w − u‖_α)   for all u, w ∈ X^α, ‖u‖_α ≤ R, t > 0, as ‖w − u‖_α → 0,      (4.11)


and

‖f(s, w) − f(t, u) − df_u(w − u)‖_∞ = o(‖w − u‖_∞)   for all u, w ∈ Y_2, ‖u‖_∞ ≤ R, t > 0, as ‖w − u‖_∞ → 0.      (4.12)

Here

df_u z = λH(·, t, u_m)z + λ (∂H(·, t, u_m)/∂ρ) u z_m.      (4.13)

Note that the operator F_s can be written as d/dt − A_i − df_u with u = u^s. The evolution system U_s(T, 0) associated with Eq. (4.4) is a compact linear operator, either from X^α → X^α, 0 ≤ α < 1, or from Y_2 → Y_2 (see e.g. [8]). Thus, we only have to consider eigenvalues in the spectrum. For each u_0 ∈ Y_1 or Y_2, we denote S(u_0)(t) := u(t), where u is the unique solution of Eq. (4.8) corresponding to u_0. We can readily apply the stability Theorem 8.1.1 in [8] and Theorem 4.2 above to obtain the asymptotic stability of the periodic solution u^s(x, t).

Theorem 4.3 (Asymptotic stability). Assume the smoothness properties (H1)–(H3) for H(x, t, ρ), and that H satisfies (P1) to (P3) at ρ = 0. For each fixed s, 0 < s < δ*, let λ = λ(s) and u^s(·, t) be the periodic solution of Eq. (4.8) or (1.2) described in Theorem 3.1. Let α ∈ (N/(2p), 1). Then for each i = 1, 2 there exist positive constants δ, γ_0, M such that if ‖u_0 − u^s(·, 0)‖_α ≤ δ/M, u_0 ∈ X^α, for i = 1 (or ‖u_0 − u^s(·, 0)‖_∞ ≤ δ/M, u_0 ∈ Y_2, for i = 2), the solution u(t) = S(u_0)(t) of Eq. (4.8) exists on 0 ≤ t < ∞ and satisfies

‖u(t) − u^s(·, t)‖_α ≤ M‖u_0 − u^s(·, 0)‖_α e^{−γ_0 t}   for all t ≥ 0,   i = 1,

or

‖u(t) − u^s(·, t)‖_∞ ≤ M‖u_0 − u^s(·, 0)‖_∞ e^{−γ_0 t}   for all t ≥ 0,   i = 2.

Note: The condition on α is only assumed for solutions in Y_1. Under the assumptions of the theorem, f(t, u) and its linear approximation df_u as given in Eq. (4.13) satisfy the continuity and Hölder continuity properties of Theorem 8.1.1 in [8]. We apply that theorem with α ∈ (N/(2p), 1), X = Y_1 for solutions in Y_1, and α = 0, X = Y_2 for solutions in Y_2. By Theorem 4.2, if β is an eigenvalue of F_s, then Re β ≥ ε_1(s) > 0. Since we only have to consider the eigenvalues of U_s(T, 0), we find

Spr(U_s(T, 0)) = sup_{β ∈ σ_p(F_s)} |e^{−βT}| = sup_{β ∈ σ_p(F_s)} e^{−Re(β)T} < e^{−Tε_1(s)} < 1.

Thus, the conclusion of the theorem follows from Theorem 8.1.1 in [8]. For more details on obtaining Eqs. (4.9)–(4.13), see similar arguments in [14].
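Theorem 4.3 reduces the asymptotic stability of u^s to the condition Spr(U_s(T, 0)) < 1 for the period map of the linearized problem (4.4). The following sketch is entirely ours: a scalar example (m = 1) with the invented feedback H(x, t, u) = h(x, t) − u, which is decreasing in u as (P3) requires, on an invented grid. It computes an approximate periodic solution and then the spectral radius of the discrete linearized period map.

```python
# Illustrative sketch (not from the paper): numerical check of the stability
# criterion of Theorem 4.3 for a scalar (m = 1) example with the feedback
# H(x,t,u) = h(x,t) - u.
import numpy as np

n, T, steps = 60, 1.0, 400
h_ = 1.0 / (n + 1)
x = np.linspace(h_, 1 - h_, n)
lap = (np.diag(-2*np.ones(n)) + np.diag(np.ones(n-1), 1)
       + np.diag(np.ones(n-1), -1)) / h_**2
lam = 12.0                                   # somewhat above the critical value
dt = T / steps

def hcoef(t):
    return 1.0 + 0.3*np.sin(2*np.pi*t/T)

# 1. Run the nonlinear problem (1.2) to an (approximately) periodic state u^s.
u = 0.1*np.sin(np.pi*x)
for period in range(200):
    for k in range(steps):
        tk = (k + 1)*dt
        rhs = lam*(hcoef(tk) - u)            # H(x,t,u) frozen from previous step
        u = np.linalg.solve(np.eye(n) - dt*(lap + np.diag(rhs)), u)
us = []                                      # store one period of u^s
for k in range(steps):
    tk = (k + 1)*dt
    rhs = lam*(hcoef(tk) - u)
    u = np.linalg.solve(np.eye(n) - dt*(lap + np.diag(rhs)), u)
    us.append(u.copy())

# 2. Period map of the linearization (4.1): coefficient H(.,u^s) + H_rho(.,u^s)*u^s,
#    which here equals h - 2*u^s since H_rho = -1.
U = np.eye(n)
for k in range(steps):
    tk = (k + 1)*dt
    coeff = lam*(hcoef(tk) - 2.0*us[k])
    U = np.linalg.solve(np.eye(n) - dt*(lap + np.diag(coeff)), U)
print("Spr(U_s(T,0)) =", max(abs(np.linalg.eigvals(U))), "(< 1 means stable)")
```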


5. Application to reactor dynamics

In this section we study the following system, which describes the dynamics inside fission reactors of various sizes:

∂û/∂t − Δû = Ĥ(y, t, û_m(y, t))û(y, t),   û(y, t + T) = û(y, t),   for all (y, t) ∈ kΩ × (0, ∞),
û ≡ 0 on k∂Ω × (0, ∞),      (5.1)

where Ω is a fixed domain in R^N, m > 2 and k > 0. The domain kΩ represents the reactor core; û_j(y, t), j = 1, ..., m − 1, is the neutron flux of the jth energy group, and û_m(y, t) denotes the temperature. Ĥ is an m × m matrix, whose components Ĥ_ij represent the temperature-dependent fission and scattering rates of the various energy groups; Δ is the Laplacian operator with y as the independent variable. The function Ĥ_mm denotes the cooling coefficient and Ĥ_mj, j = 1, ..., m − 1, denotes the rate of temperature increase due to neutrons in group j. Consequently, we should have Ĥ_mm ≤ 0 and Ĥ_ij ≥ 0 for all (i, j) ≠ (m, m). With the change of variables y = kx, problem (5.1) is transformed into Eq. (1.1), where λ = k². Here u_i(x) = û_i(y) = û_i(kx) and H_ij(x, t, u_m(x, t)) = Ĥ_ij(y, t, û_m(y, t)) = Ĥ_ij(kx, t, û_m(kx, t)). We assume that the control rods in the reactor are adjusted periodically so that the reaction rates are time-periodic. This gives rise to the periodic system (5.1). We will obtain a positive solution of Eq. (1.1) for a certain value of λ and consider the stability of this solution as a solution of the nonlinear parabolic system (1.2). This means that a positive periodic solution bifurcates from the trivial solution at a certain critical size of the reactor. (For a more detailed description of the equations, see e.g. [13, 10, 5].) We study this model under all of the smoothness assumptions (H1)–(H3) in Section 1.

The considerations in the last paragraph lead to the following conditions on the reactor model:

(C1) Suppose that H_ij^0 ∈ C^{α,α/2}(Ω̄ × [0, T]) is T-periodic in t, and H_ij^0 ≥ 0 in Ω̄ × R for all (i, j) ≠ (m, m).
(C2) H_ij^0 ≢ 0 for i ≠ j, i, j = 1, ..., m − 1; H_mj^0 ≢ 0, j = 1, ..., m − 1; H_im^0 ≡ 0 for i = 1, ..., m − 1.

Note that (C1) essentially reflects the fact that the scattering and fission rates are nonnegative. Moreover, the neutron fluxes contribute to an increase in temperature. The conditions H_im^0 ≡ 0 for i = 1, ..., m − 1 in (C2) mean that the influence of temperature on the neutron fluxes is only through the rates of scattering, fission and absorption. Under these physically motivated assumptions (C1), (C2), etc., we will show by means of Theorems 5.1 and 5.2 below that Theorems 3.1 and 4.3 can be readily applied to obtain asymptotically stable positive bifurcating periodic solutions as in the last two sections (see Remarks 5.2 and 5.3 below). For convenience, we define the following operator with m components:

L_q := diag(∂/∂t − Δ, ..., ∂/∂t − Δ, ∂/∂t − Δ + q),


where q(x, t) ≥ 0 is a T-periodic function in C^{α,α/2}(Ω̄ × R). Let us denote S := L_q^{-1}H^0: F → F, so that for u ∈ F, w = Su is the function which satisfies L_q w = H^0 u on Ω × R and w ≡ 0 on ∂Ω × R. We first prove the existence of a positive eigenfunction for an appropriate linear system related to Eq. (3.1), under the restrictive condition H_mm^0 ≡ 0. This restriction will be removed later in Theorem 5.2.

Theorem 5.1. Suppose H^0 satisfies (C1) and (C2), and H_mm^0 ≡ 0 in Ω̄ × R. Then there exists (λ_0, u^0) ∈ R × E, λ_0 > 0, such that

L_q[u^0] = λ_0 H^0 u^0,      (5.2)

with each component u_i^0 > 0 in Ω × R and ∂u_i^0/∂ν < 0 on ∂Ω × R for i ∈ J. The number λ = λ_0 is the unique positive number such that the problem u = λSu has a nontrivial nonnegative solution u ∈ P. Furthermore, 1/λ_0 is a simple eigenvalue (algebraically and geometrically) of the operator S (i.e. dim ker(L_q^{-1}H^0 − (1/λ_0)I)^k = 1, for all k ∈ N).

Proof. The operator S: F → F is completely continuous and positive with respect to the cone P. Let z = col(z_1, ..., z_m) = L^{-1}(1, ..., 1) and define v = Sz. Using the sign properties of H^0 in (C1), (C2) and the Maximum Principle, we readily see that there exists ε > 0 such that Sz ≥ εz. Theorem 2.5 in [11] asserts that there exist a nontrivial function u^0 = col(u_1^0, ..., u_m^0) ∈ P and μ_0 ≥ ε > 0 such that Su^0 = μ_0 u^0 (i.e. Eq. (5.2) with λ_0 = 1/μ_0). The last component of Eq. (5.2) implies that we cannot have u_i^0 ≡ 0 in Ω × R for all i = 1, ..., m − 1. The Maximum Principle further implies that if u_j^0 ≢ 0, for j ∈ J, then u_j^0(x, t) > 0 for all (x, t) ∈ Ω × R. We can then obtain, from our assumptions and the Maximum Principle, that u_i^0 > 0 in Ω × R and ∂u_i^0/∂ν < 0 on ∂Ω × R for all i ∈ J.

Now we let w = col(w_1, ..., w_m) ≢ 0 be such that L_q[w] = λ_0 H^0 w. From Lemma 2.2, there must exist σ* ∈ R and some k ∈ J such that

u_k^0 ≡ σ*w_k   and   u_j^0 − σ*w_j ≥ 0   in Ω × R for all j ∈ J.      (5.3)

If for some integer r ∈ J

u_r^0(x̄, t̄) − σ*w_r(x̄, t̄) > 0   for some (x̄, t̄) ∈ Ω × R,

then the Maximum Principle and periodicity imply that u_r^0 − σ*w_r > 0 in Ω × R. We then consider the ith equation in Eq. (5.2). For i ≠ r, (C2) implies that u_i^0 − σ*w_i ≢ 0 for each i ≠ r. This contradicts Eq. (5.3). Thus we must have u^0 ≡ σ*w. The uniqueness of λ_0 follows readily from Lemma 2.2.

In order to prove the algebraic simplicity of μ_0, suppose that v^0 = col(v_1^0, ..., v_m^0) ∈ F, v^0 ≢ 0, satisfies (S − μ_0 I)²v^0 = 0 (I is the identity in F, μ_0 = 1/λ_0). Thus, (S − μ_0 I)v^0 = −k*u^0 for a constant k*, which we may assume positive. Let ĉ > 0 be such that

w_i^0 := ĉu_i^0 + v_i^0 > 0   in Ω × R, for all i ∈ J.


Therefore, since

L_q[w_i^0] = λ_0 Σ_{j=1}^m H_ij^0(w_j^0 + λ_0 k*u_j^0) ≥ λ_0 Σ_{j=1}^m H_ij^0 w_j^0   in Ω × R, for all i ∈ J,

by Lemma 2.2 there exist k ∈ J and σ_0 ∈ R such that

w_k^0 = σ_0 u_k^0,   w_j^0 − σ_0 u_j^0 ≥ 0 in Ω × R,   j = 1, ..., m.

The identity

0 = L_q[w_k^0 − σ_0 u_k^0] = λ_0 Σ_{j=1}^m H_kj^0[w_j^0 + (λ_0 k* − σ_0)u_j^0]   in Ω × R

implies that k* = 0. This proves the algebraic simplicity.

Recall that H_mm describes the cooling coefficient for the temperature equation. This leads to the assumption:

(C3) H_mm^0(x, t) ≤ 0 for all (x, t) ∈ Ω̄ × R.

To ensure the existence of positive eigenfunctions, we further assume:

(C4) There exist k ∈ {1, ..., m − 1} and (x̄, t̄) ∈ Ω × [0, T] such that H_kk^0(x̄, t̄) > 0.

Theorem 5.2. Suppose H^0 satisfies all the hypotheses (C1)–(C4). Then there exists (λ̂_0, v^0) ∈ R × E, λ̂_0 > 0, such that

L[v^0] = λ̂_0 H^0 v^0   in Ω × R,   v^0 = 0 on ∂Ω × R,

with each component v_i^0 > 0 in Ω × R, ∂v_i^0/∂ν < 0 on ∂Ω × R for i ∈ J. Furthermore, 1/λ̂_0 is a geometrically simple eigenvalue of the operator L^{-1}H^0: F → F.

We will use Theorem 5.1 to prove this theorem. For convenience, define the m × m matrix function Γ = [Γ_ij] on Ω̄ × R as follows:

Γ_ij(x, t) = H_ij^0(x, t)   for i, j ∈ J, (i, j) ≠ (m, m), (x, t) ∈ Ω̄ × R;   Γ_mm ≡ 0.

For each τ ≥ 0, define the m-component vector operator

L_τ ≡ diag(∂/∂t − Δ, ..., ∂/∂t − Δ, ∂/∂t − Δ − τH_mm^0)

and consider the eigenvalue problem

L_τ[u] = λ(τ)Γu   in Ω × R,   u|_{∂Ω × R} ≡ 0,      (5.4)

with eigenvalue λ(τ). Since Γ satisfies the conditions in Theorem 5.1, for each τ ≥ 0 problem (5.4) has a unique positive eigenvalue λ(τ) with corresponding eigenfunction u^τ = col(u_1^τ, ..., u_m^τ), u_i^τ > 0 in Ω × R, ∂u_i^τ/∂ν < 0 on ∂Ω × R, for all i ∈ J. Theorem 5.2 will follow readily from the next two lemmas.

Lemma 5.1. Under the hypotheses of Theorem 5.2, λ(τ) is a bounded function for τ ∈ [0, ∞).

Proof. Let G be an open bounded set in Ω, with its closure contained in Ω, such that H_kk^0(x, t) > 0 for all (x, t) ∈ G × [r_1, r_2], 0 ≤ r_1 ≤ r_2 ≤ T. Let ψ ≢ 0 be a C^∞ function


with compact support contained in G. Thus

∫_{G×[0,T]} H_kk^0(x, t)ψ²(x) dx dt > 0.

Let u^τ be the solution of Eq. (5.4) and set w^τ = ln(u^τ)_k. Thus, we have in G × [0, T]

∂w^τ/∂t − Δw^τ − Σ_{i=1}^N (∂w^τ/∂x_i)² = (λ(τ)/(u^τ)_k) Σ_{j=1}^{m−1} H_kj^0 u_j^τ ≥ λ(τ)H_kk^0.

Therefore, multiplying by ψ² and integrating over D = G × [0, T], we obtain

∫∫_D [∂w^τ/∂t − Δw^τ − Σ_{i=1}^N (∂w^τ/∂x_i)²] ψ² dx dt ≥ λ(τ) ∫∫_D H_kk^0 ψ² dx dt.

Since ∫_D (∂w^τ/∂t)ψ² dx dt = 0, integrating by parts gives

λ(τ) ∫_D H_kk^0 ψ² dx dt ≤ ∫_D ⟨∇_x w^τ, 2ψ∇_x ψ − ψ²∇_x w^τ⟩ dx dt ≤ ∫_D ⟨∇ψ, ∇ψ⟩ dx dt,

so that 0 < λ(τ) ≤ (∫_D H_kk^0 ψ² dx dt)^{-1} ∫_D ⟨∇ψ, ∇ψ⟩ dx dt.
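As a quick arithmetic illustration of the bound just obtained (with toy data of our own choosing: Ω = (0, 1), T = 1, a constant H_kk^0 = 0.5 and the test function ψ(x) = sin πx), the quotient evaluates to 2π² ≈ 19.7, so λ(τ) ≤ 19.7 for every τ ≥ 0 in that example:

```python
# Toy evaluation (ours, not from the paper) of the Lemma 5.1 bound
#   lambda(tau) <= (∫_D H_kk^0 psi^2)^(-1) ∫_D |grad psi|^2,
# with Omega = (0,1), T = 1, H_kk^0 = 0.5 and psi(x) = sin(pi x).
import numpy as np

xs = np.linspace(0.0, 1.0, 2001)
dx = xs[1] - xs[0]
psi = np.sin(np.pi * xs)
dpsi = np.pi * np.cos(np.pi * xs)
Hkk0 = 0.5
num = np.sum(dpsi**2) * dx            # ∫ |grad psi|^2 dx  (the time factor T cancels)
den = np.sum(Hkk0 * psi**2) * dx      # ∫ H_kk^0 psi^2 dx
print("lambda(tau) <=", num / den)    # ≈ 2*pi^2 ≈ 19.74, for every tau >= 0
```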

Lemma 5.2. Under the hypotheses of Theorem 5.2, λ(τ) is a continuous function for τ ∈ [0, ∞).

Proof. Let τ* ≥ 0 and let τ_i be a sequence converging to τ*; we may assume, without loss of generality, that λ(τ_i) → d for some d ≥ 0. From Theorem 5.1, for each τ_i there exists a T-periodic eigenfunction u^i ≥ 0, normalized so that ‖u^i‖_∞ = 1, satisfying

L_{τ_i} u^i = λ(τ_i)Γu^i   in Ω × R,   u^i|_{∂Ω × R} ≡ 0.

Schauder theory implies that {u^i} is bounded in the C^{1+α,(1+α)/2}(Ω̄ × [0, T]) norm. Using an imbedding theorem we obtain, passing to a subsequence if necessary, that u^i → v uniformly on Ω̄ × [0, T] for some v ≢ 0, v ≥ 0 in Ω × R. In the limit, we obtain by Schauder theory again that

L_{τ*} v = dΓv   in Ω × R,   v|_{∂Ω × R} ≡ 0.

From the Maximum Principle, we must have v > 0 in Ω × R. On the other hand, for τ = τ*, Theorem 5.1 implies that there exist v* ∈ E, v* > 0 in Ω × R, and a number λ(τ*) satisfying

L_{τ*} v* = λ(τ*)Γv*   in Ω × R,   v*|_{∂Ω × R} ≡ 0.


Therefore, from the comparison Lemma 2.2, we obtain d = λ(τ*), and the proof is complete.

Proof of Theorem 5.2. Since λ(0) > 0 and, by Lemmas 5.1 and 5.2, λ(·) is bounded and continuous on [0, ∞), the map τ ↦ λ(τ) − τ changes sign; hence we conclude that there exist λ̂_0 > 0 and v^0 > 0 in Ω × R such that λ(λ̂_0) = λ̂_0, (λ̂_0, v^0) ∈ R^+ × E,

L_{λ̂_0} v^0 = λ(λ̂_0)Γv^0   in Ω × R,   v^0|_{∂Ω × R} ≡ 0,      (5.5)

and ∂v_i^0/∂ν < 0 on ∂Ω × R, i ∈ J. This proves the existence statement. The geometric simplicity follows from the comparison Lemma 2.2. This completes the proof of Theorem 5.2.

Lemma 5.3. Suppose H^0 satisfies (C1)–(C4). Then there exists v̂^0 ∈ E such that

L*v̂^0 = λ̂_0 (H^0)^T v̂^0   in Ω × R,      (5.6)

with v̂_i^0 > 0 for i = 1, ..., m − 1 and v̂_m^0 ≡ 0 in Ω × R. Moreover, v̂^0 is unique up to a multiple. (Here λ̂_0 is defined in Theorem 5.2.)

Proof. Let k̄ be a positive constant such that k̄ + H_mm^0 > 0 in Ω̄ × R, and define D*(x, t) := (H^0(x, −t))^T + k̄I. As in the proof of Theorem 5.1, we can show that there exists (λ̃_0, u*) such that

L[u*] + λ̂_0 k̄u* = λ̃_0 D*u*   in Ω × R,   u*|_{∂Ω × R} ≡ 0,   u*(x, t + T) = u*(x, t),

with u* ≥ 0, u* ≢ 0 in Ω × R, λ̃_0 > 0. Note that v̂^0(x, t) := u*(x, −t) satisfies

L*[v̂^0] + λ̂_0 k̄v̂^0 = λ̃_0[(H^0)^T + k̄I]v̂^0   in Ω × R,   v̂^0|_{∂Ω × R} ≡ 0,   v̂^0(x, t + T) = v̂^0(x, t).

Recall that, from Eq. (5.5), v^0 satisfies

L[v^0] + λ̂_0 k̄v^0 = λ̂_0[H^0 + k̄I]v^0   in Ω × R,   v^0|_{∂Ω × R} ≡ 0,   v^0(x, t + T) = v^0(x, t).      (5.7)

Multiplying both sides of Eq. (5.7) by v̂^0 and integrating over Ω × [0, T], we obtain

λ̂_0⟨⟨[H^0 + k̄I]v^0, v̂^0⟩⟩ = ⟨⟨L[v^0] + λ̂_0 k̄v^0, v̂^0⟩⟩ = ⟨⟨v^0, L*v̂^0 + λ̂_0 k̄v̂^0⟩⟩ = λ̃_0⟨⟨v^0, [(H^0)^T + k̄I]v̂^0⟩⟩ = λ̃_0⟨⟨[H^0 + k̄I]v^0, v̂^0⟩⟩.

Therefore λ̃_0 = λ̂_0 and v̂^0 satisfies Eq. (5.6). The mth equation in Eq. (5.6) clearly implies that v̂_m^0 ≡ 0 in Ω × R, because the last row of (H^0)^T is identically zero except for the diagonal entry. The first (m − 1) equations then imply that v̂_i^0 > 0 in Ω × R for i = 1, ..., m − 1.


Applying Lemma 2.2 to the first m − 1 equations of Eq. (5.6) (i.e. with m replaced by m − 1), we obtain the uniqueness of v̂^0 up to a multiple.

Corollary 5.1. Suppose H^0 satisfies (C1)–(C4). Then 1/λ̂_0 is the unique eigenvalue of L^{-1}H^0 with positive eigenfunction, and 1/λ̂_0 is a simple eigenvalue (algebraically and geometrically).

Proof. Let 1/λ̄ be another eigenvalue of L^{-1}H^0, λ̄ > λ̂_0, with corresponding positive eigenfunction v̄. Hence

Lv̄ = λ̄H^0 v̄   in Ω × R.      (5.8)

Multiplying both sides of Eq. (5.8) by v̂^0 and integrating over Ω × [0, T], we obtain

⟨⟨Lv̄, v̂^0⟩⟩ = λ̄⟨⟨H^0 v̄, v̂^0⟩⟩.

On the other hand,

⟨⟨Lv̄, v̂^0⟩⟩ = ⟨⟨v̄, L*v̂^0⟩⟩ = λ̂_0⟨⟨v̄, (H^0)^T v̂^0⟩⟩ = λ̂_0⟨⟨H^0 v̄, v̂^0⟩⟩,

so that λ̄ = λ̂_0. The simplicity assertion follows from Theorem 5.2, Lemma 5.3 and the proof of Lemma 3.1.

Remark 5.1. From Theorem 5.2, Corollary 5.1 and Lemma 5.3, we see that H^0 will satisfy (P1) (a) and (b) under assumptions (C1)–(C4). Condition (C1) clearly implies (P2).

Remark 5.2. If H^0 satisfies (C1)–(C4), then (P1) holds and we can apply Theorem 3.1 for the bifurcation of positive periodic solutions of Eq. (1.1) corresponding to Eq. (5.1).

Remark 5.3. If H^0 satisfies (C1)–(C4) and the condition (P3) concerning (∂H_ij/∂ρ)(x, t, 0) holds, then Theorem 4.3 is applicable for the stability of the bifurcating solution. Physically, this means that the control rods have to be adjusted according to temperature feedback so that (P3) holds for the reaction rates.

In view of the general hypotheses of Theorems 3.1–4.3, we see that they should be readily applicable to many physical, chemical or biological problems whenever reaction rates are temperature dependent. Analogous theory can be readily developed for steady-state solutions of general autonomous parabolic systems.
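To close the section, here is a small numerical caricature of the feedback mechanism, entirely ours (we take m = 2 for brevity, whereas the paper works with m > 2, and all coefficients, grids and the value of λ are invented): one neutron group u_1 and a temperature u_2, with reaction rates obeying the sign conditions (C1)–(C4) and (P3), and λ taken somewhat above the critical value, so that the solution of (1.2) settles onto a positive T-periodic pattern as Theorems 3.1 and 4.3 predict.

```python
# Illustrative sketch (not from the paper): a two-component caricature of the
# reactor model (5.1)/(1.2) -- neutron flux u1 and temperature u2 --
#   du1/dt = Lap u1 + lam*(a(t) - u2)*u1     (fission rate drops with temperature)
#   du2/dt = Lap u2 + lam*(c*u1 - d*u2)      (heating by neutrons, linear cooling)
import numpy as np

n, T, steps, periods = 60, 1.0, 200, 300
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
lap = (np.diag(-2*np.ones(n)) + np.diag(np.ones(n-1), 1)
       + np.diag(np.ones(n-1), -1)) / h**2
lam, c, d = 12.0, 1.0, 1.0
a = lambda t: 1.0 + 0.3*np.sin(2*np.pi*t/T)     # periodically adjusted control rods
dt = T / steps
I = np.eye(n)
Mdiff = np.linalg.inv(I - dt*lap)               # implicit diffusion step, reused

u1 = 0.1*np.sin(np.pi*x)
u2 = np.zeros(n)
amp = []
for p in range(periods):
    for k in range(steps):
        t = (k + 1)*dt
        u1 = Mdiff @ (u1 + dt*lam*(a(t) - u2)*u1)
        u2 = Mdiff @ (u2 + dt*lam*(c*u1 - d*u2))
    amp.append(u1.max())
# If lam exceeds the critical value, max_x u1 settles to a positive T-periodic
# pattern; the last few period-maxima show the convergence.
print([round(v, 4) for v in amp[-5:]])
```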


Appendix

We now include the proof of Lemma 2.1, for completeness. See other references in Section 2.

Proof of Lemma 2.1. Observe that if for some b ∈ R there exists an evolution system V(t, s) for the problem on X,

du/dt − (A + B̂(t) − bI)u = 0,   t > s,   u(s) = u_0,   u_0 ∈ X,

then there exists an evolution system U(t, s) for Eq. (2.2). Moreover, they are related by the expression U(t, s) = e^{b(t−s)}V(t, s). For this reason it suffices to show that every λ ∈ C with Re λ ≥ 0 belongs to the resolvent set of the operator M_1 := A + B̂ − b*I, that

‖R(λ; A + B̂(t) − b*I)‖_{L(X)} ≤ C/(1 + |λ|)   for all λ ∈ C, Re λ ≥ 0,

for some b*, C > 0, and that the operator M_1 satisfies

‖(M_1(t) − M_1(s))(M_1(τ))^{-1}‖_{L(X)} ≤ K|t − s|^θ   for some K ∈ R and t, s, τ ∈ [0, ∞)

[15, Theorem 5.6.1]. By well-known results [15, Theorem 7.3.6], there exists a positive constant M such that

‖R(λ; A)‖_{L(X)} ≤ M/|λ|   for all λ ∈ C, Re λ > 0.      (A.1)

Let c̄ := sup{‖B̂(t)‖_{L(X)}: t ∈ [0, ∞)} and let λ be a complex number such that Re λ > Mc̄. Since ‖R(λ; A)B̂(t)‖_{L(X)} ≤ (M/|λ|)‖B̂(t)‖_{L(X)} < 1, the operator I − R(λ; A)B̂(t) is invertible and the inverse can be written as

(I − R(λ; A)B̂(t))^{-1} = Σ_{k=0}^∞ (R(λ; A)B̂(t))^k.      (A.2)

Let R̂(t) := (I − R(λ; A)B̂(t))^{-1}R(λ; A): X → D(A). It can be proved easily that R̂(t) = R(λ; A + B̂(t)). Therefore, it follows from Eqs. (A.1) and (A.2) that

‖R(λ; A + B̂(t))‖_{L(X)} ≤ M/(|λ| − Mc̄).

If b* > 2Mc̄ + 1 and C = 4√2 M, by simple calculations we obtain

‖R(λ; A + B̂(t) − b*I)‖_{L(X)} ≤ M/(|λ + b*| − Mc̄) ≤ C/(|λ| + 1)   for all λ ∈ C, Re λ ≥ 0.

Moreover, by Eq. (2.1),

‖(M_1(t) − M_1(s))(M_1(τ))^{-1}‖_{L(X)} ≤ C‖B̂(t) − B̂(s)‖_{L(X)} ≤ CC_1|t − s|^θ.

The proof of the lemma is complete.


References

[1] H. Amann, Periodic solutions of semilinear parabolic equations, in: Nonlinear Analysis (a volume in honor of E.H. Rothe), 1978, pp. 1–29.
[2] H. Amann, Dynamic theory of quasilinear parabolic equations – II. Reaction–diffusion systems, Differential Integral Equations 3 (1) (1990) 13–75.
[3] M. Crandall, P. Rabinowitz, Bifurcation, perturbation of simple eigenvalues and linearized stability, Arch. Rat. Mech. Anal. 52 (1973) 161–181.
[4] K. Deimling, Nonlinear Functional Analysis, Springer, New York, 1985.
[5] J.J. Duderstadt, L.J. Hamilton, Nuclear Reactor Analysis, Wiley, New York, 1976.
[6] P.C. Fife, Solutions of parabolic boundary problems existing for all time, Arch. Rat. Mech. Anal. 16 (1964) 155–186.
[7] F. He, A. Leung, S. Stojanovic, Periodic optimal control for parabolic Volterra–Lotka type equations, Math. Meth. Appl. Sci. 18 (1995) 127–146.
[8] D. Henry, Geometric Theory of Semilinear Parabolic Equations, Lecture Notes in Mathematics, vol. 840, Springer, Berlin, 1981.
[9] P. Hess, Periodic-Parabolic Boundary Value Problems and Positivity, Longman Scientific & Technical, Harlow, 1991.
[10] W.E. Kastenberg, A stability criterion for space-dependent nuclear-reactor systems with variable temperature feedback, Nucl. Sci. Engng. 37 (1969) 19–29.
[11] M.A. Krasnosel'skii, Positive Solutions of Operator Equations, P. Noordhoff Ltd., Groningen, 1964.
[12] A. Leung, Systems of Nonlinear Partial Differential Equations, Applications to Biology and Engineering, Kluwer Academic Publishers, Boston, 1989.
[13] A. Leung, G.S. Chen, Elliptic and parabolic systems for neutron fission and diffusion, J. Math. Anal. Appl. 120 (1986) 655–669.
[14] A. Leung, L.A. Ortega, Bifurcating solutions and stabilities for multigroup neutron fission systems with temperature feedback, J. Math. Anal. Appl. 194 (1995) 489–510.
[15] A. Pazy, Semigroups of Linear Operators and Applications to Partial Differential Equations, Springer, New York, 1983.
[16] D.B. Spalding, The theory of flame phenomena with a chain reaction, Phil. Trans. Roy. Soc. London A 249 (1956) 1–25.