Existence of positive solutions for fourth-order boundary value problem with variable parameters


Nonlinear Analysis 66 (2007) 870–880 www.elsevier.com/locate/na

Guoqing Chai, Department of Mathematics, Hubei Normal University, Huangshi, Hubei 435002, People's Republic of China. Received 8 October 2005; accepted 15 December 2005.

Abstract

In this paper, by using a fixed point theorem and the operator spectral theorem, the author establishes a theorem on the existence of positive solutions for the fourth-order boundary value problem with variable parameters

    u^{(4)} + B(t)u'' − A(t)u = f(t,u),  0 < t < 1,
    u(0) = u(1) = u''(0) = u''(1) = 0,

where A(t), B(t) ∈ C[0,1] and f(t,u): [0,1] × [0,∞) → [0,∞) is continuous.

© 2005 Elsevier Ltd. All rights reserved.

MR subject classification: 34B15

Keywords: Positive solutions; Fixed point theorem; Operator spectral theorem

1. Introduction

The deformations of an elastic beam in an equilibrium state, whose two ends are simply supported, can be described by the fourth-order ordinary differential equation boundary value problem

    u^{(4)} = g(t, u, u''),  0 < t < 1,
    u(0) = u(1) = u''(0) = u''(1) = 0,

✩ Supported in part by the NSF of the People's Republic of China.

E-mail address: [email protected].
0362-546X/$ - see front matter © 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.na.2005.12.028


where g: [0,1] × [0,+∞) × (−∞,0] → [0,+∞) is continuous [1,2]. Owing to its importance in physics, the existence of solutions to this problem has been studied by many authors; see, for example, [3–9]. In practice, however, only positive solutions are significant. In a special case, Ma and Wang [6] studied the existence of positive solutions of the boundary value problem

    u^{(4)} = f(t,u),  0 < t < 1,
    u(0) = u(1) = u''(0) = u''(1) = 0.

Recently, Li Yongxiang [9] investigated the generalized form

    u^{(4)} + βu'' − αu = f(t,u),  0 < t < 1,                                  (1)
    u(0) = u(1) = u''(0) = u''(1) = 0,

under the following conditions:

(C1) f(t,u): [0,1] × [0,∞) → [0,∞) is continuous;
(C2) α, β ∈ R, β < 2π², α ≥ −β²/4, α/π⁴ + β/π² < 1.

Li generalized the results of Ma and Wang. In this paper, we consider the more general boundary value problem

    u^{(4)} + B(t)u'' − A(t)u = f(t,u),  0 < t < 1,                            (2)
    u(0) = u(1) = u''(0) = u''(1) = 0,

where A(t), B(t) ∈ C[0,1]. Obviously, the boundary value problem (denoted for short by BVP) (1) can be regarded as the special case of BVP (2) with B(t) = β, A(t) = α. Since the parameters A(t), B(t) are variable, we cannot transform BVP (2) directly into an integral equation as in [9]. We will apply the cone fixed point theory, combined with the operator spectral theorem, to establish the existence of positive solutions of BVP (2). Our results generalize those in [9].

Let Y = C[0,1] and Y_+ = {u ∈ Y : u(t) ≥ 0, t ∈ [0,1]}. It is well known that Y is a Banach space equipped with the norm ‖u‖_0 = sup_{t∈[0,1]} |u(t)|, u ∈ Y. Set X = {u ∈ C²[0,1] : u(0) = u(1) = 0}. For given λ ≥ 0, define the norm ‖·‖_λ by ‖u‖_λ = sup_{t∈[0,1]} {|u''(t)| + λ|u(t)|}, u ∈ X. We also equip X with the norm ‖u‖_1 = max{‖u‖_0, ‖u''‖_0}. In the next section we will show that (X, ‖·‖_λ) and (X, ‖·‖_1) are both Banach spaces.

If u ∈ C²[0,1] ∩ C⁴(0,1) fulfils BVP (2), then we call u a solution of BVP (2). If u is a solution of BVP (2) and u(t) > 0 for t ∈ (0,1), then we say u is a positive solution of BVP (2).

We list the following conditions for convenience:

(H1) f(t,x): [0,1] × [0,∞) → [0,∞) is continuous;
(H2) A(t), B(t) ∈ C[0,1], α = inf_{t∈[0,1]} A(t), β = inf_{t∈[0,1]} B(t), β < 2π², α ≥ 0, α/π⁴ + β/π² < 1.

2. Preliminaries

Lemma 1. For all u ∈ X, ‖u‖_0 ≤ ‖u''‖_0.

Proof. (i) By u(0) = u(1), there is a τ ∈ (0,1) such that u'(τ) = 0, and so −u'(t) = ∫_t^τ u''(s)ds for t ∈ [0,τ]. Hence |u'(t)| ≤ ∫_t^τ |u''(s)|ds ≤ ∫_0^1 |u''(s)|ds ≤ ‖u''‖_0. Similarly, |u'(t)| ≤ ‖u''‖_0 for all t ∈ [τ,1].

(ii) From u(0) = 0 we have u(t) = ∫_0^t u'(s)ds, t ∈ [0,1], and so |u(t)| ≤ ∫_0^1 |u'(s)|ds ≤ ‖u'‖_0. Thus ‖u‖_0 ≤ ‖u'‖_0 ≤ ‖u''‖_0. □
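As a quick numerical illustration of Lemma 1 (a Python sketch; the function u(t) = t(1−t) is an arbitrary sample element of X chosen here, not taken from the paper):

```python
# Sample u in X = {u in C^2[0,1] : u(0) = u(1) = 0}; here u(t) = t(1-t).
u = lambda t: t * (1 - t)
upp = lambda t: -2.0                       # u''(t) computed by hand

ts = [j / 1000 for j in range(1001)]
sup_u = max(abs(u(t)) for t in ts)         # ||u||_0  = 1/4
sup_upp = max(abs(upp(t)) for t in ts)     # ||u''||_0 = 2
assert sup_u <= sup_upp                    # the inequality of Lemma 1
```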


Lemma 2. (1+λ)^{-1}‖·‖_λ ≤ ‖·‖_1 ≤ ‖·‖_λ, and X is complete with respect to the norm ‖·‖_λ, where the constant λ ≥ 0.

Proof. It is easy to see that ‖·‖_λ and ‖·‖_1 are both norms on X by Lemma 1, so we only need to show completeness. Let {u_n} be a Cauchy sequence in X, i.e. ‖u_n − u_m‖_0 → 0 and ‖u_n'' − u_m''‖_0 → 0 as n, m → ∞. So there exist u, v ∈ Y with ‖u_n − u‖_0 → 0 and ‖u_n'' − v‖_0 → 0 as n → ∞. Owing to u_n ∈ X, we have u_n(0) = u_n(1) = 0, and so the following equality holds:

    u_n(t) = −∫_0^1 G(t,s) u_n''(s) ds,                                        (3)

where

    G(t,s) = { t(1−s), 0 ≤ t ≤ s ≤ 1,
             { s(1−t), 0 ≤ s ≤ t ≤ 1.

Taking the limit in (3), we have u(t) = −∫_0^1 G(t,s)v(s)ds, and so u'' = v. Thus u ∈ X and ‖u_n − u‖_1 → 0 (n → ∞), and so (X, ‖·‖_1) is complete.

Now we show that the norm ‖·‖_λ is equivalent to the norm ‖·‖_1. In fact, for all u ∈ X and t ∈ [0,1], |u''(t)| + λ|u(t)| ≤ ‖u''‖_0 + λ‖u‖_0 ≤ (1+λ)‖u‖_1. Thus ‖u‖_λ ≤ (1+λ)‖u‖_1. On the other hand, for all u ∈ X and t ∈ [0,1], |u''(t)| ≤ |u''(t)| + λ|u(t)| ≤ ‖u‖_λ, and so ‖u''‖_0 ≤ ‖u‖_λ. By Lemma 1, we have ‖u‖_1 ≤ ‖u‖_λ. Thus ‖·‖_1 is equivalent to ‖·‖_λ. It now follows from the completeness of (X, ‖·‖_1) that (X, ‖·‖_λ) is complete. This completes the proof. □

For any h ∈ Y, consider the following linear boundary value problem:

    u^{(4)} + βu'' − αu = h(t),  0 < t < 1,                                    (4)
    u(0) = u(1) = u''(0) = u''(1) = 0,

where α, β satisfy assumption (H2). Let P(λ) = λ² + βλ − α. It is easy to see that the equation P(λ) = 0 has two real roots

    λ_1, λ_2 = (−β ± √(β² + 4α)) / 2,

with λ_1 ≥ 0 ≥ λ_2 > −π², satisfying the following decomposition form:

    u^{(4)} + βu'' − αu = (−d²/dt² + λ_1)(−d²/dt² + λ_2)u
                        = (−d²/dt² + λ_2)(−d²/dt² + λ_1)u.                     (5)
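The root computation can be checked numerically. The following Python sketch uses the hypothetical sample values α = β = 1 (chosen here only for illustration; they satisfy (H2)) and verifies the properties of λ_1, λ_2 claimed above:

```python
import math

alpha, beta = 1.0, 1.0   # sample values; check that they satisfy (H2)
assert beta < 2 * math.pi**2 and alpha >= 0
assert alpha / math.pi**4 + beta / math.pi**2 < 1

# Roots of P(lambda) = lambda^2 + beta*lambda - alpha
disc = math.sqrt(beta**2 + 4 * alpha)
lam1 = (-beta + disc) / 2
lam2 = (-beta - disc) / 2

assert lam1 >= 0 >= lam2 > -math.pi**2        # ordering claimed in the text
assert abs(lam1 + lam2 + beta) < 1e-12        # Vieta: sum of roots = -beta
assert abs(lam1 * lam2 + alpha) < 1e-12       # Vieta: product of roots = -alpha
```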

Suppose that G_i(t,s) (i = 1, 2) is the Green's function associated with

    −u'' + λ_i u = 0,  u(0) = u(1) = 0.

We have the following several lemmas, which will be used in the sequel.

Lemma 3 ([9]). Let ω_i = √|λ_i|. Then G_i(t,s) (i = 1, 2) can be expressed as follows:

(i) when λ_i > 0,

    G_i(t,s) = { sinh(ω_i t) sinh(ω_i(1−s)) / (ω_i sinh ω_i), 0 ≤ t ≤ s ≤ 1,
               { sinh(ω_i s) sinh(ω_i(1−t)) / (ω_i sinh ω_i), 0 ≤ s ≤ t ≤ 1;

(ii) when λ_i = 0,

    G_i(t,s) = { t(1−s), 0 ≤ t ≤ s ≤ 1,
               { s(1−t), 0 ≤ s ≤ t ≤ 1;

(iii) when −π² < λ_i < 0,

    G_i(t,s) = { sin(ω_i t) sin(ω_i(1−s)) / (ω_i sin ω_i), 0 ≤ t ≤ s ≤ 1,
               { sin(ω_i s) sin(ω_i(1−t)) / (ω_i sin ω_i), 0 ≤ s ≤ t ≤ 1.      (6)
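The three branches of (6) can be implemented and sanity-checked numerically. This is an illustrative Python sketch (not from the paper); it checks the boundary conditions, symmetry, positivity, and, by a central difference, that −∂²G/∂t² + λG = 0 away from the diagonal s = t:

```python
import math

def green(lam, t, s):
    """Green's function of -u'' + lam*u = 0, u(0)=u(1)=0 (Lemma 3), lam > -pi^2."""
    if t > s:
        t, s = s, t                        # G is symmetric in (t, s)
    w = math.sqrt(abs(lam))
    if lam > 0:
        return math.sinh(w * t) * math.sinh(w * (1 - s)) / (w * math.sinh(w))
    if lam == 0:
        return t * (1 - s)
    return math.sin(w * t) * math.sin(w * (1 - s)) / (w * math.sin(w))

h = 1e-3
for lam in (2.0, 0.0, -4.0):               # one sample value from each branch
    assert green(lam, 0.0, 0.5) == 0.0 and green(lam, 1.0, 0.5) == 0.0  # boundary
    assert green(lam, 0.3, 0.6) > 0.0                                   # positivity
    assert abs(green(lam, 0.3, 0.6) - green(lam, 0.6, 0.3)) < 1e-15     # symmetry
    # -G_tt + lam*G = 0 for t != s, checked at t = 0.3 with s = 0.7 fixed
    g = lambda t: green(lam, t, 0.7)
    second = (g(0.3 + h) - 2 * g(0.3) + g(0.3 - h)) / h**2
    assert abs(-second + lam * g(0.3)) < 1e-5
```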


Lemma 4 ([9]). G_i(t,s) (i = 1, 2) has the following properties:

(i) G_i(t,s) > 0, for all t, s ∈ (0,1);
(ii) G_i(t,s) ≤ C_i G_i(s,s), for all t, s ∈ [0,1];
(iii) G_i(t,s) ≥ δ_i G_i(t,t) G_i(s,s), for all t, s ∈ [0,1];

where C_i = 1, δ_i = ω_i / sinh ω_i if λ_i > 0; C_i = 1, δ_i = 1 if λ_i = 0; and C_i = 1/sin ω_i, δ_i = ω_i sin ω_i if −π² < λ_i < 0.

In the sequel, let g_i(t) = G_i(t,t), k_i(t) = ∫_0^1 G_i(t,s)ds, t ∈ [0,1], b_i = min_{1/4 ≤ t,s ≤ 3/4} G_i(t,s), and D_i = max_{t∈[0,1]} k_i(t), i = 1, 2. Obviously, b_i > 0, i = 1, 2.

Lemma 5. D_i = k_i(1/2) > 0 (i = 1, 2), and

(i) when λ_i > 0, D_i = (1/λ_i)(1 − 1/cosh(ω_i/2));
(ii) when λ_i = 0, D_i = 1/8;
(iii) when −π² < λ_i < 0, D_i = (1/λ_i)(1 − 1/cos(ω_i/2)).

Proof. By careful calculation, we have

    k_i(t) = { (1/λ_i)[1 − cosh((ω_i/2)(1−2t)) / cosh(ω_i/2)],  if λ_i > 0,
             { (1/2) t(1−t),                                    if λ_i = 0,
             { (1/λ_i)[1 − cos((ω_i/2)(1−2t)) / cos(ω_i/2)],    if −π² < λ_i < 0.

Lemma 5 now follows from the above formula by further calculation. □
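The closed form for k_i(t) in the case λ_i > 0, and the value D_i = k_i(1/2), can be verified against direct numerical quadrature of the Green's function. A Python sketch (λ = 3 is an arbitrary sample value):

```python
import math

lam = 3.0                                  # sample lambda > 0
w = math.sqrt(lam)

def G(t, s):
    # lambda > 0 branch of the Green's function from Lemma 3
    if t > s:
        t, s = s, t
    return math.sinh(w * t) * math.sinh(w * (1 - s)) / (w * math.sinh(w))

def k_closed(t):
    # closed form for k(t) = int_0^1 G(t,s) ds claimed in the proof of Lemma 5
    return (1.0 / lam) * (1 - math.cosh(0.5 * w * (1 - 2 * t)) / math.cosh(0.5 * w))

N = 4000
for t in (0.2, 0.5, 0.8):
    quad = sum(G(t, (j + 0.5) / N) for j in range(N)) / N   # midpoint rule
    assert abs(quad - k_closed(t)) < 1e-5

D = k_closed(0.5)                          # D = k(1/2) = (1/lam)(1 - 1/cosh(w/2))
assert abs(D - (1 / lam) * (1 - 1 / math.cosh(w / 2))) < 1e-15
assert all(k_closed(j / 100) <= D + 1e-12 for j in range(101))  # max at t = 1/2
```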

For any h ∈ Y, the linear BVP (4) has a unique solution u, which we denote by Th = u. From (4)–(6), the operator T can be expressed by

    (Th)(t) = ∫_0^1 ∫_0^1 G_1(t,s) G_2(s,τ) h(τ) dτ ds
            = ∫_0^1 ∫_0^1 G_2(t,s) G_1(s,τ) h(τ) dτ ds,   t ∈ [0,1].           (7)

Lemma 6. T: Y → (X, ‖·‖_{λ_1}) is linear completely continuous, and ‖T‖ ≤ D_2.

Proof. It is easy to see that the operator T maps Y into X. For any h ∈ Y, u = Th ∈ X satisfies u(0) = u(1) = u''(0) = u''(1) = 0. Let v = −u'' + λ_2 u; then v(0) = v(1) = 0. From (5) and (6), we have

    −v'' + λ_1 v = h(t),  0 < t < 1,
    v(0) = v(1) = 0.

Hence v(t) = ∫_0^1 G_1(t,s)h(s)ds, t ∈ [0,1], and so

    −u'' + λ_2 u = ∫_0^1 G_1(t,s) h(s) ds,   t ∈ [0,1].                        (8)


Similarly, we also have

    −u'' + λ_1 u = ∫_0^1 G_2(t,s) h(s) ds,   t ∈ [0,1].                        (9)

We divide the proof into the following steps.

(1) First, we show that T: Y → (X, ‖·‖_1) is continuous. By (8), (7) and Lemma 5, for all h ∈ Y we have

    |u''(t)| ≤ |λ_2||u(t)| + ∫_0^1 G_1(t,s)|h(s)|ds
             ≤ (|λ_2| ∫_0^1 ∫_0^1 G_1(t,s)G_2(s,τ) dτ ds + ∫_0^1 G_1(t,s)ds) ‖h‖_0
             ≤ (|λ_2| D_1 D_2 + D_1) ‖h‖_0,   t ∈ [0,1].

Hence it follows that

    ‖u''‖_0 ≤ (|λ_2| D_1 D_2 + D_1) ‖h‖_0.                                     (10)

By Lemma 1, we have

    ‖Th‖_1 = ‖u‖_1 ≤ (|λ_2| D_1 D_2 + D_1) ‖h‖_0.                              (11)

Consequently, T is continuous.

(2) Second, we show that T is compact with respect to the norm ‖·‖_1 on X. Suppose that {h_n}_{n=1}^∞ is an arbitrary bounded sequence in Y; then there exists K_0 > 0 such that ‖h_n‖_0 ≤ K_0, n ≥ 1. Let u_n = Th_n, n = 1, 2, .... For any t_1, t_2 ∈ [0,1] with t_1 < t_2, it follows from (8) that

    u_n''(t_2) − u_n''(t_1) = λ_2(u_n(t_2) − u_n(t_1)) − ∫_0^1 (G_1(t_2,s) − G_1(t_1,s)) h_n(s) ds.

By (7), we have

    |u_n''(t_2) − u_n''(t_1)| ≤ (|λ_2| ∫_0^1 ∫_0^1 |G_1(t_2,s) − G_1(t_1,s)| G_2(s,τ) dτ ds
                                 + ∫_0^1 |G_1(t_2,s) − G_1(t_1,s)| ds) K_0.    (12)

By (12), together with the uniform continuity of G_1(t,s) on [0,1] × [0,1], we can prove that {u_n''(t)}_{n=1}^∞ is equi-continuous on [0,1]. On the other hand, from (11) together with ‖h_n‖_0 ≤ K_0, it follows that {u_n''(t)}_{n=1}^∞ is uniformly bounded. Now, in terms of the Ascoli–Arzelà theorem, the sequence {u_n''}_{n=1}^∞ is relatively compact in Y, and so there exists a subsequence of {u_n''}_{n=1}^∞, denoted by {u_{n_k}''}_{k=1}^∞, such that ‖u_{n_k}'' − v‖_0 → 0, where v ∈ Y. As in the proof of Lemma 2, owing to u_{n_k} ∈ X and u_{n_k}(0) = u_{n_k}(1) = 0, we obtain

    u_{n_k}(t) = −∫_0^1 G(t,s) u_{n_k}''(s) ds.                                (13)

Let u(t) = −∫_0^1 G(t,s) v(s) ds; then u ∈ X and u'' = v. By (13), we have

    |u_{n_k}(t) − u(t)| ≤ ∫_0^1 G(t,s) |u_{n_k}''(s) − v(s)| ds ≤ ‖u_{n_k}'' − v‖_0.

Thus ‖u_{n_k} − u‖_0 → 0 (k → ∞), and so ‖u_{n_k} − u‖_1 → 0.


From steps (1) and (2) proved above, we obtain that T: Y → (X, ‖·‖_1) is completely continuous.

(3) The norms ‖·‖_1 and ‖·‖_{λ_1} on X are equivalent by Lemma 2, so T: Y → (X, ‖·‖_{λ_1}) is also completely continuous.

(4) Lastly, we show that ‖Th‖_{λ_1} ≤ D_2 ‖h‖_0 for all h ∈ Y.

Step (i): For any h ∈ Y_+, let u = Th; by Lemma 4, u ∈ X ∩ Y_+. Equality (8) with the assumption λ_2 ≤ 0 implies that u'' ≤ 0. Thus, by equality (9) with λ_1 ≥ 0, we immediately have

    |u''(t)| + λ_1|u(t)| = −u''(t) + λ_1 u(t) = ∫_0^1 G_2(t,s) h(s) ds,   t ∈ [0,1].

Step (ii): For any h ∈ Y, let h = h_1 − h_2, u_1 = Th_1, u_2 = Th_2, where h_1, h_2 are the positive part and the negative part of h, respectively. Let u = Th; then u = u_1 − u_2. From step (i) proved above, we have u_i ≥ 0, u_i'' ≤ 0, i = 1, 2, and the following equality holds:

    |u_i''(t)| + λ_1|u_i(t)| = ∫_0^1 G_2(t,s) h_i(s) ds =: (H h_i)(t),   t ∈ [0,1], i = 1, 2.   (14)

So, by (14), we have

    |u''| + λ_1|u| = |u_1'' − u_2''| + λ_1|u_1 − u_2| ≤ (|u_1''| + λ_1|u_1|) + (|u_2''| + λ_1|u_2|)
                   = H h_1 + H h_2 = H|h| ≤ D_2 ‖h‖_0.

Thus ‖Th‖_{λ_1} ≤ D_2 ‖h‖_0, and so ‖T‖ ≤ D_2. This completes the proof. □

We introduce the following notation:

    f_0 = liminf_{u→0+} min_{t∈[0,1]} f(t,u)/u,     f_∞ = liminf_{u→+∞} min_{t∈[0,1]} f(t,u)/u,
    f^0 = limsup_{u→0+} max_{t∈[0,1]} f(t,u)/u,     f^∞ = limsup_{u→+∞} max_{t∈[0,1]} f(t,u)/u,

    Γ = π⁴ − βπ² − α,    K = sup_{t∈[0,1]} [A(t) + B(t) − (α + β)].

The hypothesis α/π⁴ + β/π² < 1 ensures that Γ > 0.
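For concreteness, these quantities can be computed for a sample pair of coefficients. The Python sketch below uses the hypothetical choices A(t) = 1 + 0.2t and B(t) = 1 + 0.5t(1−t) (assumptions for illustration only, not from the paper) and checks (H2), Γ > 0, and the smallness condition L = K·D_2 < 1 used in the main theorems:

```python
import math

A = lambda t: 1.0 + 0.2 * t                # sample coefficient, alpha = inf A = 1
B = lambda t: 1.0 + 0.5 * t * (1 - t)      # sample coefficient, beta  = inf B = 1

ts = [j / 1000 for j in range(1001)]
alpha = min(A(t) for t in ts)
beta = min(B(t) for t in ts)
assert beta < 2 * math.pi**2 and alpha >= 0
assert alpha / math.pi**4 + beta / math.pi**2 < 1     # hypothesis (H2)

Gamma = math.pi**4 - beta * math.pi**2 - alpha
K = max(A(t) + B(t) - (alpha + beta) for t in ts)
assert Gamma > 0

# lambda_2 < 0 here, so D_2 = (1/lambda_2)(1 - 1/cos(omega_2/2)) by Lemma 5(iii)
lam2 = (-beta - math.sqrt(beta**2 + 4 * alpha)) / 2
assert -math.pi**2 < lam2 < 0
w2 = math.sqrt(-lam2)
D2 = (1 / lam2) * (1 - 1 / math.cos(w2 / 2))
L = K * D2
assert 0 < L < 1
```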

3. Main results

Theorem 1. Assume that (H1), (H2) hold and L = K D_2 < 1. If f_0 > Γ and f^∞ < (1 − L)Γ, then BVP (2) has a positive solution.

Proof of Theorem 1. For any h ∈ Y, consider the following BVP:

    u^{(4)} + B(t)u'' − A(t)u = h(t),  0 < t < 1,
    u(0) = u(1) = u''(0) = u''(1) = 0.

It is easy to see that the above problem is equivalent to

    u^{(4)} + βu'' − αu = −(B(t) − β)u'' + (A(t) − α)u + h(t),  0 < t < 1,     (15)
    u(0) = u(1) = u''(0) = u''(1) = 0.

For any v ∈ X, let Gv = −(B(t) − β)v'' + (A(t) − α)v. Obviously, the operator G: X → Y is linear. By Lemma 2, for all v ∈ X and t ∈ [0,1],

    |(Gv)(t)| ≤ [B(t) + A(t) − (α + β)] ‖v‖_1 ≤ K ‖v‖_1 ≤ K ‖v‖_{λ_1}.

Hence ‖Gv‖_0 ≤ K ‖v‖_{λ_1}, and so ‖G‖ ≤ K. On the other hand, u ∈ C²[0,1] ∩ C⁴(0,1) is a solution of (15) iff u ∈ X satisfies u = T(Gu + h), i.e.


    u ∈ X,  (I − TG)u = Th.                                                    (16)

Owing to G: X → Y and T: Y → X, the operator I − TG maps X into X. From ‖T‖ ≤ D_2 (by Lemma 6) together with ‖G‖ ≤ K and the condition D_2 K < 1, applying the operator spectral theorem, we have that the operator (I − TG)^{-1} exists and is bounded. Let H = (I − TG)^{-1}T; then (16) is equivalent to u = Hh. By the Neumann expansion formula, H can be expressed by

    H = (I + TG + ··· + (TG)^n + ···)T = T + (TG)T + ··· + (TG)^n T + ···.     (17)
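The Neumann expansion (17) is the standard geometric series for (I − TG)^{-1} when ‖TG‖ ≤ ‖T‖‖G‖ < 1. A finite-dimensional Python analogue (with an arbitrary 2×2 matrix M of spectral radius < 1 standing in for TG) illustrates the idea:

```python
# Finite-dimensional analogue of (17): if ||M|| < 1, then
# (I - M)^{-1} = I + M + M^2 + ...  (the geometric/Neumann series).
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[0.2, 0.1], [0.0, 0.3]]               # sample contraction, spectral radius 0.3

# Partial sums S = I + M + ... + M^60 of the Neumann series
S = [[1.0, 0.0], [0.0, 1.0]]
P = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(60):
    P = matmul(P, M)
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]

# Exact inverse of I - M = [[0.8, -0.1], [0.0, 0.7]] via the 2x2 formula
a, b, c, d = 0.8, -0.1, 0.0, 0.7
det = a * d - b * c
inv = [[d / det, -b / det], [-c / det, a / det]]

err = max(abs(S[i][j] - inv[i][j]) for i in range(2) for j in range(2))
assert err < 1e-12                          # the series has converged to (I - M)^{-1}
```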

The complete continuity of T together with the continuity of (I − TG)^{-1} yields that the operator H: Y → X is completely continuous.

For any h ∈ Y_+, let u = Th; then u ∈ X ∩ Y_+ and u'' ≤ 0 (by step (4) of the proof of Lemma 6). So we have

    (Gu)(t) = −(B(t) − β)u''(t) + (A(t) − α)u(t) ≥ 0,   t ∈ [0,1].

Hence for all h ∈ Y_+,

    (GTh)(t) ≥ 0,   t ∈ [0,1],                                                 (18)

and so (TG)(Th)(t) = T(GTh)(t) ≥ 0, t ∈ [0,1]. Assume that (TG)^k(Th)(t) ≥ 0, t ∈ [0,1], for all h ∈ Y_+. Letting h_1 = GTh, by (18) we have h_1 ∈ Y_+, and so (TG)^{k+1}(Th)(t) = (TG)^k(TGTh)(t) = (TG)^k(Th_1)(t) ≥ 0, t ∈ [0,1]. Thus by induction it follows that for all n ≥ 1 and h ∈ Y_+,

    (TG)^n(Th)(t) ≥ 0,   t ∈ [0,1].

By (17), we have, for all h ∈ Y_+,

    (Hh)(t) = (Th)(t) + (TG)(Th)(t) + ··· + (TG)^n(Th)(t) + ··· ≥ (Th)(t),   t ∈ [0,1],   (19)

and so H: Y_+ → Y_+ ∩ X. On the other hand, for all h ∈ Y_+,

    (Hh)(t) ≤ (Th)(t) + (TG)(Th)(t) + ··· + (TG)^n(Th)(t) + ···
            ≤ (1 + L + ··· + L^n + ···)(Th)(t) = (1/(1 − L))(Th)(t),   t ∈ [0,1].          (20)

So the following inequalities hold:

    (Hh)(t) ≤ (1/(1 − L)) ‖Th‖_0,   t ∈ [0,1],                                 (21)

    ‖Hh‖_0 ≤ (1/(1 − L)) ‖Th‖_0.                                               (22)

For any u ∈ Y_+, define Fu = f(t, u). By assumption (H1), F: Y_+ → Y_+ is continuous. It is easy to see that u ∈ C²[0,1] ∩ C⁴(0,1) being a positive solution of BVP (2) is equivalent to u ∈ Y_+ being a nonzero solution of the equation

    u = HFu.                                                                   (23)

Let Q = HF. Obviously, Q: Y_+ → Y_+ is completely continuous. We shall show that the operator Q has a nonzero fixed point in Y_+. Let

    P = {u ∈ Y_+ : u(t) ≥ δ_1(1 − L) g_1(t) ‖u‖_0, t ∈ [0,1]}.

It is easy to see that P is a cone in Y. Now we show QP ⊂ P.


For any u ∈ P, let h = Fu; then h ∈ Y_+. By (19), we have

    (Qu)(t) = (HFu)(t) ≥ (TFu)(t),   t ∈ [0,1].

By Lemma 4, for all t, σ ∈ [0,1] we have

    (TFu)(t) = ∫_0^1 ∫_0^1 G_1(t,s) G_2(s,τ)(Fu)(τ) dτ ds
             ≥ δ_1 g_1(t) ∫_0^1 ∫_0^1 G_1(s,s) G_2(s,τ)(Fu)(τ) dτ ds
             ≥ δ_1 g_1(t) (TFu)(σ).

So

    (Qu)(t) ≥ δ_1 g_1(t) ‖TFu‖_0,   t ∈ [0,1].                                 (24)

By (22), we have

    ‖TFu‖_0 ≥ (1 − L) ‖HFu‖_0 = (1 − L) ‖Qu‖_0.                                (25)

By (24) and (25), it follows that (Qu)(t) ≥ δ_1(1 − L) g_1(t) ‖Qu‖_0. Thus QP ⊂ P.

Let δ = δ_1(1 − L) d_1, where d_1 = min_{t∈[1/4,3/4]} g_1(t). Obviously δ > 0, and moreover

    u ∈ P ⇒ u(t) ≥ δ ‖u‖_0,   t ∈ [1/4, 3/4].                                  (26)

(i) By f 0 > Γ , we can choose ε > 0 such that f 0 > Γ + ε. Then ∃r > 0 such that f (t, x) > (Γ + ε)x, t ∈ [0, 1], 0 < x ≤ r . Let Ωr = {u ∈ P : u0 < r }. For any u ∈ ∂Ωr , we have u0 = r, 0 < u(t) ≤ r, t ∈ (0, 1), and so f (t, u(t)) > (Γ + ε)u(t), t ∈ (0, 1). By u(t) ≥ δu0 = δr, t ∈ [ 14 , 34 ], it follows that   1 3 , f (t, u(t)) > (Γ + ε)u(t) ≥ (Γ + ε)δr, t ∈ . 4 4 Now we shall prove (a) infu∈∂ Ωr Qu0 > 0, and (b) ∀u ∈ ∂Ωr , 0 < μ ≤ 1, Qu = μu. (a) For any u ∈ ∂Ωr , by (19) we have        1 1 1 1 1 , s G 2 (s, τ ) f (τ, u(τ ))dτ ds Qu0 ≥ Qu G1 ≥ (T Fu) = 2 2 2 0 0    3 3 4 4 1 , s G 2 (s, τ )dτ ds ≥ (Γ + ε)δr G1 1 1 2 4 4 ≥

1 (Γ + ε)δb1 b2r. 4

(27)

Therefore, infu∈∂ Ωr Qu0 > 0. (b) Suppose the contrary, that ∃u 0 ∈ ∂Ωr , 0 < μ0 ≤ 1, such that Qu 0 = μ0 u 0 . By (19), we get u 0 (t) ≥ μ0 u 0 (t) = (Qu 0 )(t) ≥ (T Fu 0 )(t), t ∈ [0, 1]. Let v0 = T Fu 0 . Then u 0 ≥ v0 , and v0 satisfies BVP (4) with h = Fu 0 . Hence (4)

v0 + βv0 − αv0 = f (t, u 0 ),

0 < t < 1.

(28)


Multiplying (28) by sin(πt), integrating over [0,1], and using v_0(0) = v_0(1) = v_0''(0) = v_0''(1) = 0 and u_0 ≥ v_0, we get

    Γ ∫_0^1 sin(πs) u_0(s) ds ≥ Γ ∫_0^1 sin(πs) v_0(s) ds = ∫_0^1 sin(πs) f(s, u_0(s)) ds.   (29)
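The key step behind (29) is that, for any v with v(0) = v(1) = v''(0) = v''(1) = 0, integration by parts gives ∫_0^1 v^{(4)} sin(πt) dt = π⁴ ∫_0^1 v sin(πt) dt and ∫_0^1 v'' sin(πt) dt = −π² ∫_0^1 v sin(πt) dt, so multiplying (28) by sin(πt) turns its left side into Γ ∫_0^1 v_0 sin(πt) dt. A Python sketch checks the fourth-order identity on the sample function v(t) = t³(1−t)³ (an arbitrary choice satisfying the boundary conditions, not taken from the paper):

```python
import math

# v(t) = t^3 (1-t)^3 satisfies v(0) = v(1) = v''(0) = v''(1) = 0
v = lambda t: t**3 * (1 - t)**3
v4 = lambda t: -72 + 360 * t - 360 * t**2        # v'''' computed by hand

N = 20000
mid = [(j + 0.5) / N for j in range(N)]          # midpoint quadrature nodes
lhs = sum(v4(t) * math.sin(math.pi * t) for t in mid) / N
rhs = math.pi**4 * sum(v(t) * math.sin(math.pi * t) for t in mid) / N
assert abs(lhs - rhs) < 1e-4                     # int v'''' sin = pi^4 int v sin
```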

By f(s, u_0(s)) > (Γ + ε)u_0(s), s ∈ (0,1), we have Γ ∫_0^1 sin(πs) u_0(s) ds ≥ (Γ + ε) ∫_0^1 sin(πs) u_0(s) ds. Since ∫_0^1 sin(πs) u_0(s) ds > 0, we have Γ ≥ Γ + ε, a contradiction. Now, by virtue of (a) and (b) proved above, together with the fixed point index theory [10], we obtain i(Q, Ω_r, P) = 0.

(ii) By f^∞ < (1 − L)Γ, letting N = (1 − L)Γ, we choose 0 < ε < N such that f^∞ < N − ε. Then there exists R_0 > 0 such that f(t,x) < (N − ε)x for x ≥ R_0, t ∈ [0,1]. Let M = sup_{(t,x)∈[0,1]×[0,R_0]} f(t,x). Then

    f(t,x) < (N − ε)x + M,   for all t ∈ [0,1], x ∈ [0,∞).

Take R > max{R_0, √2 M/(δε), r}. Putting Ω_R = {u ∈ P : ‖u‖_0 < R}, we next prove that μu ≠ Qu for all u ∈ ∂Ω_R and μ ≥ 1. Assume on the contrary that there exist μ_0 ≥ 1 and u_0 ∈ ∂Ω_R such that μ_0 u_0 = Qu_0. Let v_0 = TFu_0. Similarly to (29), we can prove

    Γ ∫_0^1 sin(πt) v_0(t) dt = ∫_0^1 sin(πt) f(t, u_0(t)) dt
                              ≤ (N − ε) ∫_0^1 u_0(t) sin(πt) dt + M ∫_0^1 sin(πt) dt.      (30)

By (20), we have u_0(t) ≤ μ_0 u_0(t) = (Qu_0)(t) ≤ (1/(1 − L)) v_0(t), t ∈ [0,1]. So, by (30), we have

    N ∫_0^1 u_0(t) sin(πt) dt ≤ (N − ε) ∫_0^1 u_0(t) sin(πt) dt + M ∫_0^1 sin(πt) dt,      (31)

and so

    M ∫_0^1 sin(πt) dt ≥ ε ∫_0^1 u_0(t) sin(πt) dt ≥ ε ∫_{1/4}^{3/4} u_0(t) sin(πt) dt
                       ≥ δε (∫_{1/4}^{3/4} sin(πt) dt) ‖u_0‖_0.                            (32)

Thus R = ‖u_0‖_0 ≤ √2 M/(εδ), which contradicts the choice of R. In terms of the fixed point index theory [10], we have i(Q, Ω_R, P) = 1, and so i(Q, Ω_R \ Ω̄_r, P) = 1. Thus BVP (2) has a positive solution. This completes the proof. □

Theorem 2. Suppose that (H1), (H2) hold. Moreover, assume that L = K D_2 < 1, f^0 < (1 − L)Γ, and f_∞ > Γ. Then BVP (2) has a positive solution.

Proof. As in the proof of Theorem 1, the operator Q: P → P is completely continuous.

(i) By hypothesis, f^0 < (1 − L)Γ. Letting N = (1 − L)Γ, we choose 0 < ε < N such that f^0 < N − ε. Thus there exists r > 0 such that f(t,x) ≤ (N − ε)x for 0 < x ≤ r, 0 ≤ t ≤ 1.


Put Ω_r = {u ∈ P : ‖u‖_0 < r}. For any u ∈ ∂Ω_r, f(t, u(t)) < (N − ε)u(t), t ∈ (0,1). We claim that Qu ≠ μu for all u ∈ ∂Ω_r and μ ≥ 1.

In fact, otherwise there exist u_0 ∈ ∂Ω_r and μ_0 ≥ 1 with Qu_0 = μ_0 u_0. By (20), we have (Qu_0)(t) ≤ (1/(1 − L))(TFu_0)(t), t ∈ [0,1]. Letting v_0 = TFu_0, then u_0(t) ≤ μ_0 u_0(t) = (Qu_0)(t) ≤ (1/(1 − L)) v_0(t). Similarly to (31), we have

    N ∫_0^1 u_0(t) sin(πt) dt ≤ Γ ∫_0^1 v_0(t) sin(πt) dt = ∫_0^1 f(t, u_0(t)) sin(πt) dt
                              ≤ (N − ε) ∫_0^1 u_0(t) sin(πt) dt.

Since ∫_0^1 u_0(t) sin(πt) dt > 0, we have N ≤ N − ε, a contradiction. Therefore i(Q, Ω_r, P) = 1.

(ii) By f_∞ > Γ, we choose ε > 0 with f_∞ > Γ + ε. Then there exists R_0 > 0 such that f(t,x) > (Γ + ε)x for x ≥ R_0, t ∈ [0,1]. Owing to sup_{(t,x)∈[0,1]×[0,R_0]} f(t,x) < ∞, it is easy to prove that there exists M > 0 such that f(t,x) > (Γ + ε)x − M for t ∈ [0,1], x ∈ [0,∞). Taking R > max{r, δ^{-1}R_0, √2 M/(εδ)} and putting Ω_R = {u ∈ P : ‖u‖_0 < R}, we shall show that: (a) inf_{u∈∂Ω_R} ‖Qu‖_0 > 0, and (b) Qu ≠ μu for all u ∈ ∂Ω_R and 0 < μ ≤ 1.

(a) For any u ∈ ∂Ω_R, u(t) ≥ δ‖u‖_0 = δR > R_0 for t ∈ [1/4, 3/4], so f(t, u(t)) ≥ (Γ + ε)u(t) ≥ (Γ + ε)δR, t ∈ [1/4, 3/4]. As in the deduction of (27), the following inequality holds:

    ‖Qu‖_0 ≥ (Qu)(1/2) ≥ (TFu)(1/2) = ∫_0^1 ∫_0^1 G_1(1/2, s) G_2(s,τ) f(τ, u(τ)) dτ ds
           ≥ ∫_0^1 ∫_{1/4}^{3/4} G_1(1/2, s) G_2(s,τ) f(τ, u(τ)) dτ ds
           ≥ (1/2)(Γ + ε) D_1 b_2 δR.

Thus infu∈∂ Ω R Qu0 > 0. (b) Suppose the contrary, ∃u 0 ∈ ∂Ω R , 0 < μ0 ≤ 1 such that Qu 0 = μ0 u 0 . From (19), it follows that (Qu 0 )(t) ≥ (T Fu 0 )(t), 0 ≤ t ≤ 1. Letting v0 = T Fu 0 , then u 0 (t) ≥ μ0 u 0 (t) = (Qu 0 )(t) ≥ v0 (t). Similarly to (30)–(32),we have  1  1  1 u 0 (t) sin πtdt ≥ Γ v0 (t) sin πtdt = f (t, u 0 (t)) sin πtdt Γ 0

0

0



1

≥ (Γ + ε)



1

u 0 (t) sin πtdt − M

0

sin πtdt,

0

and so 

1

M



1

sin πtdt ≥ ε

0

0 √

 u 0 (t) sin πtdt ≥ εδu 0 0

3 4 1 4

sin πtdt.

2M Thus R = u0 ≤ εδ , which contradicts the choice of R. By (a), (b) proved above, we have i (Q, Ω R , P) = 0. Thus i (Q, Ω R \ Ω¯ r , P) = −1, and so BVP (2) has a positive solution. This completes the proof of Theorem 2. 


References

[1] C.P. Gupta, Existence and uniqueness theorems for a bending of an elastic beam equation, Appl. Anal. 26 (1988) 289–304.
[2] C.P. Gupta, Existence and uniqueness results for the bending of an elastic beam equation at resonance, J. Math. Anal. Appl. 135 (1988) 208–225.
[3] Y. Yang, Fourth-order two-point boundary value problems, Proc. Amer. Math. Soc. 104 (1988) 175–180.
[4] M.A. Del Pino, R.F. Manasevich, Existence for a fourth-order boundary value problem under a two parameter nonresonance condition, Proc. Amer. Math. Soc. 112 (1991) 81–86.
[5] C. De Coster, C. Fabry, F. Munyamarere, Nonresonance conditions for fourth-order nonlinear boundary value problems, Internat. J. Math. Math. Sci. 17 (1994) 725–740.
[6] R. Ma, H. Wang, On the existence of positive solutions of fourth-order ordinary differential equations, Appl. Anal. 59 (1995) 225–231.
[7] R. Ma, J. Zhang, S. Fu, The method of lower and upper solutions for fourth-order two-point boundary value problems, J. Math. Anal. Appl. 215 (1997) 415–422.
[8] Z. Bai, The method of lower and upper solutions for a bending of an elastic beam equation, J. Math. Anal. Appl. 248 (2000) 195–202.
[9] Y.X. Li, Positive solutions of fourth-order boundary value problems with two parameters, J. Math. Anal. Appl. 281 (2003) 477–484.
[10] D. Guo, V. Lakshmikantham, Nonlinear Problems in Abstract Cones, Academic Press, New York, 1988.