A parabolic variational inequality related to the perpetual American executive stock options


Nonlinear Analysis 74 (2011) 6583–6600


Song Liping a,b,c, Yu Wanghui a,b,∗

a School of Mathematic Science, Soochow University, Suzhou 215006, China
b Research Center of Financial Engineering, Soochow University, Suzhou 215006, China
c Department of Mathematics, Putian University, Putian 351100, China

Article info

Article history: Received 23 March 2011; Accepted 25 June 2011. Communicated by Enzo Mitidieri.

MSC: 35A25; 35R35; 35Q99

Abstract

A parabolic variational inequality is investigated which arises from the study of the optimal exercise strategy for perpetual American executive stock options in financial markets. It is a degenerate parabolic variational inequality whose obstacle condition depends on the derivative of the solution with respect to the time variable. The method of discrete time approximation is used, and the existence and regularity of the solution are established.

Keywords: Parabolic variational inequality; Free boundary problem; Executive stock option

1. Introduction

Executive stock options (ESOs) are call options on the stock of a firm granted to an executive by the firm as part of his or her remuneration package. These options give the executive the right to buy the stock of the firm at a pre-arranged price on pre-arranged days. They encourage the executive to work towards an increase in the financial health of the firm, which will increase the share price of the firm and eventually increase his or her own wealth. Since the mid-1980s, executive stock options have become an important component of compensation to executives in the United States and other countries.

However, ESOs are an expense to the firm because the firm is buying services from the executive. The volume of ESOs issued by a firm is typically so large that the cost to the firm should not be ignored. In 2004, under Statement of Financial Accounting Standards No. 123 (revised), the Financial Accounting Standards Board required firms to account for their costs of ESOs. In order to calculate the firm's cost of ESOs, it is necessary to have a rational prediction of the future exercise strategies of the executives, since ESOs usually have long maturities ranging from 5 to 15 years. This gives rise to the need for a reasonable valuation method for ESOs.

Executive stock options are a kind of American call option (i.e., they can be exercised at any time during the exercise window) with long maturity ranging from 5 to 15 years. However, they differ from vanilla American call options in several crucial ways: for example, they cannot be transferred when the executive leaves the company and they cannot be traded in the option market. The executive's risk preference also affects his or her exercise behavior and hence the value



Corresponding author at: School of Mathematic Science, Soochow University, Suzhou 215006, China. Tel.: +86 13815250291. E-mail addresses: [email protected] (L. Song), [email protected] (W. Yu).

0362-546X/$ – see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.na.2011.06.042


of the ESOs. These characteristics make the valuation of ESOs difficult, and the standard no-arbitrage pricing theory fails for them.

Many authors have studied the problem of valuing ESOs. In [1–4], the optimal exercise time was studied under the assumption that the executive exercises all of the options at one time. In [5], a model with discrete admissible exercise times was considered, and both the optimal exercise time and the optimal number of options to exercise were studied. In [6], the authors established an optimal control model for executive stock options and obtained a parabolic variational inequality satisfied by the value function of the maximal wealth. For more on the study of executive stock options, we refer our readers to [1–9] and the references therein.

In this paper, we study a parabolic variational inequality related to perpetual American executive stock options. Let A be the total number of options, with strike K, held by the executive; the executive can exercise any number of his or her options at any time in the interval [0, +∞). Let m_t be the cumulative total of options exercised in the time interval [0, t]; m_t is a nonnegative right-continuous increasing adapted process on [0, +∞) with the constraint

lim_{t↑+∞} m_t = A.    (1.1)

Denote the stock price at time t and the total wealth of the executive by S_t and x_∞ respectively. Then

x_∞ = x_0 + ∫_0^{+∞} e^{−rρ} (S_ρ − K)^+ dm_ρ,    (1.2)

where r is the risk-free interest rate (a positive constant) and x_0 is the initial wealth before exercising any options. Let U(x) be the utility function of the executive, a concave increasing function. Our problem is then to find a suitable nonnegative right-continuous increasing adapted process m_t with the constraint (1.1) so as to maximize E[U(x_∞)]. For any t ∈ [0, +∞), set

M_t ≡ { m_ρ : m_ρ is a nonnegative right-continuous increasing adapted process over ρ ∈ [t, +∞) satisfying m_∞ = A };    (1.3)

then the problem can be written as the following stochastic control problem:

V(s, x_0) ≡ sup_{m_ρ ∈ M_0} E[ U( x_0 + ∫_0^{+∞} e^{−rρ} (S_ρ − K)^+ dm_ρ ) | S_0 = s ].    (1.4)
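As a purely illustrative aside (this computation is ours, not part of the paper), the objective in (1.4) can be estimated by Monte Carlo for one fixed admissible strategy, say exercising all A options the first time the stock reaches a threshold b, under the geometric Brownian motion dynamics (1.5) and the CARA utility U(x) = −e^{−γx} of (1.16) introduced below. All numerical parameter values are arbitrary demonstration choices.

```python
import numpy as np

def objective(b, s0=1.0, x0=0.0, A=1.0, K=1.0, r=0.05, alpha=0.03,
              sigma=0.4, gamma=1.0, T=20.0, steps=2000, paths=2000, seed=0):
    """Monte Carlo estimate of E[U(x_infinity)] in (1.4) for the strategy
    'exercise all A options the first time S_t >= b'.  Paths that never
    reach the threshold exercise as t -> infinity, which contributes
    nothing to the wealth after discounting.  Illustrative only."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    s = np.full(paths, s0)
    payoff = np.zeros(paths)            # discounted exercise proceeds per path
    done = np.zeros(paths, dtype=bool)
    for k in range(steps):
        hit = (~done) & (s >= b)
        payoff[hit] = A * np.exp(-r * k * dt) * np.maximum(s[hit] - K, 0.0)
        done |= hit
        dW = np.sqrt(dt) * rng.standard_normal(paths)
        s = s * np.exp((alpha - 0.5 * sigma**2) * dt + sigma * dW)
    wealth = x0 + payoff
    return float(np.mean(-np.exp(-gamma * wealth)))  # CARA utility U(x) = -e^{-gamma x}
```

Since the payoff is nonnegative, any such estimate lies in [−1, 0) when x_0 = 0, and a sensible finite threshold improves on never exercising (an unreachable threshold), whose value is exactly U(x_0) = −1.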

Suppose the stock price process S_t follows a geometric Brownian motion, that is,

dS_t = α S_t dt + σ S_t dW_t,    (1.5)

where α is the expected return rate, σ is the volatility, and W_t is the standard Brownian motion; α and σ are constants with σ > 0 and α < r. We shall write δ = r − α > 0. Let a be the amount of options still available for exercise at time t and let x be the discounted wealth of the executive before exercise at time t. Define the value function v(t, s, x, a) of the maximal discounted wealth at time t by

v(t, s, x, a) = sup_{m_ρ ∈ M_t} E[ U( x + ∫_t^{+∞} e^{−rρ} (S_ρ − K)^+ dm_ρ ) | m_{t−} = A − a, S_t = s ],    (1.6)

where 0 ≤ t < +∞, 0 < a ≤ A, 0 ≤ x < +∞, 0 ≤ s < +∞ and M_t is defined by (1.3). Obviously v(0, s, x_0, A) = V(s, x_0). By standard arguments of stochastic analysis, if the value function v(t, s, x, a) defined by (1.6) is sufficiently smooth, it must solve the following parabolic variational inequality:

max{L v, B v} = 0,  0 ≤ t, x, s < +∞, 0 < a ≤ A,    (1.7)

with the conditions

v(+∞, s, x, a) = U(x),  v(t, s, x, 0) = U(x),    (1.8)

where

L v = ∂v/∂t + (σ²/2) s² ∂²v/∂s² + α s ∂v/∂s,
B v = −∂v/∂a + e^{−rt} (s − K)^+ ∂v/∂x.    (1.9)

A version of (1.7)–(1.9) with finite maturity T (that is, 0 ≤ t ≤ T < +∞) was given in [6] without derivation (see [6, Section 3]). The derivation is standard in stochastic analysis; for the convenience of our readers, we give its key points as follows.


Let

C_t ≡ { m_ρ ∈ M_t : m_ρ has a continuously differentiable trajectory over ρ ∈ [t, +∞), almost surely };    (1.10)

then (1.6) is equivalent to the following optimal control problem:

v(t, s, x, a) = sup_{m_ρ ∈ C_t} E[ U( x + ∫_t^{+∞} e^{−rρ} (S_ρ − K)^+ dm_ρ ) | m_{t−} = A − a, S_t = s ].    (1.11)

By the standard dynamic programming principle for stochastic control problems, we can get from (1.6) that

0 = sup_{m_ρ ∈ C_t} E_t [ I + {v(t + h, S_{t+h}, x, A − m_{t+h}) − v(t, s, x, a)} ],    (1.12)

where h is an arbitrary positive number and

I = U( x + ∫_t^{+∞} e^{−rρ} (S_ρ − K)^+ dm_ρ ) − U( x + ∫_{t+h}^{+∞} e^{−rρ} (S_ρ − K)^+ dm_ρ ).    (1.13)

By (1.5) and the well-known Itô formula, we have

v(t + h, S_{t+h}, x, A − m_{t+h}) − v(t, s, x, a) = ∫_t^{t+h} L v dρ − ∫_t^{t+h} (∂v/∂a) dm_ρ + ∫_t^{t+h} {···} dW_ρ.    (1.14)
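Although the integrand of the stochastic integral is suppressed in (1.14), it is determined by Itô's formula together with (1.5); we record it here only for the reader's orientation (the computation is ours):

{···} = σ S_ρ (∂v/∂s)(ρ, S_ρ, x, A − m_ρ),

and it is this integral with respect to dW_ρ whose martingale property is used in passing to (1.15).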

Dividing (1.12) by h and then letting h → 0, using (1.11), (1.13)–(1.14) and the fact that the integral with respect to dW_ρ is a martingale, we get

0 = L v + sup_{m_ρ ∈ C_t} { m′_ρ B v },    (1.15)

which is equivalent to (1.7) since m′_ρ ≥ 0. The first equality in (1.8) can be derived from (1.6), (1.5) and the assumption r ≥ α by some elementary calculations. The second equality in (1.8) follows directly from (1.6) and (1.3).

In this paper, we shall consider only the CARA utility function

U(x) = −e^{−γx},    (1.16)

where γ is a positive constant. We note that the method used in this paper can also be applied to other types of utility functions, such as the CRRA utility functions (see, e.g., [6]). To simplify (1.7)–(1.9), let

s = e^z,  τ = a e^{−rt},  v(t, s, x, a) = e^{−γx} u(τ, z).    (1.17)
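The "simple calculations" behind the passage from (1.7)–(1.9) to the transformed problem below can be recorded explicitly (this verification is ours). With z = ln s and τ = a e^{−rt}, the substitution (1.17) gives

∂v/∂t = −rτ e^{−γx} u_τ,  s ∂v/∂s = e^{−γx} u_z,  s² ∂²v/∂s² = e^{−γx} (u_{zz} − u_z),
∂v/∂a = e^{−rt} e^{−γx} u_τ,  ∂v/∂x = −γ e^{−γx} u,

so that

L v = e^{−γx} [ −rτ u_τ + (σ²/2) u_{zz} + (α − σ²/2) u_z ],
B v = −e^{−rt} e^{−γx} [ u_τ + γ (e^z − K)^+ u ].

Since e^{−γx} > 0 and e^{−rt} > 0, the condition max{L v, B v} = 0 of (1.7) is equivalent to max{−L u, −B u} = 0 with L u and B u as in (1.20).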

Then, by some simple calculations, we find that u(τ, z) satisfies the following parabolic variational inequality:

max{−L u, −B u} = 0,  0 < τ ≤ A, −∞ < z < +∞,    (1.18)

with the initial condition

u(0, z) = −1,    (1.19)

where

L u = rτ ∂u/∂τ − (σ²/2) ∂²u/∂z² − (α − σ²/2) ∂u/∂z,
B u = ∂u/∂τ + γ (e^z − K)^+ u.    (1.20)

Note that the variable τ = a e^{−rt} is the discounted amount of the options still available for exercise at time t. (1.18)–(1.20) show that the value function v(t, s, x, a) of the maximal discounted wealth at time t depends only on τ, s and x. This is consistent with the well-known fact that the price of the vanilla perpetual American call option is independent of the time t. For convenience, we write (1.18)–(1.20) in the following form:

L u ≡ rτ ∂u/∂τ − (σ²/2) ∂²u/∂z² − µ ∂u/∂z ≥ 0,  (τ, z) ∈ Q_∞,    (1.21)
B u ≡ ∂u/∂τ + γ (e^z − K)^+ u ≥ 0,  (τ, z) ∈ Q_∞,    (1.22)
B u · L u = 0,  (τ, z) ∈ Q_∞,    (1.23)
u(0, z) = −1,  −∞ < z < +∞,    (1.24)


where µ = α − σ²/2 = r − δ − σ²/2, δ = r − α > 0, and

Q_∞ = (0, A] × (−∞, +∞).    (1.25)

This paper is devoted to the rigorous study of the problem (1.21)–(1.25). The main difficulty lies in the fact that the parabolic operator L in (1.21) is degenerate at τ = 0 and the obstacle operator B in (1.22) depends on the derivative ∂/∂τ. We cannot directly use the standard penalty method (see [10]; also see the proof of Lemma 2.1 in the next section) to construct the solution. We shall instead use the discrete time variable method to construct approximating solutions and, after obtaining some important uniform estimates, prove that the approximating solutions converge to a solution of (1.21)–(1.25).

The following theorem is the main result of this paper.

Theorem 1.1. Assume that the parameters A, r, δ, σ, γ and K are all positive constants. For any ρ > 0 and any ϵ ∈ (0, A), let Q_ρ = [−ρ, ρ] × (0, A] and Q_ρ^ϵ = [−ρ, ρ] × [ϵ, A]. Then the problem (1.21)–(1.25) has at least one solution u(τ, z) satisfying:

u ∈ W_∞^{2,1}(Q_ρ) ∩ C(Q̄_∞),  u_{zτ} ∈ L²(Q_ρ);    (1.26)
u_τ ∈ C^{1/2,1/4}(Q_ρ^ϵ),  u_{ττ} ∈ L²(Q_ρ^ϵ);    (1.27)
−1 ≤ u(z, τ) ≤ 0,  0 ≤ u_z ≤ C_0,  (z, τ) ∈ Q̄_∞;    (1.28)
L u = 0,  (z, τ) ∈ (−∞, ln K] × (0, A],    (1.29)

where C_0 is a positive constant depending only on γ, A and K. □

Remark 1.2. Set

N ≡ {(z, τ) ∈ Q_∞ : B u(z, τ) > 0},  C ≡ {(z, τ) ∈ Q_∞ : B u(z, τ) = 0}.

N and C correspond to the non-exercise region and the exercise region of the executive options respectively. Recall that z = ln s, where s is the stock price, and τ = a e^{−rt} is the discounted amount of the options still available for exercise at time t. When (z, τ) ∈ C, the executive should exercise some part of the amount a = τ e^{rt} of still available options; when (z, τ) ∈ N, the executive should not exercise any of the options. When z < ln K, that is, s < K, the stock price is less than the strike, so the executive should not exercise any options. Thus the region (−∞, ln K) × [0, A] must be contained in the non-exercise region N, and hence B u > 0 in (−∞, ln K) × [0, A], which, together with (1.23), leads to (1.29). So (1.29) is natural. For more properties of the solution to (1.21)–(1.25), one must study the optimal exercise boundary, which is the interface between N and C. This is a much more difficult problem and will be the target of our further study. □

The arrangement of this paper is as follows. In Section 2, we construct approximating problems with a discrete time variable and a bounded space variable, and prove the existence of the approximating solutions by the standard penalty method. In Section 3, we prove some uniform estimates for the solutions obtained in Section 2. In the final Section 4, we complete the proof of Theorem 1.1.

2. The approximating problems

In this section we construct approximating problems of (1.21)–(1.25) and prove the existence of the approximating solutions. For any positive integer N, let h = A/N and τ_n = nh for n = 0, 1, 2, ..., N. For any R > |ln K|, we construct a sequence of functions {u_n(z)}_{n=0}^N defined on [−R, R] such that

L_n u_n ≡ −(σ²/2) u_n″ − µ u_n′ + rτ_n (u_n − u_{n−1})/h ≥ 0,  a.e. −R < z < R,    (2.1)
B_n u_n ≡ u_n − e^{−γh(e^z−K)^+} u_{n−1} ≥ 0,  −R < z < R,    (2.2)
L_n u_n · B_n u_n = 0,  a.e. −R < z < R,    (2.3)
u_n′(−R) = 0,    (2.4)
u_n′(R) = −γτ_n e^R u_n(R),    (2.5)
u_0(z) = −1,  −R ≤ z ≤ R.    (2.6)
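The discrete scheme (2.1)–(2.6) is, at each time level n, a lower-obstacle problem for an elliptic operator. As a purely illustrative numerical sketch (not part of the paper; the discretization, the active-set solver, and all parameter values are our own choices), one can approximate each level by finite differences and solve the resulting discrete complementarity problem:

```python
import numpy as np

def solve_scheme(A_total=1.0, N=8, R=3.0, M=81,
                 r=0.05, delta=0.02, sigma=0.4, gamma=1.0, K=1.0,
                 pen=1e8, max_iters=60):
    """Finite-difference sketch of (2.1)-(2.6).

    At each level n we solve the discrete obstacle problem
        min(A_n u - b, u - psi_n) = 0,
    where A_n u = -(sigma^2/2) u'' - mu u' + (r tau_n/h) u,
          b     = (r tau_n/h) u_{n-1},
          psi_n = exp(-gamma h (e^z - K)^+) u_{n-1}   (the obstacle in (2.2)),
    by a penalized active-set iteration in the spirit of the penalty
    problem (2.9).  Illustrative choices throughout."""
    h = A_total / N
    mu = (r - delta) - 0.5 * sigma**2          # mu = alpha - sigma^2/2
    z = np.linspace(-R, R, M)
    dz = z[1] - z[0]
    payoff = np.maximum(np.exp(z) - K, 0.0)
    u = -np.ones(M)                            # u_0(z) = -1, cf. (2.6)
    layers = [u.copy()]
    for n in range(1, N + 1):
        tau_n = n * h
        c = r * tau_n / h                      # zero-order coefficient
        lo = -0.5 * sigma**2 / dz**2 + mu / (2 * dz)
        up = -0.5 * sigma**2 / dz**2 - mu / (2 * dz)
        A = np.zeros((M, M))
        b = np.zeros(M)
        for j in range(1, M - 1):
            A[j, j - 1], A[j, j], A[j, j + 1] = lo, sigma**2 / dz**2 + c, up
            b[j] = c * u[j]
        A[0, 0], A[0, 1] = 1.0, -1.0           # u'(-R) = 0, cf. (2.4)
        A[-1, -2] = -1.0                       # u'(R) = -gamma tau_n e^R u(R), cf. (2.5)
        A[-1, -1] = 1.0 + dz * gamma * tau_n * np.exp(R)
        psi = np.exp(-gamma * h * payoff) * u  # obstacle built from u_{n-1}
        v = np.maximum(np.linalg.solve(A, b), psi)
        for _ in range(max_iters):             # penalized active-set iteration
            mask = (v < psi).astype(float)
            mask[0] = mask[-1] = 0.0           # keep the boundary rows as BCs
            v_new = np.linalg.solve(A + np.diag(pen * mask), b + pen * mask * psi)
            if np.max(np.abs(v_new - v)) < 1e-12:
                v = v_new
                break
            v = v_new
        u = np.maximum(v, psi)                 # remove O(1/pen) penetration
        layers.append(u.copy())
    return z, layers
```

The qualitative properties established below in Lemmas 3.1 and 3.2 (−1 ≤ u_n ≤ 0, u_n ≥ u_{n−1}, u_n′ ≥ 0) can be observed on the computed layers.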

Lemma 2.1. There exists at least one solution {u_n(z)}_{n=0}^N of (2.1)–(2.6) satisfying u_n ∈ W^{2,∞}([−R, R]) and u_n < 0 in [−R, R] for n = 1, 2, ..., N.

Proof. The arguments are standard; we only give some key points.


For any number ϵ with 0 < ϵ < min{K − e^{−R}, e^R − K}, we can choose a penalty function β_ϵ(s) and a function π_ϵ(s) approximating s^+ such that

β_ϵ(s) ∈ C^∞((−∞, +∞)),  β_ϵ(s) ≤ 0,  β_ϵ′(s) ≥ 0,  β_ϵ″(s) ≤ 0,  −∞ < s < +∞,
β_ϵ(s) = 0 for s ≥ 0,  β_ϵ(s) = s/ϵ + 1 for s ≤ −2ϵ,    (2.7)

and

π_ϵ(s) ∈ C^∞((−∞, +∞)),  0 ≤ π_ϵ′(s) ≤ 1,  π_ϵ″(s) ≥ 0,  −∞ < s < +∞,
π_ϵ(s) = 0 for s ≤ −ϵ,  π_ϵ(s) = s for s ≥ ϵ,
lim_{ϵ→0} π_ϵ(s) = s^+ for s ∈ (−∞, +∞).    (2.8)

Then we consider the following penalty problem (2.9)–(2.12):

−(σ²/2) u_{n,ϵ}″ − µ u_{n,ϵ}′ + rτ_n (u_{n,ϵ} − u_{n−1,ϵ})/h + β_ϵ(u_{n,ϵ} − u_{n−1,ϵ} e^{−γh π_ϵ(e^z−K)}) = 0,  −R < z < R,    (2.9)
u_{n,ϵ}′(−R) = 0,    (2.10)
u_{n,ϵ}′(R) = −γτ_n e^R u_{n,ϵ}(R),    (2.11)
u_{0,ϵ}(z) = −1,  −R ≤ z ≤ R.    (2.12)

We can apply the standard fixed point methods to show that (2.9)–(2.12) has at least one solution {u_{n,ϵ}(z)}_{n=0}^N ⊂ C^∞([−R, R]). Now we must prove that, by passing to a subsequence, {u_{n,ϵ}}_{n=1}^N tends to a solution {u_n}_{n=1}^N of (2.1)–(2.5) as ϵ → 0. We shall divide the proof into two steps below.

Step 1: n = 1. The key is to show the following estimates:

−C_h ≤ β_ϵ(u_{1,ϵ} − u_{0,ϵ} e^{−γh π_ϵ(e^z−K)}) ≤ 0,  −R ≤ z ≤ R,    (2.13)
−ϵ C_h ≤ u_{1,ϵ} − u_{0,ϵ} e^{−γh π_ϵ(e^z−K)},  −R ≤ z ≤ R,    (2.14)
‖u_{1,ϵ}‖_{C²([−R,R])} ≤ C_h.    (2.15)

Here and below, C_h denotes various positive constants independent of ϵ (but depending on h).

Proof of (2.13). Since β_ϵ(s) is nonpositive, the inequality on the right-hand side of (2.13) is trivial. To prove the inequality on the left-hand side of (2.13), we notice that π_ϵ(e^z − K) = 0 in a neighborhood of z = −R and π_ϵ(e^z − K) = e^z − K in a neighborhood of z = R, by the definition of π_ϵ(s). Set Ψ(z) ≡ u_{1,ϵ}(z) − u_{0,ϵ}(z) e^{−γh π_ϵ(e^z−K)} and let z_0 be a minimal point of β_ϵ(Ψ) in [−R, R]. Then z_0 is also a minimal point of Ψ in [−R, R], because β_ϵ(s) is increasing on (−∞, +∞). Suppose z_0 = R; then Ψ′(z_0) ≤ 0, that is,

u_{1,ϵ}′(R) − u_{0,ϵ}′(R) e^{−γh(e^R−K)} + γh e^R u_{0,ϵ}(R) e^{−γh(e^R−K)} ≤ 0,

which, together with (2.11) and (2.12), leads to

−γτ_1 e^R [ u_{1,ϵ}(R) − u_{0,ϵ}(R) e^{−γh(e^R−K)} ] ≤ 0,

that is, Ψ(z_0) = Ψ(R) ≥ 0, and hence β_ϵ(Ψ(z_0)) ≥ β_ϵ(0) = 0. Thus the inequality on the left-hand side of (2.13) is true in this case. Suppose z_0 ∈ [−R, R). Then we have Ψ′(z_0) = 0 and Ψ″(z_0) ≥ 0 (here we have used (2.10) when z_0 = −R). By (2.9), we have

β_ϵ(Ψ) = (σ²/2) Ψ″ + µ Ψ′ − (rτ_1/h) Ψ + (σ²/2) (u_{0,ϵ} e^{−γh π_ϵ(e^z−K)})″ + µ (u_{0,ϵ} e^{−γh π_ϵ(e^z−K)})′ + (rτ_1/h) u_{0,ϵ} (1 − e^{−γh π_ϵ(e^z−K)}).

Using (2.8) and (2.12), we get

β_ϵ(Ψ(z_0)) ≥ (σ²/2) (u_{0,ϵ} e^{−γh π_ϵ(e^z−K)})″(z_0) − (rτ_1/h) Ψ(z_0) − C_h
  ≥ (γhσ²/2) e^{2z_0} e^{−γh π_ϵ(e^{z_0}−K)} π_ϵ″(e^{z_0} − K) − (rτ_1/h) Ψ(z_0) − C_h
  ≥ −(rτ_1/h) Ψ(z_0) − C_h.


If Ψ(z_0) ≥ 0, then β_ϵ(Ψ(z_0)) ≥ β_ϵ(0) = 0; if Ψ(z_0) < 0, then β_ϵ(Ψ(z_0)) ≥ −C_h by the last inequality above. So in both cases the inequality on the left-hand side of (2.13) is true. □

Proof of (2.14). By (2.7), if u_{1,ϵ} − u_{0,ϵ} e^{−γh π_ϵ(e^z−K)} < −2ϵ, then

β_ϵ(u_{1,ϵ} − u_{0,ϵ} e^{−γh π_ϵ(e^z−K)}) = (u_{1,ϵ} − u_{0,ϵ} e^{−γh π_ϵ(e^z−K)})/ϵ + 1.

So (2.14) follows from (2.13). □

Proof of (2.15). We rewrite (2.9) as follows:

−(σ²/2) u_{1,ϵ}″ − µ u_{1,ϵ}′ + (rτ_1/h) u_{1,ϵ} = f(z),  −R < z < R,    (2.16)

where

f(z) ≡ (rτ_1/h) u_{0,ϵ} − β_ϵ(u_{1,ϵ} − u_{0,ϵ} e^{−γh π_ϵ(e^z−K)}).    (2.17)

Using (2.13) and (2.12), we have

|f(z)| ≤ C_h,  −R ≤ z ≤ R.    (2.18)

Noting that the ordinary differential equation (2.16) with the boundary conditions (2.10)–(2.11) for n = 1 can be explicitly solved, (2.15) can be derived from (2.18) and the explicit expression of u_{1,ϵ}. Based on (2.15), we can pass to a subsequence of {u_{1,ϵ}} such that, as ϵ → 0,

u_{1,ϵ}″ → u_1″  weakly-∗ in L^∞([−R, R]),
u_{1,ϵ} → u_1  in C¹([−R, R]),    (2.19)

where u_1 is some function in W^{2,∞}([−R, R]). Since u_{1,ϵ} is a solution of (2.9)–(2.11) for n = 1 and satisfies (2.14) and (2.19), a standard argument shows that u_1 is a solution of (2.1)–(2.5) for n = 1.

We need to prove that u_1 < 0 in [−R, R]. Let z_1 be a maximal point of u_1 in [−R, R]. If u_1(z_1) − u_0(z_1) e^{−γh(e^{z_1}−K)^+} = 0, then u_1(z_1) < 0 because u_0(z_1) = −1 < 0. Suppose u_1(z_1) − u_0(z_1) e^{−γh(e^{z_1}−K)^+} > 0. Then, by (2.1)–(2.3), u_1 ∈ C²(U_1) for some neighborhood U_1 of z_1 in [−R, R] and

−(σ²/2) u_1″ − µ u_1′ + (rτ_1/h) u_1 = (rτ_1/h) u_0 < 0,  z ∈ U_1.

Noting that u_1 also satisfies the boundary conditions (2.4)–(2.5), we can apply the maximum principle to get u_1(z_1) < 0. Thus u_1 is a solution of (2.1)–(2.5) for n = 1 which satisfies u_1 ∈ W^{2,∞}([−R, R]) and u_1 < 0 in [−R, R].

Step 2: n ≥ 2. Suppose that, for n = 1, 2, ..., k with 1 ≤ k < N, we have obtained the solution u_n of (2.1)–(2.5) which satisfies u_n ∈ W^{2,∞}([−R, R]) and u_n < 0 in [−R, R]. By the arguments in Step 1 above, we can assume that

u_{k,ϵ} → u_k in C¹([−R, R]), as ϵ → 0;
u_{k,ϵ}(z) < 0, z ∈ [−R, R], for ϵ small enough;
‖u_{k,ϵ}‖_{C²([−R,R])} ≤ C_h, for ϵ small enough,    (2.20)

where u_{k,ϵ} is the solution of (2.9)–(2.11) for n = k. Based on (2.20), we can show in the same way as in Step 1 that there exists a solution u_{k+1} of (2.1)–(2.5) for n = k + 1 which satisfies u_{k+1} ∈ W^{2,∞}([−R, R]) and u_{k+1} < 0 in [−R, R]. Moreover, by passing to a subsequence, (2.20) still holds when k is replaced by k + 1. Therefore, by induction, we obtain a sequence {u_n}_{n=1}^N ⊂ W^{2,∞}([−R, R]) which satisfies (2.1)–(2.6) and u_n < 0 in [−R, R] for n = 1, 2, ..., N. The proof of Lemma 2.1 is complete. □

3. Some properties and estimates for the approximating solutions

In the sequel, we shall use the following notations:

C_0: various positive constants depending only on γ, A and K;
C_1: various positive constants depending only on γ, A, K, r and σ;
C_2: various positive constants depending only on γ, A, K, r, σ and δ;
C_ρ: various positive constants depending only on γ, A, K, r, σ, δ and ρ.


Moreover, {u_n}_{n=0}^N is always the solution of (2.1)–(2.6) provided by Lemma 2.1 and

w_n(z) = u_n(z) e^{γτ_n (e^z−K)^+},  n = 0, 1, 2, ..., N.    (3.1)

Lemma 3.1. For n = 1, 2, ..., N, we have

0 > w_n ≥ w_{n−1} ≥ w_0 ≥ −1,  −R ≤ z ≤ R,    (3.2)
0 > u_n ≥ u_{n−1} ≥ −1,  −R ≤ z ≤ R,    (3.3)
u_n > u_{n−1},  L_n u_n = 0,  −R ≤ z ≤ ln K,    (3.4)
u_n ∈ C²([−R, ln K]).    (3.5)

Proof. By (3.1), (2.2) and (2.6), we have

B_n u_n = e^{−γτ_n (e^z−K)^+} [w_n(z) − w_{n−1}(z)] ≥ 0,  −R ≤ z ≤ R, n = 1, 2, ..., N,    (3.6)
w_0(z) = −1,  −R ≤ z ≤ R.    (3.7)

So w_n ≥ w_{n−1} ≥ w_0 = −1 in [−R, R]. Recalling that u_n < 0 in [−R, R] by Lemma 2.1, we also have w_n < 0 in [−R, R]. Hence (3.2) is true. Moreover, we have

0 > u_n = w_n e^{−γτ_n(e^z−K)^+} ≥ w_{n−1} e^{−γτ_n(e^z−K)^+} ≥ w_{n−1} e^{−γτ_{n−1}(e^z−K)^+} = u_{n−1}.

Thus 0 > u_n ≥ u_{n−1} ≥ u_0 = −1 and (3.3) is true. In order to prove (3.4)–(3.5), we notice that

u_n(z) = w_n(z), z ∈ [−R, ln K], n = 0, 1, ..., N;
u_n ∈ C¹([−R, R]), n = 1, 2, ..., N;
w_n ∈ C([−R, R]) ∩ C¹([−R, ln K]) ∩ C¹([ln K, R]), n = 1, 2, ..., N.    (3.8)

We claim that

u_n(ln K) > u_{n−1}(ln K),  n = 1, 2, ..., N.    (3.9)

Suppose (3.9) is false. Then, by (3.1)–(3.3) and (3.8), w_n(ln K) = w_{n−1}(ln K) and the function w_n(z) − w_{n−1}(z) takes its minimum in [−R, R] at z = ln K. So

w_n′(ln K + 0) − w_{n−1}′(ln K + 0) ≥ 0,
w_n′(ln K − 0) − w_{n−1}′(ln K − 0) ≤ 0.    (3.10)

On the other hand, it is easy to find from (3.1) and (3.8) that

w_n′(ln K + 0) = w_n′(ln K − 0) + γτ_n K w_n(ln K),
w_{n−1}′(ln K + 0) = w_{n−1}′(ln K − 0) + γτ_{n−1} K w_{n−1}(ln K),    (3.11)

which leads to

[w_n′ − w_{n−1}′](ln K + 0) = [w_n′ − w_{n−1}′](ln K − 0) + γτ_{n−1} K [w_n − w_{n−1}](ln K) + γhK w_n(ln K).

Since w_n(ln K) < 0 by (3.2) and w_n(ln K) = w_{n−1}(ln K), we find

[w_n′ − w_{n−1}′](ln K + 0) < [w_n′ − w_{n−1}′](ln K − 0),

which is a contradiction to (3.10). So (3.9) must be true.

Now we divide the proof of (3.4)–(3.5) into two steps.

Step 1: n = 1. By (3.3) and (3.9), u_1(z) − u_0(z) ≥ 0 in [−R, ln K] and u_1(ln K) − u_0(ln K) > 0. Suppose there is a z_1 ∈ [−R, ln K) such that u_1(z_1) − u_0(z_1) = 0. Then z_1 is a minimal point of u_1 − u_0 in [−R, ln K], which, together with (2.4) and (2.6), implies

u_1′(z_1) − u_0′(z_1) = 0.    (3.12)

Without loss of generality, we can assume

u_1(z) − u_0(z) > 0,  z_1 < z ≤ ln K,    (3.13)

that is, B_1 u_1 > 0 in (z_1, ln K]. So it follows from (2.1)–(2.3) that

L_1 u_1 = −(σ²/2) u_1″ − µ u_1′ + rτ_1 (u_1 − u_0)/h = 0,  a.e. z ∈ (z_1, ln K).    (3.14)


Recalling that u_1 ∈ W^{2,∞}((−R, R)) and u_0 ≡ −1 in [−R, R], we find from (3.14) that u_1 ∈ C²([z_1, ln K]) and (3.14) holds everywhere in [z_1, ln K]. Let v_1 = u_1 − u_0 and rewrite (3.14) as follows:

−(σ²/2) v_1″ − µ v_1′ + (rτ_1/h) v_1 = 0,  z ∈ [z_1, ln K].    (3.15)

Since z_1 is a minimal point of v_1 in [z_1, ln K] and v_1(z_1) = 0, we can apply the Hopf lemma to (3.15) and find v_1′(z_1) > 0, that is, u_1′(z_1) − u_0′(z_1) > 0, a contradiction to (3.12). Thus we must have u_1 > u_0 everywhere in [−R, ln K] and, consequently, L_1 u_1 = 0 in [−R, ln K] and u_1 ∈ C²([−R, ln K]).

Step 2: n ≥ 2. Suppose there is an integer k such that 1 ≤ k < N and (3.4)–(3.5) hold for n = 1, 2, ..., k. Then we have

−(σ²/2) u_k″ − µ u_k′ = L_k u_k − rτ_k (u_k − u_{k−1})/h < 0,  z ∈ [−R, ln K].    (3.16)

Repeating the argument in Step 1 with u_1 and u_0 replaced by u_{k+1} and u_k respectively, and with the role of u_0 ≡ −1 in [−R, ln K] replaced by (3.16), we find that u_{k+1} satisfies (3.4)–(3.5) for n = k + 1. Thus, by induction, (3.4)–(3.5) are true for n = 1, 2, ..., N. The proof of Lemma 3.1 is complete. □



Lemma 3.2. For n = 1, 2, ..., N,

0 ≤ u_n′(z) ≤ C_0,  z ∈ [−R, R].    (3.17)

Proof. Let w_n be the function defined by (3.1). Using (2.5) and (3.2), we have

0 ≤ u_n′(R) = −γτ_n e^R u_n(R) = −γτ_n e^R e^{−γτ_n(e^R−K)} w_n(R)
  ≤ γτ_n e^R e^{−γτ_n(e^R−K)}
  = e^{γτ_n K} ( γτ_n e^R e^{−γτ_n e^R} )
  ≤ e^{γAK} e^{−1},

which, together with (2.4), gives

u_n′(−R) = 0,  0 ≤ u_n′(R) ≤ e^{γAK−1},  n = 1, 2, ..., N.    (3.18)

Set

A_n ≡ {z ∈ (−R, R) : w_n(z) = w_{n−1}(z)},  n = 1, 2, ..., N.    (3.19)

Then A_n ⊂ (ln K, R) by (3.1) and (3.4). Moreover, (3.2) implies that every point z in A_n is a minimal point of w_n(z) − w_{n−1}(z), and hence w_n′(z) = w_{n−1}′(z) in A_n. So, by (3.1),

[u_n′ e^{γτ_n(e^z−K)} + γτ_n e^z w_n](z) = [u_{n−1}′ e^{γτ_{n−1}(e^z−K)} + γτ_{n−1} e^z w_{n−1}](z)

for any z ∈ A_n, that is,

u_n′(z) = u_{n−1}′(z) e^{−γh(e^z−K)} − γ e^z h w_n(z) e^{−γτ_n(e^z−K)},  z ∈ A_n,    (3.20)

for n = 1, 2, ..., N. Denote

M_k ≡ max_{z ∈ [−R,R], n = 0,1,...,k} u_n′(z),  k = 0, 1, ..., N.

Case 1: M_k = u_n′(z) for some z ∈ A_n and 1 ≤ n ≤ k. If e^z ≥ 3K, then by (3.20) and (3.2),

M_k ≤ M_k e^{−γh(e^z−K)} + γ e^z h e^{−γh(e^z−K)}.

So,

M_k ≤ (e^z/(e^z − K)) · γ(e^z − K) h e^{−γh(e^z−K)} / (1 − e^{−γh(e^z−K)}) ≤ (3/2) · x/(e^x − 1),

where x = γh(e^z − K) ∈ (0, +∞). Noting that

lim_{x↓0} x/(e^x − 1) = 1,  lim_{x↑+∞} x/(e^x − 1) = 0,    (3.21)


we have

max_{[0,+∞)} x/(e^x − 1) < +∞.

Thus

M_k ≤ (3/2) max_{[0,+∞)} x/(e^x − 1).

If K < e^z < 3K, then by (3.20), M_k ≤ M_{k−1} + 3Kγh. Combining the two inequalities above, we get

M_k ≤ max{C_0, M_{k−1} + C_0 h}.    (3.22)
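In fact, the maximum appearing above can be evaluated explicitly (this observation is ours, not in the text): the function x ↦ x/(e^x − 1) is decreasing on (0, +∞), since its numerator-derivative combination (e^x − 1) − x e^x = e^x(1 − x) − 1 is negative for x > 0; hence

max_{[0,+∞)} x/(e^x − 1) = lim_{x↓0} x/(e^x − 1) = 1,

so the first bound in Case 1 actually yields M_k ≤ 3/2.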

Case 2: M_k = u_n′(z) for some z ∉ A_n and 1 ≤ n ≤ k. Then z ∈ [−R, R] − A_n and w_n(z) > w_{n−1}(z). If z = R or z = −R, then by (3.18) we have M_k = u_n′(z) ≤ e^{γAK−1}. Suppose z ∈ (−R, R). By (2.1)–(2.3), there is a sufficiently small positive number ϵ_z such that u_n ∈ C³([z − ϵ_z, z + ϵ_z]) and

rτ_n (u_n − u_{n−1})/h − (σ²/2) u_n″ − µ u_n′ = 0,  in (z − ϵ_z, z + ϵ_z).

So,

rτ_n (u_n′ − u_{n−1}′)/h − (σ²/2) u_n‴ − µ u_n″ = 0,  in (z − ϵ_z, z + ϵ_z).    (3.23)

Since z is a maximal point of u_n′ in [−R, R], we have u_n″(z) = 0 and u_n‴(z) ≤ 0, which, together with (3.23), leads to M_k = u_n′(z) ≤ u_{n−1}′(z) ≤ M_{k−1}. Thus,

M_k ≤ max{C_0, M_{k−1}}.    (3.24)

Combining (3.22) and (3.24), we find

M_k ≤ max{C_0, M_{k−1} + C_0 h},  k = 1, 2, ..., N.    (3.25)

Noting that M_0 ≡ max_{[−R,R]} u_0′(z) ≡ 0, we can easily derive from (3.25) by induction that

M_N ≤ C_0.    (3.26)

Thus the inequality on the right-hand side of (3.17) is true. The proof of the inequality on the left-hand side of (3.17) is similar and easier; we omit it for brevity. The proof of Lemma 3.2 is complete. □

Lemma 3.3. Let R > |ln K| + 2/h. Then, for any ρ ∈ (|ln K|, R − 1/h], the following estimate is true:

0 ≤ (u_n(z) − u_{n−1}(z))/h ≤ C_ρ/δ,  z ∈ [−ρ, ρ], n = 1, 2, ..., N.    (3.27)

Proof. The inequality on the left-hand side of (3.27) follows directly from (3.3). Write

B_n u_n ≡ u_n − e^{−γh(e^z−K)^+} u_{n−1} = (u_n − u_{n−1}) + Φ u_{n−1},    (3.28)

where Φ(z) = 1 − e^{−γh(e^z−K)^+}. Obviously, Φ(z) is a Lipschitz continuous function on (−∞, +∞) and

0 ≤ Φ(z) ≤ hγe^z,  0 ≤ Φ′(z) ≤ hγe^z,  z ∈ [−R, R], z ≠ ln K.    (3.29)

Let φ ∈ C^∞([−R, R]) be a cutoff function such that

0 ≤ φ(z) ≤ 1, z ∈ [−R, R];  φ(z) = 0, |z| > ρ + 1/h;  φ(z) = 1, |z| ≤ ρ;  |φ′| ≤ Ch, |φ″| ≤ Ch, z ∈ [−R, R],    (3.30)

where C is some absolute positive constant.


Set H_n = (u_n − u_{n−1})/h and v_n = e^{−z} H_n φ. Then the inequality on the right-hand side of (3.27) follows from the inequality

v_n(z) ≤ C_ρ/δ,  z ∈ [−R, R], n = 1, 2, ..., N.    (3.31)

Define H_{n,ϵ} = (u_{n,ϵ} − u_{n−1,ϵ})/h and v_{n,ϵ} = e^{−z} H_{n,ϵ} φ, where {u_{n,ϵ}}_{n=0}^N is the solution of (2.9)–(2.12) and

u_{n,ϵ} → u_n  in C¹([−R, R]), as ϵ → 0, n = 0, 1, ..., N.    (3.32)

Let z_n be a maximum point of v_{n,ϵ} in [−R, R]. If z_n = ±R, then v_{n,ϵ}(z_n) = 0. If z_n ∈ (−R, R), we consider the following two cases.

Case 1: B_n u_n(z_n) = 0. Then, by (3.28),

[u_n(z_n) − u_{n−1}(z_n)] + Φ(z_n) u_{n−1}(z_n) = 0,

that is,

H_{n,ϵ}(z_n) = (1/h)[u_{n,ϵ}(z_n) − u_n(z_n)] + (1/h)[u_{n−1}(z_n) − u_{n−1,ϵ}(z_n)] − (1/h) Φ(z_n) u_{n−1}(z_n).

Noting that z_n ∈ (ln K, R) by (3.1) and (3.4), we can get from (3.29), (3.32) and the equality above that H_{n,ϵ}(z_n) ≤ γ e^{z_n} + (1/h) o(1), where lim_{ϵ↓0} o(1) = 0. So,

v_{n,ϵ}(z_n) ≤ γ + (1/h) o(1).    (3.33)

Case 2: B_n u_n(z_n) > 0. Then

L_n u_n ≡ −(σ²/2) u_n″ − µ u_n′ + rτ_n (u_n − u_{n−1})/h = 0,  z ∈ U(z_n),    (3.34)

and u_n ∈ C²(U(z_n)), where U(z_n) is a neighborhood of z_n in (−R, R). By (2.9), (2.7) and (2.12), we have L_{n−1} u_{n−1,ϵ} ≥ 0 in [−R, R] for n = 1, 2, ..., N. So,

L_n u_n − L_{n−1} u_{n−1,ϵ} ≤ 0,  z ∈ U(z_n),

that is,

−(hσ²/2) H_{n,ϵ}″ − µh H_{n,ϵ}′ + rh H_{n,ϵ} + rτ_{n−1} (H_{n,ϵ} − H_{n−1,ϵ}) ≤ o(1),  z ∈ U(z_n).    (3.35)

Recall that v_{n,ϵ} = H_{n,ϵ} e^{−z} φ. By some elementary calculations, we get from (3.35) that

−(σ²/2) v_{n,ϵ}″ − ν v_{n,ϵ}′ + δ v_{n,ϵ} + rτ_{n−1} (v_{n,ϵ} − v_{n−1,ϵ})/h ≤ f_n + o(1),  z ∈ U(z_n),    (3.36)

where ν = σ²/2 + r − δ and

f_n ≡ −(σ²/2) [ 2H_{n,ϵ}′ e^{−z} φ′ + H_{n,ϵ} e^{−z} φ″ ] − ν H_{n,ϵ} e^{−z} φ′.    (3.37)

By (3.3), (3.17) and (3.32), |H_{n,ϵ}| ≤ 2/h and |H_{n,ϵ}′| ≤ 2C_0/h in [−R, R], which, together with (3.30) and (3.37), leads to

max_{[−R,R]} |f_n| ≤ C_ρ,  n = 1, 2, ..., N.    (3.38)

Noting that u_n ∈ C²(U(z_n)) and u_{n−1,ϵ} ∈ C^∞([−R, R]), we have v_{n,ϵ} ∈ C²(U(z_n)). Therefore v_{n,ϵ}′(z_n) = 0 and v_{n,ϵ}″(z_n) ≤ 0, because z_n is a maximum point of v_{n,ϵ}. Thus (3.36) and (3.38) imply that either v_{n,ϵ}(z_n) ≤ v_{n−1,ϵ}(z_n) or

v_{n,ϵ}(z_n) ≤ (C_ρ + o(1))/δ.    (3.39)

Combining (3.33) and (3.39), we get

max_{[−R,R]} v_{n,ϵ} ≤ max{ (C_ρ + o(1))/δ, max_{[−R,R]} v_{n−1,ϵ} }.


Letting ϵ ↓ 0 and using (3.32), we obtain

max_{[−R,R]} v_n ≤ max{ C_ρ/δ, max_{[−R,R]} v_{n−1} }.

By induction, we find

max_{[−R,R]} v_n ≤ max{ C_ρ/δ, max_{[−R,R]} v_1 },  n = 1, 2, ..., N.    (3.40)

Now we prove (3.31) for n = 1. Noting that u_0 ≡ −1 ≡ u_{0,ϵ}, we have v_1 ≡ v_{1,ϵ}. If z_1 = ±R, then v_1(z_1) = 0; if z_1 ∈ (−R, R) and B_1 u_1(z_1) = 0, then v_1(z_1) ≤ γ by (3.33) with n = 1 and o(1) = 0. Suppose z_1 ∈ (−R, R) and B_1 u_1(z_1) > 0. Then (3.36) holds for n = 1, that is,

−(σ²/2) v_1″ − ν v_1′ + δ v_1 ≤ f_1,  z ∈ U(z_1).    (3.41)

Since v_1′(z_1) = 0 and v_1″(z_1) ≤ 0, we get from (3.41) and (3.38) that v_1(z_1) ≤ C_ρ/δ. Thus (3.31) holds for n = 1. By (3.40), (3.31) is true for all n = 1, 2, ..., N. □

Lemma 3.4. Let R > |ln K| + 2/h. Then, for any ρ ∈ (|ln K|, R − 1/h], the following estimate is true:

|u_n″(z)| ≤ C_2 e^{2z},  a.e. z ∈ [−ρ, ρ], n = 0, 1, ..., N.    (3.42)

Proof. When n = 0, (3.42) is trivial; we shall suppose n ≥ 1 below. Since u_n ∈ W^{2,∞}([−R, R]), which coincides with C^{1,1}([−R, R]), u_n′ is Lipschitz continuous in [−R, R] and hence u_n″(z) is the classical derivative of u_n′ for almost every z ∈ [−R, R]. Set

X ≡ { z ∈ (−R, R) : u_n″(z) is the classical derivative of u_n′, n = 1, 2, ..., N }.    (3.43)

Then [−R, R] − X is a set of zero Lebesgue measure. Let w_n be the function defined by (3.1). If w_n(z) > w_{n−1}(z), then, by (2.1)–(2.3), u_n″ is continuous in a neighborhood of z and L_n u_n(z) = 0, that is,

u_n″(z) = (2/σ²) [ −µ u_n′(z) + rτ_n H_n(z) ],  if w_n(z) > w_{n−1}(z),    (3.44)

where H_n = (u_n − u_{n−1})/h. So, by (3.17) and (3.27), we have

u_n″(z) ≥ −C_2,  if w_n(z) > w_{n−1}(z), z ∈ [−R, R].    (3.45)

Suppose z ∈ X and w_n(z) = w_{n−1}(z) (1 ≤ n ≤ N). By (3.1)–(3.4), z ∈ (ln K, R] and z is a minimal point of w_n − w_{n−1}. So w_n′(z) = w_{n−1}′(z) and w_n″(z) ≥ w_{n−1}″(z). If n > 1 and w_{n−1}(z) = w_{n−2}(z), then we also have w_{n−1}″(z) ≥ w_{n−2}″(z) and hence w_n″(z) ≥ w_{n−2}″(z). By recursion, there exists an integer m ∈ [1, n] such that either

w_n″(z) ≥ w_0″(z) = 0;  or  w_n″(z) ≥ w_m″(z) and w_m(z) > w_{m−1}(z).    (3.46)

A simple calculation shows that

w_k″(z) = e^{γτ_k(e^z−K)} [ u_k″(z) + 2γτ_k e^z u_k′(z) + (γ²τ_k² e^{2z} + γτ_k e^z) u_k(z) ]    (3.47)

for k = 1, 2, ..., N, which, together with (3.3) and (3.17), gives

u_k″(z) ≥ e^{−γτ_k(e^z−K)} w_k″(z) − C_2 e^z,  k = 1, 2, ..., N.    (3.48)

If the first case in (3.46) happens, then u_n″(z) ≥ −C_2 e^z; while, if the other case in (3.46) happens, we can apply (3.45) for n = m and get u_m″(z) ≥ −C_2. So, by (3.47), (3.48), (3.3) and (3.17), we have

u_n″(z) ≥ e^{−γτ_n(e^z−K)} w_m″(z) − C_2 e^z
  ≥ e^{−γτ_n(e^z−K)} [ e^{γτ_m(e^z−K)} u_m″(z) − C_2(e^{2z} + e^z) ] − C_2 e^z
  ≥ −e^{−γ(n−m)h(e^z−K)} C_2 (e^{2z} + e^z + 1) − C_2 e^z
  ≥ −C_2 (e^{2z} + e^z + 1).


Thus, we get
$$ u_n''(z) \ge -C_2\bigl(e^{2z} + e^z + 1\bigr), \qquad \text{if } w_n(z) = w_{n-1}(z) \text{ and } z \in X. \tag{3.49} $$
Combining (3.45) and (3.49), we find
$$ u_n''(z) \ge -C_2\bigl(1 + e^{2z}\bigr), \qquad \text{a.e. } z \in [-R,R],\ n = 1,2,\dots,N, \tag{3.50} $$

for some larger constant $C_2$. On the other hand, we can rewrite (2.1) as
$$ u_n''(z) \le \frac{2}{\sigma^2}\bigl[-\mu u_n'(z) + r\tau_n H_n(z)\bigr], \qquad \text{a.e. } z \in [-\rho,\rho], $$
which, together with (3.17) and (3.27), leads to
$$ u_n''(z) \le C_2\bigl(1 + e^z\bigr), \qquad \text{a.e. } z \in [-\rho,\rho]. \tag{3.51} $$
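As a cross-check, the differentiation identity (3.47) used in this proof can be verified by finite differences; in the sketch below, u is an arbitrary smooth stand-in for $u_k$ and the parameter values are illustrative only.

```python
import math

gamma, tau, K = 0.5, 1.0, 1.0      # illustrative parameter values

def u(z):                          # smooth stand-in for u_k
    return math.sin(z)

def du(z):
    return math.cos(z)

def d2u(z):
    return -math.sin(z)

def w(z):
    # Transformation behind (3.47): w = exp(gamma*tau*(e^z - K)) * u.
    return math.exp(gamma * tau * (math.exp(z) - K)) * u(z)

def rhs(z):
    # Bracketed right-hand side of (3.47).
    E = math.exp(gamma * tau * (math.exp(z) - K))
    ez = math.exp(z)
    return E * (d2u(z) + 2 * gamma * tau * ez * du(z)
                + (gamma ** 2 * tau ** 2 * ez ** 2 + gamma * tau * ez) * u(z))

# Central second difference of w matches rhs up to O(step^2).
step = 1e-4
for z in (-0.5, 0.3, 0.8):
    w2 = (w(z + step) - 2 * w(z) + w(z - step)) / step ** 2
    assert abs(w2 - rhs(z)) < 1e-4 * (1 + abs(rhs(z)))
```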

Now (3.42) follows from (3.50) and (3.51) for some larger $C_2$. $\square$

Lemma 3.5. Let $R > |\ln K| + \frac{2}{h} + 1$. For any $\rho \in \bigl(|\ln K|,\, R - \frac{1}{h} - 1\bigr]$, the following estimate is true:
$$ h\sum_{n=1}^{N}\int_{-\rho}^{\rho}\Bigl(\frac{u_n' - u_{n-1}'}{h}\Bigr)^2 dz \le C_\rho. \tag{3.52} $$

Proof. By (2.1)–(2.3),
$$ \bigl[L_n u_n - L_{n-1}u_{n-1}\bigr]\cdot B_n u_n(z) \le 0, \qquad \text{a.e. } z \in [-R,R],\ n = 2,3,\dots,N. \tag{3.53} $$
Set $H_n = \frac{u_n - u_{n-1}}{h}$. Using (3.28), we can rewrite (3.53) as
$$ \Bigl[-\frac{h\sigma^2}{2}H_n'' - h\mu H_n' + r\tau_n\bigl(H_n - H_{n-1}\bigr) + rhH_{n-1}\Bigr]\bigl(hH_n + \Phi u_{n-1}\bigr) \le 0 \tag{3.54} $$
for almost every $z \in [-R,R]$ and $n = 2,3,\dots,N$. By (3.3), (3.27) and (3.29),
$$ H_{n-1} \ge 0, \qquad H_n \ge 0, \qquad 0 \le B_n u_n = hH_n + \Phi u_{n-1} \le C_2 h e^z, \qquad z \in [-R,R]. \tag{3.55} $$
Using (3.55), we find from (3.54) that
$$ \Bigl[-\frac{h\sigma^2}{2}H_n'' + r\tau_n\bigl(H_n - H_{n-1}\bigr)\Bigr]\bigl(hH_n + \Phi u_{n-1}\bigr) \le C_2 h^2 e^z\,\bigl|H_n'\bigr| \tag{3.56} $$

for almost every $z \in [-R,R]$ and $n = 2,3,\dots,N$. Let $\psi(z) \in C^\infty([-R,R])$ be a cutoff function such that
$$ \begin{cases} 0 \le \psi(z) \le 1, & z \in [-R,R];\\ \psi(z) = 0, & |z| > \rho + 1;\\ \psi(z) = 1, & |z| \le \rho;\\ |\psi'| \le C,\ |\psi''| \le C, & z \in [-R,R], \end{cases} \tag{3.57} $$
where $C$ is some absolute positive constant. Integrating by parts, we have
$$ \int_{-R}^{R} H_n''\bigl(hH_n + \Phi u_{n-1}\bigr)\psi^2\,dz = -\int_{-R}^{R} h\bigl(H_n'\bigr)^2\psi^2\,dz - \int_{-R}^{R} 2hH_n'H_n\psi\psi'\,dz - \int_{-R}^{R} H_n'\bigl(\Phi u_{n-1}\psi^2\bigr)'\,dz. $$
Using (3.3), (3.17), (3.27), (3.29), (3.57) and the inequality $2\alpha\beta \le \varepsilon\alpha^2 + \frac{\beta^2}{\varepsilon}$ for suitable $\varepsilon > 0$, we can get from the above equality that
$$ \int_{-R}^{R} H_n''\bigl(hH_n + \Phi u_{n-1}\bigr)\psi^2\,dz \le -\frac{1}{2}h\int_{-R}^{R}\bigl(H_n'\bigr)^2\psi^2\,dz + hC_\rho. \tag{3.58} $$
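A cutoff with the properties required in (3.57) can be built from the standard $e^{-1/s}$ bump; a minimal sketch follows. Since the transition always has width $1$ (from $\rho$ to $\rho+1$), the derivative bounds depend only on an absolute constant, matching the $C$ in (3.57).

```python
import math

def smooth_step(t):
    # C-infinity transition: 0 for t <= 0, 1 for t >= 1, monotone in between.
    def f(s):
        return math.exp(-1.0 / s) if s > 0 else 0.0
    return f(t) / (f(t) + f(1.0 - t))

def psi(z, rho):
    # Cutoff with the properties of (3.57): psi = 1 on |z| <= rho,
    # psi = 0 for |z| >= rho + 1, and 0 <= psi <= 1 everywhere.
    # (Near z = 0 the argument |z| - rho is negative, so psi is
    # identically 1 there and the kink of |z| does not spoil smoothness.)
    return 1.0 - smooth_step(abs(z) - rho)

rho = 2.0
assert psi(0.0, rho) == 1.0 and psi(-1.9, rho) == 1.0   # plateau
assert psi(3.0, rho) == 0.0 and psi(-3.5, rho) == 0.0   # support
assert all(0.0 <= psi(x / 10.0, rho) <= 1.0 for x in range(-40, 41))
```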



Since
$$ \bigl(H_n - H_{n-1}\bigr)H_n = \frac{1}{2}\bigl(H_n^2 - H_{n-1}^2\bigr) + \frac{1}{2}\bigl(H_n - H_{n-1}\bigr)^2 $$


and $\tau_n - \tau_{n-1} = h$, we have
$$ \sum_{n=2}^{N}\tau_n\bigl(H_n - H_{n-1}\bigr)H_n \ge \frac{1}{2}\Bigl(\tau_N H_N^2 - h\sum_{n=2}^{N-1}H_n^2 - \tau_2 H_1^2\Bigr). $$
Using (3.27) and (3.57), we find
$$ h\sum_{n=2}^{N}\int_{-R}^{R}\tau_n\bigl(H_n - H_{n-1}\bigr)H_n\,\psi^2\,dz \ge -hC_\rho. \tag{3.59} $$
Similarly, we have
$$ \begin{aligned} \sum_{n=2}^{N}\tau_n\bigl(H_n - H_{n-1}\bigr)u_{n-1} &= \sum_{n=2}^{N}\tau_n\bigl(H_n u_{n-1} - H_{n-1}u_{n-2}\bigr) - h\sum_{n=2}^{N}\tau_n H_{n-1}^2 \\ &\ge \tau_N H_N u_{N-1} - h\sum_{n=2}^{N}H_{n-1}u_{n-2} - \tau_2 H_1 u_0 - h\sum_{n=2}^{N}\tau_n H_{n-1}^2, \end{aligned} $$
which, together with (3.3), (3.27), (3.29) and (3.57), leads to
$$ \sum_{n=2}^{N}\int_{-R}^{R}\tau_n\bigl(H_n - H_{n-1}\bigr)\Phi u_{n-1}\,\psi^2\,dz \ge -hC_\rho. \tag{3.60} $$
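The two elementary summation facts feeding (3.59) — the pointwise identity and the Abel-summation lower bound — can be checked numerically; in the sketch below, H is a random stand-in for the values $H_n$ at a fixed $z$.

```python
import random

random.seed(0)
N, h = 20, 0.05
tau = [n * h for n in range(N + 1)]      # tau_n = n h, so tau_n - tau_{n-1} = h
H = [random.uniform(-1.0, 1.0) for _ in range(N + 1)]

# Pointwise identity behind (3.59):
for n in range(1, N + 1):
    lhs = (H[n] - H[n - 1]) * H[n]
    rhs = 0.5 * (H[n] ** 2 - H[n - 1] ** 2) + 0.5 * (H[n] - H[n - 1]) ** 2
    assert abs(lhs - rhs) < 1e-12

# Abel-summation lower bound, obtained by dropping the nonnegative square
# and summing by parts:
S = sum(tau[n] * (H[n] - H[n - 1]) * H[n] for n in range(2, N + 1))
lower = 0.5 * (tau[N] * H[N] ** 2
               - h * sum(H[n] ** 2 for n in range(2, N))
               - tau[2] * H[1] ** 2)
assert S >= lower - 1e-12
```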

Combining (3.56) and (3.58)–(3.60), we obtain
$$ \frac{\sigma^2}{4}\,h\sum_{n=2}^{N}\int_{-R}^{R}\bigl(H_n'\bigr)^2\psi^2\,dz \le C_\rho + C_2\,h\sum_{n=2}^{N}\int_{-R}^{R} e^z\bigl|H_n'\bigr|\psi^2\,dz. $$
Using the inequality $2\alpha\beta \le \varepsilon\alpha^2 + \frac{\beta^2}{\varepsilon}$ for suitable $\varepsilon > 0$, we obtain
$$ \frac{\sigma^2}{8}\,h\sum_{n=2}^{N}\int_{-R}^{R}\bigl(H_n'\bigr)^2\psi^2\,dz \le C_\rho. \tag{3.61} $$

For $n = 1$, we get from (2.1)–(2.3) that
$$ L_1 u_1 \cdot B_1 u_1 = 0, \qquad z \in [-R,R]. $$
Since $u_0 \equiv -1$, we have
$$ \Bigl[-\frac{h\sigma^2}{2}H_1'' - h\mu H_1' + rhH_1\Bigr]\cdot\bigl(hH_1 - \Phi\bigr) = 0, \qquad z \in [-R,R]. $$
Using (3.27), (3.29) and (3.57), we can easily derive that
$$ \int_{-R}^{R}\bigl(H_1'\bigr)^2\psi^2\,dz \le C_\rho. \tag{3.62} $$

Now (3.52) follows from (3.61), (3.62) and (3.57). $\square$

Lemma 3.6. Let $R > |\ln K| + \frac{2}{h} + 2$. Then, for any $\rho \in \bigl(|\ln K|,\, R - \frac{1}{h} - 1\bigr]$, the following estimate is true:
$$ \max_{1\le n\le N}\ \tau_n\int_{-\rho}^{\rho}\bigl(H_n'\bigr)^2\,dz + h\sum_{n=2}^{N}\int_{-\rho}^{\rho}\tau_n^2\Bigl(\frac{H_n - H_{n-1}}{h}\Bigr)^2 dz \le C_\rho, \tag{3.63} $$
where $H_n = \frac{1}{h}(u_n - u_{n-1})$.

Proof. By (2.1)–(2.3),
$$ \bigl[L_n u_n - L_{n-1}u_{n-1}\bigr]\cdot\bigl[B_n u_n(z) - B_{n-1}u_{n-1}(z)\bigr] \le 0, \qquad \text{a.e. } z \in [-R,R] $$
for $n = 2,3,\dots,N$, that is,
$$ \Bigl[-\frac{\sigma^2}{2}H_n'' - \mu H_n' + r\tau_{n-1}\,\frac{H_n - H_{n-1}}{h} + rH_n\Bigr]\cdot\bigl[H_n - H_{n-1} + \Phi H_{n-1}\bigr] \le 0 \tag{3.64} $$
for almost every $z \in [-R,R]$ and $n = 2,3,\dots,N$.
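The algebra behind (3.64) — rewriting both operator differences in terms of $H_n$ — can be checked numerically; the values of $u_n$ and $\Phi$ below are arbitrary stand-ins at a fixed point $z$.

```python
# Stand-in values at a fixed point z; Phi plays the role of 1 - e^{-h S(z)}.
h, r = 0.1, 0.05
n = 5
tau_n, tau_nm1 = n * h, (n - 1) * h
u_n, u_nm1, u_nm2 = -0.3, -0.45, -0.55
Phi = 0.2

H_n = (u_n - u_nm1) / h
H_nm1 = (u_nm1 - u_nm2) / h

# Zeroth-order part of L_n u_n - L_{n-1} u_{n-1}: the derivative terms pass
# to -sigma^2/2 H_n'' - mu H_n' unchanged (after dividing by h), while
# r(tau_n H_n - tau_{n-1} H_{n-1}) = r tau_{n-1}(H_n - H_{n-1}) + r h H_n.
lhs = r * tau_n * H_n - r * tau_nm1 * H_nm1
rhs = r * tau_nm1 * (H_n - H_nm1) + r * h * H_n
assert abs(lhs - rhs) < 1e-12

# Obstacle part: B_n u_n - B_{n-1} u_{n-1} with B_n u_n = h H_n + Phi u_{n-1}
# equals h (H_n - H_{n-1} + Phi H_{n-1}), the second factor of (3.64).
lhs2 = (h * H_n + Phi * u_nm1) - (h * H_nm1 + Phi * u_nm2)
rhs2 = h * (H_n - H_nm1 + Phi * H_nm1)
assert abs(lhs2 - rhs2) < 1e-12
```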


We rewrite (3.64) as follows:
$$ rh\tau_{n-1}\Bigl(\frac{H_n - H_{n-1}}{h}\Bigr)^2 + \Bigl[-\frac{\sigma^2}{2}H_n'' - \mu H_n' + rH_n\Bigr]\bigl(H_n - H_{n-1}\bigr) \le I, \tag{3.65} $$
where
$$ I \equiv -\Bigl[-\frac{\sigma^2}{2}H_n'' - \mu H_n' + rH_n + r\tau_{n-1}\,\frac{H_n - H_{n-1}}{h}\Bigr]\Phi H_{n-1}. \tag{3.66} $$

Let $\psi$ be the cutoff function defined by (3.57). Integrating by parts, we have
$$ -\int_{-R}^{R} H_n''\bigl(H_n - H_{n-1}\bigr)\psi^2\,dz = \int_{-R}^{R} H_n'\bigl(H_n' - H_{n-1}'\bigr)\psi^2\,dz + 2\int_{-R}^{R} H_n'\bigl(H_n - H_{n-1}\bigr)\psi\psi'\,dz. $$
So,
$$ \begin{aligned} -\frac{\sigma^2}{2}\int_{-R}^{R} H_n''\bigl(H_n - H_{n-1}\bigr)\psi^2\,dz \ge{}& \frac{\sigma^2}{4}\int_{-R}^{R}\Bigl[\bigl(H_n'\bigr)^2 - \bigl(H_{n-1}'\bigr)^2\Bigr]\psi^2\,dz - \frac{rh\tau_{n-1}}{8}\int_{-R}^{R}\Bigl(\frac{H_n - H_{n-1}}{h}\Bigr)^2\psi^2\,dz \\ &- \frac{32h\sigma^4}{r\tau_{n-1}}\int_{-R}^{R}\bigl(H_n'\bigr)^2(\psi')^2\,dz. \end{aligned} \tag{3.67} $$

Similarly,
$$ \int_{-R}^{R} H_n\bigl(H_n - H_{n-1}\bigr)\psi^2\,dz \ge \frac{1}{2}\int_{-R}^{R}\bigl(H_n^2 - H_{n-1}^2\bigr)\psi^2\,dz, \tag{3.68} $$

and
$$ -\mu\int_{-R}^{R} H_n'\bigl(H_n - H_{n-1}\bigr)\psi^2\,dz \ge -\frac{rh\tau_{n-1}}{8}\int_{-R}^{R}\Bigl(\frac{H_n - H_{n-1}}{h}\Bigr)^2\psi^2\,dz - \frac{32h\mu^2}{r\tau_{n-1}}\int_{-R}^{R}\bigl(H_n'\bigr)^2\psi^2\,dz. \tag{3.69} $$

Combining (3.65)–(3.69), we get m 3r −

4

hτn2−1





Hn − Hn−1

∫ m −

2

h

−R

n =2

≤ C2

R

ψ 2 dz +

σ2 4

τm−1

  (Hn ) · ψ 2 + (ψ ′ )2 dz + C2

h

R



−R

n =2

R

(Hm′ )2 ψ 2 dz −R

R

′ 2



m −  ′ 2  (H1 ) + H12 ψ 2 dz + τn−1

−R

n=2



R

I ψ 2 dz ,

(3.70)

−R

for all $m = 2,3,\dots,N$. By (3.66) and (3.29), the last term on the right-hand side of (3.70) can be estimated as follows:
$$ \begin{aligned} \sum_{n=2}^{m}\tau_{n-1}\int_{-R}^{R} I\,\psi^2\,dz \le{}& \frac{r}{2}\sum_{n=2}^{m} h\tau_{n-1}^2\int_{-R}^{R}\Bigl(\frac{H_n - H_{n-1}}{h}\Bigr)^2\psi^2\,dz \\ &+ C_2\,h\sum_{n=2}^{m}\int_{-R}^{R}\bigl[(H_n')^2 + (H_{n-1}')^2 + H_n^2 + H_{n-1}^2\bigr]\cdot\bigl[\psi^2 + (\psi')^2\bigr]e^{2z}\,dz. \end{aligned} \tag{3.71} $$

Using (3.27), (3.52) and (3.57), we can derive from (3.70)–(3.71) that
$$ \sum_{n=2}^{m} h\tau_{n-1}^2\int_{-R}^{R}\Bigl(\frac{H_n - H_{n-1}}{h}\Bigr)^2\psi^2\,dz + \tau_{m-1}\int_{-R}^{R}\bigl(H_m'\bigr)^2\psi^2\,dz \le C_\rho \tag{3.72} $$
for all $m = 2,3,\dots,N$. Since
$$ \frac{\tau_{n-1}}{\tau_n} = \frac{n-1}{n} \ge \frac{1}{2} \qquad \text{for } n \ge 2, $$
(3.63) follows from (3.72) and (3.62). $\square$
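The discrete scheme (2.1)–(2.3) that these estimates control — at each time level, $L_n u_n \ge 0$ and $u_n \ge e^{-hS(z)}u_{n-1}$ with vanishing product — can be realized numerically by a projected Gauss–Seidel iteration. The sketch below is illustrative only: the parameter values are arbitrary, and fixed Dirichlet endpoints stand in for the paper's boundary conditions (2.4)–(2.6), which are not reproduced in this section. It also implements the piecewise-linear time interpolation used in Section 4, (4.1).

```python
import math

# Projected Gauss-Seidel for the discrete complementarity problem at level n:
#   L_n u_n >= 0,   u_n >= psi_n := e^{-h S(z)} u_{n-1},   (L_n u_n)(u_n - psi_n) = 0,
# with L_n u_n = -sigma^2/2 u_n'' - mu u_n' + r tau_n (u_n - u_{n-1})/h and
# S(z) = gamma (e^z - K)^+.  All parameter values below are illustrative.

sigma, mu, r, gamma, K = 0.4, 0.05, 0.03, 0.5, 1.0
R, M = 2.0, 81                      # spatial grid on [-R, R]
N, A = 20, 1.0                      # number of time levels, horizon
h, dz = A / N, 2 * R / (M - 1)
grid = [-R + i * dz for i in range(M)]
S = [gamma * max(math.exp(zi) - K, 0.0) for zi in grid]

def step(u_prev, tau_n):
    """One time level by projected Gauss-Seidel (PSOR with relaxation 1)."""
    a = -sigma**2 / (2 * dz**2) + mu / (2 * dz)     # coefficient of u_{i-1}
    c = -sigma**2 / (2 * dz**2) - mu / (2 * dz)     # coefficient of u_{i+1}
    b = sigma**2 / dz**2 + r * tau_n / h            # coefficient of u_i
    psi = [math.exp(-h * S[i]) * u_prev[i] for i in range(M)]
    u = u_prev[:]                                   # endpoints stay fixed (simplified BC)
    for _ in range(200):
        for i in range(1, M - 1):
            gs = (r * tau_n / h * u_prev[i] - a * u[i - 1] - c * u[i + 1]) / b
            u[i] = max(gs, psi[i])                  # projection onto {u >= psi}
    return u

levels = [[-1.0] * M]                               # u_0 == -1
for n in range(1, N + 1):
    levels.append(step(levels[-1], n * h))

def u_h(i, t):
    """Piecewise-linear time interpolation of (4.1) at grid point i."""
    if t <= 0:
        return levels[0][i]
    n = min(int(math.ceil(t / h - 1e-12)), N)
    return levels[n][i] - (n * h - t) / h * (levels[n][i] - levels[n - 1][i])

# Bounds of (4.2) and monotonicity in n (forced by the obstacle) hold:
assert all(-1.0 <= u_h(M // 2, j * A / 10) <= 0.0 for j in range(11))
assert all(levels[N][i] >= levels[0][i] for i in range(M))
```

Projecting onto $\{u \ge \psi\}$ after every Gauss–Seidel update is what enforces the obstacle constraint at the fixed point; the centered-difference matrix is diagonally dominant for these parameter values, so the sweep converges.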


4. The existence of the solution

Define
$$ u^h(z,\tau) = \begin{cases} u_n - \dfrac{\tau_n - \tau}{h}\bigl(u_n - u_{n-1}\bigr), & \tau_{n-1} < \tau \le \tau_n,\ n = 1,2,\dots,N,\\[2pt] u_0, & \tau = \tau_0 = 0, \end{cases} \tag{4.1} $$
where $\{u_n\}_{n=0}^{N}$ is the solution of (2.1)–(2.6) provided by Lemma 2.1. In the sequel, we shall use the notations $u^h_\tau \equiv (u^h)_\tau$, $u^h_z \equiv (u^h)_z$, $u^h_{zz} \equiv (u^h)_{zz}$ and $u^h_{z\tau} \equiv (u^h)_{z\tau}$ for simplicity.

Lemma 4.1. Let $R > |\ln K| + \frac{2}{h} + 2$, $\rho \in \bigl(|\ln K|,\, R - \frac{1}{h} - 1\bigr]$ and let $u^h(z,\tau)$ be the function defined by (4.1). Set $Q_R \equiv [-R,R]\times(0,A]$ and $Q_\rho \equiv [-\rho,\rho]\times(0,A]$. Then $u^h \in W^{2,1}_\infty(Q_R)\cap C(\bar Q_R)$ and $u^h_{z\tau}\in L^2(Q_\rho)$. Moreover, the following estimates hold:
$$ -1 \le u^h(z,\tau) < 0, \qquad 0 \le u^h_z(z,\tau) \le C_0, \qquad (z,\tau)\in \bar Q_R, \tag{4.2} $$
$$ \|u^h_\tau\|_{L^\infty(Q_\rho)} \le C_\rho, \qquad \|u^h_{zz}\|_{L^\infty(Q_\rho)} \le C_\rho, \qquad \|u^h_{z\tau}\|_{L^2(Q_\rho)} \le C_\rho, \tag{4.3} $$
$$ \bigl\|\sqrt{\tau}\,u^h_{z\tau}\bigr\|_{L^\infty([0,A];\,L^2[-\rho,\rho])} \le C_\rho, \tag{4.4} $$
$$ \iint_{Q_\rho} \tau^2\Bigl[\frac{u^h_\tau(z,\tau) - u^h_\tau(z,\tau - h)}{h}\Bigr]^2 dz\,d\tau \le C_\rho. \tag{4.5} $$



Here, the constants $C_0$ and $C_\rho$ are defined at the beginning of Section 3.

Proof. Lemma 4.1 is an easy consequence of the estimates derived in Section 3. In fact, by some simple calculations, (4.2) follows from Lemmas 3.1–3.2, (4.3) follows from Lemmas 3.3–3.5, and (4.4)–(4.5) follow from Lemma 3.6. $\square$

We shall show below that the function $u^h$ converges, possibly after passing to a subsequence, to a solution of (1.21)–(1.25) as $h \downarrow 0$. To this end, we need the following auxiliary lemma.

Lemma 4.2. Let $Y = [z_1,z_2]\times[\tau_1,\tau_2]$ be a rectangle and let $v(z,\tau)\in H^1(Y)$ satisfy

$$ \|v\|_{L^2(Y)} + \|v_\tau\|_{L^2(Y)} + \|v_z\|_{L^\infty([\tau_1,\tau_2];\,L^2[z_1,z_2])} \le M < +\infty \tag{4.6} $$
for some positive constant $M$. Then, after possibly changing the values of $v(z,\tau)$ on a set of zero Lebesgue measure in $Y$, $v(z,\tau)\in C^{\frac12,\frac14}(Y)$ and the following estimate holds:
$$ \|v\|_{C^{\frac12,\frac14}(Y)} \le CM, \tag{4.7} $$

where $C$ is some positive constant depending only on $z_2 - z_1$ and $\tau_2 - \tau_1$.

Proof. By a standard approximation argument, we only need to prove (4.7) for $v \in C^1(Y)$. For any $(x,\tau)\in Y$ and any $(y,\tau)\in Y$, we have
$$ \bigl|v(x,\tau) - v(y,\tau)\bigr| = \Bigl|\int_y^x v_z(z,\tau)\,dz\Bigr| \le |x-y|^{\frac12}\max_{\tau\in[\tau_1,\tau_2]}\Bigl(\int_{z_1}^{z_2}\bigl|v_z(z,\tau)\bigr|^2\,dz\Bigr)^{\frac12}. $$
Using (4.6), we get
$$ \bigl|v(x,\tau) - v(y,\tau)\bigr| \le M\,|x-y|^{\frac12}, \qquad \forall\,\tau\in[\tau_1,\tau_2],\ \forall\,x,y\in[z_1,z_2]. \tag{4.8} $$
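The Cauchy–Schwarz step producing (4.8) can be illustrated numerically; v below is an arbitrary smooth test function on $Y = [0,1]\times[0,1]$, and the $L^2$ norm is approximated by a plain trapezoid rule.

```python
import math

# v is an arbitrary smooth test function on Y = [0,1] x [0,1].
def v(zz, t):
    return math.sin(3.0 * zz) * math.exp(-t)

def vz(zz, t):
    return 3.0 * math.cos(3.0 * zz) * math.exp(-t)

def l2_vz(t, n=2000):
    # Trapezoid rule for (integral of |v_z(., t)|^2 over [0, 1])^{1/2}.
    s = sum((0.5 if i in (0, n) else 1.0) * vz(i / n, t) ** 2 for i in range(n + 1))
    return math.sqrt(s / n)

# Max over tau of the L^2 norm of v_z: the constant appearing in (4.8).
Mz = max(l2_vz(j / 50.0) for j in range(51))

# Spot-check |v(x,tau) - v(y,tau)| <= |x - y|^{1/2} * Mz on sample pairs.
for t in (0.0, 0.5, 1.0):
    for x, y in ((0.1, 0.9), (0.2, 0.25), (0.0, 1.0)):
        assert abs(v(x, t) - v(y, t)) <= math.sqrt(abs(x - y)) * Mz + 1e-6
```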

Similarly, for any $(x,\tau)\in Y$ and any $(x,t)\in Y$, we can show
$$ \bigl|v(x,\tau) - v(x,t)\bigr| \le CM\,|\tau - t|^{\frac14}, \qquad \forall\,\tau,t\in[\tau_1,\tau_2],\ \forall\,x\in[z_1,z_2]. \tag{4.9} $$
In order to prove (4.9), we may assume, without loss of generality, that $z_1 \le x \le \frac{z_1+z_2}{2}$ and $\tau > t$, so that
$$ 0 < \tau - t \le \tau_2 - \tau_1 \le \hat C\,(z_2 - x)^2, \qquad \text{where } \hat C = \frac{4(\tau_2 - \tau_1)}{(z_2 - z_1)^2}. $$
Then there is an $x^* \in (x, z_2]$ such that
$$ 0 < \tau - t = \hat C\,(x^* - x)^2. \tag{4.10} $$


By (4.6), we have
$$ \int_x^{x^*}\bigl|v(z,\tau) - v(z,t)\bigr|\,dz \le \int_x^{x^*}\!\!\int_t^{\tau}\bigl|v_\tau(z,s)\bigr|\,ds\,dz \le M\bigl[(x^* - x)(\tau - t)\bigr]^{\frac12}. $$
Using (4.10) and the mean value theorem for integrals, we find that there is a $\xi\in[x,x^*]$ such that
$$ \bigl|v(\xi,\tau) - v(\xi,t)\bigr| \le \hat C^{\frac14}M\,(\tau - t)^{\frac14}. \tag{4.11} $$
Noticing that $|\xi - x| \le |x^* - x| = \hat C^{-\frac12}|\tau - t|^{\frac12}$, we get from (4.8) and (4.11) that
$$ \bigl|v(x,\tau) - v(x,t)\bigr| \le \bigl|v(x,\tau) - v(\xi,\tau)\bigr| + \bigl|v(\xi,\tau) - v(\xi,t)\bigr| + \bigl|v(\xi,t) - v(x,t)\bigr| \le CM\,(\tau - t)^{\frac14}, $$
where $C$ is some positive constant depending only on $\hat C$. Thus, (4.9) is true. Moreover, we can easily derive from (4.6) that
$$ \bigl|v(z,\tau)\bigr| \le \tilde C M, \qquad \forall\,(z,\tau)\in Y, \tag{4.12} $$
for some positive constant $\tilde C$ depending only on $z_2 - z_1$ and $\tau_2 - \tau_1$. Now (4.7) follows from (4.8), (4.9) and (4.12). $\square$

Proof of Theorem 1.1. By (4.2)–(4.3),
$$ \|u^h\|_{C(\bar Q_\rho)} + \|u^h_z\|_{L^\infty(Q_\rho)} + \|u^h_\tau\|_{L^\infty(Q_\rho)} \le C_\rho. $$
So $u^h$ is Lipschitz continuous in $\bar Q_\rho$ and
$$ \|u^h\|_{C^{0,1}(\bar Q_\rho)} \le C_\rho. \tag{4.13} $$
Also by (4.2)–(4.3),
$$ \|u^h_z\|_{L^\infty(Q_\rho)} + \|u^h_{z\tau}\|_{L^2(Q_\rho)} + \|u^h_{zz}\|_{L^\infty(Q_\rho)} \le C_\rho. $$
Applying Lemma 4.2 to $v = u^h_z$ and $Y = Q_\rho$, we find $u^h_z \in C^{\frac12,\frac14}(\bar Q_\rho)$ and
$$ \|u^h_z\|_{C^{\frac12,\frac14}(\bar Q_\rho)} \le C_\rho. \tag{4.14} $$

Based on (4.3)–(4.4) and (4.13)–(4.14), we can find some function $u(z,\tau)$ defined on $\bar Q_\infty$, satisfying
$$ u \in W^{2,1}_\infty(Q_\rho)\cap C(\bar Q_\infty), \qquad u_{z\tau}\in L^2(Q_\rho) \tag{4.15} $$
and, as $h\to 0$ (possibly after passing to a subsequence),
$$ \begin{cases} u^h \to u & \text{in } C(\bar Q_\rho),\\ u^h_z \to u_z & \text{in } C(\bar Q_\rho),\\ u^h_\tau \rightharpoonup u_\tau & \text{in } w^*\text{-}L^\infty(Q_\rho),\\ u^h_{zz} \rightharpoonup u_{zz} & \text{in } w^*\text{-}L^\infty(Q_\rho),\\ u^h_{z\tau} \rightharpoonup u_{z\tau} & \text{in } w\text{-}L^2(Q_\rho). \end{cases} \tag{4.16} $$
Moreover, by a standard argument, we can find from (4.16) and (4.5) that $u_{\tau\tau}\in L^2(Q_{\epsilon\rho})$ and
$$ \|u_{\tau\tau}\|_{L^2(Q_{\epsilon\rho})} \le \frac{1}{\epsilon}\,C_\rho. \tag{4.17} $$
By (4.16), (4.3) and (4.4), we also have
$$ \|u_\tau\|_{L^\infty(Q_\rho)} \le C_\rho, \qquad \|u_{z\tau}\|_{L^\infty([\epsilon,A];\,L^2[-\rho,\rho])} \le \frac{1}{\sqrt{\epsilon}}\,C_\rho. \tag{4.18} $$
Thus, $v = u_\tau$ satisfies the conditions of Lemma 4.2 with $Y = Q_{\epsilon\rho}$ and $M = \frac{1}{\epsilon}C_\rho$ for some $C_\rho$. So, Lemma 4.2 gives
$$ u_\tau \in C^{\frac12,\frac14}(Q_{\epsilon\rho}). \tag{4.19} $$
We shall show below that the limit function $u$ is a solution of (1.21)–(1.25) and satisfies (1.26)–(1.29).


(1.26)–(1.27) follow from (4.15), (4.17) and (4.19); (1.28) follows from (4.16) and (4.2); and (1.29) can be easily derived from (4.16), (4.1) and (3.4). By (4.16), (4.1), (2.1)–(2.3) and (2.6), it is not difficult to verify that $u$ satisfies (1.21)–(1.24). For brevity, we only give a proof of (1.23) below. In terms of (2.1)–(2.2) and (4.1), we can write
$$ \mathcal{L}u^h(z,\tau) = J_1(z,\tau) + I_1(z,\tau), \qquad (z,\tau)\in Q_R, \tag{4.20} $$
$$ \mathcal{B}u^h(z,\tau) = J_2(z,\tau) + I_2(z,\tau), \qquad (z,\tau)\in Q_R, \tag{4.21} $$
where
$$ \begin{cases} J_1(z,\tau) \equiv L_n u_n = -\dfrac{\sigma^2}{2}u_n'' - \mu u_n' + r\tau_n\dfrac{u_n - u_{n-1}}{h},\\[2pt] I_1(z,\tau) \equiv r[\tau - \tau_n]u^h_\tau - \dfrac{\sigma^2}{2}[\tau - \tau_n]u^h_{\tau zz} - \mu[\tau - \tau_n]u^h_{\tau z},\\[2pt] J_2(z,\tau) \equiv B_n u_n = \dfrac{1}{h}\bigl[u_n - e^{-hS(z)}u_{n-1}\bigr],\\[2pt] I_2(z,\tau) \equiv \Bigl[\dfrac{e^{-hS(z)} - 1}{h} + S(z)\Bigr]u_{n-1} + S(z)\bigl[h - (\tau_n - \tau)\bigr]u^h_\tau,\\[2pt] S(z) \equiv \gamma\bigl(e^z - K\bigr)^+, \end{cases} \tag{4.22} $$
for $\tau\in(\tau_{n-1},\tau_n]$ and $n = 1,2,\dots,N$. By (2.3), $J_1 J_2 \equiv 0$ in $Q_R$. So,
$$ \mathcal{L}u^h\cdot\mathcal{B}u^h = J_1 I_2 + J_2 I_1 + I_1 I_2, \qquad (z,\tau)\in Q_R. \tag{4.23} $$
For any fixed function $\varphi(z,\tau)\in C_0^\infty(Q_\infty)$ satisfying $\varphi\ge 0$ in $Q_\infty$, we can take $R$ and $\rho$ large enough that
$$ \operatorname{supp}\varphi \equiv \bigl\{\,(z,\tau)\in Q_\infty \bigm| \varphi(z,\tau)\ne 0\,\bigr\} \subset Q_\rho. \tag{4.24} $$
Then it is easy to verify from (4.2)–(4.3) and Lemmas 3.1–3.6 that the first term on the right-hand side of (4.23) can be estimated by
$$ J_1 I_2\,\varphi \le hC_\rho, \qquad \text{a.e. in } Q_R. \tag{4.25} $$
It is easy to see from (4.22), (4.2) and (3.3) that the third term on the right-hand side of (4.23) satisfies
$$ I_1 I_2\,\varphi \le h^2 C_\rho\bigl[(u^h_\tau)^2 + (u^h_{\tau z})^2 + 1\bigr]\varphi - \frac{\sigma^2(\tau - \tau_n)}{2}\,I_2\,\varphi\,u^h_{\tau zz} \qquad \text{in } Q_R. \tag{4.26} $$
Noting that
$$ \int_{-R}^{R} I_2\,\varphi\,u^h_{\tau zz}\,dz = -\int_{-R}^{R}\bigl(I_{2,z}\,\varphi + I_2\,\varphi_z\bigr)u^h_{\tau z}\,dz, \qquad \tau\in[0,A], $$
we can obtain from (4.26), (4.24), (4.2) and (3.3) that
$$ \iint_{Q_R} I_1 I_2\,\varphi\,dz\,d\tau \le hC_\rho\iint_{Q_R}\bigl[(u^h_\tau)^2 + (u^h_{\tau z})^2 + 1\bigr]dz\,d\tau, $$
which, together with (4.3), leads to
$$ \iint_{Q_R} I_1 I_2\,\varphi\,dz\,d\tau \le hC_\rho. \tag{4.27} $$
Similarly, we can show that the second term on the right-hand side of (4.23) satisfies
$$ \iint_{Q_R} J_2 I_1\,\varphi\,dz\,d\tau \le hC_\rho. \tag{4.28} $$
Combining (4.23), (4.25) and (4.27)–(4.28), we obtain
$$ \iint_{Q_R}\mathcal{L}u^h\cdot\mathcal{B}u^h\cdot\varphi\,dz\,d\tau \le hC_\rho, \qquad \forall\,\varphi\in C_0^\infty(Q_\infty),\ \varphi\ge 0 \text{ in } Q_\infty, \tag{4.29} $$
with $R$ and $\rho$ large enough that (4.24) holds.


On the other hand, we have
$$ \iint_{Q_R}\mathcal{L}u^h\cdot\mathcal{B}u^h\cdot\varphi\,dz\,d\tau = \iint_{Q_R}\Bigl[r\tau u^h_\tau - \frac{\sigma^2}{2}u^h_{zz} - \mu u^h_z\Bigr]\bigl[u^h_\tau + \gamma(e^z - K)^+u^h\bigr]\varphi\,dz\,d\tau $$
$$ \begin{aligned} ={}& \iint_{Q_R}\Bigl[r\tau u^h_\tau - \frac{\sigma^2}{2}u^h_{zz} - \mu u^h_z\Bigr]\gamma(e^z - K)^+u^h\,\varphi\,dz\,d\tau + \frac{\sigma^2}{2}\iint_{Q_R}\bigl(u^h_z u^h_{z\tau}\varphi + u^h_z u^h_\tau\varphi_z\bigr)dz\,d\tau \\ &- \mu\iint_{Q_R}u^h_z u^h_\tau\,\varphi\,dz\,d\tau + \iint_{Q_R} r\tau\bigl(u^h_\tau\bigr)^2\varphi\,dz\,d\tau. \end{aligned} $$
Letting $h\downarrow 0$ (hence $R\to+\infty$) in the last equality and using (4.16), (4.29) and the inequality
$$ \iint_{Q_\infty}(u_\tau)^2\varphi\,dz\,d\tau \le \liminf_{h\to 0}\iint_{Q_\infty}\bigl(u^h_\tau\bigr)^2\varphi\,dz\,d\tau, \qquad \forall\,\varphi\in C_0^\infty(Q_\infty),\ \varphi\ge 0 \text{ in } Q_\infty, $$
we find
$$ \iint_{Q_\infty}\mathcal{L}u\cdot\mathcal{B}u\cdot\varphi\,dz\,d\tau \le 0, \qquad \forall\,\varphi\in C_0^\infty(Q_\infty),\ \varphi\ge 0 \text{ in } Q_\infty. $$
So $\mathcal{L}u\cdot\mathcal{B}u \le 0$ almost everywhere in $Q_\infty$, which, together with (1.21)–(1.22), yields $\mathcal{L}u\cdot\mathcal{B}u = 0$ almost everywhere in $Q_\infty$. Thus, (1.23) is true. The proof of Theorem 1.1 is complete. $\square$

Acknowledgments

Supported by NSFC grant No. 10671103 and NSFC grant No. 11001142. Yu Wanghui was also supported by Jiangsu SFC Project No. BK2008155.