Estimation for discretely observed continuous state branching processes with immigration

Statistics and Probability Letters 81 (2011) 1104–1111

Jianhui Huang^a, Chunhua Ma^b,*, Cai Zhu^a

a Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong
b School of Mathematical Sciences and LPMC, Nankai University, Tianjin 300071, PR China

Article history: Received 9 November 2010; received in revised form 1 March 2011; accepted 3 March 2011; available online 10 March 2011.

Abstract. We study the problem of parameter estimation for continuous state branching processes with immigration, observed at discrete time points. Weighted conditional least square estimators (WCLSEs) are used for the drift parameters. Under suitable moment conditions, the asymptotic distributions of the WCLSEs are obtained in the supercritical, subcritical and critical cases. © 2011 Elsevier B.V. All rights reserved.

MSC: primary 62F12, 62M05; secondary 60G52, 60J75

Keywords: Continuous state branching process; Poisson random measure; Weighted conditional least square estimator; Weak convergence

1. Introduction

The general concept of CBI processes (continuous state branching processes with immigration) was first introduced by Kawazu and Watanabe (1971). Such processes arise as high density limits of a sequence of Galton–Watson branching processes with immigration and have been widely used in the study of population biology. Let R_+ = [0, ∞). Let m(dξ) and µ(dξ) be σ-finite measures supported by R_+ \ {0} such that

$$\int_0^\infty \xi\, m(d\xi) + \int_0^\infty (\xi \wedge \xi^2)\, \mu(d\xi) < \infty. \tag{1.1}$$

Let B_t be a standard Brownian motion, N_0(ds, dξ) a Poisson random measure on (0, ∞) × R_+ with intensity ds m(dξ), and N_1(ds, du, dξ) a Poisson random measure on (0, ∞) × (0, ∞) × R_+ with intensity ds du µ(dξ). Suppose that B, N_0 and N_1 are independent of each other. Following Dawson and Li (2006), a CBI process can be defined as follows:

∗ Corresponding author. E-mail addresses: [email protected] (J. Huang), [email protected] (C. Ma).
0167-7152/$ – see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.spl.2011.03.004

$$X_t = x_0 + \int_0^t (a + bX_s)\,ds + \int_0^t \sigma\sqrt{X_s}\,dB(s) + \int_0^t\!\int_{\mathbb{R}_+} \xi\, N_0(ds, d\xi) + \int_0^t\!\int_0^{X_{s-}}\!\int_{\mathbb{R}_+} \xi\, \tilde N_1(ds, du, d\xi), \tag{1.2}$$

where x_0 ∈ R_+, a ≥ 0, b ∈ R, σ ≥ 0, and Ñ_1(ds, du, dξ) = N_1(ds, du, dξ) − ds du µ(dξ). The above process is also called a CB process (without immigration) if a = 0 and m = 0 (i.e. N_0 = 0). The process X_t is called supercritical, subcritical or critical according as b > 0, b < 0, or b = 0. This classification also corresponds to different asymptotic behaviors of X_t as t → ∞. Roughly speaking, if b > 0, the process explodes exponentially with rate b. If b < 0, the process without immigration goes to 0 almost surely, but immigration prevents extinction and thus yields ergodic behavior under mild moment conditions. In the critical case, a weak convergence result holds (see Theorem 2.3), which implies that X_t grows linearly. We refer to Pinsky (1971) and Li (2000) for details. Despite originating from biology, the CBI process has recently become a fundamental tool in financial modeling due to its affine term structure (ATS). Filipović (2001) proved that all (including non-continuous) non-negative Markov short rate processes yielding an ATS coincide with the class of CBI processes. In particular, when b < 0, X_t defined by (1.2) is known as the Cox–Ingersoll–Ross (CIR) model with jumps, also called the JCIR model; see Duffie and Garleanu (2001).

Assume that the process X_t is observed at equidistant time points {t_k = kΔt, k = 0, …, n} for fixed Δt, and that the only unknown quantities in (1.2) are the drift parameters a and b. The objective of this paper is the estimation of a and b based on the sampling data {X_{t_k}}_{k=1}^n. Following the approach to estimation in discrete-time and -space branching processes with immigration in Wei and Winnicki (1990), we choose the weighted conditional least square estimators (WCLSEs). Throughout the paper, we need the following condition:

$$\int_1^\infty \xi^2\, m(d\xi) + \int_1^\infty \xi^2\, \mu(d\xi) < \infty. \tag{1.3}$$

Then X_t has a finite second moment. Let

$$l_{01} = \int_0^\infty \xi\, m(d\xi), \quad l_{02} = \int_0^\infty \xi^2\, m(d\xi), \quad l_{11} = \int_0^\infty \xi\, \mu(d\xi), \quad l_{12} = \int_0^\infty \xi^2\, \mu(d\xi).$$

We also assume that

$$a + l_{01} > 0 \quad\text{and}\quad \sigma^2 + l_{12} > 0. \tag{1.4}$$
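The dynamics (1.2) can be simulated by an Euler scheme once concrete Lévy measures are chosen. The sketch below is ours, not part of the paper: it assumes the hypothetical finite-activity exponential measures m(dξ) = c_0 e^{−ξ}dξ and µ(dξ) = c_1 e^{−ξ}dξ, so both jump parts are compound Poisson with Exp(1) sizes and conditions (1.1), (1.3) and (1.4) hold.

```python
import numpy as np

def simulate_cbi(x0=1.0, a=0.3, b=-0.5, sigma=0.4,
                 c0=0.2, c1=0.3, T=50.0, h=0.01, seed=1):
    """Euler sketch of the CBI equation (1.2) with illustrative jump
    measures m(dxi) = c0*exp(-xi)dxi and mu(dxi) = c1*exp(-xi)dxi
    (hypothetical choices; both have total mass c0 resp. c1)."""
    rng = np.random.default_rng(seed)
    n = int(T / h)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        xk = max(x[k], 0.0)
        # drift and diffusion parts
        dx = (a + b * xk) * h + sigma * np.sqrt(xk * h) * rng.standard_normal()
        # immigration jumps N0: rate c0, Exp(1) sizes (not compensated)
        if rng.random() < c0 * h:
            dx += rng.exponential()
        # branching jumps N1: rate c1*X, Exp(1) sizes, compensated below
        if rng.random() < c1 * xk * h:
            dx += rng.exponential()
        dx -= c1 * xk * h  # compensator of N1 (mean jump size is 1)
        x[k + 1] = max(xk + dx, 0.0)
    return x
```

With b < 0 the sampled path fluctuates around the stationary mean −(a + l_{01})/b, which is the behavior exploited in Section 2.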

Let us write X_k for X_{t_k}. By (1.2), we have

$$X_k = \gamma_0 + \gamma_1 X_{k-1} + \varepsilon_k, \tag{1.5}$$

where ε_k = X_k − E[X_k | X_{k−1}], γ_1 = e^{bΔt}, and γ_0 = ((a + l_{01})/b)(e^{bΔt} − 1) if b ≠ 0 and γ_0 = (a + l_{01})Δt if b = 0. Then ε_k is a martingale difference w.r.t. {F_k}, where F_k = σ{X_0, X_1, …, X_k}. Rewrite (1.5) as

$$\frac{X_k}{(X_{k-1}+1)^{1/2}} = \gamma_1 (X_{k-1}+1)^{1/2} + (\gamma_0 - \gamma_1)(X_{k-1}+1)^{-1/2} + \delta_k, \tag{1.6}$$

where δ_k = ε_k/(X_{k−1}+1)^{1/2}. Note that E[δ_k² | F_{k−1}] = (η_1 X_{k−1} + η_0)/(X_{k−1} + 1), where, if b ≠ 0,

$$\eta_0 = \frac{(\sigma^2 + l_{12})(a + l_{01})}{2b^2}(e^{b\Delta t} - 1)^2 + \frac{l_{02}}{2b}(e^{2b\Delta t} - 1), \qquad \eta_1 = \frac{\sigma^2 + l_{12}}{b}\, e^{b\Delta t}(e^{b\Delta t} - 1); \tag{1.7}$$

if b = 0, η_0 = (σ² + l_{12})(a + l_{01})(Δt)²/2 + l_{02}Δt and η_1 = (σ² + l_{12})Δt. If X_k →p ∞ as k → ∞, it is easy to see that E[δ_k² | F_{k−1}] →p η_1. Then the conditional distribution of δ_k given F_{k−1} is asymptotically normal as stated in Wei and Winnicki (1990). Based on (1.6), we have the following WCLSEs for a and b:

$$\hat b_n = \frac{1}{\Delta t}\,\log\!\left[\frac{\sum_{k=1}^n X_k \sum_{k=1}^n \frac{1}{X_{k-1}+1} - n\sum_{k=1}^n \frac{X_k}{X_{k-1}+1}}{\sum_{k=1}^n (X_{k-1}+1)\, \sum_{k=1}^n \frac{1}{X_{k-1}+1} - n^2}\right] \tag{1.8}$$

and

$$\hat a_n = \frac{1}{n}\left(\sum_{k=1}^n X_k - e^{\hat b_n \Delta t}\sum_{k=1}^n X_{k-1}\right)\frac{\hat b_n}{e^{\hat b_n \Delta t} - 1} - l_{01}. \tag{1.9}$$

The estimation problems for a and b have been widely studied in the CBI diffusion case (i.e. N_0 = N_1 = 0 in (1.2)). When the diffusion process can be observed continuously, Overbeck (1998) introduced maximum likelihood estimators (MLEs) based on the Girsanov density and obtained the asymptotic distributions of the MLEs in the critical, supercritical and subcritical cases. When the process is observed only at discrete time points, Overbeck and Ryden (1997) considered the standard conditional least square estimators (CLSEs), and the asymptotic properties of those estimators were studied only in the subcritical case. In the present paper, we consider the WCLSEs for a and b in the general CBI process with jumps. Under the condition that the Lévy measures m and µ have finite second moments, we obtain the consistency and asymptotic distributions of the WCLSEs in the critical, supercritical and subcritical cases. Our results can be regarded as continuous-state analogues of those of Wei and Winnicki (1990) for discrete state branching processes with immigration. The remainder of this paper is organized as follows. In Section 2, we give the main results and their proofs.


2. Asymptotic properties of the WCLSEs

We will start with the subcritical case. It is known from Pinsky (1971) that under the assumption that b < 0, ∫_1^∞ (ξ log ξ) µ(dξ) < ∞ and ∫_1^∞ (log ξ) m(dξ) < ∞, the process X_t has a unique stationary distribution X. If the distribution of X_0 is the stationary distribution, then X_t is stationary. By a simple coupling argument, this result is valid for arbitrary initial distributions.

Theorem 2.1. Assume that b < 0, and conditions (1.3) and (1.4) hold. Then â_n and b̂_n are strongly consistent. Furthermore,

$$(\sqrt{n}(\hat b_n - b),\ \sqrt{n}(\hat a_n - a)) \xrightarrow{d} N\big(0,\ (UV)W(UV)'\big),$$

where

$$U = \begin{pmatrix} \dfrac{1}{e^{b\Delta t}\,\Delta t} & 0 \\[2mm] \dfrac{a + l_{01}}{b\,e^{b\Delta t}\,\Delta t} - \dfrac{a + l_{01}}{e^{b\Delta t} - 1} & \dfrac{b}{e^{b\Delta t} - 1} \end{pmatrix}, \qquad V = \left( E[X+1]\,E\!\left[\frac{1}{X+1}\right] - 1 \right)^{-1} \begin{pmatrix} E\!\left[\dfrac{1}{X+1}\right] & -1 \\[2mm] -E\!\left[\dfrac{X}{X+1}\right] & E[X] \end{pmatrix},$$

$$W = \begin{pmatrix} \eta_0 + \eta_1 E[X] & E\!\left[\dfrac{\eta_0 + \eta_1 X}{X+1}\right] \\[2mm] E\!\left[\dfrac{\eta_0 + \eta_1 X}{X+1}\right] & E\!\left[\dfrac{\eta_0 + \eta_1 X}{(X+1)^2}\right] \end{pmatrix},$$

and X is a random variable with the stationary distribution of X_n as n → ∞.
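Before the proof we note that, for inference, the sandwich covariance (UV)W(UV)' can be estimated by plugging empirical averages along the sample into the matrices above. The sketch below is ours and relies on the reconstructed forms of U, V and W (η_0, η_1 are supplied from (1.7) evaluated at the estimated drift):

```python
import numpy as np

def wclse_asym_cov(x, dt, a_hat, b_hat, eta0, eta1, l01=0.0):
    """Plug-in sketch of the asymptotic covariance (UV)W(UV)' of
    Theorem 2.1: stationary expectations over X are replaced by
    empirical averages of the observed X_{k-1}."""
    xp = x[:-1] + 1.0                      # X_{k-1} + 1
    e = np.exp(b_hat * dt)
    U = np.array([[1.0 / (e * dt), 0.0],
                  [(a_hat + l01) / (b_hat * dt * e)
                   - (a_hat + l01) / (e - 1.0),
                   b_hat / (e - 1.0)]])
    m1, mi = xp.mean(), (1.0 / xp).mean()  # E[X+1], E[1/(X+1)]
    V = np.array([[mi, -1.0],
                  [-((xp - 1.0) / xp).mean(), (xp - 1.0).mean()]]) \
        / (m1 * mi - 1.0)
    g = eta0 + eta1 * (xp - 1.0)           # eta0 + eta1 * X_{k-1}
    W = np.array([[g.mean(), (g / xp).mean()],
                  [(g / xp).mean(), (g / xp ** 2).mean()]])
    UV = U @ V
    return UV @ W @ UV.T
```

The result is symmetric positive semidefinite whenever η_0, η_1 ≥ 0, since W is then an empirical second-moment matrix.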

Proof. By Pinsky (1971) and (1.3), if b < 0, we have X_t →d X. If we further assume that σ > 0, X(·) is ergodic. Then Birkhoff's ergodic theorem (see Theorem 7.2.1 in Durrett (2010)) implies that

$$\frac{1}{n}\sum_{k=1}^n X_k \xrightarrow{a.s.} -\frac{a + l_{01}}{b}, \qquad \frac{1}{n}\sum_{k=1}^n \frac{X_k}{X_{k-1}+1} \xrightarrow{a.s.} \gamma_1 + (\gamma_0 - \gamma_1)\, E\!\left[\frac{1}{1+X}\right], \qquad \frac{1}{n}\sum_{k=1}^n \frac{1}{1+X_{k-1}} \xrightarrow{a.s.} E\!\left[\frac{1}{1+X}\right].$$

Then by (1.8) and (1.9), b̂_n →a.s. b and â_n →a.s. a. Let θ̂_1n = e^{b̂_nΔt} and θ̂_2n = (â_n + l_{01})(e^{b̂_nΔt} − 1)/b̂_n. By Taylor's theorem,

$$\hat b_n - b = \frac{1}{\Delta t}\,(\hat\theta_{1n} - \gamma_1)\,(e^{-b\Delta t} + o_p(1)) \tag{2.1}$$

and

$$\hat a_n - a = \left[\frac{a+l_{01}}{b\,e^{b\Delta t}\,\Delta t} - \frac{a+l_{01}}{e^{b\Delta t}-1} + o_p(1)\right](\hat\theta_{1n} - \gamma_1) + \left[\frac{b}{e^{b\Delta t}-1} + o_p(1)\right](\hat\theta_{2n} - \gamma_0). \tag{2.2}$$

As in Wei and Winnicki (1990), we see that (√n(θ̂_1n − γ_1), √n(θ̂_2n − γ_0))' = V_n Z_n, where

$$V_n = \left( \frac{1}{n}\sum_{k=1}^n (X_{k-1}+1)\ \frac{1}{n}\sum_{k=1}^n \frac{1}{X_{k-1}+1} - 1 \right)^{-1} \begin{pmatrix} \dfrac{1}{n}\displaystyle\sum_{k=1}^n \dfrac{1}{X_{k-1}+1} & -1 \\[2mm] -\dfrac{1}{n}\displaystyle\sum_{k=1}^n \dfrac{X_{k-1}}{X_{k-1}+1} & \dfrac{1}{n}\displaystyle\sum_{k=1}^n X_{k-1} \end{pmatrix}, \qquad Z_n' = \left( \frac{1}{\sqrt{n}}\sum_{k=1}^n \varepsilon_k,\ \frac{1}{\sqrt{n}}\sum_{k=1}^n \frac{\varepsilon_k}{X_{k-1}+1} \right).$$

By the ergodic theorem, V_n →a.s. V. By condition (1.4), X is not degenerate. It follows from Jensen's inequality and the non-degeneracy of X that det(V) > 0. By (2.1) and (2.2), it suffices to prove that Z_n →d N(0, W). Note that

$$\frac{1}{n}\sum_{k=1}^n E[\varepsilon_k^2 \mid F_{k-1}] \xrightarrow{p} \eta_0 + \eta_1 E[X], \qquad \frac{1}{n}\sum_{k=1}^n E\!\left[\frac{\varepsilon_k^2}{(X_{k-1}+1)^2}\,\Big|\,F_{k-1}\right] \xrightarrow{p} E\!\left[\frac{\eta_0 + \eta_1 X}{(X+1)^2}\right], \qquad \frac{1}{n}\sum_{k=1}^n E\!\left[\frac{\varepsilon_k^2}{X_{k-1}+1}\,\Big|\,F_{k-1}\right] \xrightarrow{p} E\!\left[\frac{\eta_0 + \eta_1 X}{X+1}\right].$$

For any ϵ > 0,

$$\frac{1}{n}\sum_{k=1}^n E\big[\varepsilon_k^2\, 1_{\{|\varepsilon_k| > \sqrt{n}\,\epsilon\}} \mid F_{k-1}\big] \le \frac{1}{n}\sum_{k=1}^n E\big[\varepsilon_k^2\, 1_{\{|\varepsilon_k| > \sqrt{k}\,\epsilon\}} \mid F_{k-1}\big].$$

Note that ε_k² = (X_k − γ_1X_{k−1} − γ_0)² is stationary since (X_{k−1}, X_k) is stationary. As k → ∞, E[ε_k²] → −((a+l_{01})/b)η_1 + η_0. Then ε_k² is uniformly integrable and (1/n)Σ_{k=1}^n E[ε_k² 1_{{|ε_k|>√k ϵ}} | F_{k−1}] → 0. Similarly,

$$\frac{1}{n}\sum_{k=1}^n E\!\left[\frac{\varepsilon_k^2}{(X_{k-1}+1)^2}\, 1_{\{|\varepsilon_k/(X_{k-1}+1)| > \sqrt{n}\,\epsilon\}}\,\Big|\,F_{k-1}\right] \to 0.$$

So Lindeberg's conditions are satisfied. The remaining proof follows from the martingale central limit theorem; see Hall and Heyde (1980). □

Now we concentrate on the supercritical case. By Pinsky (1971) and conditions (1.3)–(1.4), if b > 0, then as t → ∞, X_t/e^{bt} →a.s. L for some random variable L with 0 < L < ∞ a.s.

Let R^∞ denote the space of real sequences x = (x_1, x_2, …) with metric d(x, y) = Σ_{j=1}^∞ 2^{−j}|x_j − y_j|/(1 + |x_j − y_j|). Recall that δ_k is defined in (1.6) for k ≥ 1. We consider the R^∞-valued random variables Γ_n = {ϱ_{nj}} and Γ = {ϱ_j} defined by

$$\varrho_{nj} = \begin{cases} \delta_{n-j+1}, & j = 1, 2, \ldots, n, \\ 0, & \text{otherwise}, \end{cases} \tag{2.3}$$

where the ϱ_j are i.i.d. random variables distributed as N(0, η_1), with η_1 defined in (1.7).

Lemma 2.1. Assume that b > 0, and conditions (1.3) and (1.4) hold. Then Γ_n converges weakly to Γ in R^∞.

Proof. Recall that ε_k is defined in (1.5) and X_k = X_{t_k}. For j ≤ n, we note that

$$\varepsilon_{n-j+1} = \int_{t_{n-j}}^{t_{n-j+1}} \sigma e^{b(t_{n-j+1}-s)}\sqrt{X(s)}\,dB_s + \int_{t_{n-j}}^{t_{n-j+1}}\!\int_0^{X(s-)}\!\int_{\mathbb{R}_+} e^{b(t_{n-j+1}-s)}\xi\,\tilde N_1(ds, du, d\xi) + \int_{t_{n-j}}^{t_{n-j+1}}\!\int_{\mathbb{R}_+} e^{b(t_{n-j+1}-s)}\xi\,\tilde N_0(ds, d\xi).$$

Let φ_{n−j}(s) = e^{b(s−t_{n−j})}(X_{n−j} + 1). Define

$$\delta'_{n-j+1} = \int_{t_{n-j}}^{t_{n-j+1}} \sigma e^{\frac{b}{2}(t_{n-j+2}-s)}\,dB_s + \int_{t_{n-j}}^{t_{n-j+1}}\!\int_0^{\varphi_{n-j}(s)}\!\int_{\mathbb{R}_+} \frac{e^{b(t_{n-j+1}-s)}\,\xi}{(X_{n-j}+1)^{1/2}}\,\tilde N_1(ds, du, d\xi).$$

It is not hard to see that for fixed j and some positive constant C,

$$E\big[(\delta_{n-j+1} - \delta'_{n-j+1})^2\big] \le C\,E\!\left[\int_0^{\Delta t} e^{2b(\Delta t - s)}\left|\frac{X_{s+t_{n-j}}}{X_{t_{n-j}}+1} - e^{bs}\right| ds\right] + C\,\frac{\int_0^{\Delta t} e^{2b(\Delta t-s)}\,ds}{E[X_{t_{n-j}}+1]},$$

which converges to 0 as n → ∞. Define ϱ′_{nj} as in (2.3) with δ_{n−j+1} replaced by δ′_{n−j+1}. Thus Lemma 2.1 is equivalent to the weak convergence of {ϱ′_{nj}, 1 ≤ j ≤ k} for all k ≥ 1. Without loss of generality, we only consider the case k = 2. For λ_j ∈ R (j = 1, 2) and i² = −1,

$$E\big[e^{i\lambda_1\varrho_{n1} + i\lambda_2\varrho_{n2}}\big] = e^{-\frac{1}{2}\sigma^2\lambda_1^2 \int_0^{\Delta t} e^{b(2\Delta t - s)}\,ds}\; E\big[e^{i\lambda_2\varrho_{n2}}\, e^{\upsilon_n}\big],$$

where

$$\upsilon_n = \int_0^{\Delta t}\!\int_{\mathbb{R}_+} e^{bs}(X_{n-1}+1)\Big[e^{i\lambda_1 e^{b(\Delta t - s)}\xi (X_{n-1}+1)^{-1/2}} - 1 - i\lambda_1 e^{b(\Delta t-s)}\xi (X_{n-1}+1)^{-1/2}\Big]\,\mu(d\xi)\,ds = -\frac{l_{12}\lambda_1^2}{2}\int_0^{\Delta t} e^{b(2\Delta t - s)}\,ds + u_n.$$

Here, u_n is some random variable, sup_n |u_n| is bounded and |u_n| →a.s. 0 as n → ∞. Note that $|e^{\upsilon_n} - e^{-\frac{l_{12}\lambda_1^2}{2}\int_0^{\Delta t} e^{b(2\Delta t-s)}\,ds}| \le K|u_n|$ for some positive constant K. We have

$$E\big[e^{i\lambda_1\varrho_{n1} + i\lambda_2\varrho_{n2}}\big] = e^{-\frac{1}{2}(\sigma^2+l_{12})\lambda_1^2 \int_0^{\Delta t} e^{b(2\Delta t-s)}\,ds}\,E\big[e^{i\lambda_2\varrho_{n2}}\big] + o(1) = e^{-\frac{1}{2}\left[(\sigma^2+l_{12})\lambda_1^2 + (\sigma^2+l_{12})\lambda_2^2\right]\int_0^{\Delta t} e^{b(2\Delta t-s)}\,ds} + o(1),$$

which converges to $e^{-\frac{1}{2}\eta_1(\lambda_1^2+\lambda_2^2)}$ as n → ∞. □


Lemma 2.2. Under the conditions of Lemma 2.1, we have as n → ∞,

$$\left(\sum_{j=1}^n (X_{j-1}+1)\right)^{-1/2} \sum_{j=1}^n \varepsilon_j \xrightarrow{d} N(0, \eta_1). \tag{2.4}$$

Proof. We first claim that as n → ∞,

$$V_n = \left(\frac{e^{bn\Delta t}}{e^{b\Delta t}-1}\right)^{-1/2} \sum_{j=1}^n e^{b(j-1)\Delta t/2}\,\delta_j \xrightarrow{d} N(0, \eta_1). \tag{2.5}$$

In fact, for any k ≥ 1 let M_{nk} = (e^{bΔt} − 1)^{1/2} Σ_{j=1}^k e^{−bjΔt/2} ϱ_{nj}. Then V_n = M_{nn}. By Lemma 2.1, for any fixed k,

$$M_{nk} \xrightarrow{d} M_k = (e^{b\Delta t}-1)^{1/2}\sum_{j=1}^k e^{-bj\Delta t/2}\,\varrho_j.$$

It is easy to see that M_k →d N(0, η_1) as k → ∞. Also note that for any ϵ > 0,

$$P(|M_{nk} - M_{nn}| > \epsilon) \le \frac{e^{b\Delta t}-1}{\epsilon^2}\, E\!\left[\sum_{j=k+1}^n e^{-bj\Delta t}\,\varrho_{nj}^2\right] \le \frac{(\eta_1+\eta_0)(e^{b\Delta t}-1)}{\epsilon^2} \sum_{j=k+1}^\infty e^{-bj\Delta t}.$$

Thus it follows from Theorem 3.2 in Billingsley (1999) that (2.5) holds. Using the same method as in Theorem 3.5 of Wei and Winnicki (1990), we have

$$\sum_{j=1}^n \left[(X_{j-1}+1)^{1/2} - (e^{b(j-1)\Delta t} L)^{1/2}\right]\delta_j = o_p\big(e^{bn\Delta t/2}\big). \tag{2.6}$$

Also note that

$$e^{-bn\Delta t}\sum_{k=1}^n (X_{k-1}+1) \xrightarrow{a.s.} \frac{1}{e^{b\Delta t}-1}\,L. \tag{2.7}$$

By (2.5)–(2.7), we obtain (2.4). □

Theorem 2.2. Under the conditions of Lemma 2.1, b̂_n is strongly consistent, while â_n is not weakly consistent. Furthermore,

$$\left(\sum_{j=1}^n (X_{j-1}+1)\right)^{1/2}(\hat b_n - b) \xrightarrow{d} \frac{1}{\Delta t\, e^{b\Delta t}}\, N(0, \eta_1). \tag{2.8}$$

Proof. Since X_j/X_{j−1} →a.s. e^{bΔt} as j → ∞, we have (1/n)Σ_{j=1}^n X_j/(X_{j−1}+1) →a.s. e^{bΔt}. Furthermore,

$$\sum_{j=1}^\infty \frac{1}{X_{j-1}+1} < \infty \quad a.s. \tag{2.9}$$

By (1.8), it is easy to see that b̂_n →a.s. b. Note that b̂_n − b = (1/Δt)(θ̂_1n − γ_1)(e^{−bΔt} + o_p(1)) and that

$$\left(\sum_{j=1}^n (X_{j-1}+1)\right)^{1/2}(\hat\theta_{1n} - \gamma_1) = \frac{A_n - B_n}{1 - C_n},$$

where

$$A_n = \left(\sum_{j=1}^n (X_{j-1}+1)\right)^{-1/2}\sum_{j=1}^n \varepsilon_j, \qquad B_n = n\left(\sum_{j=1}^n \frac{\varepsilon_j}{X_{j-1}+1}\right)\left(\sum_{j=1}^n \frac{1}{X_{j-1}+1}\right)^{-1}\left(\sum_{j=1}^n (X_{j-1}+1)\right)^{-1/2},$$

$$C_n = n^2\left(\sum_{j=1}^n (X_{j-1}+1)\, \sum_{j=1}^n \frac{1}{X_{j-1}+1}\right)^{-1}.$$

Note that

$$\sum_{j=1}^\infty E\!\left[\frac{\varepsilon_j^2}{(X_{j-1}+1)^2}\,\Big|\,F_{j-1}\right] = \sum_{j=1}^\infty \frac{\eta_1 X_{j-1} + \eta_0}{(X_{j-1}+1)^2} < \infty \quad a.s.$$

It follows from Theorem 2.17 in Hall and Heyde (1980) that Σ_{j=1}^∞ ε_j/(X_{j−1}+1) converges a.s. By (2.7) and (2.9), B_n →a.s. 0 and C_n →a.s. 0. Thus (2.8) follows from Lemma 2.2. By (2.1) and (2.8), it is not hard to see that

$$n\left(\sum_{j=1}^n (X_{j-1}+1)\right)^{-1/2}(\hat a_n - a) \xrightarrow{d} \frac{2b}{e^{b\Delta t}-1}\, N(0, \eta_1),$$

which implies that |â_n − a| →p ∞ as n → ∞. □

Theorem 2.3. Let Y_n(t) = X_{[nt]}/n. Assume that b = 0 and (1.3) holds. Then Y_n(·) converges in distribution on D([0, ∞), R_+) to a CBI process defined by

$$Y(t) = (a + l_{01})\,t + \int_0^t \sqrt{(\sigma^2 + l_{12})\,Y(s)}\,dW(s), \tag{2.10}$$

where Y(0) = 0 and W(·) is a standard Brownian motion.

Proof. This proof is divided into four steps. We may rewrite (1.2) as follows:

$$\frac{X_{nt}}{n} = \frac{x_0}{n} + at + \int_0^{nt} \frac{\sigma\sqrt{X_s}}{n}\,dB(s) + \int_0^{nt}\!\int_{\mathbb{R}_+} \frac{\xi}{n}\,N_0(ds, d\xi) + \int_0^{nt}\!\int_0^{X_{s-}}\!\int_{\mathbb{R}_+} \frac{\xi}{n}\,\tilde N_1(ds, du, d\xi). \tag{2.11}$$

Step 1. Applying Doob’s inequality to the martingale terms in (2.11) we obtain

[ E

]

1

sup Xns n 0≤s≤t



x0 n

+ (a + l01 )t + 4σ

nt

∫

E[Xs ] n2

0

 12



nt



+ 4 l12

ds

0

E[ X s ] n2

 12 ds

.

Let C (t ) := 1 + lim supn→∞ E[ 1n sup0≤s≤t Xns ]. Since E[Xt ] = (a + l01 )t, C (t ) is a locally bounded function of t ≥ 0. Similarly, 2 ] is also locally bounded. lim supn→∞ E[ n12 sup0≤s≤t Xns Step 2. Tightness. Since C (t ) is locally bounded, Xnt /n is a tight sequence of random variables for every t ≥ 0. Let {τn } be a sequence of stopping times bounded by T and let {δn } be a sequence of positive constants such that δn → 0 as n → 0. By the properties of independent increments of the Brownian motion and the Poisson process we have that

[ E

1 n

δn

∫ ] x0 |Xn(τn +δn ) − Xnτn | ≤ + (a + l01 )δn + 4σ n

Xnt n

which converges to 0 as n → ∞. Then some Skorokhod’s space (Ω , F , Ft , P), t



e−λY (s)

0

δn



+ 4 l12

C (T + s)ds

 12

,

0

Xnt n a.s.

. Without loss of generality, by Skorokhod’s theorem, we can assume that on

−→ Y (t ) in the topology of D([0, ∞), R+ ). We claim that for any fixed λ ∈ R, ] 1 2 2 (σ + l12 )λ Y (s) − (a + l01 )λ ds (2.12)

Xnt n

[



is tight in D([0, ∞), R+ ) by the criterion of Aldous (1978).

Step 3. Limiting. Let Y (·) be any limit point of

M (t ) = e−λY (t ) − 1 −

C (T + s)ds

0

 12

2

is a square integrable Ft -martingale. In fact, by Itô’s formula, it is not hard to show that Mn (t ) = e

−λ

Xnt n

−e

−λ

X0 n

∫ −

t

e 0

−λ Xnns

 An

Xns



n

ds

is a square integrable martingale, where A(x) = 21 xσ 2 λ2 + xn2 0 (e−λξ /n − 1 + λξ /n)µ(dξ ) − aλ + n 0 (e−λξ /n − 1)m(dξ ). By (1.3), the tightness of Xnt /n, Problem 13 (P. 151) in Ethier and Kurtz (1986) and Proposition 1.23 (P. 293) in Jacod and Schiryaev (1987), we have that

∞

a.s.

Mn (t ) −→ M (t ) in D([0, ∞), C), as n → ∞.

∞

(2.13)

a.s.

2 ] is locally bounded, supn E[Mn2 (t )] < ∞. Then for all t ≥ 0, Mn (t ) −→ M (t ) in R+ . Since lim supn→∞ E[ n12 sup0≤s≤t Xns L2

Then for any t ≥ 0, Mn (t ) −→ M (t ). Thus M (t ) is a martingale. Step 4. It follows from (2.12) and Theorem 2.42 (P. 86) in Jacod and Schiryaev (1987) that Y (·) is a semimartingale and it admits the canonicalrepresentation Y (t ) = (a + l01 )t + Yc (t ), where Yc (t ) is a continuous local martingale with quadratic t covariation process 0 (σ 2 + l12 )Ys ds. So Y (·) is the solution of the stochastic equation (2.10). Also by Step 2, we have the weak convergence for Xnt /n. Since [nt ]/n → t as n → ∞, we have Theorem 2.3. 


Remark 2.1. By the above theorem and the continuous mapping theorem,

$$\left(\frac{X_n}{n},\ n^{-2}\sum_{k=1}^n X_{k-1}\right) \xrightarrow{d} \left(Y(1),\ \int_0^1 Y(t)\,dt\right).$$

As calculated in Pitman and Yor (1982), the Laplace transform of (Y(1), ∫_0^1 Y(t)dt) is given by

$$E\!\left[\exp\left(-2\lambda_1 Y(1) - 2\lambda_2\int_0^1 Y(t)\,dt\right)\right] = \left(\cosh\!\big(\sqrt{(\sigma^2+l_{12})\,\lambda_2}\big) + \lambda_1\sqrt{\tfrac{\sigma^2+l_{12}}{\lambda_2}}\,\sinh\!\big(\sqrt{(\sigma^2+l_{12})\,\lambda_2}\big)\right)^{-2(a+l_{01})/(\sigma^2+l_{12})}.$$

Theorem 2.4. Assume that b = 0 and conditions (1.3) and (1.4) hold. Then b̂_n →p b and

$$n(\hat b_n - b) \xrightarrow{d} \frac{Y(1) - (a+l_{01})}{\Delta t \int_0^1 Y(t)\,dt}, \tag{2.14}$$

where Y(·) is defined by (2.10).

Proof. By Theorem 2.3, X_n/n →d Y(1). It follows from Remark 2.1 that Y(1) > 0 and ∫_0^1 Y(t)dt > 0 a.s. Then X_n →p ∞. We can find some subsequence {n_k} such that X_{n_k} →a.s. ∞. Define

$$M_n = \frac{X_n + 1}{\prod_{k=1}^n \big(1 + \eta_0/(X_{k-1}+1)\big)}.$$

We see that M_n is a positive F_n-martingale. By the martingale convergence theorem, M_n converges a.s. Then Π_{j=1}^n (1 + η_0/(X_{j−1}+1)) →a.s. ∞. This implies that Σ_{j=1}^{n_k} η_0/(X_{j−1}+1) →a.s. ∞ and thus Σ_{j=1}^n η_0/(X_{j−1}+1) →a.s. ∞. On the other hand, Σ_{j=1}^n ε_j/(X_{j−1}+1) is an F_n-martingale and

$$\sum_{j=1}^n E\!\left[\left(\frac{\varepsilon_j}{X_{j-1}+1}\right)^{\!2}\,\Big|\,F_{j-1}\right] = \sum_{j=1}^n \frac{\eta_1 X_{j-1}+\eta_0}{(X_{j-1}+1)^2} \le (\eta_1+\eta_0)\sum_{j=1}^n \frac{1}{X_{j-1}+1}.$$

By the local martingale convergence theorem (see Theorem 2.17 in Wei and Winnicki (1989)), we have

$$\sum_{j=1}^n \frac{\varepsilon_j}{X_{j-1}+1} = o\!\left(\sum_{k=1}^n \frac{1}{X_{k-1}+1}\right) \quad a.s. \tag{2.15}$$

Now we first consider

$$n(\hat\theta_{1n} - \gamma_1) = \frac{A_n - B_n}{1 - C_n}, \tag{2.16}$$

where

$$A_n = \frac{n\sum_{j=1}^n \varepsilon_j}{\sum_{j=1}^n (X_{j-1}+1)}, \qquad B_n = n^2\left(\sum_{j=1}^n \frac{\varepsilon_j}{X_{j-1}+1}\right)\left(\sum_{j=1}^n (X_{j-1}+1)\, \sum_{j=1}^n \frac{1}{X_{j-1}+1}\right)^{-1},$$

$$C_n = n^2\left(\sum_{j=1}^n (X_{j-1}+1)\, \sum_{j=1}^n \frac{1}{X_{j-1}+1}\right)^{-1}.$$

By Remark 2.1 and (2.15), Σ_{j=1}^n ε_j/n →d Y(1) − (a+l_{01}), B_n →p 0 and C_n →p 0. Thus

$$n(\hat\theta_{1n} - \gamma_1) \xrightarrow{d} \frac{Y(1) - (a+l_{01})}{\int_0^1 Y(t)\,dt}, \tag{2.17}$$

and then θ̂_1n →p γ_1. Note that b̂_n − b = (1/Δt)(θ̂_1n − γ_1)(e^{−bΔt} + o_p(1)), and (2.14) follows from (2.17). □

Remark 2.2. In the critical case, Theorem 2.4 shows that the WCLSE b̂_n has a non-Gaussian asymptotic distribution with normalizing factor n. However, the asymptotic distribution of â_n is not known; it may depend on the limiting behavior of Σ_{j=1}^n 1/(X_{j−1}+1) as n → ∞. This estimation problem will be studied in our future research.


Acknowledgements

The first author was supported by the RGC Earmarked Grants of the Hong Kong Polytechnic University (No. 500909). The second author was supported by NSFC (No. 10971106 and No. 11001137). The authors are very grateful to a referee whose comments improved this manuscript.

References

Aldous, D., 1978. Stopping times and tightness. Ann. Probab. 6, 335–340.
Billingsley, P., 1999. Convergence of Probability Measures. John Wiley and Sons, Inc., New York.
Dawson, D.A., Li, Z.H., 2006. Skew convolution semigroups and affine Markov processes. Ann. Probab. 34, 1103–1142.
Duffie, D., Garleanu, N., 2001. Risk and valuation of collateralized debt obligations. Financ. Anal. J. 57, 41–59.
Durrett, R., 2010. Probability: Theory and Examples. Cambridge University Press, New York.
Ethier, S.N., Kurtz, T.G., 1986. Markov Processes: Characterization and Convergence. John Wiley and Sons, Inc., New York.
Filipović, D., 2001. A general characterization of one factor affine term structure models. Finance Stoch. 5, 389–412.
Hall, P., Heyde, C., 1980. Martingale Limit Theory and its Applications. Academic Press, New York.
Jacod, J., Shiryaev, A.N., 1987. Limit Theorems for Stochastic Processes. Springer-Verlag, Berlin.
Kawazu, K., Watanabe, S., 1971. Branching processes with immigration and related limit theorems. Theory Probab. Appl. 16, 36–54.
Li, Z.H., 2000. Asymptotic behavior of continuous time and state branching processes. J. Aust. Math. Soc. (Ser. A) 68, 68–84.
Overbeck, L., 1998. Estimation for continuous branching processes. Scand. J. Statist. 25, 111–126.
Overbeck, L., Ryden, T., 1997. Estimation in the Cox–Ingersoll–Ross model. Econometric Theory 13, 430–461.
Pinsky, M.A., 1971. Limit theorems for continuous state branching processes with immigration. Bull. Amer. Math. Soc. 78, 242–244.
Pitman, J., Yor, M., 1982. A decomposition of Bessel bridges. Z. Wahrsch. Verw. Gebiete 59, 425–457.
Wei, C.Z., Winnicki, J., 1989. Some asymptotic results for branching processes with immigration. Stoch. Process. Appl. 31, 261–282.
Wei, C.Z., Winnicki, J., 1990. Estimation of the means in the branching process with immigration. Ann. Statist. 18, 1757–1773.