Extended Glivenko–Cantelli Theorem in ARCH(p)-time series

Statistics and Probability Letters 78 (2008) 1434–1439 www.elsevier.com/locate/stapro

Fuxia Cheng ∗
Department of Mathematics, Illinois State University, USA
Received 10 July 2006; received in revised form 20 September 2007; accepted 11 December 2007; available online 27 December 2007

Abstract

In this paper we consider the uniform strong consistency of nonparametric innovation distribution function estimation in ARCH(p)-time series. We obtain the extended Glivenko–Cantelli Theorem for the residual-based empirical distribution function.
© 2007 Elsevier B.V. All rights reserved.

MSC: 62M10; 60F15; 60F05

1. Introduction

We consider the AutoRegressive Conditional Heteroscedastic model of order p (ARCH(p))

    X_i = ε_i √( α_0 + α_1 X²_{i−1} + · · · + α_p X²_{i−p} ),    (1.1)

where the innovations ε_i are independent and identically distributed (i.i.d.) random variables with mean 0, variance 1 and unknown distribution function (d.f.) F, and the parameters α_j (0 ≤ j ≤ p) are positive. Let F_i denote the σ-field generated by {X_l , l ≤ i}; we further assume that ε_{i+1} is independent of F_i. It follows that the conditional variances of the X_i satisfy

    Var{X_i | F_{i−1}} = E{X_i² | F_{i−1}} = α_0 + α_1 X²_{i−1} + · · · + α_p X²_{i−p}.    (1.2)
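Model (1.1) is straightforward to simulate. The following sketch generates an ARCH(p) path with standard normal innovations; the function name, the burn-in device, and the choice of normal innovations are illustrative assumptions, not part of the paper.

```python
import numpy as np

def simulate_arch(alpha, n, burn_in=500, rng=None):
    """Simulate model (1.1): X_i = eps_i * sqrt(a0 + a1*X_{i-1}^2 + ... + ap*X_{i-p}^2),
    with i.i.d. N(0,1) innovations (mean 0, variance 1). A burn-in is discarded so the
    returned path is approximately stationary."""
    rng = np.random.default_rng(rng)
    a0, a = alpha[0], np.asarray(alpha[1:])
    p = len(a)
    total = burn_in + n
    x = np.zeros(total)
    eps = rng.standard_normal(total)
    for i in range(p, total):
        # a1*X_{i-1}^2 + ... + ap*X_{i-p}^2 (x[i-p:i][::-1] is (X_{i-1},...,X_{i-p}))
        sigma2 = a0 + np.dot(a, x[i - p:i][::-1] ** 2)
        x[i] = eps[i] * np.sqrt(sigma2)
    return x[burn_in:], eps[burn_in:]

# example with p = 2 and a1 + a2 = 0.5 < 1
x, eps = simulate_arch([0.5, 0.3, 0.2], n=2000, rng=0)
```

The condition α_1 + · · · + α_p < 1 used in Assumption A below keeps the recursion stable.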

Property (1.2) is called conditional heteroscedasticity and, together with the autoregressive nature of the model, explains its name. Model (1.1) was first introduced by Engle (1982) to allow the conditional forecast variance to change over time as a function of past values. The model was later extended in various directions; ARCH and related models have become perhaps the most popular and extensively studied financial econometric models. See Gouriéroux (1997) for details. In most of this work, the main focus has been on estimating the unknown parameters. It is also of interest and of practical importance to know the nature of the innovation distribution: if the distribution of the innovations is unspecified, the parametric component only partly determines the distributional behavior of model (1.1). Investigating the distribution of the innovations is thus as important as estimating the α_j. Stute (2001) uses the residual-based

∗ Corresponding address: Department of Mathematics, Illinois State University, Normal, IL 61790-4520, USA. Fax: +1 309 438 5866.

E-mail address: [email protected].

0167-7152/$ - see front matter © 2007 Elsevier B.V. All rights reserved.
doi:10.1016/j.spl.2007.12.009


empirical d.f. F̂_n to estimate the distribution function of ε_i, and provides uniform weak consistency and distributional convergence results for a class of statistics based on F̂_n. Berkes and Horváth (2002) and Koul and Ling (2006) discuss weak convergence aspects of the residual empirical processes in ARCH and some other time series models.

In this paper, we show the uniform strong consistency of F̂_n with a certain rate. In fact, we extend the Glivenko–Cantelli Theorem to the residual-based empirical d.f. F̂_n.

The paper is organized as follows. In Section 2 we introduce the estimators for the unknown α_j and define the estimator F̂_n for F based on the residuals of model (1.1). We also introduce one basic assumption for our main result. In Section 3 we prove the extended Glivenko–Cantelli Theorem for F̂_n.

2. Estimators and basic assumption

In this section we first introduce the estimators for the unknown α_j and for F. Assume that we observe X_0, X_1, . . . , X_n. Let α_{jn} denote estimators of α_j, 0 ≤ j ≤ p, based on these observations. Set

    α = (α_0, α_1, . . . , α_p)ᵀ    and    α̂ = (α_{0n}, α_{1n}, . . . , α_{pn})ᵀ.

Let

    ε̂_i = X_i / √( α_{0n} + α_{1n} X²_{i−1} + · · · + α_{pn} X²_{i−p} ),    1 ≤ i ≤ n,

denote the residuals. The empirical d.f. estimator based on these residuals is

    F̂_n(t) := (1/n) Σ_{i=1}^n I(ε̂_i ≤ t),    t ∈ R.    (2.1)

We also need to define

    F_n(t) := (1/n) Σ_{i=1}^n I(ε_i ≤ t),    t ∈ R,

    U_n(t) := (1/n) Σ_{i=1}^n { I(ε_i ≤ t) − F(t) },    t ∈ R.    (2.2)
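Given the observations and a fitted parameter vector α̂, the residuals and the estimator F̂_n of (2.1) can be computed directly. The following is a minimal sketch; the function name `residual_edf` and the convention of starting the residuals at i = p (so that p lagged values are available) are illustrative assumptions.

```python
import numpy as np

def residual_edf(x, alpha_hat):
    """Residuals eps_hat_i = X_i / sqrt(a_{0n} + sum_j a_{jn} X_{i-j}^2), and the
    residual-based empirical d.f. F_hat_n(t) = (1/n) sum_i I(eps_hat_i <= t)."""
    a0, a = alpha_hat[0], np.asarray(alpha_hat[1:])
    p = len(a)
    # conditional-variance estimate for each i >= p (x[i-p:i][::-1] is (X_{i-1},...,X_{i-p}))
    sigma2 = np.array([a0 + np.dot(a, x[i - p:i][::-1] ** 2) for i in range(p, len(x))])
    eps_hat = x[p:] / np.sqrt(sigma2)

    def F_hat(t):
        return np.mean(eps_hat <= t)   # step function jumping 1/n at each residual

    return eps_hat, F_hat

# tiny check with a hypothetical fitted vector (a0, a1) = (0.5, 0.3)
x = np.array([0.0, 1.0, -0.5, 2.0, 0.3, -1.2])
eps_hat, F_hat = residual_edf(x, [0.5, 0.3])
```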

We conclude this section with the following assumption, needed for showing the uniform strong consistency of F̂_n for F.

Assumption A. All α_j, 0 ≤ j ≤ p, are positive, α_1 + α_2 + · · · + α_p < 1, and there exists γ with 0 < γ < 1/2 such that

    n^γ ‖α̂ − α‖ → 0    almost surely (a.s.), as n → ∞,    (2.3)

where ‖·‖ denotes the Euclidean norm.

Note: Assumption A is reasonable. In fact, assume that ε_i has a finite fourth-order moment. Write Z_{i−1} = (1, X²_{i−1}, . . . , X²_{i−p})ᵀ. For the conditional least-squares estimator of α (see Tjøstheim (1986)), i.e.,

    α̂ ≡ argmin_α Σ_{i=p+1}^n ( X_i² − αᵀ Z_{i−1} )²,

it is shown that

    ‖α̂ − α‖ = O( √( log₂ n / n ) )    a.s.,    (2.4)

where log₂ n = log log n. Then it is clear that (2.4) implies that (2.3) holds for any γ satisfying γ < 1/2. The validity of (2.4) is proved by Tjøstheim (1986, pp. 254–256), combined with the following Law of the Iterated Logarithm.
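The conditional least-squares estimator above is an ordinary least-squares regression of X_i² on Z_{i−1} = (1, X²_{i−1}, . . . , X²_{i−p}), so it can be sketched in a few lines. The function name and the quick simulated check are illustrative assumptions.

```python
import numpy as np

def conditional_ls(x, p):
    """Conditional least squares for ARCH(p): regress X_i^2 on
    Z_{i-1} = (1, X_{i-1}^2, ..., X_{i-p}^2), i = p+1, ..., n."""
    y = x[p:] ** 2
    cols = [np.ones(len(y))] + [x[p - j:len(x) - j] ** 2 for j in range(1, p + 1)]
    Z = np.column_stack(cols)
    alpha_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return alpha_hat

# quick check on a simulated ARCH(1) path with true alpha = (0.5, 0.3)
rng = np.random.default_rng(1)
x = np.zeros(4000)
e = rng.standard_normal(4000)
for i in range(1, 4000):
    x[i] = e[i] * np.sqrt(0.5 + 0.3 * x[i - 1] ** 2)
print(conditional_ls(x, 1))
```

Since E{X_i² | F_{i−1}} = αᵀ Z_{i−1} by (1.2), the squared series minus this linear predictor is a martingale difference, which is what the rate (2.4) exploits.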


Lemma 2.1 (Law of the Iterated Logarithm). Suppose that {X_i} is a stationary ergodic martingale difference with E(X_1) = 0 and E(X_1²) < ∞. Then

    lim sup_{n→∞} ( n^{−1/2} Σ_{i=1}^n X_i ) / √( 2E(X_1²) log₂( n E(X_1²) ) ) = 1    a.s.

See Stout (1974, p. 303) for the proof of Lemma 2.1. In the following section, all limits are taken as the sample size n tends to ∞, unless specified otherwise.

3. Uniform strong consistency of F̂_n

In this section we establish the uniform strong consistency of F̂_n for F with a certain rate. In particular, we extend the Glivenko–Cantelli Theorem to the empirical distribution function based on the residuals. We first prove a lemma about the closeness of ε̂_i and ε_i, 1 ≤ i ≤ n.

Lemma 3.1. For any τ satisfying 0 < τ < γ, assume that ε_1 has a finite moment of order K with K > 2/(γ − τ). Then, under Assumption A,

    n^τ max_{1≤i≤n} |ε̂_i − ε_i| → 0    a.s.    (3.1)
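Before the proof, (3.1) can be illustrated by a small Monte Carlo. The sketch below is self-contained and rests on illustrative assumptions not made in the paper: an ARCH(1) model, standard normal innovations (all moments finite, as noted after the proof), a conditional least-squares fit for α̂, and τ = 0.2.

```python
import numpy as np

a0, a1, tau = 0.4, 0.3, 0.2
rng = np.random.default_rng(0)
for n in (500, 5000, 50000):
    x = np.zeros(n)
    eps = rng.standard_normal(n)          # i.i.d. N(0,1) innovations
    for i in range(1, n):
        x[i] = eps[i] * np.sqrt(a0 + a1 * x[i - 1] ** 2)
    # conditional least-squares fit of (a0, a1): regress X_i^2 on (1, X_{i-1}^2)
    Z = np.column_stack([np.ones(n - 1), x[:-1] ** 2])
    ahat, *_ = np.linalg.lstsq(Z, x[1:] ** 2, rcond=None)
    # guard against rare negative fitted variances in small samples
    sig2 = np.maximum(ahat[0] + ahat[1] * x[:-1] ** 2, 1e-12)
    eps_hat = x[1:] / np.sqrt(sig2)
    stat = n ** tau * np.max(np.abs(eps_hat - eps[1:]))   # n^tau * max_i |eps_hat_i - eps_i|
    print(n, stat)
```

As n grows, the printed statistic should drift toward 0, in line with (3.1).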

Proof. By Assumption A, there exists N large enough such that for any n > N and 0 ≤ j ≤ p, we have

    |α_{jn} − α_j| ≤ (3/4) α_j n^{−γ}    a.s.,    (3.2)

    α_{jn} ≥ α_j / 4    a.s.    (3.3)

Rewrite the difference between ε̂_i and ε_i, 1 ≤ i ≤ n:

    ε̂_i − ε_i = X_i [ (α_0 − α_{0n}) + Σ_{j=1}^p (α_j − α_{jn}) X²_{i−j} ]
                 / [ ξ_{in} √( α_0 + Σ_{j=1}^p α_j X²_{i−j} ) √( α_{0n} + Σ_{j=1}^p α_{jn} X²_{i−j} ) ]

               = ε_i [ (α_0 − α_{0n}) + Σ_{j=1}^p (α_j − α_{jn}) X²_{i−j} ]
                 / [ ξ_{in} √( α_{0n} + Σ_{j=1}^p α_{jn} X²_{i−j} ) ],

where ξ_{in} = √( α_0 + Σ_{j=1}^p α_j X²_{i−j} ) + √( α_{0n} + Σ_{j=1}^p α_{jn} X²_{i−j} ).

For any n > N, by (3.3), it follows that

    ξ_{in} ≥ (3/2) √( α_0 + Σ_{j=1}^p α_j X²_{i−j} )    a.s.    (3.4)

Combining (3.2)–(3.4), it is easy to check that for any n > N,

    max_{1≤i≤n} |ε̂_i − ε_i| ≤ max_{1≤i≤n} |ε_i| / n^γ    a.s.
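For completeness, the chain of bounds behind the last display can be written out; the following uses the constants from (3.2)–(3.4) (numerator bounded via (3.2), denominator via (3.3) and (3.4)):

```latex
\begin{aligned}
|\hat\varepsilon_i-\varepsilon_i|
&= |\varepsilon_i|\,
   \frac{\bigl|(\alpha_0-\alpha_{0n})+\sum_{j=1}^p(\alpha_j-\alpha_{jn})X_{i-j}^2\bigr|}
        {\xi_{in}\,\sqrt{\alpha_{0n}+\sum_{j=1}^p\alpha_{jn}X_{i-j}^2}} \\
&\le |\varepsilon_i|\,
   \frac{\tfrac34\, n^{-\gamma}\bigl(\alpha_0+\sum_{j=1}^p\alpha_j X_{i-j}^2\bigr)}
        {\tfrac32\sqrt{\alpha_0+\sum_{j=1}^p\alpha_j X_{i-j}^2}\;\cdot\;
         \tfrac12\sqrt{\alpha_0+\sum_{j=1}^p\alpha_j X_{i-j}^2}}
   \;=\; |\varepsilon_i|\, n^{-\gamma}.
\end{aligned}
```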


Therefore, in order to show (3.1), it suffices to prove that for any τ satisfying 0 < τ < γ,

    n^{τ−γ} max_{1≤i≤n} |ε_i| → 0    a.s.    (3.5)

Now we use the Borel–Cantelli Lemma from Kallenberg (1997, p. 32) to show (3.5). For any  > 0, it is easy to verify that

    P( n^{τ−γ} max_{1≤i≤n} |ε_i| >  ) = 1 − P( n^{τ−γ} max_{1≤i≤n} |ε_i| ≤  )
        = 1 − Π_{i=1}^n P( |ε_i| ≤  n^{γ−τ} )
        = 1 − Π_{i=1}^n [ 1 − P( |ε_i| >  n^{γ−τ} ) ].    (3.6)

Since ε_i has a finite moment of order K, by the Markov inequality it follows that

    P( |ε_i| >  n^{γ−τ} ) ≤ E(|ε_1|^K) / ( ^K n^{K(γ−τ)} ).

Combining this inequality with (3.6), we obtain, for n large enough that E(|ε_1|^K)/(^K n^{K(γ−τ)}) ≤ 1,

    P( n^{τ−γ} max_{1≤i≤n} |ε_i| >  ) ≤ 1 − [ 1 − E(|ε_1|^K)/(^K n^{K(γ−τ)}) ]^n
                                         ≤ E(|ε_1|^K) / ( ^K n^{K(γ−τ)−1} ),    (3.7)

where we use the inequality (1 − x)^n ≥ 1 − nx for 0 ≤ x ≤ 1. Since K(γ − τ) > 2, we have

    Σ_{n=1}^∞ E(|ε_1|^K) / ( ^K n^{K(γ−τ)−1} ) < ∞.

Combining this property with (3.7), it follows that for any  > 0,

    Σ_{n=1}^∞ P( n^{τ−γ} max_{1≤i≤n} |ε_i| >  ) < ∞.

Using the Borel–Cantelli Lemma in Kallenberg (1997, p. 32), we obtain

    n^{τ−γ} max_{1≤i≤n} |ε_i| → 0    a.s.

Hence we have finished the proof of Lemma 3.1. □

Note: If ε_i has a normal distribution, it has finite moments of every order, i.e., it satisfies the assumption in Lemma 3.1. We now state and prove the uniform strong consistency of F̂_n.

Theorem 3.1. Assume that the assumptions of Lemma 3.1 hold and that F is Lipschitz continuous of order 1. Then, for any η satisfying 0 < η < γ,

    sup_{t∈R} n^η |F̂_n(t) − F(t)| → 0    a.s.    (3.8)
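Before turning to the proof, (3.8) can be checked numerically. The sketch below is self-contained and rests on illustrative assumptions not from the paper: an ARCH(1) model with standard normal innovations (so F is the standard normal d.f., which is Lipschitz of order 1), a conditional least-squares fit for α̂, and the usual jump-point formula for the sup-distance.

```python
import math
import numpy as np

def norm_cdf(t):                      # F for standard normal innovations
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

a0, a1 = 0.4, 0.3
rng = np.random.default_rng(2)
for n in (1000, 10000, 100000):
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for i in range(1, n):
        x[i] = eps[i] * math.sqrt(a0 + a1 * x[i - 1] ** 2)
    # conditional least-squares fit of (a0, a1), then residuals
    Z = np.column_stack([np.ones(n - 1), x[:-1] ** 2])
    ahat, *_ = np.linalg.lstsq(Z, x[1:] ** 2, rcond=None)
    sig2 = np.maximum(ahat[0] + ahat[1] * x[:-1] ** 2, 1e-12)
    e = np.sort(x[1:] / np.sqrt(sig2))
    m = len(e)
    Fe = np.array([norm_cdf(t) for t in e])
    # sup_t |F_hat_n(t) - F(t)| evaluated at the jump points of F_hat_n
    sup_dist = max(np.max(np.arange(1, m + 1) / m - Fe),
                   np.max(Fe - np.arange(m) / m))
    print(n, sup_dist)
```

The printed sup-distance should shrink as n grows, consistent with (3.8).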

Proof. Since F_n(t) is the empirical distribution function of ε_1, ε_2, . . . , ε_n, recall the extended Glivenko–Cantelli Lemma of Fabian and Hannan (1985, pp. 80–83):

    sup_{t∈R} n^β |F_n(t) − F(t)| → 0    a.s., for any 0 < β < 1/2.


Thus, if we show that, for any η (0 < η < γ),

    sup_{t∈R} n^η |F̂_n(t) − F_n(t)| → 0    a.s.,    (3.9)

then, since η < γ < 1/2, combining (3.9) with the extended Glivenko–Cantelli Lemma above yields the claim (3.8). We now proceed to prove (3.9). Choose τ satisfying η < τ < γ, and define

    B_n = { max_{1≤i≤n} |ε̂_i − ε_i| ≤ n^{−τ} }.

Then, by Lemma 3.1, we have

    P( ∩_{i=n}^∞ B_i ) → 1.

Therefore, for any  > 0, we obtain

    P( ∪_{i=n}^∞ { sup_{t∈R} i^η |F̂_i(t) − F_i(t)| I(B_i^c) ≥  } ) ≤ P( ∪_{i=n}^∞ B_i^c ) = 1 − P( ∩_{i=n}^∞ B_i ) → 0,

which guarantees that

    sup_{t∈R} n^η |F̂_n(t) − F_n(t)| I(B_n^c) → 0    a.s.    (3.10)

Define

    d_i = ε_i − ε̂_i    and    D_n = max_{1≤i≤n} |d_i|.

It is easy to see that, on B_n,

    D_n ≤ n^{−τ}.    (3.11)

Rewrite

    F̂_n(t) = (1/n) Σ_{i=1}^n I(ε_i ≤ t + d_i).

Since F is Lipschitz continuous of order 1 and η < τ, it follows that

    sup_{t∈R} n^η [ F(t + n^{−τ}) − F(t − n^{−τ}) ] = O(n^{η−τ}) → 0.    (3.12)

Next, rewrite n^η ( F̂_n(t) − F_n(t) ) = Z_{1n}(t) + Z_{2n}(t), where

    Z_{1n}(t) := (n^η / n) Σ_{i=1}^n [ I(ε_i ≤ t + d_i) − F(t + d_i) − I(ε_i ≤ t) + F(t) ],

    Z_{2n}(t) := (n^η / n) Σ_{i=1}^n [ F(t + d_i) − F(t) ].

Using the monotonicity of F and (3.11), we see that

    sup_{t∈R} |Z_{2n}(t)| I(B_n) ≤ n^η sup_{t∈R} [ F(t + n^{−τ}) − F(t − n^{−τ}) ].

Combining this property with (3.12), it follows that

    sup_{t∈R} |Z_{2n}(t)| I(B_n) → 0.    (3.13)


It remains to show that

    sup_{t∈R} |Z_{1n}(t)| I(B_n) → 0    a.s.    (3.14)

For any t ∈ R, on B_n, by (3.11) and the monotonicity of the functions I and F, we have

    Z_{1n}(t) ≤ (n^η / n) Σ_{i=1}^n [ I(ε_i ≤ t + n^{−τ}) − F(t − n^{−τ}) − I(ε_i ≤ t) + F(t) ]
             = (n^η / n) Σ_{i=1}^n [ I(ε_i ≤ t + n^{−τ}) − F(t + n^{−τ}) − I(ε_i ≤ t) + F(t) ]
               + n^η [ F(t + n^{−τ}) − F(t − n^{−τ}) ],

and similarly, on B_n,

    Z_{1n}(t) ≥ (n^η / n) Σ_{i=1}^n [ I(ε_i ≤ t − n^{−τ}) − F(t − n^{−τ}) − I(ε_i ≤ t) + F(t) ]
               − n^η [ F(t + n^{−τ}) − F(t − n^{−τ}) ].

Thus, we have

    sup_{t∈R} |Z_{1n}(t)| I(B_n) ≤ sup_{t∈R} n^η | U_n(t + n^{−τ}) − U_n(t) | + sup_{t∈R} n^η [ F(t + n^{−τ}) − F(t − n^{−τ}) ]
        ≤ 2 sup_{t∈R} n^η |U_n(t)| + sup_{t∈R} n^η [ F(t + n^{−τ}) − F(t − n^{−τ}) ],

where U_n is defined in (2.2). Since η < 1/2, using the extended Glivenko–Cantelli Lemma again, the first term in the above bound tends to zero with probability 1. By (3.12), the second term also tends to zero. Therefore (3.14) holds. Thus, combining (3.10), (3.13) and (3.14), we have finished the proof of Theorem 3.1. □

Comment: From the proof of Theorem 3.1, we see that the uniform strong convergence rate of F̂_n to F can be improved if the convergence rate of α̂ to α is improved.

References

Berkes, I., Horváth, L., 2002. Empirical processes of residuals. In: Empirical Process Techniques for Dependent Data. Birkhäuser Boston, Boston, MA, pp. 195–209.
Engle, R.F., 1982. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica 50, 987–1007.
Fabian, V., Hannan, J., 1985. Introduction to Probability and Mathematical Statistics. Wiley, New York.
Gouriéroux, C., 1997. ARCH Models and Financial Applications. Springer, Berlin.
Kallenberg, O., 1997. Foundations of Modern Probability. Springer-Verlag, New York.
Koul, H.L., Ling, S., 2006. Fitting an error distribution in some heteroscedastic time series models. Ann. Statist. 34, 994–1012.
Stout, W.F., 1974. Almost Sure Convergence. Academic Press, New York.
Stute, W., 2001. Residual analysis for ARCH(p)-time series. Test 10, 393–403.
Tjøstheim, D., 1986. Estimation in nonlinear time series models. Stochastic Process. Appl. 21, 251–273.