Coefficient constancy test in AR-ARCH models

Statistics & Probability Letters 57 (2002) 65–77

Jeongcheol Ha, Sangyeol Lee ∗

Department of Statistics, College of Natural Sciences, Seoul National University, San 56-1, Shin Lim-Dong, Kwan Ak-Ku, Seoul, 151-742, South Korea

∗ Corresponding author. Fax: +82-2-883-6144. E-mail address: [email protected] (S. Lee).

Received July 2001; received in revised form November 2001

Abstract

In this article, we consider the problem of testing the coefficient constancy in the AR-ARCH model y_t = (β + b_t) y_{t-1} + ε_t, where ε_t = τ_{t-1} e_t, τ_{t-1} = (α_0 + α_1 ε_{t-1}^2)^{1/2} and the e_t are iid r.v.'s. Under the assumption that b_t and e_t are Gaussian, a locally best invariant test is provided for testing whether the b_t are identically zero or not. Since the exact distribution of the test statistic is hard to obtain, its limiting distribution is investigated. It is shown that the test statistic depends upon the parameter estimators and is asymptotically normal under the null hypothesis. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: RCA model; ARCH model; AR-ARCH model; Locally best invariant test; Asymptotically normal

1. Introduction

The random coefficient autoregressive (RCA) model was introduced for investigating the results of random perturbations of a dynamical system in the fields of engineering, economics and finance. Since the RCA model was first introduced in the literature, its basic properties have been derived by many authors. For instance, Conlisk (1974, 1976) and Andel (1976) examined the stability and second-order stationarity of the RCA model. Nicholls and Quinn (1980, 1981) studied the statistical inference regarding the RCA model. Prior to applying the properties of the RCA model in actual practice, one should perform a parameter constancy test. Nicholls and Quinn (1982) proposed a method based on a likelihood criterion, and Ramanathan and Rajarshi (1994) suggested a rank test. McCabe and Tremayne (1995) and Lee (1998) considered a locally best invariant (LBI) test (cf. Ferguson, 1967) assuming that the error terms are Gaussian, and derived its limiting distribution. In all these papers, it was assumed that the innovation errors are iid r.v.'s, but those are not necessarily iid in real situations. Therefore, there




is a need to design a test procedure for evaluating the parameter constancy in the RCA model with dependent errors. It is a very important task to determine how the correlation structure of the error process affects the limiting distribution of the test statistic. In this article, we especially concentrate on the LBI test in the RCA model with ARCH errors, since the AR-ARCH model is widely used in actual practice. The ARCH model, introduced by Engle (1982), has been proven to be very useful for modelling economic data possessing high volatility. For this reason, statistical methodologies for time series models with ARCH errors have been rapidly developed. For instance, Weiss (1984) considered a class of ARMA models with ARCH errors and studied the statistical inference for those models. See also Lee and Hansen (1994) and Lumsdaine (1996).

Now, we consider the RCA model {y_t} of the form

y_t = (\beta + b_t) y_{t-1} + \varepsilon_t,  |\beta| < 1,    (1.1)

where the b_t are iid r.v.'s and {ε_t} is an ARCH(1) process, such that

(i) ε_t = τ_{t-1} e_t, τ_t = (α_0 + α_1 ε_t^2)^{1/2}, where the e_t are iid r.v.'s with zero mean and unit variance, α_0 > 0, 0 ≤ α_1 < 1, and e_t is independent of {ε_s : s ≤ t-1};
(ii) {b_t} and {e_t} are independent iid sequences with E b_t = 0, E b_t^2 = ω^2 ≥ 0 and E e_t^4 < ∞.

Our main objective is to develop an LBI test for testing H_0: ω^2 = 0 vs. H_1: ω^2 > 0 under the assumption that (b_t, e_t) is Gaussian. In Section 2, we derive the LBI test under the assumption that the parameters α_0, α_1 and β are all known. In Section 3, we investigate the asymptotic behavior of the test statistic with the estimators of the parameters substituted, since the exact distribution of the test statistic is hard to calculate even under the Gaussian assumption. It is shown that the LBI test statistic is asymptotically normal under the null hypothesis if certain conditions are satisfied. Finally, in Section 4, simulation results and concluding remarks are presented.
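For concreteness, the model (1.1) under conditions (i)–(ii) can be simulated directly; the following minimal Python sketch is ours, not part of the original paper (the function name, default parameter values and burn-in choice are illustrative), and uses Gaussian b_t and e_t, with omega2 = 0 corresponding to the null AR(1)-ARCH(1) model.

```python
import numpy as np

def simulate_ar_arch(T, beta=0.3, alpha0=0.2, alpha1=0.1, omega2=0.0,
                     burn_in=100, seed=None):
    """Simulate y_t = (beta + b_t) y_{t-1} + eps_t with ARCH(1) errors.

    eps_t = tau_{t-1} e_t,  tau_t^2 = alpha0 + alpha1 eps_t^2,
    b_t ~ N(0, omega2), e_t ~ N(0, 1); omega2 = 0 corresponds to H0.
    """
    rng = np.random.default_rng(seed)
    n = T + burn_in
    e = rng.standard_normal(n)
    b = rng.normal(0.0, np.sqrt(omega2), n) if omega2 > 0 else np.zeros(n)
    y = np.zeros(n)
    tau2 = alpha0                    # tau_0^2 under the convention y_0 = eps_0 = 0
    for t in range(1, n):
        eps_t = np.sqrt(tau2) * e[t]
        y[t] = (beta + b[t]) * y[t - 1] + eps_t
        tau2 = alpha0 + alpha1 * eps_t ** 2   # becomes tau_t^2 for the next step
    return y[burn_in:]               # discard the initial observations

# Example: one series of length 300 under the null hypothesis (omega2 = 0).
y = simulate_ar_arch(300, beta=0.5, alpha1=0.2, omega2=0.0, seed=1)
```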

2. Locally best invariant test

It is well known that the following representation of {y_t} in (1.1) holds for iid ε_t:

y_t = \varepsilon_t + \sum_{j=1}^{\infty} \left[ \prod_{i=0}^{j-1} (\beta + b_{t-i}) \right] \varepsilon_{t-j},

provided β^2 + ω^2 < 1 (cf. Hwang and Basawa, 1998), and consequently {y_t} is strictly stationary and ergodic. Since the above representation also holds for the ARCH errors {ε_t}, and {ε_t} is itself strictly stationary and ergodic (cf. Pantula, 1988), {y_t} in (1.1) is strictly stationary and ergodic. Here we consider using the LBI test statistic for the constancy test of β. It is well known from the literature that the usual test statistics, such as the likelihood ratio (LR) and Lagrange multiplier (LM) test statistics, may not have χ^2 distributions asymptotically when the parameter under test is on the boundary under the null hypothesis (cf. Tanaka, 1983; Nicholls and Pagan, 1985; Leybourne


and McCabe, 1989). Furthermore, in our case, it is hard to obtain an explicit form for the LR test statistic.

Now we consider the pdf of (y_1, ..., y_T) under the assumption that (b_t, e_t) is Gaussian. Throughout the following, we assume y_0 = 0 and ε_0 = 0. It is obvious from (i)–(ii) that E(y_t | y_{t-1}) = β y_{t-1} and Var(y_t | y_{t-1}) = y_{t-1}^2 ω^2 + τ_{t-1}^2. Hence, from the assumption that (b_t, e_t) is Gaussian, we have

f(y_1 | y_0 = 0, \varepsilon_0 = 0) = (2\pi\alpha_0)^{-1/2} \exp\left( -\frac{y_1^2}{2\alpha_0} \right)

and

f(y_t | y_{t-1}) = (2\pi)^{-1/2} \{ y_{t-1}^2 \omega^2 + \tau_{t-1}^2 \}^{-1/2} \exp\left( -\frac{(y_t - \beta y_{t-1})^2}{2(y_{t-1}^2 \omega^2 + \tau_{t-1}^2)} \right).

Thus, the joint density of y_1, ..., y_T conditional on y_0 = 0 and ε_0 = 0 is

f(y_1, \ldots, y_T; \omega^2 | y_0 = 0, \varepsilon_0 = 0) = (2\pi)^{-T/2} \prod_{t=1}^{T} \{ y_{t-1}^2 \omega^2 + \tau_{t-1}^2 \}^{-1/2} \exp\left( -\sum_{t=1}^{T} \frac{(y_t - \beta y_{t-1})^2}{2(y_{t-1}^2 \omega^2 + \tau_{t-1}^2)} \right).

Setting z_t = y_t/y_1, t = 1, 2, ..., T, and x = y_1, the joint density of (z_2, ..., z_T, x) is

g(x; \omega^2) = (2\pi)^{-T/2} |x|^{T-1} \prod_{t=1}^{T} \{ z_{t-1}^2 x^2 \omega^2 + \tau_{t-1}^2 \}^{-1/2} \exp\left( -\frac{1}{2} \sum_{t=1}^{T} \frac{(z_t - \beta z_{t-1})^2 x^2}{z_{t-1}^2 x^2 \omega^2 + \tau_{t-1}^2} \right).

The LBI test, based on z_2, ..., z_T, is given by

\varphi(z_2, \ldots, z_T) =
\begin{cases}
1 & \text{if } \left. (d/d\omega^2) \int g(x; \omega^2)\, dx \right|_{\omega^2 = 0} \Big/ \int g(x; 0)\, dx > c, \\
0 & \text{otherwise},
\end{cases}

where c is a constant. In order to calculate φ, it is necessary to differentiate ∫ g(x; ω^2) dx with respect to ω^2. Since a direct computation of (d/dω^2) ∫ g(x; ω^2) dx is difficult, we utilize the fact that (d/dω^2) ∫ g(x; ω^2) dx = ∫ (d/dω^2) g(x; ω^2) dx, which can be proved in a fashion similar to the proof of Lemma 1 of McCabe and Tremayne (1995). Then the LBI test is

\varphi(z_2, \ldots, z_T) =
\begin{cases}
1 & \text{if } S_T(\beta, \alpha_0, \alpha_1) > c, \\
0 & \text{otherwise},
\end{cases}


where

S_T(\beta, \alpha_0, \alpha_1) = \frac{ \int \left. (d/d\omega^2) g(x; \omega^2) \right|_{\omega^2 = 0} dx }{ \int g(x; 0)\, dx }.

Substituting z_t = y_t/y_1, we can obtain the theorem below. The proof is omitted for brevity since it is essentially the same as that of Theorem 2.1 of Lee (1998).

Theorem 2.1. When b_t and e_t are Gaussian and T = 2n for some n = 1, 2, ..., the LBI test statistic for H_0: ω^2 = 0 vs. H_1: ω^2 > 0 is given by

S_T(\beta, \alpha_0, \alpha_1) = \delta^{-4}(\beta, \alpha_0, \alpha_1)\, T(T + 2) \sum_{t=1}^{T} \frac{(y_t - \beta y_{t-1})^2 y_{t-1}^2}{\tau_{t-1}^4} - \delta^{-2}(\beta, \alpha_0, \alpha_1)\, T \sum_{t=1}^{T} \frac{y_{t-1}^2}{\tau_{t-1}^2}    (2.1)

with \delta^2(\beta, \alpha_0, \alpha_1) = \sum_{t=1}^{T} (y_t - \beta y_{t-1})^2 / \tau_{t-1}^2. We reject H_0 for large values of S_T(β, α_0, α_1).
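As a computational illustration, the statistic can be evaluated directly once β, α_0 and α_1 are given. The sketch below follows our reconstruction of (2.1); the helper name is ours, it adopts the paper's convention y_0 = ε_0 = 0, and the toy series uses an even T as in Theorem 2.1.

```python
import numpy as np

def lbi_stat_known(y, beta, alpha0, alpha1):
    """LBI statistic S_T(beta, alpha0, alpha1) of (2.1) with known parameters.

    The input y is (y_1, ..., y_T); the convention y_0 = eps_0 = 0 gives tau_0^2 = alpha0.
    """
    y = np.asarray(y, dtype=float)
    T = y.size
    y_lag = np.concatenate(([0.0], y[:-1]))        # y_{t-1} for t = 1, ..., T
    eps = y - beta * y_lag                         # eps_t = y_t - beta y_{t-1}
    eps_lag = np.concatenate(([0.0], eps[:-1]))    # eps_{t-1}
    tau2_lag = alpha0 + alpha1 * eps_lag ** 2      # tau_{t-1}^2
    delta2 = np.sum(eps ** 2 / tau2_lag)           # delta^2(beta, alpha0, alpha1)
    term1 = np.sum(eps ** 2 * y_lag ** 2 / tau2_lag ** 2)
    term2 = np.sum(y_lag ** 2 / tau2_lag)
    return delta2 ** -2 * T * (T + 2) * term1 - delta2 ** -1 * T * term2

# Toy usage with a short null series generated under the same conventions.
rng = np.random.default_rng(0)
beta, a0, a1 = 0.5, 0.2, 0.2
y, y_prev, eps_prev = np.empty(300), 0.0, 0.0
for t in range(300):
    eps_t = np.sqrt(a0 + a1 * eps_prev ** 2) * rng.standard_normal()
    y[t] = beta * y_prev + eps_t
    y_prev, eps_prev = y[t], eps_t
print(lbi_stat_known(y, beta, a0, a1))   # reject H0 for large values
```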

Remark. For T = 2n + 1, the LBI test is given as follows:

S_T^*(\beta, \alpha_0, \alpha_1) = \delta^{-4}(\beta, \alpha_0, \alpha_1)\, T(T + 2) \sum_{t=1}^{T} \frac{(y_t - \beta y_{t-1})^2 y_{t-1}^2}{\tau_{t-1}^4} - \delta^{-2}(\beta, \alpha_0, \alpha_1)\, \frac{T(T + 1)}{T - 1} \sum_{t=1}^{T} \frac{y_{t-1}^2}{\tau_{t-1}^2}.

It can be readily seen that S_T and S_T^* behave the same asymptotically.

As seen in Theorem 2.1, the exact distribution of the LBI test is very hard to obtain. Therefore, its limiting distribution must be investigated to perform a test. In fact, one can check that

T^{-1/2} S_T(\beta, \alpha_0, \alpha_1) = T^{-1/2} \sum_{t=1}^{T} (e_t^2 - E e_t^2)(w_{t-1}^2 - E w_t^2) + o_P(1),    (2.2)

where e_t = (y_t - β y_{t-1})/τ_{t-1} and w_t = y_t/τ_t (cf. Theorem 3.1 below); therefore the LBI test statistic is asymptotically normal under the null hypothesis. Under the alternative hypothesis, we can see that

Cov(e_t^2, w_{t-1}^2) = \omega^2 \, Var\left( \frac{y_t^2}{\tau_t^2} \right) > 0,

provided a fourth moment of y_t exists. Hence, the right-hand side of (2.2) diverges to ∞ in probability, which asserts the consistency of the test.

In actual practice, β, α_0 and α_1 are unknown and should be estimated in order to perform the test. As we will see in Section 3, the choice of estimators affects the asymptotic variance of the limiting distribution, and one should know the exact asymptotic expansion form of the estimators. One may


employ any estimators satisfying the regularity conditions, but obtaining the asymptotic expansion form is somewhat involved. Therefore, following Weiss (1984), we employ the least-squares estimator (LSE) for β and the Gaussian maximum likelihood estimator (MLE) for α_0 and α_1 (see also Lee and Hansen, 1994). The limiting distribution of the LBI test with these estimators applied will be derived in Section 3.

3. Asymptotic distribution of the LBI test

In this section we investigate the limiting distribution of the LBI test T^{-1/2} S_T(β̂, α̂_0, α̂_1). Before we state our result, we introduce some notation:

\hat e_t = \frac{y_t - \hat\beta y_{t-1}}{\hat\tau_{t-1}}, \qquad \tilde e_t = \frac{y_t - \hat\beta y_{t-1}}{\tau_{t-1}}, \qquad \hat w_t = \frac{y_t}{\hat\tau_t},

\delta^2 = \sum_{t=1}^{T} \frac{(y_t - \beta y_{t-1})^2}{\tau_{t-1}^2}, \qquad \hat\delta^2 = \sum_{t=1}^{T} \frac{(y_t - \hat\beta y_{t-1})^2}{\hat\tau_{t-1}^2}, \qquad \tilde\delta^2 = \sum_{t=1}^{T} \frac{(y_t - \hat\beta y_{t-1})^2}{\tau_{t-1}^2},

ˆ t −1 . where ˆt = ˆ0 + ˆ1 ˆt and ˆt = yt − y The following is the main result of this paper. Theorem 3.1. Suppose that the null hypothesis holds; and let ST (; 0 ; 1 ) be the same as in (2.1). Then if Eyt6 ¡ ∞; ˆ −  = OP (T −1=2 ); ˆ0 − 0 = OP (T −1=2 ) and ˆ1 − 1 = OP (T −1=2 ); we have that as T → ∞; T

−1=2

ˆ ˆ0 ; ˆ1 ) = T −1=2 ST (;

T 

(t2 − 1)(wt2−1 − Ewt2 )

t=1

+ T 1=2 (ˆ0 − 0 )A + T 1=2 (ˆ1 − 1 )B + T 1=2 (ˆ − )C + oP (1);

(3.1)

where A=E

yt2 yt2 1 E − E ; 2t 2t 4t

B=E

yt2 t2 yt2 t2 E − E ; 2t 2t 4t

C = 21 E

yt2 yt −1 t yt2 yt −1 t − 2 E E : 1 4t 2t 2t

P ˆ ˆ0 ; ˆ1 )→ ∞ Remark. Although we do not present the details; we can readily see that T −1=2 ST (; under H1 ; provided the stochastic boundedness conditions for the estimators hold under H1 .

Proof. Put

D_1 = 2 T^{1/2}(\alpha_0 - \hat\alpha_0) \left( T^{-1} \sum_{t=1}^{T} \frac{y_{t-1}^2 \varepsilon_t^2}{\tau_{t-1}^6} \right) + 2 T^{1/2}(\alpha_1 - \hat\alpha_1) \left( T^{-1} \sum_{t=1}^{T} \frac{y_{t-1}^2 \varepsilon_t^2 \varepsilon_{t-1}^2}{\tau_{t-1}^6} \right) + 4 \alpha_1 T^{1/2}(\hat\beta - \beta) \left( T^{-1} \sum_{t=1}^{T} \frac{y_{t-1}^2 y_{t-2} \varepsilon_t^2 \varepsilon_{t-1}}{\tau_{t-1}^6} \right),


D_2 = T^{1/2}(\alpha_0 - \hat\alpha_0) \left( T^{-1} \sum_{t=1}^{T} \frac{y_t^2}{\tau_t^4} \right) + T^{1/2}(\alpha_1 - \hat\alpha_1) \left( T^{-1} \sum_{t=1}^{T} \frac{y_t^2 \varepsilon_t^2}{\tau_t^4} \right) + 2 \alpha_1 T^{1/2}(\hat\beta - \beta) \left( T^{-1} \sum_{t=1}^{T} \frac{y_t^2 y_{t-1} \varepsilon_t}{\tau_t^4} \right)

and

D_3 = T^{1/2}(\alpha_0 - \hat\alpha_0) \left( T^{-1} \sum_{t=1}^{T} \frac{\varepsilon_t^2}{\tau_{t-1}^4} \right) + T^{1/2}(\alpha_1 - \hat\alpha_1) \left( T^{-1} \sum_{t=1}^{T} \frac{\varepsilon_t^2 \varepsilon_{t-1}^2}{\tau_{t-1}^4} \right) + 2 \alpha_1 T^{1/2}(\hat\beta - \beta) \left( T^{-1} \sum_{t=1}^{T} \frac{y_{t-2} \varepsilon_t^2 \varepsilon_{t-1}}{\tau_{t-1}^4} \right).

Split T^{-1/2} S_T(β̂, α̂_0, α̂_1) into two terms, viz.,

T^{-1/2} S_T(\hat\beta, \hat\alpha_0, \hat\alpha_1) = T^2 \hat\delta^{-4} \left[ T^{-1/2} \left( \sum_{t=1}^{T} \hat e_t^2 \hat w_{t-1}^2 - T^{-1} \sum_{t=1}^{T} \hat e_t^2 \sum_{t=1}^{T} \hat w_{t-1}^2 \right) \right] + 2 T^{1/2} \hat\delta^{-4} \sum_{t=1}^{T} \hat e_t^2 \hat w_{t-1}^2 = I_T + II_T.

We show that II_T goes to 0 in probability. We write

T^{-1/2} \sum_{t=1}^{T} (\hat e_t^2 \hat w_{t-1}^2 - e_t^2 w_{t-1}^2) = T^{-1/2} \sum_{t=1}^{T} (\hat e_t^2 \hat w_{t-1}^2 - \tilde e_t^2 w_{t-1}^2) + T^{-1/2} \sum_{t=1}^{T} (\tilde e_t^2 - e_t^2) w_{t-1}^2.    (3.2)

Note that the second term on the RHS of (3.2) is o_P(1), and

T^{-1/2} \sum_{t=1}^{T} \hat e_t^2 \hat w_{t-1}^2 - T^{-1/2} \sum_{t=1}^{T} \tilde e_t^2 w_{t-1}^2 = 2 T^{1/2}(\alpha_0 - \hat\alpha_0) \left( T^{-1} \sum_{t=1}^{T} \frac{y_{t-1}^2 \varepsilon_t^2}{\tau_{t-1}^6} \right) + 2 T^{1/2}(\alpha_1 - \hat\alpha_1) \left( T^{-1} \sum_{t=1}^{T} \frac{y_{t-1}^2 \varepsilon_t^2 \varepsilon_{t-1}^2}{\tau_{t-1}^6} \right) + 4 \alpha_1 T^{1/2}(\hat\beta - \beta) \left( T^{-1} \sum_{t=1}^{T} \frac{y_{t-1}^2 y_{t-2} \varepsilon_t^2 \varepsilon_{t-1}}{\tau_{t-1}^6} \right) + o_P(1).


Thus, we have

T^{-1/2} \sum_{t=1}^{T} \hat e_t^2 \hat w_{t-1}^2 = T^{-1/2} \sum_{t=1}^{T} e_t^2 w_{t-1}^2 + D_1 + o_P(1).    (3.3)

Meanwhile, it can be easily seen that

T^{1/2} \hat\delta^{-4} \sum_{t=1}^{T} \hat e_t^2 \hat w_{t-1}^2 = T^{-1/2} (T^2 \hat\delta^{-4}) \left( T^{-1} \sum_{t=1}^{T} \hat e_t^2 \hat w_{t-1}^2 \right) = o_P(1),

which asserts II_T = o_P(1). Now, we handle I_T. Note that

T^{-1/2} \sum_{t=1}^{T} \hat w_t^2 = T^{-1/2} \sum_{t=1}^{T} w_t^2 + D_2 + o_P(1)    (3.4)

and

T^{-1/2} \sum_{t=1}^{T} \hat e_t^2 = T^{-1/2} \sum_{t=1}^{T} e_t^2 + D_3 + o_P(1).    (3.5)

In view of (3.3)–(3.5), we have

T^{-1/2} \left( \sum_{t=1}^{T} \hat e_t^2 \hat w_{t-1}^2 - T^{-1} \sum_{t=1}^{T} \hat e_t^2 \sum_{t=1}^{T} \hat w_{t-1}^2 \right) = T^{-1/2} \sum_{t=1}^{T} (e_t^2 - 1)(w_{t-1}^2 - E w_t^2) + D_1 - \left( T^{-1} \sum_{t=1}^{T} e_t^2 \right) D_2 - \left( T^{-1} \sum_{t=1}^{T} w_{t-1}^2 \right) D_3 + o_P(1),

which yields (3.1).

Theorem 3.1 indicates that the limiting distribution of the LBI test statistic depends upon the choice of the estimator of the autoregressive parameter as well as the estimators of the ARCH parameters. This is a fundamental difference between our test and that of Lee (1998); in Lee's setup the estimator of the autoregressive parameter did not affect the limiting distribution of the test statistic. It is not difficult to guess that the LBI test is asymptotically normal. However, one should know the asymptotic expansion form of the parameter estimators in order to calculate the exact value of the asymptotic variance. Below, we give an example showing that the LBI test is asymptotically normal under the null hypothesis.

In the following, we consider the case where the LSE for β and the MLE for α_0 and α_1 are employed to construct the LBI test. Suppose that β̂ is the LSE which minimizes \sum_{t=1}^{T} (y_t - \beta y_{t-1})^2. The Gaussian MLE of α_0 and α_1 are obtained as the maximizers of the function

\ell(\theta) = c - \frac{1}{2} \sum_{t=1}^{T} \ln(\alpha_0 + \alpha_1 \hat\varepsilon_{t-1}^2) - \frac{1}{2} \sum_{t=1}^{T} \frac{(y_t - \hat\beta y_{t-1})^2}{\alpha_0 + \alpha_1 \hat\varepsilon_{t-1}^2},

where θ = (α_0, α_1)', ε̂_t = y_t - β̂ y_{t-1}, and c is a constant.
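One concrete way to carry out this two-step estimation is sketched below: the LSE for β has a closed form, and the Gaussian MLE for (α_0, α_1) is obtained by numerically maximizing ℓ(θ) (here via scipy.optimize.minimize; the function name, starting values and bounds are our choices, not prescribed by the paper).

```python
import numpy as np
from scipy.optimize import minimize

def fit_ar_arch(y):
    """LSE for beta, then Gaussian (quasi-)MLE for (alpha0, alpha1).

    The MLE minimizes -l(theta) = 0.5 * sum[ log(tau2_{t-1}) + eps_hat_t^2 / tau2_{t-1} ]
    (the constant c is irrelevant), with eps_hat_t = y_t - beta_hat y_{t-1}.
    """
    y = np.asarray(y, dtype=float)
    y_lag = np.concatenate(([0.0], y[:-1]))             # convention y_0 = 0
    beta_hat = np.sum(y * y_lag) / np.sum(y_lag ** 2)   # closed-form LSE
    eps_hat = y - beta_hat * y_lag
    eps_lag = np.concatenate(([0.0], eps_hat[:-1]))     # convention eps_0 = 0

    def neg_loglik(theta):
        a0, a1 = theta
        tau2 = a0 + a1 * eps_lag ** 2                   # tau_{t-1}^2
        return 0.5 * np.sum(np.log(tau2) + eps_hat ** 2 / tau2)

    res = minimize(neg_loglik, x0=np.array([np.var(eps_hat), 0.1]),
                   bounds=[(1e-6, None), (0.0, 0.999)], method="L-BFGS-B")
    return (beta_hat,) + tuple(res.x)

# Example with a simulated null series.
rng = np.random.default_rng(2)
y, y_prev, eps_prev = np.empty(500), 0.0, 0.0
for t in range(500):
    eps_t = np.sqrt(0.2 + 0.3 * eps_prev ** 2) * rng.standard_normal()
    y[t] = 0.5 * y_prev + eps_t
    y_prev, eps_prev = y[t], eps_t
print(fit_ar_arch(y))   # roughly (0.5, 0.2, 0.3) up to sampling error
```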


Then we obtain the following result.

Theorem 3.2. Suppose that the assumptions of Theorem 3.1 hold. If α̂_0 and α̂_1 are the Gaussian MLE and β̂ is the LSE, then under H_0,

T^{-1/2} S_T(\hat\beta, \hat\alpha_0, \hat\alpha_1) \to_d N(0, V) \quad \text{as } T \to \infty,    (3.6)

where

V = (E e_t^4 - 1) \left[ E\frac{y_t^4}{\tau_t^4} - \left( E\frac{y_t^2}{\tau_t^2} \right)^2 + B^{*2} E\frac{\varepsilon_t^4}{\tau_t^4} + 2 A^* B^* E\frac{\varepsilon_t^2}{\tau_t^4} + A^{*2} E\frac{1}{\tau_t^4} + 2 A^* \left( E\frac{y_t^2}{\tau_t^4} - E\frac{1}{\tau_t^2} E\frac{y_t^2}{\tau_t^2} \right) + 2 B^* \left( E\frac{y_t^2 \varepsilon_t^2}{\tau_t^4} - E\frac{\varepsilon_t^2}{\tau_t^2} E\frac{y_t^2}{\tau_t^2} \right) \right]
+ 2 C^* (E e_t^3) \left( E\frac{y_t^3}{\tau_t} - E(y_t \tau_t) E\frac{y_t^2}{\tau_t^2} + A^* E\frac{y_t}{\tau_t} + B^* E\frac{y_t \varepsilon_t^2}{\tau_t} \right) + C^{*2} E(\tau_t^2 y_t^2)

with

A^* = \frac{A}{D} E\frac{\varepsilon_t^4}{\tau_t^4} - \frac{B}{D} E\frac{\varepsilon_t^2}{\tau_t^4}, \qquad B^* = -\frac{A}{D} E\frac{\varepsilon_t^2}{\tau_t^4} + \frac{B}{D} E\frac{1}{\tau_t^4}, \qquad C^* = C (E y_t^2)^{-1}

and

D = E\frac{1}{\tau_t^4}\, E\frac{\varepsilon_t^4}{\tau_t^4} - \left( E\frac{\varepsilon_t^2}{\tau_t^4} \right)^2.

Remark. If the e_t are assumed to be normal, V reduces to

V = 2 \left[ E\frac{y_t^4}{\tau_t^4} - \left( E\frac{y_t^2}{\tau_t^2} \right)^2 + B^{*2} E\frac{\varepsilon_t^4}{\tau_t^4} + 2 A^* B^* E\frac{\varepsilon_t^2}{\tau_t^4} + A^{*2} E\frac{1}{\tau_t^4} + 2 A^* \left( E\frac{y_t^2}{\tau_t^4} - E\frac{1}{\tau_t^2} E\frac{y_t^2}{\tau_t^2} \right) + 2 B^* \left( E\frac{y_t^2 \varepsilon_t^2}{\tau_t^4} - E\frac{\varepsilon_t^2}{\tau_t^2} E\frac{y_t^2}{\tau_t^2} \right) \right] + C^{*2} E(\tau_t^2 y_t^2).

Proof. Since θ̂ = (α̂_0, α̂_1)' satisfies ℓ'(θ̂) = 0, we can write, by Taylor's expansion,

T^{1/2}(\hat\theta - \theta) = -\left( \frac{\ell''(\theta)}{T} \right)^{-1} \frac{1}{\sqrt{T}}\, \ell'(\theta) + o_P(1),

whose components satisfy

T^{1/2}(\hat\alpha_0 - \alpha_0) = \frac{1}{D}\, T^{-1/2} \sum_{t=1}^{T} \left[ E\frac{\varepsilon_t^4}{\tau_t^4} \frac{\varepsilon_t^2 - \tau_{t-1}^2}{\tau_{t-1}^4} - E\frac{\varepsilon_t^2}{\tau_t^4} \frac{\varepsilon_{t-1}^2 (\varepsilon_t^2 - \tau_{t-1}^2)}{\tau_{t-1}^4} \right] + o_P(1),

T^{1/2}(\hat\alpha_1 - \alpha_1) = \frac{1}{D}\, T^{-1/2} \sum_{t=1}^{T} \left[ -E\frac{\varepsilon_t^2}{\tau_t^4} \frac{\varepsilon_t^2 - \tau_{t-1}^2}{\tau_{t-1}^4} + E\frac{1}{\tau_t^4} \frac{\varepsilon_{t-1}^2 (\varepsilon_t^2 - \tau_{t-1}^2)}{\tau_{t-1}^4} \right] + o_P(1).


This implies that

T^{1/2}(\hat\alpha_0 - \alpha_0) A + T^{1/2}(\hat\alpha_1 - \alpha_1) B + T^{1/2}(\hat\beta - \beta) C = \left( \frac{A}{D} E\frac{\varepsilon_t^4}{\tau_t^4} - \frac{B}{D} E\frac{\varepsilon_t^2}{\tau_t^4} \right) T^{-1/2} \sum_{t=1}^{T} \frac{\varepsilon_t^2 - \tau_{t-1}^2}{\tau_{t-1}^4} + \left( -\frac{A}{D} E\frac{\varepsilon_t^2}{\tau_t^4} + \frac{B}{D} E\frac{1}{\tau_t^4} \right) T^{-1/2} \sum_{t=1}^{T} \frac{\varepsilon_{t-1}^2 (\varepsilon_t^2 - \tau_{t-1}^2)}{\tau_{t-1}^4} + C (E y_t^2)^{-1} T^{-1/2} \sum_{t=1}^{T} \varepsilon_t y_{t-1} + o_P(1)

and therefore,

T^{-1/2} S_T(\hat\beta, \hat\alpha_0, \hat\alpha_1) = T^{-1/2} \sum_{t=1}^{T} \left[ \left( \frac{\varepsilon_t^2}{\tau_{t-1}^2} - 1 \right) \left( \frac{y_{t-1}^2}{\tau_{t-1}^2} - E\frac{y_t^2}{\tau_t^2} \right) + (A^* + B^* \varepsilon_{t-1}^2) \frac{\varepsilon_t^2 - \tau_{t-1}^2}{\tau_{t-1}^4} \right] + T^{-1/2} C^* \sum_{t=1}^{T} \varepsilon_t y_{t-1} + o_P(1).

Then, applying the CLT for martingale differences, we obtain

T^{-1/2} S_T(\hat\beta, \hat\alpha_0, \hat\alpha_1) \to_d N(0, V),

where V is the number in (3.6). This completes the proof.

Although we have the asymptotic normality result, we have to estimate the asymptotic variance V for actual usage. A glimpse of the terms in V shows that one can easily find a consistent estimator V̂ satisfying V̂ - V = O_P(T^{-1/2}). For instance, one can employ the estimator constructed by replacing the components in V by their moment estimators. In such a case, the final test statistic becomes

\tilde S_T := T^{-1/2} \hat V^{-1/2} S_T(\hat\beta, \hat\alpha_0, \hat\alpha_1),

which is asymptotically distributed as N(0, 1).
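The paper does not display V̂ explicitly, so the sketch below should be read as one possible implementation based on our reconstruction of V (its Gaussian-innovation form from the Remark after Theorem 3.2), with every population moment replaced by a sample average. For brevity the demo plugs in the true parameters where the LSE/MLE would be used in practice, and all names are ours.

```python
import numpy as np

def studentized_lbi_stat(y, beta, alpha0, alpha1):
    """T^{-1/2} Vhat^{-1/2} S_T, with Vhat built from sample moments.

    Vhat follows the Gaussian-innovation form of V (Remark after Theorem 3.2,
    as reconstructed here), with moments evaluated at the supplied parameters.
    """
    y = np.asarray(y, dtype=float)
    T = y.size
    y_lag = np.concatenate(([0.0], y[:-1]))
    eps = y - beta * y_lag
    eps_lag = np.concatenate(([0.0], eps[:-1]))
    tau2 = alpha0 + alpha1 * eps ** 2          # tau_t^2
    tau2_lag = alpha0 + alpha1 * eps_lag ** 2  # tau_{t-1}^2

    # S_T of (2.1)
    delta2 = np.sum(eps ** 2 / tau2_lag)
    S_T = (delta2 ** -2 * T * (T + 2) * np.sum(eps ** 2 * y_lag ** 2 / tau2_lag ** 2)
           - delta2 ** -1 * T * np.sum(y_lag ** 2 / tau2_lag))

    def mom(v):                                # sample moment
        return float(np.mean(v))

    # sample versions of A, B, C, D and the starred quantities of Theorem 3.2
    A = mom(y ** 2 / tau2) * mom(1.0 / tau2) - mom(y ** 2 / tau2 ** 2)
    B = mom(y ** 2 / tau2) * mom(eps ** 2 / tau2) - mom(y ** 2 * eps ** 2 / tau2 ** 2)
    C = 2 * alpha1 * (mom(y ** 2 * y_lag * eps / tau2 ** 2)
                      - mom(y ** 2 / tau2) * mom(y_lag * eps / tau2))
    D = mom(1.0 / tau2 ** 2) * mom(eps ** 4 / tau2 ** 2) - mom(eps ** 2 / tau2 ** 2) ** 2
    As = (A * mom(eps ** 4 / tau2 ** 2) - B * mom(eps ** 2 / tau2 ** 2)) / D
    Bs = (-A * mom(eps ** 2 / tau2 ** 2) + B * mom(1.0 / tau2 ** 2)) / D
    Cs = C / mom(y ** 2)

    # Gaussian-innovation V: E e^4 - 1 = 2, E e^3 = 0
    bracket = (mom(y ** 4 / tau2 ** 2) - mom(y ** 2 / tau2) ** 2
               + Bs ** 2 * mom(eps ** 4 / tau2 ** 2)
               + 2 * As * Bs * mom(eps ** 2 / tau2 ** 2)
               + As ** 2 * mom(1.0 / tau2 ** 2)
               + 2 * As * (mom(y ** 2 / tau2 ** 2) - mom(1.0 / tau2) * mom(y ** 2 / tau2))
               + 2 * Bs * (mom(y ** 2 * eps ** 2 / tau2 ** 2)
                           - mom(eps ** 2 / tau2) * mom(y ** 2 / tau2)))
    V_hat = 2.0 * bracket + Cs ** 2 * mom(tau2 * y ** 2)
    return S_T / np.sqrt(T * V_hat)

# Demo: true parameters are plugged in for brevity; in practice use the LSE/MLE.
rng = np.random.default_rng(3)
y, y_prev, eps_prev = np.empty(800), 0.0, 0.0
for t in range(800):
    eps_t = np.sqrt(0.2 + 0.2 * eps_prev ** 2) * rng.standard_normal()
    y[t] = 0.3 * y_prev + eps_t
    y_prev, eps_prev = y[t], eps_t
print(studentized_lbi_stat(y, 0.3, 0.2, 0.2))   # approximately N(0, 1) under H0
```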

4. Simulation results and concluding remarks

In this section, we evaluate the performance of the test statistic S̃_T through a simulation study, in which the Gaussian MLE for α_0 and α_1 and the LSE for β are employed. The empirical sizes and powers are calculated at a nominal level of 0.1, and sets of 300, 500, and 800 observations are generated from the AR(1)-ARCH(1) model with β = 0.3, 0.5, 0.7, α_0 = 0.2 and α_1 = 0.1, 0.2, 0.3, 0.4. Here, {e_t} is assumed to be Gaussian.
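The mechanics of the size computation just described can be sketched with a Monte Carlo loop of the following kind; note that, to keep the sketch self-contained, it studentizes the known-parameter statistic S_T of (2.1) by the sample variance of the martingale differences in (2.2), rather than using the full S̃_T with estimated parameters and V̂, so it only illustrates the outline of the experiment.

```python
import numpy as np

def empirical_size(T=300, beta=0.5, alpha0=0.2, alpha1=0.2,
                   nrep=1000, seed=0):
    """Monte Carlo rejection frequency under H0 at nominal level 0.1.

    Uses the known-parameter statistic: T^{-1/2} S_T of (2.1) studentized by
    the sample variance of the martingale differences in (2.2).
    """
    rng = np.random.default_rng(seed)
    crit = 1.2816                      # upper 10% point of N(0, 1), one-sided test
    rejections = 0
    for _ in range(nrep):
        # generate y_1, ..., y_T under H0 with y_0 = eps_0 = 0
        y, y_prev, eps_prev = np.empty(T), 0.0, 0.0
        for t in range(T):
            eps_t = np.sqrt(alpha0 + alpha1 * eps_prev ** 2) * rng.standard_normal()
            y[t] = beta * y_prev + eps_t
            y_prev, eps_prev = y[t], eps_t
        y_lag = np.concatenate(([0.0], y[:-1]))
        eps = y - beta * y_lag
        eps_lag = np.concatenate(([0.0], eps[:-1]))
        tau2_lag = alpha0 + alpha1 * eps_lag ** 2
        tau2 = alpha0 + alpha1 * eps ** 2
        # S_T of (2.1) with known parameters
        delta2 = np.sum(eps ** 2 / tau2_lag)
        S_T = (delta2 ** -2 * T * (T + 2) * np.sum(eps ** 2 * y_lag ** 2 / tau2_lag ** 2)
               - delta2 ** -1 * T * np.sum(y_lag ** 2 / tau2_lag))
        # studentize via the summands (e_t^2 - 1)(w_{t-1}^2 - mean of w_t^2) of (2.2)
        d = (eps ** 2 / tau2_lag - 1.0) * (y_lag ** 2 / tau2_lag - np.mean(y ** 2 / tau2))
        rejections += S_T / np.sqrt(T * np.mean(d ** 2)) > crit
    return rejections / nrep

print(empirical_size())   # should be close to the nominal 0.1
```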


Table 1
Empirical sizes and powers of S̃_T for T = 300

            α_1 = 0.1                α_1 = 0.2                α_1 = 0.3                α_1 = 0.4
  ω^2   β=0.3  β=0.5  β=0.7    β=0.3  β=0.5  β=0.7    β=0.3  β=0.5  β=0.7    β=0.3  β=0.5  β=0.7
  0.0   0.118  0.101  0.086    0.094  0.109  0.112    0.121  0.105  0.093    0.139  0.123  0.116
  0.2   0.389  0.693  0.953    0.410  0.687  0.932    0.409  0.665  0.928    0.442  0.686  0.931
  0.4   0.711  0.938  0.998    0.681  0.951  0.994    0.674  0.927  0.996    0.708  0.923  0.997
  0.5   0.794  0.970  0.998    0.760  0.963  0.994    0.768  0.960  0.996    0.750  0.950  0.996

Table 2
Empirical sizes and powers of S̃_T for T = 500

            α_1 = 0.1                α_1 = 0.2                α_1 = 0.3                α_1 = 0.4
  ω^2   β=0.3  β=0.5  β=0.7    β=0.3  β=0.5  β=0.7    β=0.3  β=0.5  β=0.7    β=0.3  β=0.5  β=0.7
  0.0   0.102  0.089  0.091    0.100  0.091  0.091    0.106  0.103  0.105    0.131  0.113  0.118
  0.2   0.499  0.864  0.997    0.506  0.845  0.992    0.525  0.851  0.992    0.574  0.875  0.992
  0.4   0.828  0.989  0.998    0.838  0.996  1.000    0.827  0.986  1.000    0.837  0.986  1.000
  0.5   0.916  0.997  1.000    0.894  0.994  0.998    0.888  0.994  0.999    0.895  0.993  1.000

Table 3
Empirical sizes and powers of S̃_T for T = 800

            α_1 = 0.1                α_1 = 0.2                α_1 = 0.3                α_1 = 0.4
  ω^2   β=0.3  β=0.5  β=0.7    β=0.3  β=0.5  β=0.7    β=0.3  β=0.5  β=0.7    β=0.3  β=0.5  β=0.7
  0.0   0.100  0.092  0.075    0.102  0.100  0.099    0.105  0.103  0.098    0.138  0.143  0.137
  0.2   0.679  0.961  1.000    0.659  0.963  1.000    0.705  0.968  1.000    0.700  0.950  1.000
  0.4   0.957  1.000  1.000    0.959  0.998  1.000    0.943  0.999  1.000    0.928  0.997  1.000
  0.5   0.968  1.000  1.000    0.973  1.000  1.000    0.963  0.999  1.000    0.961  0.999  1.000

As alternatives, we consider the RCA(1) model with ω^2 = 0.2, 0.4, 0.5. Recall that we have assumed a 6th moment condition for {y_t}, which requires α_1 to satisfy 15α_1^3 < 1, i.e. α_1 < 0.405 (cf. Engle, 1982). In each simulation, 100 initial observations are discarded to remove initialization effects.

The figures in Tables 1–3 indicate the proportion of rejections of the null hypothesis H_0: ω^2 = 0 out of 1000 repetitions. They show that the sizes are very close to the nominal level when α_1 = 0.1, 0.2, and that the size distortions become larger as α_1 tends to 0.4, which is quite near the boundary of the region allowed for α_1. Meanwhile, as anticipated, the power increases as either T or ω^2 increases. For instance, when T = 500 (800) and ω^2 ≥ 0.4, the powers are greater than 0.8 (0.9). This means that the test performs adequately for large samples (say, T ≥ 300). However, the result


Fig. 1. y_t = (0.3 + b_t) y_{t-1} + ε_t, {ε_t} ∼ N(0, 1).

Fig. 2. y_t = (0.7 + b_t) y_{t-1} + ε_t, {ε_t} ∼ N(0, 1).

may be unsatisfactory if the sample size is not large enough, say, less than 200, which is due to the fact that our testing method is intrinsically based on asymptotic results. One of the most interesting phenomena is that the power increases as β approaches 1, contrary to the fact that test statistics generally do not perform well in that case. From Figs. 1 and 2, where the solid lines denote the plots of the AR(1) models and the dotted lines indicate the plots of the RCA(1) models with ω^2 = 0.5, we can see that the randomness effects in the RCA models become more prominent for β close to 1. The reason is that the effects of {b_t} are integrated more strongly as β tends to 1.


So far, we have seen that the LBI test can be constructed in AR-ARCH models by analogy to McCabe and Tremayne (1995) and Lee (1998), but the limiting distribution of the LBI test is severely affected by the choice of the estimators of the autoregressive parameter as well as the ARCH parameters. It was shown that the LBI test statistic is asymptotically normal under the null hypothesis and constitutes a consistent test, provided the regularity conditions are satisfied. The exact asymptotic expansion form of the estimators was needed to calculate the asymptotic variance of the LBI test statistic. In particular, the limiting distribution of the test was obtained using the Gaussian MLE and the LSE for the parameters in the AR-ARCH model. Our simulation results strongly support the validity of our test for large samples.

Here, we restricted ourselves to the case that b_t and e_t are independent for the sake of technical convenience. However, the generalization to the correlated case is straightforward, and indeed one can check that the result in Theorem 2.1 remains the same even in this case. It is evident, though, that the correlation should affect the asymptotic variance of the LBI test statistic. In actual fact, ignoring the ARCH effects, one could still use the test statistic of Lee by obtaining its limiting distribution. The limiting distribution can be readily obtained since the AR-ARCH process is well known to be geometrically beta-mixing (cf. Neumann and Kreiss, 1998), and the existing results can easily be used in its derivation. However, it is obvious that our test should outperform Lee's test when the data have the characteristics of the ARCH process, as the former takes the ARCH effects into consideration while the latter ignores them completely. It is believed that our test procedure provides a functional tool to practitioners who wish to test the coefficient constancy in AR-ARCH models.

Acknowledgements

This research was supported (in part) by KOSEF through the Statistical Research Center for Complex Systems at Seoul National University. We are grateful to the referee for his/her valuable comments.

References

Andel, J., 1976. Autoregressive series with random parameters. Math. Operationsforsch. Statist. 7, 735–741.
Conlisk, J., 1974. Stability in a random coefficient model. Internat. Econom. Rev. XV, 529–533.
Conlisk, J., 1976. A further note on stability in a random coefficient model. Internat. Econom. Rev. XVII, 759–792.
Engle, R.F., 1982. Autoregressive conditional heteroskedasticity with estimates of the variance of U.K. inflation. Econometrica 50, 987–1008.
Ferguson, T.S., 1967. Mathematical Statistics: A Decision Theoretic Approach. Academic Press, New York.
Hwang, S.Y., Basawa, I., 1998. Parameter estimation for generalized random coefficient autoregressive processes. J. Statist. Plann. Inference 68, 323–337.
Lee, S., 1998. Coefficient constancy test in a random coefficient autoregressive model. J. Statist. Plann. Inference 74, 93–101.
Lee, S., Hansen, B.E., 1994. Asymptotic theory for the GARCH(1,1) quasi-maximum likelihood estimator. Econometric Theory 10, 29–52.
Leybourne, S.J., McCabe, B.P.M., 1989. On the distribution of some test statistics for coefficient constancy. Biometrika 76, 169–177.


Lumsdaine, R.L., 1996. Consistency and asymptotic normality of the quasi-maximum likelihood estimator in IGARCH(1,1) and covariance stationary GARCH(1,1) models. Econometrica 64, 575–596.
McCabe, B.P.M., Tremayne, A.R., 1995. Testing a time series for difference stationarity. Ann. Statist. 23, 1015–1028.
Neumann, M.H., Kreiss, J.P., 1998. Regression-type inference in nonparametric autoregression. Ann. Statist. 26, 1570–1613.
Nicholls, D.F., Pagan, A.R., 1985. Varying coefficient regression. In: Hannan, E.J., Krishnaiah, P.R., Rao, M.M. (Eds.), Handbook of Statistics, Vol. 5. North-Holland, Amsterdam, pp. 413–449.
Nicholls, D.F., Quinn, B.G., 1980. The estimation of random coefficient autoregressive models. I. J. Time Ser. Anal. 1, 37–46.
Nicholls, D.F., Quinn, B.G., 1981. The estimation of random coefficient autoregressive models. II. J. Time Ser. Anal. 2, 185–203.
Nicholls, D.F., Quinn, B.G., 1982. Random Coefficient Autoregressive Models: An Introduction. Springer, New York.
Pantula, S.G., 1988. Estimation of autoregressive models with ARCH errors. Sankhyā, Ser. B 50, 119–138.
Ramanathan, T.V., Rajarshi, M.B., 1994. Rank test for testing the randomness of autoregressive coefficients. Statist. Probab. Lett. 21, 115–120.
Tanaka, K., 1983. Non-normality of the Lagrange multiplier statistic for testing the constancy of regression coefficients. Econometrica 51, 1577–1582.
Weiss, A.A., 1984. ARMA models with ARCH errors. J. Time Ser. Anal. 5, 129–143.