Statistics & Probability Letters 50 (2000) 389 – 395
Bootstrap tests for unit roots in seasonal autoregressive models

Zacharias Psaradakis∗

School of Economics, Mathematics and Statistics, Birkbeck College, University of London, 7-15 Gresse Street, London W1P 2LL, UK

Received October 1999; received in revised form April 2000
Abstract

This paper proposes bootstrap tests for the presence of unit roots in a seasonal autoregressive model. The asymptotic validity of the proposed bootstrap scheme is established, and Monte Carlo experiments are used to investigate the small-sample performance of the tests. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Bootstrap; Hypothesis testing; Least-squares estimator; Seasonal autoregressive model; Unit roots
1. Introduction

Let the real-valued time series {X_t : t ∈ ℕ} satisfy the autoregressive model

\[ X_t = \phi X_{t-m} + \varepsilon_t \qquad (t \in \mathbb{N}), \tag{1} \]

where {ε_t : t ∈ ℕ} is a sequence of independent, identically distributed (i.i.d.) random variables with E[ε_1] = 0 and E[ε_1²] = σ² ∈ (0, ∞), and φ ∈ (−1, 1]. Model (1) with m ≥ 2 is a simple seasonal model in which time series observed semi-annually, quarterly, or monthly are represented by m = 2, m = 4, or m = 12, respectively. For the sake of convenience, it will be assumed throughout the paper that X_{−m+1} = ··· = X_0 = 0.

We are interested in testing the null hypothesis H₀: φ = 1, under which the roots of the characteristic equation 1 − φz^m = 0 of (1) are all of unit modulus and {X_t : t ∈ ℕ} is a seasonal random walk with period m (or an ARIMA(0,1,0)_m process). This is a problem of practical importance since it is often of interest to test whether a series should be seasonally differenced in order to render it stationary. Unstable models like (1) with φ = 1 can describe well a time series with changing seasonal patterns, and have been used extensively in the literature following the influential work of Box and Jenkins (1970).

When a finite segment {X_t : t = 1, ..., n} from a realization of the process defined by (1) is available, a testing methodology for the hypothesis H₀: φ = 1 was presented in Dickey et al. (1984). Their tests are based
∗ Tel.: +44-207-631-6415; fax: +44-207-631-6416. E-mail address: [email protected] (Z. Psaradakis).
on the statistics

\[ K_n := n(\hat{\phi}_n - 1) \quad \text{and} \quad T_n := \left( \frac{1}{\hat{\sigma}_n^2} \sum_{t=1}^{n} X_{t-m}^2 \right)^{1/2} (\hat{\phi}_n - 1), \]

where φ̂_n is the least-squares estimator of φ, φ̂_n := (Σ_{t=1}^{n} X²_{t−m})^{−1} Σ_{t=1}^{n} X_{t−m} X_t, and σ̂²_n := (n − 1)^{−1} Σ_{t=1}^{n} (X_t − φ̂_n X_{t−m})². Under the null hypothesis H₀: φ = 1, we have (e.g., Ghysels et al., 2000)

\[ L[K_n] \stackrel{w}{\rightarrow} L\left[ m \left\{ \sum_{s=1}^{m} \int_0^1 W_s^2(r)\,dr \right\}^{-1} \left\{ \sum_{s=1}^{m} \int_0^1 W_s(r)\,dW_s(r) \right\} \right] \qquad (n \to \infty), \tag{2} \]

and

\[ L[T_n] \stackrel{w}{\rightarrow} L\left[ \left\{ \sum_{s=1}^{m} \int_0^1 W_s^2(r)\,dr \right\}^{-1/2} \left\{ \sum_{s=1}^{m} \int_0^1 W_s(r)\,dW_s(r) \right\} \right] \qquad (n \to \infty), \tag{3} \]
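As a concrete illustration of these definitions, the following sketch (in Python with NumPy; the function and variable names are ours, not the paper's) simulates a quarterly seasonal random walk under H₀ and computes K_n and T_n exactly as defined above.

```python
import numpy as np

def simulate_seasonal_ar(n, m=4, phi=1.0, rng=None):
    """Generate X_1,...,X_n from model (1) with start-up values X_{-m+1} = ... = X_0 = 0."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(n)
    x = np.zeros(n + m)                 # first m entries hold the zero start-up values
    for t in range(n):
        x[m + t] = phi * x[t] + eps[t]  # X_t = phi * X_{t-m} + eps_t
    return x[m:]                        # X_1,...,X_n

def unit_root_statistics(x, m=4):
    """Return (K_n, T_n, phi_hat) computed from the observations X_1,...,X_n."""
    n = len(x)
    x_full = np.concatenate([np.zeros(m), x])   # prepend X_{-m+1},...,X_0 = 0
    xt = x_full[m:]                             # X_t,      t = 1,...,n
    xl = x_full[:n]                             # X_{t-m},  t = 1,...,n
    phi_hat = (xl @ xt) / (xl @ xl)             # least-squares estimator of phi
    resid = xt - phi_hat * xl
    sigma2_hat = (resid @ resid) / (n - 1)
    K_n = n * (phi_hat - 1.0)
    T_n = np.sqrt((xl @ xl) / sigma2_hat) * (phi_hat - 1.0)
    return K_n, T_n, phi_hat

# Example: a quarterly (m = 4) seasonal random walk of length n = 120 under H0.
x = simulate_seasonal_ar(120, m=4, phi=1.0, rng=np.random.default_rng(0))
K_n, T_n, phi_hat = unit_root_statistics(x, m=4)
```

Because the limit laws in (2) and (3) are nonstandard, critical values for K_n and T_n must be taken from tabulations of those distributions or, as proposed below, from a bootstrap approximation.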
where L[Y] denotes the law of Y and W₁ = {W₁(r): r ∈ [0, 1]}, ..., W_m = {W_m(r): r ∈ [0, 1]} are m independent, standard (one-dimensional) Brownian motions.¹

¹ The limit laws of K_n and T_n stated above are the same as those given in Dickey et al. (1984); their asymptotic representations can be obtained from (2) and (3) by using the Karhunen–Loève expansion of the standard Brownian motion, W_s(r) = 2√2 Σ_{i=1}^{∞} [(2i − 1)π]^{−1} sin[(i − ½)πr] Z_{si}, where {Z_{si} : i ∈ ℕ} are i.i.d. N(0, 1) variates.

In what follows, we consider the problem of obtaining a bootstrap approximation to the sampling distribution of φ̂_n when φ = 1, and propose bootstrap tests for the unit-root hypothesis H₀: φ = 1 (i.e. tests with rejection regions determined from the bootstrap approximation to the null distributions of K_n and T_n). This is a problem that requires special attention since the standard bootstrap distribution estimator for autoregressions – as defined, for instance, in Bose (1988) – is inconsistent for models with characteristic roots that lie on the unit circle (cf. Basawa et al., 1991a; Datta, 1996). In Section 2, we describe a bootstrap procedure appropriate for the model in (1) when φ = 1 and establish its asymptotic correctness (to first order). Section 3 reports the results of simulation experiments on the finite-sample performance of bootstrap unit-root tests. Section 4 concludes.

2. Bootstrap unit-root tests

To describe how a bootstrap approximation to the null sampling distributions of K_n and T_n may be obtained, let {ε*_t : t = 1, ..., n} be a random sample from F̃_n(x) := G̃_n(x + ε̄_n), x ∈ ℝ, where G̃_n denotes the empirical distribution function of {ε̃_t := X_t − X_{t−m} : t = 1, ..., n} and ε̄_n := n^{−1} Σ_{t=1}^{n} ε̃_t. Bootstrap replicates {X*_t : t = 1, ..., n} are defined via the recursive scheme

\[ X_t^* = X_{t-m}^* + \varepsilon_t^* \qquad (t = 1, \ldots, n), \tag{4} \]

where (X*_{−m+1}, ..., X*_0) = (X_{−m+1}, ..., X_0). The bootstrap analogues of K_n and T_n are then given by

\[ K_n^* := n(\hat{\phi}_n^* - 1) \quad \text{and} \quad T_n^* := \left( \frac{1}{\sigma_n^{*2}} \sum_{t=1}^{n} X_{t-m}^{*2} \right)^{1/2} (\hat{\phi}_n^* - 1), \]

where φ̂*_n := (Σ_{t=1}^{n} X*²_{t−m})^{−1} Σ_{t=1}^{n} X*_{t−m} X*_t and σ*²_n := E*[ε*²_t] = n^{−1} Σ_{t=1}^{n} (ε̃_t − ε̄_n)². (Henceforth, E*[·], P*[·], and L*[·] are used to denote bootstrap expectation, probability, and law, respectively, conditional on X_1, ..., X_n.)
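A minimal sketch of this resampling scheme in Python/NumPy is given below; it reuses the hypothetical `unit_root_statistics` helper from the earlier sketch, and the default of 499 bootstrap replicates mirrors the Monte Carlo design of Section 3. The left-tail bootstrap critical values correspond to those defined in Corollary 2.1 below.

```python
import numpy as np

def bootstrap_unit_root_test(x, m=4, n_boot=499, alpha=0.05, rng=None):
    """Bootstrap test of H0: phi = 1 in model (1), using the resampling scheme in (4)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)

    # Centred m-th differences eps_tilde_t = X_t - X_{t-m} (with zero start-up values);
    # resampling from them is equivalent to drawing from F_tilde_n.
    x_full = np.concatenate([np.zeros(m), x])
    eps_tilde = x_full[m:] - x_full[:n]
    eps_centred = eps_tilde - eps_tilde.mean()
    sigma2_star = np.mean(eps_centred ** 2)        # sigma_n^{*2} = E*[eps*_t^2]

    # Observed statistics.
    K_obs, T_obs, _ = unit_root_statistics(x, m)

    K_star = np.empty(n_boot)
    T_star = np.empty(n_boot)
    for b in range(n_boot):
        eps_star = rng.choice(eps_centred, size=n, replace=True)
        # Recursion (4), started from the same (zero) initial values as the data.
        x_star = np.zeros(n + m)
        for t in range(n):
            x_star[m + t] = x_star[t] + eps_star[t]
        xs, xl = x_star[m:], x_star[:n]
        phi_star = (xl @ xs) / (xl @ xl)
        K_star[b] = n * (phi_star - 1.0)
        T_star[b] = np.sqrt((xl @ xl) / sigma2_star) * (phi_star - 1.0)

    # Empirical alpha-quantiles of the bootstrap replicates (the critical values
    # k*_{n,alpha} and t*_{n,alpha} of Corollary 2.1, up to interpolation).
    k_crit = np.quantile(K_star, alpha)
    t_crit = np.quantile(T_star, alpha)
    return {"K_n": K_obs, "T_n": T_obs,
            "reject_K": K_obs <= k_crit, "reject_T": T_obs <= t_crit}
```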
The conditional distributions of K*_n and T*_n, given the original sample (X_1, ..., X_n), constitute a bootstrap approximation of the sampling distributions of K_n and T_n, respectively, under the null hypothesis H₀: φ = 1. Strong consistency of our bootstrap scheme requires that, with probability 1, the weak limits of L*[K*_n] and L*[T*_n] are the same as those of L[K_n] and L[T_n], respectively. The following theorem shows this to be true.

Theorem 2.1. Let {X_t : t = 1, ..., n} satisfy model (1) with φ = 1 and m ≥ 2. Then,

\[ L^*[K_n^*] \stackrel{w}{\rightarrow} L\left[ m \left\{ \sum_{s=1}^{m} \int_0^1 W_s^2(r)\,dr \right\}^{-1} \left\{ \sum_{s=1}^{m} \int_0^1 W_s(r)\,dW_s(r) \right\} \right] \qquad (n \to \infty), \tag{5} \]

and

\[ L^*[T_n^*] \stackrel{w}{\rightarrow} L\left[ \left\{ \sum_{s=1}^{m} \int_0^1 W_s^2(r)\,dr \right\}^{-1/2} \left\{ \sum_{s=1}^{m} \int_0^1 W_s(r)\,dW_s(r) \right\} \right] \qquad (n \to \infty), \tag{6} \]

along almost every sample sequence {X_1, X_2, ...}.

Proof. Assuming, for convenience, that n/m =: N ∈ ℕ, define m stochastic processes V*_{N,s} = {V*_{N,s}(r) : r ∈ [0, 1]}, s = 1, ..., m, with trajectories in the cadlag space D[0, 1] by putting

\[ V_{N,s}^*(r) = \begin{cases} 0, & r \in [0, N^{-1}), \\[4pt] N^{-1/2} \displaystyle\sum_{i=1}^{\lfloor Nr \rfloor} \varepsilon_{s,i}^*, & r \in [N^{-1}, 1], \end{cases} \]

where ⌊Nr⌋ indicates the largest integer not exceeding Nr, and ε*_{s,i} := ε*_{m(i−1)+s}, i = 1, ..., N. Since ε̃_t = ε_t when φ = 1, and, given (X_1, ..., X_n), {ε*_{s,i} : s = 1, ..., m; i = 1, ..., N} are conditionally i.i.d. with E*[ε*_{1,1}] = 0 and E*[ε*²_{1,1}] = n^{−1} Σ_{t=1}^{n} (ε̃_t − ε̄_n)², it can be shown in the same way as in the proof of Theorem 2.2 of Basawa et al. (1991b) that:

(i) we have

\[ L^*[V_{N,s}^*(r_1), \ldots, V_{N,s}^*(r_h)] \stackrel{w}{\rightarrow} L[W_s(r_1), \ldots, W_s(r_h)] \qquad (N \to \infty), \]

for almost every sample sequence {X_1, X_2, ...} and any finite subset {r_1, ..., r_h} of [0, 1];

(ii) there exist a non-decreasing, continuous function f on [0, 1] and λ > 0 such that

\[ P^*\big[\, |V_{N,s}^*(r) - V_{N,s}^*(r_1)| \wedge |V_{N,s}^*(r_2) - V_{N,s}^*(r)| \geq \lambda \,\big] \leq \lambda^{-4} \{ f(r_2) - f(r_1) \}^2, \]

for almost every sample sequence {X_1, X_2, ...} and any 0 ≤ r_1 ≤ r ≤ r_2 ≤ 1.

By (i), (ii) and Theorem 13.5 in Billingsley (1999) it follows, therefore, that
\[ L^*[V_{N,s}^*] \stackrel{w}{\rightarrow} L[W_s] \quad \text{in } D[0, 1] \qquad (N \to \infty), \tag{7} \]

for almost every sample sequence {X_1, X_2, ...}. Now observe that, as {X*_t : t = 1, ..., n} satisfy (4), we have

\[ X_{m(i-1)+s}^* = \sum_{j=1}^{i} \varepsilon_{s,j}^* \qquad (s = 1, \ldots, m;\ i = 1, \ldots, N). \]
In consequence, K*_n may be expressed as

\[
\begin{aligned}
K_n^* &= \left\{ n^{-2} \sum_{t=1}^{n} X_{t-m}^{*2} \right\}^{-1} \left\{ n^{-1} \sum_{t=1}^{n} X_{t-m}^* \varepsilon_t^* \right\} \\
      &= \left\{ n^{-2} \sum_{s=1}^{m} \sum_{i=1}^{N} X_{m(i-2)+s}^{*2} \right\}^{-1} \left\{ n^{-1} \sum_{s=1}^{m} \sum_{i=1}^{N} X_{m(i-2)+s}^* \varepsilon_{s,i}^* \right\} \\
      &= \left\{ m^{-2} \sum_{s=1}^{m} N^{-1} \sum_{i=1}^{N} \left( N^{-1/2} \sum_{j=1}^{i-1} \varepsilon_{s,j}^* \right)^{\!2} \right\}^{-1} \left\{ n^{-1} \sum_{s=1}^{m} \sum_{i=1}^{N} \left( \sum_{j=1}^{i-1} \varepsilon_{s,j}^* \right) \varepsilon_{s,i}^* \right\} \\
      &= \left\{ m^{-2} \sum_{s=1}^{m} \int_0^1 V_{N,s}^{*2}(r)\,dr \right\}^{-1} \left\{ m^{-1} \sum_{s=1}^{m} \tfrac{1}{2}\left( V_{N,s}^{*2}(1) - N^{-1} \sum_{i=1}^{N} \varepsilon_{s,i}^{*2} \right) \right\}.
\end{aligned}
\tag{8}
\]
Since P*[|(mN)^{−1} Σ_{s=1}^{m} Σ_{i=1}^{N} ε*²_{s,i} − σ²| > δ] → 0 almost surely as N → ∞ for every δ > 0 (Bickel and Freedman, 1981, Theorem 2.1), the truth of (5) follows immediately from (8) on account of the bootstrap invariance principle in (7), the continuous mapping theorem, and the fact that ∫_0^1 W_s(r) dW_s(r) = ½{W²_s(1) − 1} almost surely. The stated limiting result for L*[T*_n] is obtained in a similar manner since, in view of the Kolmogorov strong law of large numbers, σ*²_n → σ² almost surely as n → ∞.

The asymptotic correctness of bootstrap tests of H₀: φ = 1 against the asymptotically stationary alternative H₁: φ < 1 is an immediate consequence of Theorem 2.1.

Corollary 2.1. Under the assumptions of Theorem 2.1,

\[ \lim_{n \to \infty} P[K_n \leq k_{n,\alpha}^* \mid \phi = 1] = \lim_{n \to \infty} P[T_n \leq t_{n,\alpha}^* \mid \phi = 1] = \alpha, \]

where k*_{n,α} := inf{x : P*[K*_n ≤ x] ≥ α} and t*_{n,α} := inf{x : P*[T*_n ≤ x] ≥ α} for α ∈ (0, 1).

Remark. It can be shown that, under the assumptions of Theorem 2.1, the conclusions of the theorem and of Corollary 2.1 are valid if {ε*_t : t = 1, ..., n} is a random sample from the empirical distribution of the centred least-squares residuals X_t − φ̂_n X_{t−m} − n^{−1} Σ_{t=1}^{n} (X_t − φ̂_n X_{t−m}), t = 1, ..., n, rather than from F̃_n. For non-seasonal autoregressive models, such a resampling plan was proposed by Ferretti and Romo (1996).

3. Numerical results

In this section, we investigate, by means of simulation experiments, the finite-sample performance of the proposed bootstrap unit-root tests. The experiments are conducted as follows. For each φ ∈ {1, 0.95, 0.9, 0.8}, 1,000 artificial time series {X_t : t = 1, ..., n}, n ∈ {60, 80, 120, 200}, are generated from the seasonal model (1) with m = 4 and {ε_t : t = 1, ..., n} being i.i.d. N(0, 1) variates. Then, for each set of observations, the null hypothesis H₀: φ = 1 is tested against the alternative H₁: φ < 1 using K_n and T_n. Critical values for the tests are obtained from two different approximations to the null distributions of the test statistics: the asymptotic approximation (Dickey et al., 1984, Tables 2 and 3) and a bootstrap approximation (constructed by the Monte Carlo method using 499 replicates of K*_n and T*_n). For comparison, we also include in the simulations the score test derived in Tanaka (1996, Theorem 9.17), based on the statistic R_n := m{Σ_{t=1}^{n} (X_t − X_{t−m})²}^{−1} Σ_{t=1}^{m} X²_{t−m+n}, which is the locally most powerful invariant test for testing H₀: φ = 1 versus H₁: φ < 1 when the law governing {ε_t : t ∈ ℕ} is Gaussian. Critical values for the score test are obtained from the asymptotic χ²_m distribution of R_n, as well as from a bootstrap approximation (constructed from 499 replicates of R*_n := m{Σ_{t=1}^{n} (X*_t − X*_{t−m})²}^{−1} Σ_{t=1}^{m} X*²_{t−m+n}).²

² Strong consistency of the bootstrap for R_n follows easily in view of the fact that, for almost every sample sequence {X_1, X_2, ...}, L*[V*_{N,s}(1)] →_w L[W_s(1)] as N → ∞.
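The following sketch (Python/NumPy) illustrates the score statistic R_n and the rejection-frequency calculation just described; `simulate_seasonal_ar` and `bootstrap_unit_root_test` are the hypothetical helpers from the earlier sketches, and the replication count is kept small here purely for illustration (the paper uses 1,000 series and 499 bootstrap replicates).

```python
import numpy as np

def score_statistic(x, m=4):
    """Score statistic R_n = m * (sum of the last m squared observations) /
    (sum of squared m-th differences), computed with zero start-up values."""
    n = len(x)
    x_full = np.concatenate([np.zeros(m), x])
    diffs = x_full[m:] - x_full[:n]              # X_t - X_{t-m}, t = 1,...,n
    return m * np.sum(x[-m:] ** 2) / np.sum(diffs ** 2)

def rejection_frequency(phi, n, m=4, n_rep=200, alpha=0.05, rng=None):
    """Fraction of simulated samples in which the bootstrap K_n test rejects H0: phi = 1."""
    rng = np.random.default_rng() if rng is None else rng
    rejections = 0
    for _ in range(n_rep):
        x = simulate_seasonal_ar(n, m=m, phi=phi, rng=rng)
        out = bootstrap_unit_root_test(x, m=m, n_boot=499, alpha=alpha, rng=rng)
        rejections += out["reject_K"]
    return rejections / n_rep

# Example: empirical size of the bootstrap K_n test under the null for n = 120, m = 4.
# print(rejection_frequency(phi=1.0, n=120))
```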
Table 1. Empirical rejection frequencies of tests

                            α = 0.05                            α = 0.10
  n    Test    φ=1.00  φ=0.95  φ=0.90  φ=0.80      φ=1.00  φ=0.95  φ=0.90  φ=0.80
  60   K_n      0.005   0.070   0.265   0.783       0.021   0.178   0.524   0.940
       T_n      0.044   0.254   0.518   0.869       0.092   0.428   0.734   0.963
       R_n      0.006   0.113   0.294   0.588       0.023   0.196   0.461   0.805
       K_n*     0.054   0.265   0.519   0.874       0.098   0.420   0.728   0.968
       T_n*     0.055   0.285   0.539   0.884       0.100   0.438   0.739   0.967
       R_n*     0.066   0.276   0.438   0.699       0.115   0.428   0.650   0.881
  80   K_n      0.011   0.123   0.440   0.953       0.020   0.286   0.698   0.994
       T_n      0.049   0.313   0.641   0.969       0.090   0.497   0.822   0.994
       R_n      0.009   0.152   0.395   0.768       0.025   0.152   0.595   0.924
       K_n*     0.050   0.322   0.650   0.969       0.095   0.515   0.828   0.996
       T_n*     0.054   0.329   0.658   0.971       0.099   0.514   0.824   0.994
       R_n*     0.056   0.295   0.533   0.831       0.119   0.484   0.738   0.955
  120  K_n      0.014   0.290   0.802   0.999       0.041   0.506   0.937   1.000
       T_n      0.052   0.488   0.870   0.998       0.099   0.688   0.960   1.000
       R_n      0.014   0.308   0.623   0.916       0.026   0.471   0.825   0.987
       K_n*     0.059   0.494   0.879   0.999       0.105   0.677   0.962   1.000
       T_n*     0.055   0.503   0.872   0.998       0.105   0.699   0.962   1.000
       R_n*     0.055   0.442   0.727   0.940       0.109   0.635   0.879   0.992
  200  K_n      0.014   0.619   0.994   1.000       0.045   0.835   1.000   1.000
       T_n      0.046   0.752   0.997   1.000       0.097   0.896   1.000   1.000
       R_n      0.027   0.524   0.867   0.997       0.058   0.733   0.967   1.000
       K_n*     0.045   0.752   0.996   1.000       0.095   0.896   1.000   1.000
       T_n*     0.044   0.765   0.995   1.000       0.098   0.899   1.000   1.000
       R_n*     0.056   0.630   0.903   0.997       0.110   0.816   0.975   1.000
Table 1 reports the empirical rejection frequencies of tests of nominal level α ∈ {0.05, 0.10} (i.e. the fraction of artificial samples in which the value of the test statistic does not exceed the appropriate critical value of significance level α). Bootstrap tests have an empirical Type I error probability which is never significantly different from the corresponding nominal value (at the 1% significance level). This is not the case with asymptotic tests based on K_n and R_n, both of which under-reject even when n = 200. Not surprisingly, the power of all tests against the asymptotically stationary alternative φ < 1 increases with n, exceeding 0.7 for φ < 0.9 and n ≥ 120. There is generally no substantial difference between the power of asymptotic and bootstrap tests based on T_n, the latter having a slight advantage in situations where n is small and φ is near the boundary of the stationary region. The asymptotic tests based on K_n and R_n are less powerful than their bootstrap counterparts in some cases, a disadvantage which can be traced to the fact that these tests tend to be conservative. The bootstrap score test is clearly dominated by the other two bootstrap tests.
Table 2. Empirical rejection frequencies of tests under φ = 1 − (δm/n)

                            α = 0.05                            α = 0.10
  n    Test     δ=1     δ=3     δ=5     δ=7         δ=1     δ=3     δ=5     δ=7
  60   K_n      0.094   0.774   0.994   1.000       0.253   0.946   1.000   1.000
       T_n      0.311   0.886   0.994   1.000       0.503   0.964   0.998   1.000
       R_n      0.145   0.628   0.857   0.943       0.286   0.842   0.963   0.994
       K_n*     0.315   0.884   0.998   1.000       0.513   0.972   1.000   1.000
       T_n*     0.327   0.896   0.994   1.000       0.517   0.964   0.998   1.000
       R_n*     0.324   0.736   0.903   0.965       0.532   0.906   0.978   0.998
  80   K_n      0.138   0.782   0.994   1.000       0.289   0.938   1.000   1.000
       T_n      0.323   0.890   1.000   1.000       0.467   0.954   1.000   1.000
       R_n      0.158   0.625   0.864   0.956       0.291   0.827   0.962   0.992
       K_n*     0.327   0.886   1.000   1.000       0.481   0.958   1.000   1.000
       T_n*     0.335   0.912   1.000   1.000       0.495   0.962   1.000   1.000
       R_n*     0.337   0.733   0.900   0.968       0.507   0.897   0.976   0.993
  120  K_n      0.140   0.830   0.992   1.000       0.311   0.958   1.000   1.000
       T_n      0.321   0.904   0.996   1.000       0.483   0.976   1.000   1.000
       R_n      0.178   0.622   0.865   0.965       0.309   0.843   0.966   0.994
       K_n*     0.325   0.924   0.998   1.000       0.501   0.976   1.000   1.000
       T_n*     0.343   0.916   0.996   1.000       0.503   0.978   1.000   1.000
       R_n*     0.315   0.719   0.893   0.975       0.505   0.899   0.978   0.998
  200  K_n      0.140   0.800   0.984   1.000       0.315   0.952   1.000   1.000
       T_n      0.285   0.868   0.990   1.000       0.459   0.966   1.000   1.000
       R_n      0.153   0.632   0.850   0.958       0.296   0.819   0.958   0.997
       K_n*     0.283   0.892   0.996   1.000       0.475   0.970   1.000   1.000
       T_n*     0.301   0.886   0.994   1.000       0.487   0.964   1.000   1.000
       R_n*     0.253   0.704   0.882   0.972       0.442   0.878   0.972   0.997
We also computed the empirical rejection frequencies of the tests for Gaussian time series generated according to (1) with φ = 1 − (δm/n), where δ ∈ {1, 3, 5, 7} and m = 4 (cf. Tanaka, 1996, Chapter 7.7). The results reported in Table 2 reveal that the empirical power of the tests increases with δ, reaching unity for δ = 7 for tests other than the score tests. The optimality property of the latter appears to be very local indeed, the tests performing slightly better than some of their rivals only when δ = 1. For fixed δ, bootstrap tests are consistently superior to their asymptotic counterparts, so their use is recommended. The sampling experiments were repeated with {ε_t : t = 1, ..., n} being i.i.d. (1/√8)(χ²₄ − 4) variates, yielding results similar to those obtained under the Gaussian model.
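For concreteness, here is a brief sketch (Python/NumPy, reusing the hypothetical `simulate_seasonal_ar` helper from Section 1; the variable names are ours) of the two data-generating designs just described, namely the local alternative φ = 1 − (δm/n) with Gaussian innovations and the variant with centred and scaled χ²₄ innovations.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, delta = 120, 4, 3
phi_local = 1.0 - delta * m / n            # local alternative phi = 1 - (delta*m/n)

# Design 1: Gaussian innovations (as in Tables 1 and 2).
x_gauss = simulate_seasonal_ar(n, m=m, phi=phi_local, rng=rng)

# Design 2: (1/sqrt(8))*(chi2_4 - 4) innovations, which have mean 0 and variance 1
# since Var(chi2_4) = 8.
eps = (rng.chisquare(4, size=n) - 4.0) / np.sqrt(8.0)
x_chi2 = np.zeros(n + m)
for t in range(n):
    x_chi2[m + t] = phi_local * x_chi2[t] + eps[t]   # X_t = phi * X_{t-m} + eps_t
x_chi2 = x_chi2[m:]
```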
4. Concluding remarks

In this paper, we have proposed bootstrap tests for the presence of unit roots in a seasonal autoregressive model and have established their asymptotic validity. Simulation experiments have shown that the bootstrap tests have the correct rejection probability under the unit-root hypothesis and power which is often higher than that of their asymptotic counterparts.

Although we have focused on models with the property that E[X_t] = 0, the analysis can be easily extended to deal with models which allow for non-zero seasonal or non-seasonal means, as in Dickey et al. (1984). A less straightforward extension concerns higher-order models in which {X_t − X_{t−m}} is a stationary, weakly dependent time series. In such cases, the resampling procedure of Section 2 needs to be modified so that the bootstrap samples replicate, as closely as possible, the dependence structure of {X_t − X_{t−m}}. One possibility is to use an autoregressive sieve bootstrap scheme, which, as shown in Psaradakis (1999) for non-seasonal models, yields asymptotically correct tests for the unit-root hypothesis. A detailed analysis of this issue is currently being undertaken by the author.

Acknowledgements

Thanks are due to an anonymous referee for useful suggestions which helped to improve the paper.

References

Basawa, I.V., Mallik, A.K., McCormick, W.P., Reeves, J.H., Taylor, R.L., 1991a. Bootstrapping unstable first-order autoregressive processes. Ann. Statist. 19, 1098–1101.
Basawa, I.V., Mallik, A.K., McCormick, W.P., Reeves, J.H., Taylor, R.L., 1991b. Bootstrap test of significance and sequential bootstrap estimation for unstable first order autoregressive processes. Comm. Statist. Theory Methods 20, 1015–1026.
Bickel, P.J., Freedman, D.A., 1981. Some asymptotic theory for the bootstrap. Ann. Statist. 9, 1196–1217.
Billingsley, P., 1999. Convergence of Probability Measures, 2nd Edition. Wiley, New York.
Bose, A., 1988. Edgeworth correction by bootstrap in autoregressions. Ann. Statist. 16, 1709–1722.
Box, G.E.P., Jenkins, G.M., 1970. Time Series Analysis: Forecasting and Control. Holden-Day, San Francisco, CA.
Datta, S., 1996. On asymptotic properties of bootstrap for AR(1) processes. J. Statist. Plann. Inference 53, 361–374.
Dickey, D.A., Hasza, D.P., Fuller, W.A., 1984. Testing for unit roots in seasonal time series. J. Amer. Statist. Assoc. 79, 355–367.
Ferretti, N., Romo, J., 1996. Unit root bootstrap tests for AR(1) models. Biometrika 83, 849–860.
Ghysels, E., Osborn, D.R., Rodrigues, P.M.M., 2000. Seasonal nonstationarity and near-nonstationarity. In: Baltagi, B.H. (Ed.), A Companion to Theoretical Econometrics. Blackwell Publishers, Oxford.
Psaradakis, Z., 1999. Bootstrap tests for an autoregressive unit root in the presence of weakly dependent errors. Manuscript, School of Economics, Mathematics and Statistics, Birkbeck College, University of London.
Tanaka, K., 1996. Time Series Analysis: Nonstationary and Noninvertible Distribution Theory. Wiley, New York.