Journal of the Korean Statistical Society
Mildly explosive autoregression with mixing innovations

Haejune Oh (a), Sangyeol Lee (a,*), Ngai Hang Chan (b,c)

(a) Seoul National University, Republic of Korea
(b) Southwest University of Finance and Economics, China
(c) The Chinese University of Hong Kong, Hong Kong

Article history: Received 29 March 2017; Accepted 7 September 2017.

Abstract. In this paper, the limit distribution of the least squares estimator for mildly explosive autoregressive models with strong mixing innovations is established and shown to be Cauchy, as in the i.i.d. case. The result is applied to identify the onset and the end of an explosive period of an econometric time series. Simulations and a data analysis are also conducted to demonstrate the usefulness of the result.

AMS 2000 subject classifications: 62M10; 62F03.
Keywords: Mildly explosive autoregression; Mixing innovations; Limit theorem for least squares estimator.
1. Introduction

An autoregressive (AR) process of the form

y_t = ρ y_{t-1} + ϵ_t,  ϵ_t ∼ i.i.d. N(0, σ²),  t = 1, …, n,

with an explosive root |ρ| > 1 was first studied in White (1958) and Anderson (1959). By assuming a zero initial value for y_t, a Cauchy limit theory was derived for the least squares estimate ρ̂_n = (Σ_{t=1}^{n} y_{t-1} y_t)(Σ_{t=1}^{n} y_{t-1}²)^{-1}, that is,

\frac{\rho^{n}}{\rho^{2}-1}\,(\hat{\rho}_n - \rho) \xrightarrow{d} C,   (1)

where C denotes a Cauchy random variable and \xrightarrow{d} denotes convergence in distribution as n → ∞. It is noted that the Gaussian assumption imposed on the innovation sequence (ϵ_t) cannot be relaxed without affecting the asymptotic distribution in (1). In fact, Anderson (1959) gave examples demonstrating that a central limit theorem does not apply and that the limit distribution of the least squares estimate depends on the distributional assumptions imposed on (ϵ_t). In other words, a general asymptotic inference for purely explosive AR processes seems infeasible. However, the situation becomes less severe when the explosive root approaches unity as n → ∞. In particular, Giraitis and Phillips (2006) and Phillips and Magdalinos (2007a) considered an AR process with root ρ_n = 1 + c/n^ν, ν ∈ (0, 1). When c > 0, such roots are explosive in finite samples and approach unity at a rate slower than O(n^{-1}). It is well known that the asymptotic behavior of such mildly explosive autoregressions is more regular than that of their purely explosive counterparts.
* Corresponding author. E-mail address: [email protected] (S. Lee).
Under the assumption that the innovations are i.i.d. with finite second moment, Phillips and Magdalinos (2007a) established central limit theorems for sample moments generated by mildly explosive processes and obtained the following result:

\frac{n^{\nu}\rho_n^{n}}{2c}\,(\hat{\rho}_n - \rho_n) \xrightarrow{d} C, \qquad n \to \infty.   (2)

This Cauchy limit theory is invariant to the initial condition y_0 being any fixed constant value or a random process of smaller asymptotic order than n^{ν/2}. Such a near-critical phenomenon is reminiscent of the asymptotic normality established in Chan and Wei (1987) for the finite variance case; see also Buchmann and Chan (2007). The result was further generalized by Phillips and Magdalinos (2007b) to include a class of weakly dependent innovations. Aue and Horváth (2007) relaxed the moment conditions on the innovations by considering an i.i.d. innovation sequence that belongs to the domain of attraction of a stable law. The limit distribution in this case takes the form of a ratio of two i.i.d. stable random variables, which reduces to a Cauchy distribution when the innovations have a finite variance. Multivariate extensions were given in Magdalinos and Phillips (2008). Magdalinos (2012) considered mildly explosive autoregressions generated by a correlated innovation sequence that may exhibit long-range dependence. However, the innovation sequence considered in his paper is a linear process and does not include general nonlinear processes.

Motivated by this consideration, this paper studies mildly explosive autoregressions with a strong mixing innovation sequence. In particular, this class accommodates several GARCH-type models, as given in Carrasco and Chen (2002), and double-threshold autoregressive models, which are often used to model financial time series. It is shown that the least squares estimate has the same limit distribution as in the i.i.d. innovation case. As an application, a method for identifying the onset and the end of a bubble period of an econometric time series, originally considered in Phillips and Yu (unpublished), is developed. Our simulation results indicate that the performance of the proposed dating procedure is not much affected by the type of GARCH or double-threshold autoregressive innovations. A real data analysis is conducted for the Korea Composite Stock Price Index (KOSPI) based on an innovation sequence following a threshold autoregressive conditional heteroscedastic model.

This paper is organized as follows. Section 2 establishes the limit distribution of the least squares estimate for mildly explosive models with strong mixing innovations. Section 3 applies the result to identifying the onset and the end of an explosive period of an econometric time series. Sections 4 and 5 conduct a simulation study and a data analysis to illustrate the usefulness of the method. Section 6 provides concluding remarks. All the lemmas and the proof of the theorem in Section 2 are provided in the Appendix.

2. Mixing innovations

Consider the time series model

y_t = \rho_n y_{t-1} + u_t, \qquad t = 1, \ldots, n; \qquad \rho_n = 1 + \frac{c}{n^{\nu}}, \quad \nu \in (0,1),\ c > 0,   (3)
initialized at some y_0 = o_P(√(n^ν)) independent of σ(u_1, …, u_n), where (u_t) is a zero-mean strictly stationary and ergodic process. In this section, we investigate the limit distribution of the least squares estimate

\hat{\rho}_n = \Big(\sum_{t=1}^{n} y_{t-1}y_t\Big)\Big(\sum_{t=1}^{n} y_{t-1}^{2}\Big)^{-1}

when the innovation sequence (u_t)_{t∈Z} is strongly mixing with mixing coefficients α(·) decaying exponentially to 0, that is, α(k) = O(ξ^k) with 0 < ξ < 1. Specifically, it will be shown that (n^ν ρ_n^n / 2c)(ρ̂_n − ρ_n) →_d C (see Theorem 1), where C denotes a Cauchy random variable. Let

X_n = \frac{1}{\sqrt{n^{\nu}}}\sum_{t=1}^{n}\rho_n^{-(n-t)-1}u_t
\qquad\text{and}\qquad
Y_n = \frac{1}{\sqrt{n^{\nu}}}\sum_{j=1}^{n}\rho_n^{-j}u_j.   (4)
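Before stating the formal results, the following minimal Monte Carlo sketch (ours, not part of the paper; the AR(1)-type mixing innovations, sample size and number of replications are illustrative assumptions) simulates model (3), computes the least squares estimate, and checks that the normalized error (n^ν ρ_n^n / 2c)(ρ̂_n − ρ_n) behaves like a standard Cauchy variate, whose quartiles are ±1.

import numpy as np

rng = np.random.default_rng(0)

def simulate_mildly_explosive(n, nu, c, rng):
    """Simulate y_t = rho_n y_{t-1} + u_t with rho_n = 1 + c/n^nu and
    strictly stationary, strongly mixing innovations u_t (here an AR(1),
    chosen only for illustration)."""
    rho_n = 1.0 + c / n**nu
    e = rng.standard_normal(n + 200)
    u = np.zeros(n + 200)
    for t in range(1, n + 200):
        u[t] = 0.3 * u[t - 1] + e[t]        # u_t = 0.3 u_{t-1} + e_t
    u = u[200:]                             # discard burn-in
    y = np.zeros(n + 1)                     # y_0 = 0 satisfies y_0 = o_P(n^{nu/2})
    for t in range(1, n + 1):
        y[t] = rho_n * y[t - 1] + u[t - 1]
    return y, rho_n

def ls_estimate(y):
    """Least squares estimate rho_hat = (sum y_{t-1} y_t) / (sum y_{t-1}^2)."""
    return np.sum(y[:-1] * y[1:]) / np.sum(y[:-1] ** 2)

n, nu, c = 500, 0.5, 1.0
stats = []
for _ in range(2000):
    y, rho_n = simulate_mildly_explosive(n, nu, c, rng)
    rho_hat = ls_estimate(y)
    # normalization of Theorem 1(b): (n^nu rho_n^n / (2c)) (rho_hat - rho_n)
    stats.append(n**nu * rho_n**n / (2 * c) * (rho_hat - rho_n))

# for a standard Cauchy the quartiles are -1 and +1
print("empirical quartiles:", np.quantile(np.array(stats), [0.25, 0.75]))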
Then, we have the following lemma, the proof of which is provided in the Appendix.

Lemma 1. Suppose that (u_t)_{t∈Z} satisfies E|u_t|^{4+δ} < ∞ for some δ > 0 and α(k) ≤ ϕξ^k with ϕ > 0 and ξ ∈ (0, 1). Then, the sequences (X_n)_{n∈N} and (Y_n)_{n∈N} in (4) satisfy

(X_n, Y_n) \xrightarrow{d} (X, Y) \quad\text{as } n \to \infty,

where X and Y are two independent N(0, σ²/2c) random variables with σ² = \sum_{k=-\infty}^{\infty}\mathrm{Cov}(u_0, u_k).
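The long-run variance σ² = Σ_{k=-∞}^{∞} Cov(u_0, u_k) in Lemma 1 is a population quantity; if an estimate is needed in practice (for instance, for the Monte Carlo calibration of critical values in Section 4), a Bartlett-kernel (Newey–West type) estimator is one standard choice. The sketch below is ours; the bandwidth rule of thumb is an assumption, not a prescription from the paper.

import numpy as np

def long_run_variance(u, bandwidth=None):
    """Bartlett-kernel estimate of sigma^2 = sum_k Cov(u_0, u_k).

    u         : 1-d array of (approximately) stationary data
    bandwidth : truncation lag; a common rule of thumb is used if omitted
    """
    u = np.asarray(u, dtype=float)
    u = u - u.mean()
    n = u.size
    if bandwidth is None:
        bandwidth = int(np.floor(4 * (n / 100.0) ** (2.0 / 9.0)))
    lrv = np.dot(u, u) / n                              # gamma(0)
    for k in range(1, bandwidth + 1):
        gamma_k = np.dot(u[k:], u[:-k]) / n             # gamma(k)
        lrv += 2.0 * (1.0 - k / (bandwidth + 1.0)) * gamma_k   # Bartlett weight
    return lrv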
Using Lemma 1, we have the following theorem, the proof of which is provided in the Appendix.

Theorem 1. Suppose that the conditions in Lemma 1 are satisfied. Then, the following limits apply as n → ∞:
(a) \Big(\frac{\rho_n^{-n}}{n^{\nu}}\sum_{t=1}^{n}y_{t-1}u_t,\ \frac{\rho_n^{-2n}}{n^{2\nu}}\sum_{t=1}^{n}y_{t-1}^{2}\Big) \xrightarrow{d} (XY,\ Y^{2});

(b) \frac{n^{\nu}\rho_n^{n}}{2c}\,(\hat{\rho}_n - \rho_n) \xrightarrow{d} C,

where X and Y are independent N(0, σ²/2c) random variables and C is a Cauchy random variable.

Remark 1. Theorem 1(b) implies that

\frac{\rho_n^{n}}{2(\rho_n-1)}(\hat{\rho}_n-\rho_n) \xrightarrow{d} C
\qquad\text{and}\qquad
\frac{\rho_n^{n}}{\rho_n^{2}-1}(\hat{\rho}_n-\rho_n) \xrightarrow{d} C,

and further, (\hat{\rho}_n/\rho_n)^n \to 1 using Theorem 1(b). As such, it holds that \hat{\rho}_n/\rho_n \to 1. In fact, we can check that

\frac{\hat{\rho}_n^{n}}{2(\hat{\rho}_n-1)}(\hat{\rho}_n-\rho_n) \xrightarrow{d} C
\qquad\text{and}\qquad
\frac{\hat{\rho}_n^{n}}{\hat{\rho}_n^{2}-1}(\hat{\rho}_n-\rho_n) \xrightarrow{d} C.

These results can be used for constructing confidence intervals for ρ_n, as mentioned in Phillips, Wu, and Yu (2011).
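As a concrete illustration of Remark 1 (our sketch, not a procedure spelled out in the paper), a level 1 − α confidence interval for ρ_n can be obtained by inverting the self-normalized limit ρ̂_n^n/(ρ̂_n² − 1)·(ρ̂_n − ρ_n) →_d C, in the spirit of the intervals mentioned in Phillips, Wu, and Yu (2011):

import numpy as np

def cauchy_quantile(p):
    """Quantile function of the standard Cauchy distribution."""
    return np.tan(np.pi * (p - 0.5))

def confidence_interval(rho_hat, n, alpha=0.05):
    """CI for rho_n based on rho_hat^n/(rho_hat^2 - 1) (rho_hat - rho_n) -> Cauchy.

    Only rho_hat and the sample size n are needed, because the scaling in
    Remark 1 is evaluated at rho_hat itself.
    """
    scale = rho_hat**n / (rho_hat**2 - 1.0)
    q = cauchy_quantile(1.0 - alpha / 2.0)     # Cauchy is symmetric about 0
    half_width = q / scale
    return rho_hat - half_width, rho_hat + half_width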
3. Explosive episode

In this section, we apply the result in Section 2 to detect the onset and the end of a bubble period of an econometric time series. As in Phillips and Yu (unpublished), who handled the i.i.d. innovation case, we consider the following model with strong mixing innovations (u_t):

y_t = y_{t-1}I\{t < \lambda_e\} + \rho_n y_{t-1}I\{\lambda_e \le t \le \lambda_f\}
+ \Big(\sum_{k=\lambda_f+1}^{t}u_k + y^{*}_{\lambda_f}\Big)I\{t > \lambda_f\} + u_t I\{t \le \lambda_f\}   (5)
with ρ_n = 1 + c/n^ν, c > 0, ν ∈ (0, 1), wherein λ_e and λ_f denote the unknown originating and collapsing points of a bubble period and I(·) denotes the indicator function. That is, {y_t} follows a unit-root model until λ_e, experiences a bubble during [λ_e, λ_f], and returns to the unit-root model after λ_f, but with initial value y*_{λ_f} possibly different from y_{λ_f}. In estimating the bubble period, it is assumed that recursive autoregressions are run with data {y_t : t = 1, 2, …, λ = [nr]}, r_0 ≤ r ≤ 1, originating from λ_0 = [nr_0], so that the minimum amount of data used for the regressions is λ_0. We estimate λ_e = [nr_e] and λ_f = [nr_f] based on the Dickey–Fuller test statistics

DF_r^{\rho} = \lambda\big(\hat{\rho}_{\mu}(\lambda)-1\big)
\qquad\text{or}\qquad
DF_r^{t} = \Big(\frac{\sum_{t=1}^{\lambda}(y_{t-1}-\bar{y}_{(-1)})^{2}}{\hat{\sigma}_{u,\lambda}^{2}}\Big)^{1/2}\big(\hat{\rho}_{\mu}(\lambda)-1\big),

where \hat{\rho}_{\mu}(\lambda) = \sum_{t=1}^{\lambda}(y_t-\bar{y}_{(0)})(y_{t-1}-\bar{y}_{(-1)})/\sum_{t=1}^{\lambda}(y_{t-1}-\bar{y}_{(-1)})^{2}, (\bar{y}_{(0)}, \bar{y}_{(-1)}) = \frac{1}{\lambda}\sum_{k=1}^{\lambda}(y_k, y_{k-1}), and \hat{\sigma}_{u,\lambda}^{2} is the usual least squares residual variance estimator. The critical values were obtained from the limit theorem that, under the null hypothesis, for each fixed r, as n → ∞,

DF_r^{\rho} \xrightarrow{d} \frac{\int_0^1 \widetilde{W}\,dW}{\int_0^1 \widetilde{W}^{2}},
\qquad
DF_r^{t} \xrightarrow{d} \frac{\int_0^1 \widetilde{W}\,dW}{\big(\int_0^1 \widetilde{W}^{2}\big)^{1/2}},

where W is a standard Brownian motion and \widetilde{W}(r) = W(r) - \int_0^1 W, 0 ≤ r ≤ 1, is the demeaned Brownian motion (cf. Tables 10.A.1 and 10.A.2 in Fuller, 1996). We date the origination of an explosive episode as τ̂_e = [nr̂_e] with r̂_e equal to either

\hat{r}_e^{\rho} = \inf_{s\ge r_0}\{s : DF_s^{\rho} > cv_{\theta_n}^{\rho}(s)\}
\qquad\text{or}\qquad
\hat{r}_e^{t} = \inf_{s\ge r_0}\{s : DF_s^{t} > cv_{\theta_n}^{t}(s)\},
where cv_{θ_n}^{ρ}(s) and cv_{θ_n}^{t}(s) denote the right-side 100θ_n% critical values of the two statistics ('cv' stands for 'critical value') based on λ_s = [ns] observations, and θ_n is the size of the one-sided test. One can allow θ_n → 0 as n → ∞; in this case, cv_{θ_n}^{ρ} → ∞ and cv_{θ_n}^{t} → ∞. After finding r̂_e, we estimate the end of the bubble by λ̂_f = [nr̂_f], where r̂_f is either

\hat{r}_f^{\rho} = \inf_{s\ge \hat{r}_e+\log(n)/n}\{s : DF_s^{\rho} < cv_{\theta_n}^{\rho}(s)\}
\qquad\text{or}\qquad
\hat{r}_f^{t} = \inf_{s\ge \hat{r}_e+\log(n)/n}\{s : DF_s^{t} < cv_{\theta_n}^{t}(s)\}.
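The following compact sketch (ours; the function names and the critical-value curve passed in as cv are placeholders) implements the recursive procedure just described: for each subsample fraction s it computes the demeaned Dickey–Fuller statistics DF_s^ρ and DF_s^t, dates the origination r̂_e at the first crossing above cv(s), and dates the termination r̂_f at the first subsequent drop below cv(s) after the delay log(n)/n.

import numpy as np

def df_statistics(y):
    """Demeaned Dickey-Fuller statistics for the subsample y_0, ..., y_lam."""
    y0, y1 = y[1:], y[:-1]                        # y_t and y_{t-1}
    lam = y0.size
    y0c, y1c = y0 - y0.mean(), y1 - y1.mean()
    rho_mu = np.dot(y0c, y1c) / np.dot(y1c, y1c)
    resid = y0c - rho_mu * y1c
    s2 = np.dot(resid, resid) / lam               # residual variance estimate
    df_rho = lam * (rho_mu - 1.0)
    df_t = np.sqrt(np.dot(y1c, y1c) / s2) * (rho_mu - 1.0)
    return df_rho, df_t

def date_bubble(y, cv, r0=0.1, use_t=False):
    """Return (r_e_hat, r_f_hat); an entry is None where no crossing occurs.

    cv(s) is the critical-value curve, e.g. lambda s: np.log10(np.log10(n * s)).
    """
    n = y.size - 1
    r_e = r_f = None
    for lam in range(int(np.ceil(r0 * n)), n + 1):
        s = lam / n
        stat = df_statistics(y[:lam + 1])[1 if use_t else 0]
        if r_e is None and stat > cv(s):
            r_e = s
        elif r_e is not None and s >= r_e + np.log(n) / n and stat < cv(s):
            r_f = s
            break
    return r_e, r_f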
Based on the results in Section 2, we have the following results, whose proofs are omitted because they are similar to those of Theorems 3.1–3.3 of Phillips and Yu (unpublished).

Theorem 2. Under the null hypothesis of no episode of explosive behavior (c = 0 and ρ_n = 1 in model (5)), and provided cv_{θ_n}^{ρ} → ∞ and cv_{θ_n}^{t} → ∞, the probability of detecting the onset of a bubble by DF^ρ or DF^t tends to 0, so that P(r̂_e ∈ [r_0, 1]) → 0 as n → ∞.

Theorem 3. (i) If

\frac{1}{cv_{\theta_n}^{\rho}} + \frac{cv_{\theta_n}^{\rho}}{n^{1-\nu}} \to 0,   (6)

then under the alternative hypothesis of mildly explosive behavior in model (5), r̂_e →_P r_e as n → ∞, where r̂_e is obtained from the DF^ρ test.
(ii) If

\frac{1}{cv_{\theta_n}^{t}} + \frac{cv_{\theta_n}^{t}}{n^{1-\nu/2}} \to 0,   (7)

then under the alternative hypothesis of explosive behavior in model (5), r̂_e →_P r_e as n → ∞, where r̂_e is obtained from the DF^t test.

Theorem 4. (i) Under the null hypothesis of no episode of explosive behavior, P(r̂_e ∈ [r_0, 1]) → 0 (as in Theorem 2) and P(r̂_f ∈ [r_0, 1]) → 0 as n → ∞.
(ii) Suppose that conditions (6) and (7) hold and the alternative hypothesis of model (5) with a mildly explosive episode applies. Then, conditional on (r̂_e > r_0), r̂_f →_P r_f as n → ∞, where r̂_f = r̂_f^ρ or r̂_f^t.
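As a quick side check (ours, not part of the paper), slowly expanding critical values of the form used in the next section, cv_{θ_n}(s) ∝ log_{10}(log_{10}(ns)), are compatible with conditions (6) and (7) for any ν ∈ (0, 1):

\frac{1}{cv_{\theta_n}} = O\Big(\frac{1}{\log\log n}\Big) \to 0,
\qquad
\frac{cv_{\theta_n}}{n^{1-\nu}} = O\Big(\frac{\log\log n}{n^{1-\nu}}\Big) \to 0,
\qquad
\frac{cv_{\theta_n}}{n^{1-\nu/2}} = O\Big(\frac{\log\log n}{n^{1-\nu/2}}\Big) \to 0,

since log_{10}(log_{10}(ns)) diverges, but more slowly than any positive power of n.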
4. Simulation study

In this section, we report simulation results examining the performance of the dating procedure described in Section 3. To detect a bubble period, two experiments, based on the DF_λ^ρ and DF_λ^t tests, are implemented for 1000 sample replications with data generated from model (5) with various GARCH(1,1) innovations u_t = √h_t η_t, where η_t ∼ i.i.d. N(0, 1) and

h_t = 0.2 + 0.3u_{t-1}^2 + 0.2h_{t-1}   (LGARCH);
log(h_t) = 0.2 + 0.3 log(η_{t-1}^2) + 0.2 log(h_{t-1})   (MGARCH);
log(h_t) = 0.2 + 0.3(|η_{t-1}| + 0.3η_{t-1}) + 0.2 log(h_{t-1})   (EGARCH);
h_t = 0.2 + 0.3(η_{t-1} - 0.2)^2 h_{t-1} + 0.2h_{t-1}   (NGARCH);
h_t = 0.2 + 0.3(η_{t-1} - 0.2)^2 + 0.2h_{t-1}   (VGARCH)

(cf. Carrasco & Chen, 2002), and with double-threshold AR innovations

u_t = 0.4u_{t-1}I(u_{t-1} \ge 0) - 0.1u_{t-1}I(u_{t-1} < 0)
      + \eta_t\big\{(0.01 + 0.3u_{t-1}^2)I(u_{t-1} \ge 0) + (0.02 + 0.2u_{t-1}^2)I(u_{t-1} < 0)\big\}^{1/2}

(cf. Chen & Chen, 2000).
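A minimal sketch (ours; the burn-in length and starting values are arbitrary choices) of two of the innovation generators listed above, the LGARCH(1,1) recursion and the double-threshold AR specification:

import numpy as np

def lgarch_innovations(n, rng, burn=200):
    """u_t = sqrt(h_t) eta_t with h_t = 0.2 + 0.3 u_{t-1}^2 + 0.2 h_{t-1}."""
    eta = rng.standard_normal(n + burn)
    u = np.zeros(n + burn)
    h = np.full(n + burn, 0.2 / (1 - 0.3 - 0.2))   # start at the unconditional variance
    for t in range(1, n + burn):
        h[t] = 0.2 + 0.3 * u[t - 1] ** 2 + 0.2 * h[t - 1]
        u[t] = np.sqrt(h[t]) * eta[t]
    return u[burn:]

def double_threshold_innovations(n, rng, burn=200):
    """Double-threshold AR innovations with the coefficients used in the simulation study."""
    eta = rng.standard_normal(n + burn)
    u = np.zeros(n + burn)
    for t in range(1, n + burn):
        if u[t - 1] >= 0:
            mean, var = 0.4 * u[t - 1], 0.01 + 0.3 * u[t - 1] ** 2
        else:
            mean, var = -0.1 * u[t - 1], 0.02 + 0.2 * u[t - 1] ** 2
        u[t] = mean + eta[t] * np.sqrt(var)
    return u[burn:]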
From each sample path, we obtain r̂_e and r̂_f based on the DF^ρ and DF^t statistics for r ∈ [0.1, 1], setting r_0 = 0.1. We employ n = 500, 1000 and the critical values log_{10}(log_{10}(nr)) for DF^ρ and (5/3) log_{10}(log_{10}(nr)) for DF^t, corresponding to significance levels θ_n ranging from 0.010 to 0.025 (cf. Tables 10.A.1 and 10.A.2 in Fuller, 1996), for the GARCH innovations. In the double-threshold AR innovation case, on the other hand, we use the critical values a_1 log_{10}(log_{10}(nr)) + b_1 for DF^ρ and a_2 log_{10}(log_{10}(nr)) + b_2 for DF^t. The coefficients a_1 and b_1 are estimated from the equations cv_{θ_{250}}^ρ = â_1 log_{10}(log_{10}(250)) + b̂_1 and cv_{θ_{500}}^ρ = â_1 log_{10}(log_{10}(500)) + b̂_1 with θ_{250} = 0.025 and θ_{500} = 0.010; the values cv_{θ_{250}}^ρ and cv_{θ_{500}}^ρ are obtained by a Monte Carlo simulation using the estimator of σ² (cf. Phillips, 1987). The coefficients a_2 and b_2 are estimated similarly. In all cases, we set the true values of the onset and the end dates as r_e = 0.4 and r_f = 0.7. We also fix ν = 0.5 but allow c to take three different values so that the AR coefficient takes the three values 1.03, 1.04 and 1.05. These numbers are empirically realistic given recent experience in the financial markets; see, for example, Phillips et al. (2011).

Tables 1–6 exhibit the results for the DF^ρ and DF^t tests: the means and standard errors of r̂_e and r̂_f are obtained from 1000 experiments. The tables show that the true values of r_e and r_f are estimated more accurately in most cases as either n or ρ_n increases. It is noteworthy that the type of GARCH innovation does not affect the result much (although not reported here, the same experiment was conducted for various sets of parameters and the results were essentially unchanged). This strongly indicates that the proposed dating procedure depends more on the degree of explosion than on the correlation structure of the innovations, as long as they are weakly dependent. This phenomenon is interesting in that the correlation of the data often critically affects statistical inferences originally designed for i.i.d. samples.

5. Data analysis

In this section, we apply the dating procedure to the Korea Composite Stock Price Index (KOSPI) (cf. Fig. 1). The dataset consists of 1434 weekly observed log KOSPI values from the fourth week of April 1982 to the fifth week of December 2009 (cf. Fig. 2). To detect the onset and end of an explosive episode, we first fit the AR(1) model (3) with i.i.d. innovations to the dataset. The recursive DF^ρ and DF^t statistics are plotted in Figs. 3 and 4. The dot-dashed lines stand for the curves log_{10}(log_{10}(ns)) and (3/5) log_{10}(log_{10}(ns)), respectively, with s ∈ [0.1, 1], which are used for obtaining the critical values cv_{θ_n}^ρ(s)
Table 1. LGARCH case. Mean (standard error).
            1 + c/n^ν = 1.03     1 + c/n^ν = 1.04     1 + c/n^ν = 1.05
n = 500
  r̂_e^ρ    0.44745 (0.12842)    0.42952 (0.10176)    0.42035 (0.08819)
  r̂_f^ρ    0.608292 (0.18327)   0.61448 (0.17695)    0.62381 (0.17053)
  r̂_e^t    0.45236 (0.12706)    0.43231 (0.10066)    0.42241 (0.08726)
  r̂_f^t    0.61687 (0.17814)    0.61884 (0.17369)    0.62707 (0.16773)
n = 1000
  r̂_e^ρ    0.41093 (0.08433)    0.39988 (0.08261)    0.39658 (0.07601)
  r̂_f^ρ    0.60900 (0.18037)    0.60646 (0.18486)    0.61416 (0.18037)
  r̂_e^t    0.41416 (0.08297)    0.40176 (0.08191)    0.39939 (0.07430)
  r̂_f^t    0.61520 (0.17621)    0.61032 (0.18247)    0.62111 (0.17578)

Table 2. MGARCH case. Mean (standard error).
            1 + c/n^ν = 1.03     1 + c/n^ν = 1.04     1 + c/n^ν = 1.05
n = 500
  r̂_e^ρ    0.44476 (0.12505)    0.42529 (0.09865)    0.41784 (0.08803)
  r̂_f^ρ    0.60619 (0.18396)    0.60872 (0.17882)    0.61810 (0.17363)
  r̂_e^t    0.44910 (0.12365)    0.42887 (0.09784)    0.42047 (0.08725)
  r̂_f^t    0.61479 (0.17947)    0.61423 (0.17508)    0.62209 (0.17052)
n = 1000
  r̂_e^ρ    0.41412 (0.08140)    0.40809 (0.07401)    0.39978 (0.07252)
  r̂_f^ρ    0.61745 (0.17361)    0.62348 (0.16989)    0.62198 (0.17444)
  r̂_e^t    0.41621 (0.08071)    0.41105 (0.07196)    0.40095 (0.07202)
  r̂_f^t    0.62133 (0.17087)    0.63053 (0.16439)    0.62420 (0.17267)

Table 3. EGARCH case. Mean (standard error).
            1 + c/n^ν = 1.03     1 + c/n^ν = 1.04     1 + c/n^ν = 1.05
n = 500
  r̂_e^ρ    0.45141 (0.13001)    0.42856 (0.10766)    0.42010 (0.08682)
  r̂_f^ρ    0.61370 (0.18146)    0.61462 (0.17859)    0.61889 (0.17267)
  r̂_e^t    0.45583 (0.13027)    0.43250 (0.10853)    0.42367 (0.08525)
  r̂_f^t    0.62149 (0.17785)    0.62210 (0.17484)    0.62683 (0.16751)
n = 1000
  r̂_e^ρ    0.40810 (0.08428)    0.40534 (0.07572)    0.39571 (0.07712)
  r̂_f^ρ    0.60878 (0.18091)    0.61929 (0.17452)    0.60799 (0.18406)
  r̂_e^t    0.41113 (0.08271)    0.40765 (0.07422)    0.39992 (0.07412)
  r̂_f^t    0.61427 (0.17703)    0.62532 (0.17061)    0.61949 (0.17620)

Table 4. NGARCH case. Mean (standard error).
            1 + c/n^ν = 1.03     1 + c/n^ν = 1.04     1 + c/n^ν = 1.05
n = 500
  r̂_e^ρ    0.44876 (0.12742)    0.42870 (0.10332)    0.41974 (0.08736)
  r̂_f^ρ    0.60940 (0.18172)    0.61312 (0.17627)    0.61943 (0.17279)
  r̂_e^t    0.45195 (0.12695)    0.43275 (0.10366)    0.42274 (0.08616)
  r̂_f^t    0.61697 (0.17867)    0.62018 (0.17255)    0.62610 (0.16832)
n = 1000
  r̂_e^ρ    0.40640 (0.08778)    0.40497 (0.07541)    0.39692 (0.07623)
  r̂_f^ρ    0.60350 (0.18521)    0.62057 (0.17456)    0.61038 (0.18174)
  r̂_e^t    0.40840 (0.08720)    0.40814 (0.07333)    0.40044 (0.07450)
  r̂_f^t    0.60713 (0.18301)    0.62794 (0.16884)    0.62004 (0.17573)

Table 5. VGARCH case. Mean (standard error).
            1 + c/n^ν = 1.03     1 + c/n^ν = 1.04     1 + c/n^ν = 1.05
n = 500
  r̂_e^ρ    0.44618 (0.12685)    0.42309 (0.10051)    0.41898 (0.08934)
  r̂_f^ρ    0.60205 (0.18359)    0.60234 (0.18152)    0.61819 (0.17407)
  r̂_e^t    0.44969 (0.12631)    0.42681 (0.10047)    0.42206 (0.08798)
  r̂_f^t    0.60813 (0.18076)    0.60912 (0.17809)    0.62461 (0.16953)
n = 1000
  r̂_e^ρ    0.41423 (0.08295)    0.40259 (0.07510)    0.40185 (0.07048)
  r̂_f^ρ    0.61718 (0.17564)    0.61174 (0.17818)    0.62230 (0.17218)
  r̂_e^t    0.41708 (0.08187)    0.40505 (0.07416)    0.40435 (0.06842)
  r̂_f^t    0.62115 (0.17209)    0.61792 (0.17413)    0.62787 (0.16764)
and cv_{θ_n}^t(s). As seen in Figs. 3 and 4, both statistics reveal that the onset of the bubble is the fourth week of February 1986, whereas the end of the bubble appears to be slightly different, namely, the third week of February 1990 for DF^ρ and the third week of April 1990 for DF^t.
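A hedged sketch (ours; only standard matplotlib calls are used, and the loading of the KOSPI series is left out) of how plots in the style of Figs. 3 and 4 can be produced from the recursive DF^t statistic and a critical-value curve:

import numpy as np
import matplotlib.pyplot as plt

def df_t(y):
    """Demeaned Dickey-Fuller t-type statistic for the subsample y_0, ..., y_lam."""
    y0, y1 = y[1:] - y[1:].mean(), y[:-1] - y[:-1].mean()
    rho_mu = np.dot(y0, y1) / np.dot(y1, y1)
    resid = y0 - rho_mu * y1
    s2 = np.dot(resid, resid) / y0.size
    return np.sqrt(np.dot(y1, y1) / s2) * (rho_mu - 1.0)

def plot_recursive_df(y, cv, r0=0.1, points=200, label="recursive DF^t"):
    """Plot DF_s^t over s in [r0, 1] together with the critical-value curve cv(s)."""
    n = y.size - 1
    s_grid = np.linspace(r0, 1.0, points)
    stats = [df_t(y[: int(s * n) + 1]) for s in s_grid]
    plt.plot(s_grid, stats, label=label)
    plt.plot(s_grid, [cv(s) for s in s_grid], linestyle="-.", label="critical value")
    plt.xlabel("sample fraction s")
    plt.legend()
    plt.show()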
Fig. 1. The plot of KOSPI data from April 1982 to December 2009.
Fig. 2. The plot of log KOSPI data from April 1982 to December 2009.
Fig. 3. The method based on DF^ρ.
Fig. 4. The method based on DF^t.
Table 6. Double-threshold case. Mean (standard error).
            1 + c/n^ν = 1.03     1 + c/n^ν = 1.04     1 + c/n^ν = 1.05
n = 500
  r̂_e^ρ    0.49360 (0.20528)    0.42816 (0.10713)    0.40864 (0.05079)
  r̂_f^ρ    0.66909 (0.17284)    0.66005 (0.11996)    0.67229 (0.09313)
  r̂_e^t    0.40097 (0.04328)    0.39661 (0.04526)    0.39302 (0.04706)
  r̂_f^t    0.67008 (0.10855)    0.66954 (0.11202)    0.66602 (0.11767)
n = 1000
  r̂_e^ρ    0.41401 (0.05046)    0.40557 (0.02576)    0.40290 (0.02557)
  r̂_f^ρ    0.68618 (0.07174)    0.69252 (0.05510)    0.69003 (0.06415)
  r̂_e^t    0.40302 (0.03124)    0.39801 (0.03769)    0.39826 (0.03244)
  r̂_f^t    0.68654 (0.07698)    0.68000 (0.09277)    0.68187 (0.08721)
Next, we implement a Lagrange-multiplier test (cf. Zhang, Wong, Li, & Ip, 2011) on the error terms up to the fourth week of October 1983 to check the fit of a threshold ARCH (TARCH) model. Since the test statistic takes the value 13.77857 (with a corresponding p-value of less than 0.01), we conclude that a TARCH specification is appropriate for the residuals. The estimated model for the errors u_t is
u_t = 0.23712 u_{t-1} I(u_{t-1} \ge 0) + 0.02201 u_{t-1} I(u_{t-1} < 0) + \eta_t \sqrt{0.00039 + 0.07565 u_{t-1}^2}.

As suggested by our simulation study, we use the critical values 4.58908 log_{10}(log_{10}(ns)) − 0.93697 for DF^ρ and 11.12377 log_{10}(log_{10}(ns)) − 3.72623 for DF^t; see the long-dashed lines in Figs. 3 and 4. Our findings show that the onset dates are detected to be the same as in the previous analysis using i.i.d. innovations, whereas the end dates move backward compared with those obtained earlier; namely, the end of the bubble is detected at the fourth week of June 1989 for DF^ρ and at the third week of January 1990 for DF^t. This result suggests that our method can provide practitioners with a more methodological yardstick for deciding whether to remain in or to exit the market.

6. Concluding remarks

In this study, we considered mildly explosive autoregression with strong mixing innovations. A Cauchy limit theorem was obtained, as in the i.i.d. case. The result was applied to identify the onset and the end of a bubble period of an econometric time series. Simulation results indicate that the proposed procedure is not much affected by the choice of GARCH-type or double-threshold autoregressive innovations, but is far more sensitive to the degree of explosion. A data analysis was conducted, and it was seen that the bubble detection method works well for data with TARCH innovations.

Acknowledgments

We would like to thank the referees for their careful reading and valuable comments that improved the quality of the paper. This research was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning, No. 2015R1A2A2A010003894 (Lee); and by grants from HKSAR-RGC-GRF Nos. 14300514 and 14325216, HKSAR-RGC-CRF CityU8/CRG/12G, and the Theme-based Research Scheme HKSAR-RGC-TBS T32-101/15-R (Chan).

Appendix

To prove Lemma 1, using the Cramér–Wold device, it suffices to show that
aX_n + bY_n \xrightarrow{d} N\Big(0, \frac{(a^2+b^2)\sigma^2}{2c}\Big) \quad\text{for any } a, b \in \mathbb{R}.   (8)

We rewrite aX_n + bY_n = \sum_{i=1}^{n}\zeta_{ni}, where

\zeta_{ni} = \frac{1}{\sqrt{n^{\nu}}}\{a\rho_n^{-i} + b\rho_n^{-(n-i)-1}\}u_i, \qquad 1 \le i \le n.

Let r_n, p_n, q_n be sequences of positive integers such that r_n(p_n + q_n) \le n < (r_n + 1)(p_n + q_n) and r_n \sim n^{1-\nu/2}, p_n \sim n^{\nu/2} - n^{\nu/4} and q_n \sim n^{\nu/4}. Here, the notation a_n \sim b_n means a_n/b_n \to 1 as n \to \infty. We write

\sum_{i=1}^{n}\zeta_{ni} = \frac{\sum_{j=1}^{r_n}V_j}{\sqrt{n^{\nu}}} + \frac{\sum_{j=1}^{r_n}W_j}{\sqrt{n^{\nu}}} + \frac{R_n}{\sqrt{n^{\nu}}},
where

V_j = \sum_{k=1}^{p_n}\big(a\rho_n^{-(j-1)(p_n+q_n)-k} + b\rho_n^{-(n-(j-1)(p_n+q_n)-k)-1}\big)u_{(j-1)(p_n+q_n)+k},

W_j = \sum_{k=1}^{q_n}\big(a\rho_n^{-\{jp_n+(j-1)q_n\}-k} + b\rho_n^{-\{n-(jp_n+(j-1)q_n)-k\}-1}\big)u_{jp_n+(j-1)q_n+k},

R_n = \sqrt{n^{\nu}}\sum_{i=1}^{n}\zeta_{ni} - \sum_{j=1}^{r_n}V_j - \sum_{j=1}^{r_n}W_j.
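For orientation (our aside, not part of the proof), the chosen block sizes are mutually consistent: since p_n + q_n ∼ n^{ν/2} and r_n ∼ n^{1−ν/2},

r_n(p_n + q_n) \sim n^{1-\nu/2}\,n^{\nu/2} = n,
\qquad
r_n q_n \sim n^{1-\nu/4} = o(n),
\qquad
\frac{\sqrt{n^{\nu}}}{r_n} = n^{\nu-1} \to 0,

so the big blocks V_j cover almost all of the sample, the small separating blocks W_j and the remainder R_n are asymptotically negligible, and the quantity τ = ϵ√(n^ν)/r_n appearing in the proof of Lemma A1 below tends to zero while p_n → ∞.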
Lemma A1. Under the conditions in Lemma 1, we have

\frac{\sum_{j=1}^{r_n}V_j}{\sqrt{n^{\nu}}} \xrightarrow{d} N\Big(0, \frac{(a^2+b^2)\sigma^2}{2c}\Big) \quad\text{for any } a, b \in \mathbb{R}.

Proof. Let γ > 2, τ = ϵ√(n^ν)/r_n with ϵ > 0 and d = p_n κ ∥u_1∥_γ with κ > |a| + |b|, where ∥u_1∥_γ = (E|u_1|^γ)^{1/γ}. Since p_n r_n ∼ n and (u_t) is stationary, for all large n we get

\|V_j + d\|_{\gamma} \ge d - \|V_j\|_{\gamma} \ge (\kappa - |a| - |b|)p_n\|u_1\|_{\gamma} \ge \frac{\epsilon\sqrt{n^{\nu}}}{r_n} = \tau.

Then, using Theorem 3 of Bradley (1983), we can construct independent random variables V_1^*, \ldots, V_{r_n}^*, such that V_j^* \stackrel{d}{=} V_j and

P(|V_j^* - V_j| > \tau) \le 11\Big(\frac{\|V_j + d\|_{\gamma}}{\tau}\Big)^{\gamma/(2\gamma+1)}\alpha(q_n)^{2\gamma/(2\gamma+1)}, \qquad j = 1, \ldots, r_n.   (9)

Thus, setting \Delta_n = \frac{\sum_{j=1}^{r_n}V_j}{\sqrt{n^{\nu}}} - \frac{\sum_{j=1}^{r_n}V_j^*}{\sqrt{n^{\nu}}}, we can see that \Delta_n = o_P(1) because, for any ϵ > 0,

P(|\Delta_n| > \epsilon) \le \sum_{j=1}^{r_n}P\Big(\frac{1}{\sqrt{n^{\nu}}}|V_j - V_j^*| > \frac{\epsilon}{r_n}\Big)
\le \sum_{j=1}^{r_n}P\Big(|V_j - V_j^*| > \frac{\epsilon\sqrt{n^{\nu}}}{r_n}\Big)
\le 11\alpha(q_n)^{2\gamma/(2\gamma+1)}\sum_{j=1}^{r_n}\Big(\frac{\|V_j + d\|_{\gamma}}{\tau}\Big)^{\gamma/(2\gamma+1)}
\le 11\phi\,\xi^{2\gamma q_n/(2\gamma+1)}\,r_n\max_{1\le j\le r_n}\Big(\frac{\|V_j + d\|_{\gamma}}{\tau}\Big)^{\gamma/(2\gamma+1)}
= O\big(\xi^{2\gamma q_n/(2\gamma+1)}r_n p_n\big) \to 0.   (10)

We show that \frac{\sum_{j=1}^{r_n}V_j^*}{\sqrt{n^{\nu}}} is asymptotically normal. Note that

E(V_j^*)^2 = E\Big(\sum_{k=1}^{p_n}\big(a\rho_n^{-(j-1)(p_n+q_n)-k} + b\rho_n^{-(n-(j-1)(p_n+q_n)-k)-1}\big)u_k\Big)^2
= \sum_{l=-(p_n-1)}^{p_n-1}\sum_{k=1}^{p_n-|l|}\big(a\rho_n^{-(j-1)(p_n+q_n)-k} + b\rho_n^{-(n-(j-1)(p_n+q_n)-k)-1}\big)\big(a\rho_n^{-(j-1)(p_n+q_n)-k-|l|} + b\rho_n^{-(n-(j-1)(p_n+q_n)-k-|l|)-1}\big)Eu_0u_l
= \sum_{l=-(p_n-1)}^{p_n-1}\sum_{k=1}^{p_n-|l|}(I_{n,j,l,k} + J_{n,l})Eu_0u_l

with I_{n,j,l,k} = a^2\rho_n^{-2(j-1)(p_n+q_n)-2k-|l|} + b^2\rho_n^{-2(n-(j-1)(p_n+q_n)-k)+|l|-1} and J_{n,l} = ab\,\rho_n^{-n-1}(\rho_n^{-|l|} + \rho_n^{|l|}).
Further, using the facts that \frac{1-\rho_n^{-2n}}{n^{\nu}(\rho_n^2-1)} \to \frac{1}{2c} and \frac{p_n(\rho_n^2-1)}{1-\rho_n^{-2p_n}} \to 1, together with the dominated convergence theorem, as n → ∞,

\frac{1}{n^{\nu}}\sum_{j=1}^{r_n}\sum_{l=-(p_n-1)}^{p_n-1}\sum_{k=1}^{p_n-|l|}I_{n,j,l,k}\,Eu_0u_l
= a^2\Big(\frac{p_n}{n^{\nu}}\sum_{j=1}^{r_n}\rho_n^{-2(j-1)(p_n+q_n)}\Big)\Big(\sum_{l=-(p_n-1)}^{p_n-1}\frac{\rho_n^{-|l|}\big(1-\rho_n^{-2(p_n-|l|)}\big)}{p_n(\rho_n^2-1)}Eu_0u_l\Big)
+ b^2\Big(\frac{p_n}{n^{\nu}}\sum_{j=1}^{r_n}\rho_n^{-2n+2(j-1)(p_n+q_n)-2}\Big)\Big(\sum_{l=-(p_n-1)}^{p_n-1}\frac{\rho_n^{2+|l|}\big(\rho_n^{2(p_n-|l|)}-1\big)}{p_n(\rho_n^2-1)}Eu_0u_l\Big)
\to \frac{(a^2+b^2)\sigma^2}{2c}   (11)
and, owing to \rho_n^{-n}n = o(1), r_n p_n \sim n and \sum_{l=-(p_n-1)}^{p_n-1}\big(1-\frac{|l|}{p_n}\big)\rho_n^{|l|}Eu_0u_l \to \sigma^2, we have

\frac{1}{n^{\nu}}\sum_{j=1}^{r_n}\sum_{l=-(p_n-1)}^{p_n-1}\sum_{k=1}^{p_n-|l|}J_{n,l}\,Eu_0u_l
= ab\big(\rho_n^{-n-1}n\big)\Big(\frac{r_np_n}{n^{\nu}n}\Big)\sum_{l=-(p_n-1)}^{p_n-1}\Big(1-\frac{|l|}{p_n}\Big)\rho_n^{|l|}Eu_0u_l \to 0.   (12)

Since V_1^*, \ldots, V_{r_n}^* are independent, combining (11) and (12) we have, as n → ∞,

E\Big(\frac{\sum_{j=1}^{r_n}V_j^*}{\sqrt{n^{\nu}}}\Big)^2 \to \frac{(a^2+b^2)\sigma^2}{2c}.   (13)

Next, similar to (11) and (12), we have, as n → ∞,

\frac{p_n}{n^{\nu}}\sum_{j=1}^{r_n}\sum_{l=-(p_n-1)}^{p_n-1}\sum_{k=1}^{p_n-|l|}(I_{n,j,l,k} + J_{n,l}) \to \frac{a^2+b^2}{2c},   (14)

which implies that the LHS of (14) is uniformly bounded by a positive real number K_1. Then, for any η > 0, we get
E\Big(\sum_{j=1}^{r_n}\frac{(V_j^*)^2}{n^{\nu}}I\Big(\Big|\frac{V_j^*}{\sqrt{n^{\nu}}}\Big| > \eta\Big)\Big)
= \frac{1}{n^{\nu}}\sum_{j=1}^{r_n}E\big(V_j^2\,I(|V_j|^2 > \eta^2 n^{\nu})\big)
= \frac{1}{n^{\nu}}\sum_{j=1}^{r_n}E\Big[\Big(\sum_{k=1}^{p_n}\big(a\rho_n^{-(j-1)(p_n+q_n)-k} + b\rho_n^{-(n-(j-1)(p_n+q_n)-k)-1}\big)u_k\Big)^2
\times I\Big(\Big|\sum_{k=1}^{p_n}\big(a\rho_n^{-(j-1)(p_n+q_n)-k} + b\rho_n^{-(n-(j-1)(p_n+q_n)-k)-1}\big)u_k\Big|^2 > \eta^2 n^{\nu}\Big)\Big]
= \frac{1}{n^{\nu}}\sum_{j=1}^{r_n}\sum_{l=-(p_n-1)}^{p_n-1}\sum_{k=1}^{p_n-|l|}(I_{n,j,l,k} + J_{n,l})\,E\Big[u_0u_l\,I\Big(\sum_{l=-(p_n-1)}^{p_n-1}\sum_{k=1}^{p_n-|l|}(I_{n,j,l,k} + J_{n,l})u_0u_l > \eta^2 n^{\nu}\Big)\Big],

which is no more than

K_1\sum_{l=-(p_n-1)}^{p_n-1}\sum_{k=1}^{p_n-|l|}\frac{1}{p_n}E\Big[u_0u_l\,I\Big(\sum_{l=-(p_n-1)}^{p_n-1}\sum_{k=1}^{p_n-|l|}(|a|+|b|)^2u_0u_l > \eta^2 n^{\nu}\Big)\Big]
\le K_1\sum_{l=-(p_n-1)}^{p_n-1}\Big(1-\frac{|l|}{p_n}\Big)E\Big[u_0u_l\,I\Big(\sum_{l=-(p_n-1)}^{p_n-1}\Big(1-\frac{|l|}{p_n}\Big)u_0u_l > \frac{\eta^2 n^{\nu}}{(|a|+|b|)^2 p_n}\Big)\Big] \longrightarrow 0,
where we have used the facts that \sum_{l=-(p_n-1)}^{p_n-1}\big(1-\frac{|l|}{p_n}\big)Eu_0u_l \to \sigma^2 and n^{\nu}/p_n \to \infty as n \to \infty. This, together with (13), implies that \{\sum_{j=1}^{r_n}V_j^*/\sqrt{n^{\nu}}\} satisfies the Lindeberg–Feller condition, and subsequently, in view of (10) and (13),

\frac{\sum_{j=1}^{r_n}V_j}{\sqrt{n^{\nu}}} = \frac{\sum_{j=1}^{r_n}V_j^*}{\sqrt{n^{\nu}}} + \Delta_n \xrightarrow{d} N\Big(0, \frac{(a^2+b^2)\sigma^2}{2c}\Big).   (15)
Hence, the lemma is established. □

Lemma A2. Under the conditions in Lemma 1, \frac{\sum_{j=1}^{r_n}W_j}{\sqrt{n^{\nu}}} and \frac{R_n}{\sqrt{n^{\nu}}} converge to zero in probability.

Proof. We apply the coupling theorem to the W_j's similarly to the case of the V_j's in the proof of Lemma A1 and follow the same lines as in (10)–(15), using q_n ∼ n^{ν/4}. Then we get, as n → ∞,

\frac{\sum_{j=1}^{r_n}W_j}{\sqrt{n^{\nu/2}}} \xrightarrow{d} N\Big(0, \frac{(a^2+b^2)\sigma^2}{2c}\Big),

which implies

\frac{\sum_{j=1}^{r_n}W_j}{\sqrt{n^{\nu}}} \xrightarrow{P} 0.   (16)
Next, noticing that

R_n = \sum_{j=r_n(p_n+q_n)+1}^{n}\big(a\rho_n^{-j} + b\rho_n^{-(n-j)-1}\big)u_j\,I\big(r_n(p_n+q_n) < n\big),

and owing to r_n p_n ∼ n, we can easily see that \mathrm{Var}\big(\frac{R_n}{\sqrt{n^{\nu}}}\big) \to 0 as n → ∞, and thus, by Chebyshev's inequality, \frac{R_n}{\sqrt{n^{\nu}}} \xrightarrow{P} 0. Combining this and (16), we establish the lemma. □

Proof of Lemma 1. Argument (8) (hence, Lemma 1) is a direct consequence of Lemmas A1 and A2. □

The following lemma is useful for proving Theorem 1.

Lemma 2. Suppose that the conditions in Lemma 1 are satisfied. Then, as n → ∞,

\frac{\rho_n^{-n}}{n^{\nu}}\sum_{t=1}^{n}\sum_{j=t}^{n}\rho_n^{t-j-1}u_ju_t \xrightarrow{L_2} 0,   (17)

\frac{\rho_n^{-2n+1}}{n^{\nu}}\sum_{t=1}^{n}\sum_{j=1}^{t-1}\rho_n^{t-j-1}u_ju_t \xrightarrow{L_2} 0,   (18)
where \xrightarrow{L_2} denotes convergence in mean square.

Proof. Since \frac{\rho_n^{-n}n}{n^{\nu}} = o(1), owing to Lemma 2.1 of Davydov (1968), there exists a constant K_2 > 0 such that

E\Big|\frac{\rho_n^{-n}}{n^{\nu}}\sum_{t=1}^{n}\sum_{j=t}^{n}\rho_n^{t-j-1}u_ju_t\Big|^2
\le \frac{\rho_n^{-2n}}{n^{2\nu}}\sum_{t=1}^{n}\sum_{j=t}^{n}\sum_{s=1}^{n}\sum_{i=s}^{n}\rho_n^{-(j-t)-(i-s)-2}|Eu_ju_tu_iu_s|
\le K_2\frac{\rho_n^{-2n}}{n^{2\nu}}\Big(36n\|u_1\|_{4+\delta}^{4}\sum_{k=0}^{\infty}(k+1)[\alpha(k)]^{\delta/(4+\delta)} + 288n^2\|u_1\|_{4+\delta}^{4}\Big(\sum_{k=0}^{\infty}\alpha(k)\Big)^{2}\Big) \to 0 \quad\text{as } n \to \infty,

which establishes (17). The proof of (18) is similar. □
Proof of Theorem 1. For the proof, we follow the lines in Phillips and Magdalinos (2007a). Squaring (3), summing over t ∈ {1, …, n}, and using y_0 = o_P(n^{ν/2}), we obtain

\frac{\rho_n^{-2n}}{n^{2\nu}}\sum_{t=1}^{n}y_{t-1}^2
= \frac{1}{n^{\nu}(\rho_n^{2}-1)}\Big\{\frac{\rho_n^{-2n}(y_n^{2}-y_0^{2})}{n^{\nu}} - \frac{2\rho_n^{-2n+1}}{n^{\nu}}\sum_{t=1}^{n}y_{t-1}u_t - \frac{\rho_n^{-2n}}{n^{\nu}}\sum_{t=1}^{n}u_t^{2}\Big\}.

Since n^{\nu}(\rho_n^{2}-1) \to 2c as n \to \infty, we get

\frac{\rho_n^{-2n}}{n^{\nu}}\Big\{y_0^{2} + \sum_{t=1}^{n}u_t^{2}\Big\} = O_P\Big(\frac{n\rho_n^{-2n}}{n^{\nu}}\Big) + o_P(\rho_n^{-2n}) = o_P(1).

Further, by Lemmas 1 and 2,

\frac{\rho_n^{-2n+1}}{n^{\nu}}\sum_{t=1}^{n}y_{t-1}u_t
= \frac{y_0}{\sqrt{n^{\nu}}}\,\frac{\rho_n^{-n}}{\sqrt{n^{\nu}}}\sum_{t=1}^{n}\rho_n^{-(n-t)}u_t + \frac{\rho_n^{-2n+1}}{n^{\nu}}\sum_{t=1}^{n}\Big(\sum_{j=1}^{t-1}\rho_n^{t-1-j}u_j\Big)u_t = o_P(1),

so that

\frac{\rho_n^{-2n}}{n^{2\nu}}\sum_{t=1}^{n}y_{t-1}^2
= \frac{1}{n^{\nu}(\rho_n^{2}-1)}\Big(\frac{\rho_n^{-n}}{\sqrt{n^{\nu}}}y_n\Big)^{2} + o_P(1)
= \frac{1}{n^{\nu}(\rho_n^{2}-1)}\Big(\frac{1}{\sqrt{n^{\nu}}}\sum_{j=1}^{n}\rho_n^{-j}u_j\Big)^{2} + o_P(1)
= \frac{1}{2c}\,Y_n^{2} + o_P(1),

where Y_n is the one in (4). Hence, due to Lemma 1, we get

\frac{\rho_n^{-2n}}{n^{2\nu}}\sum_{t=1}^{n}y_{t-1}^2 \xrightarrow{d} \frac{1}{2c}Y^{2},
\qquad Y \stackrel{d}{=} N\Big(0, \frac{\sigma^{2}}{2c}\Big).   (19)

We express

\frac{\rho_n^{-n}}{n^{\nu}}\sum_{t=1}^{n}y_{t-1}u_t
= \frac{y_0}{\sqrt{n^{\nu}}}\,\frac{1}{\sqrt{n^{\nu}}}\sum_{t=1}^{n}\rho_n^{-(n-t+1)}u_t + \frac{\rho_n^{-n}}{n^{\nu}}\sum_{t=1}^{n}\sum_{j=1}^{t-1}\rho_n^{t-j-1}u_ju_t,

where the first term is asymptotically negligible because y_0/\sqrt{n^{\nu}} = o_P(1) and \frac{1}{\sqrt{n^{\nu}}}\sum_{t=1}^{n}\rho_n^{-(n-t+1)}u_t = X_n = O_P(1) owing to Lemma 1. Hence, using Lemma 2, we obtain

\frac{\rho_n^{-n}}{n^{\nu}}\sum_{t=1}^{n}y_{t-1}u_t
= \frac{\rho_n^{-n}}{n^{\nu}}\sum_{t=1}^{n}\sum_{j=1}^{t-1}\rho_n^{t-j-1}u_ju_t + o_P(1)
= \frac{\rho_n^{-n}}{n^{\nu}}\sum_{t=1}^{n}\sum_{j=1}^{n}\rho_n^{t-j-1}u_ju_t + o_P(1)
= X_nY_n + o_P(1),

which in turn implies

\frac{\rho_n^{-n}}{n^{\nu}}\sum_{t=1}^{n}y_{t-1}u_t \xrightarrow{d} XY,
\qquad X, Y \stackrel{d}{=} N\Big(0, \frac{\sigma^{2}}{2c}\Big),   (20)

due to Lemma 1. This yields (a). Also, (b) is an immediate consequence of (19), (20) and the fact that the limiting random variables X and Y are independent. □

References

Anderson, T. W. (1959). On asymptotic distributions of estimates of parameters of stochastic difference equations. The Annals of Mathematical Statistics, 30, 676–687.
Aue, A., & Horváth, L. (2007). A limit theorem for mildly explosive autoregression with stable errors. Econometric Theory, 23, 201–220.
Bradley, R. (1983). Approximation theorems for strongly mixing random variables. The Michigan Mathematical Journal, 30, 69–81.
Buchmann, B., & Chan, N. H. (2007). Asymptotic theory of least squares estimators for nearly unstable processes under strong dependence. The Annals of Statistics, 35, 2001–2017.
Carrasco, M., & Chen, X. (2002). Mixing and moment properties of various GARCH and stochastic volatility models. Econometric Theory, 18, 17–39.
Chan, N. H., & Wei, C. Z. (1987). Asymptotic inference for nearly nonstationary AR(1) processes. The Annals of Statistics, 15, 1050–1063.
Chen, M., & Chen, G. (2000). Geometric ergodicity of nonlinear autoregressive models with changing conditional variances. The Canadian Journal of Statistics/La Revue Canadienne de Statistique, 28, 605–613.
Davydov, Y. A. (1968). Convergence of distributions generated by stationary stochastic processes. Theory of Probability and its Applications, 13, 691–696.
Fuller, W. A. (1996). Introduction to statistical time series. John Wiley & Sons.
Giraitis, L., & Phillips, P. C. B. (2006). Uniform limit theory for stationary autoregression. Journal of Time Series Analysis, 27, 51–60.
Magdalinos, T. (2012). Mildly explosive autoregression under weak and strong dependence. Journal of Econometrics, 169, 179–187.
Magdalinos, T., & Phillips, P. C. B. (2008). Limit theory for cointegrated systems with moderately integrated and moderately explosive regressors. Econometric Theory, 25, 482–526.
Phillips, P. C. B. (1987). Time series regression with a unit root. Econometrica, 55, 277–301.
Phillips, P. C. B., & Magdalinos, T. (2007a). Limit theory for moderate deviations from a unit root. Journal of Econometrics, 136, 115–130.
Phillips, P. C. B., & Magdalinos, T. (2007b). Limit theory for moderate deviations from a unit root under weak dependence. In G. D. A. Phillips & E. Tzavalis (Eds.), The refinement of econometric estimation and test procedures: Finite sample and asymptotic analysis (pp. 123–162). Cambridge: Cambridge University Press.
Phillips, P. C. B., Wu, Y., & Yu, J. (2011). Explosive behavior in the 1990s NASDAQ: When did exuberance escalate asset values? International Economic Review, 52, 201–226.
Phillips, P. C. B., & Yu, J. (2009). Limit theory for dating the origination and collapse of mildly explosive periods in time series data. Sim Kee Boon Institute for Financial Economics, Singapore Management University (unpublished manuscript).
White, J. S. (1958). The limiting distribution of the serial correlation coefficient in the explosive case. The Annals of Mathematical Statistics, 29, 1188–1197.
Zhang, X., Wong, H., Li, Y., & Ip, W.-C. (2011). A class of threshold autoregressive conditional heteroscedastic models. Statistics and Its Interface, 4, 149–157.