Statistics & Probability Letters 67 (2004) 149–159
Estimation of spectral density for seasonal time series models

Dong Wan Shin
Department of Statistics, Ewha Women's University, Seoul 120-750, South Korea
Tel.: +82-2-3277-2614; fax: +82-2-3277-3607. E-mail address: [email protected] (D.W. Shin).
Received May 2003; accepted September 2003

Abstract

For estimating spectral densities of stationary seasonal time series processes, a new kernel is proposed. The proposed kernel has a shape that is in harmony with the oscillating patterns of the autocorrelation functions of typical seasonal time series processes. Basic properties such as consistency and nonnegativity of the spectral density estimator are discussed. A Monte-Carlo simulation is conducted for multiplicative monthly autoregressive and moving average processes, which reveals that the proposed kernel provides a more efficient spectral density estimator than the classical kernels of Bartlett, Parzen, and Tukey–Hanning.

Keywords: Efficiency; Kernel; Spectral density

1. Introduction

In many instances, time series data are observed on a seasonal basis. For example, economic and […]


[…] Breitung and Franses (1998) and Shin and Oh (2000) require values of spectral density estimates at λ = 2πj/d, j = 0, …, [d/2], where d is the period of seasonality. Also, values of the spectral density would be needed for generalized method of moments estimation of nonlinear simultaneous equation models when applied to seasonal data. We observe that the autocovariance functions (ACF) of seasonal time series data tend to be more significant […]
2. New kernel

Let z_t be a d-seasonal stationary time series such as a d-seasonal multiplicative ARMA process. We are interested in estimating the spectral density

    f(λ) = (2π)^{-1} Σ_{h=-∞}^{∞} γ_h cos(λh),

based on a sample {z_1, …, z_n}, where γ_h = cov(z_t, z_{t-h}) is the ACF of z_t. One estimator of f(λ) is the sample spectrum given by

    f̂_n(λ) = (2π)^{-1} Σ_{h=-(n-1)}^{n-1} γ̂_h cos(λh),

where γ̂_h = n^{-1} Σ_{t=1}^{n-h} (z_t − z̄)(z_{t+h} − z̄) and z̄ = n^{-1} Σ_{t=1}^{n} z_t. Since the sample spectrum is not smooth in λ, people usually consider a smoothed one,

    f̄_n(λ) = (2π)^{-1} Σ_{h=-ℓ}^{ℓ} w_{hℓ} γ̂_h cos(λh),

with a suitable bandwidth ℓ and a kernel w_{hℓ}. The kernel w_{hℓ} is commonly determined by w_{hℓ} = w(|h|/ℓ), where w(·) is a function continuous at 0 and at all but a finite number of points […]
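As a concrete illustration of the classical construction above, here is a minimal Python sketch (not from the paper; the function names, the use of NumPy, and the Bartlett choice in the usage comment are my own) that computes the sample autocovariances and the kernel-smoothed spectrum f̄_n(λ):

import numpy as np

def autocovariances(z, max_lag):
    """gamma_hat_h = n^{-1} * sum_{t=1}^{n-h} (z_t - zbar)(z_{t+h} - zbar), h = 0..max_lag."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    zc = z - z.mean()
    return np.array([np.dot(zc[:n - h], zc[h:]) / n for h in range(max_lag + 1)])

def smoothed_spectrum(z, lam, ell, kernel):
    """Classical estimator f_bar_n(lam) = (2*pi)^{-1} sum_{h=-ell}^{ell} w(|h|/ell) gamma_hat_h cos(lam*h).
    Uses gamma_hat_{-h} = gamma_hat_h to fold the sum onto h >= 0; w(0) = 1 for the kernels considered here."""
    gamma = autocovariances(z, ell)
    h = np.arange(1, ell + 1)
    w = kernel(h / ell)
    return (gamma[0] + 2.0 * np.sum(w * gamma[1:] * np.cos(lam * h))) / (2.0 * np.pi)

# Example: Bartlett kernel w(x) = (1 - |x|) I(|x| <= 1) at frequency lambda = 0 with bandwidth 12.
# f0 = smoothed_spectrum(z, 0.0, 12, lambda x: np.where(np.abs(x) <= 1, 1 - np.abs(x), 0.0))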
Fig. 1. Population ACF for monthly multiplicative AR and MA processes.
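Fig. 1 itself is not reproduced here. As a rough stand-in (my own sketch, not the paper's computation; the parameter values and function names are illustrative), the oscillating ACF pattern can be approximated by simulating a monthly multiplicative AR process and computing its sample ACF:

import numpy as np

def simulate_seasonal_ar(n, phi, Phi, d=12, burn=500, seed=0):
    """Simulate (1 - Phi*B^d)(1 - phi*B) z_t = e_t with e_t ~ N(0, 1).
    Expanding the product gives z_t = phi*z_{t-1} + Phi*z_{t-d} - phi*Phi*z_{t-d-1} + e_t."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(n + burn)
    z = np.zeros(n + burn)
    for t in range(d + 1, n + burn):
        z[t] = phi * z[t - 1] + Phi * z[t - d] - phi * Phi * z[t - d - 1] + e[t]
    return z[burn:]

def sample_acf(z, max_lag):
    """Sample autocorrelations rho_hat_h = gamma_hat_h / gamma_hat_0."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    zc = z - z.mean()
    g0 = np.dot(zc, zc) / n
    return np.array([np.dot(zc[:n - h], zc[h:]) / n / g0 for h in range(max_lag + 1)])

# The ACF shows pronounced spikes near multiples of d = 12, the pattern the proposed kernel targets.
acf = sample_acf(simulate_seasonal_ar(5000, phi=0.4, Phi=0.6), max_lag=36)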

Among the widely used kernels are the Bartlett, Parzen, and Tukey–Hanning kernels, determined by

    w(x) = (1 − |x|) I(|x| ≤ 1)                                              (Bartlett),
    w(x) = (1 − 6x² + 6|x|³) I(|x| ≤ 1/2) + 2(1 − |x|)³ I(1/2 < |x| ≤ 1)     (Parzen),
    w(x) = {1 − 2a + 2a cos(πx)} I(|x| ≤ 1),  a = 0.25                       (Tukey–Hanning),

respectively, where I(A) is the indicator function of a set A. We note that seasonal time series usually have special features, having more significant […] We therefore consider the estimator

    f̃_n(λ) = (2π)^{-1} Σ_{i=-r₁}^{r} w_{i r_ℓ} Σ_{h=-ℓ}^{ℓ} w_{hℓ} γ̂_{hd+i} cos(λ(hd+i)),

where r = [d/2], r₁ = [(d−1)/2], r_ℓ = r c_ℓ, and c_ℓ is a positive sequence such that c_ℓ → ∞ as ℓ → ∞. Since Σ_{h=-ℓ}^{ℓ} w_{hℓ} γ̂_{hd+i} cos(λ(hd+i)) estimates Σ_{h=-∞}^{∞} γ_{hd+i} cos(λ(hd+i)) and w_{i r_ℓ} → 1 as n → ∞, f̃_n(λ) estimates

    (2π)^{-1} Σ_{i=-r₁}^{r} Σ_{h=-∞}^{∞} γ_{hd+i} cos(λ(hd+i)) = f(λ).
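To make the construction concrete, here is a hedged Python sketch of the estimator as reconstructed above (the kernel functions follow the formulas listed earlier; the estimator assembles the double weighting w(|i|/(r c_ℓ)) w(|h|/ℓ) on γ̂_{hd+i}; all names and defaults are my own):

import numpy as np

def bartlett(x):
    return np.where(np.abs(x) <= 1, 1 - np.abs(x), 0.0)

def parzen(x):
    ax = np.abs(x)
    return np.where(ax <= 0.5, 1 - 6 * ax**2 + 6 * ax**3,
                    np.where(ax <= 1, 2 * (1 - ax)**3, 0.0))

def tukey_hanning(x, a=0.25):
    return np.where(np.abs(x) <= 1, 1 - 2 * a + 2 * a * np.cos(np.pi * x), 0.0)

def proposed_spectrum(z, lam, ell, d=12, c_ell=1.0, kernel=bartlett):
    """f_tilde_n(lam): weight w(|i|/(r*c_ell)) * w(|h|/ell) on gamma_hat at lag h*d + i,
    for i = -r1..r and h = -ell..ell, with r = [d/2], r1 = [(d-1)/2]."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    zc = z - z.mean()

    def gamma_hat(k):
        k = abs(k)
        return np.dot(zc[:n - k], zc[k:]) / n if k < n else 0.0

    r, r1 = d // 2, (d - 1) // 2
    r_ell = r * c_ell
    total = 0.0
    for i in range(-r1, r + 1):
        wi = kernel(abs(i) / r_ell)
        for h in range(-ell, ell + 1):
            lag = h * d + i
            total += wi * kernel(abs(h) / ell) * gamma_hat(lag) * np.cos(lam * lag)
    return float(total) / (2.0 * np.pi)

# Example: proposed estimator with the Bartlett kernel, c_ell = 1, at the seasonal frequency 2*pi/12.
# f_seasonal = proposed_spectrum(z, 2 * np.pi / 12, ell=3, d=12, c_ell=1.0, kernel=bartlett)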

Fig. 2. Various kernels with d = 12 and c_ℓ = 1 (line: classical; dotted: proposed).

Note that the sample ACF γ̂_{hd+i} has weight w_{i r_ℓ} w_{hℓ}. Owing to w_{hℓ}, the weight for γ̂_{hd+i} with […]
The conditions in C1 are the classical ones required for weak consistency of classical estimators. Theorem 1 establishes the asymptotic bias and variance of the proposed estimator.

Theorem 1. Assume C1 holds and θ = lim_{n→∞} {ℓ/(r c_ℓ)} exists. Then

    (i)  lim_{n→∞} ℓ^q [E f̃_n(λ) − f(λ)] = −w^{(q)} g^{(q)}(λ),
    (ii) lim_{n→∞} (n/ℓ) var[f̃_n(λ)] = 2d ∫_0^∞ {w(z)}² dz {f(λ)}² {1 + I(λ = 0)},

where q is the largest number such that

    w^{(q)} = lim_{z→0} |z|^{-q} {1 − w(z)}

exists, is finite, and is nonzero, and

    g^{(q)}(λ) = Σ_{i=-r₁}^{r} Σ_{h=-∞}^{∞} (|h|^q + θ^q |i|^q) cos(λ(hd+i)) γ_{hd+i}.

Let ℓ' be the bandwidth for f̄_n(λ) which uses the same set of {γ̂_h, h = 0, ±1, …, ±(ℓd + r)} as f̃_n(λ) based on bandwidth ℓ. We then have ℓ' = ℓd + r. According to Theorems 5A and 5B of Parzen (1957),

    lim_{n→∞} ℓ^q [E f̄_n(λ) − f(λ)] = lim_{n→∞} d^{-q} ℓ'^q [E f̄_n(λ) − f(λ)] = −w^{(q)} d^{-q} f^{(q)}(λ)

and

    lim_{n→∞} (n/ℓ) var[f̃_n(λ)] = d lim_{n→∞} (n/ℓ') var[f̄_n(λ)] = 2d ∫_0^∞ {w(z)}² dz {f(λ)}² {1 + I(λ = 0)},

where

    f^{(q)}(λ) = Σ_{h=-∞}^{∞} |h|^q cos(λh) γ_h.

Therefore, f̃_n(λ) and f̄_n(λ) have the same asymptotic variance in that lim_{n→∞} (n/ℓ) var[f̃_n(λ)] = lim_{n→∞} (n/ℓ) var[f̄_n(λ)], but they have different asymptotic biases, as manifested by the different multiplicative factors g^{(q)}(λ) and d^{-q} f^{(q)}(λ). Note that the two terms g^{(q)}(λ) and d^{-q} f^{(q)}(λ) depend on λ, θ, the type of model, and the type of kernel through q. Numerical investigation of the two terms reveals that neither term dominates the other. For some combinations of (λ, θ, q), g^{(q)}(λ) > d^{-q} f^{(q)}(λ), and for other combinations g^{(q)}(λ) ≤ d^{-q} f^{(q)}(λ). This means that, compared with f̄_n(λ), f̃_n(λ) is asymptotically more efficient for some parameter combinations and less efficient for others.

Theorem 2. Assume C1 holds and further that ℓ/n → 0 and ℓ^{1-q}/c_ℓ → 0. Then, for each λ ∈ [0, π), f̃_n(λ) →_p f(λ).

Note that the condition ℓ^{1-q}/c_ℓ → 0 imposes a condition on ℓ and c_ℓ for consistency of f̃_n(λ). For the Bartlett kernel, q = 1; therefore, we need c_ℓ → ∞ for consistency of f̃_n(λ). For the other two kernels, q = 2, and we need ℓ c_ℓ → ∞ for consistency.
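As a small numerical illustration of the variance constant in Theorem 1(ii) (my own check, not part of the paper), the factor ∫_0^∞ {w(z)}² dz can be evaluated for the three kernels, all of which vanish outside [0, 1]:

import numpy as np

def bartlett(x):
    return np.where(np.abs(x) <= 1, 1 - np.abs(x), 0.0)

def parzen(x):
    ax = np.abs(x)
    return np.where(ax <= 0.5, 1 - 6 * ax**2 + 6 * ax**3,
                    np.where(ax <= 1, 2 * (1 - ax)**3, 0.0))

def tukey_hanning(x, a=0.25):
    return np.where(np.abs(x) <= 1, 1 - 2 * a + 2 * a * np.cos(np.pi * x), 0.0)

z = np.linspace(0.0, 1.0, 100001)
for name, w in [("Bartlett", bartlett), ("Parzen", parzen), ("Tukey-Hanning", tukey_hanning)]:
    const = np.trapz(w(z) ** 2, z)  # Bartlett gives exactly 1/3
    # Asymptotic variance factor of Theorem 1(ii) for monthly data (d = 12), up to {f(lam)}^2 {1 + I(lam = 0)}:
    print(f"{name}: integral = {const:.4f}, 2*d*integral = {2 * 12 * const:.4f}")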

In the following theorem, we show that, if c_ℓ = 1, then the proposed spectral density estimators with the Bartlett kernel and the Parzen kernel are nonnegative. On the other hand, the proposed spectral density estimator with the Tukey–Hanning kernel is not always nonnegative, because the original spectral density estimator is not always nonnegative.

Theorem 3. Let w(·) be the function corresponding to the Bartlett kernel or the Parzen kernel. If c_ℓ = 1, then f̃_n(λ) ≥ 0 for all λ.

Nonnegativity of the spectral density estimate is crucial in some applications, such as semi-parametric unit root tests and generalized method of moments estimation, which require nonnegative estimates of the long-run variance σ₀² = 2πf(0) = Σ_{h=-∞}^{∞} γ_h. Also, the semi-parametric seasonal unit root tests of Shin and Oh (2000) and Breitung and Franses (1998) require estimates of σ_k² = 2πf(λ_k), λ_k = 2πk/d, k = 0, 1, …, [d/2]. Now, Theorem 3 indicates that we can use f̃_n(λ_k) as nonnegative estimators of σ_k², which may be better than f̄_n(λ_k) for typical monthly time series, as indicated by the simulation work in the following section. Theorem 3 also indicates that, if we choose c_ℓ to be a slowly growing sequence, we can have a consistent spectral density estimator f̃_n(λ) which is positive for almost all samples.

4. A Monte-Carlo study

We compare the proposed method with the existing methods for the three kernels of Bartlett, Parzen, and Tukey–Hanning. Finite sample total mean square errors (MSE) of spectral density estimates are compared. For a given spectral density estimator f_n(λ) based on a sample {z_1, …, z_n}, the […]
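The exact total-MSE criterion is cut off in this copy, so the following Monte-Carlo sketch (entirely my own scaffolding, not the paper's design) uses an average squared error over the seasonal frequencies λ_k = 2πk/d as a stand-in, approximates the "true" spectrum from one very long simulated series, and reuses simulate_seasonal_ar, smoothed_spectrum, proposed_spectrum, and bartlett from the sketches above:

import numpy as np

def mc_efficiency(phi, Phi, ell=3, d=12, n=120, reps=200, seed=1):
    """Ratio of summed squared errors of the classical estimator (bandwidth ell' = ell*d + d//2)
    to those of the proposed estimator (bandwidth ell, c_ell = 1); values above 1 favour the
    proposed estimator. This mirrors the comparison in Tables 1 and 2 only loosely."""
    freqs = 2 * np.pi * np.arange(d // 2 + 1) / d
    # Crude stand-in for the true spectrum: the proposed estimator applied to one very long series.
    z_long = simulate_seasonal_ar(100_000, phi, Phi, d)
    f_true = np.array([proposed_spectrum(z_long, lam, ell=40, d=d, c_ell=1.0, kernel=bartlett)
                       for lam in freqs])
    se_classic = se_proposed = 0.0
    for s in range(seed, seed + reps):
        z = simulate_seasonal_ar(n, phi, Phi, d, seed=s)
        for lam, ft in zip(freqs, f_true):
            se_classic += (smoothed_spectrum(z, lam, ell * d + d // 2, bartlett) - ft) ** 2
            se_proposed += (proposed_spectrum(z, lam, ell, d, 1.0, bartlett) - ft) ** 2
    return se_classic / se_proposed

# Illustrative call only; not expected to reproduce any table entry exactly.
# print(mc_efficiency(phi=0.2, Phi=0.4, ell=3))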
Table 1
Finite sample efficiencies of f̃_n(λ) over f̄_n(λ) for the model (1 − ΦB^{12})(1 − φB)z_t = e_t, where e_t is a sequence of independent N(0, 1) errors; n = 120; number of replications is 1000

 Φ    ℓ  |  Bartlett: φ=0  0.2  0.4  0.6  0.8  |  Parzen: φ=0  0.2  0.4  0.6  0.8  |  Tukey–Hanning: φ=0  0.2  0.4  0.6  0.8

c_ℓ = 1
 0.0  1  |  1.31 1.25 1.14 0.99 0.86  |  1.40 1.31 1.14 0.89 0.76  |  1.32 1.30 1.20 1.03 0.82
 0.2  1  |  1.26 1.21 1.12 1.00 0.88  |  1.28 1.23 1.11 0.96 0.84  |  1.29 1.27 1.18 1.04 0.88
 0.4  1  |  1.17 1.14 1.09 1.02 0.93  |  1.17 1.15 1.10 1.01 0.91  |  1.21 1.19 1.14 1.05 0.93
 0.6  1  |  1.09 1.08 1.05 1.02 0.97  |  1.09 1.08 1.06 1.02 0.96  |  1.11 1.11 1.09 1.05 0.97
 0.8  1  |  1.02 1.02 1.02 1.01 0.99  |  1.03 1.03 1.02 1.01 0.99  |  1.03 1.03 1.03 1.02 1.00
 0.0  3  |  1.51 1.46 1.31 1.12 0.88  |  1.63 1.55 1.38 1.06 0.79  |  1.47 1.44 1.36 1.19 0.90
 0.2  3  |  1.45 1.39 1.26 1.10 0.90  |  1.53 1.45 1.27 1.04 0.81  |  1.43 1.41 1.32 1.14 0.90
 0.4  3  |  1.33 1.26 1.18 1.06 0.92  |  1.35 1.29 1.17 1.04 0.87  |  1.34 1.30 1.23 1.10 0.92
 0.6  3  |  1.16 1.14 1.10 1.04 0.95  |  1.17 1.13 1.10 1.03 0.93  |  1.18 1.17 1.13 1.06 0.95
 0.8  3  |  1.04 1.04 1.03 1.01 0.99  |  1.04 1.04 1.03 1.01 0.98  |  1.05 1.04 1.04 1.02 0.99
 0.0  5  |  1.56 1.51 1.40 1.21 0.95  |  1.70 1.66 1.44 1.14 0.86  |  1.51 1.49 1.44 1.25 0.96
 0.2  5  |  1.52 1.46 1.36 1.16 0.95  |  1.67 1.57 1.41 1.11 0.87  |  1.51 1.47 1.39 1.20 0.96
 0.4  5  |  1.43 1.35 1.24 1.10 0.95  |  1.47 1.41 1.27 1.08 0.89  |  1.43 1.37 1.30 1.15 0.94
 0.6  5  |  1.23 1.21 1.13 1.05 0.97  |  1.25 1.21 1.16 1.05 0.94  |  1.26 1.23 1.18 1.09 0.97
 0.8  5  |  1.05 1.05 1.04 1.02 0.99  |  1.05 1.05 1.04 1.02 0.98  |  1.06 1.06 1.05 1.03 0.99

c_ℓ = ℓ
 0.0  3  |  1.10 1.10 1.08 1.06 1.00  |  1.10 1.10 1.08 1.06 0.98  |  1.03 1.03 1.03 1.02 0.99
 0.2  3  |  1.10 1.09 1.07 1.05 0.99  |  1.10 1.10 1.09 1.04 0.97  |  1.04 1.04 1.03 1.02 0.99
 0.4  3  |  1.09 1.08 1.06 1.03 0.99  |  1.09 1.09 1.07 1.04 0.98  |  1.04 1.04 1.04 1.02 0.99
 0.6  3  |  1.06 1.05 1.04 1.02 0.99  |  1.06 1.06 1.04 1.03 0.99  |  1.04 1.04 1.03 1.02 0.99
 0.8  3  |  1.02 1.02 1.01 1.01 1.00  |  1.02 1.02 1.02 1.01 1.00  |  1.02 1.02 1.01 1.01 1.00
 0.0  5  |  1.06 1.06 1.05 1.04 1.01  |  1.03 1.03 1.03 1.02 1.00  |  1.00 1.00 1.00 1.00 0.99
 0.2  5  |  1.06 1.06 1.05 1.04 1.00  |  1.03 1.04 1.03 1.02 1.00  |  1.01 1.00 1.00 1.00 0.99
 0.4  5  |  1.05 1.05 1.04 1.02 0.99  |  1.03 1.04 1.03 1.02 0.99  |  1.01 1.01 1.01 1.00 0.99
 0.6  5  |  1.04 1.03 1.03 1.01 0.99  |  1.03 1.03 1.02 1.01 0.99  |  1.01 1.01 1.01 1.00 1.00
 0.8  5  |  1.02 1.01 1.01 1.01 1.00  |  1.01 1.01 1.01 1.01 1.00  |  1.01 1.01 1.01 1.00 1.00

(2) Efficiency patterns are sensitive to the bandwidths ℓ and c_ℓ. Except for the AR model with φ = 0.8, efficiency values are greater for c_ℓ = 1 than for c_ℓ = ℓ and are greater for smaller ℓ. Hence, except for strong local autocorrelation caused by φ, the proposed weights work better for smaller ℓ and for smaller c_ℓ. However, for the AR model with φ = 0.8, the proposed weights have a more adverse effect for c_ℓ = 1 than for c_ℓ = ℓ and for smaller ℓ.

(3) Efficiency values decrease as φ or θ increases. This means that the superiority of the proposed kernel is more conspicuous for smaller local autocorrelation induced by φ or θ.

(4) Except for the AR model with φ = 0.8, efficiency values decrease as Φ or Θ increases. This implies that, except for strong local AR autocorrelation, the new scheme works better for smaller seasonal autocorrelation caused by Φ or Θ. However, in the case of strong local AR autocorrelation, the new scheme has a more adverse effect for smaller seasonal AR autocorrelation.

Table 2
Finite sample efficiencies of f̃_n(λ) over f̄_n(λ) for the model z_t = (1 − ΘB^{12})(1 − θB)e_t, where e_t is a sequence of independent N(0, 1) errors; n = 120; number of replications is 1000

 Θ    ℓ  |  Bartlett: θ=0  0.2  0.4  0.6  0.8  |  Parzen: θ=0  0.2  0.4  0.6  0.8  |  Tukey–Hanning: θ=0  0.2  0.4  0.6  0.8

c_ℓ = 1
 0.0  1  |  1.31 1.28 1.23 1.19 1.20  |  1.40 1.37 1.30 1.27 1.28  |  1.32 1.32 1.30 1.28 1.27
 0.2  1  |  1.27 1.24 1.20 1.19 1.16  |  1.30 1.27 1.24 1.23 1.21  |  1.30 1.30 1.27 1.26 1.26
 0.4  1  |  1.24 1.23 1.21 1.16 1.15  |  1.23 1.23 1.22 1.19 1.18  |  1.29 1.28 1.27 1.25 1.26
 0.6  1  |  1.21 1.20 1.18 1.16 1.15  |  1.20 1.20 1.19 1.18 1.17  |  1.28 1.28 1.26 1.25 1.25
 0.8  1  |  1.21 1.20 1.18 1.16 1.15  |  1.20 1.20 1.19 1.18 1.17  |  1.28 1.27 1.26 1.26 1.24
 0.0  3  |  1.50 1.48 1.45 1.41 1.39  |  1.64 1.61 1.53 1.49 1.49  |  1.47 1.47 1.46 1.43 1.43
 0.2  3  |  1.46 1.42 1.38 1.35 1.34  |  1.56 1.56 1.48 1.42 1.42  |  1.44 1.44 1.41 1.41 1.42
 0.4  3  |  1.40 1.36 1.34 1.31 1.30  |  1.46 1.44 1.40 1.39 1.36  |  1.40 1.39 1.38 1.37 1.35
 0.6  3  |  1.37 1.35 1.31 1.26 1.26  |  1.44 1.39 1.36 1.33 1.31  |  1.37 1.37 1.37 1.33 1.36
 0.8  3  |  1.35 1.33 1.29 1.28 1.25  |  1.41 1.37 1.35 1.30 1.31  |  1.37 1.36 1.36 1.33 1.35
 0.0  5  |  1.56 1.56 1.50 1.48 1.47  |  1.70 1.68 1.66 1.62 1.60  |  1.51 1.52 1.51 1.50 1.50
 0.2  5  |  1.53 1.52 1.46 1.45 1.44  |  1.67 1.62 1.57 1.55 1.54  |  1.51 1.50 1.48 1.48 1.46
 0.4  5  |  1.50 1.48 1.44 1.40 1.41  |  1.61 1.56 1.55 1.51 1.49  |  1.47 1.48 1.44 1.44 1.44
 0.6  5  |  1.46 1.44 1.41 1.38 1.37  |  1.57 1.55 1.51 1.44 1.45  |  1.48 1.45 1.44 1.43 1.42
 0.8  5  |  1.45 1.43 1.39 1.38 1.35  |  1.56 1.53 1.48 1.48 1.43  |  1.46 1.46 1.43 1.41 1.41

c_ℓ = ℓ
 0.0  3  |  1.10 1.10 1.10 1.10 1.09  |  1.10 1.10 1.10 1.10 1.10  |  1.03 1.03 1.03 1.03 1.03
 0.2  3  |  1.09 1.09 1.09 1.09 1.09  |  1.10 1.10 1.09 1.09 1.09  |  1.03 1.03 1.03 1.03 1.03
 0.4  3  |  1.09 1.09 1.08 1.08 1.08  |  1.09 1.09 1.09 1.09 1.08  |  1.03 1.03 1.03 1.03 1.03
 0.6  3  |  1.08 1.09 1.08 1.08 1.07  |  1.09 1.09 1.09 1.09 1.08  |  1.03 1.03 1.03 1.03 1.03
 0.8  3  |  1.09 1.08 1.08 1.07 1.07  |  1.09 1.09 1.09 1.08 1.09  |  1.03 1.03 1.03 1.03 1.03
 0.0  5  |  1.06 1.06 1.06 1.06 1.06  |  1.03 1.03 1.03 1.03 1.03  |  1.00 1.00 1.00 1.00 1.00
 0.2  5  |  1.06 1.06 1.06 1.06 1.06  |  1.03 1.03 1.03 1.03 1.03  |  1.00 1.00 1.00 1.00 1.00
 0.4  5  |  1.06 1.06 1.05 1.05 1.05  |  1.03 1.03 1.03 1.03 1.03  |  1.00 1.00 1.00 1.00 1.00
 0.6  5  |  1.06 1.06 1.06 1.05 1.05  |  1.03 1.03 1.03 1.03 1.03  |  1.00 1.00 1.00 1.00 1.00
 0.8  5  |  1.06 1.05 1.05 1.05 1.05  |  1.03 1.03 1.03 1.03 1.03  |  1.00 1.00 1.00 1.00 1.00

We have conducted similar experiments for other situations. For the same experiment with n = 240 and 480, we found that the efficiency patterns are similar to those with n = 120 in Tables 1 and 2. Moreover, a similar efficiency advantage of f̃_n(λ) over f̄_n(λ) is observed for a seasonal ARMA(1,1) model (1 − ΦB^{12})z_t = (1 − ΘB^{12})e_t.

5. Discussions

For good performance of the proposed estimator f̃_n(λ), the period of seasonality d should be carefully specified […]