A divergence test for autoregressive time series models


Statistical Methodology 8 (2011) 442–450


S. Lee (a), A. Karagrigoriou (b),*

(a) Department of Statistics, Seoul National University, Seoul, South Korea
(b) Department of Mathematics and Statistics, University of Cyprus, Nicosia, Cyprus

* Corresponding author. E-mail addresses: [email protected] (S. Lee), [email protected] (A. Karagrigoriou).

doi:10.1016/j.stamet.2011.04.006

Article history: Received 13 April 2010; received in revised form 26 January 2011; accepted 19 April 2011.

Keywords: Autoregressive models; Divergence measure; Divergence test; Residual empirical process; Unstable process

Abstract. In this paper, we study the normality test for the innovations of unstable autoregressive models based on the divergence test. In order to investigate the asymptotic behavior of the tests, we use the link between the divergence test and the residual empirical process. Simulation results are provided for illustration.

1. Introduction

Divergence measures are used to measure the distance between populations or the mutual information between two random variables, and to construct goodness-of-fit tests. The most important family of such measures is Csiszar's family of φ-divergence measures [10,3], where φ is a real-valued convex function on [0, ∞). The φ-divergence measure takes different forms according to the choice of φ; for instance, the classical Kullback–Leibler measure is obtained for φ(u) = u log(u) or u log(u) − u + 1. Measures of divergence can be used in statistical inference for the construction of goodness-of-fit tests [24,1,23,12] or in statistical modeling for the construction of model selection criteria [2,4,6,21,22].

In this paper, we focus on the application of the φ-divergence test to unstable autoregressive (AR) models with unit roots. Special attention is given to the normality test for the innovations of those models. As a representative work on unstable models we mention that of Chan and Wei [7], who deal with the asymptotic properties of least squares estimates in unstable models.




For the motivation and early history of the normality test, we refer the reader to [17] and the papers cited therein. Rosenblatt [20] is also an excellent source, illustrating the importance of the normality test for time series models. Among the goodness-of-fit methods, one can mention the Kolmogorov–Smirnov and Bickel–Rosenblatt tests (cf. [5]), which are well known for their ability to detect heavy-tailed alternatives (see [14]). Although these tests have their own merits, it is widely accepted that they also have certain shortcomings. For instance, the Kolmogorov–Smirnov test tends to produce low power in many situations, and the Bickel–Rosenblatt test requires the choice of an optimal bandwidth, which is difficult in practice.

It is well known, as seen in [17,16], that the residual-based test behaves differently from the test based on the true errors, depending upon the structure of the time series model. In particular, the result of Lee and Taniguchi [16] reveals that ARCH effects severely affect the limiting null distribution of the residual empirical process. Hence, the residual-based test for such models should be analyzed carefully before actual use. It is noteworthy that a strong link exists between the asymptotic behavior of the φ-divergence test and the empirical process with estimated parameters, as seen in [18]. In the present work, by tracing and relying on this link, we are able to employ the empirical process technique and derive the limiting null distribution of the φ-divergence test for autoregressive time series models. In particular, for unstable AR models, this enables us to employ a specific type of φ-divergence test for which the parameter estimation effect is successfully removed from the limiting null distribution (cf. Section 2.2).

This paper is organized as follows. In Section 2, we explain the link between the φ-divergence test and the empirical process and investigate the limiting null distribution for unstable AR models. In Section 3, we provide simulation results to illustrate our findings. In Section 4, we provide concluding remarks.

2. Statistical inference

2.1. The φ-divergence test for the iid case

In this section, we explain how the φ-divergence test is linked to the empirical process and how this link is used to construct a φ-divergence test for iid samples; it will later be used to construct a φ-divergence test whose limiting null distribution is unaffected by parameter estimation.

Let X_t be iid random variables following the distribution F, and suppose that one wishes to test H_0: F = F_0 vs. H_1: F ≠ F_0. Let Φ be a class of twice continuously differentiable real-valued functions φ defined on R_+ such that φ(1) = 0, φ′(1) = 0, φ″(1) ≠ 0, 0φ(0/0) = 0 and 0φ(p/0) = p·lim_{u→∞} φ(u)/u. For φ ∈ Φ and a partition {A_i : i = 1, ..., M} of the real line, where M is a positive integer larger than 1, the φ-divergence test is given by (see, e.g., [13])

\[
\hat{T}_n^{\phi} = \frac{2n}{\phi''(1)}\, D_{\phi}(p, \hat{p}_n), \qquad (1)
\]

where p = (p_1, ..., p_M)′ with p_i = P_{H_0}(A_i), p̂_n = (p̂_{n1}, ..., p̂_{nM})′ with p̂_{ni} = N_i/n := Σ_{t=1}^n I(X_t ∈ A_i)/n, and D_φ is a real-valued function on R^M × R^M such that, for all M × 1 vectors p = (p_1, ..., p_M)′ and q = (q_1, ..., q_M)′,

\[
D_{\phi}(p, q) = \sum_{i=1}^{M} p_i\, \phi(q_i/p_i).
\]

According to the arguments in the proof of Theorem 3.2 of [24], as n → ∞,

\[
\hat{T}_n^{\phi} = n \sum_{i=1}^{M} (\hat{p}_{ni} - p_i)^2 / p_i + o_P(1).
\]

In particular, if A_i = (x_{i−1}, x_i], where −∞ = x_0 < x_1 < · · · < x_M = ∞ with F_0(x_i) = i/M, provided F_0 is continuous and strictly increasing, we can write

\[
\hat{T}_n^{\phi} = M \sum_{i=1}^{M} \{U_n(i/M) - U_n((i-1)/M)\}^2 + o_P(1),
\]


where

\[
U_n(s) = n^{-1/2} \sum_{j=1}^{n} \{ I(F_0(X_j) \le s) - s \}, \qquad 0 \le s \le 1.
\]

In view of Theorem 3.2 of [24], we have the following.

Theorem 1. Under H_0, as n → ∞,

\[
\hat{T}_n^{\phi} \xrightarrow{d} \chi^2_{M-1}. \qquad (2)
\]
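To make the construction above concrete, the following is a minimal numerical sketch (ours, not part of the paper) of the simple-null test T̂_n^φ with M equiprobable cells F_0(x_i) = i/M, using the Cressie–Read family of φ functions that reappears in Section 3 (for which φ″(1) = 1). The function names `divergence_test` and `phi_cressie_read`, and the use of NumPy/SciPy, are our own choices.

```python
# Sketch of the simple-null phi-divergence test of Section 2.1 (assumptions: equiprobable
# cells under F0, Cressie-Read phi family; names and defaults are ours).
import numpy as np
from scipy import stats
from scipy.special import xlogy

def phi_cressie_read(u, lam):
    # Cressie-Read family; lam = 0 is understood as the limit u*log(u) - u + 1.
    u = np.asarray(u, dtype=float)
    if lam == 0:
        return xlogy(u, u) - u + 1.0          # xlogy handles 0*log(0) = 0
    return (u ** (lam + 1) - u - lam * (u - 1.0)) / (lam * (lam + 1.0))

def divergence_test(x, F0_cdf, M=10, lam=0.0):
    # T_n^phi = (2n / phi''(1)) * D_phi(p, p_hat), with phi''(1) = 1 for this family.
    n = len(x)
    p = np.full(M, 1.0 / M)                                  # p_i = P_{H0}(A_i) = 1/M
    cells = np.clip(np.floor(F0_cdf(x) * M).astype(int), 0, M - 1)
    p_hat = np.bincount(cells, minlength=M) / n              # \hat p_{ni} = N_i / n
    T = 2.0 * n * np.sum(p * phi_cressie_read(p_hat / p, lam))
    return T, stats.chi2.sf(T, df=M - 1)                     # chi^2_{M-1} p-value (Theorem 1)

rng = np.random.default_rng(0)
print(divergence_test(rng.standard_normal(1000), stats.norm.cdf, M=10, lam=2/3))
```

Under H_0 the reported p-value relies on the χ²_{M−1} approximation of Theorem 1.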

Next, we turn our attention to the composite hypothesis test. Let X_t be iid r.v.'s following the distribution F. Suppose that one wishes to test H_0: F ∈ {F_θ(x): θ ∈ Θ ⊂ R^d} vs. H_1: not H_0, where F_θ(x) is strictly increasing and continuous in x. Let θ_0 denote the true parameter under H_0, and let θ̂_n be an estimate of θ_0 such that n^{1/2}(θ̂_n − θ_0) is asymptotically normal under H_0. Here we consider the situation where the X_t are unobservable and only δ_{nt}, t = 1, ..., n, n ≥ 1, are available and approximate the X_t in the sense that

\[
\hat{Y}_n(s) = Y_n(s) + R_n(s) + \Delta_n(s), \qquad 0 \le s \le 1, \qquad (3)
\]

where

\[
\hat{Y}_n(s) = n^{-1/2} \sum_{t=1}^{n} \{ I(F_{\hat\theta_n}(\delta_{nt}) \le s) - s \},
\qquad
Y_n(s) = n^{-1/2} \sum_{t=1}^{n} \{ I(F_{\theta_0}(X_t) \le s) - s \},
\]
\[
R_n(s) = n^{1/2}(\hat\theta_n - \theta_0)'\, h_{\theta_0}(s),
\qquad
\sup_{0 \le s \le 1} |\Delta_n(s)| = o_P(1),
\]

and h_θ is an R^d-valued function defined on [0, 1]. A typical example of (3) is the residual empirical process from AR(1) models with the variance of the innovations estimated (cf. [17]). Also, the estimated empirical process in [11] is an example in which the δ_{nt} are identical to the X_t themselves. In this case, the φ-divergence test under consideration is

\[
\hat{T}_n^{\phi} = \frac{2n}{\phi''(1)}\, D_{\phi}(p, \hat{p}_n), \qquad (4)
\]

where p̂_n = (p̂_{n1}, ..., p̂_{nM})′ with p̂_{ni} = n^{-1} Σ_{t=1}^n I((i−1)/M < F_{θ̂_n}(δ_{nt}) ≤ i/M) and p = (1/M, ..., 1/M)′. Note that

\[
\hat{p}_{ni} = n^{-1/2} \{ \hat{Y}_n(i/M) - \hat{Y}_n((i-1)/M) \} + 1/M,
\]

and thus

\[
(\hat{p}_{ni} - p_i)^2 = n^{-1} \{ Y_n(i/M) - Y_n((i-1)/M) + R_n(i/M) - R_n((i-1)/M) \}^2 = O_P(n^{-1}).
\]

Hence, in view of the arguments of [19], we have

\[
\hat{T}_n^{\phi} = M \sum_{i=1}^{M} \{ Y_n(i/M) - Y_n((i-1)/M) + R_n(i/M) - R_n((i-1)/M) \}^2 + o_P(1).
\]

Then, according to Theorem 1 of [18] and Theorem 1 of [8], we obtain the following result.


Theorem 2. Suppose that M ≥ d + 2. Also, suppose that F_θ satisfies (A1)–(A3) of [18], that ∂F_{θ_0}(x)/∂x > 0 for all x ∈ R, and that h_θ(s) = ∂F_θ/∂θ (F_θ^{-1}(s)). Then, if θ̂_n is an MLE of θ_0, under H_0,

\[
\hat{T}_n^{\phi} \xrightarrow{d} \sum_{i=1}^{M-d-1} Z_i^2 + \sum_{j=1}^{d} (1 - \lambda_j)\,\tilde{Z}_j^2,
\]

where Z_1, ..., Z_{M−d−1}, Z̃_1, ..., Z̃_d are iid N(0, 1) r.v.'s and the λ_j are real numbers in [0, 1] which are the roots of the equation det(J(θ_0) − λ I(θ_0)) = 0, where J(θ_0) and I(θ_0) are the Fisher information matrices from the original and the discretized models imposed on {X_t} under H_0, respectively.
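Since the weights λ_j enter the limit law, critical values are typically obtained by simulation. The following small sketch (ours, not from the paper) draws from the limiting distribution of Theorem 2 for user-supplied values of λ_j; treating the λ_j as known inputs is an assumption made purely for illustration.

```python
# Monte Carlo quantile of sum_{i<=M-d-1} Z_i^2 + sum_{j<=d} (1 - lambda_j) * Ztilde_j^2
# (a sketch; the lambda_j are assumed given).
import numpy as np

def limit_quantile(M, lambdas, alpha=0.05, reps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    d = len(lambdas)
    z = rng.standard_normal((reps, M - d - 1))
    zt = rng.standard_normal((reps, d))
    stat = (z ** 2).sum(axis=1) + ((1.0 - np.asarray(lambdas)) * zt ** 2).sum(axis=1)
    return np.quantile(stat, 1.0 - alpha)

# Example: M = 10 cells, d = 1 estimated parameter, lambda = 0.5 (illustrative value only).
print(limit_quantile(M=10, lambdas=[0.5]))
```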

2.2. Unstable AR(m) models

In this subsection, we apply the result in Theorem 1 to the normality test for the unstable autoregressive model

\[
X_t - \beta_1 X_{t-1} - \cdots - \beta_m X_{t-m} = \varepsilon_t, \qquad (5)
\]

where the ε_t are iid random variables with zero mean, finite variance σ² and common distribution F. We assume that the corresponding characteristic polynomial ψ admits the decomposition

\[
\psi(z) = 1 - \beta_1 z - \cdots - \beta_m z^m
        = (1 - z)^a (1 + z)^b \prod_{k=1}^{l} (1 - 2\cos\theta_k\, z + z^2)^{d_k}\, \psi^*(z),
\]

where a, b, l, d_k are nonnegative integers, θ_k belongs to (0, π), and ψ*(z) is a polynomial of order r = m − (a + b + 2d_1 + · · · + 2d_l) that has no zeros in the closed unit disk of the complex plane. When a = b = d_k = 0, {X_t} is stationary. When one of a, b and the d_k is non-zero, the process is said to be unstable. The commonly used integrated AR (IAR) model is the case where a ≠ 0 and b = d_k = 0 for all k. For relevant references to this model, we refer the reader to [7,17].
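As a quick numerical illustration (ours, not from the paper), one can classify a hypothesized AR(m) model as stationary or unstable by inspecting the roots of ψ(z); the helper below counts the multiplicities a and b of the roots at z = 1 and z = −1 and flags the remaining roots on the unit circle. The function name and tolerance are our own choices.

```python
# Sketch: classify psi(z) = 1 - beta_1 z - ... - beta_m z^m from its roots.
import numpy as np

def classify_ar(beta, tol=1e-8):
    # Stationary iff all roots of psi lie strictly outside the unit circle.
    beta = np.asarray(beta, dtype=float)
    coeffs = np.r_[-beta[::-1], 1.0]                     # [-beta_m, ..., -beta_1, 1]
    roots = np.roots(coeffs)
    on_circle = roots[np.abs(np.abs(roots) - 1.0) < tol]
    a = int(np.sum(np.abs(on_circle - 1.0) < tol))       # multiplicity of the root z = 1
    b = int(np.sum(np.abs(on_circle + 1.0) < tol))       # multiplicity of the root z = -1
    complex_on_circle = len(on_circle) - a - b           # 2*d_1 + ... + 2*d_l
    stationary = len(on_circle) == 0 and bool(np.all(np.abs(roots) > 1.0))
    return {"a": a, "b": b, "complex_on_circle": complex_on_circle, "stationary": stationary}

print(classify_ar([1.0]))   # IAR case: unit root at z = 1, hence a = 1 and unstable
print(classify_ar([0.5]))   # stationary AR(1)
```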

Since the normality of the AR process implies that of its error process and vice versa, we consider the normality test based on {ε_t} rather than the one based on {X_t} itself. Suppose that one wishes to test H_0: F = ϕ(·/σ) vs. H_1: not H_0, where ϕ denotes the N(0, 1) distribution function. Set X_t = (X_t, ..., X_{t−m+1})′ and X_0 = 0, and let

\[
\hat{\beta}_n = \Big( \sum_{t=1}^{n} X_{t-1} X_{t-1}' \Big)^{-1} \sum_{t=1}^{n} X_{t-1} X_t, \qquad n > m,
\]

be the least squares estimate of β = (β_1, ..., β_m)′ based on X_1, ..., X_n. Then the residuals are

\[
\hat{\varepsilon}_t = X_t - \hat{\beta}_n' X_{t-1}, \qquad t = 1, \ldots, n,
\]

and the process under consideration is

\[
\hat{Y}_n(s) = n^{-1/2} \sum_{t=1}^{n} \{ I(\varphi(\hat{\varepsilon}_t/\hat{\sigma}_n) \le s) - s \}, \qquad 0 \le s \le 1,
\]

where σ̂_n² = n^{-1} Σ_{t=1}^n ε̂_t². Suppose that

\[
\hat{E}_n(x) = n^{-1/2} \sum_{t=1}^{n} \{ I(\hat{\varepsilon}_t/\hat{\sigma}_n \le x) - \varphi(x) \}, \qquad x \in \mathbb{R},
\]
\[
E_n(x) = n^{-1/2} \sum_{t=1}^{n} \{ I(\varepsilon_t/\sigma \le x) - \varphi(x) \}, \qquad x \in \mathbb{R},
\]


\[
\rho_{n1}(x) = n^{1/2}(\hat{\sigma}_n/\sigma - 1)\, x\, \varphi'(x), \qquad x \in \mathbb{R},
\]
\[
\rho_{n2}(x) = n^{-1/2}(\hat{\beta}_n - \beta)' \sum_{t=1}^{n} X_{t-1}\, \varphi'(x)/\sigma, \qquad x \in \mathbb{R}.
\]

According to [17], under H_0 we have the expansion

\[
\hat{E}_n(x) = E_n(x) + \rho_{n1}(x) + \rho_{n2}(x) + \eta_n(x), \qquad \sup_x |\eta_n(x)| = o_P(1),
\]

where we have used the fact that

\[
n^{1/2}(\hat{\sigma}_n^2 - \sigma^2) = n^{-1/2} \sum_{t=1}^{n} (\varepsilon_t^2 - \sigma^2) + o_P(1).
\]
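For intuition, the following heuristic calculation (our sketch; the rigorous argument is in [17]) indicates where ρ_{n1} and ρ_{n2} come from. Since ε̂_t = ε_t − (β̂_n − β)′X_{t−1}, the event {ε̂_t/σ̂_n ≤ x} equals {ε_t/σ ≤ (σ̂_n/σ)x + (β̂_n − β)′X_{t−1}/σ}; replacing each indicator by the corresponding probability and linearizing ϕ at x gives

\[
n^{-1/2}\sum_{t=1}^{n}\Big\{\varphi\big((\hat\sigma_n/\sigma)\,x + (\hat\beta_n-\beta)'X_{t-1}/\sigma\big) - \varphi(x)\Big\}
\;\approx\; n^{1/2}(\hat\sigma_n/\sigma - 1)\,x\,\varphi'(x)
          + n^{-1/2}(\hat\beta_n-\beta)'\sum_{t=1}^{n}X_{t-1}\,\varphi'(x)/\sigma
\;=\; \rho_{n1}(x) + \rho_{n2}(x),
\]

with the difference between the indicators and these probabilities absorbed into E_n(x) and η_n(x).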

If a ≥ 1, ρ_{n2} is not negligible and Theorem 1 does not hold for this case. However, if a = 0, ρ_{n2} is negligible and therefore Ê_n behaves asymptotically in the same way as E_n + ρ_{n1}. Consequently, Ŷ_n behaves asymptotically in the same way as

\[
Y_n^*(s) = n^{-1/2} \sum_{t=1}^{n} \{ I(\varphi(\varepsilon_t/\sigma_n) \le s) - s \},
\]

where σ_n² = n^{-1} Σ_{t=1}^n ε_t². This in turn implies that the φ-divergence test based on Ŷ_n satisfies the convergence result in Theorem 2. More precisely, we have the following.

Theorem 3. Suppose that a = 0 and that M ≥ 3, and define

\[
\hat{T}_{n1}^{\phi} = \frac{2n}{\phi''(1)}\, D_{\phi}(p, \hat{p}_n),
\]

where p̂_n = (p̂_{n1}, ..., p̂_{nM})′ with p̂_{ni} = n^{-1} Σ_{t=1}^n I((i−1)/M < ϕ(ε̂_t/σ̂_n) ≤ i/M) and p = (1/M, ..., 1/M)′. Then,

\[
\hat{T}_{n1}^{\phi} \xrightarrow{d} \sum_{i=1}^{M-2} Z_i^2 + (1 - \lambda) Z_{M-1}^2,
\]

where Z_1, ..., Z_{M−1} are iid N(0, 1) r.v.'s and 0 ≤ λ ≤ 1.
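A minimal implementation sketch (ours) of T̂_{n1}^φ for the likelihood-ratio member of the family (λ = 0): the AR(m) model is fitted by least squares, the standardized residuals are mapped through ϕ, and the cell frequencies are compared with 1/M. For simplicity the fit below conditions on the first m observations instead of setting X_0 = 0; the helper names are our own.

```python
# Sketch of the residual-based statistic T_n1 (Theorem 3), lambda = 0 member of the family.
import numpy as np
from scipy import stats
from scipy.special import xlogy

def ar_residuals(x, m):
    # Least squares fit of X_t on (X_{t-1}, ..., X_{t-m}) and the resulting residuals.
    x = np.asarray(x, dtype=float)
    Y = x[m:]
    X = np.column_stack([x[m - j:len(x) - j] for j in range(1, m + 1)])
    beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta_hat, Y - X @ beta_hat

def T_n1(x, m=1, M=10):
    _, e = ar_residuals(x, m)
    n = len(e)
    sigma_hat = np.sqrt(np.mean(e ** 2))
    u = stats.norm.cdf(e / sigma_hat)                      # phi(eps_hat_t / sigma_hat)
    cells = np.clip(np.floor(u * M).astype(int), 0, M - 1)
    p_hat = np.bincount(cells, minlength=M) / n
    return 2.0 * n * np.sum(xlogy(p_hat, p_hat * M))       # D_phi for phi(u) = u log u - u + 1

rng = np.random.default_rng(1)
eps = rng.standard_normal(1000)
x = np.empty(1000); x[0] = eps[0]
for t in range(1, 1000):
    x[t] = 0.5 * x[t - 1] + eps[t]                         # stationary AR(1), so a = 0
print(T_n1(x, m=1, M=10))
```

Under a = 0 and H_0, the statistic follows the Theorem 3 limit, whose critical value can be simulated as in the earlier sketch.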

In order to handle the case of a ≥ 1, borrowing the idea of [15], we consider the empirical process

\[
\hat{Z}_n(s) = n^{-1/2} \sum_{t=1}^{n} \{ I(G_0((\hat{\varepsilon}_t/\hat{\sigma}_n)^2) \le s) - s \}, \qquad 0 \le s \le 1,
\]

where G_0 denotes the distribution function of χ²(1). We set

\[
Z_n^*(s) = n^{-1/2} \sum_{t=1}^{n} \{ I(G_0((\varepsilon_t/\sigma_n)^2) \le s) - s \}, \qquad 0 \le s \le 1.
\]

By putting x = {G_0^{-1}(s)}^{1/2}, since ρ_{n2}(−x) = ρ_{n2}(x), we get

\[
\begin{aligned}
\hat{Z}_n(s) &= n^{-1/2} \sum_{t=1}^{n} \{ I(\hat{\varepsilon}_t/\hat{\sigma}_n \le x) - \varphi(x) \}
             - n^{-1/2} \sum_{t=1}^{n} \{ I(\hat{\varepsilon}_t/\hat{\sigma}_n \le -x) - \varphi(-x) \} \\
             &= E_n(x) - E_n(-x) + \rho_{n1}(x) - \rho_{n1}(-x) + o_P(1)
             \;\stackrel{d}{=}\; Z_n^*(s) + o_P(1).
\end{aligned}
\]

This, due to Theorem 2, leads to the following result.


Theorem 4. Suppose that M ≥ 3 and define

\[
\hat{T}_{n2}^{\phi} = \frac{2n}{\phi''(1)}\, D_{\phi}(p, \hat{p}_n), \qquad (6)
\]

where p̂_n = (p̂_{n1}, ..., p̂_{nM})′ with p̂_{ni} = n^{-1} Σ_{t=1}^n I((i−1)/M < G_0((ε̂_t/σ̂_n)²) ≤ i/M) and p = (1/M, ..., 1/M)′. Then,

\[
\hat{T}_{n2}^{\phi} \xrightarrow{d} \sum_{i=1}^{M-2} Z_i^2 + (1 - \lambda) Z_{M-1}^2,
\]

where Z_1, ..., Z_{M−1} are iid N(0, 1) r.v.'s and 0 ≤ λ ≤ 1.
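The corresponding sketch (ours) for T̂_{n2}^φ differs from the previous one only in the transform: the squared standardized residuals are pushed through the χ²(1) distribution function G_0, so the statistic can also be used when the model has a unit root, as in the random-walk example below.

```python
# Sketch of T_n2 (Theorem 4): G0((eps_hat/sigma_hat)^2) in place of phi(eps_hat/sigma_hat).
import numpy as np
from scipy import stats
from scipy.special import xlogy

def T_n2(x, m=1, M=10):
    x = np.asarray(x, dtype=float)
    Y = x[m:]
    X = np.column_stack([x[m - j:len(x) - j] for j in range(1, m + 1)])
    beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    e = Y - X @ beta_hat
    n = len(e)
    sigma_hat = np.sqrt(np.mean(e ** 2))
    u = stats.chi2.cdf((e / sigma_hat) ** 2, df=1)          # G0((eps_hat/sigma_hat)^2)
    cells = np.clip(np.floor(u * M).astype(int), 0, M - 1)
    p_hat = np.bincount(cells, minlength=M) / n
    return 2.0 * n * np.sum(xlogy(p_hat, p_hat * M))        # lambda = 0 member of the family

# Random walk (beta = 1, an unstable/integrated AR(1)) with N(0,1) innovations:
rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(1000))
print(T_n2(x, m=1, M=10))
```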

Remark. Note that the above result does not require any a priori information about the existence of the unit root 1 in the characteristic polynomial. The approach used to obtain Theorem 4 is also valid for goodness-of-fit tests other than the normality test, as long as the underlying distribution is symmetric around zero. However, the asymptotic result may not coincide with that in Theorem 4.

3. The simulation study

In this section, we report simulation results for various φ-divergence tests for the AR(1) model X_t = β X_{t−1} + ε_t. Here, we consider the Cressie–Read power divergence tests [9] of the form

\[
\phi(u) = \frac{u^{\lambda+1} - u - \lambda(u - 1)}{\lambda(\lambda + 1)},
\]

with λ = −1/2, 0, 2/3, 1, which correspond to the Freeman–Tukey, likelihood ratio, Cressie–Read, and chi-squared tests, respectively. Since the residual-based φ-divergence test for AR(1) models behaves asymptotically in the same way as the one based on the true errors, we can obtain the critical values of T̂_{n1}^φ and T̂_{n2}^φ empirically through a Monte Carlo simulation. For this task, a sample size of 1000 and 10 000 repetitions are used.

In order to assess the performance of these tests, we consider the hypotheses H_0: {ε_t} ∼ f_0 vs. H_1: {ε_t} ∼ f_1, where f_0 denotes the N(0, 1) density and f_1(x) = ζ g(ζ x), with g(x) = (1 − p)f_0(x) + p(1/ξ)f_0(x/ξ) and ζ² = 1 − p + pξ². Here, we choose p = 0.1 and ξ = 3. Observe that these choices of p and ξ give rise to an alternative relatively close to the null hypothesis. Furthermore, the factor ζ rescales the mixture so that the variance under the alternative equals 1, a choice often made to put the null and the alternative hypotheses on an equal footing. Further, we employ n = 100, 300, 500, 1000, and β = 0.1, 0.5, 0.7, 0.9 for T̂_{n1}^φ and β = 0.1, 0.5, 0.7, 0.9, 1 for T̂_{n2}^φ. Throughout the simulation, the nominal level is 0.05, the number of cells M is 10, and the empirical sizes and powers are computed from 1000 repetitions.

Tables 1 and 2 show that the tests with λ = −1/2 exhibit size distortions to a certain degree, while the others do not. Further, Tables 3 and 4 show that the tests produce reasonably good powers. It is noteworthy that neither the sizes nor the powers are affected by the value of β, except in the case λ = −1/2. This is because correlation effects are effectively discarded by employing the residual-based tests. Our findings support the validity of our tests.

Taking into consideration the size and power results of the simulations, the choices λ = 0 and λ = −1/2 appear to have a slight edge over λ = 2/3 and λ = 1. Furthermore, the test statistic T̂_{n2}^φ appears in a number of instances to outperform the corresponding T̂_{n1}^φ statistic. Of course, such conclusions should be treated with caution when the φ-divergence goodness-of-fit test is applied to more general classes of time series models.
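The following sketch (ours) mirrors the simulation design described above for the λ = 0 test: the 5% critical value is obtained from 10 000 Monte Carlo replications of the statistic computed from true N(0,1) errors (which, as noted, is asymptotically equivalent to the residual-based statistic), and the power is then estimated under the rescaled mixture alternative f_1 with p = 0.1 and ξ = 3. Function names and the fixed seeds are our own.

```python
# Sketch of the Section 3 simulation design (lambda = 0, M = 10, n = 1000, level 0.05).
import numpy as np
from scipy import stats
from scipy.special import xlogy

def stat_from_errors(e, M=10):
    n = len(e)
    u = stats.norm.cdf(e / np.sqrt(np.mean(e ** 2)))        # phi(e_t / sigma_n)
    cells = np.clip(np.floor(u * M).astype(int), 0, M - 1)
    p_hat = np.bincount(cells, minlength=M) / n
    return 2.0 * n * np.sum(xlogy(p_hat, p_hat * M))        # lambda = 0, p_i = 1/M

rng = np.random.default_rng(3)
n, M, reps = 1000, 10, 10_000

# Empirical 5% critical value from true N(0,1) errors.
crit = np.quantile([stat_from_errors(rng.standard_normal(n), M) for _ in range(reps)], 0.95)

# Innovations under the alternative: scale mixture g rescaled by zeta so that Var = 1.
p, xi = 0.1, 3.0
zeta = np.sqrt(1 - p + p * xi ** 2)
def draw_f1(size):
    scale = np.where(rng.random(size) < p, xi, 1.0)
    return rng.standard_normal(size) * scale / zeta

power = np.mean([stat_from_errors(draw_f1(n), M) > crit for _ in range(1000)])
print(crit, power)
```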


Table 1
Empirical sizes of T̂_{n1}^φ and T̂_{n2}^φ with λ = −1/2 and 0 (columns: n = 100, 300, 500, 1000 for each statistic).

λ = −1/2        T̂_{n1}^φ                        T̂_{n2}^φ
  β1 = 0.1      0.068  0.050  0.060  0.047      0.057  0.050  0.050  0.046
  β1 = 0.5      0.070  0.045  0.048  0.047      0.068  0.045  0.039  0.042
  β1 = 0.7      0.068  0.057  0.052  0.055      0.079  0.051  0.042  0.038
  β1 = 0.9      0.059  0.064  0.043  0.055      0.079  0.050  0.049  0.056
  β1 = 1        –      –      –      –          0.067  0.065  0.062  0.068

λ = 0           T̂_{n1}^φ                        T̂_{n2}^φ
  β1 = 0.1      0.052  0.062  0.048  0.042      0.047  0.057  0.048  0.041
  β1 = 0.5      0.056  0.059  0.044  0.045      0.045  0.043  0.052  0.052
  β1 = 0.7      0.050  0.070  0.052  0.048      0.058  0.043  0.052  0.053
  β1 = 0.9      0.048  0.051  0.038  0.047      0.059  0.057  0.068  0.055
  β1 = 1        –      –      –      –          0.053  0.045  0.053  0.044

Table 2
Empirical sizes of T̂_{n1}^φ and T̂_{n2}^φ with λ = 2/3 and 1 (columns: n = 100, 300, 500, 1000 for each statistic).

λ = 2/3         T̂_{n1}^φ                        T̂_{n2}^φ
  β1 = 0.1      0.067  0.044  0.049  0.055      0.034  0.046  0.059  0.045
  β1 = 0.5      0.048  0.046  0.049  0.063      0.041  0.054  0.055  0.062
  β1 = 0.7      0.054  0.052  0.059  0.060      0.037  0.042  0.048  0.047
  β1 = 0.9      0.043  0.040  0.052  0.055      0.046  0.035  0.068  0.044
  β1 = 1        –      –      –      –          0.045  0.052  0.048  0.039

λ = 1           T̂_{n1}^φ                        T̂_{n2}^φ
  β1 = 0.1      0.042  0.045  0.047  0.040      0.042  0.054  0.059  0.053
  β1 = 0.5      0.041  0.044  0.032  0.046      0.065  0.045  0.044  0.049
  β1 = 0.7      0.046  0.052  0.057  0.055      0.057  0.048  0.043  0.043
  β1 = 0.9      0.042  0.051  0.038  0.040      0.058  0.056  0.047  0.041
  β1 = 1        –      –      –      –          0.048  0.045  0.041  0.047

Table 3
Empirical powers of T̂_{n1}^φ and T̂_{n2}^φ with λ = −1/2 and 0 (columns: n = 100, 300, 500, 1000 for each statistic).

λ = −1/2        T̂_{n1}^φ                        T̂_{n2}^φ
  β1 = 0.1      0.336  0.739  0.921  0.999      0.326  0.753  0.926  0.998
  β1 = 0.5      0.331  0.726  0.940  0.998      0.314  0.727  0.929  0.999
  β1 = 0.7      0.342  0.729  0.931  0.998      0.314  0.761  0.931  0.999
  β1 = 0.9      0.330  0.747  0.931  0.998      0.324  0.750  0.931  0.998
  β1 = 1        –      –      –      –          0.330  0.763  0.941  0.998

λ = 0           T̂_{n1}^φ                        T̂_{n2}^φ
  β1 = 0.1      0.291  0.708  0.926  1          0.312  0.747  0.935  0.998
  β1 = 0.5      0.289  0.725  0.934  0.997      0.312  0.756  0.947  0.999
  β1 = 0.7      0.275  0.745  0.928  0.999      0.312  0.737  0.930  0.998
  β1 = 0.9      0.290  0.742  0.941  0.999      0.308  0.726  0.917  0.999
  β1 = 1        –      –      –      –          0.277  0.749  0.912  0.999


Table 4
Empirical powers of T̂_{n1}^φ and T̂_{n2}^φ with λ = 2/3 and 1 (columns: n = 100, 300, 500, 1000 for each statistic).

λ = 2/3         T̂_{n1}^φ                        T̂_{n2}^φ
  β1 = 0.1      0.301  0.758  0.928  0.996      0.275  0.721  0.918  0.998
  β1 = 0.5      0.322  0.719  0.919  0.998      0.292  0.743  0.934  0.999
  β1 = 0.7      0.281  0.719  0.936  1          0.285  0.728  0.934  1
  β1 = 0.9      0.269  0.706  0.934  0.998      0.309  0.727  0.937  1
  β1 = 1        –      –      –      –          0.289  0.749  0.939  0.999

λ = 1           T̂_{n1}^φ                        T̂_{n2}^φ
  β1 = 0.1      0.270  0.714  0.919  0.999      0.292  0.726  0.944  0.997
  β1 = 0.5      0.287  0.721  0.918  1          0.314  0.754  0.943  0.998
  β1 = 0.7      0.257  0.708  0.933  0.999      0.265  0.716  0.936  1
  β1 = 0.9      0.265  0.681  0.921  1          0.280  0.755  0.940  1
  β1 = 1        –      –      –      –          0.281  0.723  0.940  0.998

4. Concluding remarks

In this paper, we have studied the φ-divergence test for the normality of AR models. To obtain a test unaffected by parameter estimation, we traced the link between the φ-divergence test and the empirical process, and were able to construct the desired tests. For instance, in handling the unstable model, we modified the normality test into a chi-squared distribution test and were able to successfully discard the nuisance terms of the residual empirical process that prevent its limiting null distribution from converging weakly to a Gaussian process whose covariance structure is independent of the estimators of the model parameters. Although we restrict ourselves to AR models, the results of this work could easily be extended to a larger class of time series models, including ARMA-type models, and to a general class of distributions besides the normal distribution.

Acknowledgment

The first author acknowledges that this work was supported by grant No. R01-2006-000-10545-0 from the Basic Research Program of the Korea Science & Engineering Foundation.

References

[1] N. Aguirre, M. Nikulin, Chi-squared goodness-of-fit test for the family of logistic distributions, Kybernetika (Prague) 30 (1994) 214–222.
[2] H. Akaike, Information theory and an extension of the maximum likelihood principle, in: B.N. Petrov, F. Csaki (Eds.), 2nd Intern. Symposium on Information Theory, Akademiai Kiado, Budapest, 1973, pp. 267–281.
[3] S.M. Ali, S.D. Silvey, A general class of coefficients of divergence of one distribution from another, J. R. Stat. Soc. B 28 (1966) 131–142.
[4] T. Bengtsson, J.E. Cavanaugh, An improved Akaike information criterion for state-space model selection, Comput. Statist. Data Anal. 50 (2006) 2635–2654.
[5] P. Bickel, M. Rosenblatt, On some global measures of the deviations of density function estimates, Ann. Statist. 1 (1973) 1071–1095.
[6] J.E. Cavanaugh, Criteria for linear model selection based on Kullback's symmetric divergence, Aust. N. Z. J. Stat. 46 (2004) 257–274.
[7] N.H. Chan, C.Z. Wei, Limiting distributions of least squares estimates of unstable autoregressive processes, Ann. Statist. 16 (1988) 367–401.
[8] H. Chernoff, E.L. Lehmann, The use of maximum likelihood estimates in χ² tests of goodness of fit, Ann. Math. Statist. 25 (1954) 579–586.
[9] N. Cressie, T.R.C. Read, Multinomial goodness-of-fit tests, J. R. Stat. Soc. B 46 (1984) 440–464.
[10] I. Csiszar, Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten, Publ. Math. Inst. Hungar. Acad. Sci. 8 (1963) 84–108.
[11] J. Durbin, Weak convergence of the sample distribution function when parameters are estimated, Ann. Statist. 1 (1973) 279–290.
[12] C. Huber-Carol, F. Vonta, Frailty models for arbitrarily censored and truncated data, Lifetime Data Anal. 10 (4) (2004) 369–388.
[13] A. Karagrigoriou, K. Mattheou, P. Panayiotou, On generalized tests of fit for multinomial populations, in: V. Rykov, N. Balakrishnan, M. Nikulin (Eds.), Mathematical and Statistical Methods in Reliability: Applications to Medicine, Finance, and Quality Control, Birkhauser, Boston, 2010, pp. 243–254.
[14] S. Lee, S. Na, On the Bickel–Rosenblatt test for first-order autoregressive models, Statist. Probab. Lett. 56 (2001) 23–35.
[15] S. Lee, O. Na, S. Na, On the cusum of squares test for variance change in nonstationary and nonparametric time series models, Ann. Inst. Statist. Math. 55 (2003) 467–485.
[16] S. Lee, M. Taniguchi, Asymptotic theory for ARCH-SM models: LAN and residual empirical processes, Statist. Sinica 15 (2005) 215–234.
[17] S. Lee, C.Z. Wei, On residual empirical processes of stochastic regression models with applications to time series, Ann. Statist. 27 (1999) 237–261.
[18] D.S. Moore, A chi-square statistic with random cell boundaries, Ann. Math. Statist. 42 (1971) 147–156.
[19] D. Morales, L. Pardo, I. Vajda, Asymptotic divergence of estimates of discrete distributions, J. Statist. Plann. Inference 48 (1995) 347–369.
[20] M. Rosenblatt, Gaussian and Non-Gaussian Linear Time Series and Random Fields, Springer, New York, 2000.
[21] A.-K. Seghouane, Vector autoregressive model order selection from finite samples using Kullback's symmetric divergence, IEEE Trans. Circuits Syst. 53 (10) (2006) 2327–2335.
[22] J. Shang, Selection criteria based on Monte Carlo simulation and cross validation in mixed models, Far East J. Theor. Statist. 25 (2008) 51–72.
[23] J. Zhang, Powerful goodness-of-fit tests based on the likelihood ratio, J. R. Stat. Soc. B 64 (2) (2002) 281–294.
[24] K. Zografos, K. Ferentinos, T. Papaioannou, φ-divergence statistics: sampling properties, multinomial goodness of fit and divergence tests, Comm. Statist. Theory Methods 19 (1990) 1785–1802.


[14] S. Lee, S. Na, On the Bickel–Rosenblatt test for first-order autoregressive models, Statist. Probab. Lett. 56 (2001) 23–35. [15] S. Lee, O. Na, S. Na, On the cusum of squares test for variance change in nonstationary and nonparametric time series models, Ann. Inst. Statist. Math. 55 (2003) 467–485. [16] S. Lee, M. Taniguchi, Asymptotic theory for ARCH-SM models: LAN and residual empirical processes, Statist. Sinica 15 (2005) 215–234. [17] S. Lee, C.Z. Wei, On residual empirical processes of stochastic regression models with applications to time series, Ann. Statist. 27 (1999) 237–261. [18] D.S. Moore, A chi-square statistic with random cell boundaries, Ann. Math. Statist. 42 (1971) 147–156. [19] D. Morales, L. Pardo, I. Vajda, Asymptotic divergence of estimates of discrete distributions, J. Statist. Plann. Inference 48 (1995) 347–369. [20] M. Rosenblatt, Gaussian and Non-Gaussian Linear Time Series and Random Fields, Springer, New York, 2000. [21] A.-K. Seghouane, Vector autoregressive model order selection from finite samples using Kullback’s symmetric divergence, IEEE Trans. Circuits Syst. 53 (10) (2006) 2327–2335. [22] J. Shang, Selection criteria based on Monte Carlo simulation and cross validation in mixed models, Far East J. of Theor. Statist. 25 (2008) 51–72. [23] J. Zhang, Powerful goodness-of-fit tests based on likelihood ratio, J. R. Statist. Soc. B 64 (2) (2002) 281–294. [24] K. Zografos, K. Ferentinos, T. Papaioannou, φ -divergence statistics: sampling properties, multinomial goodness of fit and divergence tests, Comm. Statist. Theory Methods 19 (1990) 1785–1802.