
Strong approximation for RCA(1) time series with applications

Alexander Aue

Universität zu Köln, Mathematisches Institut, Weyertal 86–90, D-50931 Köln, Germany
E-mail address: [email protected] (A. Aue).

Statistics & Probability Letters 68 (2004) 369–382. Received 1 December 2003; received in revised form 1 March 2004; available online 18 May 2004. doi:10.1016/j.spl.2004.04.007

Abstract

In this paper, we derive a strong invariance principle for the partial sums of RCA(1) random variables. An application yields asymptotic tests for a change in the mean of the observations, both for sequential and a posteriori procedures based on CUSUMs.

MSC: primary 62M10; secondary 60F17

Keywords: RCA time series; Strong invariance principles; CUSUM test; Sequential procedure; A posteriori tests

1. Introduction

Originally, random coefficient autoregressive (shortly RCA) time series were invented in the context of random perturbations of dynamical systems, but they are now used in a variety of applications, for example in finance and biology (cf. Tong, 1990). RCA time series are generalizations of autoregressive time series, since they allow for randomly disturbed coefficients as well. We shall focus our attention on RCA time series of order one here. A sequence of random variables $\{X_n\}$ is called an RCA(1) time series if it satisfies the equations
$$X_n = (\varphi + b_n)X_{n-1} + \varepsilon_n, \quad n \in \mathbb{Z}, \tag{1.1}$$
where $\mathbb{Z}$ denotes the set of integers and

(i) the $\begin{pmatrix} b_n \\ \varepsilon_n \end{pmatrix}$ are independent, identically distributed random vectors with mean $\begin{pmatrix} 0 \\ 0 \end{pmatrix}$ and covariance matrix $\begin{pmatrix} \omega^2 & 0 \\ 0 & \sigma^2 \end{pmatrix}$;

(ii) $\varphi^2 + \omega^2 < 1$.
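Since the recursion (1.1) is explicit, RCA(1) paths are easy to simulate. The following minimal Python sketch is an illustration and not part of the paper; Gaussian noise and the helper name simulate_rca1 are my own choices, as (i) and (ii) do not prescribe a distribution. It is reused in later illustrations.

```python
import numpy as np

def simulate_rca1(n, phi, omega, sigma, burn_in=500, rng=None):
    """Simulate X_t = (phi + b_t) X_{t-1} + eps_t with independent
    b_t ~ N(0, omega^2) and eps_t ~ N(0, sigma^2)."""
    if phi**2 + omega**2 >= 1:
        raise ValueError("condition (ii) requires phi^2 + omega^2 < 1")
    rng = np.random.default_rng(rng)
    b = rng.normal(0.0, omega, n + burn_in)
    eps = rng.normal(0.0, sigma, n + burn_in)
    x = np.empty(n + burn_in)
    x[0] = eps[0]
    for t in range(1, n + burn_in):
        x[t] = (phi + b[t]) * x[t - 1] + eps[t]
    return x[burn_in:]  # discard the burn-in so the path is nearly stationary
```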


The sequences $\{b_n\}$ and $\{\varepsilon_n\}$, respectively, are called (white) noise. According to Nicholls and Quinn (1982), (ii) is a necessary and sufficient condition for the second-order stationarity of $\{X_n\}$; together with (i), it also assures strict stationarity. Moreover, Feigin and Tweedie (1985) showed that $EX_n^{2k} < \infty$ for some $k \geq 1$ if the moments of the noise sequences satisfy
$$E\varepsilon_n^{2k} < \infty \quad\text{and}\quad E(\varphi + b_n)^{2k} < 1$$
for the same $k$. Now, the following properties are immediate consequences of the previous remarks.

Lemma 1.1. Let $\{X_n\}$ be an RCA(1) time series satisfying (i) and (ii), and let $\gamma_X$ denote its covariance function. Then,
$$EX_n = 0, \qquad EX_n^2 = \frac{\sigma^2}{1-\varphi^2-\omega^2} \qquad\text{and}\qquad \gamma_X(n) = \frac{\varphi^{|n|}\sigma^2}{1-\varphi^2-\omega^2}$$
for all $n \in \mathbb{Z}$.

Proof. Using (1.1), easy computations give the results. □

The paper is organized as follows. In Section 2, we state the strong invariance principle for the partial sums obtained from $\{X_n\}$. In Section 3, a sequential procedure for detecting a change in the mean is introduced which falls back on a paper of Chu et al. (1996), while in Section 4 a posteriori tests are considered. Finally, the proofs of the theorems and their corollaries are given in Section 5.
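As a quick empirical check of Lemma 1.1 (an aside, not part of the paper), the simulate_rca1 sketch from above can be used to compare sample autocovariances with $\gamma_X$:

```python
import numpy as np

phi, omega, sigma = 0.3, 0.4, 1.0
x = simulate_rca1(200_000, phi, omega, sigma, rng=7)
denom = 1.0 - phi**2 - omega**2
for lag in range(4):
    # empirical autocovariance vs. gamma_X(lag) = phi^|lag| sigma^2 / (1 - phi^2 - omega^2)
    emp = np.mean(x[lag:] * x[: len(x) - lag])  # EX_n = 0, so no centering needed
    print(lag, round(emp, 3), round(phi**lag * sigma**2 / denom, 3))
```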

2. A strong invariance principle

Strong approximations have played a fundamental role in probability and statistics ever since the milestone papers of Strassen (1964) and Komlós et al. (1975, 1976). Their results for partial sums of independent and identically distributed random variables have been extended to various dependence concepts, for example to linear processes (cf. Horváth, 1997), strong mixing sequences (cf. Kuelbs and Philipp, 1980) or martingale difference arrays (cf. Eberlein, 1986). We are going to verify a strong invariance principle for the partial sums of an RCA(1) time series $\{X_n\}$ here. Let $\mathbb{N}$ denote the set of positive integers and $\mathbb{N}_0 = \mathbb{N}\cup\{0\}$. Set
$$S_n(m) = X_{m+1} + \cdots + X_{m+n}, \quad m\in\mathbb{N}_0,\ n\in\mathbb{N}, \tag{2.1}$$
where we abbreviate $S_n(0) = S_n$. Define furthermore
$$\mathcal{F}_m = \sigma(b_n, \varepsilon_n : n \leq m), \quad m\in\mathbb{Z}, \tag{2.2}$$
as the filtration generated by the noise sequences. Throughout the paper, we assume the underlying probability space to be rich enough that both $\{X_n\}$ and the approximating Wiener process can be defined on it. Then, we can prove the following theorem.


Theorem 2.1 (strong invariance for RCA(1) time series). Let $\{X_n\}$ be an RCA(1) time series satisfying (i), (ii) and let
$$E|\varepsilon_1|^{\kappa} < \infty \quad\text{and}\quad E|\varphi + b_1|^{\kappa} < 1 \tag{2.3}$$
for some $\kappa > 2$. Then, there exists a Wiener process $\{W(t)\}_{t\geq 0}$ such that
$$S_t - \sigma_S W(t) = O(t^{1/\nu}) \quad\text{a.s.}$$
as $t\to\infty$ for some $\nu > 2$, where
$$\sigma_S^2 = \frac{\sigma^2}{1-\varphi^2-\omega^2}\,\frac{1+\varphi}{1-\varphi}. \tag{2.4}$$

The proof of Theorem 2.1 is postponed to Section 5.1.
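As a numerical aside (not part of the paper), Theorem 2.1 implies that $\mathrm{Var}(S_n)/n$ approaches the long-run variance $\sigma_S^2$ of (2.4). A Monte Carlo sanity check using the hypothetical simulate_rca1 helper from Section 1:

```python
import numpy as np

def sigma_S2(phi, omega, sigma):
    # long-run variance of the partial sums, Eq. (2.4)
    return sigma**2 / (1 - phi**2 - omega**2) * (1 + phi) / (1 - phi)

phi, omega, sigma, n, reps = 0.3, 0.4, 1.0, 2_000, 1_000
rng = np.random.default_rng(1)
# Var(S_n)/n over independent replications should be close to sigma_S^2
sn = np.array([simulate_rca1(n, phi, omega, sigma, rng=rng).sum()
               for _ in range(reps)])
print(sn.var() / n, sigma_S2(phi, omega, sigma))
```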

3. A sequential procedure

Recently, there have been a number of papers which consider sequential procedures for testing parameter changes. For example, Chu et al. (1996) argue convincingly that nowadays, in many economic applications, data come in steadily and are therefore cheaply obtainable. Hence, they gave a test procedure which is a simplification of Wald's sequential probability ratio test (cf. Wald, 1947) and takes into account that monitoring is cost free under the null hypothesis. Their results were picked up by Horváth et al. (2004), who applied a similar sequential procedure to linear models. Following their lines, we consider a change in the mean model for the random variables
$$Y_n = \Delta_m I_{\{n \geq m+k^*\}} + X_n, \quad n\in\mathbb{N},$$
where $\{X_n\}$ is an RCA(1) time series. Moreover, we assume that there is no change in a historical data set of size $m$. We are interested in testing the hypotheses
$$H_0: \Delta_m = 0, \qquad H_A: \Delta_m \neq 0.$$
We stop and reject $H_0$ if $|Q(m,k)|$ exceeds $g(m,k)$ for the first time, i.e. we consider the stopping rule
$$\tau_m = \inf\{k\geq 1 : |Q(m,k)| \geq g(m,k)\}, \qquad \inf\emptyset = \infty,$$
with the CUSUM detector
$$Q(m,k) = \sum_{j=m+1}^{m+k} Y_j - \frac{k}{m}\sum_{j=1}^m Y_j$$
and the boundary function
$$g(m,k) = c\,\sqrt{m}\left(1+\frac{k}{m}\right)\left(\frac{k}{m+k}\right)^{\gamma},$$
where $\gamma\in[0,\frac12)$ and $c = c(\alpha)$.
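In code, the monitoring scheme is a single pass over the incoming observations. The sketch below is mine, not the paper's; the function and argument names are illustrative, the studentization by a variance estimate anticipates Corollary 3.1 below, and the critical value $c = c(\alpha)$ would be obtained from the boundary-crossing probability in Theorem 3.1 below.

```python
import numpy as np

def monitor(y_hist, y_stream, c, gamma, sigma_S_hat):
    """Sequential CUSUM monitoring; returns the stopping time tau_m = k of the
    first boundary crossing, or None if the stream ends without rejection."""
    m = len(y_hist)
    hist_mean = float(np.mean(y_hist))
    q = 0.0                                # running detector Q(m, k)
    for k, y in enumerate(y_stream, start=1):
        q += y - hist_mean                 # Q(m,k) = sum_{j>m} Y_j - (k/m) sum_{j<=m} Y_j
        g = c * np.sqrt(m) * (1 + k / m) * (k / (m + k)) ** gamma
        if abs(q) >= sigma_S_hat * g:      # studentized crossing, cf. Corollary 3.1 below
            return k
    return None                            # inf of the empty set: tau_m = infinity
```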


We can now derive the following limit theorems for the test statistic under the null and the alternative hypothesis.

Theorem 3.1 (asymptotic under the null hypothesis). Let $\{X_n\}$ be an RCA(1) time series satisfying (i), (ii) and let $E|\varepsilon_1|^{\kappa} < \infty$ and $E|\varphi+b_1|^{\kappa} < 1$ for some $\kappa>2$. Then, under $H_0$,
$$\lim_{m\to\infty} P\left(\frac{1}{\sigma_S}\sup_{k\geq1}\frac{|Q(m,k)|}{g(m,k)}\leq 1\right) = P\left(\sup_{0\leq t\leq1}\frac{|\hat W(t)|}{t^{\gamma}}\leq c\right),$$
where $\{\hat W(t)\}_{t\in[0,1]}$ denotes a Wiener process and $\sigma_S$ is given in (2.4).

Theorem 3.2 (asymptotic under the alternative). Let $\{X_n\}$ be an RCA(1) time series satisfying (i), (ii) and let $E|\varepsilon_1|^{\kappa}<\infty$ and $E|\varphi+b_1|^{\kappa}<1$ for some $\kappa>2$. If $\Delta_m = \Delta$ and $k^* = o(m)$, then, under $H_A$,
$$\frac{1}{\sigma_S}\sup_{k\geq1}\frac{|Q(m,k)|}{g(m,k)} \overset{P}{\longrightarrow} \infty$$
as $m\to\infty$, where $\sigma_S$ is given in (2.4).

The proofs of Theorems 3.1 and 3.2 are given in Section 5.2. Our next goal is to replace the variance parameter $\sigma_S^2$ of the approximating Wiener process $\{W(t)\}$ in Theorem 2.1 by a suitable estimator $\hat\sigma^2_{S,m}$ ($m\in\mathbb{N}$), which is obtained by plugging in consistent estimators of the parameters of the RCA(1) time series.

Lemma 3.1. Let $\{X_n\}$ be an RCA(1) time series satisfying (i), (ii) and for every $m\in\mathbb{N}$ let $\hat\varphi_m$, $\hat\sigma_m^2$ and $\hat\omega_m^2$ be (weakly) consistent estimators of the parameters $\varphi$, $\sigma^2$ and $\omega^2$, respectively. Then,
$$\hat\sigma^2_{S,m} = \frac{\hat\sigma_m^2}{1-\hat\varphi_m^2-\hat\omega_m^2}\,\frac{1+\hat\varphi_m}{1-\hat\varphi_m} \overset{P}{\longrightarrow} \sigma_S^2 \tag{3.1}$$
as $m\to\infty$, i.e. $\hat\sigma^2_{S,m}$ is a (weakly) consistent estimator of $\sigma_S^2$.

Proof. The estimator $\hat\sigma^2_{S,m}$ is obtained by applying to $(\hat\varphi_m, \hat\omega_m^2, \hat\sigma_m^2)$ a mapping which is continuous at the true parameter values $(\varphi, \omega^2, \sigma^2)$. Since the latter estimators are (weakly) consistent, the continuous mapping theorem proves (3.1). □

Now, we get the following corollaries of Theorems 3.1 and 3.2.

Corollary 3.1. Let the assumptions of Theorem 3.1 be satisfied. Then, under $H_0$,
$$\lim_{m\to\infty} P\left(\frac{1}{\hat\sigma_{S,m}}\sup_{k\geq1}\frac{|Q(m,k)|}{g(m,k)}\leq 1\right) = P\left(\sup_{0\leq t\leq1}\frac{|\hat W(t)|}{t^{\gamma}}\leq c\right),$$
where $\hat\sigma^2_{S,m}$ is defined in (3.1).


Proof. The assertion follows from Theorem 3.1 and Lemma 3.1. □

Corollary 3.2. Let the assumptions of Theorem 3.2 be satisfied. Then, under $H_A$,
$$\frac{1}{\hat\sigma_{S,m}}\sup_{k\geq1}\frac{|Q(m,k)|}{g(m,k)} \overset{P}{\longrightarrow} \infty$$
as $m\to\infty$, where $\hat\sigma^2_{S,m}$ is defined in (3.1).

Proof. The assertion follows from Theorem 3.2 and Lemma 3.1. □

One way to obtain consistent estimators of $\varphi$, $\sigma^2$ and $\omega^2$ is the application of a two-step procedure which leads to (conditional) least-squares estimators (LSE). Following the lines of Nicholls and Quinn (1982), firstly
$$\hat\varphi_{m,L} = \left(\sum_{i=1}^m X_{i-1}^2\right)^{-1}\sum_{i=1}^m X_{i-1}X_i$$
can be calculated by minimizing the sum
$$\sum_{i=1}^m (X_i - \varphi X_{i-1})^2$$
with respect to $\varphi$. Set $\bar X_m^2 = m^{-1}(X_0^2 + \cdots + X_{m-1}^2)$. Then, in a similar way, we get
$$\hat\omega^2_{m,L} = \left(\sum_{i=1}^m (X_{i-1}^2 - \bar X_m^2)^2\right)^{-1}\sum_{i=1}^m (X_{i-1}^2 - \bar X_m^2)\,(X_i - \hat\varphi_{m,L}X_{i-1})^2,$$
$$\hat\sigma^2_{m,L} = \frac1m\sum_{i=1}^m (X_i - \hat\varphi_{m,L}X_{i-1})^2 - \hat\omega^2_{m,L}\,\bar X_m^2$$
as minimizers (with respect to $\omega^2$ and $\sigma^2$, respectively) of
$$\sum_{i=1}^m \left((X_i - \hat\varphi_{m,L}X_{i-1})^2 - \omega^2 X_{i-1}^2 - \sigma^2\right)^2.$$
Recall that under $H_0$ we have $E((X_i - \varphi X_{i-1})^2\mid\mathcal F_{i-1}) = \omega^2 X_{i-1}^2 + \sigma^2$ for all $i\in\mathbb{Z}$. A code sketch of this two-step procedure is given at the end of this section.

How long does it take until the change-point is detected, i.e. how big is the difference between the stopping time of the test procedure and the true change-point given by the underlying model? We will answer this question by giving the limit distribution of the stopping rule $\tau_m$. Thereby, we will follow closely the lines of Aue and Horváth (2004) and establish the following conditions on $\Delta_m$ and $k^*$: let
$$\Delta_m \to 0, \qquad \sqrt m\,|\Delta_m| \to \infty \tag{3.2}$$
and
$$k^* = O(m^{\theta}) \quad\text{with some}\quad 0\leq\theta<\frac{1-2\gamma}{2(1-\gamma)} \tag{3.3}$$
as $m\to\infty$. Recall that $\gamma\in[0,\frac12)$.
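The two-step LSE described above translates directly into code. A minimal sketch (the helper names are mine; the sample is assumed to be stored as $X_0,\ldots,X_m$):

```python
import numpy as np

def lse_rca1(x):
    """Two-step conditional least-squares estimates of (phi, omega^2, sigma^2)
    from a sample x = (X_0, ..., X_m) of an RCA(1) series."""
    x = np.asarray(x, dtype=float)
    xlag, xnow = x[:-1], x[1:]                 # X_{i-1} and X_i for i = 1, ..., m
    phi = np.dot(xlag, xnow) / np.dot(xlag, xlag)
    resid2 = (xnow - phi * xlag) ** 2          # (X_i - phi_hat X_{i-1})^2
    z = xlag**2 - np.mean(xlag**2)             # X_{i-1}^2 - Xbar_m^2
    omega2 = np.dot(z, resid2) / np.dot(z, z)
    sigma2 = np.mean(resid2) - omega2 * np.mean(xlag**2)
    return phi, omega2, sigma2

def sigma_S2_hat(phi, omega2, sigma2):
    # plug-in estimator (3.1) of the long-run variance sigma_S^2
    return sigma2 / (1 - phi**2 - omega2) * (1 + phi) / (1 - phi)
```

On a simulated series, lse_rca1 recovers $(\varphi, \omega^2, \sigma^2)$ up to sampling error, and sigma_S2_hat then supplies the studentization used in Corollaries 3.1, 3.2 and 4.1.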


Theorem 3.3 (limit distribution of the stopping rule). Let $\{X_n\}$ be an RCA(1) time series satisfying (i), (ii) and let conditions (3.2) and (3.3) be satisfied. If $E|\varepsilon_1|^{\kappa}<\infty$ and $E|\varphi+b_1|^{\kappa}<1$ for some $\kappa>2$, then, under $H_A$,
$$\lim_{m\to\infty} P\left(\frac{\tau_m - a_m}{b_m}\leq x\right) = \Phi(x),$$
where $\Phi$ denotes the standard normal distribution function. Moreover,
$$a_m = \left(\frac{c\,m^{1/2-\gamma}}{|\Delta_m|}\right)^{1/(1-\gamma)}, \qquad b_m = \frac{\sqrt{a_m}\,\sigma_S}{(1-\gamma)|\Delta_m|},$$
where $\sigma_S^2$ is given in (2.4).

The proof of Theorem 3.3 will be given in Section 5.2, too.

4. A posteriori tests

Let $Y_1,\ldots,Y_m$ be observations of the random variables
$$Y_n = \Delta_m I_{\{n>k^*\}} + X_n, \quad 1\leq n\leq m,$$
where $\{X_n\}$ is an RCA(1) time series. Instead of a sequential monitoring scheme, we shall apply a test procedure based on a fixed data set of $m$ observations. For a comprehensive review confer Csörgő and Horváth (1997). Again, we are interested in testing the change in the mean hypotheses
$$H_0: \Delta_m = 0, \qquad H_A: \Delta_m \neq 0,\ k^* < m.$$
The test is based on the CUSUM
$$Q(m,k) = S_k - \frac km S_m = \sum_{i=1}^k Y_i - \frac km\sum_{i=1}^m Y_i. \tag{4.1}$$
But instead of (4.1), we will consider the functional versions
$$T_m(t) = S_{[mt]} - tS_m = \sum_{i=1}^{[mt]} Y_i - t\sum_{i=1}^m Y_i, \quad t\in[0,1],$$
where $[\cdot]$ denotes the integer part.
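A direct implementation of the studentized a posteriori statistic, in its usual discrete form $\max_k |S_k - (k/m)S_m|$, might look as follows (a sketch of mine; sigma_S_hat would come from the plug-in estimator (3.1), e.g. via the hypothetical lse_rca1 and sigma_S2_hat helpers of Section 3):

```python
import numpy as np

def cusum_stat(y, sigma_S_hat):
    """Two-sided a posteriori statistic max_k |S_k - (k/m) S_m| / (sqrt(m) sigma_S_hat),
    the discrete version of sup_t |T_m(t)| / (sqrt(m) sigma_S)."""
    y = np.asarray(y, dtype=float)
    m = len(y)
    s = np.cumsum(y)                       # partial sums S_k, k = 1, ..., m
    k = np.arange(1, m + 1)
    return np.max(np.abs(s - (k / m) * s[-1])) / (np.sqrt(m) * sigma_S_hat)

# By Corollary 4.1 below, the statistic converges to sup_t |B(t)| under H0, so
# H0 is rejected at level 0.05 when it exceeds about 1.358, the 95% quantile
# of the Kolmogorov distribution of sup_t |B(t)|.
```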


Theorem 4.1 (asymptotic under the null hypothesis). Let $\{X_n\}$ be an RCA(1) time series satisfying (i), (ii) and let
$$E|\varepsilon_1|^{\kappa} < \infty \quad\text{and}\quad E|\varphi+b_1|^{\kappa} < 1$$
for some $\kappa>2$. Then, under $H_0$,
$$\sup_{t\in[0,1]}\frac{|T_m(t)|}{\sqrt m\,\sigma_S} \overset{\mathcal D}{\longrightarrow} \sup_{t\in[0,1]}|B(t)| \qquad\text{and}\qquad \sup_{t\in[0,1]}\frac{T_m(t)}{\sqrt m\,\sigma_S} \overset{\mathcal D}{\longrightarrow} \sup_{t\in[0,1]}B(t)$$
as $m\to\infty$, where $\{B(t)\}_{t\in[0,1]}$ denotes a Brownian bridge and $\sigma_S^2$ is defined in (2.4).

The proof of Theorem 4.1 is given in Section 5.3. As in the previous section, $\sigma_S^2$ can be replaced by the consistent estimator $\hat\sigma^2_{S,m}$ defined in (3.1) without losing the convergence results of Theorem 4.1.

Corollary 4.1. Let the assumptions of Theorem 4.1 be satisfied. Then, under $H_0$,
$$\sup_{t\in[0,1]}\frac{|T_m(t)|}{\sqrt m\,\hat\sigma_{S,m}} \overset{\mathcal D}{\longrightarrow} \sup_{t\in[0,1]}|B(t)| \qquad\text{and}\qquad \sup_{t\in[0,1]}\frac{T_m(t)}{\sqrt m\,\hat\sigma_{S,m}} \overset{\mathcal D}{\longrightarrow} \sup_{t\in[0,1]}B(t)$$
as $m\to\infty$, where $\hat\sigma^2_{S,m}$ is defined in (3.1).

Proof. The assertion follows from Theorem 4.1 and Lemma 3.1. □

5. Proofs

The section is organized as follows. In Section 5.1, we provide the proof of the strong invariance principle claimed in Section 2. Section 5.2 contains the proofs for the sequential procedure introduced in Section 3, while the theorems concerned with the a posteriori tests of Section 4 are proved in Section 5.3.

5.1. Proofs of Section 2

Theorem 2.1 is a consequence of the following theorem due to Eberlein (1986), which we state in a simpler modification here, since real random variables (instead of d-dimensional random vectors) suit our purposes. Let $\|\cdot\|_1$ denote the $L^1$-norm.

Theorem 5.1 (Eberlein). Let $\{X_n\}$ be a sequence of real random variables such that
(a) $EX_n = 0$ for all $n\in\mathbb{N}$;
(b) $\|E(S_n(m)\mid\mathcal F_m)\|_1 = O(n^{1/2-\theta})$ for some $\theta\in(0,\frac12)$;
(c) there exists a constant $\sigma_W$ such that, uniformly in $m\in\mathbb{N}_0$,
$$\frac1n ES_n^2(m) - \sigma_W^2 = O(n^{-\rho})$$
as $n\to\infty$ for some $\rho>0$;


(d) there exists a $\gamma>0$ such that, uniformly in $m\in\mathbb{N}_0$,
$$\|E(S_n^2(m)\mid\mathcal F_m) - ES_n^2(m)\|_1 = O(n^{1-\gamma})$$
as $n\to\infty$;
(e) there exist a constant $M<\infty$ and $\kappa>2$ such that $E|X_n|^{\kappa}\leq M$ for all $n\in\mathbb{N}$.

Then, there exists a Wiener process $\{W(t)\}_{t\geq0}$ such that
$$\sum_{n=1}^t X_n - \sigma_W W(t) = O(t^{1/\nu}) \quad\text{a.s.}$$
as $t\to\infty$ for some $\nu>2$.

Proof. See Theorem 1 in Eberlein (1986). □

We start with the proof of Theorem 2.1 and firstly obtain the order of the conditional expectation of $S_n(m)$. Here and in the sequel, we shall use the following property of conditional expectations: if $\mathcal C_1\subset\mathcal C_2$ are $\sigma$-fields, then $E(X\mid\mathcal C_1) = E(E(X\mid\mathcal C_2)\mid\mathcal C_1)$.

Lemma 5.1. Let $\{X_n\}$ be an RCA(1) time series satisfying (i) and (ii). Then,
$$\|E(S_n(m)\mid\mathcal F_m)\|_1 \leq \frac{|\varphi|\,E|X_0|}{1-|\varphi|}$$
for all $m\in\mathbb{N}_0$ and $n\in\mathbb{N}$.

Proof. Firstly,
$$E(S_n(m)\mid\mathcal F_m) = X_m\sum_{i=1}^n \varphi^i = \frac{\varphi X_m(1-\varphi^n)}{1-\varphi} \quad\text{a.s.},$$
since
$$E(X_{m+i}\mid\mathcal F_m) = E(E(X_{m+i}\mid\mathcal F_{m+i-1})\mid\mathcal F_m) = \varphi\,E(X_{m+i-1}\mid\mathcal F_m) = \varphi^i X_m \quad\text{a.s.}$$
for $i=1,\ldots,n$ by iteration. Therefore, we get
$$\|E(S_n(m)\mid\mathcal F_m)\|_1 = E\left|X_m\,\frac{\varphi(1-\varphi^n)}{1-\varphi}\right| \leq \frac{|\varphi|\,E|X_0|}{1-|\varphi|}$$
for all $m\in\mathbb{N}_0$ and $n\in\mathbb{N}$. □

Secondly, we determine the asymptotic variance of $S_n(m)$.

Lemma 5.2. Let $\{X_n\}$ be an RCA(1) time series satisfying (i) and (ii). Then, uniformly in $m\in\mathbb{N}_0$,
$$\frac1n ES_n^2(m) = \frac{\sigma^2}{1-\varphi^2-\omega^2}\,\frac{1+\varphi}{1-\varphi} + O\left(\frac1n\right)$$
as $n\to\infty$.


Proof. It holds that
$$\frac1n E(S_n(m)S_n(m)) = \frac1n E\left(\sum_{k=m+1}^{m+n} X_k^2\right) + \frac2n\sum_{k>l}^{m+n} E(X_kX_l)$$
and
$$\frac1n E\left(\sum_{k=m+1}^{m+n} X_k^2\right) = \frac1n\,n\,\gamma_X(0) = \frac{\sigma^2}{1-\varphi^2-\omega^2},$$
$$\frac2n\sum_{k>l}^{m+n} E(X_kX_l) = \frac2n\sum_{k>l}^{m+n}\gamma_X(k-l) = \frac{2\sigma^2}{1-\varphi^2-\omega^2}\,\frac1n\sum_{i=1}^{n-1} i\,\varphi^{n-i}.$$
Now,
$$\frac1n\sum_{i=1}^{n-1} i\,\varphi^{n-i} = \frac1n\sum_{i=1}^{n-1}(n-i)\varphi^i = \frac{\varphi(1-\varphi^n)}{1-\varphi} - \frac{\varphi}{n}\left(\frac{1-\varphi^n}{1-\varphi}\right)' = \frac{\varphi}{1-\varphi} + O\left(\frac1n\right)$$
as $n\to\infty$, since
$$\left(\frac{1-\varphi^n}{1-\varphi}\right)' \to \frac{1}{(1-\varphi)^2} \qquad\text{and}\qquad \frac{\varphi(1-\varphi^n)}{1-\varphi} \to \frac{\varphi}{1-\varphi}$$

exponentially fast. Combining the three displays above yields
$$\frac1n ES_n^2(m) = \frac{\sigma^2}{1-\varphi^2-\omega^2}\left(1+\frac{2\varphi}{1-\varphi}\right) + O\left(\frac1n\right) = \frac{\sigma^2}{1-\varphi^2-\omega^2}\,\frac{1+\varphi}{1-\varphi} + O\left(\frac1n\right),$$
finishing the proof. □

Finally, we calculate the order of the difference between the variance and the conditional variance of $S_n(m)$.

Lemma 5.3. Let $\{X_n\}$ be an RCA(1) time series satisfying (i) and (ii). Then, uniformly in $m\in\mathbb{N}_0$,
$$\|E(S_n^2(m)\mid\mathcal F_m) - ES_n^2(m)\|_1 = O(1)$$
as $n\to\infty$.

Proof. The proof is given in three steps.
(a) Firstly,
$$E(S_n^2(m)\mid\mathcal F_m) = \sum_{k=m+1}^{m+n} E(X_k^2\mid\mathcal F_m) + 2\sum_{k>l}^{m+n} E(X_kX_l\mid\mathcal F_m) \quad\text{a.s.}$$
We recursively get
$$E(X_{m+i}^2\mid\mathcal F_m) = (\varphi^2+\omega^2)^i X_m^2 + \sigma^2\sum_{j=1}^i (\varphi^2+\omega^2)^{j-1} \quad\text{a.s.}$$


and hence,
$$\sum_{k=m+1}^{m+n} E(X_k^2\mid\mathcal F_m) = \sum_{i=1}^n E(X_{m+i}^2\mid\mathcal F_m) = \frac{(\varphi^2+\omega^2)X_m^2\left(1-(\varphi^2+\omega^2)^n\right)}{1-\varphi^2-\omega^2} + \frac{n\sigma^2}{1-\varphi^2-\omega^2} - \frac{\sigma^2(\varphi^2+\omega^2)\left(1-(\varphi^2+\omega^2)^n\right)}{(1-\varphi^2-\omega^2)^2} \quad\text{a.s.}$$
For $i>j$ we have
$$E(X_{m+i}X_{m+j}\mid\mathcal F_m) = E(E(X_{m+i}X_{m+j}\mid\mathcal F_{m+j})\mid\mathcal F_m) = \varphi^{i-j}E(X_{m+j}^2\mid\mathcal F_m) \quad\text{a.s.}$$
Thus,
$$\sum_{k>l}^{m+n} E(X_kX_l\mid\mathcal F_m) = \sum_{i>j}^n E(X_{m+i}X_{m+j}\mid\mathcal F_m) = \sum_{i>j}^n \varphi^{i-j}(\varphi^2+\omega^2)^j X_m^2 + \sigma^2\sum_{i>j}^n \varphi^{i-j}\,\frac{1-(\varphi^2+\omega^2)^j}{1-\varphi^2-\omega^2} \quad\text{a.s.}$$
(b) It holds that
$$ES_n^2(m) = \frac{n\sigma^2}{1-\varphi^2-\omega^2} + 2\sigma^2\sum_{i>j}^n \frac{\varphi^{i-j}}{1-\varphi^2-\omega^2}.$$
(c) From (a) and (b),
$$E(S_n^2(m)\mid\mathcal F_m) - ES_n^2(m) = \sum_{i=1}^n E(X_{m+i}^2\mid\mathcal F_m) - \sum_{i=1}^n EX_{m+i}^2 + 2\sum_{i>j}^n E(X_{m+i}X_{m+j}\mid\mathcal F_m) - 2\sum_{i>j}^n E(X_{m+i}X_{m+j}) = D_1 + D_2 \quad\text{a.s.}$$
Now,
$$D_1 \to \left(X_m^2 - \frac{\sigma^2}{1-\varphi^2-\omega^2}\right)\frac{\varphi^2+\omega^2}{1-\varphi^2-\omega^2} = O(1) \quad\text{a.s.}$$
as $n\to\infty$. Moreover, $D_2 = O(1)$ if $\varphi=0$. If $\varphi\neq0$, we get
$$D_2 = 2\left(X_m^2 - \frac{\sigma^2}{1-\varphi^2-\omega^2}\right)\sum_{i>j}^n \varphi^{i-j}(\varphi^2+\omega^2)^j = O(1) \quad\text{a.s.}$$


as $n\to\infty$, as follows. Set
$$I = \sum_{i>j}^n \varphi^{i-j}(\varphi^2+\omega^2)^j = \sum_{j=1}^n\sum_{i=j+1}^n \varphi^{i-j}(\varphi^2+\omega^2)^j$$
and observe that
$$\sum_{i=j+1}^n \varphi^{i-j}(\varphi^2+\omega^2)^j = \left(\frac{\varphi^2+\omega^2}{\varphi}\right)^j\,\frac{\varphi^{j+1}-\varphi^{n+1}}{1-\varphi}.$$
This yields
$$I = \frac{\varphi}{1-\varphi}\sum_{j=1}^n(\varphi^2+\omega^2)^j - \frac{1}{1-\varphi}\sum_{j=1}^n(\varphi^2+\omega^2)^j\varphi^{n-j+1} = I_1 + I_2$$
with
$$I_1 \to \frac{\varphi(\varphi^2+\omega^2)}{(1-\varphi)(1-\varphi^2-\omega^2)}$$
as $n\to\infty$ and
$$I_2 = O\left(\frac{1}{1-\varphi}\sum_{j=1}^n |\varphi|^{n-j+1}\right) = O\left(\frac{1}{1-\varphi}\sum_{j=1}^n |\varphi|^j\right) = O(1),$$
since $\varphi^2+\omega^2<1$ and the sum converges to $|\varphi|(1-|\varphi|)^{-1}$. □

Thus, we are now in a position to prove Theorem 2.1.

Proof of Theorem 2.1. Applying Feigin and Tweedie (1985), from (2.3) we immediately get that $E|X_1|^{\kappa}<\infty$. Hence, the assertion follows from Theorem 5.1 by the use of Lemmas 5.1–5.3. □

5.2. Proofs of Section 3

Theorem 3.1 is traced back to a corresponding theorem in Horváth et al. (2004), which gives the asymptotics under the null hypothesis of no change in the regression parameters of a linear model.

Proof of Theorem 3.1. (a) From Theorem 2.1, for all $k\in\mathbb{N}$ we get
$$\sum_{i=m+1}^{m+k} X_i = \sigma_S\left(W(m+k)-W(m)\right) + O\left((m+k)^{1/\nu} + m^{1/\nu}\right) \quad\text{a.s.}$$
and
$$\sum_{i=1}^m X_i = \sigma_S\,W(m) + O(m^{1/\nu}) \quad\text{a.s.}$$


as $m\to\infty$. Set $\tilde W_m(k) = W(m+k)-W(m)$. Then,
$$\sup_{k\geq1}\frac{1}{g(m,k)}\left|\sum_{i=m+1}^{m+k}X_i - \frac km\sum_{i=1}^m X_i - \sigma_S\left(\tilde W_m(k) - \frac km W(m)\right)\right| = O\left(\sup_{k\geq1}\frac{(m+k)^{1/\nu}+m^{1/\nu}+\frac km\,m^{1/\nu}}{m^{1/2}\left(1+\frac km\right)\left(\frac{k}{m+k}\right)^{\gamma}}\right) \quad\text{a.s.}$$
as $m\to\infty$. Now,
$$\sup_{1\leq k\leq m}\frac{(m+k)^{1/\nu}+m^{1/\nu}+\frac km\,m^{1/\nu}}{m^{1/2}\left(1+\frac km\right)\left(\frac{k}{m+k}\right)^{\gamma}} \leq \sup_{1\leq k\leq m}\frac{(m+k)^{1/\nu}+2m^{1/\nu}}{m^{1/2}\left(1+\frac km\right)\left(\frac{k}{m+k}\right)^{\gamma}} \leq (2^{1/\nu}+2)\,m^{1/\nu-1/2}\sup_{1\leq k\leq m}\left(\frac{k}{m+k}\right)^{1-\gamma} \leq (2^{1/\nu}+2)\,m^{1/\nu-1/2}\,2^{\gamma-1} = o(1)$$
as $m\to\infty$, since $\nu>2$. Similarly,
$$\sup_{m<k<\infty}\frac{(m+k)^{1/\nu}+m^{1/\nu}+\frac km\,m^{1/\nu}}{m^{1/2}\left(1+\frac km\right)\left(\frac{k}{m+k}\right)^{\gamma}} = O\left(m^{1/\nu-1/2}\right) = o(1)$$
as $m\to\infty$. Thus, we have proved
$$\sup_{k\geq1}\frac{1}{g(m,k)}\left|\sum_{i=m+1}^{m+k}X_i - \frac km\sum_{i=1}^m X_i - \sigma_S\left(\tilde W_m(k)-\frac km W(m)\right)\right| = o(1) \quad\text{a.s.}$$
as $m\to\infty$.
(b) The distribution of $\{\tilde W_m(k)\}$ is independent of $m$. Hence,
$$\sup_{k\geq1}\frac{\left|\tilde W_m(k)-\frac km W(m)\right|}{g(m,k)} \overset{\mathcal D}{=} \sup_{k\geq1}\frac{\left|\tilde W(k)-\frac km W(m)\right|}{g(m,k)},$$
where $\{\tilde W(t)\}$ is a Wiener process independent of $\{W(t)\}$. Now, we are exactly at the starting point of the proof of Theorem 2.1 in Horváth et al. (2004). Following their arguments, our proof is complete. □

We proceed with the asymptotic behaviour under the alternative hypothesis.


Proof of Theorem 3.2. Set $\tilde k = m+k^*$. Then,
$$Q(m,\tilde k) = \sum_{i=m+1}^{m+\tilde k} X_i - \frac{\tilde k}{m}\sum_{i=1}^m X_i + \Delta_m(\tilde k - k^* + 1)$$
with $\Delta_m = \Delta\neq0$ under $H_A$. By Theorem 3.1, we have
$$\frac{1}{g(m,\tilde k)}\left(\sum_{i=m+1}^{m+\tilde k}X_i - \frac{\tilde k}{m}\sum_{i=1}^m X_i\right) = O_P(1)$$
as $m\to\infty$. Since $k^* = o(m)$ by assumption,
$$\frac{|\Delta_m|(\tilde k-k^*+1)}{g(m,\tilde k)} \sim \frac{\sqrt m\,|\Delta|}{c\,2^{1-\gamma}} \to \infty$$
as $m\to\infty$, where $a_n\sim b_n$ means $a_n/b_n\to1$ as $n\to\infty$. This finishes the proof. □

Finally, we consider the limit distribution of the delay time. It turns out that, under our assumptions, we can apply a theorem of Aue and Horváth (2004).

Proof of Theorem 3.3. Since
$$\sum_{i=1}^m X_i = \sigma_S\,W(m) + O(m^{1/\nu}) \quad\text{a.s.}$$
as $m\to\infty$ by Theorem 2.1, we also have that
$$\sum_{i=1}^m X_i = O_P\left(\sqrt m\right)$$
as $m\to\infty$. Therefore, the assumptions of Theorem 2.1 in Aue and Horváth (2004) are satisfied and the assertion follows. □

5.3. Proofs of Section 4

The final subsection contains the proof of Theorem 4.1.

Proof of Theorem 4.1. It follows from Theorem 2.1 that there exist a Wiener process $\{W(t)\}$ and a $\nu>2$ such that
$$\sum_{i=1}^k X_i = \sigma_S\,W(k) + O(k^{1/\nu}) \quad\text{a.s.}$$
as $k\to\infty$. Hence,
$$\sup_{t\in[0,1]}\left|\frac{|T_m(t)|}{\sqrt m} - \frac{\sigma_S}{\sqrt m}\,\left|W([mt])-tW(m)\right|\right| = O\left(\sup_{t\in[0,1]}\frac{[mt]^{1/\nu}+t\,m^{1/\nu}}{\sqrt m}\right) = o(1) \quad\text{a.s.}$$


as $m\to\infty$. Now,
$$\sup_{t\in[0,1]}\left|\frac{\left|W([mt])-tW(m)\right|}{\sqrt m} - \frac{\left|W(mt)-tW(m)\right|}{\sqrt m}\right| \leq \sup_{t\in[0,1]}\frac{\left|W([mt])-W(mt)\right|}{\sqrt m} = o_P(1)$$
as $m\to\infty$, since the sample paths of Wiener processes are continuous with probability one. Finally, the scale transformation yields
$$\left\{\frac{1}{\sqrt m}\left(W(mt)-tW(m)\right)\right\} \overset{\mathcal D}{=} \left\{W(t)-tW(1)\right\} \overset{\mathcal D}{=} \left\{B(t)\right\},$$
where $\{B(t)\}_{t\in[0,1]}$ denotes a Brownian bridge. The second claim is proved in a similar way and hence is omitted. □

Acknowledgements

The author thanks Professor Lajos Horváth from the University of Utah for several fruitful discussions and comments, and Professor Josef Steinebach for financial support by a University of Cologne travel grant.

References

Aue, A., Horváth, L., 2004. Delay time in sequential detection of change. Statist. Probab. Lett. 67, 221–231.
Chu, C.-S., Stinchcombe, M., White, H., 1996. Monitoring structural change. Econometrica 64, 1045–1065.
Csörgő, M., Horváth, L., 1997. Limit Theorems in Change-Point Analysis. Wiley, Chichester.
Eberlein, E., 1986. On strong invariance principles under dependence assumptions. Ann. Probab. 14, 260–270.
Feigin, P.D., Tweedie, R.L., 1985. Random coefficient autoregressive processes: a Markov chain analysis of stationarity and finiteness of moments. J. Time Ser. Anal. 6, 1–14.
Horváth, L., 1997. Detecting changes in linear sequences. Ann. Inst. Statist. Math. 49, 271–283.
Horváth, L., Hušková, M., Kokoszka, P., Steinebach, J., 2004. Monitoring changes in linear models. J. Statist. Plann. Inference, to appear.
Komlós, J., Major, P., Tusnády, G., 1975. An approximation of partial sums of independent r.v.'s and the sample d.f. I. Z. Wahrsch. Verw. Gebiete 32, 111–131.
Komlós, J., Major, P., Tusnády, G., 1976. An approximation of partial sums of independent r.v.'s and the sample d.f. II. Z. Wahrsch. Verw. Gebiete 34, 33–58.
Kuelbs, J., Philipp, W., 1980. Almost sure invariance principles for partial sums of mixing B-valued random variables. Ann. Probab. 8, 1003–1036.
Nicholls, D.F., Quinn, B.G., 1982. Random Coefficient Autoregressive Models: An Introduction. Springer, New York.
Strassen, V., 1964. An invariance principle for the law of the iterated logarithm. Z. Wahrsch. Verw. Gebiete 3, 211–226.
Tong, H., 1990. Non-linear Time Series: A Dynamical System Approach. Clarendon, Oxford.
Wald, A., 1947. Sequential Analysis. Wiley, New York.