A note on the residual empirical process in autoregressive models


Statistics & Probability Letters 32 (1997) 405-411

Sangyeol Lee, Department of Statistics, Sookmyung Women's University, Seoul, 140-742, South Korea

Received December 1995; revised March 1996

Abstract

Suppose that $\{X_t\}$ is the stationary AR(p) process of the form $X_t - \mu = \beta_1(X_{t-1} - \mu) + \cdots + \beta_p(X_{t-p} - \mu) + \varepsilon_t$, where $\{\varepsilon_t\}$ is a sequence of i.i.d. random variables with mean zero and finite variance $\sigma^2$. In this paper, we study the asymptotic behavior of the empirical process computed from the least-squares residuals, for which estimators of $\mu$ and $\sigma^2$ are substituted. Due to the estimation of the location and scale parameters, the limiting process of the residual empirical process is shown to be a Gaussian process which is not a standard Brownian bridge. The result is applicable to goodness-of-fit tests of the errors in autoregressive processes.

Keywords: Stationary AR(p) process; Goodness-of-fit tests; Residual empirical process; Gaussian process

1. Introduction

Let $\{X_t : t = 0, \pm 1, \pm 2, \ldots\}$ be the stationary autoregressive process of the form

$$X_t - \mu = \beta_1(X_{t-1} - \mu) + \cdots + \beta_p(X_{t-p} - \mu) + \varepsilon_t, \qquad (1.1)$$

where $\mu$ is an unknown constant and $\{\varepsilon_t : t = 0, \pm 1, \pm 2, \ldots\}$ are i.i.d. random variables with mean zero and variance $\sigma^2 \in (0, \infty)$. In time series, a great number of techniques have been developed for stationary processes under the Gaussian assumption. An interesting property under the Gaussian assumption is that if $Y_t = \sum_{j=0}^{\infty} \psi_j \delta_{t-j}$, $\sum_{j=0}^{\infty} |\psi_j| < \infty$, is a linear process where $\{\delta_t\}$ are i.i.d. random variables with $E\delta_t = 0$ and $E\delta_t^2 \in (0, \infty)$, then $\{Y_t\}$ is Gaussian if and only if $\{\delta_t\}$ is Gaussian. However, it has been noted by many authors that actual time series models are not always Gaussian. See, for example, Bell and Smith (1986) and Anděl (1988, 1989), which deal with nonnegative autoregressive models. See also Rosenblatt (1985, p. 206). For the reasons above, it can be an issue to test whether a time series is Gaussian or not. Since the model (1.1) can be expressed as a linear process, the Gaussian assumption for $\{X_t\}$ can be checked by testing the

[Footnote: This work was supported by Sookmyung Women's University Research Fund in 1996.]

Gaussian behavior of the errors. Therefore, one can consider using the residual empirical process for the Gaussian test. See, for example, Pierce (1985). Actually, the residual empirical process can be used for goodness-of-fit tests of the errors with non-Gaussian distributions as well. Boldin (1982) considered using the residual-based empirical process for testing $H_0 : \varepsilon_t \sim F$, where $F$ is the completely specified distribution of $\varepsilon_t$. Under the assumption that $\mu$ is equal to 0, he showed that the residual empirical process converges weakly to a standard Brownian bridge, and that the Kolmogorov-Smirnov statistic and the Cramér-von Mises statistic can be generated from the residual empirical process. See Koul (1992, Ch. 7) for background on residual empirical processes.

The objective of this paper is to derive the limiting distribution of the residual empirical process in the situation where $\mu$ is not equal to 0 and $\varepsilon_t$ has the distribution $F_0(\cdot/\sigma)$, $\sigma > 0$, $F_0$ completely specified. In i.i.d. and fixed-design regression models, it is well known that the estimation of nuisance parameters affects the asymptotic behavior of the empirical process (cf. Sukhatme, 1972; Pierce and Kopecky, 1979; Durbin, 1973; Shorack and Wellner, 1986, p. 232). Suppose that $\delta_1, \ldots, \delta_n$ are an i.i.d. sample from the normal distribution with mean $\mu$ and variance $\sigma^2$. Let $\hat\mu_n = n^{-1} \sum_{j=1}^n \delta_j$, $\hat\sigma_n^2 = n^{-1} \sum_{j=1}^n (\delta_j - \hat\mu_n)^2$ and $\eta_j = (\delta_j - \mu)/\sigma$. Define

$$J_n(t) = n^{-1/2} \sum_{j=1}^n [\mathbf{1}(\Phi(\eta_j) \le t) - t], \qquad t \in [0, 1],$$

$$\hat J_n(t) = n^{-1/2} \sum_{j=1}^n [\mathbf{1}(\Phi((\delta_j - \hat\mu_n)/\hat\sigma_n) \le t) - t], \qquad t \in [0, 1],$$

where $\Phi$ denotes the standard normal distribution. According to our analysis (cf. Theorem 1), $\hat J_n$ does not have the same limiting distribution as $J_n$. Rather, the limiting distribution of $\hat J_n$ coincides with that of $\tilde J_n$, where

$$\tilde J_n(t) = n^{-1/2} \sum_{j=1}^n \big[\mathbf{1}(\Phi(\eta_j) \le t) - t + \phi(\Phi^{-1}(t))\,\eta_j + 2^{-1}\phi(\Phi^{-1}(t))\,\Phi^{-1}(t)\,(\eta_j^2 - 1)\big],$$

and $\phi = \Phi'$. Consequently, the limiting process of $\hat J_n$ turns out to be the Gaussian process $Z$ in Theorem 1. Pierce (1985) showed that the same result holds for the residual empirical process of a Gaussian autoregressive time series; the residual empirical process in that case has exactly the same limiting distribution as $Z$. In Section 2, a result more general than that of Pierce (1985) is presented (cf. Theorem 1): the underlying distribution of the errors in (1.1) is not necessarily normal. Theorem 1 shows that under some regularity conditions, the residual empirical process converges weakly to a Gaussian process. This limiting process is not a standard Brownian bridge, due to parameter estimation.
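To see the phenomenon numerically, the two processes above can be simulated. The following Python sketch (not from the paper; the sample size, seed, parameter values, and grid are arbitrary choices of this illustration) computes $J_n$ with the true parameters and $\hat J_n$ with the estimated ones for a single normal sample:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)


def phi_cdf(x):
    """Standard normal CDF, elementwise, via the error function."""
    return np.array([0.5 * (1.0 + erf(float(v) / 2 ** 0.5)) for v in x])


def empirical_process(v, t_grid):
    """n^{-1/2} * sum_j [1(v_j <= t) - t] on a grid of t in [0, 1]."""
    n = len(v)
    return np.array([(np.sum(v <= t) - n * t) / np.sqrt(n) for t in t_grid])


n, mu, sigma = 500, 1.0, 2.0           # hypothetical sample parameters
delta = rng.normal(mu, sigma, size=n)
t_grid = np.linspace(0.01, 0.99, 99)

# J_n: uses the true mu and sigma.
eta = (delta - mu) / sigma
J_n = empirical_process(phi_cdf(eta), t_grid)

# \hat J_n: substitutes the sample mean and (biased) sample standard deviation.
mu_hat = delta.mean()
sigma_hat = np.sqrt(np.mean((delta - mu_hat) ** 2))
J_hat_n = empirical_process(phi_cdf((delta - mu_hat) / sigma_hat), t_grid)

print("sup |J_n|     =", np.abs(J_n).max())
print("sup |J_hat_n| =", np.abs(J_hat_n).max())
```

Comparing the two suprema over repeated samples illustrates why critical values derived for the Brownian bridge do not apply to $\hat J_n$.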

2. Main results

Let $\{X_t\}$ be the time series in (1.1). Assume that $X_1, \ldots, X_n$ are observed. Let $\bar X_n = n^{-1} \sum_{i=1}^n X_i$, and let $\hat\beta_n = (\hat\beta_{n1}, \ldots, \hat\beta_{np})'$ be the least-squares estimator of $\beta = (\beta_1, \ldots, \beta_p)'$ obtained by solving the equations

$$\frac{\partial}{\partial \beta_k} \sum_{j=p+1}^n \big\{X_j - \bar X_n - \beta_1(X_{j-1} - \bar X_n) - \cdots - \beta_p(X_{j-p} - \bar X_n)\big\}^2 = 0, \qquad k = 1, \ldots, p.$$

The residuals are

$$\hat\varepsilon_j = X_j - \bar X_n - \sum_{k=1}^p \hat\beta_{nk}(X_{j-k} - \bar X_n),$$


and the residual empirical process under consideration is

$$\hat Y_n(t) = n^{-1/2} \sum_{j=1}^n [\mathbf{1}(F_0(\hat\varepsilon_j/\hat\sigma_n) \le t) - t], \qquad t \in [0, 1],$$

where $F(x) = F_0(x/\sigma)$, $F_0$ completely specified, is the distribution of $\varepsilon_1$ and

$$\hat\sigma_n^2 = n^{-1} \sum_{j=1}^n \hat\varepsilon_j^2.$$
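The quantities above can be computed directly from data. The following sketch is illustrative only: it simulates an AR(1) series with hypothetical parameters ($\mu = 5$, $\beta_1 = 0.6$, $\sigma = 1.5$, chosen for this example), fits $\beta$ by least squares with numpy's solver, and builds the residuals $\hat\varepsilon_j$, the scale estimator $\hat\sigma_n$, and the process $\hat Y_n(t)$ for $F_0 = \Phi$:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)

# Simulate a stationary AR(1): X_t - mu = beta * (X_{t-1} - mu) + eps_t.
n, mu, beta_true, sigma = 1000, 5.0, 0.6, 1.5
x = np.empty(n)
x[0] = mu + rng.normal(0.0, sigma / np.sqrt(1.0 - beta_true ** 2))
for t in range(1, n):
    x[t] = mu + beta_true * (x[t - 1] - mu) + rng.normal(0.0, sigma)

# Least-squares estimator of beta from the mean-centered series.
xc = x - x.mean()
A = xc[:-1].reshape(-1, 1)      # regressors X_{j-1} - X_bar
b = xc[1:]                      # responses  X_j     - X_bar
beta_hat = np.linalg.lstsq(A, b, rcond=None)[0]

# Residuals and the scale estimator sigma_hat_n.
resid = b - A @ beta_hat
sigma_hat = np.sqrt(np.mean(resid ** 2))

# Residual empirical process \hat Y_n(t) with F_0 = Phi (standard normal CDF).
F0 = lambda v: 0.5 * (1.0 + erf(v / 2 ** 0.5))
u = np.array([F0(r) for r in resid / sigma_hat])
t_grid = np.linspace(0.01, 0.99, 99)
Y_hat = np.array([(np.sum(u <= t) - len(u) * t) / np.sqrt(len(u)) for t in t_grid])

print("beta_hat  =", beta_hat[0])
print("sigma_hat =", sigma_hat)
print("sup |Y_hat_n(t)| =", np.abs(Y_hat).max())
```

The supremum of $|\hat Y_n|$ is the Kolmogorov-Smirnov-type statistic whose limit law Theorem 1 below describes.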

Assume that $f_0 = F_0'$ satisfies the following regularity conditions:

$$\mathrm{R:} \quad \sup_x |x f_0(x)| < \infty, \quad \sup_x |x^2 f_0'(x)| < \infty, \quad \int x^4 f_0(x)\,dx < \infty. \qquad (2.1)$$

Define

$$Y_n(t) = n^{-1/2} \sum_{j=1}^n \big[\mathbf{1}(F_0(u_j) \le t) - t + a_t u_j + b_t(u_j^2 - 1)\big],$$

where $u_j = \varepsilon_j/\sigma$, $a_t = f_0(F_0^{-1}(t))$ and $b_t = 2^{-1} f_0(F_0^{-1}(t)) F_0^{-1}(t)$. Then we have the following:

Theorem 1. Under Condition R,

$$\hat Y_n(t) = Y_n(t) + \rho_n(t), \qquad (2.2)$$

where $\sup_{0 \le t \le 1} |\rho_n(t)| \to 0$ in probability as $n \to \infty$. Hence, $\hat Y_n$ converges weakly to a mean zero Gaussian process $Y$ such that

$$E\,Y(t)Y(s) = s \wedge t - st + a_s c_{t1} + a_t c_{s1} + b_s c_{t2} + b_t c_{s2} + a_t a_s + (a_t b_s + a_s b_t)\,E u_1^3 + b_s b_t (E u_1^4 - 1),$$

where $c_{t1} = E\,\mathbf{1}(F_0(u_1) \le t)\,u_1$ and $c_{t2} = E\,\mathbf{1}(F_0(u_1) \le t)\,(u_1^2 - 1)$. In particular, if $F_0 = \Phi$, it holds that as $n \to \infty$, $\hat Y_n(t) \Rightarrow Z(t)$, where $Z$ is a mean zero Gaussian process such that

$$E(Z(s)Z(t)) = s \wedge t - st - \phi(\Phi^{-1}(s))\,\phi(\Phi^{-1}(t)) - 2^{-1}\phi(\Phi^{-1}(s))\,\Phi^{-1}(s)\,\phi(\Phi^{-1}(t))\,\Phi^{-1}(t). \qquad (2.3)$$

Remark. (a) The result can be used for testing the goodness-of-fit hypothesis $H_0 : \varepsilon_t \sim F_0(\cdot/\sigma)$. A statistic suitable for an i.i.d. random sample can be used for the goodness-of-fit test for the errors $\varepsilon_t$. (b) The limiting process relies on the limiting distribution of the estimators of $\mu$ and $\sigma^2$. In the Gaussian case, using the least-squares estimator and the maximum likelihood estimator gives the same limiting distribution (cf. Pierce, 1985). However, in non-Gaussian cases, the limiting distributions may not be the same.
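As an illustration of part (a) in the Gaussian case $F_0 = \Phi$ (a sketch, not code from the paper), the covariance kernel of $Z$ in (2.3) can be evaluated numerically and compared with the Brownian-bridge kernel $s \wedge t - st$; critical values for Kolmogorov-Smirnov-type statistics based on $Z$ must come from this kernel rather than from the classical tables. The bisection quantile routine below is a convenience of this example:

```python
from math import erf, sqrt, pi, exp


def Phi_inv(t):
    """Standard normal quantile via bisection (adequate for an illustration)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + erf(mid / sqrt(2.0))) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)


def phi(x):
    """Standard normal density."""
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)


def cov_Z(s, t):
    """Covariance E(Z(s)Z(t)) of the limiting process for F0 = Phi."""
    qs, qt = Phi_inv(s), Phi_inv(t)
    return (min(s, t) - s * t
            - phi(qs) * phi(qt)
            - 0.5 * qs * phi(qs) * qt * phi(qt))


for t in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(f"t={t}: Var Z(t) = {cov_Z(t, t):.4f},  bridge t(1-t) = {t * (1 - t):.4f}")
```

The variance of $Z(t)$ is strictly below $t(1-t)$ for every $t \in (0,1)$: estimating $\mu$ and $\sigma^2$ shrinks the residual empirical process relative to the Brownian bridge.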

Before we prove Theorem 1, we introduce a lemma. The following can be found in Boldin (1982).


Lemma 1. Suppose that $\{Y_t\}$ is the stationary AR(p) process such that $Y_t = \gamma' \mathbf{Y}_{t-1} + \delta_t$, where $\gamma$ is a $p \times 1$ vector, $\mathbf{Y}_{t-1} = (Y_{t-1}, \ldots, Y_{t-p})'$, and the $\delta_t$ are i.i.d. random variables with distribution $G$, $E\delta_t = 0$ and $\mathrm{Var}(\delta_t) < \infty$. If $G$ satisfies $\sup_x |G'(x)| < \infty$ and $\sup_x |G''(x)| < \infty$, and if $K$ is a positive constant, then as $n \to \infty$,

$$\sup_{-\infty < x < \infty}\ \sup_{\|s\| \le n^{-1/2} K} \left| n^{-1/2} \sum_{j=1}^n \big[\mathbf{1}(\delta_j \le x + s'\mathbf{Y}_{j-1}) - G(x + s'\mathbf{Y}_{j-1}) + G(x) - \mathbf{1}(\delta_j \le x)\big] \right| \xrightarrow{P} 0.$$

Proof of Theorem 1. Put $Z_j = (X_j - \mu)/\sigma$. Note that

$$\hat\varepsilon_j/\sigma = u_j - (\hat\beta_n - \beta)'\mathbf{Z}_{j-1} - \bar Z_n \Big(1 - \sum_{k=1}^p \hat\beta_{nk}\Big) =: u_j - l_n(j),$$

where $\mathbf{Z}_{j-1} = (Z_{j-1}, \ldots, Z_{j-p})'$ and $\bar Z_n = n^{-1} \sum_{j=1}^n Z_j$. Then,

$$\hat Y_n(F_0(x)) = n^{-1/2} \sum_{j=1}^n [\mathbf{1}(\hat\varepsilon_j/\hat\sigma_n \le x) - F_0(x)]$$
$$= n^{-1/2} \sum_{j=1}^n [\mathbf{1}(u_j \le x\hat\sigma_n/\sigma + l_n(j)) - F_0(x)]$$
$$= n^{-1/2} \sum_{j=1}^n [\mathbf{1}(u_j \le x) - F_0(x)] + \tau_{n1}(x) + \tau_{n2}(x) + \tau_{n3}(x),$$

where

$$\tau_{n1}(x) = n^{-1/2} \sum_{j=1}^n \big[\mathbf{1}(u_j \le x\hat\sigma_n/\sigma) - F_0(x\hat\sigma_n/\sigma) + F_0(x) - \mathbf{1}(u_j \le x)\big],$$

$$\tau_{n2}(x) = n^{-1/2} \sum_{j=1}^n \big[\mathbf{1}(u_j \le x\hat\sigma_n/\sigma + l_n(j)) - F_0(x\hat\sigma_n/\sigma + l_n(j)) + F_0(x\hat\sigma_n/\sigma) - \mathbf{1}(u_j \le x\hat\sigma_n/\sigma)\big],$$

$$\tau_{n3}(x) = n^{-1/2} \sum_{j=1}^n \big[F_0(x\hat\sigma_n/\sigma + l_n(j)) - F_0(x)\big].$$

Since $\hat\sigma_n^2 \to \sigma^2$ a.s. as $n \to \infty$, we have

$$\sup_x |F_0(x\hat\sigma_n/\sigma) - F_0(x)| \to 0 \quad \text{as } n \to \infty.$$

Due to the fact that, as $n \to \infty$,

$$\sup_{|F_0(x) - F_0(y)| \le d_n} \left| n^{-1/2} \sum_{j=1}^n \big[\mathbf{1}(u_j \le x) - F_0(x) + F_0(y) - \mathbf{1}(u_j \le y)\big] \right| \xrightarrow{P} 0, \qquad (2.4)$$


where $d_n$ is any sequence of random variables decaying to 0 in probability (cf. Billingsley, 1968, pp. 106-108), it holds that $\sup_x |\tau_{n1}(x)| = o_P(1)$. Meanwhile, $\sup_x |\tau_{n2}(x)| = o_P(1)$ in view of Lemma 1. Hence, we only have to deal with $\tau_{n3}(x)$. Using Taylor's series expansion, we can write

$$\tau_{n3}(x) = n^{-1/2} \sum_{j=1}^n \big[(\hat\sigma_n/\sigma - 1)x + l_n(j)\big] f_0(x) + 2^{-1} n^{-1/2} \sum_{j=1}^n \big[(\hat\sigma_n/\sigma - 1)x + l_n(j)\big]^2 f_0'(\xi_j), \qquad (2.5)$$

where $\xi_j := \xi_j(x)$ lies between $x$ and $(\hat\sigma_n/\sigma)x + l_n(j)$. Here, rewrite the model in (1.1) as the linear process of the form

$$X_t - \mu = \sum_{i=0}^{\infty} \alpha_i \varepsilon_{t-i},$$

where the coefficients $\alpha_i$ are determined by the relation

$$(1 - \beta_1 z - \cdots - \beta_p z^p)^{-1} = \sum_{i=0}^{\infty} \alpha_i z^i$$

for all $|z| \le 1$ in the complex plane. Since the argument in the proof of Theorem 1 of Fakhre-Zakeri and Lee (1992, p. 193) allows the representation (cf. Phillips and Solo, 1992)

$$n^{1/2} \bar Z_n = \Big(\sum_{i=0}^{\infty} \alpha_i\Big) n^{-1/2} \sum_{j=1}^n u_j + R_n,$$

where $R_n \to 0$ a.s., and since $\sum_{i=0}^{\infty} \alpha_i = (1 - \sum_{k=1}^p \beta_k)^{-1}$, we can write

$$n^{-1/2} \sum_{j=1}^n l_n(j) = (\hat\beta_n - \beta)'\, n^{-1/2} \sum_{j=1}^n \mathbf{Z}_{j-1} + n^{1/2} \bar Z_n \Big(1 - \sum_{k=1}^p \hat\beta_{nk}\Big) = n^{-1/2} \sum_{j=1}^n u_j + o_P(1). \qquad (2.6)$$

The following can be shown similarly to the arguments of Rosenblatt (1985, p. 85):

$$n^{1/2}(\hat\sigma_n/\sigma - 1) = 2^{-1} n^{-1/2} \sum_{j=1}^n (u_j^2 - 1) + o_P(1). \qquad (2.7)$$
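As a quick numerical sanity check (not part of the paper), the expansion (2.7) can be verified in the simplest i.i.d. setting, where $\hat\sigma_n^2 = n^{-1}\sum_j \varepsilon_j^2$ exactly; the sample size, scale, and seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
sigma, n = 2.0, 100_000
eps = rng.normal(0.0, sigma, size=n)
u = eps / sigma

sigma_hat = np.sqrt(np.mean(eps ** 2))        # scale estimator in the i.i.d. case
lhs = np.sqrt(n) * (sigma_hat / sigma - 1.0)  # left side of (2.7)
rhs = 0.5 * np.sum(u ** 2 - 1.0) / np.sqrt(n) # leading term on the right side

print("n^{1/2}(sigma_hat/sigma - 1) =", lhs)
print("2^{-1} n^{-1/2} sum(u_j^2-1) =", rhs)
print("difference                   =", lhs - rhs)
```

The difference is of smaller order (it comes from the second-order term of $\sqrt{1+x} \approx 1 + x/2$), which is what the $o_P(1)$ in (2.7) asserts.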

Thus, in view of (2.1) and (2.5)-(2.7), we will get

$$\tau_{n3}(x) = n^{-1/2} \sum_{j=1}^n \big[f_0(x)\,u_j + 2^{-1} x f_0(x)(u_j^2 - 1)\big] + \delta_n(x), \qquad (2.8)$$

where $\sup_x |\delta_n(x)| = o_P(1)$; this holds provided that

$$\sup_x \max_{1 \le j \le n} |x^2 f_0'(\xi_j)| = o_P(n^{1/2}). \qquad (2.9)$$


In order to show (2.9), it is sufficient to prove that, for some $M > 0$,

$$\sup_{|x| \ge M} \max_{1 \le j \le n} |x^2 f_0'(\xi_j)| = o_P(n^{1/2}), \qquad (2.10)$$

because

(i) $\sup_{|x| \le M} \max_{1 \le j \le n} |x^2 f_0'(\xi_j)| \le M^2 \sup_x |f_0'(x)| < \infty$;

(ii) $\max_{1 \le j \le n} |l_n(j)| = o_P(1). \qquad (2.11)$

Let $M > 1$ and $\kappa \in (0, 1/2)$. For any $\eta > 0$,

$$P\Big(n^{-1/2} \sup_{|x| \ge M} \max_{1 \le j \le n} |x^2 f_0'(\xi_j)| \ge \eta\Big) \le P\Big(n^{-1/2} \sup_{|x| \ge M} \max_{1 \le j \le n} |x^2 f_0'(\xi_j)| \ge \eta,\ |\hat\sigma_n/\sigma - 1| \le \kappa,\ \max_{1 \le j \le n} |l_n(j)| \le \kappa\Big)$$
$$\quad + P(|\hat\sigma_n/\sigma - 1| > \kappa) + P\Big(\max_{1 \le j \le n} |l_n(j)| > \kappa\Big)$$
$$\le P\Big(n^{-1/2} \sup_{|x| \ge M} \max_{1 \le j \le n} |x^2 f_0'(\xi_j)| \ge \eta,\ |\hat\sigma_n/\sigma - 1| \le \kappa,\ \max_{1 \le j \le n} |l_n(j)| \le \kappa\Big) + o(1) \qquad (2.12)$$

by (2.11). Since the first term of (2.12) goes to 0 (on the indicated event, $|\xi_j| \ge (1-\kappa)|x| - \kappa$ for $|x| \ge M$, so $|x^2 f_0'(\xi_j)|$ is bounded uniformly by Condition R), (2.10) is proved. This establishes (2.8), and therefore (2.2) is obtained in view of (2.4). The weak convergence result for $Y_n$ follows from the tightness of $Y_n$, which is due to (2.1), and the central limit theorem. Since $c_{t1} = -a_t$ and $c_{t2} = -2b_t$ if $F_0 = \Phi$, we have (2.3). $\square$

Acknowledgements

I am grateful to the referee for the valuable comments and for introducing me to the paper by Pierce and Kopecky (1979).

References

Anděl, J. (1988), On AR(1) processes with exponential white noise, Commun. Statist. Theory Methods 17, 1481-1495.
Anděl, J. (1989), Non-negative autoregressive processes, J. Time Ser. Anal. 10, 1-11.
Bell, C.B. and E.P. Smith (1986), Inference for non-negative autoregressive schemes, Commun. Statist. Theory Methods 15, 2267-2293.
Billingsley, P. (1968), Convergence of Probability Measures (Wiley, New York).
Boldin, M.V. (1982), Estimation of the distribution of noise in an autoregressive scheme, Theory Probab. Appl. 27, 866-871.
Durbin, J. (1973), Weak convergence of the sample distribution function when parameters are estimated, Ann. Statist. 1, 279-290.
Fakhre-Zakeri, I. and S. Lee (1992), Sequential estimation of the mean of a linear process, Sequential Anal. 11, 181-197.
Koul, H.L. (1992), Weighted Empiricals and Linear Models (IMS Lecture Notes Monograph Series, Vol. 21, Hayward, CA).


Phillips, P.C.B. and V. Solo (1992), Asymptotics for linear processes, Ann. Statist. 20, 971-1001.
Pierce, D.A. (1985), Testing normality in autoregressive models, Biometrika 72, 293-297.
Pierce, D.A. and K.J. Kopecky (1979), Testing goodness of fit for the errors in regression models, Biometrika 66, 1-5.
Rosenblatt, M. (1985), Stationary Sequences and Random Fields (Birkhäuser, Boston).
Shorack, G.R. and J.A. Wellner (1986), Empirical Processes with Applications to Statistics (Wiley, New York).
Sukhatme, S. (1972), Fredholm determinant of a positive definite kernel of a special type and its application, Ann. Math. Statist. 43, 1914-1926.