Parameter estimation in a regression model with random coefficient autoregressive errors

S.Y. Hwang and I.V. Basawa

University of Georgia, Athens, GA, USA

Journal of Statistical Planning and Inference 36 (1993) 57-67. North-Holland.

Received 30 July 1991; revised manuscript received 19 June 1992

Abstract: The least squares estimators of the regression and the autoregression parameters are obtained for a regression model with random coefficient autoregressive errors. The limit distributions of the least squares estimators are obtained using a weighted central limit theorem for m-dependent processes. The proof of the weighted central limit theorem for m-dependent processes is also given.

AMS Subject Classification: Primary 62M10.

Key words and phrases: Nonlinear time series; random coefficient autoregressive processes; m-dependent processes; least squares estimators; asymptotic distributions; regression.

1. Introduction

Nonlinear time series models have been studied extensively in the recent literature; see, for instance, Tong (1990) and the references therein. One typically assumes that the process is stationary (and ergodic) when studying the asymptotic properties of the estimators of the model parameters. In this paper, we consider the problem of parameter estimation for a nonlinear time series model with a trend component. More specifically, we consider a first-order random coefficient autoregressive process with regression and derive the asymptotic distributions of the least squares estimators of the regression and the autoregression parameters. The model and the least squares estimators are presented in Section 2. A weighted central limit theorem for m-dependent processes (which is of interest in its own right) is derived in Section 3. Section 4 contains the limit distributions of the least squares estimators.

Correspondence to: Prof. I.V. Basawa, Department of Statistics, Statistics Building, The University of Georgia, Athens, GA 30602, USA.


2. The model and the least squares estimators

Consider the model specified by
$$Y_t = x_t^T\beta + W_t, \qquad t = 1, 2, \ldots, n, \tag{2.1}$$
where $\{Y_t\}$ are observable random variables, $\{x_t\}$ are known $k \times 1$ vectors of covariates, $\beta$ is an unknown $k \times 1$ vector of regression parameters, and $\{W_t\}$ are unobservable random errors. It will be assumed that $\{W_t\}$ is a first-order random coefficient autoregressive process, i.e., a stationary solution of
$$W_t = (\theta + Z_t)W_{t-1} + \varepsilon_t, \tag{2.2}$$
where $\{Z_t\}$ and $\{\varepsilon_t\}$ are zero mean independent processes, each consisting of i.i.d. random variables with finite second moments $\sigma_z^2$ and $\sigma_\varepsilon^2$ respectively, and $(\theta^2 + \sigma_z^2) < 1$. See, for instance, Nicholls and Quinn (1982) for a background on the model in (2.2). Throughout this paper, we shall denote $Y = (Y_1, \ldots, Y_n)^T$, $\beta = (\beta_1, \ldots, \beta_k)^T$, $X = (x_1, \ldots, x_n)^T$ an $n \times k$ matrix, and $W = (W_1, \ldots, W_n)^T$. Our main goal is to estimate $\beta$ and $\theta$ based on a given sample $Y_1, \ldots, Y_n$. The ordinary least squares (OLS) estimator $\hat\beta_n$ and the generalized least squares (GLS) estimator (assuming $\theta$ known), denoted by $\hat\beta_G$, are given by
$$\hat\beta_n = (X^TX)^{-1}X^TY \tag{2.3}$$

and
$$\hat\beta_G = (X^T\Gamma^{-1}(\theta)X)^{-1}X^T\Gamma^{-1}(\theta)Y, \tag{2.4}$$
where
$$\Gamma^{-1}(\theta) = \begin{pmatrix}
1 & -\theta & 0 & \cdots & 0 & 0 \\
-\theta & 1+\theta^2 & -\theta & \cdots & 0 & 0 \\
0 & -\theta & 1+\theta^2 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & 1+\theta^2 & -\theta \\
0 & 0 & 0 & \cdots & -\theta & 1
\end{pmatrix}. \tag{2.5}$$

When $\theta$ is unknown, as is often the case in practice, $\Gamma(\theta)$ in (2.4) is replaced by $\Gamma(\hat\theta)$, where $\hat\theta$ is a suitable estimator of $\theta$. Noting that $W_t$ is unobservable, a reasonable estimator of $\theta$ is the least squares estimator $\hat\theta_n$ based on the residuals $\{W_t^* = Y_t - x_t^T\hat\beta_n,\ t \geq 1\}$, given by
$$\hat\theta_n = \frac{\sum_{t=2}^{n} W_t^* W_{t-1}^*}{\sum_{t=2}^{n} W_{t-1}^{*2}}. \tag{2.6}$$
Consequently, the estimated GLS estimator $\hat\beta_G^*$ can be written as
$$\hat\beta_G^* = (X^T\Gamma^{-1}(\hat\theta_n)X)^{-1}X^T\Gamma^{-1}(\hat\theta_n)Y. \tag{2.7}$$
The limit distributions of these estimators are derived in Section 4. First, we establish a weighted central limit theorem for m-dependent processes in the next section.
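The following minimal simulation sketch (ours, not from the paper) illustrates the three estimators, assuming Gaussian $Z_t$ and $\varepsilon_t$; all variable names are our own choices:

```python
# Sketch: simulate model (2.1)-(2.2) and compute the estimators (2.3), (2.6), (2.7).
import numpy as np

rng = np.random.default_rng(0)
n, theta, sig_z, sig_e = 500, 0.5, 0.3, 1.0   # theta^2 + sig_z^2 < 1 holds

# Regressors: intercept and a linear trend; true beta.
X = np.column_stack([np.ones(n), np.arange(1, n + 1) / n])
beta = np.array([2.0, -1.0])

# RCA(1) errors: W_t = (theta + Z_t) W_{t-1} + eps_t.
W = np.zeros(n)
for t in range(1, n):
    W[t] = (theta + sig_z * rng.standard_normal()) * W[t - 1] \
           + sig_e * rng.standard_normal()
Y = X @ beta + W

# OLS estimator (2.3).
beta_ols = np.linalg.solve(X.T @ X, X.T @ Y)

# Least squares estimator of theta from the OLS residuals, as in (2.6).
res = Y - X @ beta_ols
theta_hat = np.sum(res[1:] * res[:-1]) / np.sum(res[:-1] ** 2)

# Estimated GLS (2.7), with Gamma^{-1}(theta_hat) the tridiagonal matrix in (2.5).
Ginv = np.diag(np.r_[1.0, np.full(n - 2, 1 + theta_hat**2), 1.0])
Ginv += np.diag(np.full(n - 1, -theta_hat), 1) + np.diag(np.full(n - 1, -theta_hat), -1)
beta_gls = np.linalg.solve(X.T @ Ginv @ X, X.T @ Ginv @ Y)
print(beta_ols, theta_hat, beta_gls)
```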

S. Y. Hwang and I. 1’. Basawa / Paratuercr esslintaiionin regressive modds

et {Ut) be a zero mean stationary m-dependent process wit tion y,,( ). It is well known in time series literature that if v2 = l

59

autocovariance /:‘=_,,t y,(h) f 0,

then

N(O,v')as n-, oo.

Ul-

(3.1)

See, for instance, Fuller (1976). In this section, we shall develop a central limit theorem which can be viewed as a generalized version of the result in (3.1). Consider the following normalized weighted sum of the random variables $U_t$, $t = 1, \ldots, n$:
$$\left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=1}^{n} a_t U_t, \tag{3.2}$$
where $\{a_t\}$ is a sequence of real constants. It is assumed that the following regularity conditions on $\{a_t\}$ are satisfied.

(C1) $\sum_{t=1}^{n} a_t^2 \to \infty$.

(C2) $\sum_{t=1}^{n-|h|} a_t a_{t+|h|} \big/ \sum_{t=1}^{n} a_t^2 \to m_h$, $h = 0, \pm 1, \ldots$.

(C3) $nM_n^2 \big/ \sum_{t=1}^{n} a_t^2$ is bounded, where $M_n$ denotes $\max_{1 \le t \le n} |a_t|$.

In the above, all the limits are taken as $n \to \infty$.

Remark. Conditions (C1) to (C3) are the usual conditions on the regression coefficients (see, for instance, Fuller, 1976, Ch. 9) and are satisfied for a variety of regression coefficients, including polynomials. Under the conditions (C1) to (C3), it is easily seen that
$$\operatorname{var}\left[\left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=1}^{n} a_t U_t\right] \to \sigma_m^2, \tag{3.3}$$
where
$$\sigma_m^2 = \sum_{h=-m}^{m} m_h\,\gamma_u(h). \tag{3.4}$$
It is further assumed that $\sigma_m^2 > 0$. From (3.3) one may expect that
$$\left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=1}^{n} a_t U_t \xrightarrow{L} N(0, \sigma_m^2). \tag{3.5}$$
Before proceeding with the verification of (3.5), we state a well-known result which will play a basic role in the proof of (3.5).

Lemma 3.1. Let $\{X_n\}$ be a sequence of random variables defined by
$$X_n = G_{nk} + R_{nk}, \qquad n = 1, 2, \ldots, \quad k = 1, 2, \ldots,$$
where (i) $G_{nk} \xrightarrow{L} G_k$ as $n \to \infty$ for each fixed $k$; (ii) $G_k \xrightarrow{L} G$ as $k \to \infty$; and (iii) $\lim_{k\to\infty}\limsup_{n\to\infty} P(|R_{nk}| > \epsilon) = 0$ for every $\epsilon > 0$. Then $X_n \xrightarrow{L} G$ as $n \to \infty$. (See, for instance, Brockwell and Davis (1987), p. 201.)


We now state and prove the main result of this section.

Theorem 3.1. Under the conditions (C1) to (C3), we have, as $n \to \infty$,
$$\left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=1}^{n} a_t U_t \xrightarrow{L} N(0, \sigma_m^2),$$
where $\sigma_m^2$ is defined in (3.4).

Remarks. For equal weights, e.g., $a_t = 1$ for $t = 1, \ldots, n$, note that the result in the above theorem is identical with the classical result given in (3.1). In the case when $m = 0$, Theorem 3.1 reduces to the central limit theorem for weighted sums of i.i.d. random variables (see, for instance, Chow and Teicher, 1978, p. 302).

Proof. For fixed $k > 2m$, let $r$ be the integer part of $n/k$ and define
$$G_j = \sum_{t=(j-1)k+1}^{jk-m} a_t U_t, \qquad j = 1, \ldots, r. \tag{3.6}$$
First observe that $\sum_{j=1}^{r} G_j$ is the sum of $r$ independent random variables. Denoting by $s_r^2$ the variance of $\sum_{j=1}^{r} G_j$, $s_r^2$ can be written as

$$s_r^2 = \sum_{j=1}^{r}\operatorname{var}(G_j). \tag{3.7}$$
From (3.7) and a chain of inequalities that uses the stationarity of $\{U_t\}$, it can be shown that the Lindeberg condition holds for the blocks $\{G_j\}$, using the fact that $s_r/(k-m) \to \infty$ and $s_r^2 \to \infty$. This implies that as $n$ (or $r$) $\to \infty$,
$$s_r^{-1}\sum_{j=1}^{r} G_j \xrightarrow{L} N(0, 1). \tag{3.10}$$

Now, we partition the weighted sum in (3.2) into three parts:
$$\left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=1}^{n} a_t U_t = \left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{j=1}^{r} G_j + R_{nk}(1) + R_{nk}(2), \tag{3.11}$$
where
$$R_{nk}(1) = \left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{j=1}^{r}\ \sum_{t=jk-m+1}^{jk} a_t U_t$$
and
$$R_{nk}(2) = \left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=rk+1}^{n} a_t U_t.$$

If it can be verified that
$$\lim_{k\to\infty}\lim_{n\to\infty}\operatorname{var}[R_{nk}(1)] = 0 \tag{3.12}$$
and
$$\lim_{k\to\infty}\lim_{n\to\infty}\operatorname{var}[R_{nk}(2)] = 0, \tag{3.13}$$
then the result in the theorem follows readily from (3.10), (3.12) and (3.13) via Lemma 3.1. The verification of (3.12) and (3.13) is a direct variance computation based on the conditions (C1) to (C3) and the m-dependence of $\{U_t\}$. □
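The following Monte Carlo sketch (ours, not from the paper) illustrates Theorem 3.1 for an MA(1) choice of $\{U_t\}$ (so $m = 1$) with weights $a_t = t$; for these weights $m_h = 1$ for every $h$, so $\sigma_m^2 = \sum_{h=-1}^{1}\gamma_u(h)$:

```python
# Monte Carlo check of the weighted CLT in Theorem 3.1.
import numpy as np

rng = np.random.default_rng(1)
n, reps, theta_ma = 2000, 5000, 0.8
a = np.arange(1, n + 1, dtype=float)
norm = np.sqrt(np.sum(a**2))

# For U_t = e_t + theta_ma * e_{t-1}: gamma(0) = 1 + theta_ma^2, gamma(1) = theta_ma,
# so sigma_m^2 = (1 + theta_ma)^2 since m_h = 1 for a_t = t.
sigma_m2 = (1 + theta_ma) ** 2

stats = np.empty(reps)
for r in range(reps):
    e = rng.standard_normal(n + 1)
    U = e[1:] + theta_ma * e[:-1]          # zero-mean, 1-dependent
    stats[r] = np.sum(a * U) / norm
print(np.var(stats), sigma_m2)             # the two should be close
```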

S. I’. Hwang am’ 1. I’. Busawa /I Parameter

62

estimation

in regressive wocieis

4. Limit distributions of the estimators of the autoregressive and regression parameters

Consider the model defined in (2.1) and (2.2), i.e.,
$$Y_t = x_t^T\beta + W_t, \qquad W_t = (\theta + Z_t)W_{t-1} + \varepsilon_t, \qquad t = 1, 2, \ldots, n,$$
where the second moments of $Z_t$ and $\varepsilon_t$ are denoted by $\sigma_z^2$ and $\sigma_\varepsilon^2$, respectively, and it is assumed that $\theta^2 + \sigma_z^2 < 1$ for the stationarity of the process $\{W_t\}$. In this section we are concerned with the derivation of the limit distributions of the least squares estimator $\hat\theta_n$ of $\theta$ and the estimated GLS estimator $\hat\beta_G^*$ of $\beta$ defined in (2.6) and (2.7), respectively.

We begin with the application of Theorem 3.1 to the first-order random coefficient autoregressive process $\{W_t\}$. First observe that for each $m = 1, 2, \ldots$, the stationary solution $W_t$ (see, for instance, Nicholls and Quinn, 1982) can be written as $W_t = W_{t,m} + V_{t,m}$, where
$$W_{t,m} = \sum_{j=0}^{m}\left\{\prod_{i=0}^{j-1}(\theta + Z_{t-i})\right\}\varepsilon_{t-j}
\qquad\text{and}\qquad
V_{t,m} = \left\{\prod_{i=0}^{m}(\theta + Z_{t-i})\right\}W_{t-m-1}.$$
It should be noted that $\{W_{t,m}\}$ is a zero mean stationary m-dependent process with autocovariance function $\gamma_{w,m}(\cdot)$ such that
$$\gamma_{w,m}(h) = \theta^{|h|}\sigma_\varepsilon^2\left[1 - (\theta^2 + \sigma_z^2)^{m+1-|h|}\right]\big/(1 - \theta^2 - \sigma_z^2), \tag{4.1}$$
and $\{V_{t,m}\}$ is stationary and its autocovariance function $\gamma_{v,m}(\cdot)$ is given by
$$\gamma_{v,m}(h) = \theta^{|h|}\sigma_\varepsilon^2(\theta^2 + \sigma_z^2)^{m+1}\big/(1 - \theta^2 - \sigma_z^2), \tag{4.2}$$
where $h = 0, \pm 1, \ldots$. Under conditions (C1) to (C3), we have, as $n \to \infty$,
$$\operatorname{var}\left[\left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=1}^{n} a_t W_{t,m}\right] \to \sigma_m^2(w) = \sum_{h=-m}^{m} m_h\,\gamma_{w,m}(h). \tag{4.3}$$
This leads to the following result.

Theorem 4.1. Under conditions (C1) to (C3), we have, as $n \to \infty$,
$$\left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=1}^{n} a_t W_t \xrightarrow{L} N(0, \sigma^2), \qquad \sigma^2 = \sum_{h=-\infty}^{\infty} m_h\,\gamma_w(h),$$
where $\gamma_w(h) = \theta^{|h|}\sigma_\varepsilon^2/(1 - \theta^2 - \sigma_z^2)$ denotes the autocovariance function of $\{W_t\}$.

Proof. For each fixed $m$, write

$$\left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=1}^{n} a_t W_t = \left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=1}^{n} a_t W_{t,m} + \left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=1}^{n} a_t V_{t,m}. \tag{4.6}$$
Applying Theorem 3.1 to the first sum on the right of (4.6), we have
$$\left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=1}^{n} a_t W_{t,m} \xrightarrow{L} N(0, \sigma_m^2(w)), \tag{4.7}$$
where $\sigma_m^2(w)$ is as defined in (4.3), and it is easily seen that
$$\sigma_m^2(w) \to \sigma^2 \quad \text{as } m \to \infty. \tag{4.8}$$
Furthermore, it is not difficult to verify using (4.2) that
$$\lim_{m\to\infty}\lim_{n\to\infty}\operatorname{var}\left[\left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=1}^{n} a_t V_{t,m}\right] = 0. \tag{4.9}$$

The result in Theorem 4.1 now follows readily from (4.7), (4.8) and (4.9) via Lemma 3.1. □

We now turn our attention to the OLS estimator $\hat\beta_n$ of $\beta$ defined in (2.3). The following theorem (Theorem 4.2) presents the asymptotic normality of $\hat\beta_n$. First consider the following regularity conditions. Here and in what follows, $x_{ti}$ denotes the $(t,i)$th element of the matrix $X = (x_1, \ldots, x_n)^T$ and $O_p(1)$ indicates a term which is bounded in probability.

(E1) For each fixed $i = 1, 2, \ldots, k$, $\{x_{ti},\ t = 1, \ldots, n\}$ satisfies the conditions (C1), (C2) and (C3).

(E2) For all $i, j = 1, \ldots, k$ and $h = 0, \pm 1, \ldots$, there exist $g_{ij}(h)$ such that
$$\lim_{n\to\infty}\left(\sum_{t=1}^{n} x_{ti}^2\right)^{-1/2}\left(\sum_{t=1}^{n} x_{tj}^2\right)^{-1/2}\sum_{t=1}^{n-|h|} x_{ti}\,x_{t+|h|,j} = g_{ij}(h).$$

We define two $k \times k$ matrices $D_n$ and $B$ as the diagonal matrix
$$D_n = \operatorname{diag}\left(\left(\sum_{t=1}^{n} x_{t1}^2\right)^{1/2}, \ldots, \left(\sum_{t=1}^{n} x_{tk}^2\right)^{1/2}\right) \tag{4.10}$$
and
$$B = \left[\sum_{h=-\infty}^{\infty} g_{ij}(h)\,\gamma_w(h)\right]_{i,j=1,\ldots,k}. \tag{4.11}$$
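As a concrete worked example (ours, not part of the original paper), take $k = 2$ with $x_t = (1, t)^T$, an intercept plus linear trend. Conditions (E1) and (E2) then hold with
$$D_n = \operatorname{diag}\left(n^{1/2},\ \Big(\sum_{t=1}^{n} t^2\Big)^{1/2}\right), \qquad g_{11}(h) = g_{22}(h) = 1, \qquad g_{12}(h) = g_{21}(h) = \frac{\sqrt{3}}{2},$$
so that, since the $g_{ij}(h)$ here do not depend on $h$,
$$A = \begin{pmatrix} 1 & \sqrt{3}/2 \\ \sqrt{3}/2 & 1 \end{pmatrix}, \qquad B = \left(\sum_{h=-\infty}^{\infty}\gamma_w(h)\right)A.$$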


Theorem 4.2. Under conditions (E1) and (E2), we have
$$D_n(\hat\beta_n - \beta) = O_p(1). \tag{4.12}$$
Moreover, as $n \to \infty$,
$$D_n(\hat\beta_n - \beta) \xrightarrow{L} N(0, A^{-1}BA^{-1}), \tag{4.13}$$
where $A = \lim_{n\to\infty} D_n^{-1}X^TXD_n^{-1}$ and $B$ is as defined in (4.11).

Proof. Note that $D_n(\hat\beta_n - \beta) = D_n(X^TX)^{-1}D_n\,D_n^{-1}X^TW$. The $i$th element of $D_n^{-1}X^TW$ is seen to be
$$\left(\sum_{t=1}^{n} x_{ti}^2\right)^{-1/2}\sum_{t=1}^{n} x_{ti}W_t,$$

which is bounded in probability by Theorem 4.1. Also, $D_n(X^TX)^{-1}D_n$ converges to $A^{-1}$ as $n \to \infty$. This, in turn, implies the result in (4.12). The proof of (4.13) follows from the same arguments as those in the proof of Theorem 9.1.1 of Fuller (1976). □

We now consider the problem of estimating $\theta$. First recall that $\{W_t\}$ are unobservable. The least squares estimator $\hat\theta_n$ of $\theta$ based on the residuals $\{W_t^* = Y_t - x_t^T\hat\beta_n\}$ is then given by (2.6).

Theorem 4.3. Under the conditions (E1) and (E2), if, in addition, $EW_t^4 < \infty$, then we have, as $n \to \infty$,
$$\sqrt{n}\,(\hat\theta_n - \theta) \xrightarrow{L} N\left(0,\ (\sigma_z^2\,EW_t^4 + \sigma_\varepsilon^2\,EW_t^2)\big/(EW_t^2)^2\right). \tag{4.14}$$

Proof. The proof follows by similar arguments to those in the proof of Theorem 9.3.1 of Fuller (1976). □
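A Monte Carlo check (ours, not from the paper) of the limiting variance in (4.14); for simplicity the sketch computes $\hat\theta_n$ from the true errors $W_t$ rather than from residuals, which has the same limit distribution, and estimates the moments of $W_t$ from one long simulated path:

```python
# Monte Carlo check of Theorem 4.3: var of sqrt(n)(theta_hat - theta) should
# approach (sig_z^2 E W^4 + sig_e^2 E W^2) / (E W^2)^2.
import numpy as np

rng = np.random.default_rng(2)
theta, sig_z, sig_e = 0.4, 0.3, 1.0   # chosen so that E W^4 is finite

def rca_path(n):
    W = np.zeros(n)
    for t in range(1, n):
        W[t] = (theta + sig_z * rng.standard_normal()) * W[t - 1] \
               + sig_e * rng.standard_normal()
    return W

# Plug-in estimate of the limiting variance in (4.14).
Wlong = rca_path(200_000)
m2, m4 = np.mean(Wlong**2), np.mean(Wlong**4)
v_limit = (sig_z**2 * m4 + sig_e**2 * m2) / m2**2

n, reps = 1000, 1000
stats = np.empty(reps)
for r in range(reps):
    W = rca_path(n)
    th = np.sum(W[1:] * W[:-1]) / np.sum(W[:-1] ** 2)
    stats[r] = np.sqrt(n) * (th - theta)
print(np.var(stats), v_limit)   # the two should be close
```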


Returning to the problem of estimating the regression parameter $\beta$, if we denote by $V$ the variance-covariance matrix of $W = (W_1, \ldots, W_n)^T$, then $V$ can be written as
$$V = \frac{\sigma_\varepsilon^2}{1 - \theta^2 - \sigma_z^2}\,\Gamma(\theta),$$
where
$$\Gamma(\theta) = \begin{pmatrix}
1 & \theta & \theta^2 & \cdots & \theta^{n-1} \\
\theta & 1 & \theta & \cdots & \theta^{n-2} \\
\vdots & & & & \vdots \\
\theta^{n-1} & \theta^{n-2} & \theta^{n-3} & \cdots & 1
\end{pmatrix}. \tag{4.16}$$
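As a quick numerical check (ours) of the conventions in (2.5) and (4.16): the tridiagonal matrix denoted $\Gamma^{-1}(\theta)$ in (2.5) is the inverse of $\Gamma(\theta)$ only up to the factor $(1-\theta^2)$, which is the source of the factor $(1-\theta^2)$ appearing in (4.18) and (4.28) below:

```python
# Verify that Gamma^{-1}(theta) @ Gamma(theta) = (1 - theta^2) I for the
# matrices defined in (2.5) and (4.16).
import numpy as np

n, theta = 6, 0.5
Gamma = theta ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Ginv = np.diag(np.r_[1.0, np.full(n - 2, 1 + theta**2), 1.0])
Ginv += np.diag(np.full(n - 1, -theta), 1) + np.diag(np.full(n - 1, -theta), -1)
print(np.allclose(Ginv @ Gamma, (1 - theta**2) * np.eye(n)))   # True
```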

Recall that the GLS estimator $\hat\beta_G$ of $\beta$ assuming $\theta$ is known is given by (2.4). When $\theta$ is unknown, the estimated GLS estimator $\hat\beta_G^*$ can be expressed as in (2.7), i.e.,
$$\hat\beta_G^* = (X^T\Gamma^{-1}(\hat\theta_n)X)^{-1}X^T\Gamma^{-1}(\hat\theta_n)Y,$$
where $\hat\theta_n$ is defined in (2.6). The following theorem states that $\hat\beta_G$ and $\hat\beta_G^*$ are asymptotically equivalent in the sense specified below, and that the limiting distributions of $\hat\beta_G$ and $\hat\beta_G^*$ are identical.

Theorem 4.4. Under the same conditions as for Theorem 4.2, we have
$$D_n(\hat\beta_G^* - \hat\beta_G) = o_p(1), \tag{4.17}$$
and the common limiting distribution of $\hat\beta_G$ and $\hat\beta_G^*$ is given by
$$D_n(\hat\beta_G - \beta) \xrightarrow{L} N\left(0,\ \frac{(1-\theta^2)\sigma_\varepsilon^2}{1-\theta^2-\sigma_z^2}\,G^{-1}\right), \tag{4.18}$$
where $D_n$ is defined in (4.10) and
$$G = \lim_{n\to\infty} D_n^{-1}X^T\Gamma^{-1}(\theta)XD_n^{-1}. \tag{4.19}$$

Proof. Consider
$$D_n(\hat\beta_G^* - \beta) = [D_n^{-1}(X^T\Gamma^{-1}(\hat\theta_n)X)D_n^{-1}]^{-1}D_n^{-1}X^T\Gamma^{-1}(\hat\theta_n)W. \tag{4.20}$$
If it can be verified that
$$D_n^{-1}(X^T\Gamma^{-1}(\hat\theta_n)X)D_n^{-1} - D_n^{-1}(X^T\Gamma^{-1}(\theta)X)D_n^{-1} = o_p(1) \tag{4.21}$$


and
$$D_n^{-1}X^T\Gamma^{-1}(\hat\theta_n)W - D_n^{-1}X^T\Gamma^{-1}(\theta)W = o_p(1), \tag{4.22}$$

then (4.17) follows immediately from (4.21) and (4.22). To verify (4.21), consider the $(i,j)$th element of the term on the left of (4.21): since $\Gamma^{-1}(\hat\theta_n) - \Gamma^{-1}(\theta)$ has diagonal entries of the form $\hat\theta_n^2 - \theta^2$ and off-diagonal entries $-(\hat\theta_n - \theta)$, this element is a normalized sum of cross-products of the $x_{ti}$ multiplied by $(\hat\theta_n - \theta)$ or $(\hat\theta_n^2 - \theta^2)$, which implies (4.21) since $\sqrt{n}\,(\hat\theta_n - \theta)$ is bounded in probability by Theorem 4.3. The proof of (4.22) follows on similar lines and is omitted. It remains to verify (4.18) to complete the proof. It can be shown from standard arguments that

$$\operatorname{var}[D_n(\hat\beta_G - \beta)] \to \frac{(1-\theta^2)\sigma_\varepsilon^2}{1-\theta^2-\sigma_z^2}\,G^{-1} \tag{4.23}$$
as $n \to \infty$.

Now, we write
$$D_n(\hat\beta_G - \beta) = [D_n^{-1}X^T\Gamma^{-1}(\theta)XD_n^{-1}]^{-1}D_n^{-1}X^T\Gamma^{-1}(\theta)W = G^{-1}D_n^{-1}X^T\Gamma^{-1}(\theta)W + o_p(1), \tag{4.24}$$
where $o_p(1)$ stands for a term which converges to zero in probability. It then suffices to show that, as $n \to \infty$,
$$D_n^{-1}X^T\Gamma^{-1}(\theta)W \xrightarrow{L} N\left(0,\ \frac{(1-\theta^2)\sigma_\varepsilon^2}{1-\theta^2-\sigma_z^2}\,G\right). \tag{4.25}$$
Introducing a $k \times 1$ vector of constants $c = (c_1, \ldots, c_k)^T$ and the $n \times k$ matrix $\tilde{X}$ such that $\tilde{X} = \Gamma^{-1}(\theta)X$, we have
$$c^TD_n^{-1}X^T\Gamma^{-1}(\theta)W = c^TD_n^{-1}\tilde{X}^TW = \sum_{t=1}^{n} d_{tn}W_t, \tag{4.26}$$
where
$$d_{tn} = \sum_{j=1}^{k}\left(\sum_{s=1}^{n} x_{sj}^2\right)^{-1/2} c_j\,\tilde{x}_{tj},$$
and $\tilde{x}_{tj}$ denotes the $(t,j)$th element of $\tilde{X}$. Note that Theorem 4.1 can be restated as

$$\left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}\sum_{t=1}^{n} a_t W_t \xrightarrow{L} N(0, \sigma^2),$$
and $d_{tn}$ is an alternative version of $\left(\sum_{t=1}^{n} a_t^2\right)^{-1/2}a_t$. Furthermore, denoting by $l_{ij}$, for $i, j = 1, \ldots, k$, the limits of the covariances of the elements of $D_n^{-1}\tilde{X}^TW$, we have, as $n \to \infty$, the asymptotic normality of each linear combination $c^TD_n^{-1}\tilde{X}^TW$. Consequently, we conclude, via the Cramér-Wold device, that
$$D_n^{-1}X^T\Gamma^{-1}(\theta)W \xrightarrow{L} N(0, L), \tag{4.27}$$
where $L$ denotes the $k \times k$ matrix with $l_{ij}$ as its $(i,j)$th element.

Also, $L$ can be expressed in terms of $G$ defined in (4.19) as
$$L = \lim_{n\to\infty} D_n^{-1}X^T\Gamma^{-1}(\theta)\,V\,\Gamma^{-1}(\theta)XD_n^{-1} = \frac{(1-\theta^2)\sigma_\varepsilon^2}{1-\theta^2-\sigma_z^2}\lim_{n\to\infty} D_n^{-1}X^T\Gamma^{-1}(\theta)XD_n^{-1} = \frac{(1-\theta^2)\sigma_\varepsilon^2}{1-\theta^2-\sigma_z^2}\,G, \tag{4.28}$$
where the middle equality uses the identity $\Gamma^{-1}(\theta)\Gamma(\theta) = (1-\theta^2)I$ for the matrices in (2.5) and (4.16).

Combining (4.27) and (4.28), we obtain (4.25) and hence complete the proof. □
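To see Theorem 4.4 at work, the following simulation sketch (ours, not from the paper) compares the sampling variability of the OLS estimator (2.3) and the estimated GLS estimator (2.7):

```python
# Compare OLS (2.3) and estimated GLS (2.7) over repeated samples.
import numpy as np

rng = np.random.default_rng(3)
n, reps, theta, sig_z, sig_e = 300, 500, 0.6, 0.2, 1.0
X = np.column_stack([np.ones(n), np.arange(1, n + 1) / n])
beta = np.array([1.0, 2.0])

def gamma_inv(th):
    # Tridiagonal matrix from (2.5).
    G = np.diag(np.r_[1.0, np.full(n - 2, 1 + th**2), 1.0])
    return G + np.diag(np.full(n - 1, -th), 1) + np.diag(np.full(n - 1, -th), -1)

err_ols = np.empty((reps, 2))
err_gls = np.empty((reps, 2))
for r in range(reps):
    W = np.zeros(n)
    for t in range(1, n):
        W[t] = (theta + sig_z * rng.standard_normal()) * W[t - 1] \
               + sig_e * rng.standard_normal()
    Y = X @ beta + W
    b_ols = np.linalg.solve(X.T @ X, X.T @ Y)
    res = Y - X @ b_ols
    th_hat = np.sum(res[1:] * res[:-1]) / np.sum(res[:-1] ** 2)   # (2.6)
    Gi = gamma_inv(th_hat)
    b_gls = np.linalg.solve(X.T @ Gi @ X, X.T @ Gi @ Y)           # (2.7)
    err_ols[r], err_gls[r] = b_ols - beta, b_gls - beta
# GLS should be no worse than OLS; for polynomial-trend regressors the two
# are asymptotically close, so expect only a modest finite-sample difference.
print(np.var(err_ols, axis=0), np.var(err_gls, axis=0))
```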

Acknowledgement

I.V. Basawa's work was partially supported by a grant from the Office of Naval Research.

References

Brockwell, P.J. and R.A. Davis (1987). Time Series: Theory and Methods. Springer-Verlag, New York.

Chow, Y.S. and H. Teicher (1978). Probability Theory: Independence, Interchangeability, Martingales. Springer-Verlag, New York.

Fuller, W.A. (1976). Introduction to Statistical Time Series. Wiley, New York.

Nicholls, D.F. and B.G. Quinn (1982). Random Coefficient Autoregressive Models: An Introduction. Lecture Notes in Statistics, Vol. 11, Springer-Verlag, New York.

Tong, H. (1990). Non-linear Time Series: A Dynamical System Approach. Oxford University Press, Oxford.