Journal of Statistical Planning and Inference 44 (1995) 219-235

Maximum likelihood estimation for a fractionally differenced autoregressive model on a two-dimensional lattice

S. Sethuraman, I.V. Basawa

Department of Statistics, The University of Georgia, 204 Statistics Building, Athens, GA 30602-1952, USA

Received 20 July 1992; revised 14 October 1993
Abstract

A two-dimensional time series model with long-memory dependence is introduced. The model is based on a fractionally differenced autoregressive process (long memory) combined with a standard pth-order stationary autoregressive process (short memory). The maximum likelihood estimators for the parameters in the model are derived and their asymptotic distributions are obtained. A novel feature of the derivations is that a new two-dimensional martingale representation is used.

AMS Subject Classification: 62M10, 62M05

Keywords: Time series; Lattice processes; Long-memory dependence; Maximum likelihood estimation; Asymptotic inference
1. Introduction

Anderson (1978) has discussed problems of inference based on several independent autoregressive processes. Basawa et al. (1984) and Basawa and Billard (1989) have studied large sample inference for regression models with multiple observations and autocorrelated errors. Kim and Basawa (1992) considered empirical Bayes estimation based on data from several independent autoregressive processes. In all the papers cited above, the observations at each time point are assumed to be independent, and the dependence extends only along the time axis. However, in some situations, when individuals at each time point belong to the same family or share the same common environment, the observations on these individuals at any given time point would
generally be correlated. It will therefore be useful to consider models which permit dependence between time points as well as within the group of individuals at each time point. In this paper, we consider such a model and study the asymptotic properties of the maximum likelihood estimators of the model parameters.

Most of the standard time series models assume that observations separated by a long time lag are nearly independent; such processes exhibit short-memory dependence. In many empirical problems arising in fields such as econometrics, geology, hydrology and oceanography, the autocorrelations at long time lags are not negligible, thus necessitating the study of long-memory time series models. If $\rho(k)$ denotes the autocorrelation function at lag $k$ of a stationary process, the process is said to have long memory if $\sum_{k=-\infty}^{\infty}|\rho(k)| = \infty$ and short memory if $\sum_{k=-\infty}^{\infty}|\rho(k)| < \infty$. See Granger and Joyeux (1980), Lawrance and Kottegoda (1977) and Greene and Fielitz (1977, 1979) for applications of long-memory models. Fractionally differenced autoregressive models provide a rich class of long-memory models. Hosking (1981), Yajima (1985, 1991) and Fox and Taqqu (1986) have studied various aspects of the asymptotic estimation problem for one-dimensional fractionally differenced models.

In this paper, we introduce a two-dimensional process based on a fractionally differenced autoregressive process. In Sections 2 and 3, we introduce the model, derive the likelihood function under the assumption that the errors are normally distributed, and establish the asymptotic normality of the maximum likelihood estimators of the parameters. Inference problems for two-dimensional processes with long-range dependence have not as yet been addressed in the literature, to our knowledge. Stationary processes on a plane and related inference problems have been studied by several authors, including Whittle (1954), Tjostheim (1978, 1983), Dahlhaus and Kunsch (1987), Guyon (1982) and Martin (1979, 1990). None of these papers, however, involves long-range dependence.

Consider a lattice process $\{X_{ts}, (t,s) \in \mathbb{Z}^2\}$, where $\mathbb{Z}$ denotes the set of all integers $\{0, \pm 1, \pm 2, \ldots\}$. It will be assumed that $\{X_{ts}\}$ is stationary with a linear (quadrant) representation
$$X_{ts} = \sum_{k=0}^{\infty}\sum_{j=0}^{\infty}\alpha(k,j)\,\varepsilon_{t-k,s-j}, \qquad (1.1)$$
where $\{\varepsilon_{ij}\}$ is a two-dimensional white noise, and the $\alpha(k,j)$ are non-random coefficients which depend on a finite number of parameters. For simplicity, we consider the special case where $\alpha(k,j)$ is factorizable, i.e. $\alpha(k,j) = \eta_k\psi_j$, where $\{\eta_k\}$ and $\{\psi_j\}$ are determined by a fractionally differenced process and a standard autoregressive process, respectively. We further assume that the $\{\varepsilon_{ij}\}$ are Gaussian random variables. Let $F_{ij}$ denote the $\sigma$-field generated by $\{X_{ts}, t \le i \text{ and } s \le j\}$. Given any $X_{ts}$, we shall visualize the 'past' as $F_{t-1,s} \vee F_{t,s-1}$. The likelihood function based on the sample $\{X_{ts}, t = 1, \ldots, m \text{ and } s = 1, \ldots, n\}$ is then obtained in terms of the prediction errors of $X_{ts}$ given the 'past' and the corresponding prediction error variances. In the derivations of the asymptotic distributions of the maximum likelihood estimators, we use a two-dimensional
martingale approach which is of interest in its own right. Our method of deriving the likelihood and the use of the martingale approach in the asymptotics are both general enough to be useful for any general lattice process (not necessarily factorizable) having a quadrant representation. It is to be noted that, even in the context of a one-dimensional fractionally differenced process, our martingale technique provides a new and powerful tool.
2. The model and the likelihood function
Consider the factorizable model on a two-dimensional lattice specified by
$$\nabla_t^d\,\theta(B_s)X_{ts} = \varepsilon_{ts}, \qquad 0 < d < 0.5, \quad t, s = 0, \pm 1, \pm 2, \ldots, \qquad (2.1)$$

where

$$\nabla_t^d = (1 - B_t)^d = \sum_{k=0}^{\infty}\pi_k^{(d)}B_t^k \quad \text{with} \quad \theta(B_s) = 1 - \sum_{i=1}^{p}\theta_i B_s^i,$$

and $B_t$ and $B_s$ are backward-shift operators defined by $B_t^i X_{ts} = X_{t-i,s}$ and $B_s^j X_{ts} = X_{t,s-j}$, respectively. We assume that the $\{\varepsilon_{ts}\}$ are independently and normally distributed random errors with mean zero and variance $\sigma^2$. It is also assumed that

$$\theta(z) = 1 - \sum_{i=1}^{p}\theta_i z^i \ne 0 \quad \text{for } |z| \le 1. \qquad (2.2)$$

The model (2.1) can then be written as

$$\nabla_t^d Y_{ts} = \varepsilon_{ts}, \qquad (2.3)$$

where

$$Y_{ts} = \theta(B_s)X_{ts}. \qquad (2.4)$$

From (2.2) and (2.4), we have

$$X_{ts} = \sum_{j=0}^{\infty}\psi_j^{(\theta)}Y_{t,s-j}, \qquad \sum_{j=0}^{\infty}|\psi_j^{(\theta)}| < \infty, \qquad (2.5)$$

where

$$\Psi(z) = 1/\theta(z) = \sum_{j=0}^{\infty}\psi_j^{(\theta)}z^j \quad \text{and} \quad \theta = (\theta_1, \ldots, \theta_p)^{\mathrm T}.$$
Equation (2.3) implies that
$$Y_{ts} = \sum_{k=0}^{\infty}\eta_k^{(d)}\varepsilon_{t-k,s}, \qquad \sum_{k=0}^{\infty}[\eta_k^{(d)}]^2 < \infty, \qquad (2.6)$$

where $\eta_k^{(d)} = \Gamma(k+d)/\{\Gamma(k+1)\Gamma(d)\}$ and $\Gamma(\cdot)$ denotes the gamma function. Note that, from (2.5) and (2.6),

$$X_{ts} = \sum_{j=0}^{\infty}\sum_{k=0}^{\infty}\eta_k^{(d)}\psi_j^{(\theta)}\varepsilon_{t-k,s-j}. \qquad (2.7)$$
We have thus shown that $X_{ts}$ has a linear and quadrant representation; see Tjostheim (1983). In addition, the model is factorizable (see, e.g., Martin, 1979). Consequently,

$$\operatorname{Cov}(X_{ts}, X_{t's'}) = \sigma^2\left(\sum_{j=0}^{\infty}\psi_j^{(\theta)}\psi_{j+|s-s'|}^{(\theta)}\right)\left(\sum_{k=0}^{\infty}\eta_k^{(d)}\eta_{k+|t-t'|}^{(d)}\right) = \sigma^2\,\gamma_{|s-s'|}^{(\theta)}\,\gamma_{|t-t'|}^{(d)} \quad \text{(say)}. \qquad (2.8)$$
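The coefficients $\eta_k^{(d)}$, $\psi_j^{(\theta)}$ and the factorized autocovariance (2.8) are easy to explore numerically. The following Python sketch is ours, not part of the paper: it truncates the quadrant sum (2.7) at $K$ and $J$ terms and specializes the short-memory factor to an AR(1) with coefficient $\theta_1$ (so $\psi_j = \theta_1^j$); the helper names (`eta`, `psi`, `simulate`, `gamma_d`) are illustrative only.

```python
# A minimal sketch (ours), assuming sigma^2 = 1 and an AR(1) short-memory
# factor theta(z) = 1 - theta_1 z; truncation at K, J terms introduces a
# small approximation error in the long-memory direction.
import numpy as np
from scipy.special import gammaln

def eta(d, K):
    """eta_k^{(d)} = Gamma(k+d) / {Gamma(k+1) Gamma(d)}, k = 0..K-1, via log-gammas."""
    k = np.arange(K)
    return np.exp(gammaln(k + d) - gammaln(k + 1) - gammaln(d))

def psi(theta1, J):
    """MA coefficients of 1/theta(z) for an AR(1) factor: psi_j = theta_1^j."""
    return theta1 ** np.arange(J)

def simulate(d, theta1, m, n, K=200, J=200, rng=None):
    """Simulate X on an m x n lattice by truncating the quadrant sum (2.7)."""
    rng = np.random.default_rng(rng)
    e = rng.standard_normal((m + K, n + J))   # Gaussian white noise eps_{ij}
    h, q = eta(d, K), psi(theta1, J)
    X = np.empty((m, n))
    for t in range(m):
        for s in range(n):
            # X_{ts} = sum_k sum_j eta_k psi_j eps_{t-k,s-j}; eps indices shifted by (K, J)
            block = e[t + 1:t + K + 1, s + 1:s + J + 1][::-1, ::-1]
            X[t, s] = h @ block @ q
    return X

def gamma_d(d, lag, K=5000):
    """Truncated gamma_lag^{(d)} = sum_k eta_k eta_{k+lag} from (2.8)."""
    h = eta(d, K + lag)
    return np.sum(h[:K] * h[lag:lag + K])

if __name__ == "__main__":
    d, th = 0.3, 0.5
    X = simulate(d, th, 80, 80, rng=1)
    # Var(X_ts) = gamma_0^{(theta)} gamma_0^{(d)}; for AR(1), gamma_0^{(theta)} = 1/(1 - theta_1^2)
    print(X.var(), gamma_d(d, 0) / (1 - th ** 2))
```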
It is to be noted that the model has long memory in $t$ and short memory in $s$. Our goal is to estimate $d$, $\theta$ and $\sigma^2$ based on the sample $X = (X_{ts},\ t = 1, \ldots, m;\ s = 1, \ldots, n)$. The likelihood function of $X$ is seen to be

$$L = (2\pi)^{-mn/2}|\Sigma_X|^{-1/2}\exp\{-2^{-1}X^{\mathrm T}\Sigma_X^{-1}X\}, \qquad (2.9)$$
where $\Sigma_X$ is the variance-covariance matrix of $X$ determined by (2.8). The direct calculation of $|\Sigma_X|$ and $\Sigma_X^{-1}$ can be avoided by expressing the likelihood in terms of the one-step predictors and their mean squared errors, which are easily calculated recursively using the algorithms given in Brockwell and Davis (1987). To this end, we proceed as follows. Let $F_{ij}$ denote the $\sigma$-field generated by the random variables $\{X_{ts}, t \le i \text{ and } s \le j\}$. Then

$$E(X_{ts}\mid\text{past}) = E(X_{ts}\mid F_{t-1,s}) + E(X_{ts}\mid F_{t,s-1}) - E\{E(X_{ts}\mid F_{t-1,s})\mid F_{t-1,s-1}\}. \qquad (2.10)$$
It is seen that the last term on the right-hand side of (2.10) is equal to $E(X_{ts}\mid F_{t-1,s-1})$, since $F_{t-1,s-1} \subset F_{t-1,s}$. Note that the conditional expectations on the right-hand side of (2.10) can be computed using the one-dimensional recursive 'innovations algorithms' described by Brockwell and Davis (1987). Let

$$b_{ts}^{(d,\theta)} = X_{ts} - E(X_{ts}\mid\text{past}) \qquad (2.11)$$

denote the prediction error. The prediction error variance is given by

$$\operatorname{Var}(b_{ts}) = \sigma^2 u_{t-1}^{(d)} r_{s-1}^{(\theta)}, \qquad (2.12)$$
where $u_{t-1}^{(d)}$ and $r_{s-1}^{(\theta)}$ are as defined in the Appendix. It can further be verified that $b_{ts}$ and $b_{t's'}$ are orthogonal for $t \ne t'$ or $s \ne s'$. We can finally rewrite the likelihood function in (2.9) in terms of the prediction errors $\{b_{ts}\}$ and their variances, as follows:

$$L = (2\pi\sigma^2)^{-mn/2}\left(\prod_{t=1}^{m}u_{t-1}^{(d)}\right)^{-n/2}\left(\prod_{s=1}^{n}r_{s-1}^{(\theta)}\right)^{-m/2}\exp\left\{-(2\sigma^2)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}\frac{[b_{ts}^{(d,\theta)}]^2}{u_{t-1}^{(d)}r_{s-1}^{(\theta)}}\right\}. \qquad (2.13)$$

Let $L_1 = (nm)^{-1}\ln L$. The maximum likelihood (ML) estimators $(\hat d, \hat\theta, \hat\sigma^2)$ are the values of $d$, $\theta$ and $\sigma^2$ which maximize $L_1$; they are obtained as a solution of $\partial L_1/\partial\beta = 0$, where $\beta^{\mathrm T} = (d, \theta^{\mathrm T}, \sigma^2)$. In the next section, we shall study the asymptotic properties of the ML estimators.
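Because of the factorization (2.8), the $mn \times mn$ matrix $\Sigma_X$ is, up to ordering, the Kronecker product $\sigma^2(\Gamma^{(d)} \otimes \Gamma^{(\theta)})$ of an $m \times m$ and an $n \times n$ Toeplitz matrix, so (2.9) can be evaluated exactly with two small Cholesky factorizations; the products $\prod_t u_{t-1}^{(d)}$ and $\prod_s r_{s-1}^{(\theta)}$ in (2.13) correspond to the two determinants. A hedged sketch (ours; the function names are illustrative, and $\sigma^2$ is pulled out so the factor autocovariances are those of (2.8) with unit innovation variance):

```python
# A sketch (ours) of the exact Gaussian log-likelihood (2.9) under the
# separable covariance (2.8): Sigma_X = sigma^2 * (Gd kron Gt) for X stored
# row-major as an m x n array.
import numpy as np
from scipy.linalg import toeplitz, cho_factor, cho_solve

def loglik(X, gam_d, gam_th, sigma2):
    """gam_d: lags 0..m-1 of gamma^{(d)}; gam_th: lags 0..n-1 of gamma^{(theta)}."""
    m, n = X.shape
    cd = cho_factor(toeplitz(gam_d))   # Cholesky of the long-memory factor
    ct = cho_factor(toeplitz(gam_th))  # Cholesky of the AR factor
    # log|sigma^2 Gd kron Gt| = mn log sigma^2 + n log|Gd| + m log|Gt|
    logdet = (m * n * np.log(sigma2)
              + 2 * n * np.log(np.diag(cd[0])).sum()
              + 2 * m * np.log(np.diag(ct[0])).sum())
    # vec(X)' (Gd kron Gt)^{-1} vec(X) = tr(Gd^{-1} X Gt^{-1} X')
    quad = np.sum(cho_solve(cd, X) * cho_solve(ct, X.T).T) / sigma2
    return -0.5 * (m * n * np.log(2 * np.pi) + logdet + quad)

# Usage with the AR(1) factor of the previous sketch (sigma^2 = 1):
#   gam_d  = [gamma_d(0.3, k) for k in range(m)]
#   gam_th = [0.5 ** k / (1 - 0.25) for k in range(n)]
# Maximizing loglik over (d, theta_1, sigma^2) numerically gives the ML
# estimators discussed in Section 3.
```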
3. The joint asymptotic distribution of maximum likelihood estimators

Let $\beta = (\beta_1, \beta_2^{\mathrm T}, \beta_3)^{\mathrm T} = (d, \theta^{\mathrm T}, \sigma^2)^{\mathrm T}$ and $\hat\beta = (\hat\beta_1, \hat\beta_2^{\mathrm T}, \hat\beta_3)^{\mathrm T} = (\hat d, \hat\theta^{\mathrm T}, \hat\sigma^2)^{\mathrm T}$. Let $\beta_0 = (d_0, \theta_0^{\mathrm T}, \sigma_0^2)^{\mathrm T}$ be the true value of $\beta$. In order to simplify the notation, we shall use the same symbol $\beta$ to represent the true value $\beta_0$. Consider the likelihood equation:
$$0 = (mn)^{1/2}\left.\frac{\partial L_1}{\partial\beta}\right|_{\hat\beta} = (mn)^{1/2}\frac{\partial L_1}{\partial\beta} + \left(\left.\frac{\partial^2 L_1}{\partial\beta^2}\right|_{\beta^*}\right)(mn)^{1/2}(\hat\beta - \beta), \qquad (3.1)$$

where $|\beta^* - \beta| \le |\hat\beta - \beta|$, $\partial L_1/\partial\beta$ refers to the vector of derivatives of $L_1$ with respect to $\beta$, and $\partial^2 L_1/\partial\beta^2|_{\beta^*}$ denotes the matrix of second derivatives evaluated at $\beta = \beta^*$. Equation (3.1) yields

$$(mn)^{1/2}(\hat\beta - \beta) = -\left(\left.\frac{\partial^2 L_1}{\partial\beta^2}\right|_{\beta^*}\right)^{-1}(mn)^{1/2}\frac{\partial L_1}{\partial\beta}. \qquad (3.2)$$
Let $C$ be the parameter set defined by

$$C = \{\theta \in \mathbb{R}^p : \theta(z) \ne 0 \text{ for } |z| \le 1\},$$

where $\theta = (\theta_1, \theta_2, \ldots, \theta_p)$. Note that $\theta$ can be expressed as a continuous function $\theta(z_1, z_2, \ldots, z_p)$ of the zeros $z_1, z_2, \ldots, z_p$ of $\theta(z)$. The parameter set $C$ is therefore the image under $\theta$ of the set $\{(z_1, \ldots, z_p): |z_i| > 1,\ i = 1, \ldots, p\}$. Let $d \in D$, where $D = [\delta, \tfrac12 - \delta]$ with $0 < \delta < \tfrac14$, and $0 < \sigma^2 < \infty$. The following theorem gives the limit distribution of $\hat\beta$.
Theorem 3.1. As $m \to \infty$ and $n \to \infty$, we have

$$\sqrt{mn}\,\big(\hat d - d,\ (\hat\theta - \theta)^{\mathrm T},\ \hat\sigma^2 - \sigma^2\big)^{\mathrm T} \xrightarrow{d} N_{p+2}(0, \Sigma),$$
where

$$\Sigma = \begin{pmatrix} 6/\pi^2 & 0 & 0 \\ 0 & V^{-1} & 0 \\ 0 & 0 & 2\sigma^4 \end{pmatrix}$$

and $V$ is the $p \times p$ variance-covariance matrix of the stationary AR($p$) process

$$Z_t = \sum_{i=1}^{p}\theta_i Z_{t-i} + e_t, \qquad t = p+1, \ldots, 2p,$$

with $\{e_t\}$ i.i.d. $N(0, \sigma^2)$.

Proof. We shall show that, as $m \to \infty$ and $n \to \infty$,
(a) $\displaystyle \left.\frac{\partial^2 L_1}{\partial\beta^2}\right|_{\beta^*} \xrightarrow{\text{a.s.}} -\Sigma^{-1}$,

(b) $\displaystyle \sqrt{mn}\,\frac{\partial L_1}{\partial\beta} \xrightarrow{d} N(0, \Sigma^{-1})$.
Then, using Slutsky's theorem in conjunction with (3.2), we get the required result. From now on, for notational convenience, let us suppress the dependence on $d$ and $\theta$ and denote $\xi_{ts}^{(d,\theta)}$, $b_{ts}^{(d,\theta)}$, $u_{t-1}^{(d)}$ and $r_{s-1}^{(\theta)}$ by $\xi_{ts}$, $b_{ts}$, $u_{t-1}$ and $r_{s-1}$, respectively. We shall prove part (a) for one term only; the proofs for the convergences of the other terms follow along similar lines.
We have

$$\frac{\partial^2 L_1}{\partial\theta^2} = \frac{1}{2n}\sum_{s=1}^{n}\left\{\frac{1}{r_{s-1}^2}\left[\frac{\partial r_{s-1}}{\partial\theta}\right]\left[\frac{\partial r_{s-1}}{\partial\theta}\right]^{\mathrm T} - \frac{1}{r_{s-1}}\frac{\partial^2 r_{s-1}}{\partial\theta^2}\right\}$$
$$\quad - \frac{1}{mn\sigma^2}\sum_{t=1}^{m}\sum_{s=1}^{n}\frac{1}{u_{t-1}r_{s-1}}\left\{\left[\frac{\partial b_{ts}}{\partial\theta}\right]\left[\frac{\partial b_{ts}}{\partial\theta}\right]^{\mathrm T} + b_{ts}\frac{\partial^2 b_{ts}}{\partial\theta^2} - \frac{b_{ts}}{r_{s-1}}\left(\left[\frac{\partial b_{ts}}{\partial\theta}\right]\left[\frac{\partial r_{s-1}}{\partial\theta}\right]^{\mathrm T} + \left[\frac{\partial r_{s-1}}{\partial\theta}\right]\left[\frac{\partial b_{ts}}{\partial\theta}\right]^{\mathrm T}\right)\right.$$
$$\left.\quad - \frac{b_{ts}^2}{2r_{s-1}}\frac{\partial^2 r_{s-1}}{\partial\theta^2} + \frac{b_{ts}^2}{r_{s-1}^2}\left[\frac{\partial r_{s-1}}{\partial\theta}\right]\left[\frac{\partial r_{s-1}}{\partial\theta}\right]^{\mathrm T}\right\}. \qquad (3.3)$$
By Lemmas A.1-A.6 in the Appendix, it follows that the second term of (3.3) converges to $-\sigma^{-2}E[(\partial\xi_{11}/\partial\theta)(\partial\xi_{11}/\partial\theta)^{\mathrm T}]$ and that the other terms converge to zero as $m \to \infty$ and $n \to \infty$, uniformly in $d \in D$ and $\theta \in C$. Hence, as $m \to \infty$ and $n \to \infty$,

$$\frac{\partial^2 L_1}{\partial\theta^2} \xrightarrow{\text{a.s.}} -\frac{1}{\sigma^2}E\left[\frac{\partial\xi_{11}}{\partial\theta}\left(\frac{\partial\xi_{11}}{\partial\theta}\right)^{\mathrm T}\right]. \qquad (3.4)$$
Since $E[(\partial\xi_{11}/\partial\theta)(\partial\xi_{11}/\partial\theta)^{\mathrm T}] = \sigma^2 V$ and $E[\xi_{11}(\partial^2\xi_{11}/\partial\theta^2)] = 0$, we obtain, as $m \to \infty$ and $n \to \infty$,

$$\frac{\partial^2 L_1}{\partial\theta^2} \xrightarrow{\text{a.s.}} -V. \qquad (3.5)$$

Similarly, as $m \to \infty$ and $n \to \infty$,

$$\frac{\partial^2 L_1}{\partial d^2} \xrightarrow{\text{a.s.}} -\frac{1}{\sigma^2}E\left(\frac{\partial\xi_{11}}{\partial d}\right)^2 = -\frac{\pi^2}{6},$$

since $E[\xi_{11}\,\partial^2\xi_{11}/\partial d^2] = 0$ and $E(\partial\xi_{11}/\partial d)^2 = \sigma^2\pi^2/6$; see, for instance, Yajima (1985). Also,

$$\frac{\partial^2 L_1}{\partial(\sigma^2)^2} \xrightarrow{\text{a.s.}} (2\sigma^4)^{-1} - \sigma^{-6}E(\xi_{11}^2) = -(2\sigma^4)^{-1},$$
and $\partial^2 L_1/\partial\theta\,\partial d$, $\partial^2 L_1/\partial d\,\partial\sigma^2$ and $\partial^2 L_1/\partial\theta\,\partial\sigma^2$ all converge almost surely to zero. Consequently, as $m \to \infty$ and $n \to \infty$,

$$\left.\frac{\partial^2 L_1}{\partial\beta^2}\right|_{\beta^*} \xrightarrow{\text{a.s.}} -\Sigma^{-1}. \qquad (3.6)$$

Hence part (a) is proved. To prove part (b), let us consider
$\lambda^{\mathrm T}\partial L_1/\partial\beta$, where $\lambda^{\mathrm T} = [\lambda_1, \lambda_2^{\mathrm T}, \lambda_3]$ with $\lambda_2 = (\lambda_{21}, \lambda_{22}, \ldots, \lambda_{2p})^{\mathrm T}$. We have

$$\sqrt{mn}\,\lambda^{\mathrm T}\frac{\partial L_1}{\partial\beta} = -(mn)^{-1/2}\sum_{t=1}^{m}\sum_{s=1}^{n}\left\{\lambda_1\frac{b_{ts}}{\sigma^2 u_{t-1}r_{s-1}}\frac{\partial b_{ts}}{\partial d} + \lambda_2^{\mathrm T}\frac{b_{ts}}{\sigma^2 u_{t-1}r_{s-1}}\frac{\partial b_{ts}}{\partial\theta} - \frac{\lambda_3}{2\sigma^2}\left(\frac{b_{ts}^2}{\sigma^2 u_{t-1}r_{s-1}} - 1\right)\right\}$$
$$\quad + \frac{\lambda_1}{2}(mn)^{-1/2}\sum_{t=1}^{m}\sum_{s=1}^{n}\frac{1}{u_{t-1}}\frac{\partial u_{t-1}}{\partial d}\left(\frac{b_{ts}^2}{\sigma^2 u_{t-1}r_{s-1}} - 1\right) + \frac{1}{2}(mn)^{-1/2}\sum_{t=1}^{m}\sum_{s=1}^{n}\lambda_2^{\mathrm T}\frac{1}{r_{s-1}}\frac{\partial r_{s-1}}{\partial\theta}\left(\frac{b_{ts}^2}{\sigma^2 u_{t-1}r_{s-1}} - 1\right). \qquad (3.7)$$
Using Lemmas A.1 and A.2 and the fact that $E(b_{ts}^2) \le E(X_{ts}^2)$, it can be shown that the second and third terms of (3.7) converge to zero in probability as $m \to \infty$ and $n \to \infty$.
Hence, consider

$$(mn)^{-1/2}\sum_{t=1}^{m}\sum_{s=1}^{n}g_{ts},$$

where

$$g_{ts} = -\lambda_1\frac{b_{ts}}{\sigma^2 u_{t-1}r_{s-1}}\frac{\partial b_{ts}}{\partial d} - \lambda_2^{\mathrm T}\frac{b_{ts}}{\sigma^2 u_{t-1}r_{s-1}}\frac{\partial b_{ts}}{\partial\theta} + \frac{\lambda_3}{2\sigma^2}\left(\frac{b_{ts}^2}{\sigma^2 u_{t-1}r_{s-1}} - 1\right). \qquad (3.8)$$
Set

$$H_{mn} = \sum_{t=1}^{m}\sum_{s=1}^{n}g_{ts}. \qquad (3.9)$$
Let $\mathscr{F}_{mn}$ be the $\sigma$-algebra generated by $\{X_{ts}, t \le m \text{ and } s \le n\}$. We shall show that $\{H_{mn}, \mathscr{F}_{mn}\}$ is a martingale on the two-dimensional lattice, i.e., that

$$E(H_{mn}\mid\mathscr{F}_{ij}) = H_{ij} = \sum_{t=1}^{i}\sum_{s=1}^{j}g_{ts}, \quad \text{where } (i,j) \le (m,n). \qquad (3.10)$$
We shall prove this result for the case $i = m-1$ and $j = n-1$; the proof for other values of $i$ and $j$ follows along similar lines. Note that $H_{mn}$ can be written as

$$H_{mn} = H_{m-1,n-1} + \sum_{s=1}^{n}g_{ms} + \sum_{t=1}^{m-1}g_{tn}. \qquad (3.11)$$
Consequently,

$$E(H_{mn}\mid\mathscr{F}_{m-1,n-1}) = H_{m-1,n-1} + E\left(\sum_{s=1}^{n}g_{ms}\,\Big|\,\mathscr{F}_{m-1,n-1}\right) + E\left(\sum_{t=1}^{m-1}g_{tn}\,\Big|\,\mathscr{F}_{m-1,n-1}\right). \qquad (3.12)$$
The proof is complete once we show that

$$E\left(\sum_{s=1}^{n}g_{ms}\,\Big|\,\mathscr{F}_{m-1,n-1}\right) = E\left(\sum_{t=1}^{m-1}g_{tn}\,\Big|\,\mathscr{F}_{m-1,n-1}\right) = 0. \qquad (3.13)$$
We shall first show that

$$E\left(\sum_{s=1}^{n}g_{ms}\,\Big|\,\mathscr{F}_{m-1,n-1}\right) = 0. \qquad (3.14)$$

Since $\mathscr{F}_{m-1,n} \supset \mathscr{F}_{m-1,n-1}$, we have

$$E\left(\sum_{s=1}^{n}g_{ms}\,\Big|\,\mathscr{F}_{m-1,n-1}\right) = E\left[E\left(\sum_{s=1}^{n}g_{ms}\,\Big|\,\mathscr{F}_{m-1,n}\right)\Big|\,\mathscr{F}_{m-1,n-1}\right].$$
Observe that $\partial b_{ms}/\partial d$ is $\mathscr{F}_{m-1,n}$-measurable and

$$E(b_{ms}\mid\mathscr{F}_{m-1,n}) = 0 \quad \text{for } s = 1, \ldots, n. \qquad (3.15)$$

Also, note that

$$E(b_{ms}^2\mid\mathscr{F}_{m-1,n}) = \sigma^2 u_{m-1}r_{s-1} \quad \text{for } s = 1, \ldots, n.$$
Consequently, by the definition of $g_{ms}$ in (3.8), we have

$$E\left(\sum_{s=1}^{n}g_{ms}\,\Big|\,\mathscr{F}_{m-1,n}\right) = -E\left(\sum_{s=1}^{n}\lambda_2^{\mathrm T}\frac{b_{ms}}{\sigma^2 u_{m-1}r_{s-1}}\frac{\partial b_{ms}}{\partial\theta}\,\Big|\,\mathscr{F}_{m-1,n}\right). \qquad (3.16)$$
From (3.15) and (3.16), we have

$$E\left(\sum_{s=1}^{n}g_{ms}\,\Big|\,\mathscr{F}_{m-1,n-1}\right) = -E\left(\sum_{s=1}^{n}\lambda_2^{\mathrm T}\frac{b_{ms}}{\sigma^2 u_{m-1}r_{s-1}}\frac{\partial b_{ms}}{\partial\theta}\,\Big|\,\mathscr{F}_{m-1,n-1}\right) \qquad (3.17)$$

by the smoothing property of conditional expectation. Since $\mathscr{F}_{m,n-1} \supset \mathscr{F}_{m-1,n-1}$, we have

$$E\left(\sum_{s=1}^{n}\lambda_2^{\mathrm T}\frac{b_{ms}}{\sigma^2 u_{m-1}r_{s-1}}\frac{\partial b_{ms}}{\partial\theta}\,\Big|\,\mathscr{F}_{m-1,n-1}\right) = E\left[E\left(\sum_{s=1}^{n}\lambda_2^{\mathrm T}\frac{b_{ms}}{\sigma^2 u_{m-1}r_{s-1}}\frac{\partial b_{ms}}{\partial\theta}\,\Big|\,\mathscr{F}_{m,n-1}\right)\Big|\,\mathscr{F}_{m-1,n-1}\right]. \qquad (3.18)$$

Next observe that $\partial b_{ms}/\partial\theta$ is $\mathscr{F}_{m,n-1}$-measurable for $s = 1, \ldots, n$, and

$$E(b_{ms}\mid\mathscr{F}_{m,n-1}) = 0. \qquad (3.19)$$

From (3.17)-(3.19), we arrive at the result stated in Eq. (3.14). Similarly, we can show that

$$E\left(\sum_{t=1}^{m-1}g_{tn}\,\Big|\,\mathscr{F}_{m-1,n-1}\right) = 0.$$
Hence $\{H_{mn}, \mathscr{F}_{mn}\}$ is a martingale on a two-dimensional lattice. Now define

$$K_{mn}^2 = \sum_{t=1}^{m}\sum_{s=1}^{n}E[g_{ts}^2\mid\mathscr{F}_{t-1,s-1}] \quad \text{and} \quad S_{mn}^2 = E(K_{mn}^2).$$

Using the facts that

$$E(\partial\ln L/\partial d)^2 = -E(\partial^2\ln L/\partial d^2) \qquad (3.20)$$

and

$$E\left[\frac{\partial\ln L}{\partial\beta_i}\frac{\partial\ln L}{\partial\beta_j}\right] = -E\left[\frac{\partial^2\ln L}{\partial\beta_i\,\partial\beta_j}\right],$$

it can be readily shown that, as $m \to \infty$ and $n \to \infty$,

$$\operatorname*{p-lim}\,(mn)^{-1}K_{mn}^2 = \lim\,(mn)^{-1}S_{mn}^2 = \lambda_1^2\sigma^{-2}E(\partial\xi_{11}/\partial d)^2 + \lambda_2^{\mathrm T}V\lambda_2 + \lambda_3^2(2\sigma^4)^{-1}. \qquad (3.21)$$
We shall verify Eq. (3.21) for one term only. That is, we shall show that, as $m \to \infty$ and $n \to \infty$,

$$\frac{1}{\sigma^2 mn}\sum_{t=1}^{m}\sum_{s=1}^{n}E\left[\frac{b_{ts}}{u_{t-1}r_{s-1}}\frac{\partial b_{ts}}{\partial d}\right]^2 \longrightarrow \frac{1}{\sigma^2}E\left(\frac{\partial\xi_{11}}{\partial d}\right)^2. \qquad (3.22)$$
The proofs for the convergences of the other terms follow along similar lines. Using the result in (3.20), we have
$$\frac{1}{\sigma^2 mn}\sum_{t=1}^{m}\sum_{s=1}^{n}E\left[\frac{b_{ts}}{u_{t-1}r_{s-1}}\frac{\partial b_{ts}}{\partial d}\right]^2 = \frac{1}{mn\sigma^2}\sum_{t=1}^{m}\sum_{s=1}^{n}\left\{E\left[\frac{1}{u_{t-1}r_{s-1}}\left(\frac{\partial b_{ts}}{\partial d}\right)^2\right] + E\left[\frac{b_{ts}}{u_{t-1}r_{s-1}}\frac{\partial^2 b_{ts}}{\partial d^2}\right] - E\left[\frac{b_{ts}}{u_{t-1}^2 r_{s-1}}\frac{\partial b_{ts}}{\partial d}\frac{\partial u_{t-1}}{\partial d}\right]\right\}. \qquad (3.23)$$

Since $E[b_{ts}(\partial^2 b_{ts}/\partial d^2)] = E[b_{ts}(\partial b_{ts}/\partial d)] = 0$, Eq. (3.23) becomes

$$\frac{1}{\sigma^2 mn}\sum_{t=1}^{m}\sum_{s=1}^{n}E\left[\frac{b_{ts}}{u_{t-1}r_{s-1}}\frac{\partial b_{ts}}{\partial d}\right]^2 = \frac{1}{mn\sigma^2}\sum_{t=1}^{m}\sum_{s=1}^{n}E\left[\frac{1}{u_{t-1}r_{s-1}}\left(\frac{\partial b_{ts}}{\partial d}\right)^2\right] \longrightarrow \frac{1}{\sigma^2}E\left[\frac{\partial\xi_{11}}{\partial d}\right]^2, \qquad (3.24)$$
by the monotone convergence theorem. Next observe that
$$E\left[\frac{1}{mn}\sum_{t=1}^{m}\sum_{s=1}^{n}g_{ts}^2 - \frac{S_{mn}^2}{mn}\right]^2 \le \frac{1}{(mn)^2}\sum_{t=1}^{m}\sum_{s=1}^{n}E(g_{ts}^4). \qquad (3.25)$$
Since the process $\{X_{ts}\}$ is stationary and ergodic, by the ergodic theorem we have, as $m \to \infty$ and $n \to \infty$,

$$(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}g_{ts}^2 \xrightarrow{\text{a.s.}} E(g_{11}^2)$$

and hence

$$(mn)^{-2}\sum_{t=1}^{m}\sum_{s=1}^{n}g_{ts}^4 \xrightarrow{\text{a.s.}} 0. \qquad (3.26)$$
By the monotone convergence theorem, in conjunction with (3.24)-(3.26), we conclude that (3.21) holds as $m \to \infty$ and $n \to \infty$.
Since $\{H_{mn}, \mathscr{F}_{mn}\}$ is a martingale on a two-dimensional lattice, and since (i) $K_{mn}^2 S_{mn}^{-2} \xrightarrow{p} 1$ and (ii) a Lindeberg-type condition holds (as follows from (3.25) and (3.26)), it follows that

$$S_{mn}^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}g_{ts} \xrightarrow{d} N(0, 1) \quad \text{as } m \to \infty \text{ and } n \to \infty. \qquad (3.27)$$
The result stated in (3.27) can be proved via a direct extension of the martingale central limit theorem of Brown (1971, p. 60). Hence we obtain the desired result.
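To illustrate Theorem 3.1, the sketch below (ours) assembles $\Sigma = \operatorname{diag}(6/\pi^2, V^{-1}, 2\sigma^4)$, computing $V$ from the Yule-Walker equations. We read $V$ as the AR($p$) autocovariance matrix normalized to unit innovation variance, so that for an AR(1) the middle block reduces to the familiar asymptotic variance $1 - \theta_1^2$; this normalization is our reading of the theorem, not stated in this form in the paper.

```python
# A sketch (ours) of the asymptotic covariance in Theorem 3.1:
# Sigma = diag(6/pi^2, V^{-1}, 2 sigma^4).
import numpy as np
from scipy.linalg import toeplitz

def ar_acvf(theta, s2=1.0):
    """Autocovariances gamma(0..p-1) of an AR(p) with innovation variance s2,
    solved from the Yule-Walker equations:
      gamma(0) - sum_i theta_i gamma(i)     = s2
      gamma(k) - sum_i theta_i gamma(|k-i|) = 0,  k = 1..p."""
    p = len(theta)
    A = np.zeros((p + 1, p + 1))
    for k in range(p + 1):
        A[k, k] += 1.0
        for i, th in enumerate(theta, start=1):
            A[k, abs(k - i)] -= th
    b = np.zeros(p + 1)
    b[0] = s2
    return np.linalg.solve(A, b)[:p]

def asymptotic_cov(theta, sigma2):
    p = len(theta)
    V = toeplitz(ar_acvf(theta))               # unit-innovation-variance reading
    S = np.zeros((p + 2, p + 2))
    S[0, 0] = 6.0 / np.pi ** 2                  # avar of sqrt(mn)(d_hat - d)
    S[1:p + 1, 1:p + 1] = np.linalg.inv(V)      # avar of sqrt(mn)(theta_hat - theta)
    S[p + 1, p + 1] = 2.0 * sigma2 ** 2         # avar of sqrt(mn)(sigma2_hat - sigma2)
    return S

# AR(1), theta_1 = 0.5: V = [[1/(1 - 0.25)]], so the theta block is 0.75 = 1 - theta_1^2.
print(asymptotic_cov([0.5], 1.0))
```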
Appendix: Some properties of predictors and prediction error variances

Let $\hat X_{ts} = E(X_{ts}\mid F_{t-1,s})$, $\tilde X_{ts} = E(X_{ts}\mid F_{t,s-1})$, $u_\infty^{(d)} = \sum_{k=0}^{\infty}(\eta_k^{(d)})^2$ and $\gamma^{(\theta)} = \sum_{j=0}^{\infty}(\psi_j^{(\theta)})^2$. Define

$$u_{t-1}^{(d)} = E(X_{ts} - \hat X_{ts})^2/(\sigma^2\gamma^{(\theta)}), \qquad r_{s-1}^{(\theta)} = E(X_{ts} - \tilde X_{ts})^2/(\sigma^2 u_\infty^{(d)}), \quad 2 \le s \le p,$$

and $r_j^{(\theta)} = 1$ for all $j \ge p$. It can then be shown that

$$\operatorname{Var}(b_{ts}) = \sigma^2 u_{t-1}^{(d)} r_{s-1}^{(\theta)},$$

where $b_{ts}$ is as defined in (2.11). The following technical lemmas are needed in the proof of the asymptotic normality of the ML estimators (Theorem 3.1).

Lemma A.1. We have

$$u_{t-1}^{(d)} \to 1 \quad \text{and} \quad r_{s-1}^{(\theta)} \to 1 \quad \text{as } t \to \infty \text{ and } s \to \infty,$$

uniformly in $d \in D$ and $\theta \in C$.

Proof. First note that $\sigma^{-2}E\{[b_{ts}^{(d,\theta)}]^2\} = u_{t-1}^{(d)}r_{s-1}^{(\theta)}$. Yajima (1985, p. 309) showed that

$$u_{t-1}^{(d)} = \{\Gamma(t+1-d)\}^{-2}\{\Gamma(t+1)\Gamma(t+1-2d)\},$$
and hence $u_{t-1}^{(d)} \to 1$ as $t \to \infty$, uniformly in $d \in D$. Moreover, $r_j^{(\theta)} = 1$ for $j \ge p$, and it readily follows (see Remark 3 of Brockwell and Davis, 1987, p. 169) that $r_{s-1}^{(\theta)} \to 1$ as $s \to \infty$, uniformly in $\theta \in C$. Hence we have the assertion.
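Yajima's closed form is convenient for checking Lemma A.1 numerically; the short script below (ours) evaluates $u_{t-1}^{(d)}$ through log-gamma functions and shows the convergence to 1, at the $O(t^{-1})$ rate anticipated by Lemma A.2(i).

```python
# A quick numerical check (ours) of u_{t-1}^{(d)} -> 1 over d in D = [delta, 1/2 - delta].
import numpy as np
from scipy.special import gammaln

def u(t, d):
    """u_{t-1}^{(d)} = Gamma(t+1) Gamma(t+1-2d) / Gamma(t+1-d)^2."""
    return np.exp(gammaln(t + 1) + gammaln(t + 1 - 2 * d) - 2 * gammaln(t + 1 - d))

for t in (1, 10, 100, 1000):
    print(t, [round(u(t, d), 6) for d in (0.05, 0.25, 0.45)])
# The printed values decrease to 1 roughly like 1 + O(1/t).
```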
Lemma A.2. (i) $\partial^{(i)}u_{t-1}^{(d)}/\partial d^{(i)} = O(t^{-1})$, $i = 1, 2$, uniformly in $d \in D$.

(ii) $\partial^{(i)}r_{s-1}^{(\theta)}/\partial\theta^{(i)} = o(1)$ as $s \to \infty$, uniformly in $\theta \in C$, for $i = 1, 2$.

Proof. Since $u_{t-1}^{(d)} = [\Gamma(t+1-d)]^{-2}\{\Gamma(t+1)\Gamma(t+1-2d)\}$, we have

$$\frac{\partial u_{t-1}^{(d)}}{\partial d} = 2u_{t-1}^{(d)}\{G(t+1-d) - G(t+1-2d)\},$$

where $G(z) = \partial\ln\Gamma(z)/\partial z$. We have

$$\left|\frac{\partial u_{t-1}^{(d)}}{\partial d}\right| \le M\,t^{-1},$$

where $M$ is a constant independent of $d$. Similarly we can show that

$$\frac{\partial^2 u_{t-1}^{(d)}}{\partial d^2} = O(t^{-1}).$$

Hence the proof of part (i). Part (ii) follows directly from Lemma A.1. This completes the proof.

Lemma A.3. Let

$$\xi_{ts}^{(d,\theta)} = \nabla_t^d\,\theta(B_s)X_{ts} = -\sum_{k=0}^{\infty}\sum_{j=0}^{p}\pi_k^{(d)}\theta_j X_{t-k,s-j},$$

where $\theta_0 = -1$. Then

$$(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}[\xi_{ts}^{(d,\theta)}]^2 \xrightarrow{\text{a.s.}} E[\xi_{11}^{(d,\theta)}]^2 \quad \text{as } m \to \infty \text{ and } n \to \infty,$$

uniformly in $d \in D$ and $\theta \in C$.
uniformly in d~D and O~C. Proof. Note that {Xt~} is regular since it can be expressed in the linear form
/=Ok=O
where $\varphi_k^{(d)} = \eta_k^{(d)}$. Since $\{X_{ts}\}$ is regular and strictly stationary, it follows that $\{X_{ts}\}$ is ergodic and hence $\{\xi_{ts}^{(d,\theta)}\}$ is also ergodic. By the ergodic theorem we have, as $m \to \infty$ and $n \to \infty$,

$$(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}[\xi_{ts}^{(d,\theta)}]^2 \xrightarrow{\text{a.s.}} E[\xi_{11}^{(d,\theta)}]^2$$

for any $d \in D$ and $\theta \in C$. The uniformity in $d \in D$ and $\theta \in C$ of the above convergence is proved as follows. We observe that

$$|\pi_k^{(d)}| \le [\Gamma(-d)\,k^{1+d}]^{-1} \quad \text{for } k = 1, 2, 3, \ldots.$$

Let

$$M = \sup_{d \in D}\frac{1}{\Gamma(-d)},$$

where the gamma function $\Gamma(x)$ is defined as

$$\Gamma(x) := \begin{cases} \int_0^\infty t^{x-1}e^{-t}\,\mathrm{d}t, & x > 0, \\ \infty, & x = 0, \\ x^{-1}\Gamma(1+x), & x < 0. \end{cases}$$

Hence,

$$|\pi_k^{(d)}| \le M k^{-(1+d)} \le M k^{-(1+\delta)} \quad \text{for } k = 1, 2, 3, \ldots. \qquad (\mathrm{A.1})$$
Next note that $\theta(z)$ can be expressed as

$$\theta(z) = \prod_{i=1}^{p}(1 - z_i^{-1}z), \qquad (\mathrm{A.2})$$

where $|z_i| > 1$ for $i = 1, \ldots, p$. Let $\{M_i, i = 1, 2, 3, 4, 5\}$ be positive constants independent of $d$ and $\theta$. From (A.1), (A.2) and the fact that $|z_i| > 1$, we have

$$|\xi_{ts}^{(d,\theta)}| = \left|\sum_{k=0}^{\infty}\pi_k^{(d)}\left\{\prod_{i=1}^{p}(1 - z_i^{-1}B_s)\right\}X_{t-k,s}\right| \le \sum_{k=0}^{\infty}|\pi_k^{(d)}|(1+B_s)^p|X_{t-k,s}| \le M_1\sum_{l=0}^{p}\sum_{k=1}^{\infty}|X_{t-k,s-l}|\,k^{-(1+\delta)}. \qquad (\mathrm{A.3})$$
Similarly,

$$|\partial\xi_{ts}^{(d,\theta)}/\partial\theta| \le M_2\sum_{l=1}^{p}\sum_{k=1}^{\infty}\|X_{t-k}\|\,k^{-(1+\delta)},$$

where $X_{t-k} = (X_{t-k,s-1}, \ldots, X_{t-k,s-p})^{\mathrm T}$. The rest of the proof of the lemma can be completed by following the arguments in Lemma 3.1 of Yajima (1985).

Lemma A.4. Let $W_{ts}^{(d,\theta)} = \xi_{ts}^{(d,\theta)} - b_{ts}^{(d,\theta)}$. Then

$$(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}[W_{ts}^{(d,\theta)}]^2 \xrightarrow{\text{a.s.}} 0$$

as $m \to \infty$ and $n \to \infty$, uniformly in $d \in D$ and $\theta \in C$.
Proof. Define

$$W_{ts,1}^{(d,\theta)} = -\sum_{i=0}^{t-1}\sum_{l=0}^{p}(\pi_i^{(d)} - \phi_{t-1,i}^{(d)})\,\theta_l X_{t-i,s-l}$$

and

$$W_{ts,2}^{(d,\theta)} = -\sum_{i=t}^{\infty}\sum_{l=0}^{p}\pi_i^{(d)}\theta_l X_{t-i,s-l},$$

where the $\phi_{t-1,i}^{(d)}$ are the finite-past predictor coefficients, so that $W_{ts}^{(d,\theta)} = W_{ts,1}^{(d,\theta)} + W_{ts,2}^{(d,\theta)}$ and

$$(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}[W_{ts}^{(d,\theta)}]^2 \le 2(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}[W_{ts,1}^{(d,\theta)}]^2 + 2(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}[W_{ts,2}^{(d,\theta)}]^2.$$
First we shall show that, as $m \to \infty$ and $n \to \infty$,

$$(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}[W_{ts,2}^{(d,\theta)}]^2 \xrightarrow{\text{a.s.}} 0,$$

uniformly in $d \in D$ and $\theta \in C$. From (A.3), we have

$$|W_{ts,2}^{(d,\theta)}| \le M_3\sum_{j=t}^{\infty}\sum_{l=0}^{p}|X_{t-j,s-l}|\,j^{-(1+\delta)} =: Z_{ts,2},$$

and, by the stationarity of the process $\{X_{ts}\}$,

$$E[Z_{ts,2}^2] \le M_4\left[\sum_{j=t}^{\infty}j^{-(1+\delta)}\right]^2 E(X_{11}^2) \le M_5\,t^{-2\delta}E(X_{11}^2).$$

Hence

$$E\left[(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}Z_{ts,2}^2\right]^2 \le M\,n^{-1}m^{-4\delta},$$
which leads to the assertion by the arguments given in Lemma 3.3 of Yajima (1985). Next, we shall show that

$$(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}[W_{ts,1}^{(d,\theta)}]^2 \xrightarrow{\text{a.s.}} 0,$$

uniformly in $d \in D$ and $\theta \in C$. Note that $W_{ts,1}^{(d,\theta)}$ can be written as

$$W_{ts,1}^{(d,\theta)} = -\sum_{k=0}^{t-1}\sum_{l=0}^{p}\chi_{k,t-1}^{(d)}\pi_k^{(d)}\theta_l X_{t-k,s-l},$$

where

$$\chi_{k,t-1}^{(d)} = 1 - \phi_{t-1,k}^{(d)}/\pi_k^{(d)} = 1 - \Gamma(t)\Gamma(t-k-d)[\Gamma(t-d)\Gamma(t-k)]^{-1}.$$

Following the arguments given in Lemma 3.3 of Yajima (1985), we have

$$|\chi_{k,t-1}^{(d)}| \le M\,k(t-k)^{-1} \quad \text{for } k \le t^\beta, \qquad |\chi_{k,t-1}^{(d)}| \le M\,t^\beta(t-k)^{-1} \quad \text{for } t^\beta \le k \le t-2,$$

where $(1+2\delta)^{-1} < \beta < 1$.
ta
IW,(,aL~I
P
Z IX,-k,~-tlk-~(t-k) -'
Lk=O/=O t-3
-4-
p
~
t-1
IX,-k,s-l[tak-(Z+l)(t--k)-a+
k=tt~+ 1 l = 0
~
~
/
IX,-~,s-~lt-'
k = t - 21=O
=M(Z~L I "~ z t,(1)s, 2 ~'
~ t( ". ... . L
(A.4)
(say).
Clearly,
(mn)-I :
:[-7(1)32 L ~ t , s, a d
.... > 0.
t=ls=l
Since the process $\{X_{ts}\}$ is stationary, from (A.4) it follows that

$$E\left[(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}(Z_{ts,1}^{(1)})^2\right]^2 \le M\,n^{-1}m^{2(2\beta-2-2\delta\beta)}E(X_{11}^4)$$

and

$$E\left[(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}(Z_{ts,2}^{(1)})^2\right]^2 \le M\,n^{-1}m^{2(1-\beta-2\delta\beta)}E(X_{11}^4).$$

Since $(2\beta - 2 - 2\delta\beta) < 0$ and $(1 - \beta - 2\delta\beta) < 0$, we have, as $m \to \infty$ and $n \to \infty$,

$$(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}[Z_{ts,1}^{(1)}]^2 \xrightarrow{\text{a.s.}} 0 \quad \text{and} \quad (mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}[Z_{ts,2}^{(1)}]^2 \xrightarrow{\text{a.s.}} 0.$$
Hence we have the assertion.

The following lemmas (Lemmas A.5 and A.6) can be obtained by proceeding along similar lines as those in Lemmas A.3 and A.4. The proofs are omitted.

Lemma A.5.

(i) $(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}\{\partial\xi_{ts}^{(d,\theta)}/\partial d\}^2 \xrightarrow{\text{a.s.}} E[\partial\xi_{11}^{(d,\theta)}/\partial d]^2$,

(ii) $(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}\{\partial^2\xi_{ts}^{(d,\theta)}/\partial d^2\}^2 \xrightarrow{\text{a.s.}} E[\partial^2\xi_{11}^{(d,\theta)}/\partial d^2]^2$,

(iii) $(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}\{\partial\xi_{ts}^{(d,\theta)}/\partial\theta\}^2 \xrightarrow{\text{a.s.}} E[\partial\xi_{11}^{(d,\theta)}/\partial\theta]^2$,

(iv) $(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}\{\partial^2\xi_{ts}^{(d,\theta)}/\partial\theta^2\}^2 \xrightarrow{\text{a.s.}} E[\partial^2\xi_{11}^{(d,\theta)}/\partial\theta^2]^2$,

(v) $(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}\{\partial^2\xi_{ts}^{(d,\theta)}/\partial d\,\partial\theta\}^2 \xrightarrow{\text{a.s.}} E[\partial^2\xi_{11}^{(d,\theta)}/\partial d\,\partial\theta]^2$

as $m \to \infty$ and $n \to \infty$, uniformly in $d \in D$ and $\theta \in C$.

Lemma A.6.

(i) $(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}\{\partial W_{ts}^{(d,\theta)}/\partial d\}^2 \xrightarrow{\text{a.s.}} 0$,

(ii) $(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}\{\partial^2 W_{ts}^{(d,\theta)}/\partial d^2\}^2 \xrightarrow{\text{a.s.}} 0$,

(iii) $(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}\{\partial W_{ts}^{(d,\theta)}/\partial\theta\}^2 \xrightarrow{\text{a.s.}} 0$,

(iv) $(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}\{\partial^2 W_{ts}^{(d,\theta)}/\partial\theta^2\}^2 \xrightarrow{\text{a.s.}} 0$,

(v) $(mn)^{-1}\sum_{t=1}^{m}\sum_{s=1}^{n}\{\partial^2 W_{ts}^{(d,\theta)}/\partial d\,\partial\theta\}^2 \xrightarrow{\text{a.s.}} 0$

as $m \to \infty$ and $n \to \infty$, uniformly in $d \in D$ and $\theta \in C$.
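The lemmas above can also be checked by simulation. The sketch below (ours) reuses `simulate` from the Section 2 sketch, applies the truncated filter $\nabla_t^d\theta(B_s)$ to recover $\xi_{ts}$, and confirms that the lattice average of $\xi_{ts}^2$ is close to $E[\xi_{11}]^2 = \sigma^2$ at the true parameters, as Lemma A.3 asserts (up to truncation and edge effects).

```python
# A Monte Carlo sanity check (ours) of Lemma A.3 at the true (d, theta_1).
import numpy as np

def pi_d(d, K):
    """Fractional-differencing coefficients: pi_0 = 1, pi_k = pi_{k-1} (k-1-d)/k."""
    out = np.ones(K)
    for k in range(1, K):
        out[k] = out[k - 1] * (k - 1 - d) / k
    return out

def xi(X, d, theta1):
    """Truncated xi_{ts} = nabla_t^d (1 - theta_1 B_s) X_{ts}."""
    m, _ = X.shape
    c = pi_d(d, m)                            # truncate differencing at the sample edge
    Y = X[:, 1:] - theta1 * X[:, :-1]         # theta(B_s) X
    Z = np.zeros_like(Y)
    for t in range(m):
        Z[t] = c[:t + 1][::-1] @ Y[:t + 1]    # sum_k pi_k Y_{t-k,s}
    return Z

X = simulate(0.3, 0.5, 120, 120, rng=0)       # from the Section 2 sketch
Z = xi(X, 0.3, 0.5)
print(np.mean(Z[40:, :] ** 2))                # should be near sigma^2 = 1
```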
Acknowledgement

I.V. Basawa's work was partially supported by grants from the Office of Naval Research and from the National Science Foundation. We thank the referees for a careful reading and helpful suggestions.
References

Anderson, T.W. (1978). Repeated measurements on autoregressive process. J. Amer. Statist. Assoc. 73, 371-378.
Basawa, I.V. and L. Billard (1989). Large sample inference for a regression model with autocorrelated errors. Biometrika 76, 283-288.
Basawa, I.V., L. Billard and R. Srinivasan (1984). Large sample tests for homogeneity for time series models. Biometrika 71, 203-206.
Brockwell, P.J. and R.A. Davis (1987). Time Series: Theory and Methods. Springer, New York.
Brown, B.M. (1971). Martingale central limit theorems. Ann. Math. Statist. 42, 59-66.
Dahlhaus, R. and H. Kunsch (1987). Edge effects and efficient parameter estimation for stationary random fields. Biometrika 74, 877-882.
Fox, R. and M.S. Taqqu (1986). Large sample properties of parameter estimates for strongly dependent stationary Gaussian time series. Ann. Statist. 14, 517-532.
Granger, C.W. and R. Joyeux (1980). An introduction to long-memory time series models and fractional differencing. J. Time Ser. Anal. 1, 15-29.
Greene, M.T. and B.D. Fielitz (1977). Long-term dependence in common stock returns. J. Financial Economics 4, 339-349.
Greene, M.T. and B.D. Fielitz (1979). The effect of long term dependence on risk return models of common stocks. Oper. Res. 27, 944-951.
Guyon, X. (1982). Parameter estimation for a stationary process on a d-dimensional lattice. Biometrika 69, 95-105.
Hosking, J.R.M. (1981). Fractional differencing. Biometrika 68, 165-176.
Kim, Y.M. and I.V. Basawa (1992). Empirical Bayes estimation for first-order autoregressive process. Austral. J. Statist. 34, 105-114.
Lawrance, A.J. and N.T. Kottegoda (1977). Stochastic modelling of riverflow time series. J. Roy. Statist. Soc. Ser. A 140, 1-27.
Martin, R.J. (1979). A subclass of lattice processes applied to a problem in planar sampling. Biometrika 66, 209-217.
Martin, R.J. (1990). The use of time series models and methods in the analysis of agricultural field trials. Comm. Statist. Theory Methods 19, 55-81.
Tjostheim, D. (1978). Statistical spatial series modeling. Adv. in Appl. Probab. 10, 130-154.
Tjostheim, D. (1983). Statistical spatial series modeling II: Some further results on unilateral lattice processes. Adv. in Appl. Probab. 15, 562-584.
Whittle, P. (1954). On stationary processes in the plane. Biometrika 41, 434-449.
Yajima, Y. (1985). On estimation of long-memory time series models. Austral. J. Statist. 27, 303-320.
Yajima, Y. (1991). Asymptotic properties of the LSE in a regression model with long-memory stationary errors. Ann. Statist. 19, 158-177.