A three-dimensional unilateral autoregressive lattice process

A three-dimensional unilateral autoregressive lattice process

Journal of Statistical Planning and Inference 59 t1997) 1 18 ELSEVIER journal of statistical planning and inference A three-dimensional unilateral ...

754KB Sizes 2 Downloads 30 Views

Journal of Statistical Planning and Inference 59 t1997) 1 18

ELSEVIER

journal of statistical planning and inference

A three-dimensional unilateral autoregressive lattice process R.J. M a r t i n School of Mathematics and Statistics, The Unit,ersity, Shelfield $3 7RH, UK

Received 8 March 1995; revised 31 January 1996

Abstract

The three-dimensional nearest-neighbour process is considered, and some of its properties are derived. It is shown that the correlations are difficult to obtain, and that it is possible for them to decay very slowly.

AMS class!ficatiom 62M30, 62M 10, 62M20 Keywords: Autoregressive processes; Lattice processes; Spatial processes: Unilateral processes

1. Introduction

The class of stationary autoregressive-moving average (ARMA) models has been widely used in time series. Properties of the models can be found relatively easily. parameter estimation is not difficult, and asymptotic distributions of the estimators can be found. Some of this ease of use is shared by the unilateral ARMAs in the plane, in which the dependence equation for the value at a site involves values on 'past' sites; see Martin (1996) and the references therein. However. Whittle (1954) showed there are considerable difficulties in extending AR processes in the plane to the more intuitively appealing bilateral ARs, usually now known as Simultaneous ARs (SAR), in which dependence involves both 'past' and 'future' values. There are also considerable difficulties associated with the more fruitful extension to Conditional autoregressions (CAR); see, for example, Besag (1974). Because of their apparent simplicity, planar ARMAs have continued to be of interest; see, for example, Basu and Reinsel (1993). Although general properties of unilateral ARMAs, SARs and CARs are easily obtained, there has been little specific work on processes in d > 2 dimensions. There has been some interest in using unilateral three-dimensional processes in geostatistics (Sharp and Aroian, 1985). In this paper, some results on a simply-defined d-dimensional unilateral quadrant AR 0378-3758,/97/$17.00 :~ 1997 Elsevier Science BV. All rights reserved PIl S03 7 8 - 3 7 5 8 ( 9 6 ) 0 0 0 9 0 - 0

2

R.J. Martin/Journal oJ'Statistical Planning and Inference 59 (1997) 1-18

process are given, with particular interest in the case d = 3. Even the three-dimensional process has some very complicated properties.

2. Preliminaries The main interest here is in a particular three-dimensional process, the threedimensional nearest-neighbour process NN(3). In order to understand the NN(3) better, and the problems that arise for it, some results are given for the general d-dimensional NN(d) process, and methods that succeed for the one- and twodimensional processes are examined. Some general definitions for a d-dimensional process are given first, using vector notation. Examples are given in Sections 3 and 4. General results on the existence of stationary d-dimensional processes, and their Wold decomposition are given in G u y o n (1982), and for unilateral processes in Tjostheim (1978). Consider a zero mean stationary process {xl} on a (possibly infinite) d-dimensional regular rectangular lattice. It is assumed that the process is Gaussian, or that only the second-order properties are relevant. It will be assumed here that there is a systematic nested ordering of the suffices and the sites, e.g. a lexicographic ordering of sites when d = 3. Let 79 = E(xixl +g) be the covariance of the x's at lag g, with ~x2 = 70, and let pg = 7./70 be the correlation at lag g. It is always true that pg = p _gV 0. Other symmetries in the correlations may result from the definition of the process; see Martin (1995) for the case d = 2. The (non-normalised) spectral density function (sdf) h(~o) is defined as h(~o)=

~ g--

7, cos(g'o)), c~

where oo represents the vector with each element 0% etc. The inverse relationship is important for processes in d > 1 dimensions, and is

7g = (2re)-e

f~h(co)cos(g'o~)de~.

(1)

Except in some special cases, and a limited use of recursions, Eq. (1) must be used to obtain the correlations of SAR and CAR processes. The dl-dimensional margin of the process has covariance structure yg = ?'g,where, after permuting the order of the dimensions so that the omitted d2 = d - dl dimensions are last, g = (g;O'a2)'. Thus, if co = (o)i,¢o~)', the sdf of this marginal process is h\(dl + 1 ..... d ) ( ~ l )

= (2rC)-da ~ ~ h ( ~ ) do) 2 • .j-

(2)

R.J. ,kIarlin / J o u r n a l o f Statistical P A m n i n x a n d lq/i'rence 59 (1997) I

1~

3

Finally, note that if all the marginal processes can be simulated exactly, it is possible to simulate a d-dimensional unilateral AR process exactly using the AR equation and the marginal values. If this is possible, it is then also possible to obtain the exact Gaussian likelihood; see, for example, Martin (1995).

3. The N N (d) process 3.1. Generalities o / the N N ( d ) p r o c e s s

The stationary d-dimensional nearest-neighbour process NN(d) is defined as a unilateral q u a d r a n t autoregression with dependence on the d adjacent preceding neighbours, one in each direction. The generating equation is given by -V-i,.i . . . . . .

i~ =

2(1.0 . . . . . 0 Xi~

l . i 2 , . . , i , ~ -~ '3(0, 1,0 . . . .

2 . . . . it l,id

-}- "~-0,0 ..... 0 . 1 X i l . i

OXi~.i2

l.is . . . . . i d 2c- " ' "

1 -}- ~il,f z . . . . i,i,

where the ~:i,.i...... i., are uncorrelated, with zero mean and variance 0-,2. Let T, denote the backshift operator, where T , , x i , . i ...... i,, reduces the uth sulfix of x by 1, and let the subscript [u] denote the row d-vector with a 1 in the uth position and 0's elsewhere. Assume the %,1 are non-zero Vu = 1 .... , d (or else the process is an NN(d'), d' < d). Let T = (T1 . . . . . Tj} and z = (z~ . . . . . z,~), and let A,da} denote the AR-generating function, d

na(z}

1-

y , ~,,lz,. u

l

Then the N N ( d ) model can be written

Ad(T)>,-i

=

1 --

~-[u] T u

xi =

.'(i -

l¢= 1

~][u] -?(i

L,,l =

~]i.

(I}

it: [

Thus, provided Z [:%11 < 1 (the stationarity condition),

xi = ~A,I(T))

si=

1 --

~I,,IT,,

~:i,

(4)

I1=1

which can be expanded to obtain the infinite unilateral q u a d r a n t moving-average representation. Using E(s~s~ +~) 0 unless g = 0. the series (4) for x~ and x~ +~ can then be used to obtain a series representation for ;,~ = E(x~x~, ~t. The autocovariance function (acgf} is C l ( Z ) ~- a , 7 / ~ A d ( z ) A d ( z

1)

:

E3'u

2°,

4

R.J. Martin~Journal of Statistical Planning and Inference 59 (1997) 1 18

where z - a denotes I] z/-1, etc. The sdf ha(to), as when d = 1, 2, is equal to F ( e i'°) (see, for example, Martin, 1995). Thus

,o,)} '

/ {1 .+

Z ~.~ u=l

.

.

2 Z ~Eu~COSO).+ 2 u=l

y

~.~Es~COS(~.

}

~) .

u
(5) If F(z) = a z / z vjz i, the conditional (CAR) distribution of xl given the values at all other sites (on an infinite lattice) is given, for a G a u s s i a n process, by the conditional m e a n and variance (Besag, 1974):

E(x, lxj, i ¢ j ) = V o

1 ~ l~jXi-j,

var(xilxj, i # j ) = a 2 / V o .

(6)

j#O

N o t e that Fd(Z), ha(cO) and E(xi[xy, i c j ) are essentially equivalent. F o r the NN(d) d 2 these become: var(xilxj, i # j } = a~/{1 + Zu=l ~tu]}, and

E(xiIxj, i # j ) =

{~t.~(x, ~

Eo~ +

x/+ [.]) --

u=

'

Z (X[u]~[s] u
(Xi-[u]+[s] "~ Xi+[u]-[s])

}

Recurrences in the covariances (or correlations) can also be obtained. Multiplying both sides of (3) by x~_g, and taking expectations gives d

(8)

2 Pg = ~ ~[.]Pg-[u] + ~x~(g)/ O-x,

u=l

where 7x~(g) = E(x~ + ge~) is the (X, e)-covariance. The term 7x~(g) is zero except when g, ~< 0 Vu. W h e n g = 0, Yx~(g) = o2. Thus, for example, Eq. (8) with g = 0 and O = Is], respectively, gives d

2

2

O-~/O-x = 1 -- ~ ~[,]PtuJ, u=l

d

P[s] = ~ ~[,lPt,l-ts],

s = 1 . . . . . d.

(9)

u=l

A useful simplification of the N N ( d ) is the equal-parameter case, when ~tul = ~ Vu, with [~[ < 1/d. This has pg = Peg for any p e r m u t a t i o n matrix P. Equations (9) then b e c o m e O-2/O-~ = 1 - do,p, and p = ~{1 + (d - 1)pa}, where p denotes the correlation between adjoining sites, Prvj, v = 1 .... , d, and Pa denotes two-dimensional diagonal correlations P[ul-[s], u # s, such as P [ 1 ] - [ 2 ] : P l , 1,0 . . . . . 0 .

3.2. The NN(1) process W h e n d = 1, with ~ = ~ 1 , [ ~ [ < 1 , A x ( z l ) = 1 - ~ Z x , the NN(1) is the wellk n o w n time-series first-order autoregression AR(1), x~ = ~x~_ ~ + e~. Results for this

R.J. Martin ,/Journal of StatLstical Planning and hfference 59 (1997) 1 18

are easily o b t a i n e d . In p a r t i c u l a r , a~.2/ax2 = 1 - ,~2, po =~l<, a2/,(l + ~2 _ 29:cos(~a). T h e infinite M A r e p r e s e n t a t i o n (4) is Xi ~

and

hi(e)1)=

~ o(r ~q r" r=O

U s i n g this. the c o v a r i a n c e s are given by ;'y = yr~:o 9:"+-" = :&'(1 - 9 : 2 ) ( g tive d e r i v a t i o n s use, f r o m (8), 70 = ;'~ + a~2 a n d 7, = : % - 1 (g > O) or.

~.~ 0 ) .

Aherna-

r1(~1) = o-~/{ 1 + ~2 _ 29:/z~ + : ? 1 ) } __ ,[(72/(1

{(1

__ ~2)}

-

~z,)

~' +

9:z1'(1

-

~z~

U s i n g the k n o w n a:,/a~, 2 2 an exact G a u s s i a n s i m u l a t i o n is x l = c~/x/1 - ~ 2 a n d 9:~'('i 1 -~- ~-;i (i = 2 , 3 . . . . . Fl), w h e r e the r,i are s i m u l a t e d i n d e p e n d e n t N (0, o-,2). A c o n s e q u e n c e of hi (o)) a n d p~, using (1), is the i m p o r t a n t i n t e g r a l result

Xi =

1

2n

f7

cos ~og de) 3(DI9:2 . ~ 1 + 9:2 _ 2,~cos(o 1 --

T w o m o r e g e n e r a l f o r m s of this will be useful l a t e r (a 2 > c 2 = c~ + c~ t: _ _1

f~

2n

cosoog d¢o = [.{a - (a 2 - c2) 1'2tJ' /C] I!1 ~ a --CCOSe) (a 2 __ C2)1 2

10)

and

1 f ~

2n

~

a

1 -- c 1 cos(o

c2 sino)

2_

d{o

1 (a 2

c2 _

(@)1,,2"

{1 ] )

Eq. (11) follows b y w r i t i n g c l cos (o - c2 sin c9 as c coslv) ÷ fl).

3.3. The NN(2) process T h e N N ( 2 ) , xi,j=9:~.oX~ 1 . i + 9 : o . l x i . j - 1 +~;i.j with 19:1,o]+]9:o.~1< 1 was the first p l a n a r A R to be c o n s i d e r e d (Whittle, 1954}. It has sdf h 2 ( { o ~ , , ) 2 1 = a,:2~¢/tI + 9:i."o ÷ 9:o,21 - - 2~1. o cos (ol 29:o, 1 COS ~o,_ ÷ 29:1,o ;q). 1 COS(C91 -- t,),. _) I~ F r o m (7), the c o n d i t i o n a l m e a n is given by (1 + 9:~.o + 9:o.~)E[x~,jlXk.,V( k, 1 ) ¢ (i,j) 1 =9:1,0(Xi

1,j ÷ Xi+l,.j) ÷ 20.1(Xi,j

--9:1,09:0,1(Xi_1,.i+1 ÷ .XSi+1,j__ 1).

1 ÷ Xi.i+l)

(12)

T h e c o r r e l a t i o n s for this are n o t e a s y to o b t a i n . N o t e t h a t i n t e r c h a n g i n g 9:~.o a n d 9:o. ~ results in i n t e r c h a n g i n g P.o.h a n d Ph.~ V g, h. Eqs. (9) give 2~

cr,:/a; = 1 -- 9:1.OtOl,O

-

-

9:0.11)0.1

6

R.J. Martin/Journal o f Statistical Planning and h¢ference 59 (1997) l 18

and P l , o = 961,o + ~o. l P

1,1,

Po, l = : q , o P - a , l + 9 6 o ,

l•

(13)

The general equation (8) gives Po.h = ~I.0PO l.h + 960,lPo, h 1

forg>0orh>0.

(14)

However used (14) Whittle's min(g, h)

these c a n n o t be solved simply. Whittle (1954) derived the Po, o for g > 0, and to give Po.h for g, h > 0 as a sum of g + h + 2 terms. Besag (1972) noted (1954) Po,h for g, h > 0 is in error, and gave Po.h for g, h > 0 as a sum of terms. He also showed that the recursions (14) can be used to obtain, for h g, h >~ O, P-o,h = P ol,oP0,1. Pickard (1980) and T o r y and Pickard (1992) considered the process (Pickard process) which includes the diagonally adjacent term, Xi, j ~- 961,0X, i - l , j ~- 960,1Xi, j - 1 Jl- 9 6 1 , 1 X i - 1 , j 1 ~- ei,j. This arises from the consideration of M a r k o v chains, and shares m a n y of the properties of the NN(2) process. They obtain P-g,h = P~,oP~.1 for g , h >~ 0 directly. Then, P-1,1 = Pl,oPo, a, together 2 2 2 2 2 2 with Eqs. (13), gives P l , o = ( l + 9 6 t , o - ~o,1 -a~/ax)/(2961,o), and a~:/ax= {(1 + ffl.0 -~- 960,1)( 1 Jr- 3{1,0 - - 960,1)( 1 - - 961.0 -}- 960.1)(1 - - 961.0 - - 960.1)} 1/2" The NN(2) has flo,h different from P-o,h for all g, h > 0. This lack of axial (or reflection) s y m m e t r y holds even in the equal-parameter case, when the correlations are laterally symmetric, that is Pa, h = Ph, g V y , h. In the equal-parameter case, Eqs. (13) give a ~2 / a x2 = 1 - 2 ~ p l . o , and ~ = p l . o / ( 1 + P l , - 1 ) , which can be solved using 2 = (1 -- a~/a~)/(296), and as2/crx2= x/1 - 4~.2 = (1 P~,o)/ [ 1 , - 1 = P l , O . Thus, Pl,o (1 + p2,o ). Axial symmetry with Pg, h = pl~,l 1,o plhl o, 1 Vg, h is obtained with the separable AR(1)*AR(I), or d o u b l y geometric process, which is a Pickard process with 96~, a = - 961,o c%, ~ (Martin, 1979, 1990). N o t e that when ]96Lo[ + [96o,~[ is close to 1 it is possible for the NN(2) to have pl,o large even when Z~,o is small, for example 961,o = 0.049, 96o.1 = 0.95, has PLo = 0.8215, Po, l = 0.9898. It is also possible in the equal-parameter case to have py,h increasing with total lag [g] + ]h [, e.g. P2.1 > P2, o if 96 > 0.376. 
Since Po, o = P~,o, Po,h = Pl~'lo, the one-dimensional marginal process is an AR(1), but with parameters P~.o, Po.~ and not 96Lo, 96o,1. Thus, exact simulation is easy. Assume a set of n l n 2 independent {el,j} with mean 0 and variance a~2 has been simulated. Set Xl. 1 to ~ X1, j = P o , 1 X l , j - 1 Xi, 1 = JOI,0Xi

2

2 q , ~, then use

+ ~,~(1 - - / 0 2 1 ) ax/Cr,: 2 2 el,j

(j = 2, 3 . . . . . n2)

- - p 2 , 0 ) O.x/ae22 '~i. 1

(i = 2, 3, ... , nl).

1,1 ~- N ~

Then for i , j > 1 use x~,~ = 96~,oX~_~,s + 96o,1X~.s_i + gi, j. Consider now the alternative a p p r o a c h to obtaining the 7o,h using the infinite M A representation (4). This is Xi,j ~

961,0 ~ZO,1 ~ ' i - v . j - s .

r,s=O

r

R.,L M a r t i n / J o u r n a l

({/'Statistical P l a n n i n g a n d h?[i~rence 5 9 ( 1 9 9 7 ) 1

IS

7

This gives, for ,q, h ~> 0,

~,~=o \

J\

~0. 1

s + ]1

and

r

0

S

I"

The special case , -~ ~'O.0,'O-S =

t" -]- S ~ r.s=O

-

2r 2s 71.070,1

r

is a k n o w n sum (Riordan, 1968, p. 142, Eq. (8)), giving the above result for 7o, o/o-~. The sums for (,q, 11) ~ (0, 0) are not in Riordan (1968). The first few terms can be used to get approximations to the correlations for Ix ,. o [, [7o. ~t not too large. Using

x=O

"

Y -- S

Y

the covariances in the equal-parameter case become for ,q, h ~> 0, .

ff2r, r=0

r +

~J

,

y,h,(71:

~

72~'. r=0

\

r

,

These sums immediately show that the process is not axially-symmetric, and that for ,q, h ~> 0, and a given :~, 7-o,h only depends on # + h. The second sum is again k n o w n (Riordan, 1968, p. 154, problem 2(c)), leading to the simple form of the correlation P-o,h when g, h >~ 0. N o w consider the integral m e t h o d for obtaining the marginal covariances ~,.',;.~j. The sdf h_~ ~2~((oa) of the marginal process can be obtained directly using (10), or by using Lemma 1 below. It follows that h2..~z~(ol) = a~.,'(l ~,' + :~Lo ~ - :~6,1 ~ 2':g,ocosu)~ iThis corresponds to an A R ( I ) with parameter @, where ~p/(1 + ~p2) =: 7~,o,/(1 + ~[,o - ~2.1). The result for P,;,o ( = ~pl~;I ~p = P~.o) follows, and using (1 I) gives the result for ax/a~. 2 2 The m e t h o d is more difficult to use for ,, 0.~ when ,q, h > 0, requiring integrals that result from higher-order ARs.

3.4. Marginal properties of the NN(d) process N o w consider some general properties for the case d > 2. Although the recursions (8) are available, there are not e n o u g h equations to solve for the correlations, even using the m e t h o d of Besag (1972). This still holds in the equal-parameter case. The case d = 2 might appear to suggest that the (d - 1)-marginal processes of a NN(d) are N N ( d -- 1) processes. However, this is not true.

8

R.a( Martin/Journal of Statistical Planning and lnJerence 59 (1997) l 18

L e m m a 1. Consider the (d - 1)-margin of the N N ( d ) for the first d - 1 dimensions. This has acgf a2/{ Ad-1 (Z) Aa-1 (z- 1) _ ~al}Proof. F r o m (2), the spectral density function of this (d - D-margin is hd',(d)(~O ) = (2rr)- i f ~

ha(e))de~d,

where ~ = (o)'_, ~oa)'. The result follows by using (5) and Eq. (11) with d

a=

d--1

2

d-1

1 + ~ c%1--2 ~ c q , l c o s c o u + 2 u--1

~

~[,lcqslc°s(c°u-C°s),

u
u=l

cl = 2C~rd] 1 -- ~ Cq,~COSCO, ,

C2 = 2~d~ ~ c%lsinc~)u,

u=l

u=l

and converting the answer to an acgf.

[]

Thus, the reciprocal of the acgf of the margin differs from that of the N N ( d - 1) process only in that c~a2 is subtracted. F o r d > 2, it is clear that Ad- 1 (Z) Ad- 1(Z- 1) --~d] 2 cannot be factorized into A ( z ) A ( z - 1 ) for a unilateral AR generating function A(z). This means that except for d = 2, the ( d - 1)-marginal process does not have a finite unilateral representation. Thus, the M a r k o v chain m e t h o d used for d = 2 (Pickard, 1980; T o r y and Pickard, 1992) does not a p p e a r to work for d > 2; see Politis (1994) for some related work on d-dimensional M a r k o v chains. The (d - 1)-marginal processes have a finite C A R representation (6), which can be obtained from Ad- 1 (~) Ad I (Z- 1) __ ~[d]" However, the (d - 2)-marginal processes do not have a finite C A R representation. L e m m a 2. The (d - 2)-margin of the NN(d) for the first d - 2 dimensions has acgf r72/[{Ad_2(Z)Ad_2(z-a) -

2 2 2 3~[dII -- 0([dl}

-

--

2 4C~td-

~X~dl] 1/2"

i]

Proof. From (2), the spectral density function of this (d - 2)-margin is

The result follows as in the p r o o f of L e m m a 1 with d--1

d-2

2

2 --2 ~

u=l

ca = 2c~[a 1] 1

d-2

C%jCOSCO,+2

u=l

~tulCOSfDu

- -

u=l

~

:%]~s]COS(CO,--CO~),

u
,

C2 = 2~ta ~] ~ u=l

~[,] sin Co,,.

[]

R.J. Martin/Journal qf'Statistical Planning and lnfierence 59 (1997) 1 l~'

9

The fact that the d e n o m i n a t o r of the acgf is a square root shows that there is no finite C A R representation.

4. The NN(3) process xi.j,t=~l.o.oXi ~ . i . t + ~ o . t . o x i . i 1.~-} :(o.o. lxi.i.s 1 + ~:i.j.> This has leq.o.ol + I: Consider first the two- and one-dimensional marginal processes, that is the covariances { 7o,h,0 } and { 7o, o,o }, respectively. Clearly, other marginal covariances follow by permuting ~ . o , o , ~ o , 1,.,:~o,o. 1. Using L e m m a 1, the two-dimensional marginal process has sdf Now

consider

the

NN(3)

process

h3 {31((01, (02) =

aft/{b

+ + _

-

2z(1,

o, o COS

(01

-

2~o, 1,0

COS G')2 -}-

2~-1.o. o :%, 1,0

COS ((01 -'- O)2 )1"

and thus, using (6), the two-dimensional marginal process has C A R representation

h ....

E [Xi,.i]Xk, 1 ~I (k, I) # (i,.j) } : 3(1, o, o ( x i -

1. j ~- x i + 1,.j) -r- 3(0, 1, o (xi,.]

--~(l,0.O~O.',O(Xi

1.j+l

~-Xi+l.j

1 ~- xi..j ~ i ) I)

(15)

and var Ixijlxkl V ( k , l) :~ (i,j)] = ( 72, e / b 4 -+ . C o m p a r i n g (15) with (12) shows this is not a NN(2). It is a special case of the second-order CAR(2), and does not have a finite AR representation. N o t h i n g appears to be k n o w n about the correlation structure for this special case. Using L e m m a 2, the one-dimensional marginal process has sdf v

2

113 /2,3)((01 ) = or2/{(b+ - - - 2x1 .o. o cos c91 )2 - 4 ~ g . ,. o 0(0,0. 1 I

1/2

( ] 6)

The one-dimensional marginal process is not an AR(1), and has no finite A R M A representation. Such spectral density functions do not appear to have been considered in the time series literature. Although neither the two- nor one-dimensional marginal processes are types that have been studied previously, it is possible to approximate them if the relevant parameters are not too large. For low [ ~ 0 , 1 , 0 3 ( 0 , 0 . l I, the sdf of the one-dimensional margin, h3~,;~((o~), is approximately that of an AR(1) with parameter ~, where ~p/(1 + ~ P ' ~ ) = ~ l . O , o / b + , so that po, o , o ~ q ) I'. A better approximation uses ~b/(1 + @2) : :q.o,ob+ / { b 2 - - 4 :(2, l,o~o,o,a, ~,. More accuracy can be obtained by using a higher-order AR as an approximation. F o r low [:(0.o, ~l, the sdf of the twodimensional margin h3.(3~((,)1, (Oz) is approximately that of a NN(2) process with parameters ~//1.o,/~o. 1, where ~91,o/(1 + ~bl.O + I//2 1 ) = ~l.O,O/b++ , |~tO.l"l~l.O<~.~.

o!'~ ~. o. ,5.

10

R.J. Martin/Journal o f Statistical Planning and Injerence 59 (1997) l 18

N o w c o n s i d e r the infinite sum form of the 7o,h,k resulting from the infinite M A r e p r e s e n t a t i o n (4). This is (r + s + t)! Xi,.i,t

r!s!t!

r,s.t=O

t ~]' °" ° °~" 1' °':%' °' 1 si-'" J-s' I - "

This gives, for g, h, k / > 0, y~l,h,k/a2

9 O. 0 ~0, h 1,00~0.0. k = 0(1, 1

~,

M

(r+s+t)[(r+s+t+g+h+k)l

.... ,=o

"

2r

2s

,.!s!t!

9~2t o,o.,,

'D(1, O. 0 D~O,1,00{0,0.1 (r + s + t + g)! (r + s + t @ h + k)!

x

r,s,t=O

(r + g)!s!t!

O t h e r cases can be f o u n d using ~/o,h,k = 7 o,-h, v a r i a n c e is then given by

70,0,0/0.2 =

(r -~ S ~- t)!

r!s! t!

.... t=o

2r

2s

2t

r!(s + h)!(t + k)! 0{1'0'03{0'1'0g0'0'1 ' k, a n d i n t e r c h a n g i n g the cq,> The

2 °(1' °' ° °{°' 1' ° D{°'°' 1 "

A l t h o u g h these series are useful for low [~1,o,o1, [zO, l,O[, 1C~o.o,11, a n d a small n u m b e r of terms can be used to get g o o d a p p r o x i m a t i o n s to the Po.h,k then, their sum is n o t k n o w n in general. It turns out that s o m e of the sums can be expressed in terms of s t a n d a r d functions; see L e m m a 8, a n d the discussion below it. N o w c o n s i d e r the e q u a l - p a r a m e t e r case. T h e n 7o, h,k = Y~,k,h = 7h,o,k = 7h,k,g = ?k,o,h = Yk,h,g g g, h, k, as well as 7g,h,k = Y-g,-h,-kVg, h, k. After simplifying,

°,"g,h,k/0.e2 =C~g+h+k ~ :~2",,=o ~' ( rs) ( r + g +s + h +h + k )k(

)'-g'h'k/0.2=gg+h+k ~ ger s=0 ~ ( r - f S- g ) ( : - } - h++h:+)

s+k

\

s+k

for all g, h, k ~> 0, with

70,0,0/0. 2 =

0{2r r=0

~, ( : 0 2

(2;)

s:O

Even in this case, the series do n o t a p p e a r to have k n o w n expressions for their sums. T h e y d o show that, unlike when d = 2, the Yo,h,k differ. L e m m a 3. Apart Jrom the equalities due to the symmetries Yg,h,k = 7-g.-h,-k and "~/g,h,k :

Yh,k,g,

etc., all

]~9,h, k

differ.

R.J. Martin/Journal q/Statistical Plannin~zand h!/&'ence 59 (1997) 1 l,'~'

II

Proof. The first term in ~,'o.h.k/'a{ is l(g + h + k)!/(g!h!k!)] ~g~t,+k whilst that of ,, ,. t,, k ,' cr;; is [(h + k)!/(h!k!)}x "+h+k. The second term in ;' ~,h.k..'C;,7 iS [(h + k + 1)!/(h!k!) + (g + 1)(h + k + 2)!/{(h + 1)!(k + 1)!}]:( °+h+k~2. The resuh follows by c o m p a r i n g these coefficients for given ,q, h, k. [~ For the equal-parameter case• consider further the first few terms in the series expansion of Po.h,~, which give a g o o d a p p r o x i m a t i o n for small I~1. The coefficients can be seen to increase rapidly with the n u m b e r of terms, so that m a n y terms arc 1 needed to evaluate accurately the ;',,h,k numerically as I~1 approaches 3. As is shown 1 below, [~1 needs to very close to j for p to be at all large, so that accurate cwduation using the series m a y not then be possible. L e m m a 4. For g• h, k >t O, the pc. h, k jbr the equal-parameter case at~d small 1:([ are !ti~x'tt

hy _(g+h+k)!~o+h+k

(h+k)!~°+a+kll h ! k ~"

P ,,h.k--

+(g+l)

1+

+

+

()(~)

+{2q+h+k

+

u . = + O ( ~ 4) .

Proof. Divide the series for 7g,h,k by that for ",'o.o,o, and simplify the result. The series show, for low I~1, the order and relative size of the fGh.k. The leading term is proportional to :~1,1+ i/, + ikl, and its coefficient is largest if the lags .q, h, k are all positive and as equal as possible. Some examples of the series expansions arc given below to the term in u.8:

7o.o.o/a{ = 1 + P = Pl,o,o

[)2.0.0

=

30~ 2 + 1 5 ~ 4 +

= ~ + 2~3 +

3(2

93~ ~ + 639~ 8.

10 :(-~ + 60 ~-7,

-~ 4~'~ + 26a6 + 180c(s•

P1.1.o = 2:~2 + 624 + 32 :~ + 198a s-

P~ = P-~,~,o = ~2 + 5:(* + 30~ ~' + 198~ s, P3, o, o = .~3 + 6us + 48:(v,

P2,~.o

= 3~3

-t- 13:~s + 81~".

p l . t , ~ = 6 ~ 3 - F 180~5 + 9 6 S .

P4.o.o = ~4 + 8 ~ + 76u8,

P-2,~.o = ~3 + 8~s + 60:(v, p

~ . 1 . 1 = 2~3 + 12~5 + 78~ "7,

12

R.,L Martin/Journal of Statistical Planning and InJerence 59 (1997) 1 18

P3,1,o = 4~4 + 23~6 + 167~8,

P-3,1,o = ~a + 11~6 + 1000~s,

P2,2,o = 6~ ~ + 32~6 + 214~s,

P-2,2.o = ~4 q_ 12~6 -I- 1097 s,

P2,1,1 = 12~ 4 + 44~6 + 258~8,

P-2,1,1 = 2~ 4 + 18~6 q- 144~ 8.

It can easily be seen that (at least for low lags) there are no power relationships as when d = 2. F o r example, to the same order, p2 = ~ z +4ct4 + 2 4 ~ 6 + 1607s; p3 = ~3 + 6~5 q_ 42~7; p4 = ~4 + 8~6 + 64~s. N o t e that alternative series for 7g,h,o, Y-s,h, o i n terms o f ~ l , O , o / b + + , ~ o , l . o / b + + - , -71,o,oC%,1,o/b++ ; a n d y 0 , o , o i n terms of ~ l , o , o / b + - - , 7o, l,o ~o,o, 1/b+ _ can be obtained from the two- and onedimensional marginal acgf's; see also (19). N o w consider the recursions (8). These become 2 2 ffs/ax

=

1 --

~I,O,OPl.

O,O - -

~O,I.OPO,

I,O

--

~0, 0,1,00. 0, 1

and Dg, h,k ~- O~l,o,oPs-l,h,k q- O'O,I,oPs,h

1,k -}- ~O,O, lfls, h,k-1

(17)

for all g, h, k except g, h, k ~< 0. Their solution does not seem to be possible, even in the 2 2 = 1 - 3ap, and ~ = p/(1 + 2pd) SO that equal-parameter case. In this case, a~/ax p and Pd can be found in terms of Yo, o, o / a 2. Since in 3 dimensions there is only the one general s y m m e t r y Ps, h,k = P-a, h,-k, Besag's (1972) m e t h o d then leads to just two equations in 3 unknowns. Although the two-dimensional C A R margin (15) can be used to obtain recurrences in the Yo,h,o (Besag, 1974), no new relationships are obtained. In the equal-parameter case it does appear that the Ps,h,k can be expressed in terms of two basis elements: one of { 7o, o, o, 71, o, o, 7-1,1, o } and one of the r e m a i n d e r - for example 71, o, o and 72, o, o. N o t e that it is not straightforward to obtain this expression in all cases. Similarly, in the 3-parameter case, it appears that the {7~,1} =

{71,o,o,7o, l,O,7O,O,1} and {Yet,j} = {72,o,o,)'o,2, o,7o, o,2} m a y be sufficient. F o r example, if c~' = c* = 1, c~' = - l, 3

P - l , l , O =(2~1,o,o~o,l,O} -1 2

C*a[uJ(PM--~[ul)

u--1

and

Pl,I.O = (2~1,0,og0,1.0)-1

3 ~ c.* ~(.j (Pt.J - ~t.J P2 t.J - cq.j a~/Yo, 2 o, o). u=l

The analysis so far suggests that unless [al,O,O], 1~O,l.O[, [ao,o,l[ are small, the correlations can only be obtained by some numerical use of (1) (plus recursions). Even for the {Yt.j,72t.j}, there m a y be problems with evaluating (1) if [al,o,o[ + I~O,l,O[ + 1~o,o, 11 is close to 1. However, there is then a link with k n o w n results for the two-dimensional first-order C A R (Besag, 1981), This is because the two-dimensional first- and second-order C A R have the same type of one-dimensional margin.

R.J. Martin/Journal q f Statistical Planning and h{/i~rence 59 (1997) l l;~'

13

L e m m a 5. The two-dimensional second-order CAR(2) specified by E {xs,jlxk.iV(k,l) =/=(i,j)} = fil,o(X~ ~.j + x~+~.j) +/:Io.l(xi.i

4-[~l,l(Xi-l,j

1 -{- Xi+l,j+l)~-[t1.

l(Xi--1.i+l 2r- Xi+l,j

i + x~.j~ 1)

1),

and var [xi, il.'%tV(k,l)v~(i,j)} = 0-~, has the sdf of its one-dimensional mar~lin given by h (,,(~,)1) _ = 0 -2, / < ( a l -- 4 a z c o s ¢ o l

where al l -4f12.1-4([31,1[~.o 41~i., 1-¢,. ,.

+ 4 a 3 c o s 2 ( ~ ) l 11,'2

fll.-i )2

az = fil.O + 2[JO,l([Jl,l + fil _1). a.~ =:

-

Proof. The sdf of the CAR(2) is h((ol, (¢)2) = 0"2/{ 1 - 2fil.o cos (ol - 213o. 1 cos ('J2 -2131.1cos(c91 + ( o 2 ) - 2 f i l - l C O S 0 n l c')2)}. Then, using (2) and (11) with a 1 - 2131.oCOS~,31, el = 2{fio, 1 + (fl~,~ + [:~. 1)cos(!)l 1, e2 = 2(ill.1 - ill.-1 )sin¢,),, and simplifying, gives the result. [] L e m m a 6. The one-dimensional margin of a CAR(2) is equivalent to the one-dimensional mar qin of a CAR(l). Proof. The first-order C A R ( l ) has / J l . 1 : [ J l , 1 = 0 . Using ~'1,o and %,1 for the C A R ( l ) parameters, and 02 for its variance, its one-dimensional margin has sdf h i_~,(~ol) = 0-2/(1 - 4Vo2.1 - 4Vl.o cos (ol + 4v~. o cos2 (,)l)1,2 0-g/l(1 - 2Vl.oCOS(,)l)z _ 4vo. lI 1

(18)

Equating coefficients with those of h.!2)((ol ) in L e m m a 5 gives

Vl.(,=a3/a2,

Vo, L = ( 1 - - a l a 3 / a<~ )-> 1,~ "/2,0-~./a~,. = a 3 / a 2 .2'

[]

The one-dimensional margin corresponds to an AR(I) if vo~ = 0. Otherwise, the process does not c o r r e s p o n d to a finite A R M A . Since the two-dimensional margin of the NN(3) is a special case of the CAR(2), its one-dimensional margin is equivalent to the one-dimensional margin of a CAR(l). L e m m a 7. The one-dimensional margin of the NN(3) is equivalent to the one-dimensional margin of a CAR(I), where the parameters of the C A R ( I ) (see Lemma 6) are Vl.0

~l,O.o/b +

Vo, l = y.O.l.O2O,O. 1 / b +

0.2 = 0-~,,'b+_

Proof. This can be seen by equating (16) and (18). It also follows from L e m m a 6 since the two-dimensional margin of the NN(3) has [:tl,o = ~.l.O,o/b+ . - , [4o. 1 - ~o.l,o/ b + + - , f i , . ~ =O, f i , . - l = ~1.o.o%,1,o/b++ , 0-2 = 0-2/b++-.

14

R.J. Martin/Journal of Statistical Planning and Injerence 59 (1997) 1 18

Although Lemma 7 does not allow simple closed forms for 70, o, o, it does allow some of Besag's (1981) results to be used. Lemma 8. 7o, o,o=2aZKOc)/{Tt(m+++m+_

m_+ m_ +)1/2}, and 7 1 , o , o = ( 1 + 2Vo,1)7o.o,o/(2V1,o) + a2m+-+m++-H(v, ~c)/{=~l.O,o(m+++m+--m + _ m _+)i/2}, where K 2 = 16vl,oVo, 1/{1 --4(V,,o - - V o . 1) 2 } = 16%.l.oC%,o,l/(m+++m+-- m_+ m--+),~ =4vl,o/(1 --2Vx,o + 2Vo. x)=4Cq.o.o/{(m-+-m +),andK, llareelliptic integrals of the first and third kind, respectively. Proof. See Eqs. (7) and (8) of Besag (1981), and substitute and simplify.

[]

It should be possible to express 70. o, o, g > 1 in terms of elliptic integrals, but this is not simple. Evaluation of the formulae in Lemma 8 shows that to get an appreciable Pl, o, o, values of ]:~1.o.o1 + ]~o, 1. o[ + ]~o. o. 1 ] must be very close to 1. For example, in the equal-parameter case, the values of p for ~ = 0.1, 0.2, 0.3, 0.33, 0.333, 0.3333, 0.33333, 0.333333, 0.3333333333 are, respectively, to 3 decimal places, 0.102, 0,220, 0.415, 0.604, 0.709, 0.772, 0.813, 0.841, 0.901. These values show that evaluation of(l) when X I%] is close to 1 may be liable to numerical difficulties. Other implications of the CAR(l) margin are that there is a series for 70,o,o, although again this is useful only for small values of ]:q.o.o/b+ 1, [c%,1,oC~o,o.1/b+ _ 1, and that there may be a good approximation for larger values. Using Besag's (1981), Eq. (3), the series is, for g ~> 0, (7e2

7o,o,o-b+

( O~l,O.O)gr. f~s= (g + 2r + 2s)' _ \b+__ o(r +g)!r!s!s!

x (:q,o,o/b+

-)2r(c%, 1,o~o.o, 1/b+ _)2s.

(19)

Using Besag's (1981), Eq. (14), the approximation for g > 0 is 2 ~1/2 7o.o,o~a~Ko{(m-++m--/cq,0,o, g}/{27r,('9'1,0,O0~O,1,00~O,O,1)l/2},

(20)

where Ko is the modified Bessel function of the second kind of order zero. The approximation is good when the I:%]l are all close to ½, but can be poor in other cases. Using the approximation K ( 1 - e ) ~ (½)ln(16/e) for small e (Abramowitz and Stegun, 1965, formula 17.3.26), and Lemma 8, gives, for ~c close to 1:

7o, o,o ~ a21n {16/(1 - g)}/{rt(m+ ++m+ _m

+_m_ _+)1/2}.

(21)

Then, (20) and (21) give an approximation to Py, o,o when the [:q,]l are all close t o l . Thus, pg, o , o ~ K o { ( m ++m --/~x,o,o) U2 g}(m+++m+__m_+ m__+)1/21n{16/ (1 -

K)}/{2(~o,l,o~o,o, 1)1/2}.

In the equal-parameter case, p = (1 - a~/7o,o,o)/(3c0. Hence, using (21), a good approximation to p for ~ close to ½ (above 0.33, say) is given by p ~ 1 - 4=/[3x/3 In {4/(1 - 3:0}],

(22)

R..L Martin/Journal of Statistical Planning and ln/erence 59 (1997~ 1 1~'

15

which shows that 1 − 3α has to be as small as about 4 × 10⁻²¹ before ρ = 0.95 is reached. The approximation for ρ_{g,0,0} shows that the correlations can decay very slowly for high κ. For example, when α = 0.33333 (ρ = 0.813), ρ_{300,0,0} is still above 0.1. It is noteworthy that this behaviour occurs when d = 3 for a unilateral autoregression. It is not known if there is a general recursion in the ρ_{g,0,0}. It is possible to obtain some using (17) and the recursions for the first-order CAR (Besag, 1981). For example,

6α²ρ_{3,0,0} = 3α(1 − 3α²)ρ_{2,0,0} − (1 − 3α⁴)ρ + α(1 − 2α²),

and

6α³(1 − α²)ρ_{4,0,0} = 2α²(5 − 12α² + 5α⁴)ρ_{3,0,0} − 2α(1 − α²)(2 − 3α² − 6α⁴)ρ_{2,0,0} + 4α²(1 − 3α² + 3α⁴)ρ.
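When the recursions and series are awkward, the correlations can also be approximated numerically on a grid, by taking an inverse FFT of the spectral density. The sketch below is illustrative only: it assumes the NN(3) defining equation X_{i,j,k} = α_{1,0,0}X_{i−1,j,k} + α_{0,1,0}X_{i,j−1,k} + α_{0,0,1}X_{i,j,k−1} + ε_{i,j,k} with unit innovation variance, and the function name and grid size are my choices, not the paper's.

```python
import numpy as np

def nn3_correlations(a1, a2, a3, n=64):
    """Approximate NN(3) autocorrelations by an inverse FFT of the AR
    spectral density on an n^3 grid.  The periodization error is
    negligible when the correlations decay well before lag n."""
    w = 2.0 * np.pi * np.fft.fftfreq(n)                 # Fourier frequencies in [-pi, pi)
    w1, w2, w3 = np.meshgrid(w, w, w, indexing="ij")
    # AR transfer function 1 - a1 e^{-i w1} - a2 e^{-i w2} - a3 e^{-i w3}
    A = 1.0 - a1 * np.exp(-1j * w1) - a2 * np.exp(-1j * w2) - a3 * np.exp(-1j * w3)
    spec = 1.0 / np.abs(A) ** 2                         # spectral density, up to a constant
    gamma = np.fft.ifftn(spec).real                     # autocovariances, same constant
    return gamma / gamma[0, 0, 0]                       # autocorrelations rho_{g,h,k}

rho = nn3_correlations(0.1, 0.1, 0.1)
# negative lags are reached by wraparound indexing, e.g. rho[-1, 1, 0]
```

For α = 0.1 in each direction this agrees with ρ_{1,0,0} = 0.102 and ρ_{2,0,0} = 0.010 quoted above; consistent with the numerical difficulties already noted, a much finer grid is needed as Σ|α_{[u]}| approaches 1, since the correlations then decay very slowly.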

Some examples of low-lag correlations are given in Tables 1 (equal-parameter case), 2 and 3. Note in the equal-parameter case how the correlations for higher lags become much larger as α increases from 0.33. Note also that for large α the correlations ρ_{g,h,k} do not decay with the lag sum |g| + |h| + |k|; e.g., ρ_{-1,1,1} and ρ_{-2,1,0} can be bigger than ρ_{2,0,0}. When the {α_{[u]}} are unequal, ρ_{0,1,0}, for example, can be very large even when α_{0,1,0} is quite small. Despite its unilateral representation, exact simulation of the NN(3) process is not simple because the one- and two-dimensional marginal processes are not simple. It appears that for the three initial one-dimensional margins {x_{i,1,1}, x_{1,j,1}, x_{1,1,k}}, the one-dimensional correlations ρ_{g,0,0}, etc., would have to be calculated (or approximation

Table 1
Values of some ρ_{g,h,k} (to 3 decimal places) for the equal-parameter NN(3)

α       ρ_{1,0,0}  ρ_{2,0,0}  ρ_{-1,1,0}  ρ_{1,1,0}  ρ_{3,0,0}  ρ_{-2,1,0}  ρ_{2,1,0}  ρ_{-1,1,1}  ρ_{1,1,1}  ρ_{4,0,0}
0.1     0.102      0.010      0.021       0.011      0.001      0.003       0.001      0.006       0.002      0.000
0.2     0.220      0.049      0.092       0.051      0.011      0.030       0.012      0.055       0.021      0.002
0.3     0.415      0.179      0.289       0.192      0.079      0.164       0.090      0.260       0.134      0.036
0.33    0.604      0.395      0.517       0.416      0.273      0.395       0.296      0.512       0.359      0.196
0.333   0.709      0.547      0.646       0.565      0.445      0.550       0.467      0.645       0.521      0.375
0.3333  0.772      0.643      0.722       0.658      0.562      0.646       0.579      0.722       0.623      0.505

Table 2
Values of ρ_{[u]}, u = 1, 2, 3 (to 3 decimal places) for some α_{[u]} for the 3-parameter NN(3)

α_{1,0,0}  0.4    0.44   0.444  0.5    0.6    0.75   0.85
α_{0,1,0}  0.3    0.33   0.333  0.35   0.3    0.2    0.1
α_{0,0,1}  0.2    0.22   0.222  0.149  0.099  0.049  0.049
ρ_{1,0,0}  0.503  0.677  0.765  0.798  0.844  0.907  0.935
ρ_{0,1,0}  0.422  0.612  0.716  0.739  0.736  0.721  0.633
ρ_{0,0,1}  0.319  0.524  0.649  0.599  0.558  0.498  0.516


Table 3
Further values of ρ_{g,h,k} for (α_{1,0,0}, α_{0,1,0}, α_{0,0,1}) = (0.4, 0.3, 0.2) (first row), (0.85, 0.1, 0.049) (second row)

ρ_{2,0,0}  ρ_{0,2,0}  ρ_{0,0,2}  ρ_{-1,1,0}  ρ_{1,0,-1}  ρ_{0,1,-1}  ρ_{1,1,0}  ρ_{1,0,1}  ρ_{0,1,1}
0.258      0.183      0.108      0.225       0.179       0.159       0.345      0.266      0.231
0.875      0.442      0.328      0.602       0.498       0.437       0.653      0.528      0.453

(20) used if appropriate), and the Cholesky decomposition of the covariance matrix found. A similar procedure may be required for the two-dimensional margins {x_{i,j,1}, x_{i,1,k}, x_{1,j,k}}, although it may be possible to use the two-dimensional CAR structure, which specifies V⁻¹ everywhere except when both sites are border sites; see Martin (1996). Of course, an approximate simulation of the NN(3) can be obtained by using approximate simulations along the three initial edges and planes, and discarding sufficient initial values. The problems with exact simulation also imply that exact Gaussian maximum likelihood estimation will be difficult, although it may be reasonable to use simple approximations (see Section 5).
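The crudest version of such an approximate simulation can be sketched as follows. This is only an illustration, not the paper's prescription: the initial planes are simply set to zero (rather than simulated from the marginal processes), and the burn-in width and lattice size are arbitrary choices of mine.

```python
import numpy as np

def simulate_nn3(a1, a2, a3, n=50, burn=15, seed=0):
    """Rough NN(3) simulation: zero initial planes, unilateral recursion
    x[i,j,k] = a1*x[i-1,j,k] + a2*x[i,j-1,k] + a3*x[i,j,k-1] + eps,
    then a border of `burn` layers is discarded to reduce edge effects."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n, n, n))
    eps = rng.standard_normal((n, n, n))
    for i in range(1, n):
        for j in range(1, n):
            for k in range(1, n):
                x[i, j, k] = (a1 * x[i - 1, j, k] + a2 * x[i, j - 1, k]
                              + a3 * x[i, j, k - 1] + eps[i, j, k])
    return x[burn:, burn:, burn:]

x = simulate_nn3(0.2, 0.2, 0.2)
x = x - x.mean()
# sample lag-(1,0,0) correlation; should be near rho = 0.220 for alpha = 0.2
rho1_hat = np.sum(x[1:] * x[:-1]) / np.sum(x * x)
```

The zero starting planes decay in influence at rate roughly (Σα)^m over m discarded layers, so a modest burn-in suffices for small Σ|α_{[u]}|; near the boundary Σ|α_{[u]}| = 1 a far larger lattice and burn-in would be needed, mirroring the slow correlation decay discussed above.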

5. Discussion

The results for the simple NN model in more than 2 dimensions suggest that unilateral processes are even more difficult to use in practice in higher dimensions. The existence of slow decay in the correlations is interesting, but the difficulties in obtaining the marginal distributions, and hence in exact simulation and exact Gaussian maximum likelihood, argue against its use in practice. If data are available from a regular lattice, and the lattice sizes n₁, ..., n_d are each sufficiently large, it may be reasonable to use least-squares or Yule-Walker estimation. Basu and Reinsel (1992) show for the two-dimensional Pickard process that the Yule-Walker estimator can have substantial bias with small samples; see also Ha and Newton (1993) and Guyon (1982). However, even requiring each side to be only of size 10 results in a sample of 10^d observations, which is 1000 when d = 3. The Yule-Walker (and 'large-sample' least-squares) equations lead to the sample version of the second equation in (9). Thus, in the general d-dimensional case, if r_g denotes the sample correlation at lag g (preferably the 'unbiased' version), the {α̂_{[u]}} are the solution to

r_{[v]} = Σ_{u=1}^{d} α̂_{[u]} r_{[u]-[v]},   v = 1, ..., d.

In matrix form, the vector estimate α̂ = (α̂₁, ..., α̂_d)′ is given by α̂ = R⁻¹r, where R = (r_{[u]-[v]})_{u,v} and r = (r_{[v]}). In the one-parameter case, α̂ = ρ̂/{1 + (d − 1)ρ̂_d}, where ρ̂ = (Σ_{u=1}^{d} r_{[u]})/d is the average neighbour sample correlation and, for d > 1, ρ̂_d = (Σ_{u≠v} r_{[u]-[v]})/{d(d − 1)} is the average two-dimensional diagonal sample correlation.
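As a concrete check of α̂ = R⁻¹r when d = 3, one can plug in the tabulated correlations for (α_{1,0,0}, α_{0,1,0}, α_{0,0,1}) = (0.4, 0.3, 0.2): the r_{[u]} are the ρ_{[u]} of Table 2, and the off-diagonal entries r_{[u]-[v]} are the diagonal correlations ρ_{-1,1,0}, ρ_{1,0,-1}, ρ_{0,1,-1} of Table 3 (using ρ_h = ρ_{-h}). A minimal sketch:

```python
import numpy as np

# lag-one correlations rho_{[u]} for (0.4, 0.3, 0.2) (Table 2)
r = np.array([0.503, 0.422, 0.319])
# R = (r_{[u]-[v]}): unit diagonal, with the diagonal correlations
# rho_{-1,1,0}, rho_{1,0,-1}, rho_{0,1,-1} from Table 3 off the diagonal
R = np.array([[1.0,   0.225, 0.179],
              [0.225, 1.0,   0.159],
              [0.179, 0.159, 1.0]])
alpha_hat = np.linalg.solve(R, r)   # ~ (0.4, 0.3, 0.2) up to table rounding
```

Solving the 3 × 3 system recovers the generating parameters to within the tables' 3-decimal rounding, confirming the labelling of the diagonal correlations.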

Calculation of the NN(3) correlations can be difficult unless all |α_{[u]}| are small, when the series for γ_{g,h,k} can be used. Otherwise, Lemma 8 can be used for γ_{0,0,0} and the γ_{[u]}. Evaluating (1) can be tried directly for other γ_{g,h,k}, or, for {γ_{g,0,0}} and {γ_{g,h,0}}, the one-dimensional and two-dimensional spectra can be used. Recursions can also be tried, but numerical problems may arise in all cases. The inverse problem of choosing the α_{[u]} for a given {ρ_{[u]}}, u = 1, 2, 3, can be solved by numerical interpolation, or, in the equal-parameter case with large |α|, using the approximation (22) for ρ. This gives

α ≈ ⅓ − (4/3) exp[−4π/{3√3(1 − ρ)}].

The fact that the Pickard two-dimensional process has a similar correlation function to the NN(2) process might suggest adding more terms to the NN(3), in particular all α_{g,h,k} with 0 ≤ g, h, k ≤ 1. However, it appears that the only process for which ρ_{g,h,k} = ρ_{1,0,0}^{|g|} ρ_{0,1,0}^{|h|} ρ_{0,0,1}^{|k|} anywhere (except when (g, h, k) is 0, [1], [2] or [3]) is the separable AR(1)*AR(1)*AR(1) process, for which it holds for all g, h, k.
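The equal-parameter approximation (22) and its inversion form an exact algebraic round trip, and (22) reproduces the ρ values quoted in Section 4 for α above 0.33. A minimal numerical check (the function names are mine):

```python
import math

def rho_approx(alpha):
    """Approximation (22): rho for the equal-parameter NN(3), alpha near 1/3."""
    return 1.0 - 4.0 * math.pi / (3.0 * math.sqrt(3.0)
                                  * math.log(4.0 / (1.0 - 3.0 * alpha)))

def alpha_approx(rho):
    """Inverse of (22): alpha ~ 1/3 - (4/3) exp[-4 pi / {3 sqrt(3) (1 - rho)}]."""
    return (1.0 / 3.0
            - (4.0 / 3.0) * math.exp(-4.0 * math.pi
                                     / (3.0 * math.sqrt(3.0) * (1.0 - rho))))

# rho_approx(0.3333) ~ 0.772 and rho_approx(0.333333) ~ 0.841, matching the
# equal-parameter values quoted earlier; alpha_approx inverts rho_approx exactly.
```

Note that although the pair inverts exactly for any α, (22) itself is accurate only for α above about 0.33 (e.g. at α = 0.3 it gives about 0.34 against the true ρ = 0.415), so alpha_approx is likewise useful only for large target ρ.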

Acknowledgements

I am grateful to W.E. Sharp and B. Turner for the query that led to this work, to a referee for helpful comments, and to D.N. Politis for advance notice of Politis (1994).

References

Abramowitz, M. and I.A. Stegun, Eds. (1965). Handbook of Mathematical Functions. Dover, New York.
Basu, S. and G.C. Reinsel (1992). A note on properties of spatial Yule-Walker estimators. J. Statist. Comput. Simul. 41, 243-255.
Basu, S. and G.C. Reinsel (1993). Properties of the spatial unilateral first-order ARMA model. Adv. Appl. Probab. 25, 631-648.
Besag, J.E. (1972). On the correlation structure of some two-dimensional stationary processes. Biometrika 59, 43-48.
Besag, J.E. (1974). Spatial interaction and the statistical analysis of lattice systems (with discussion). J. Roy. Statist. Soc. B 36, 192-236.
Besag, J. (1981). On a system of two-dimensional recurrence equations. J. Roy. Statist. Soc. B 43, 302-309.
Guyon, X. (1982). Parameter estimation for a stationary process on a d-dimensional lattice. Biometrika 69, 95-105.
Ha, E. and H.J. Newton (1993). The bias of estimators of causal spatial autoregressive processes. Biometrika 80, 242-245.
Martin, R.J. (1979). A subclass of lattice processes applied to a problem in planar sampling. Biometrika 66, 209-217.
Martin, R.J. (1990). The use of time series models and methods in the analysis of agricultural field trials. Commun. Statist. - Theory Meth. 19, 55-81.
Martin, R.J. (1996). Some results on unilateral ARMA lattice processes. J. Statist. Plann. Inference 50, 395-411.


Pickard, D.K. (1980). Unilateral Markov fields. Adv. Appl. Probab. 12, 655-671.
Politis, D.N. (1994). Markov chains in many dimensions. Adv. Appl. Probab. 26, 756-774.
Riordan, J. (1968). Combinatorial Identities. Krieger, Huntington, New York.
Sharp, W.E. and L.A. Aroian (1985). The generation of multidimensional autoregressive series by the herringbone method. Math. Geol. 17, 67-79.
Tjostheim, D. (1978). Statistical spatial series modelling. Adv. Appl. Probab. 10, 130-154. Correction (1984) 16, 220.
Tory, E.M. and D.K. Pickard (1992). Unilateral Gaussian fields. Adv. Appl. Probab. 24, 95-112.
Whittle, P. (1954). On stationary processes in the plane. Biometrika 41, 434-449.