Accepted Manuscript

On the normal approximation for random fields via martingale methods
Magda Peligrad, Na Zhang

PII: S0304-4149(17)30180-1
DOI: http://dx.doi.org/10.1016/j.spa.2017.07.012
Reference: SPA 3167

To appear in: Stochastic Processes and their Applications

Received date: 3 February 2017
Revised date: 11 July 2017
Accepted date: 15 July 2017

Please cite this article as: M. Peligrad, N. Zhang, On the normal approximation for random fields via martingale methods, Stochastic Processes and their Applications (2017), http://dx.doi.org/10.1016/j.spa.2017.07.012
On the normal approximation for random fields via martingale methods

Magda Peligrad and Na Zhang
Department of Mathematical Sciences, University of Cincinnati, PO Box 210025, Cincinnati, OH 45221-0025, USA.
email: [email protected]
email: [email protected]

Abstract

We prove a central limit theorem for strictly stationary random fields under a sharp projective condition. The assumption was introduced in the setting of random sequences by Maxwell and Woodroofe. Our approach is based on new results for triangular arrays of martingale differences, which have interest in themselves. As applications, we provide new results for linear random fields and nonlinear random fields of Volterra type.

MSC: 60F05, 60G10, 60G48

Keywords: Random field; Central limit theorem; Maxwell-Woodroofe condition; Martingale approximation.
1 Introduction
Martingale methods are very important for establishing limit theorems for sequences of random variables. The theory of martingale approximation, initiated by Gordin (1969), was perfected in many subsequent papers. A random field consists of multi-indexed random variables $(X_u)_{u\in\mathbb{Z}^d}$. The main difficulty when analyzing the asymptotic properties of random fields is the fact that the future and the past do not have a unique interpretation. Nevertheless, it is still natural to try to exploit the richness of the martingale techniques. The main problem consists of the construction of meaningful filtrations. In order to overcome this difficulty, mathematicians either used the lexicographic order or introduced the notion of commuting filtration. The lexicographic order appears in early papers, such as in Rosenblatt (1972), who pioneered the field of martingale approximation in the context of random fields. An important result was obtained by Dedecker (1998), who pointed out an interesting projective criterion for random fields, also based on the lexicographic order. The lexicographic order leads to normal approximation under projective conditions with respect to rather large, half-plane indexed sigma algebras. In order to reduce the size of the filtration used in projective conditions, mathematicians introduced the so-called commuting filtrations. The traditional way of constructing commuting filtrations is to
consider random fields which are functions of independent random variables. We would like to mention several remarkable recent contributions in this direction by Gordin (2009), El Machkouri et al. (2013), Wang and Woodroofe (2013), Volný and Wang (2014), and Cuny et al. (2015), who provided interesting martingale approximations in the context of random fields. It is remarkable that Volný (2015) imposed the ergodicity condition in only one direction of the stationary random field. Other recent results involve interesting mixing conditions, such as in the recent paper by Bradley and Tone (2017).

In this paper we obtain a central limit theorem for random fields in the situation when the variables satisfy a generalized Maxwell-Woodroofe condition. This is an interesting projective condition which defines a class of random variables satisfying the central limit theorem and its invariance principle, even in its quenched form. This condition is in some sense minimal for this type of behavior, as shown in Peligrad and Utev (2005). Its importance was pointed out, for example, in papers by Maxwell and Woodroofe (2000), who obtained a central limit theorem (CLT); Peligrad and Utev (2005), who obtained a maximal inequality and the functional form of the CLT; and Cuny and Merlevède (2014), who obtained the quenched form of this invariance principle. The Maxwell-Woodroofe condition for random fields was formulated in Wang and Woodroofe (2013), who also pointed out a variance inequality in the context of commuting filtrations. Compared to the paper of Wang and Woodroofe (2013), our paper has a double scope: first, to provide a central limit theorem under a generalized Maxwell-Woodroofe condition that extends the original result of Maxwell and Woodroofe (2000) to random fields; second, to use more general random fields than Bernoulli fields. Our results are relevant for analyzing some statistics based on repeated independent samples from a stationary process.

The tools for proving these results consist of new theorems for triangular arrays of martingale differences, which have interest in themselves. We present applications of our results to linear random fields and nonlinear random fields, which provide new limit theorems for these structures. Our results could also be formulated in the language of dynamical systems, leading to new results in this field.
2 Results
Everywhere in this paper we shall denote by $\|\cdot\|$ the norm in $L^2$. By $\Rightarrow$ we denote convergence in distribution. In the sequel, $[x]$ denotes the integer part of $x$. As usual, $a \wedge b$ stands for the minimum of $a$ and $b$.

Maxwell and Woodroofe (2000) introduced the following condition for a stationary process $(X_i)_{i\in\mathbb{Z}}$, adapted to a stationary filtration $(\mathcal{F}_i)_{i\in\mathbb{Z}}$:
$$\sum_{k\geq 1} \frac{1}{k^{3/2}} \|E(S_k|\mathcal{F}_1)\| < \infty, \qquad S_k = \sum_{i=1}^{k} X_i, \eqno(1)$$
and proved a central limit theorem for $S_n/\sqrt{n}$. In this paper we extend this result to random fields.
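As a quick illustration of condition (1) (our own numeric sketch, not part of the paper's argument): for a stationary AR(1) process $X_i = \rho X_{i-1} + \varepsilon_i$ with $|\rho|<1$ and i.i.d. centered innovations, $E(X_i|\mathcal{F}_1) = \rho^{i-1}X_1$, so $\|E(S_k|\mathcal{F}_1)\| = \|X_1\|(1-\rho^k)/(1-\rho)$ and the series in (1) is finite. The sketch below checks numerically that the partial sums of the series (normalized by $\|X_1\|$) stabilize.

```python
# Numerical check (illustrative, hypothetical AR(1) example, not from the
# paper) that the Maxwell-Woodroofe series (1) is finite when
# ||E(S_k | F_1)|| = ||X_1|| * (1 - rho**k) / (1 - rho), |rho| < 1.

def mw_partial_sum(rho: float, n_terms: int) -> float:
    """Partial sum of the series in (1), divided by ||X_1||."""
    return sum(
        k ** -1.5 * (1 - rho ** k) / (1 - rho) for k in range(1, n_terms + 1)
    )

rho = 0.5
s_small = mw_partial_sum(rho, 10_000)
s_large = mw_partial_sum(rho, 100_000)
# Adding ten times more terms changes the partial sum only slightly,
# consistent with convergence of the series.
print(s_small, s_large, s_large - s_small)
```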
For the sake of clarity we shall explain first the extension to random fields with double indexes and, at the end, we shall formulate the results for general random fields. We shall introduce a stationary random field adapted to a stationary filtration. For constructing a flexible filtration it is customary to start with a stationary random field $(\xi_{n,m})_{n,m\in\mathbb{Z}}$ and to introduce another stationary random field $(X_{n,m})_{n,m\in\mathbb{Z}}$ defined by
$$X_{n,m} = f(\xi_{i,j};\ i\leq n,\ j\leq m), \eqno(2)$$
where $f$ is a measurable function. Note that $X_{n,m}$ is adapted to the filtration $\mathcal{F}_{n,m} = \sigma(\xi_{i,j};\ i\leq n,\ j\leq m)$. As a matter of fact, $X_{n,m} = T^n S^m(X_{0,0})$, where for all $u$ and $v$, $Tf(\ldots,x_{-1,v},x_{0,v}) = f(\ldots,x_{0,v},x_{1,v})$ ($T$ is the vertical shift) and $Sf(\ldots,x_{u,-1},x_{u,0}) = f(\ldots,x_{u,0},x_{u,1})$ ($S$ is the horizontal shift).

We raise the question of normal approximation for stationary random fields under projective conditions with respect to the filtration $(\mathcal{F}_{n,m})_{n,m\in\mathbb{Z}}$. In several previous results involving various types of projective conditions, the methods take advantage of the existence of commuting filtrations, i.e.
$$E(E(X|\mathcal{F}_{a,b})|\mathcal{F}_{u,v}) = E(X|\mathcal{F}_{a\wedge u,\, b\wedge v}).$$
This type of filtration is induced by an initial random field $(\xi_{n,m})_{n,m\in\mathbb{Z}}$ of independent random variables or, more generally, can be induced by stationary random fields $(\xi_{n,m})_{n,m\in\mathbb{Z}}$ where only the columns $\bar\xi_m = (\xi_{n,m})_{n\in\mathbb{Z}}$ are independent. This model often appears in statistical applications when one deals with repeated realizations of a stationary sequence. We prove this property in Lemma 17 in the Appendix. It is interesting to point out that commuting filtrations can be described by the equivalent formulation: for $a\leq u$ we have
$$E(E(X|\mathcal{F}_{a,b})|\mathcal{F}_{u,v}) = E(X|\mathcal{F}_{u,\, b\wedge v}). \eqno(3)$$
This follows from a Markovian-type property; see, for instance, Problem 34.11 in Billingsley (1995).

Our main result is the following theorem, which is an extension of the CLT in Maxwell and Woodroofe (2000) to random fields. Below we use the notation
$$S_{k,j} = \sum_{u=1}^{k}\sum_{v=1}^{j} X_{u,v}.$$

Theorem 1 Define $(X_{n,m})_{n,m\in\mathbb{Z}}$ by (2) and assume that (3) holds. Assume that the following projective condition is satisfied:
$$\sum_{j,k\geq 1} \frac{1}{j^{3/2} k^{3/2}} \|E(S_{j,k}|\mathcal{F}_{1,1})\| < \infty. \eqno(4)$$
In addition, assume that the vertical shift $T$ is ergodic. Then there is a constant $c$ such that
$$\frac{1}{n_1 n_2} E(S_{n_1,n_2}^2) \to c^2 \text{ as } \min(n_1,n_2)\to\infty$$
and
$$\frac{1}{\sqrt{n_1 n_2}}\, S_{n_1,n_2} \Rightarrow N(0,c^2) \text{ as } \min(n_1,n_2)\to\infty. \eqno(5)$$
By simple calculations involving the properties of conditional expectation, we obtain the following corollary.

Corollary 2 Assume the following projective condition is satisfied:
$$\sum_{j,k\geq 1} \frac{1}{j^{1/2} k^{1/2}} \|E(X_{j,k}|\mathcal{F}_{1,1})\| < \infty, \eqno(6)$$
and $T$ is ergodic. Then there is a constant $c$ such that the CLT in (5) holds.

The results are easy to extend to general random fields $(X_u)_{u\in\mathbb{Z}^d}$, introduced in the following way. We start with a stationary random field $(\xi_n)_{n\in\mathbb{Z}^d}$ and introduce another stationary random field $(X_n)_{n\in\mathbb{Z}^d}$ defined by $X_k = f(\xi_j;\ j\leq k)$, where $f$ is a measurable function and $j\leq k$ means $j_i\leq k_i$ for all $i$. Note that $X_k$ is adapted to the filtration $\mathcal{F}_k = \sigma(\xi_u;\ u\leq k)$. As a matter of fact, $X_k = T_1^{k_1}T_2^{k_2}\cdots T_d^{k_d}(X_0)$, where the $T_i$ are the shift operators. In the next theorem we shall consider commuting filtrations in the sense that for $a\leq u$ in $\mathbb{Z}$ and $b,v\in\mathbb{Z}^{d-1}$ we have
$$E(E(X|\mathcal{F}_{a,b})|\mathcal{F}_{u,v}) = E(X|\mathcal{F}_{u,\, b\wedge v}).$$
For example, this kind of filtration is induced by stationary random fields $(\xi_{n,m})_{n\in\mathbb{Z},\, m\in\mathbb{Z}^{d-1}}$ such that the columns $\bar\xi_m = (\xi_{n,m})_{n\in\mathbb{Z}}$, $m\in\mathbb{Z}^{d-1}$, are independent. All the results extend to this context via mathematical induction. Below, $|n| = n_1\cdots n_d$.

Theorem 3 Assume that $(X_u)_{u\in\mathbb{Z}^d}$ and $(\mathcal{F}_u)_{u\in\mathbb{Z}^d}$ are as above and that the following projective condition is satisfied:
$$\sum_{u\geq 1} \frac{1}{|u|^{3/2}} \|E(S_u|\mathcal{F}_1)\| < \infty.$$
In addition, assume that $T_1$ is ergodic. Then there is a constant $c$ such that
$$\frac{1}{|n|} E(S_n^2) \to c^2 \text{ as } \min(n_1,\ldots,n_d)\to\infty$$
and
$$\frac{1}{\sqrt{|n|}}\, S_n \Rightarrow N(0,c^2) \text{ as } \min(n_1,\ldots,n_d)\to\infty. \eqno(7)$$
Corollary 4 Assume that
$$\sum_{u\geq 1} \frac{1}{|u|^{1/2}} \|E(X_u|\mathcal{F}_1)\| < \infty \eqno(8)$$
and $T_1$ is ergodic. Then the CLT in (7) holds.
Corollary 4 above shows that Theorem 1.1 in Wang and Woodroofe (2013) holds for functions of random fields which are not necessarily functions of i.i.d. random variables. We shall give examples providing new results for linear and Volterra random fields. For simplicity, they are formulated in the context of functions of i.i.d. random variables.

Example 5 (Linear field) Let $(\xi_n)_{n\in\mathbb{Z}^d}$ be a random field of independent, identically distributed random variables which are centered and have finite second moment. Define
$$X_k = \sum_{j\geq 0} a_j\, \xi_{k-j}.$$
Assume that $\sum_{j\geq 0} a_j^2 < \infty$ and
$$\sum_{j\geq 1} \frac{|b_j|}{|j|^{3/2}} < \infty, \quad \text{where } b_j^2 = \sum_{i\geq 0}\Big(\sum_{u=1}^{j} a_{u+i}\Big)^2. \eqno(9)$$
Then the CLT in (7) holds.

Let us mention how this example differs from other results available in the literature. Example 1 in El Machkouri et al. (2013) contains a CLT under the condition $\sum_{u\geq 0} |a_u| < \infty$. If we take, for instance, for positive integers $u_i$,
$$a_{u_1,u_2,\ldots,u_d} = \prod_{i=1}^{d} \frac{(-1)^{u_i}}{u_i\sqrt{\log u_i}},$$
then $\sum_u |a_u| = \infty$. Furthermore, condition (8), which was used in this context by Wang and Woodroofe (2013), is not satisfied, but condition (9) holds.

Another class of nonlinear random fields are the Volterra processes, which play an important role in nonlinear system theory.

Example 6 (Volterra field) Let $(\xi_n)_{n\in\mathbb{Z}^d}$ be a random field of independent, identically distributed random variables which are centered and have finite second moment. Define
$$X_k = \sum_{(u,v)\geq(0,0)} a_{u,v}\, \xi_{k-u}\, \xi_{k-v},$$
where the $a_{u,v}$ are real coefficients with $a_{u,u} = 0$ and $\sum_{u,v\geq 0} a_{u,v}^2 < \infty$. Denote
$$c_{u,v}(j) = \sum_{k=1}^{j} a_{k+u,k+v}$$
and assume
$$\sum_{j\geq 1} \frac{|b_j|}{|j|^{3/2}} < \infty, \quad \text{where } b_j^2 = \sum_{u\geq 0,\, v\geq 0,\, u\neq v}\big(c_{u,v}^2(j) + c_{u,v}(j)\,c_{v,u}(j)\big).$$
Then the CLT in (7) holds.
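To see that the condition on $b_j$ in Example 6 can actually hold, here is a numeric sketch (our own illustration; the geometrically decaying coefficients $a_{u,v} = \rho^{u+v}$ for $u\neq v$, with $a_{u,u}=0$, are a hypothetical choice): the truncated $c_{u,v}(j)$ and $b_j^2$ stay bounded in $j$, so $\sum_j |b_j|/j^{3/2} < \infty$.

```python
# Numeric sketch for Example 6 (our illustration; the coefficient choice
# a_{u,v} = RHO**(u+v) for u != v, a_{u,u} = 0, is hypothetical).
RHO = 0.5

def a(u: int, v: int) -> float:
    return 0.0 if u == v else RHO ** (u + v)

def c(u: int, v: int, j: int) -> float:
    # c_{u,v}(j) = sum_{k=1}^{j} a_{k+u, k+v}
    return sum(a(k + u, k + v) for k in range(1, j + 1))

def b_sq(j: int, trunc: int = 60) -> float:
    # Truncated b_j^2 = sum_{u,v >= 0, u != v} (c_{u,v}(j)^2
    #                                            + c_{u,v}(j) * c_{v,u}(j)).
    total = 0.0
    for u in range(trunc):
        for v in range(trunc):
            if u != v:
                cuv, cvu = c(u, v, j), c(v, u, j)
                total += cuv * cuv + cuv * cvu
    return total

# b_j^2 increases in j but quickly saturates at a finite limit, so the
# series sum_j |b_j| / j**1.5 converges for these coefficients.
print(b_sq(1), b_sq(10), b_sq(50))
```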
Remark 7 In Examples 5 and 6 the fields are Bernoulli. However, we can take as innovations the random field $(\xi_{n,m})_{n,m\in\mathbb{Z}}$ having as columns independent copies of a stationary and ergodic sequence of martingale differences.
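To make the coefficient comparison after Example 5 concrete, here is a quick numerical check (our own illustration, in dimension $d=1$, taking $u\geq 2$ so that $\log u > 0$): the coefficients $a_u = (-1)^u/(u\sqrt{\log u})$ are square summable but not absolutely summable.

```python
import math

# Illustration (not from the paper): for a_u = (-1)**u / (u * sqrt(log u)),
# u >= 2, the squares are summable while the absolute values are not.

def abs_partial(n: int) -> float:
    # partial sums of |a_u| ~ 1/(u sqrt(log u)); these keep growing
    return sum(1.0 / (u * math.sqrt(math.log(u))) for u in range(2, n + 1))

def sq_partial(n: int) -> float:
    # partial sums of a_u**2 = 1/(u**2 log u); these stabilize
    return sum(1.0 / (u * u * math.log(u)) for u in range(2, n + 1))

# Compare partial sums at 10**4 and 10**6 terms: the first series grows by
# more than 1, the second by a negligible amount.
print(abs_partial(10_000), abs_partial(1_000_000))
print(sq_partial(10_000), sq_partial(1_000_000))
```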
3 Proofs
In this section we gather the proofs. They are based on a new result for a random field consisting of triangular arrays of row-wise stationary martingale differences, which allows us to find its asymptotic behavior by analyzing the limiting distribution of its columns.

Theorem 8 Assume that for each fixed $n$, $(D_{n,k})_{k\in\mathbb{Z}}$ forms a stationary martingale difference sequence adapted to the stationary nested filtration $(\mathcal{F}_{n,k})_{k\in\mathbb{Z}}$, and that the family $(D_{n,1}^2)_{n\geq 1}$ is uniformly integrable. In addition, assume that for each fixed $m\geq 1$, $(D_{n,1},\ldots,D_{n,m})_{n\geq 1}$ converges in distribution to $(L_1,L_2,\ldots,L_m)$, and
$$\frac{1}{m}\sum_{j=1}^{m} L_j^2 \to c^2 \text{ in } L^1 \text{ as } m\to\infty. \eqno(10)$$
Then
$$\frac{1}{\sqrt{n}}\sum_{k=1}^{n} D_{n,k} \Rightarrow cZ \text{ as } n\to\infty,$$
where $Z$ is a standard normal variable.
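Theorem 8 can be sanity-checked by simulation. In the sketch below (our own illustration, not part of the proof), the $D_{n,k}$ are i.i.d. Rademacher signs, so each row is trivially a stationary martingale difference sequence, the limits $L_j$ are again Rademacher, and $\frac{1}{m}\sum_j L_j^2 = 1 = c^2$; the normalized row sums should then be approximately $N(0,1)$.

```python
import random

# Monte Carlo sanity check of Theorem 8 (our illustration): with
# D_{n,k} i.i.d. Rademacher (+1/-1), each row is a stationary martingale
# difference sequence and c**2 = 1, so n**(-1/2) * sum_k D_{n,k} should be
# approximately N(0, 1).
random.seed(0)
n, reps = 500, 2000
samples = [
    sum(random.choice((-1.0, 1.0)) for _ in range(n)) / n ** 0.5
    for _ in range(reps)
]
mean = sum(samples) / reps
var = sum((x - mean) ** 2 for x in samples) / reps
print(round(mean, 3), round(var, 3))  # mean near 0, variance near c^2 = 1
```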
Proof of Theorem 8. For the triangular array $(D_{n,k}/\sqrt{n})_{1\leq k\leq n}$, we shall verify the conditions of Theorem 13, given for convenience in the Appendix. Note that for $\varepsilon > 0$ we have
$$\frac{1}{n} E\Big(\max_{1\leq k\leq n} D_{n,k}^2\Big) \leq \varepsilon^2 + E\big(D_{n,1}^2\, I(|D_{n,1}| > \varepsilon\sqrt{n})\big) \eqno(11)$$
and, by the uniform integrability of $(D_{n,1}^2)_{n\geq 1}$, we obtain
$$\lim_{n\to\infty} E\big(D_{n,1}^2\, I(|D_{n,1}| > \varepsilon\sqrt{n})\big) = 0.$$
Therefore, by passing to the limit in inequality (11), first with $n\to\infty$ and then with $\varepsilon\to 0$, the first condition of Theorem 13 is satisfied. The result will follow from Theorem 13 if we can show that
$$\frac{1}{n}\sum_{j=1}^{n} D_{n,j}^2 \to c^2 \text{ in } L^1 \text{ as } n\to\infty.$$
To prove this, we shall apply the following lemma to the sequence $(D_{n,k}^2)_{k\in\mathbb{Z}}$, after noticing that, under our assumptions, for each fixed $m\geq 1$, $(D_{n,1}^2,\ldots,D_{n,m}^2)_{n\geq 1}$ converges in distribution to $(L_1^2,L_2^2,\ldots,L_m^2)$.
Lemma 9 Assume that the triangular array of random variables $(X_{n,k})_{k\in\mathbb{Z}}$ is row-wise stationary and $(X_{n,1})_{n\geq 1}$ is a uniformly integrable family. Assume that for each fixed $m\geq 1$, $(X_{n,1},\ldots,X_{n,m})_{n\geq 1}$ converges in distribution to $(X_1,X_2,\ldots,X_m)$ and
$$\frac{1}{m}\sum_{u=1}^{m} X_u \to c \text{ in } L^1 \text{ as } m\to\infty. \eqno(12)$$
Then
$$\frac{1}{n}\sum_{u=1}^{n} X_{n,u} \to c \text{ in } L^1 \text{ as } n\to\infty.$$
Proof of Lemma 9. Let $m\geq 1$ be a fixed integer and define consecutive blocks of indexes of size $m$: $I_j(m) = \{(j-1)m+1,\ldots,jm\}$. In the set of integers from $1$ to $n$ we have $k_n = k_n(m) = [n/m]$ such blocks of integers and a last one containing fewer than $m$ indexes. By the stationarity of the rows and the triangle inequality, we write
$$E\Big|\frac{1}{n}\sum_{u=1}^{n}(X_{n,u}-c)\Big| \leq \frac{1}{n}\sum_{j=1}^{k_n} E\Big|\sum_{k\in I_j(m)}(X_{n,k}-c)\Big| + \frac{1}{n}\, E\Big|\sum_{u=k_n m+1}^{n}(X_{n,u}-c)\Big|$$
$$\leq \frac{1}{m}\, E\Big|\sum_{u=1}^{m}(X_{n,u}-c)\Big| + \frac{m}{n}\, E|X_{n,1}-c|. \eqno(13)$$
Note that, by the uniform integrability of $(X_{n,1})_{n\geq 1}$, we have
$$\limsup_{n\to\infty} \frac{m}{n}\, E|X_{n,1}-c| \leq \limsup_{n\to\infty} \frac{m}{n}\big(E|X_{n,1}| + |c|\big) = 0.$$
Now, by the continuous mapping theorem and by our conditions, for fixed $m$ we have the following convergence in distribution:
$$\frac{1}{m}\sum_{u=1}^{m}(X_{n,u}-c) \Rightarrow \frac{1}{m}\sum_{u=1}^{m}(X_u-c).$$
In addition, by the uniform integrability of $(X_{n,k})_{n\geq 1}$ and by the theorem on convergence of moments associated with convergence in distribution, we have
$$\lim_{n\to\infty} E\Big|\frac{1}{m}\sum_{u=1}^{m}(X_{n,u}-c)\Big| = E\Big|\frac{1}{m}\sum_{u=1}^{m}(X_u-c)\Big|,$$
and by assumption (12) we obtain
$$E\Big|\frac{1}{m}\sum_{u=1}^{m}(X_u-c)\Big| \to 0 \text{ as } m\to\infty.$$
The result follows by passing to the limit in (13), letting first $n\to\infty$ and then $m\to\infty$. $\square$

When we have additional information about the type of the limiting distribution of the columns, the result simplifies.
Corollary 10 If in Theorem 8 the limiting vector $(L_1,L_2,\ldots,L_m)$ is stationary Gaussian, then condition (10) holds and
$$\frac{1}{\sqrt{n}}\sum_{k=1}^{n} D_{n,k} \Rightarrow cZ \text{ as } n\to\infty,$$
where $Z$ is a standard normal variable and $c$ can be identified by
$$c^2 = \lim_{n\to\infty} E(D_{n,1}^2).$$
Proof. We shall verify the conditions of Theorem 8. Note that, by the martingale property, we have $\mathrm{cov}(D_{n,1},D_{n,k}) = 0$ for $k\neq 1$. Next, by the uniform integrability condition and passing to the limit, we obtain $\mathrm{cov}(L_1,L_k) = 0$. Therefore the sequence $(L_m)_m$ is an i.i.d. Gaussian sequence of random variables, and condition (10) holds. $\square$

In order to prove Theorem 1, we start by pointing out an upper bound for the variance, given in Corollary 7.2 in Wang and Woodroofe (2013). It should be noted that, to prove it, the assumption that the random field is Bernoulli is not needed.

Lemma 11 Define $(X_{n,m})_{n,m\in\mathbb{Z}}$ by (2) and assume that (3) holds. Then there is a universal constant $C$ such that
$$\frac{1}{\sqrt{nm}}\|S_{n,m}\| \leq C \sum_{i,j\geq 1} \frac{1}{(ji)^{3/2}} \|E(S_{j,i}|\mathcal{F}_{1,1})\|.$$

By applying the triangle inequality, the contractivity of the conditional expectation, and changing the order of summation, we easily obtain the following corollary.

Corollary 12 Under the conditions of Lemma 11, there is a universal constant $C$ such that
$$\frac{1}{\sqrt{nm}}\|S_{n,m}\| \leq C \sum_{i,j\geq 1} \frac{1}{(ji)^{1/2}} \|E(X_{j,i}|\mathcal{F}_{1,1})\|.$$

Proof of Theorem 1. We shall develop the "small martingale method" in the context of random fields. To construct a row-wise stationary martingale approximation, we shall introduce a parameter. Let $\ell$ be a fixed positive integer and denote $k = [n_2/\ell]$. We start the proof by dividing the variables in each line into blocks of size $\ell$ and summing within each block. Define
$$X_{j,i}^{(\ell)} = \frac{1}{\ell^{1/2}} \sum_{u=(i-1)\ell+1}^{i\ell} X_{j,u}, \quad i\geq 1.$$
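The blocking step above is purely mechanical; here is a minimal sketch (ours, with hypothetical data) of how the normalized block sums $X_{j,i}^{(\ell)}$ are formed from one row of the field.

```python
import math

def block_sums(row, ell):
    """Normalized block sums: X_i = ell**(-1/2) * (sum of the row entries in
    the i-th block of length ell).  A trailing partial block is discarded,
    mirroring k = [n2 / ell] in the proof."""
    k = len(row) // ell
    return [
        sum(row[(i - 1) * ell : i * ell]) / math.sqrt(ell)
        for i in range(1, k + 1)
    ]

# A row of ones: each block of length 4 sums to 4, normalized to 4/sqrt(4) = 2.
print(block_sums([1.0] * 10, 4))  # → [2.0, 2.0]; the remainder is discarded
```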
Then, for each line $j$, we construct the stationary sequence of martingale differences $(Y_{j,i}^{(\ell)})_{i\in\mathbb{Z}}$ defined by
$$Y_{j,i}^{(\ell)} = X_{j,i}^{(\ell)} - E(X_{j,i}^{(\ell)}|\mathcal{F}_{j,i-1}^{(\ell)}),$$
where $\mathcal{F}_{j,k}^{(\ell)} = \mathcal{F}_{j,k\ell}$. Also, we consider the triangular array of martingale differences $(D_{n_1,i}^{(\ell)})_{i\geq 1}$ defined by
$$D_{n_1,i}^{(\ell)} = \frac{1}{\sqrt{n_1}}\sum_{j=1}^{n_1} Y_{j,i}^{(\ell)}.$$
In order to find the limiting distribution of $\big(\sum_{i=1}^{k} D_{n_1,i}^{(\ell)}/\sqrt{k}\big)$ when $\min(n_1,k)\to\infty$, we shall apply Corollary 10. It is enough to show that
$$(D_{n_1,1}^{(\ell)},\ldots,D_{n_1,N}^{(\ell)}) \Rightarrow (L_1,\ldots,L_N),$$
where $(L_1,\ldots,L_N)$ is stationary Gaussian and $\big((D_{n_1,1}^{(\ell)})^2\big)_{n_1}$ is uniformly integrable. Both of these conditions will be satisfied if we are able to verify the conditions of Theorem 14, in the Appendix, for the sequence $(a_1 Y_{n,1}^{(\ell)} + \cdots + a_N Y_{n,N}^{(\ell)})_n$, where $a_1,\ldots,a_N$ are arbitrary, fixed real numbers. We have to show that, for fixed $\ell$,
$$\sum_{k\geq 1}\frac{1}{k^{3/2}}\Big\|\sum_{j=1}^{k} E\big(a_1 Y_{j,1}^{(\ell)} + \cdots + a_N Y_{j,N}^{(\ell)}\,\big|\,\mathcal{F}_{1,N}^{(\ell)}\big)\Big\| < \infty. \eqno(14)$$
By the triangle inequality, it is enough to treat each sum separately and to show that for all $1\leq v\leq N$ we have
$$\sum_{k\geq 1}\frac{1}{k^{3/2}}\Big\|\sum_{j=1}^{k} E\big(Y_{j,v}^{(\ell)}\,\big|\,\mathcal{F}_{1,N}^{(\ell)}\big)\Big\| < \infty.$$
By (3) we have $E(Y_{j,v}^{(\ell)}|\mathcal{F}_{1,N}^{(\ell)}) = E(Y_{j,v}^{(\ell)}|\mathcal{F}_{1,v}^{(\ell)})$. Therefore, by stationarity, the latter condition is satisfied if we can prove that
$$\sum_{k\geq 1}\frac{1}{k^{3/2}}\Big\|\sum_{j=1}^{k} E\big(Y_{j,1}^{(\ell)}\,\big|\,\mathcal{F}_{1,1}^{(\ell)}\big)\Big\| < \infty.$$
Now, by using (3) once again, we deduce
$$E(Y_{j,1}^{(\ell)}|\mathcal{F}_{1,1}^{(\ell)}) = E\big(X_{j,1}^{(\ell)} - E(X_{j,1}^{(\ell)}|\mathcal{F}_{j,0}^{(\ell)})\,\big|\,\mathcal{F}_{1,1}^{(\ell)}\big) = E(X_{j,1}^{(\ell)}|\mathcal{F}_{1,1}^{(\ell)}) - E(X_{j,1}^{(\ell)}|\mathcal{F}_{1,0}^{(\ell)}).$$
So, by the triangle inequality and the monotonicity of the $L^2$ norm of the conditional expectation with respect to increasing sigma fields, we obtain
$$\Big\|\sum_{j=1}^{k} E(Y_{j,1}^{(\ell)}|\mathcal{F}_{1,1}^{(\ell)})\Big\| \leq 2\Big\|\sum_{j=1}^{k} E(X_{j,1}^{(\ell)}|\mathcal{F}_{1,1}^{(\ell)})\Big\| = \frac{2}{\ell^{1/2}}\,\|E(S_{k,\ell}|\mathcal{F}_{1,\ell})\|.$$
Furthermore, since the filtration is commuting, by the triangle inequality we obtain
$$\|E(S_{k,\ell}|\mathcal{F}_{1,\ell})\| = \Big\|\sum_{u=1}^{k}\sum_{v=1}^{\ell} E(X_{u,v}|\mathcal{F}_{1,v})\Big\| \leq \ell\,\Big\|\sum_{u=1}^{k} E(X_{u,1}|\mathcal{F}_{1,1})\Big\|.$$
Taking into account condition (4), it follows that
$$\sum_{k\geq 1}\frac{1}{k^{3/2}}\Big\|\sum_{j=1}^{k} E(Y_{j,v}^{(\ell)}|\mathcal{F}_{1,N}^{(\ell)})\Big\| \leq 2\ell^{1/2}\sum_{k\geq 1}\frac{1}{k^{3/2}}\,\|E(S_{k,1}|\mathcal{F}_{1,1})\| < \infty,$$
showing that condition (14) is satisfied, which implies that the conditions of Corollary 10 are satisfied. The conclusion is that
$$\frac{1}{\sqrt{n_1 k}}\sum_{j=1}^{n_1}\sum_{i=1}^{k} Y_{j,i}^{(\ell)} \Rightarrow N(0,\sigma_\ell^2) \text{ as } \min(n_1,k)\to\infty,$$
where $\sigma_\ell^2$ is defined, in accordance with Theorem 14, by
$$\sigma_\ell^2 = \lim_{n\to\infty}\frac{1}{n}\, E\Big(\sum_{j=1}^{n} Y_{j,1}^{(\ell)}\Big)^2.$$
According to Theorem 3.2 in Billingsley (1999), in order to prove convergence and to find the limiting distribution of $S_{n_1,n_2}/\sqrt{n_1 n_2}$, we have to show that
$$\lim_{\ell\to\infty}\limsup_{n_1,k\to\infty}\Big\|\frac{1}{\sqrt{n_1 n_2}}\,S_{n_1,n_2} - \frac{1}{\sqrt{n_1 k}}\sum_{j=1}^{n_1}\sum_{i=1}^{k} Y_{j,i}^{(\ell)}\Big\| = 0 \eqno(15)$$
and $N(0,\sigma_\ell^2) \Rightarrow N(0,\sigma^2)$, which is equivalent to
$$\sigma_\ell^2 \to \sigma^2 \text{ as } \ell\to\infty. \eqno(16)$$
The conclusion will then be that $S_{n_1,n_2}/\sqrt{n_1 n_2} \Rightarrow N(0,\sigma^2)$ as $\min(n_1,n_2)\to\infty$.

Let us first prove (15). By the triangle inequality, we decompose the difference in (15) into two parts. Relation (15) will be established if we show both
$$\lim_{\ell\to\infty}\limsup_{n_1,k\to\infty}\frac{1}{\sqrt{n_1 k}}\Big\|\sum_{j=1}^{n_1}\sum_{i=1}^{k} E(X_{j,i}^{(\ell)}|\mathcal{F}_{j,i-1}^{(\ell)})\Big\| = 0 \eqno(17)$$
and
$$\lim_{n_1,k\to\infty}\Big\|\frac{1}{\sqrt{n_1 n_2}}\,S_{n_1,n_2} - \frac{1}{\sqrt{n_1 k\ell}}\,S_{n_1,k\ell}\Big\| = 0. \eqno(18)$$
In order to compute the standard deviation of the double sum involved, before taking the limit in (17) we shall apply Lemma 11 and a multivariate version of Remark 15 in the Appendix. This expression is dominated by a universal constant times
$$\sum_{i,j\geq 1}\frac{1}{(ij)^{3/2}}\Big\|\sum_{u=1}^{j}\sum_{v=1}^{i} E\big(E(X_{u,v}^{(\ell)}|\mathcal{F}_{u,v-1}^{(\ell)})\,\big|\,\mathcal{F}_{1,0}^{(\ell)}\big)\Big\|.$$
Now,
$$\sum_{u=1}^{j}\sum_{v=1}^{i} E\big(E(X_{u,v}^{(\ell)}|\mathcal{F}_{u,v-1}^{(\ell)})\,\big|\,\mathcal{F}_{1,0}^{(\ell)}\big) = \frac{1}{\ell^{1/2}}\, E(S_{j,i\ell}|\mathcal{F}_{1,0}).$$
So the quantity in (17) is bounded above by a universal constant times
$$\frac{1}{\ell^{1/2}}\sum_{i,j\geq 1}\frac{1}{(ij)^{3/2}}\,\|E(S_{j,i\ell}|\mathcal{F}_{1,0})\|,$$
which converges to $0$ as $\ell\to\infty$ under our condition (4), by Lemmas 2.7 and 2.8 in Peligrad and Utev (2005), applied in the second coordinate.

As far as the limit (18) is concerned, since by Lemma 11 and condition (4) the array $\sum_{j=1}^{n_1}\sum_{i=1}^{n_2} X_{j,i}/\sqrt{n_1 n_2}$ is bounded in $L^2$, it is enough to show that, for $k\ell \leq n_2 < (k+1)\ell$, we have
$$\lim_{n_1,n_2\to\infty}\Big\|\frac{1}{\sqrt{n_1 n_2}}\sum_{j=1}^{n_1}\sum_{i=k\ell+1}^{n_2} X_{j,i}\Big\| = 0.$$
We just have to note that, again by Lemma 11, condition (4) and stationarity, there is a constant $K$ such that
$$\Big\|\sum_{j=1}^{n_1}\sum_{i=k\ell+1}^{n_2} X_{j,i}\Big\| \leq K\sqrt{n_1 \ell},$$
and $\ell/n_2\to 0$ as $n_2\to\infty$.

We turn now to proving (16). By (15) and the orthogonality of the martingale differences,
$$\lim_{\ell\to\infty}\limsup_{n_1,n_2\to\infty}\Big|\,\Big\|\frac{1}{\sqrt{n_1 n_2}}\,S_{n_1,n_2}\Big\| - \Big\|\frac{1}{\sqrt{n_1}}\sum_{j=1}^{n_1} Y_{j,0}^{(\ell)}\Big\|\,\Big| = 0.$$
So
$$\lim_{\ell\to\infty}\limsup_{n_1,n_2\to\infty}\Big|\,\Big\|\frac{1}{\sqrt{n_1 n_2}}\,S_{n_1,n_2}\Big\| - \sigma_\ell\,\Big| = 0.$$
By the triangle inequality, this shows that $(\sigma_\ell)_\ell$ is a Cauchy sequence, therefore convergent to a constant $\sigma$, and also
$$\lim_{n_1,n_2\to\infty}\Big\|\frac{1}{\sqrt{n_1 n_2}}\,S_{n_1,n_2}\Big\| = \sigma.$$
The proof is now complete. $\square$

The extensions to random fields indexed by $\mathbb{Z}^d$, for $d > 2$, are straightforward, following the same lines of proof as for a two-indexed random field. We shall point out the differences. To extend Lemma 11, we first apply a result of Peligrad and Utev (2005) (see Theorem 14 in the Appendix) to the stationary sequence $Y_j(m) = \sum_{i=1}^{m} X_{j,i}$ with $j\in\mathbb{Z}^{d-1}$, and then we apply induction. In order to prove Theorem 3, we partition the variables according to the last index. Let $\ell$ be a fixed positive integer, denote $k = [n_d/\ell]$ and define
$$X_{j,i}^{(\ell)} = \frac{1}{\ell^{1/2}}\sum_{u=(i-1)\ell+1}^{i\ell} X_{j,u}, \quad i\geq 1.$$
Then, for each $j$, we construct the stationary sequence of martingale differences $(Y_{j,i}^{(\ell)})_{i\in\mathbb{Z}}$ defined by $Y_{j,i}^{(\ell)} = X_{j,i}^{(\ell)} - E(X_{j,i}^{(\ell)}|\mathcal{F}_{j,i-1}^{(\ell)})$ and
$$D_{n',i}^{(\ell)} = \frac{1}{\sqrt{|n'|}}\sum_{j=1}^{n'} Y_{j,i}^{(\ell)}.$$
For showing that $(D_{n',1}^{(\ell)},\ldots,D_{n',N}^{(\ell)}) \Rightarrow (L_1,\ldots,L_N)$, we apply the induction hypothesis.

Proof of Example 5. Let us note first that the variables are square integrable and well defined. Note that
$$E(S_u|\mathcal{F}_0) = \sum_{j\leq 0}\Big(\sum_{1\leq k\leq u} a_{k-j}\Big)\xi_j$$
and therefore
$$E\big(E^2(S_u|\mathcal{F}_0)\big) = \sum_{i\geq 0}\Big(\sum_{1\leq k\leq u} a_{k+i}\Big)^2 E(\xi_1^2).$$
The result follows by applying Theorem 3 (see Remark 15 and consider a multivariate analog of it). $\square$

Proof of Example 6. Note that
$$E(S_j|\mathcal{F}_0) = \sum_{k=1}^{j}\sum_{(u,v)\geq(k,k)} a_{u,v}\,\xi_{k-u}\,\xi_{k-v} = \sum_{(u,v)\geq(0,0)}\Big(\sum_{k=1}^{j} a_{k+u,k+v}\Big)\xi_{-u}\,\xi_{-v} = \sum_{(u,v)\geq(0,0)} c_{u,v}(j)\,\xi_{-u}\,\xi_{-v}.$$
Since, by our conditions, $c_{u,u}(j) = 0$, we obtain
$$E\big(E^2(S_j|\mathcal{F}_0)\big) = \sum_{u\geq 0,\, v\geq 0,\, u\neq v}\big(c_{u,v}^2(j) + c_{u,v}(j)\,c_{v,u}(j)\big)\, E(\xi_{-u}^2\,\xi_{-v}^2). \ \square$$
4 Appendix
For convenience, we state a classical result of McLeish, which can be found on pp. 237-238 of Gänssler and Häusler (1979).

Theorem 13 Assume $(D_{n,i})_{1\leq i\leq n}$ is an array of square integrable martingale differences adapted to an array $(\mathcal{F}_{n,i})_{1\leq i\leq n}$ of nested sigma fields. Suppose that
$$\max_{1\leq j\leq n} |D_{n,j}| \to 0 \text{ in } L^2 \text{ as } n\to\infty$$
and
$$\sum_{j=1}^{n} D_{n,j}^2 \to c^2 \text{ in probability as } n\to\infty.$$
Then $\sum_{j=1}^{n} D_{n,j}$ converges in distribution to $N(0,c^2)$.
The following is a corollary of Theorem 1.1 in Peligrad and Utev (2005). This central limit theorem was obtained by Maxwell and Woodroofe (2000).

Theorem 14 Assume that $(X_i)_{i\in\mathbb{Z}}$ is a stationary sequence adapted to a stationary filtration $(\mathcal{F}_i)_{i\in\mathbb{Z}}$. Then there is a universal constant $C_1$ such that
$$\|S_n\| \leq C_1 n^{1/2}\sum_{k=1}^{\infty}\frac{1}{k^{3/2}}\,\|E(S_k|\mathcal{F}_1)\|.$$
If
$$\sum_{k=1}^{\infty}\frac{1}{k^{3/2}}\,\|E(S_k|\mathcal{F}_1)\| < \infty,$$
then $(S_n^2/n)_n$ is uniformly integrable and there is a positive constant $c$ such that
$$\frac{1}{n}\, E(S_n^2) \to c^2 \text{ as } n\to\infty.$$
If in addition the sequence is ergodic, we have
$$\frac{1}{\sqrt{n}}\, S_n \Rightarrow cN(0,1) \text{ as } n\to\infty.$$

Remark 15 Note that we have the following equivalence:
$$\sum_{k=1}^{\infty}\frac{1}{k^{3/2}}\,\|E(S_k|\mathcal{F}_1)\| < \infty \quad \text{if and only if} \quad \sum_{k=1}^{\infty}\frac{1}{k^{3/2}}\,\|E(S_k|\mathcal{F}_0)\| < \infty.$$

Remark 16 The condition (1) is implied by
$$\sum_{k=1}^{\infty}\frac{1}{k^{1/2}}\,\|E(X_k|\mathcal{F}_1)\| < \infty.$$
Lemma 17 Assume that $X, Y, Z$ are integrable random variables such that $(X,Y)$ and $Z$ are independent. Assume that $g(X,Y)$ is integrable. Then $E(g(X,Y)|\sigma(Y,Z)) = E(g(X,Y)|Y)$ a.s. and $E(g(Z,Y)|\sigma(X,Y)) = E(g(Z,Y)|Y)$ a.s.

Proof. Since $(X,Y)$ and $Z$ are independent, it is easy to see that $X$ and $Z$ are conditionally independent given $Y$. The result follows from this observation by Problem 34.11 in Billingsley (1995). $\square$
Acknowledgements. This research was supported in part by NSF grant DMS-1512936 and the Taft Research Center at the University of Cincinnati. The authors would like to thank Yizao Wang for pointing out that random fields with independent columns generate commuting filtrations. We would also like to thank the anonymous referees for carefully reading the manuscript and for their numerous suggestions, which improved the presentation of the paper.
References

[1] Billingsley, P. (1995). Probability and Measure. (3rd ed.). Wiley Series in Probability and Statistics, New York.
[2] Billingsley, P. (1999). Convergence of Probability Measures. (2nd ed.). Wiley Series in Probability and Statistics, New York.
[3] Bradley, R. and Tone, C. (2017). A central limit theorem for non-stationary strongly mixing random fields. Journal of Theoretical Probability 30 655-674.
[4] Cuny, C. and Merlevède, F. (2014). On martingale approximations and the quenched weak invariance principle. Ann. Probab. 42 760-793.
[5] Cuny, C., Dedecker, J. and Volný, D. (2015). A functional central limit theorem for fields of commuting transformations via martingale approximation. Zapiski Nauchnyh Seminarov POMI 441.C. Part 22 239-263; and Journal of Mathematical Sciences (2016) 219 765-781.
[6] Dedecker, J. (1998). A central limit theorem for stationary random fields. Probability Theory and Related Fields 110 397-426.
[7] El Machkouri, M., Volný, D. and Wu, W.B. (2013). A central limit theorem for stationary random fields. Stochastic Process. Appl. 123 1-14.
[8] Gänssler, P. and Häusler, E. (1979). Remarks on the functional central limit theorem for martingales. Z. Wahrscheinlichkeitstheorie verw. Gebiete 50 237-243.
[9] Gordin, M. I. (1969). On the central limit theorem for stationary processes. Dokl. Akad. Nauk SSSR 188 739-741; and Soviet Math. Dokl. 10 1174-1176.
[10] Gordin, M. I. (2009). Martingale co-boundary representation for a class of stationary random fields. Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 364, Veroyatnost' i Statistika 14.2, 88-108, 236; and J. Math. Sci. 163 (2009) 4, 363-374.
[11] Maxwell, M. and Woodroofe, M. (2000). Central limit theorems for additive functionals of Markov chains. Ann. Probab. 28 713-724.
[12] Peligrad, M. and Utev, S. (2005). A new maximal inequality and invariance principle for stationary sequences. Ann. Probab. 33 798-815.
[13] Rosenblatt, M. (1972). Central limit theorem for stationary processes. Proc. Sixth Berkeley Symp. on Math. Statist. and Prob., Vol. 2, Univ. of California Press, 551-561.
[14] Volný, D. (2015). A central limit theorem for fields of martingale differences. C. R. Math. Acad. Sci. Paris 353 1159-1163.
[15] Volný, D. and Wang, Y. (2014). An invariance principle for stationary random fields under Hannan's condition. Stochastic Process. Appl. 124 4012-4029.
[16] Wang, Y. and Woodroofe, M. (2013). A new criterion for the invariance principle for stationary random fields. Statistica Sinica 23 1673-1696.