Journal of the Korean Statistical Society 39 (2010) 449–454
Almost sure limit theorem for stationary Gaussian random fields

Hyemi Choi*

Department of Statistical Informatics, Chonbuk National University, Jeonbuk, 561-756, Republic of Korea

Article history: Received 25 February 2009; Accepted 29 September 2009; Available online 8 October 2009.
AMS 2000 subject classifications: primary 60F15; secondary 60F05.

Abstract. We obtain an almost sure version of a maximum limit theorem for stationary Gaussian random fields under certain covariance conditions. As a by-product, we also obtain the weak convergence of the maximum of a stationary Gaussian random field, which is of independent interest. © 2009 The Korean Statistical Society. Published by Elsevier B.V. All rights reserved.

Keywords: Almost sure limit theorem; Extreme value theory; Maximum; Stationary Gaussian random field
1. Introduction

Random field theory has recently received increasing attention. Its applications are extremely numerous and diverse, including image analysis, the atmospheric sciences and geostatistics, among others. Gaussian random fields form a particularly important class, for several reasons: many natural phenomena are reasonably modelled by them, the model is specified by its expectations and covariances, and estimation and inference for it are relatively simple. The book by Adler and Taylor (2007) is a good reference for general random field theory, including Gaussian random fields, and Piterbarg (1996) treats the maximum of Gaussian random fields in detail. In this paper we consider the maximum of stationary Gaussian random fields with discrete parameter and obtain an almost sure limit theorem for it.

The almost sure version of the central limit theorem was first discovered by Brosamler (1988) and Schatte (1988), and has since been discussed by various authors. Its general form is as follows: if the partial sums $S_n = \sum_{k=1}^{n} \xi_k$ of a sequence of random variables $\xi_1, \xi_2, \ldots$ satisfy $a_n(S_n - b_n) \xrightarrow{D} G$ for some constants $\{a_n\}$, $\{b_n\}$ and a distribution function $G$, then under some conditions

$$\lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, I\big(a_k(S_k - b_k) \le x\big) = G(x) \quad \text{a.s.}$$

for any continuity point $x$ of $G$, where $I(A)$ denotes the indicator function of the event $A$. Recently Cheng, Peng, and Qi (1998) and Fahrner and Stadmüller (1998) extended this principle by establishing the almost sure limit theorem for the maximum of independent identically distributed random variables. In particular, Csáki and Gonchigdanzan (2002) and Shouquan and Zhengyan (2007) obtained almost sure limit theorems for the maxima of stationary Gaussian sequences. The weak convergence of the linearly transformed maximum of a Gaussian sequence is well known: if $M_n$ is the maximum of
* Tel.: +82 63 270 3391; fax: +82 63 270 3405. E-mail address: [email protected].

1226-3192/$ – see front matter © 2009 The Korean Statistical Society. Published by Elsevier B.V. All rights reserved. doi:10.1016/j.jkss.2009.09.004
a (standard) Gaussian sequence $\{\xi_n, n \ge 1\}$ of random variables, i.e. $M_n = \max_{i\le n} \xi_i$, then under appropriate covariance conditions

$$P\{a_n(M_n - b_n) \le x\} \to \exp(-e^{-x}) \quad \text{for every } x \in \mathbb{R}, \qquad (1)$$

where $a_n = \sqrt{2\log n}$ and $b_n = a_n - \dfrac{\log\log n + \log 4\pi}{2a_n}$. Csáki and Gonchigdanzan (2002) proved

$$\lim_{n\to\infty} \frac{1}{\log n} \sum_{k=1}^{n} \frac{1}{k}\, I\big(a_k(M_k - b_k) \le x\big) = \exp(-e^{-x}) \quad \text{a.s. for every } x \in \mathbb{R} \qquad (2)$$
with the same $\{a_n\}$, $\{b_n\}$ as in (1), under additional conditions on the covariances; these conditions were weakened by Shouquan and Zhengyan (2007). The present work studies the stationary Gaussian random field analogue of (2), under a modified version of the covariance condition assumed by Shouquan and Zhengyan (2007). As a by-product, we obtain a weak convergence result for the maximum of stationary Gaussian random fields (see Lemma 3).

In the sequel, $\{X_j\}$, indexed by $j \equiv (j_1,\ldots,j_d) \in \mathbb{N}^d$, is assumed to be a stationary standardized Gaussian random field with covariances $r_n = \mathrm{Cov}(X_j, X_{j+n})$, $n \in \mathbb{Z}^d$, where $\mathbb{N}$ and $\mathbb{Z}$ denote the set of positive integers and the set of integers respectively. We assume that the sampling region is $J_n \equiv \{j \in \mathbb{N}^d : 1 \le j_i \le n_i,\ i = 1,\ldots,d\}$, where $n = (n_1,\ldots,n_d) \in \mathbb{N}^d$. Let $M_n = \max_{j\in J_n} X_j$; for any subset $E$ of $\mathbb{N}^d$, $M(E)$ denotes $\max_{j\in E} X_j$, so that $M_n = M(J_n)$.

The rest of the paper is organized as follows. Section 2 contains the statement of the main result and its proof, which relies on a series of preliminary lemmas collected in the Appendix.

In the sequel, $\lambda_E$ is the number of elements in $E$ for any subset $E$ of $\mathbb{N}^d$. Let $\lambda_k = \prod_{i: k_i \ne 0} |k_i|$ for $k = (k_1,\ldots,k_d)$ and $\lambda_0 = 1$; of course, $\lambda_k = \lambda_{J_k}$ when $k \in \mathbb{N}^d$. Also let $\log n$ denote $(\log n_1, \ldots, \log n_d)$. We express the condition that $\min_{1\le i\le d} n_i \to \infty$ with $n_j/n_i < K$ for some $K > 0$, $1 \le i,j \le d$, by simply writing $n \to \infty$. Let $\Phi(\cdot)$ and $\phi(\cdot)$ denote the standard Gaussian distribution function and its density function respectively. Also for brevity, let $I_n = \{j \in \mathbb{Z}^d : -n_i \le j_i \le n_i,\ i = 1,\ldots,d,\ j \ne 0\}$. In the case that $\{X_j\}$ is isotropic, the set $I_n$ may be replaced by $J_n$.

2. Main result

We obtain the following almost sure limit results for the maxima of the stationary Gaussian random field $\{X_j\}$. Throughout this paper, $K$ denotes a positive constant whose value may change from line to line, and for any $d$-dimensional real vector $x$ and $a \in \{-1,1\}^d$, $ax$ denotes $(a_1 x_1, \ldots, a_d x_d)$.

Theorem 1. Suppose that $r_{an} \to 0$ for any $a \in \{-1,1\}^d$ as $n \to \infty$, and that
$$\lambda_n^{-1} \sum_{k\in I_n} |r_k| \log\lambda_k \exp\{\gamma |r_k| \log\lambda_k\} = O\big((\lambda_{\log n})^{-\epsilon}\big) \qquad (3)$$

for some $\epsilon > 0$ and $\gamma > 2d$.

(i) If $\{u_n\}$ is a family of constants such that $\lambda_n(1 - \Phi(u_n)) \to \tau$ for some $0 \le \tau < \infty$, and $u_i < u_j$ when $i < j$, then

$$\lim_{n\to\infty} \frac{1}{\lambda_{\log n}} \sum_{k\in J_n} \frac{1}{\lambda_k}\, I(M_k \le u_k) = \exp(-\tau) \quad \text{a.s.}$$

(ii) If $a_n = \sqrt{2\log\lambda_n}$ and $b_n = a_n - \dfrac{\log\log\lambda_n + \log(4\pi)}{2a_n}$, then

$$\lim_{n\to\infty} \frac{1}{\lambda_{\log n}} \sum_{k\in J_n} \frac{1}{\lambda_k}\, I\big(a_k(M_k - b_k) \le x\big) = \exp(-e^{-x}) \quad \text{a.s. for every } x \in \mathbb{R}.$$
The above result still holds with (3) replaced by the condition $\lambda_n^{-1}\sum_{k\in I_n}|r_k|\log\lambda_k = O\big((\lambda_{\log n})^{-\epsilon}\big)$, which is slightly stronger. When $\{X_j\}$ is isotropic, $r_{an} = r_n$ for any $a \in \{-1,1\}^d$, and hence the condition $r_{an} \to 0$ is equivalent to the condition $r_n \to 0$.
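Theorem 1(ii) can be illustrated numerically in the simplest admissible setting: an i.i.d. standard Gaussian field with $d = 2$, where $r_n = 0$ for all $n \ne 0$ and (3) holds trivially. The sketch below is an illustration and not part of the paper (all variable names are ours); it computes the weighted log-average of part (ii) along a single realization. Since $\lambda_{\log n}$ grows only logarithmically, the almost sure convergence is slow, and only rough agreement with $\exp(-e^{-x})$ should be expected at moderate $n$.

```python
import numpy as np

# Illustrative sketch (not from the paper): Theorem 1(ii) for d = 2 with an
# i.i.d. standard Gaussian field, so r_n = 0 for n != 0 and (3) holds trivially.
rng = np.random.default_rng(0)
n1 = n2 = 1000
X = rng.standard_normal((n1, n2))

# M[k1-1, k2-1] = max of X over the rectangle J_k = {1..k1} x {1..k2}
M = np.maximum.accumulate(np.maximum.accumulate(X, axis=0), axis=1)

k1 = np.arange(1, n1 + 1)[:, None]
k2 = np.arange(1, n2 + 1)[None, :]
lam = (k1 * k2).astype(float)            # lambda_k = k1 * k2

x = 0.0
valid = lam >= 2                          # a_k, b_k are undefined when lambda_k = 1
loglam = np.where(valid, np.log(lam), 1.0)
a = np.sqrt(2.0 * loglam)
b = a - (np.log(loglam) + np.log(4.0 * np.pi)) / (2.0 * a)
u = x / a + b                             # I(a_k (M_k - b_k) <= x) = I(M_k <= u_k)

# weighted log-average of Theorem 1(ii); lambda_{log n} = log n1 * log n2
avg = np.sum(np.where(valid & (M <= u), 1.0 / lam, 0.0)) / (np.log(n1) * np.log(n2))
print(avg)   # the theorem predicts convergence to exp(-e^0) = exp(-1) as n grows
```

Here `np.maximum.accumulate` along both axes produces all rectangle maxima $M_k = M(J_k)$ in one pass; the single term $k = (1,1)$ is dropped because $a_k$ and $b_k$ are undefined when $\lambda_k = 1$, and its weight is negligible.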
Proof of Theorem 1. We first note that condition (3) implies

$$\lambda_n^{-1} \sum_{k\in I_n} |r_k| \log\lambda_k \exp\{\gamma |r_k| \log\lambda_k\} \to 0, \qquad (4)$$

and hence $P(M_n \le u_n) \to e^{-\tau}$ under the conditions of the theorem, by Lemma 3(i) in the Appendix. Consequently the partial sums of $P(M_k \le u_k)$ with weights $\frac{1}{\lambda_{\log n}\lambda_k}$ converge, i.e.

$$\lim_{n\to\infty} \frac{1}{\lambda_{\log n}} \sum_{k\in J_n} \frac{1}{\lambda_k}\, P(M_k \le u_k) = \exp(-\tau).$$
Therefore, it suffices to prove that

$$\lim_{n\to\infty} \frac{1}{\lambda_{\log n}} \sum_{k\in J_n} \frac{1}{\lambda_k}\, \{I(M_k \le u_k) - P(M_k \le u_k)\} = 0 \quad \text{a.s.},$$

which will be done by showing that

$$\mathrm{Var}\left(\frac{1}{\lambda_{\log n}} \sum_{k\in J_n} \frac{1}{\lambda_k}\, I(M_k \le u_k)\right) = O\left(\frac{1}{(\lambda_{\log n})^{\nu}}\right) \qquad (5)$$
for some $\nu > 0$, due to Lemma 1 in Fahrner and Stadmüller (1998), since the expectation of $\xi_k \equiv I(M_k \le u_k) - P(M_k \le u_k)$ equals 0 and $|\xi_k| \le 1$ for all $k \in \mathbb{N}^d$. Now, we have

$$\mathrm{Var}\left(\frac{1}{\lambda_{\log n}} \sum_{k\in J_n} \frac{1}{\lambda_k}\, I(M_k \le u_k)\right) = \frac{1}{(\lambda_{\log n})^2}\left(\sum_{k\in J_n} \frac{1}{\lambda_k^2}\, E(\xi_k^2) + \sum_{k\ne l} \frac{1}{\lambda_k \lambda_l}\, E(\xi_k \xi_l)\right) \equiv T_1 + T_2.$$
Since $|\xi_k| \le 1$, it follows that

$$T_1 \le \frac{1}{(\lambda_{\log n})^2} \sum_{k\in J_n} \frac{1}{\lambda_k^2} \le \frac{K}{(\lambda_{\log n})^2}. \qquad (6)$$
Note that, for $k \ne l$ such that $u_k \le u_l$,

$$\begin{aligned}
|E(\xi_k \xi_l)| &= |\mathrm{Cov}(I(M_k \le u_k),\, I(M_l \le u_l))| \\
&\le |\mathrm{Cov}\{I(M(J_k) \le u_k),\, I(M(J_l) \le u_l) - I(M(J_l - J_k) \le u_l)\}| + |\mathrm{Cov}\{I(M(J_k) \le u_k),\, I(M(J_l - J_k) \le u_l)\}| \\
&\le E\,|I(M(J_l) \le u_l) - I(M(J_l - J_k) \le u_l)| + |\mathrm{Cov}\{I(M(J_k) \le u_k),\, I(M(J_l - J_k) \le u_l)\}| \\
&\le \frac{\lambda_l - \lambda_{J_l - J_k}}{\lambda_l} + \frac{K}{(\lambda_{\log l})^{\epsilon}},
\end{aligned}$$

where the last relation follows by applying Lemma 2. In order to handle $T_2$, define $A_m = \{(k,l) \in J_n \times J_n : (2m_j - 1)(k_j - l_j) \ge 0,\ k \ne l\}$ for $m \in \Lambda \equiv \{(m_1,\ldots,m_d) : m_j = 0,1,\ j = 1,\ldots,d,\ m \ne \mathbf{1}\}$. Note that $A_{\mathbf{1}} = \emptyset$ by the condition on $\{u_n\}$. Let $a^m$ denote $(a_1^{m_1}, \ldots, a_d^{m_d})$ for $a \in \mathbb{R}^d$ and $m \in \Lambda$. Then we have

$$T_2 \le \frac{K}{(\lambda_{\log n})^2} \sum_{m\in\Lambda}\, \sum_{(k,l)\in A_m} \frac{1}{\lambda_k \lambda_l}\left(\frac{\lambda_l - \lambda_{J_l - J_k}}{\lambda_l} + \frac{1}{(\lambda_{\log l})^{\epsilon}}\right). \qquad (7)$$
Since $(\lambda_l - \lambda_{J_l - J_k})/\lambda_l$ becomes $\lambda_{k^{1-m}}/\lambda_{l^{1-m}}$ for $(k,l) \in A_m$, it follows that

$$\sum_{(k,l)\in A_m} \frac{1}{\lambda_k\lambda_l}\,\frac{\lambda_l - \lambda_{J_l - J_k}}{\lambda_l} = \sum_{A_m} \frac{\lambda_{k^{1-m}}}{\lambda_k\,\lambda_l\,\lambda_{l^{1-m}}} \le K \sum_{A_m^*} \frac{\lambda_{(\log n)^m}}{\lambda_{k^m}\,\lambda_{l^{1-m}}} \le K\,\lambda_{\log n}\,\lambda_{(\log n)^m},$$

where $A_m^* = \{(k,l) : k_j^{m_j} l_j^{(1-m_j)} > 1,\ k \ne l\}$. Furthermore, if we define $A_m(0) = \{(k,l) \in J_n\times J_n : (m_j-1)(k_j-l_j) \ge 0,\ k\ne l\}$ and $A_m(1) = \{(k,l) \in J_n\times J_n : m_j(k_j-l_j) \ge 0,\ k\ne l\}$, we have for the second part on the right-hand side of (7)
$$\sum_{(k,l)\in A_m} \frac{1}{\lambda_k\lambda_l\,(\lambda_{\log l})^{\epsilon}} = \sum_{A_m(0)} \frac{1}{\lambda_{k^{1-m}}\lambda_{l^{1-m}}\,(\lambda_{(\log l)^{1-m}})^{\epsilon}}\; \sum_{A_m(1)} \frac{1}{\lambda_{k^m}\lambda_{l^m}\,(\lambda_{(\log l)^m})^{\epsilon}} \le K\,(\lambda_{(\log n)^{1-m}})^{2-\epsilon} \sum_{A_m(1)} \frac{\lambda_{(\log k)^m}}{\lambda_{k^m}} \le K\,(\lambda_{\log n})^{2-\epsilon}\,(\lambda_{(\log n)^m})^{\epsilon}.$$

Therefore,

$$T_2 \le K \sum_{m\in\Lambda} \left[\frac{\lambda_{(\log n)^m}}{\lambda_{\log n}} + \left(\frac{\lambda_{(\log n)^m}}{\lambda_{\log n}}\right)^{\epsilon}\right],$$
and hence $T_2 \le (\lambda_{\log n})^{-\nu}$ for some $\nu > 0$. This and (6) together establish (5), which concludes the proof of (i).

Next, take $u_n = x/a_n + b_n$. Then we see that

$$\lambda_n(1 - \Phi(u_n)) \to e^{-x} \quad \text{as } n \to \infty,$$

and hence (ii) immediately follows from (i) with $\tau = e^{-x}$.
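As a numerical aside (not part of the paper), the norming constants used in part (ii) can be sanity-checked in the independent case, where $P(M_n \le u_n) = \Phi(u_n)^{\lambda_n}$ exactly. The sketch below evaluates this at $u_n = x/a_n + b_n$ with $\lambda_n = 10^6$; it tracks the Gumbel limit $\exp(-e^{-x})$, with the well-known slow, logarithmic-rate discrepancy for Gaussian maxima still visible.

```python
import math

def gumbel_check(lam: int, x: float) -> tuple:
    """Exact P(M <= x/a + b) for lam i.i.d. N(0,1) variables vs. the Gumbel limit."""
    a = math.sqrt(2.0 * math.log(lam))
    b = a - (math.log(math.log(lam)) + math.log(4.0 * math.pi)) / (2.0 * a)
    u = x / a + b
    tail = 0.5 * math.erfc(u / math.sqrt(2.0))   # 1 - Phi(u)
    exact = math.exp(lam * math.log1p(-tail))    # Phi(u)^lam, computed stably
    limit = math.exp(-math.exp(-x))
    return exact, limit

for x in (-1.0, 0.0, 1.0, 2.0):
    exact, limit = gumbel_check(10**6, x)
    print(f"x={x:+.1f}  exact={exact:.4f}  Gumbel={limit:.4f}")
```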
Appendix

This Appendix states, with proofs, the lemmas used in the proof of the main result in Section 2.

Lemma 1. Let $\{X_n, n\in\mathbb{N}^d\}$ be a stationary standardized Gaussian random field with covariances $\{r_n\}$. Assume that $r_{an} \to 0$ as $n\to\infty$, and that $r_n$ satisfies (3). If $\lambda_n(1-\Phi(u_n))$ is bounded, then

$$\sup_{k\in J_n}\ \lambda_k \sum_{j\in I_n} |r_j| \exp\left\{-\frac{u_k^2 + u_n^2}{2(1+|r_j|)}\right\} = O\left(\frac{1}{(\lambda_{\log n})^{\epsilon}}\right) \qquad (8)$$

for some $\epsilon > 0$.

The proof is given below only for the two-dimensional case $n = (n_1, n_2)$; essentially the same technique applies in higher dimensions.

Proof of Lemma 1. We prove the result when $\lambda_n(1-\Phi(u_n))$ actually converges to a finite limit, from which the general case follows. Indeed, if $\lambda_n(1-\Phi(u_n)) \le K$ for some constant $K>0$ and $v_n$ is defined to satisfy $\lambda_n(1-\Phi(v_n)) = K$, then (8) follows with $v_n$ in place of $u_n$; since clearly $v_n \ge u_n$ fails to hold only when $v_n \le u_n$, it follows at once that (8) holds for $u_n$ itself.

Suppose, then, that $\lambda_n(1-\Phi(u_n))$ converges to a finite limit. Since $1-\Phi(x) \sim \phi(x)/x$ as $x\to\infty$, we have

$$\exp\left(-\frac{u_n^2}{2}\right) \sim K\,\frac{u_n}{\lambda_n}, \qquad u_n \sim \sqrt{2\log\lambda_n}. \qquad (9)$$
Let $I_n(a)$ denote $\{j\in\mathbb{Z}^2 : aj \in J_n,\ j\ne 0\}$ for $a\in\{-1,1\}^2$. Note that the sets $\{I_n(a)\}$ induce a partition of $I_n$ into four parts. Let $\delta(a) = \sup_{aj\ne 0}|r_{aj}|$, and note that $\delta(a) < 1$ since $r_{an}\to 0$ as $n\to\infty$ (Leadbetter, Lindgren, & Rootzén, 1983, p. 86). For each $a\in\{-1,1\}^d$, define positive constants $\alpha_i(a)$, $\beta_i$, $p_i(a)$, $q_i$, $i = 1,2$, such that

$$\beta \equiv \min_i \beta_i = 2/\gamma, \qquad \beta_i < 1/2, \qquad \alpha_i(a) < \min\left\{\beta_i,\ \frac{1-\delta(a)}{1+\delta(a)}\right\}, \qquad p_i(a) = [\,n_i^{\alpha_i(a)}\,], \qquad q_i = [\,\lambda_n^{\beta_i}\,].$$
We suppress ‘‘(a)’’ in $\delta(a)$, $\alpha_i(a)$, $p_i(a)$, $i=1,2$, when there is no confusion. Split each index set $I_n(a)$ into nine parts: the first for $aj \in [0,p_1]\times[0,p_2]$ except $j=0$, the second for $(p_1,q_1]\times[0,p_2]$, the third for $(q_1,n_1]\times[0,p_2]$, the fourth for $[0,p_1]\times(p_2,q_2]$, the fifth for $(p_1,q_1]\times(p_2,q_2]$, the sixth for $(q_1,n_1]\times(p_2,q_2]$, the seventh for $[0,p_1]\times(q_2,n_2]$, the eighth for $(p_1,q_1]\times(q_2,n_2]$ and the ninth for $(q_1,n_1]\times(q_2,n_2]$. Call the corresponding parts of the sum $T_i(a)$, $i=1,\ldots,9$. By (9), $T_1(a)$ is dominated by

$$\lambda_k\, n_1^{\alpha_1} n_2^{\alpha_2} \exp\left\{-\frac{u_k^2+u_n^2}{2(1+\delta)}\right\} = \lambda_k\, n_1^{\alpha_1} n_2^{\alpha_2} \left[\exp\left(-\frac{u_k^2+u_n^2}{2}\right)\right]^{\frac{1}{1+\delta}} \le K\,\lambda_k\, n_1^{\alpha_1} n_2^{\alpha_2} \left(\frac{u_k u_n}{\lambda_k \lambda_n}\right)^{\frac{1}{1+\delta}} \le K\, n_1^{1+\alpha_1-\frac{2}{1+\delta}}\, n_2^{1+\alpha_2-\frac{2}{1+\delta}}\, (\log\lambda_n)^{\frac{1}{1+\delta}}.$$
Since $1+\alpha_i-\frac{2}{1+\delta} < 0$, $i=1,2$, by the choice of $\alpha_1,\alpha_2$, we get $T_1(a) \le \lambda_n^{-\sigma_1}$ for some $\sigma_1 > 0$, uniformly in $k \in J_n$.

To deal with $T_2(a)$ and $T_5(a)$, define $\delta^{(1)}_{n_1}(a) = \sup_{j_1\ge n_1}|r_{aj}|$. Using (9) again, we have

$$\begin{aligned}
T_2(a)+T_5(a) &\le \lambda_k \exp\left(-\frac{u_k^2+u_n^2}{2}\right) \sum_{aj\in(p_1,q_1]\times[0,q_2]} \exp\left\{\frac{|r_j|(u_k^2+u_n^2)}{2(1+|r_j|)}\right\} \\
&\le \lambda_k\,\lambda_n^{\beta_1+\beta_2} \exp\left\{-\frac{(u_k^2+u_n^2)\big(1-\delta^{(1)}_{p_1}\big)}{2}\right\} \\
&\le K\,\lambda_k\,\lambda_n^{\beta_1+\beta_2} \left(\frac{\sqrt{\log\lambda_k\,\log\lambda_n}}{\lambda_k\lambda_n}\right)^{1-\delta^{(1)}_{p_1}} \\
&\le K\,\lambda_n^{-1+\beta_1+\beta_2+2\delta^{(1)}_{p_1}}\, (\log\lambda_n)^{1-\delta^{(1)}_{p_1}} \le \lambda_n^{-\sigma_2}
\end{aligned}$$

for some $\sigma_2 > 0$, which follows from the facts that $\beta_1+\beta_2 < 1$ and $\delta^{(1)}_{p_1}(a) \to 0$ as $n\to\infty$. Now, defining $\delta^{(2)}_{n_2}(a) = \sup_{j_2\ge n_2}|r_{aj}|$, it may similarly be shown that $T_4(a)$ is dominated by $\lambda_n^{-\sigma_3}$ for some $\sigma_3 > 0$.
Next, consider the parts $T_3(a)$, $T_6(a)$ and $T_9(a)$. For brevity, define the set $A = (q_1,n_1]\times[0,n_2]$. Again using (9), we have

$$\begin{aligned}
T_3(a)+T_6(a)+T_9(a) &= \lambda_k \sum_{aj\in A} |r_j| \exp\left\{-\frac{u_k^2+u_n^2}{2(1+|r_j|)}\right\} \\
&\le K\,\lambda_k \sum_{aj\in A} |r_j| \left(\frac{\sqrt{\log\lambda_k\,\log\lambda_n}}{\lambda_k\lambda_n}\right)^{\frac{1}{1+|r_j|}} \\
&\le K\,\log\lambda_n\,\frac{1}{\lambda_n} \sum_{aj\in A} |r_j| \exp(2|r_j|\log\lambda_n).
\end{aligned}$$

Note that $\log j_1 \ge \beta_1 \log\lambda_n$ for $j_1 > q_1$, and hence $\beta^{-1}\log j_1 \ge \log\lambda_n$ by the definition of $\beta$. Thus

$$T_3(a)+T_6(a)+T_9(a) \le K\,\frac{1}{\lambda_n} \sum_{aj\in A} |r_j| \log j_1 \exp(2\beta^{-1}|r_j|\log j_1) \le K\,\frac{1}{\lambda_n} \sum_{j\in I_n} |r_j| \log\lambda_j \exp(\gamma|r_j|\log\lambda_j) \le \frac{K}{(\lambda_{\log n})^{\epsilon}}.$$
Similarly, $T_7(a)+T_8(a) \le K(\lambda_{\log n})^{-\epsilon}$. Therefore the sum over each $I_n(a)$, $a\in\{-1,1\}^d$, in (8) has the uniform upper bound $K(\lambda_{\log n})^{-\epsilon}$ in $k$, and hence (8) is established.

We also need the following lemma for the proof of our main result.

Lemma 2. Let $\{X_j, j\in\mathbb{N}^d\}$ be a stationary standardized Gaussian random field. Assume that $r_{an}\to 0$ for any $a\in\{-1,1\}^d$ as $n\to\infty$, and that $\{r_n\}$ satisfies (3). If $\lambda_n(1-\Phi(u_n))$ is bounded, then for $k,n\in\mathbb{N}^d$ such that $k\ne n$ and $u_n \ge u_k$:

(i) $|P\{M(J_k)\le u_k,\ M(J_n-J_k)\le u_n\} - P\{M(J_k)\le u_k\}\,P\{M(J_n-J_k)\le u_n\}| = O\big((\lambda_{\log n})^{-\epsilon}\big)$;

(ii) $E\,|I(M(J_n)\le u_n) - I(M(J_n-J_k)\le u_n)| \le \dfrac{\lambda_n - \lambda_{J_n-J_k}}{\lambda_n} + \dfrac{K}{(\lambda_{\log n})^{\epsilon}}$.
Proof of Lemma 2. For part (i), we compare the maximum of $X_j$, $j \in J_k \cup J_n$, under the full covariance structure with the same maximum computed as if the variables arising from $J_k$ and $J_n$ were independent. Applying the Normal Comparison Lemma (see Leadbetter et al. (1983), page 81) to stationary Gaussian random fields gives

$$|P(M(J_k) \le u_k,\ M(J_n-J_k) \le u_n) - P(M(J_k) \le u_k)\,P(M(J_n-J_k) \le u_n)| \le K \sum_{\substack{(i,j)\in J_k\times J_n \\ i\ne j}} |r_{i-j}| \exp\left\{-\frac{u_k^2+u_n^2}{2(1+|r_{i-j}|)}\right\} \le K\,\lambda_k \sum_{j\in I_n} |r_j| \exp\left\{-\frac{u_k^2+u_n^2}{2(1+|r_j|)}\right\}.$$

Hence the first result follows from Lemma 1. Next, for (ii) we note that

$$E\,|I(M(J_n)\le u_n) - I(M(J_n-J_k)\le u_n)| = P(M(J_n-J_k)\le u_n) - P(M(J_n)\le u_n) \le |P(M(J_n-J_k)\le u_n) - \Phi^{\lambda_{J_n-J_k}}(u_n)| + |P(M(J_n)\le u_n) - \Phi^{\lambda_{J_n}}(u_n)| + |\Phi^{\lambda_{J_n-J_k}}(u_n) - \Phi^{\lambda_{J_n}}(u_n)|,$$

where the first and second parts on the right-hand side do not exceed $K(\lambda_{\log n})^{-\epsilon}$ by Corollary 4.2.4 of Leadbetter et al. (1983), and the third part is dominated by $1 - \lambda_{J_n-J_k}/\lambda_n$ by the fact that $x^{n-k} - x^n \le k/n$ for $0 \le x \le 1$. Hence (ii) follows, and the proof is complete.

The following weak convergence result extends Theorem 4.5.2 of Leadbetter et al. (1983) to stationary Gaussian random fields. It shows that appropriate conditions on the covariances $\{r_n\}$ imply $P(M_n \le u_n) \to e^{-\tau}$ with appropriately high levels $u_n$ in higher-dimensional cases too.

Lemma 3. Let $\{X_j, j\in\mathbb{N}^d\}$ be a stationary standardized Gaussian random field with covariances $\{r_n\}$ satisfying, as $n\to\infty$, $r_{an}\to 0$ for any $a\in\{-1,1\}^d$ and
$$\lambda_n^{-1} \sum_{k\in I_n} |r_k| \log\lambda_k \exp\{\gamma|r_k|\log\lambda_k\} \to 0 \qquad (10)$$

for some $\gamma > 2d$. Then, as $n\to\infty$:

(i) for $0\le\tau<\infty$, $P(M_n\le u_n)\to e^{-\tau}$ if and only if $\lambda_n(1-\Phi(u_n))\to\tau$;

(ii) $P\{a_n(M_n-b_n)\le x\} \to \exp(-e^{-x})$ for every $x\in\mathbb{R}$, where $a_n = \sqrt{2\log\lambda_n}$ and $b_n = a_n - (\log\log\lambda_n + \log(4\pi))/(2a_n)$.
Proof of Lemma 3. The ‘‘only if’’ part of (i), and part (ii), may be proved with obvious modifications of the arguments used in Theorem 4.3.3 of Leadbetter et al. (1983). Now, for the ‘‘if’’ part of (i), suppose $\lambda_n(1-\Phi(u_n))\to\tau<\infty$. On applying the same arguments as in the proof of Lemma 1, it is clear that (10) implies

$$\lambda_n \sum_{j\in I_n} |r_j| \exp\left(-\frac{u_n^2}{1+|r_j|}\right) \to 0 \quad \text{as } n\to\infty.$$

Further, it follows from the Normal Comparison Lemma that

$$\left|P\{M_n\le u_n\} - \Phi(u_n)^{\lambda_n}\right| \le K\,\lambda_n \sum_{j\in I_n} |r_j| \exp\left(-\frac{u_n^2}{1+|r_j|}\right).$$

Hence $P\{M_n\le u_n\}\to e^{-\tau}$.
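In the independent case, the ‘‘if’’ direction of Lemma 3(i) reduces to a Poisson-type approximation: choosing $u_n$ so that $\lambda_n(1-\Phi(u_n)) = \tau$ exactly gives $P(M_n \le u_n) = (1-\tau/\lambda_n)^{\lambda_n} \to e^{-\tau}$. The sketch below (illustrative, not from the paper; the helper name is ours) confirms this numerically using the standard library's `NormalDist`.

```python
from statistics import NormalDist
import math

norm = NormalDist()

def p_max_below(lam: int, tau: float) -> float:
    """P(M_n <= u_n) for lam i.i.d. N(0,1) variables, with u_n solving lam*(1 - Phi(u_n)) = tau."""
    u = norm.inv_cdf(1.0 - tau / lam)
    return norm.cdf(u) ** lam        # independence: the maximum's cdf factorizes

for tau in (0.5, 1.0, 2.0):
    for lam in (10**3, 10**5):
        print(lam, tau, p_max_below(lam, tau), math.exp(-tau))
```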
References

Adler, R., & Taylor, J. E. (2007). Random fields and geometry. New York: Springer.

Brosamler, G. A. (1988). An almost everywhere central limit theorem. Mathematical Proceedings of the Cambridge Philosophical Society, 104, 561–574.

Cheng, S., Peng, L., & Qi, Y. (1998). Almost sure convergence in extreme value theory. Mathematische Nachrichten, 190, 43–50.

Csáki, E., & Gonchigdanzan, K. (2002). Almost sure limit theorems for the maximum of stationary Gaussian sequences. Statistics and Probability Letters, 58, 195–203.

Fahrner, I., & Stadmüller, U. (1998). On almost sure max-limit theorems. Statistics and Probability Letters, 37, 229–236.

Leadbetter, M. R., Lindgren, G., & Rootzén, H. (1983). Extremes and related properties of random sequences and processes. New York: Springer.

Piterbarg, V. I. (1996). Asymptotic methods in the theory of Gaussian processes and fields. Providence: American Mathematical Society.

Schatte, P. (1988). On strong versions of the central limit theorem. Mathematische Nachrichten, 137, 249–256.

Shouquan, C., & Zhengyan, L. (2007). Almost sure limit theorems for a stationary Gaussian sequence. Applied Mathematics Letters, 20, 316–322.