The invariance principle for linear multi-parameter stochastic processes generated by associated fields

Statistics and Probability Letters 78 (2008) 3298–3303

Tae-Sung Kim a, Mi-Hwa Ko a,∗, Yong-Kab Choi b

a Department of Mathematics, WonKwang University, Jeonbuk, 570-749, Republic of Korea
b Division of Mathematics and Information Statistics, Gyeongsang National University, Kyungnam, 660-701, Republic of Korea

Article history: Received 16 April 2007; received in revised form 23 October 2007; accepted 2 June 2008; available online 18 June 2008.

MSC: 60F05; 60G10

Abstract

We derive the invariance principle for the linear random field generated by identically distributed and associated random fields. Our result extends the result in Bulinski and Keane [Bulinski, A.V., Keane, M.S., 1996. Invariance principle for associated random fields. J. Math. Sci. 81, 2905–2911] to the linear random field in the identically distributed case, as well as the result in Marinucci and Poghosyan [Marinucci, M., Poghosyan, S., 2001. Asymptotics for linear random fields. Statist. Probab. Lett. 51, 131–141] to the associated case.

© 2008 Elsevier B.V. All rights reserved.

1. Introduction

Recently the concept of association, or positive dependence, has been widely used. This concept arose independently in reliability theory and statistical physics, where one prefers to say that the random variables satisfy the FKG inequalities. Recall that a finite collection of real-valued random variables Y1, . . . , Yn is called associated if Cov(f(Y1, . . . , Yn), g(Y1, . . . , Yn)) ≥ 0 for any coordinatewise nondecreasing functions f, g : R^n → R, whenever the covariance exists. An infinite family of random variables is associated if every finite subfamily has this property (see Esary et al. (1967)). In 1984, Newman (1984, p. 138) posed the problem of proving the invariance principle for stationary associated random fields {ξ(t1, . . . , tp), (t1, . . . , tp) ∈ Z^p} when p ≥ 3. Subsequently, Burton and Kim (1988) and Bulinski and Keane (1996) proved the invariance principle for associated random fields with p ≥ 3 by different methods. Let W(·, . . . , ·) denote multi-parameter standard Brownian motion, i.e. a zero-mean Gaussian process with covariance function satisfying

E(W(t_1, \dots, t_p) W(s_1, \dots, s_p)) = \prod_{j=1}^{p} \min(t_j, s_j),   (1.1)
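The defining covariance inequality of association can be probed numerically. A minimal Monte Carlo sketch (our illustration, not from the paper): independent standard normals form an associated family, and with the coordinatewise nondecreasing choices f = max and g = sum the estimated covariance should come out positive (by Stein's lemma it is in fact exactly 1 here).

```python
import numpy as np

# Monte Carlo check of the association inequality Cov(f(Y), g(Y)) >= 0
# for i.i.d. standard normals (an associated family) and the
# coordinatewise nondecreasing functions f = max and g = sum.
rng = np.random.default_rng(0)
m, k = 200_000, 3
Y = rng.standard_normal((m, k))
f = Y.max(axis=1)            # nondecreasing in every coordinate
g = Y.sum(axis=1)            # nondecreasing in every coordinate
cov_fg = np.cov(f, g)[0, 1]  # sample covariance; should be positive
print(cov_fg)
```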

also, let D_p be the space of "cadlag" functions from [0, 1]^p to R; it is possible to introduce on D_p a metric topology which makes it complete and separable, and indeed D_p is the multi-dimensional analogue of the Skorohod space D[0, 1]; see Straf (1970) or Poghosyan and Roelly (1998) for details. Without loss of generality, we can assume that Eξ(t1, . . . , tp) = 0 for all (t1, . . . , tp) ∈ Z^p. Define in the Skorohod space D_p the partial sum processes

W_n(r_1, \dots, r_p) = \frac{1}{\sigma n^{p/2}} \sum_{t_1=1}^{[nr_1]} \cdots \sum_{t_p=1}^{[nr_p]} \xi(t_1, \dots, t_p),   (1.2)

where 0 ≤ r1, . . . , rp ≤ 1, (t1, . . . , tp) ∈ N^p, [·] is the integer-part function, and

\sigma^2 = \sum_{(t_1,\dots,t_p) \in Z^p} \mathrm{Cov}(\xi(0, \dots, 0), \xi(t_1, \dots, t_p)) < \infty.   (1.3)

∗ Corresponding author. E-mail addresses: [email protected] (T.-S. Kim), [email protected] (M.-H. Ko), [email protected] (Y.-K. Choi). doi:10.1016/j.spl.2008.06.022.
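For p = 2, the process (1.2) can be evaluated on the whole grid with two cumulative sums. A minimal sketch (our construction, using an i.i.d. standard normal innovation field, for which σ² = 1 in (1.3)):

```python
import numpy as np

# Partial sum process W_n(r1, r2) of (1.2) for p = 2, built from an
# i.i.d. standard normal field (so sigma^2 = 1 in (1.3)).
rng = np.random.default_rng(1)
n = 100
xi = rng.standard_normal((n, n))      # xi(t1, t2), t1, t2 = 1..n
S = xi.cumsum(axis=0).cumsum(axis=1)  # S[k-1, l-1] = sum_{t1<=k, t2<=l} xi
sigma = 1.0

def W_n(r1, r2):
    """W_n(r1, r2) = S([n r1], [n r2]) / (sigma * n^{p/2}), here p = 2."""
    k, l = int(n * r1), int(n * r2)
    if k == 0 or l == 0:
        return 0.0
    return S[k - 1, l - 1] / (sigma * n)

print(W_n(1.0, 1.0), W_n(0.5, 0.25))
```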

From the theorem of Bulinski and Keane (1996) we obtain the following result:

Theorem 1.1 (Bulinski and Keane, 1996). Let {ξ(t1, . . . , tp)} be an identically distributed and associated random field with Eξ(t1, . . . , tp) = 0. Let

E|\xi(t_1, \dots, t_p)|^q < \infty \quad \text{for some } q > 2.   (1.4)

Then {ξ(t1, . . . , tp)} fulfills the invariance principle, i.e., W_n(·) ⇒ W(·), where ⇒ denotes weak convergence.

In the proof of Lemma 1 in Bulinski and Keane (1996) they also established the following maximal inequality for associated random fields.

Lemma 1.2 (Bulinski and Keane, 1996). Let A be the family of parallelepipeds in R_+^p of the form V = (a, b], i.e., V = (a_1, b_1] × · · · × (a_p, b_p], where 0 ≤ a_i ≤ b_i < ∞, a_i, b_i ∈ N ∪ {0}, i = 1, . . . , p. For V ∈ A, denote |V| = \prod_{i=1}^{p} (b_i - a_i), S(V) = \sum_{(t_1,\dots,t_p) \in V} \xi(t_1, \dots, t_p), and M(V) = max{|S(Q)| : Q = (a, s] ⊂ V}. Let {ξ(t1, . . . , tp)} be an identically distributed and associated random field satisfying the condition of Theorem 1.1 stated above. Then there exists a constant C > 0 such that

E(M(V))^q \le C |V|^{q/2}, \quad q > 2.   (1.5)

Marinucci and Poghosyan (2001) generalized a result known for p = 1 as the Beveridge–Nelson decomposition (cf. Phillips and Solo (1992)) to the case p ≥ 2, and derived the invariance principle for linear random fields generated by an independent and identically distributed random field {ξ(t1, . . . , tp)} by exploiting the decomposition to split partial sums of linear random fields into a partial sum of independent components and a remainder, which is shown to be uniformly of smaller order on Z_+^p. In this paper we will extend Theorem 1.1 to the linear random field generated by associated random fields (see Theorem 3.1) by applying this technique (Lemma 2.1).

2. Preliminaries

Define a linear random field by

u(t_1, \dots, t_p) = \sum_{i_1=0}^{\infty} \cdots \sum_{i_p=0}^{\infty} a(i_1, \dots, i_p)\, \xi(t_1 - i_1, \dots, t_p - i_p), \quad (t_1, \dots, t_p) \in Z^p,   (2.1)

where {ξ(t1, . . . , tp)} is an identically distributed and associated random field with Eξ(t1, . . . , tp) = 0 and E|ξ(t1, . . . , tp)|^q < ∞, q > 2, and the real coefficients satisfy

a(i_1, \dots, i_p) \ge 0 \quad \text{for all } (i_1, \dots, i_p), \; i_1, \dots, i_p \in N \cup \{0\}.   (2.2)
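A truncated version of (2.1) can be simulated by summing shifted copies of the innovation field. A minimal sketch (our illustration, with geometric coefficients a(i1, i2) = ρ^{i1+i2} cut off at lag K, which are nonnegative as (2.2) requires):

```python
import numpy as np

# Truncated linear random field u(t1,t2) = sum a(i1,i2) xi(t1-i1, t2-i2)
# with nonnegative geometric coefficients, in the spirit of (2.1)-(2.2).
rng = np.random.default_rng(2)
n, K, rho = 50, 4, 0.5
idx = np.arange(K)
a = np.outer(rho**idx, rho**idx)          # a(i1,i2) = rho^(i1+i2) >= 0
xi = rng.standard_normal((n + K - 1, n + K - 1))

# u[t1,t2] corresponds to innovation position (t1+K-1, t2+K-1) in xi,
# so every backward shift t - i stays inside the simulated array.
u = np.zeros((n, n))
for i1 in range(K):
    for i2 in range(K):
        u += a[i1, i2] * xi[K - 1 - i1:K - 1 - i1 + n,
                            K - 1 - i2:K - 1 - i2 + n]
```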

First we consider the decomposition of multivariate polynomials in Marinucci and Poghosyan (2001); put

A(x_1, \dots, x_p) = \sum_{i_1=0}^{\infty} \cdots \sum_{i_p=0}^{\infty} a(i_1, \dots, i_p)\, x_1^{i_1} \cdots x_p^{i_p}, \quad (x_1, \dots, x_p) \in R^p,

and assume that |x_i| ≤ 1, i = 1, . . . , p, and

\sum_{i_1=0}^{\infty} \cdots \sum_{i_p=0}^{\infty} \; \sum_{k_1=i_1+1}^{\infty} \cdots \sum_{k_p=i_p+1}^{\infty} a(k_1, \dots, k_p) < \infty.   (2.3)

Clearly (2.3) implies

A(1, \dots, 1) = \sum_{i_1=0}^{\infty} \cdots \sum_{i_p=0}^{\infty} a(i_1, \dots, i_p) < \infty.
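For instance, condition (2.3) holds for geometric weights; a worked computation (our illustrative choice, not taken from the paper). With a(i_1, \dots, i_p) = \rho^{i_1 + \cdots + i_p}, 0 < \rho < 1, the sums factorize coordinatewise, and in each coordinate

\sum_{i=0}^{\infty} \sum_{k=i+1}^{\infty} \rho^{k} = \sum_{i=0}^{\infty} \frac{\rho^{\,i+1}}{1-\rho} = \frac{\rho}{(1-\rho)^2},

so the left-hand side of (2.3) equals \big( \rho/(1-\rho)^2 \big)^p < \infty, and likewise A(1, \dots, 1) = (1-\rho)^{-p} < \infty.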

The following lemma generalizes a result known for p = 1 as the Beveridge–Nelson decomposition (cf. Phillips and Solo (1992)).


Lemma 2.1 (Marinucci and Poghosyan, 2001). Let Γ_p be the class of all 2^p subsets γ of {1, 2, . . . , p}. Let y_j = x_j if j ∈ γ and y_j = 1 if j ∉ γ. Then we have

A(x_1, \dots, x_p) = \sum_{\gamma \in \Gamma_p} \Big\{ \prod_{j \in \gamma} (x_j - 1) \Big\} A_\gamma(y_1, \dots, y_p),   (2.4)

where it is assumed that \prod_{j \in \emptyset} = 1, and

A_\gamma(y_1, \dots, y_p) = \sum_{i_1=0}^{\infty} \cdots \sum_{i_p=0}^{\infty} a_\gamma(i_1, \dots, i_p)\, y_1^{i_1} \cdots y_p^{i_p},   (2.5)

a_\gamma(i_1, \dots, i_p) = \sum_{s_1=i_1+1}^{\infty} \cdots \sum_{s_p=i_p+1}^{\infty} a(s_1, \dots, s_p),   (2.6)

where the sums run over the indexes s_j with j ∈ γ, whereas s_j = i_j if j ∉ γ.

As in Marinucci and Poghosyan (2001), we also consider the partial backshift operators satisfying

B_i\, \xi(t_1, \dots, t_i, \dots, t_p) = \xi(t_1, \dots, t_i - 1, \dots, t_p), \quad i = 1, 2, \dots, p,   (2.7)
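Lemma 2.1 is an algebraic identity, and for finitely supported coefficients it can be verified exactly. A sketch for p = 2 (our illustration, with truncated geometric weights; the tail sums (2.6) are computed with reversed cumulative sums):

```python
import numpy as np

# Verify the p = 2 Beveridge-Nelson decomposition (2.4):
#   A(x1,x2) = A(1,1) + (x1-1) A_{1}(x1,1) + (x2-1) A_{2}(1,x2)
#              + (x1-1)(x2-1) A_{12}(x1,x2),
# where the a_gamma are the tail sums (2.6). Exact for finite support.
N = 40
alpha, beta = 0.6, 0.4
i = np.arange(N)
a = np.outer(alpha**i, beta**i)      # a(i1,i2) = alpha^i1 * beta^i2 >= 0

def tail0(m):
    """out[i,j] = sum_{s > i} m[s,j] (tail sum along axis 0)."""
    t = np.flip(np.cumsum(np.flip(m, 0), 0), 0)   # t[i,j] = sum_{s >= i}
    out = np.zeros_like(m)
    out[:-1] = t[1:]
    return out

a1 = tail0(a)                  # gamma = {1}: tail over the first index
a2 = tail0(a.T).T              # gamma = {2}: tail over the second index
a12 = tail0(tail0(a).T).T      # gamma = {1,2}: tail over both indices

def A(coef, x1, x2):
    """A(x1, x2) = sum coef[i1,i2] x1^i1 x2^i2 over the support."""
    return (x1**i) @ coef @ (x2**i)

x1, x2 = 0.7, -0.3             # |x_i| <= 1, as assumed before (2.3)
lhs = A(a, x1, x2)
rhs = (A(a, 1, 1) + (x1 - 1) * A(a1, x1, 1)
       + (x2 - 1) * A(a2, 1, x2)
       + (x1 - 1) * (x2 - 1) * A(a12, x1, x2))
```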

which enables us to write (2.1) more compactly as

u(t_1, \dots, t_p) = \sum_{i_1=0}^{\infty} \cdots \sum_{i_p=0}^{\infty} a(i_1, \dots, i_p)\, B_1^{i_1} \cdots B_p^{i_p}\, \xi(t_1, \dots, t_p) = A(B_1, \dots, B_p)\, \xi(t_1, \dots, t_p),   (2.8)

where

A(B_1, \dots, B_p) = \sum_{i_1=0}^{\infty} \cdots \sum_{i_p=0}^{\infty} a(i_1, \dots, i_p)\, B_1^{i_1} \cdots B_p^{i_p}.

The above ideas will be exploited here to establish the invariance principle for linear random fields. To this aim, we write

\xi_\gamma(t_1, \dots, t_p) = A_\gamma(L_1, \dots, L_p)\, \xi(t_1, \dots, t_p),   (2.9)

where the operator L_i is defined as L_i = B_i for i ∈ γ and L_i = 1 otherwise; for instance, when p = 2,

ξ_1(t_1, t_2) = A_1(B_1, 1)ξ(t_1, t_2), ξ_2(t_1, t_2) = A_2(1, B_2)ξ(t_1, t_2), ξ_{12}(t_1, t_2) = A_{12}(B_1, B_2)ξ(t_1, t_2).

Remark 2.1. Note that from (2.2), (2.3) and (2.6) we have

0 \le \sum_{i_1=0}^{\infty} \cdots \sum_{i_p=0}^{\infty} a_\gamma(i_1, \dots, i_p) < \infty.   (2.10)

Remark 2.2. Note that ξ_γ(t_1, . . . , t_p) = \sum_{i_1=0}^{\infty} \cdots \sum_{i_p=0}^{\infty} a_\gamma(i_1, \dots, i_p)\, \xi(t_1 - i_1, \dots, t_p - i_p) by (2.5), (2.7) and (2.9), and that the ξ_γ(t_1, . . . , t_p)'s are associated by the properties of association, since a_γ(i_1, . . . , i_p) ≥ 0 (see Esary et al. (1967)).

3. The invariance principle

Theorem 3.1. Let u(t_1, . . . , t_p) be defined as in (2.1) and let {ξ(t_1, . . . , t_p), (t_1, . . . , t_p) ∈ Z^p} be an identically distributed and associated mean zero random field satisfying condition (1.4) of Theorem 1.1. Assume that (2.2) and (2.3) hold. Then, for 0 ≤ r_1, . . . , r_p ≤ 1,

\frac{1}{\sigma n^{p/2}} \sum_{t_1=1}^{[nr_1]} \cdots \sum_{t_p=1}^{[nr_p]} u(t_1, \dots, t_p) \Rightarrow A(1, \dots, 1)\, W(r_1, \dots, r_p),   (3.1)

where \sigma^2 = \sum_{(t_1,\dots,t_p) \in Z^p} \mathrm{Cov}(\xi(0, \dots, 0), \xi(t_1, \dots, t_p)) < \infty.


To prove Theorem 3.1 we need the following lemma.

Lemma 3.2. Let {ξ(t_1, . . . , t_p)} be an identically distributed and associated mean zero random field satisfying condition (1.4) in Theorem 1.1. Assume that (2.2) and (2.3) hold. Then

E|\xi_\gamma(t_1, \dots, t_p)|^q < \infty \quad \text{for } \gamma \in \Gamma_p \text{ and } q > 2.   (3.2)

Proof. From the Remarks in Section 2 we have

\xi_\gamma(0, \dots, 0) = \sum_{i_1=0}^{\infty} \cdots \sum_{i_p=0}^{\infty} a_\gamma(i_1, \dots, i_p)\, \xi(-i_1, \dots, -i_p) = \sum_{i=0}^{\infty} a_\gamma(\phi(i))\, \xi(-\phi(i)),

where φ : Z → Z^p and {ξ(−φ(i))} is a sequence of identically distributed and associated random variables. Hence

E|\xi_\gamma(t_1, \dots, t_p)|^q = E|\xi_\gamma(0, \dots, 0)|^q
  = E\Big| \sum_{i=0}^{\infty} a_\gamma(\phi(i))\, \xi(-\phi(i)) \Big|^q
  = \bigg\{ \Big( E\Big| \sum_{i=0}^{\infty} a_\gamma(\phi(i))\, \xi(-\phi(i)) \Big|^q \Big)^{1/q} \bigg\}^q
  \le \bigg[ \sum_{i=0}^{\infty} a_\gamma(\phi(i)) \big( E|\xi(-\phi(i))|^q \big)^{1/q} \bigg]^q
  \le C \bigg[ \sum_{i=0}^{\infty} a_\gamma(\phi(i)) \bigg]^q < \infty

by (2.10), the first bound following from Minkowski's inequality and the second bound from condition (1.4). □



Corollary 3.3. Let u(t_1, . . . , t_p) satisfy model (2.1) and let {ξ(t_1, . . . , t_p)} be an identically distributed and associated random field with Eξ(t_1, . . . , t_p) = 0 and E|ξ(t_1, . . . , t_p)|^q < ∞ for q > 2. If a(i_1, . . . , i_p) = 1 for i_1 = · · · = i_p = 0 and a(i_1, . . . , i_p) = 0 otherwise, then (3.1) holds.

Remark 3.1. Note that if a(i_1, . . . , i_p) = 1 for i_1 = · · · = i_p = 0 and a(i_1, . . . , i_p) = 0 otherwise, then u(t_1, . . . , t_p) = ξ(t_1, . . . , t_p).

Remark 3.2. Note that Corollary 3.3 is a special case of Theorem 3.1. Hence Theorem 3.1 is an extension of Theorem 1.1.

4. Proof of Theorem 3.1

Proof. From Theorem 1.1 we have

\frac{1}{\sigma n^{p/2}} \sum_{t_1=1}^{[nr_1]} \cdots \sum_{t_p=1}^{[nr_p]} \xi(t_1, \dots, t_p) \Rightarrow W(r_1, \dots, r_p).   (4.1)

From Lemmas 1.2 and 3.2, we also have, for some constant C > 0,

E\Big( \max_{Q=(a,t] \subset V} \Big| \sum_{(t_1,\dots,t_p) \in Q} \xi_\gamma(t_1, \dots, t_p) \Big| \Big)^q \le C |V|^{q/2}, \quad q > 2.   (4.2)

We start from the case p = 2, where we provide full details; the extension to p > 2 is discussed afterwards. If we apply Lemma 2.1 to the backshift polynomial A(B_1, . . . , B_p), we find that the following a.s. equality holds:

u(t_1, t_2) = A(1, 1)\,\xi(t_1, t_2) + (B_1 - 1) A_1(B_1, 1)\,\xi(t_1, t_2) + (B_2 - 1) A_2(1, B_2)\,\xi(t_1, t_2) + (B_1 - 1)(B_2 - 1) A_{12}(B_1, B_2)\,\xi(t_1, t_2),


which implies that, for 0 ≤ r_1, r_2 ≤ 1,

\sum_{t_1=1}^{[nr_1]} \sum_{t_2=1}^{[nr_2]} u(t_1, t_2)
  = \sum_{t_1=1}^{[nr_1]} \sum_{t_2=1}^{[nr_2]} A(1, 1)\,\xi(t_1, t_2)
    - \sum_{t_2=1}^{[nr_2]} \xi_1([nr_1], t_2) + \sum_{t_2=1}^{[nr_2]} \xi_1(0, t_2)
    - \sum_{t_1=1}^{[nr_1]} \xi_2(t_1, [nr_2]) + \sum_{t_1=1}^{[nr_1]} \xi_2(t_1, 0)
    - \xi_{12}(0, [nr_2]) + \xi_{12}(0, 0) - \xi_{12}([nr_1], 0) + \xi_{12}([nr_1], [nr_2])
  = \sum_{t_1=1}^{[nr_1]} \sum_{t_2=1}^{[nr_2]} A(1, 1)\,\xi(t_1, t_2) + R_n(r_1, r_2).   (4.3)

Note that ξ_1(·, ·), ξ_2(·, ·) and ξ_{12}(·, ·) are associated (see the Remarks in Section 2). From Markov's inequality and Lemmas 1.2 and 3.2, for 0 ≤ r_1, r_2 ≤ 1 and q > 2,

P\Big\{ \max_{0 \le r_1, r_2 \le 1} n^{-1} \Big| \sum_{t_2=1}^{[nr_2]} \xi_1([nr_1], t_2) \Big| > \delta \Big\} \le \frac{ E \max_{0 \le r_1, r_2 \le 1} \big| \sum_{t_2=1}^{[nr_2]} \xi_1([nr_1], t_2) \big|^q }{ n^q \delta^q } \le C n^{-q/2} = o(1)   (4.4)

as n → ∞. We can apply exactly the same argument to establish also

P\Big\{ \max_{0 \le r_1, r_2 \le 1} n^{-1} \Big| \sum_{t_1=1}^{[nr_1]} \xi_2(t_1, [nr_2]) \Big| > \delta \Big\} = o(1) \quad \text{as } n \to \infty.   (4.5)

By Lemma 3.2 we have, for 0 ≤ r_1, r_2 ≤ 1, E|ξ_{12}([nr_1], [nr_2])|^q < ∞, and hence by the same argument as above we also have

P\Big\{ \max_{0 \le r_1, r_2 \le 1} n^{-1} \big| \xi_{12}([nr_1], [nr_2]) \big| > \delta \Big\} = o(1) \quad \text{as } n \to \infty.   (4.6)

Thus,

n^{-1} \sum_{t_1=1}^{[nr_1]} \sum_{t_2=1}^{[nr_2]} u(t_1, t_2) = n^{-1} \sum_{t_1=1}^{[nr_1]} \sum_{t_2=1}^{[nr_2]} A(1, 1)\,\xi(t_1, t_2) + n^{-1} R_n(r_1, r_2), \qquad \sup_{0 \le r_1, r_2 \le 1} |n^{-1} R_n(r_1, r_2)| = o_p(1),

which implies

(\sigma n)^{-1} \sum_{t_1=1}^{[nr_1]} \sum_{t_2=1}^{[nr_2]} u(t_1, t_2) \Rightarrow A(1, 1)\, W(r_1, r_2) \quad \text{as } n \to \infty.
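The decomposition (4.3) says that, after normalization, the partial sums of u differ from A(1, 1) times those of ξ only by boundary terms. A simulation sketch (our illustration, with a finitely supported coefficient array, so the Beveridge–Nelson identity is exact and the remainder is of smaller order):

```python
import numpy as np

# Partial sums of a truncated linear field u versus A(1,1) times the
# partial sums of the innovations: the gap R_n consists of boundary
# terms only, so R_n / n is small compared with the O(1) size of S_u / n.
rng = np.random.default_rng(3)
n, K = 400, 2
a = np.array([[1.0, 0.5],
              [0.5, 0.25]])        # finitely supported, nonnegative
A11 = a.sum()                      # A(1, 1) = 2.25
xi = rng.standard_normal((n + K - 1, n + K - 1))

u = np.zeros((n, n))
for i1 in range(K):
    for i2 in range(K):
        u += a[i1, i2] * xi[K - 1 - i1:K - 1 - i1 + n,
                            K - 1 - i2:K - 1 - i2 + n]

S_u = u.sum()
S_xi = xi[K - 1:K - 1 + n, K - 1:K - 1 + n].sum()  # same index block
gap = abs(S_u - A11 * S_xi) / n    # normalized remainder |R_n| / n
print(gap)
```

Here `gap` is of order n^{-1/2}, while S_u / n itself has standard deviation about A(1, 1), illustrating (4.3) and the negligibility of R_n.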

In the general case where p > 2, the argument is analogous; we have

\sum_{t_1=1}^{[nr_1]} \cdots \sum_{t_p=1}^{[nr_p]} u(t_1, \dots, t_p) = A(1, \dots, 1) \sum_{t_1=1}^{[nr_1]} \cdots \sum_{t_p=1}^{[nr_p]} \xi(t_1, \dots, t_p) + R_n(r_1, \dots, r_p),   (4.7)

where

R_n(r_1, \dots, r_p) = \sum_{\gamma \in \Gamma_p, \, \gamma \ne \emptyset} \Big\{ \prod_{j \in \gamma} (B_j - 1) \Big\} \sum_{t_1=1}^{[nr_1]} \cdots \sum_{t_p=1}^{[nr_p]} A_\gamma(L_1, \dots, L_p)\, \xi(t_1, \dots, t_p),   (4.8)

with L_i defined as in (2.9); note that for j ∈ γ,

\sum_{t_j=1}^{[nr_j]} (B_j - 1) A_\gamma(L_1, \dots, L_p)\, \xi(t_1, \dots, t_p)
  = \sum_{t_j=1}^{[nr_j]} A_\gamma(L_1, \dots, L_p)\, \xi(t_1, \dots, t_j - 1, \dots, t_p) - \sum_{t_j=1}^{[nr_j]} A_\gamma(L_1, \dots, L_p)\, \xi(t_1, \dots, t_p)
  = A_\gamma(L_1, \dots, L_p)\, \xi(t_1, \dots, 0, \dots, t_p) - A_\gamma(L_1, \dots, L_p)\, \xi(t_1, \dots, [nr_j], \dots, t_p).   (4.9)
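The step (4.9) is a one-dimensional telescoping sum in the coordinate t_j: summing (B_j − 1)f over t_j = 1, . . . , N leaves only the two boundary values. A minimal numeric sketch of the underlying identity \sum_{t=1}^{N} (f(t-1) - f(t)) = f(0) - f(N) (our illustration, with an arbitrary function f):

```python
import numpy as np

# Telescoping identity behind (4.9): summing the differenced values
# over t = 1..N leaves only the boundary terms f(0) and f(N).
N = 10
f = np.sin(np.arange(N + 1) ** 2 / 7.0)   # arbitrary values f(0), ..., f(N)
lhs = sum(f[t - 1] - f[t] for t in range(1, N + 1))
rhs = f[0] - f[N]
print(lhs, rhs)
```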

Thus, the right-hand side of (4.8) can be written more explicitly (writing n_j = [nr_j]) as

\sum_{t_2=1}^{[nr_2]} \sum_{t_3=1}^{[nr_3]} \cdots \sum_{t_p=1}^{[nr_p]} A_1(B_1, 1, \dots, 1)\, \xi(0, t_2, \dots, t_p) - \sum_{t_2=1}^{[nr_2]} \sum_{t_3=1}^{[nr_3]} \cdots \sum_{t_p=1}^{[nr_p]} A_1(B_1, 1, \dots, 1)\, \xi(n_1, t_2, \dots, t_p)
  + \sum_{t_1=1}^{[nr_1]} \sum_{t_3=1}^{[nr_3]} \cdots \sum_{t_p=1}^{[nr_p]} A_2(1, B_2, 1, \dots, 1)\, \xi(t_1, 0, t_3, \dots, t_p) - \sum_{t_1=1}^{[nr_1]} \sum_{t_3=1}^{[nr_3]} \cdots \sum_{t_p=1}^{[nr_p]} A_2(1, B_2, 1, \dots, 1)\, \xi(t_1, n_2, t_3, \dots, t_p)
  + \cdots + A_{12\cdots p}(B_1, \dots, B_p)\, \xi(0, \dots, 0) - A_{12\cdots p}(B_1, \dots, B_p)\, \xi(0, \dots, n_p) - A_{12\cdots p}(B_1, \dots, B_p)\, \xi(n_1, \dots, 0) + \cdots + A_{12\cdots p}(B_1, \dots, B_p)\, \xi(n_1, \dots, n_p),   (4.10)

where, in view of (4.9), the sums corresponding to each A_γ(·, . . . , ·) run over the t_i such that i ∉ γ. Now

\frac{1}{\sigma n^{p/2}} A(1, \dots, 1) \sum_{t_1=1}^{[nr_1]} \cdots \sum_{t_p=1}^{[nr_p]} \xi(t_1, \dots, t_p) \Rightarrow A(1, \dots, 1)\, W(r_1, \dots, r_p)

as in Theorem 1.1, so it is sufficient to prove that

\sup_{0 \le r_1, \dots, r_p \le 1} |n^{-p/2} R_n(r_1, \dots, r_p)| = o_p(1).   (4.11)

Considering for instance the first term on the right-hand side of (4.10), note that the ξ_1(t_1, . . . , t_p) are associated for different values of t_1, . . . , t_p. Thus, for 0 ≤ r_1, . . . , r_p ≤ 1 and q > 2, we have, from the same argument as for p = 2 and from Lemmas 1.2 and 3.2,

P\Big\{ \max_{0 \le r_1, \dots, r_p \le 1} n^{-p/2} \Big| \sum_{t_2=1}^{[nr_2]} \cdots \sum_{t_p=1}^{[nr_p]} \xi_1([nr_1], t_2, \dots, t_p) \Big| > \delta \Big\} \le C n^{-pq/2}\, n^{(p-1)q/2} = C n^{-q/2} = o(1) \quad \text{as } n \to \infty.   (4.12)



More generally, let ](γ ) denote the cardinality of γ ; each other term in (4.11) is n−p/2 times a partial sum of np−](γ ) elements, and we can apply iteratively the same argument to complete the proof.  Acknowledgements The authors are grateful to the referee for carefully reading the manuscript and for offering some comments and suggestions for improving the presentation. This work was supported by the Korea Research Foundation Grant funded by the Korean Government (MOEHRD, Basic Research Promotion Fund)(KRF-2007-314-C00028). References Bulinski, A.V., Keane, M.S., 1996. Invariance principle for associated random fields. J. Math. Sci. 81, 2905–2911. Burton, R.M., Kim, T.S., 1988. An invariance principle for associated random fields. Pacific J.Math. 132, 11–19. Esary, J., Proschan, F., Walkup, D., 1967. Association of random variables with applications. Ann. Math. Stat. 38, 1466–1474. Marinucci, M., Poghosyan, S., 2001. Asymptotics for linear random fields. Probab. Lett. 51, 131–141. Newman, C.M., 1984. Asymptotic independence and limit theorems for positively and negatively dependent random variables. In: Tong, Y.L. (Ed.), Inequalities in Probability and Statistics. In: IMS Lecture Notes Monograph Series, vol. 5. Hayward, CA, pp. 127–140. Phillips, P.C.B., Solo, V., 1992. Asymptotics for linear processes. Ann. Statist. 20, 971–1001. Poghosyan, S., Roelly, S., 1998. Invariance principle for martingale-difference random fields. Statist. Probab. Lett. 38, 235–245. Straf, M.L., 1970. Weak convergence of stochastic processes with several parameters. In: Six Berkeley Symposium on Mathematical Statistics and Probability. vol. 11 pp. 187–221.