Stochastic Processes and their Applications 115 (2005) 737–768 www.elsevier.com/locate/spa
Conditional convergence to infinitely divisible distributions with finite variance

Jérôme Dedecker (a,*), Sana Louhichi (b)

a Laboratoire de Statistique Théorique et Appliquée, Université Paris 6, 175 rue du Chevaleret, 75013 Paris, France
b Laboratoire de Probabilités, Statistique et modélisation, Université de Paris-Sud, Bât. 425, F-91405 Orsay Cedex, France
Received 22 September 2003; received in revised form 22 November 2004; accepted 20 December 2004. Available online 11 January 2005.
Abstract

We obtain new conditions for partial sums of an array with stationary rows to converge to a mixture of infinitely divisible distributions with finite variance. More precisely, we show that these conditions are necessary and sufficient to obtain conditional convergence. If the underlying σ-algebras are nested, conditional convergence implies stable convergence in the sense of Rényi. From this general result we derive new criteria expressed in terms of conditional expectations, which can be checked for many processes such as m-conditionally centered arrays or mixing arrays. When it is relevant, we establish the weak convergence of partial sum processes to a mixture of Lévy processes in the space of càdlàg functions equipped with Skorohod's topology. The cases of Wiener processes, Poisson processes and Bernoulli distributed variables are studied in detail.
© 2005 Elsevier B.V. All rights reserved.

Keywords: Infinitely divisible distributions; Lévy processes; Stable convergence; Triangular arrays; Mixing processes
Corresponding author. Tel.: +33 1 44 27 33 53; fax: +33 1 44 27 33 42.
E-mail addresses:
[email protected] (J. Dedecker),
[email protected] (S. Louhichi).
0304-4149/$ - see front matter © 2005 Elsevier B.V. All rights reserved. doi:10.1016/j.spa.2004.12.006
1. Introduction

For any distribution function F of a finite measure and any real γ, denote by μ^1_{γ,F} the probability measure with characteristic function

  φ_{γ,F}(z) = exp( izγ + ∫ (e^{izx} − 1 − izx) x^{-2} dF(x) )    (1.1)

and define, for any positive real t, the probability μ^t_{γ,F} = μ^1_{tγ,tF}. The distribution μ^t_{γ,F} has mean tγ and variance tF(∞) and satisfies the equation μ^t_{γ,F} * μ^s_{γ,F} = μ^{t+s}_{γ,F}. One says that μ^1_{γ,F} is an infinitely divisible distribution with finite variance.

Suppose that for each n, (X_{i,n})_{1≤i≤n} are i.i.d. random variables such that E(X²_{0,n}) tends to 0 as n tends to infinity. From Theorem 2 of Chapter 4 in [11], we know that S_n(t) = X_{1,n} + ⋯ + X_{[nt],n} converges in distribution to μ^t_{γ,F} if and only if nE(X_{0,n}) converges to γ, and lim_{n→∞} nE(X²_{0,n} 1_{X_{0,n} ≤ x}) = F(x) for any continuity point x of F. Brown and Eagleson [3] extended this result to (not necessarily stationary) arrays whose rows are martingale difference sequences. If M_{i,n} = σ(X_{k,n}, 1 ≤ k ≤ i) and E(X_{k,n} | M_{k−1,n}) = 0, the main condition for the convergence to μ^t_{0,F} is: for any continuity point x of F,

  Σ_{k=1}^{[nt]} E(X²_{k,n} 1_{X_{k,n} ≤ x} | M_{k−1,n}) converges in probability to tF(x).    (1.2)
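As a quick sanity check of the classical i.i.d. criterion recalled above, the following Python sketch (our illustration, not part of the paper; the values of a, lam and n are arbitrary) takes X_{i,n} = a·Bernoulli(λ/n). Then nE(X_{0,n}) → γ = aλ, nE(X²_{0,n} 1_{X_{0,n} ≤ x}) → F(x) = a²λ 1_{x ≥ a}, and S_n(1) is close in distribution to aN with N Poisson(λ), which is precisely μ^1_{γ,F}.

import numpy as np

# Illustration (ours): the i.i.d. criterion for X_{i,n} = a * Bernoulli(lam/n).
# Here n*E(X_{0,n}) = a*lam =: gamma and n*E(X_{0,n}^2 1_{X_{0,n} <= x}) -> a^2*lam*1_{x >= a},
# so S_n(1) should be close in law to a * Poisson(lam) = mu^1_{gamma,F}.
rng = np.random.default_rng(0)
a, lam, n, reps = 2.0, 3.0, 5000, 20000

S = a * rng.binomial(n, lam / n, size=reps)   # S_n(1) for 'reps' independent rows
print("mean of S_n(1):", S.mean(), "  gamma =", a * lam)
print("var  of S_n(1):", S.var(), "   F(inf) =", a**2 * lam)
print("P(S_n(1) = 0):", (S == 0).mean(), " vs exp(-lam) =", np.exp(-lam))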
As noticed by Eagleson [8], there is no reason why the function F appearing in (1.2) should be non-random. In fact it is easy to build simple examples for which F is random (see the example of Section 2.5), so that the limiting distribution is a mixture of infinitely divisible distributions. If X_{i,n} = n^{-1/2} X_i, the limit is a mixture of centered Gaussian distributions (i.e. γ = 0 and F = λ1_{[0,∞)}, λ possibly random). In that case, Aldous and Eagleson [1] proved that S_n(t) converges stably in the sense of Rényi [20] to a random variable with characteristic function E(φ_{0,tF}(z)). If M_{i,n} ⊆ M_{i,n+1}, Jeganathan [15, part I] proved the stable convergence to infinitely divisible distributions under Brown and Eagleson's conditions. Stable convergence is more precise than convergence in distribution and may be useful in several contexts, especially in connection with randomly normalized or randomly indexed sums (see [1] and Chapters 2, 3 and 9 of Castaing et al. [4]).

For arrays (X_{i,n})_{i∈Z} with stationary rows and M_{i,n} = σ(X_{k,n}, k ≤ i), Dedecker and Merlevède [6] proposed necessary and sufficient conditions for S_n(t) to satisfy the conditional central limit theorem, which implies stable convergence to a mixture of Gaussian distributions provided that M_{i,n} ⊆ M_{i,n+1}. The conditions may be written as

  limsup_{n→∞} ‖ E(S_n(t) | M_{0,n}) ‖_1 = 0  and
  lim_{t→0} limsup_{n→∞} ‖ E( (S_n²(t)/t) 1_{S_n(t) ≤ x} − λ1_{x ≥ 0} | M_{0,n} ) ‖_1 = 0,    (1.3)

where the second limit holds for some non-negative random variable λ and any x ≠ 0.
The natural question is now: what happens when lim_{n→∞} ‖E(S_n(t) − γt | M_{0,n})‖_1 = 0 for some random variable γ, and we replace λ1_{[0,∞)} by any (random) distribution function F in (1.3)? Such conditions would be necessary and sufficient, since we can easily prove that lim_{t→0} limsup_{n→∞} ‖ t^{-1} ∫_{−∞}^{x} y² μ^t_{γ,F}(dy) − F(x) ‖_1 = 0 for any continuity point of x ↦ E(F(x)). Two other questions are: can we obtain from (1.3) (with any F) sufficient conditions in terms of individual variables X_{i,n} for S_n(t) to converge to a mixture of infinitely divisible distributions? Can we say something about the convergence of the process {S_n(t), t ∈ [0,1]} in the space of càdlàg functions equipped with Skorohod's distance?

In Section 2, we shall give positive answers to these questions. We first show in Theorem 1 that the result of Dedecker and Merlevède [6] remains valid when replacing 0 and λ1_{[0,∞)} in (1.3) by any square integrable random variable γ and any random distribution function F such that E(F(∞)) is finite, and we present the application of this result to stable convergence. Next, we give in Section 2.1 sufficient conditions for (1.3) to hold for a large class of dependent arrays. The dependence conditions are expressed in terms of conditional expectations and may be checked for many processes, such as m-conditionally centered arrays or non-uniform mixing arrays. In some important cases, we show that our conditions are optimal (see Corollary 3). Furthermore, in the particular case of m-dependent Bernoulli-distributed arrays, we infer from Hudson et al. [14] and Kobus [18] that the conditions we impose are necessary and sufficient. In Section 2.2 we give sufficient conditions for the process {S_n(t), t ∈ [0,1]} to converge stably to a mixture of Lévy processes in the space of càdlàg functions equipped with Skorohod's distance. The additional condition we impose is related to the topological structure of that space, and may be shown to be necessary in some particular cases (see Remark 6). The cases of Wiener and Poisson processes are studied in detail in Sections 2.3 and 2.4, respectively. In Section 2.5 we give the application of our results to the important case of Bernoulli-distributed random variables. In that case the limiting distribution is a mixture of integer-valued compound Poisson distributions.

To prove Theorem 1 of Section 2, we adapt Lindeberg's method with increasing blocks in place of individual variables. The idea is to split S_n(1) into p blocks distributed as S_n(1/p) and to replace them by blocks of i.i.d. variables with law μ^{1/n}_{γ,F}. To go back to individual variables, we use a second adaptation of Lindeberg's method (see the proof of Lemma 3) and a maximal inequality established in [7]. The latter is used once again to prove the tightness of {S_n(t), t ∈ [0,1]} in the space of càdlàg functions (see the proof of Lemma 5).
2. Convergence to infinitely divisible distributions

Let (Ω, A, P) be a probability space, and let T: Ω → Ω be a bijective bimeasurable transformation preserving P. An element A of A is said to be invariant if T(A) = A. Let I be the σ-algebra of all invariant sets. The probability P is ergodic if each element of I has measure 0 or 1.
We say that a function F from R × Ω to R_+ is an M-measurable distribution function if for every ω the function F(·, ω) is a distribution function of a finite measure, and for any x in R ∪ {∞} the random variable F(x) is M-measurable with E(F(∞)) < ∞. Let H be the space of continuous real functions φ such that x ↦ (1+x²)^{-1}|φ(x)| is bounded. Given an M-measurable random variable γ and an M-measurable distribution function F, we introduce for each ω the probability measure μ^t_{γ(ω),F(·,ω)} defined via (1.1). Since E(F(∞)) is finite, the random measure μ^t_{γ,F} maps H into L¹(M).

Theorem 1. For each positive integer n, let M_{0,n} be a σ-algebra of A satisfying M_{0,n} ⊆ T^{-1}(M_{0,n}). Define the non-decreasing filtration (M_{i,n})_{i∈Z} by M_{i,n} = T^{-i}(M_{0,n}) and M_{i,inf} = σ( ∪_{n=1}^{∞} ∩_{k=n}^{∞} M_{i,k} ). Let X_{0,n} be an M_{0,n}-measurable and square integrable random variable. Define (X_{i,n})_{i∈Z} by X_{i,n} = X_{0,n} ∘ T^i, and let S_n(t) = X_{1,n} + ⋯ + X_{[nt],n} for t in [0,1]. Suppose that E(X²_{0,n}) tends to zero as n tends to infinity. The following statements are equivalent:

S1. There exists an M_{0,inf}-measurable square integrable random variable γ and an M_{0,inf}-measurable distribution function F such that, for any φ in H, any t in [0,1] and any positive integer k,

  S1(φ): lim_{n→∞} ‖ E( φ(S_n(t)) − μ^t_{γ,F}(φ) | M_{k,n} ) ‖_1 = 0.

S2. (a) There exists an M_{0,inf}-measurable square integrable random variable γ such that

  lim_{t→0} limsup_{n→∞} ‖ E( S_n(t)/t − γ | M_{0,n} ) ‖_1 = 0.

(b) There exists an M_{0,inf}-measurable distribution function F such that, for any continuity point x (including +∞) of the function x ↦ E(F(x)),

  lim_{t→0} limsup_{n→∞} ‖ E( (S_n²(t)/t) 1_{S_n(t) ≤ x} − F(x) | M_{0,n} ) ‖_1 = 0.    (2.1)

Moreover, γ = γ∘T almost surely, and F = F∘T almost surely.

Remark 1. Let Z_n(t) be such that lim_{t→0} limsup_{n→∞} t^{-1} ‖Z_n(t)‖²_2 = 0. It is easy to see that S2(b) holds if and only if, for any continuity point x (including +∞) of the function x ↦ E(F(x)), (2.1) holds with S_n(t) − Z_n(t) instead of S_n(t). We shall use this simple remark in Section 2.1, with Z_n(t) = [nt]E(X_{0,n}) or Z_n(t) = [nt]E(X_{0,n} | I).

Let us look at the case where F = λ1_{[a,∞)}. If a = 0, μ^t_{γ,F} is the normal distribution N(tγ, tλ). If λ = 0, μ^t_{γ,F} is the unit mass at tγ. For any other (a, λ), μ^t_{γ,F} is the law of a(X(a, tλ) − tλ/a²) + tγ, where X(a, λ) has the Poisson distribution P(λ/a²).
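The following Python sketch (ours, with arbitrary parameter values; not part of the paper) samples from μ^t_{γ,F} with F = λ1_{[a,∞)} by following the three cases just described, and checks the mean tγ and variance tF(∞) = tλ by Monte Carlo.

import numpy as np

# Sketch (ours): sampling mu^t_{gamma, F} when F = lambda * 1_{[a, infty)}.
def sample_mu(t, gamma, a, lam, size, rng):
    if lam == 0.0:                        # unit mass at t*gamma
        return np.full(size, t * gamma)
    if a == 0.0:                          # normal N(t*gamma, t*lam)
        return t * gamma + np.sqrt(t * lam) * rng.standard_normal(size)
    N = rng.poisson(t * lam / a**2, size)         # X(a, t*lam) ~ Poisson(t*lam/a^2)
    return a * (N - t * lam / a**2) + t * gamma   # centred, scaled Poisson plus drift

rng = np.random.default_rng(1)
x = sample_mu(t=0.5, gamma=1.0, a=2.0, lam=3.0, size=100000, rng=rng)
print(x.mean(), "should be close to t*gamma = 0.5")
print(x.var(), "should be close to t*lam = 1.5")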
As an immediate consequence of Theorem 1, we obtain the following corollary:

Corollary 1. Let X_{i,n}, M_{i,n}, S_n(t) be as in Theorem 1. The statements P1 and P2 are equivalent:

P1. There exist an M_{0,inf}-measurable random variable a, an M_{0,inf}-measurable square integrable random variable γ and a non-negative M_{0,inf}-measurable random variable λ such that S1 holds for the couple (γ, F = λ1_{[a,∞)}).

P2. Condition S2(a) holds and

S2(b1). There exists an M_{0,inf}-measurable random variable a such that

  lim_{t→0} limsup_{n→∞} E( (S_n²(t)/t) (1 ∧ |S_n(t) − a|) ) = 0.

S2(b2). There exists a non-negative M_{0,inf}-measurable random variable λ such that (2.1) holds for x = ∞ and F(∞) = λ.

It is clear that S2 implies much more than convergence in distribution. Arguing as in Corollary 1 in [6], one can prove that, for any bounded σ(∪_{i≥1} M_{i,inf})-measurable variable Z and any φ in H, the sequence E(Zφ(S_n(t))) converges to E(Z μ^t_{γ,F}(φ)). In particular, the following corollary holds:

Corollary 2. Let X_{i,n}, M_{i,n} and S_n(t) be as in Theorem 1. Suppose that the sequence of σ-algebras (M_{0,n})_{n≥1} is non-decreasing. If Condition S2 is satisfied, then, for any φ in H, the sequence φ(S_n(t)) converges weakly in L¹ to μ^t_{γ,F}(φ).

Remark 2. Corollary 2 implies that, if the sequence (M_{0,n})_{n≥1} is non-decreasing, then S_n(t) converges stably to a random variable Y_t whose conditional distribution with respect to I is μ^t_{γ,F}. We refer to Aldous and Eagleson [1] and to Chapters 2, 3 and 9 of the book by Castaing et al. [4] for a complete exposition of the concept of stability (introduced by Rényi [20]) and its connection to weak L¹-convergence. Note that stable convergence is a useful tool to establish weak convergence of joint distributions as well as randomly indexed sums (see again [1] and the references therein, or the book by Hall and Heyde [12]). Note also that the condition on (M_{0,n})_{n≥1} is exactly the "nesting condition" (3.21) in Theorem 3.2 of [12], which is known to be related to stable convergence (see the discussion in [12, p. 59]). If furthermore F is constant, then the convergence is mixing. If P is ergodic, this result is a consequence of Theorem 4 in [9] (see [9, Application 4.2]). For a review of mixing results see [5].

2.1. Sufficient conditions

In this section, we give sufficient conditions in terms of the individual variables X_{i,n} for property S1 to hold. In the sequel, B is either the σ-field I of all invariant sets or the trivial σ-field {∅, Ω}. We then define the array with stationary rows X'_{i,n} = X_{i,n} − E(X_{i,n} | B). The kind of dependence we consider is described in the two following definitions.
Definition 1. Let (X_{i,n}) and M_{i,n} be as in Theorem 1, and define for any positive integer N

  R_1(N, X) = lim_{t→0} limsup_{n→∞} sup_{N ≤ m ≤ [nt]} ‖ n X'_{0,n} Σ_{k=N}^{m} E(X'_{k,n} | M_{0,n}) ‖_1    (2.2)

and N_1(X) = inf{N > 0 : R_1(N) = 0} (N_1(X) may be infinite). We say that (X_{i,n}) satisfies the weak-dependence condition WD if S2(a) holds and R_1(N, X) tends to zero as N tends to infinity. If N_1(X) is finite, we say that the array (X'_{i,n}) is asymptotically (N_1(X) − 1)-conditionally centered (as usual, it is m-conditionally centered if E(X'_{m+1,n} | M_{0,n}) = 0).

In addition to the weak-dependence condition WD, we need to control some residual terms.

Definition 2. Let (X_{i,n}) be as in Theorem 1. For any integer k and any positive integer n, define S'_{k,n} by: S'_{k,n} = 0 if k ≤ 0 and S'_{k,n} = X'_{1,n} + ⋯ + X'_{k,n} otherwise. For any positive integer N define

  R_2(N, X) = limsup_{t→0} limsup_{n→∞} (1/t) E( Σ_{k=1}^{[nt]} ( X'^2_{k,n} + 2|X'_{k,n}(S'_{k−1,n} − S'_{k−N,n})| ) (1 ∧ |S'_{k−N,n}|) )    (2.3)

and N_2(X) = inf{N > 0 : R_2(N) = 0} (N_2(X) may be infinite). We say that the array (X_{i,n}) is EQ if nE(X'^2_{0,n}) is bounded and R_2(N, X) tends to zero as N tends to infinity.

We now give sufficient conditions in terms of finite blocks X'_{k−N,n} + ⋯ + X'_{k,n} for S1 to hold. We say that a function F from R × Ω is an M-measurable BV function if it can be written as the difference of two M-measurable distribution functions. We say that a sequence of random BV functions F_N converges weakly to a random BV function F if, for any continuous bounded function f, lim_{N→∞} ‖ ∫ f dF_N − ∫ f dF ‖_1 = 0.

Proposition 1. Take X_{i,n}, M_{i,n} and S_n(t) as in Theorem 1. Assume that (X_{i,n}) is WD and EQ, and set N_0(X) = N_1(X) ∨ N_2(X). For any N and any f, let V_{N,n}(k) = Σ_{i=k−N+1}^{k−1} X'_{i,n} and Δf(N, n, k) = f(V_{N,n}(k) + X'_{k,n}) − f(V_{N,n}(k)). Let f_x(y) = y² 1_{y ≤ x} and consider the assumption:

A(N): There exists an M_{0,inf}-measurable BV function F_N such that, for any continuity point x (including +∞) of x ↦ E(F_N(x)),

  lim_{t→0} limsup_{n→∞} ‖ E( (1/t) Σ_{k=1}^{[nt]} Δf_x(N, n, k) − F_N(x) | M_{0,n} ) ‖_1 = 0.

Assume that there exists a non-decreasing sequence of integers (N_i)_{i∈N} converging to N_0(X) such that A(N_i) holds for any i in N. Then F_{N_i} converges weakly to an
M_{0,inf}-measurable distribution function F, and S1 holds for F. In particular, if N_0(X) is finite, S1 holds for F_{N_0}. If N_0(X) = 1 (recall that N_0(X) = N_1(X) ∨ N_2(X)), we have the more precise result:

Corollary 3. Let X_{i,n}, M_{i,n} and S_n(t) be as in Theorem 1. Assume that (X_{i,n}) is WD and EQ with N_0(X) = 1. Then S1 holds for a couple (γ, F) if and only if: for any continuity point x (including +∞) of the function x ↦ E(F(x)),

  lim_{t→0} limsup_{n→∞} ‖ E( (1/t) Σ_{k=1}^{[nt]} X'^2_{k,n} 1_{X'_{k,n} ≤ x} − F(x) | M_{0,n} ) ‖_1 = 0.    (2.4)

In particular, the following result holds: let B = {∅, Ω}, assume that (X_{i,n}) has i.i.d. rows, that nE(X_{0,n}) tends to γ, and that nE(X'^2_{0,n}) is bounded. Then S1 holds with respect to M_{0,n} = σ(X_{i,n}, i ≤ 0) if and only if there exists a distribution function F such that, for any continuity point x (including +∞) of the function F, the sequence nE(X'^2_{0,n} 1_{X'_{0,n} ≤ x}) converges to F(x).

Remark 3. Corollary 3 applies to arrays of martingale differences (with B = {∅, Ω}, X'_{i,n} = X_{i,n} and γ = 0), for which N_2(X) = 1. If S'*_n(t) = sup_{0 ≤ s ≤ t} |S'_n(s)|, define

  R_3(N, X) = limsup_{t→0} limsup_{n→∞} (1/t) E( (1 ∧ S'*_n(t)) Σ_{i=1}^{[nt]} E(X'^2_{i,n} | M_{i−N,n}) )

and N_3(X) = inf{N ≥ 0 : R_3(N) = 0} (N_3(X) may be infinite). We shall see in Proposition 3 that N_2(X) = 1 as soon as N_3(X) = 1. From Proposition 8 of Section 2.2, it is easy to see that both (2.4) holds and N_3(X) = 1 as soon as, for any continuity point of x ↦ E(F(x)),

  lim_{t→0} limsup_{n→∞} ‖ (1/t) Σ_{k=1}^{[nt]} E(X²_{k,n} 1_{X_{k,n} ≤ x} | M_{k−1,n}) − F(x) ‖_1 = 0.    (2.5)

For arrays of martingale differences, (2.5) is close to condition (3') of Theorem 2 in [8]. Note that, in condition (3'), the array need not be stationary, and the convergence to F holds for t = 1 and almost surely (in fact, it need only hold in probability, see [15, part I]). Here, assuming stationarity and the slightly different condition (2.5), we also obtain the convergence to a Lévy process in the space of càdlàg functions equipped with Skorohod's topology (see Section 2.2). In the stationary case, this result is close to that given in Remark 1 of [16]. To conclude this remark, note that Jeganathan [15, part II], [16] also gives sufficient conditions, involving the conditional probabilities of X_{i,n} given M_{i−1,n}, for the convergence to any infinitely divisible distribution and any Lévy process.

Finally, we give sufficient conditions for stationary arrays of non-uniformly φ- and ρ-mixing variables to be WD and EQ and for S1 to hold. Let us recall the definition of the φ-mixing coefficients: for two σ-algebras U and V of A, set φ(U, V) = sup{ ‖P(V | U) − P(V)‖_∞ : V ∈ V }. If L²(U) is the space of all square integrable and
U-measurable random variables, the ρ-mixing coefficient is defined by ρ(U, V) = sup{ |Cov(X, Y)| / √(Var(X)Var(Y)) : X ∈ L²(U), Y ∈ L²(V) }. We define the φ-mixing coefficients of the array (X_{i,n})_{i,n} by

  φ_{1,N}(k, n) = sup{ φ(M_{0,n}, σ(X_{i_1,n}, …, X_{i_N,n})) : k ≤ i_1 ≤ ⋯ ≤ i_N }    (2.6)

and ρ_{1,N} is defined in the same way. We call these coefficients non-uniform, because they control the dependence between M_{0,n} and any N-tuple (X_{i_1,n}, …, X_{i_N,n}), while the uniform φ_{1,∞}- and ρ_{1,∞}-mixing coefficients control the dependence between the past and the whole future.

Corollary 4. Let X_{i,n}, M_{i,n} and S_n(t) be as in Theorem 1. Assume that nE(X_{0,n}) converges to γ and let B = {∅, Ω}, so that X'_{i,n} = X_{i,n} − E(X_{i,n}). For any two conjugate exponents p ≤ q and any positive integer N, consider the conditions:

  C_φ(p, N): (a) sup_{n>0} n‖X'_{0,n}‖_p ‖X'_{0,n}‖_q < ∞ and (b) lim_{m→∞} limsup_{n→∞} Σ_{k=m}^{n} φ_{1,N}(k, n)^{1/p} = 0.

  C_ρ(N): (a) sup_{n>0} nE(X'^2_{0,n}) < ∞ and (b) lim_{m→∞} limsup_{n→∞} Σ_{k=m}^{n} ρ_{1,N}(k, n) = 0.

  C_0: lim_{K→∞} limsup_{n→∞} nE(X'^2_{0,n} 1_{|X'_{0,n}| ≥ K}) = 0.

1. If C_0 and C_φ(p, 1) (resp. C_ρ(1)) hold, then (X_{i,n}) is WD and EQ.

2. Consider the condition

  B(N): There exists a BV function F_N such that, for any continuity point x (including +∞) of the function F_N,

    lim_{n→∞} [ nE(S'^2_{N,n} 1_{S'_{N,n} ≤ x}) − nE(S'^2_{N−1,n} 1_{S'_{N−1,n} ≤ x}) ] = F_N(x).

If C_φ(p, N) (resp. C_ρ(N)) holds for any finite integer N ≤ N_0(X), then S1 holds as soon as B(N_i) holds for a sequence (N_i)_{i∈N} converging to N_0(X).

Remark 4. If nE(X_{0,n}) tends to γ, nE(X'^2_{0,n}) is bounded and (X_{i,n}) is m-dependent, then it is WD and EQ with N_0(X) ≤ m + 1. Hence S1 holds for (γ, F) as soon as B(N_0) holds for F.

2.2. Convergence to Lévy processes

Let D([0,1]) be the space of càdlàg functions equipped with the Skorohod distance d. Let C(D) be the space of continuous bounded functions from (D([0,1]), d) to R. Let π_t be the projection from D([0,1]) to R defined by π_t(x) = x(t). For any real number γ and any distribution function F such that F(∞) is finite, define the Lévy distribution μ_{γ,F} as the unique measure on D([0,1]) such that: for any t in [0,1], π_t has law μ^t_{γ,F}, and for any k-tuple 0 = t_0 ≤ t_1 ≤ ⋯ ≤ t_k ≤ 1, the variables (π_{t_i} − π_{t_{i−1}})_{1≤i≤k} are independent. Take X_{i,n}, M_{i,n} and S_n(t) as in Theorem 1. We say that {S_n(t), t ∈ [0,1]} converges conditionally to a mixture of Lévy processes if:
LP. There exists an M_{0,inf}-measurable random variable γ and an M_{0,inf}-measurable distribution function F, such that for any φ in C(D) and any positive integer k,

  lim_{n→∞} ‖ E( φ(S_n) − μ_{γ,F}(φ) | M_{k,n} ) ‖_1 = 0.
Remark 5. Assume that the sequence (M_{0,n})_{n≥1} is non-decreasing. As for Corollary 2 (with the same proof), property LP implies that, for any φ in C(D), φ(S_n) converges weakly in L¹ to μ_{γ,F}(φ). As a consequence, we obtain that lim_{n→∞} P({S_n ∈ A} ∩ B) = E(1_B μ_{γ,F}(A)) for any set A with boundary ∂A satisfying E(μ_{γ,F}(∂A)) = 0. According to Rényi's definition (extended to separable metric spaces), this means exactly that {S_n(t), t ∈ [0,1]} converges stably in D([0,1]). More precisely, following Aldous and Eagleson [1], we see that {S_n(t), t ∈ [0,1]} converges stably to a random variable Y whose conditional distribution given I is μ_{γ,F}.

Convergence in the Skorohod topology is somewhat restrictive. To obtain the relative compactness of the laws of {S_n(t), t ∈ [0,1]} we impose a condition more restrictive than EQ.

Definition 3. Let (X_{i,n}) be as in Theorem 1. Recall that B is either I or {∅, Ω}, and that X'_{i,n} = X_{i,n} − E(X_{i,n} | B) and S'_{k,n} = X'_{1,n} + ⋯ + X'_{k,n}. Define S'*_{k,n} = max{|S'_{1,n}|, …, |S'_{k,n}|}. We say that (X_{i,n}) is 1-EQ if nE(X'^2_{0,n}) is bounded and lim_{t→0} limsup_{n→∞} (1/t) E( Σ_{k=1}^{[nt]} X'^2_{k,n} (1 ∧ S'*_{k−1,n}) ) = 0.

Note that if (X_{i,n}) is 1-EQ, then it is EQ with N_2(X) = 1. The following proposition provides sufficient conditions for property LP to hold.

Proposition 2. Let X_{i,n}, M_{i,n} and S_n(t) be as in Theorem 1. Assume that (X_{i,n}) is WD and 1-EQ. If S1 holds for some couple (γ, F), then LP holds for the same couple (γ, F).

The following proposition gives a sufficient condition for property 1-EQ. Recall that R_3(N) and N_3(X) have been defined in Remark 3.

Proposition 3. Let X_{i,n}, M_{i,n}, S_n(t) be as in Theorem 1. Assume that (X_{i,n}) is WD and that nE(X'^2_{0,n}) is bounded. Let L_1(N): for any 1 ≤ i < N, lim_{n→∞} nE(X'^2_{i,n}(1 ∧ |X'_{0,n}|)) = 0. If R_3(N) tends to 0 as N tends to infinity and L_1(N_3) holds, then (X_{i,n}) is 1-EQ.

Remark 6. A simple bound for R_3(N) is given in item 2 of Proposition 8. If nE(X'^2_{0,n}) is bounded, condition L_1 is equivalent to the Lindeberg-type condition: for any positive ε and any 1 ≤ i < N, nE(X'^2_{i,n} 1_{|X'_{0,n}| > ε}) converges to zero. If C_0 holds, condition L_1 is equivalent to: L'_1(N): for any positive ε and any 1 ≤ i < N, lim_{n→∞} nP(|X'_{i,n}| > ε, |X'_{0,n}| > ε) = 0. For stationary φ-mixing processes, Samur [21] proved that L'_1(∞) is necessary for the weak convergence of {S_n(t), t ∈ [0,1]} to a Lévy distribution on D([0,1]) (see also Corollary 5.9 in [18] for a more precise result in the case of stable limits).

Remark 7. The classical Lindeberg condition is L_0: lim_{n→∞} nE(X'^2_{0,n}(1 ∧ |X'_{0,n}|)) = 0. If L_0 holds, then L_1 holds for any positive integer i and moreover R_3(0) = 0 (see the proof of Proposition 8 in [6]). We shall see in the next section that, if R_3(0) = 0, then the limiting process is necessarily a mixture of Gaussian processes.

It follows from Proposition 2 that for WD and 1-EQ arrays, conditional convergence to Lévy processes with law μ_{γ,F} follows from conditional convergence to μ^t_{γ,F} for any t in [0,1]. In particular, the conclusion of Proposition 1 remains valid if we replace S1 by LP. Note that, since for such arrays N_2(X) = 1, we have N_0(X) = N_1(X). The class of WD arrays for which N_1(X) = 1 is much larger than martingale difference arrays. A first example is given by kernel density estimators (see [6, Section 8]). The following proposition provides useful conditions ensuring that N_1(X) = 1.

Proposition 4. Let X_{i,n} and M_{i,n} be as in Theorem 1. Assume that (X_{i,n}) is WD and that nE(X'^2_{0,n}) is bounded. Consider the condition C_1: lim_{ε→0} limsup_{n→∞} nE(X'^2_{0,n} 1_{|X'_{0,n}| ≤ ε}) = 0. If C_1 holds and L_1(N) holds for some N (possibly infinite) such that R_1(N, X) = 0, then N_1(X) = 1.

Remark 8. If nE(X'^2_{0,n} 1_{X'_{0,n} ≤ x}) converges to F(x), condition C_1 means that F is continuous at zero. For such an F, one says that the Lévy distribution μ^t_{γ,F} is purely non-Gaussian.
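To visualise the limit object of property LP, the following Python sketch (ours, not from the paper; the parameter values are arbitrary) simulates one path of the Lévy process with marginals μ^t_{γ,F} for F = λ1_{[a,∞)} with a ≠ 0, i.e. a centred, rescaled Poisson process plus a linear drift.

import numpy as np

# Sketch (ours): a path with marginal law mu^t_{gamma, F}, F = lam * 1_{[a, infty)}, a != 0.
rng = np.random.default_rng(2)
gamma, a, lam = 1.0, 2.0, 3.0
grid = np.linspace(0.0, 1.0, 1001)

n_jumps = rng.poisson(lam / a**2)                    # Poisson process with intensity lam/a^2
jump_times = np.sort(rng.uniform(0.0, 1.0, n_jumps))
N_t = np.searchsorted(jump_times, grid, side="right")  # number of jumps up to each grid time

Y_t = a * (N_t - grid * lam / a**2) + gamma * grid     # the process {Y(t), t in [0,1]}
print("Y(1) =", Y_t[-1], "; E Y(1) = gamma =", gamma, "; Var Y(1) = lam =", lam)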
2.3. Convergence to Wiener processes

Let (γ, a, λ) be three M_{0,inf}-measurable random variables as in Corollary 1. We say that P1(γ, a, λ) holds if P1 is realized for the parameter (γ, a, λ).

Definition 4. Let X_{i,n} be as in Theorem 1. We say that the array (X_{i,n}) is 0-EQ if nE(X'^2_{0,n}) is bounded and R_3(0) = 0.

Note that if (X_{i,n}) is 0-EQ, then it is 1-EQ. The next proposition shows that if (X_{i,n}) is WD and 0-EQ, then the limiting distribution is necessarily a mixture of Gaussian distributions (i.e. P1(γ, 0, λ) holds). From Proposition 2, this implies the functional property LP for γ and the distribution function F = λ1_{[0,∞)}.

Proposition 5. Take X_{i,n}, M_{i,n} and S_n(t) as in Theorem 1. Assume that (X_{i,n}) is WD and 0-EQ. Then S2(b1) holds with a = 0, and P1(γ, 0, λ) holds if and only if S2(b2) holds. Moreover:

1. Setting U_{N,n}(k) = (V_{N,n}(k) + X'_{k,n})² − (V_{N,n}(k))², condition S2(b2) is equivalent to

  liminf_{N→N_1(X)} limsup_{t→0} limsup_{n→∞} ‖ E( (1/t) Σ_{k=1}^{[nt]} U_{N,n}(k) − λ | M_{0,n} ) ‖_1 = 0.    (2.7)

2. Assume that there exists a non-decreasing sequence (N_i)_{i∈N} converging to N_1(X), and a sequence λ_{N_i} of M_{0,inf}-measurable random variables such that, for any i in N,

  lim_{t→0} limsup_{n→∞} ‖ E( (1/t) Σ_{k=1}^{[nt]} ( X'^2_{k,n} + 2 Σ_{j=1}^{N_i − 1} X'_{k,n} X'_{k−j,n} ) − λ_{N_i} | M_{0,n} ) ‖_1 = 0.    (2.8)

Then λ_{N_i} converges in L¹ to some non-negative variable λ, and S2(b2) holds for this λ.

Note that if X_{i,n} = n^{-1/2} X_i for some centered square integrable random variable X_i, and M_{i,n} = M_i, then condition WD with B = {∅, Ω} is equivalent to:

  the sequence X_0 Σ_{k=1}^{n} E(X_k | M_0) converges in L¹,    (2.9)

and condition (2.9) also implies S2(b2). Applying the L¹-ergodic theorem to the sequence (X²_{i,n}), we see that (X_{i,n}) is 0-EQ. From Proposition 5, we obtain the following conditional invariance principle, which was first proved in [6]:

Corollary 5. Let the random variables X_{i,n} and the σ-algebras M_{i,n} of Theorem 1 be such that X_{i,n} = n^{-1/2} X_i for some centered square-integrable random variable X_i, and M_{i,n} = M_i. If (2.9) holds, then the Donsker process {S_n(t), t ∈ [0,1]} satisfies LP for γ = 0 and some distribution function F = λ1_{[0,∞)}. More precisely, for any φ in C(D) and any positive integer k,

  lim_{n→∞} ‖ E( φ(S_n) − ∫ φ(x√λ) W(dx) | M_k ) ‖_1 = 0,

where W is the standard Wiener distribution and λ = E(X_0² | I) + 2 Σ_{k>0} E(X_0 X_k | I).
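A minimal Python sketch (ours, not part of the paper) illustrating Corollary 5 for a simple 1-dependent stationary sequence: X_i = Z_i + 0.5 Z_{i−1} with Z_i i.i.d. N(0,1) (an arbitrary choice), so that λ = Var(X_0) + 2 Cov(X_0, X_1) = 1.25 + 1.0 = 2.25 and S_n(1) should be approximately N(0, λ).

import numpy as np

# Sketch (ours): Donsker-type limit for X_{i,n} = n^{-1/2} * (Z_i + 0.5*Z_{i-1}).
rng = np.random.default_rng(3)
n, reps = 2000, 5000

Z = rng.standard_normal((reps, n + 1))
X = Z[:, 1:] + 0.5 * Z[:, :-1]
S = X.sum(axis=1) / np.sqrt(n)        # S_n(1) for each replication

print("empirical Var(S_n(1)) =", S.var(), " expected lambda = 2.25")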
2.4. Convergence to Poisson processes

In Proposition 6 below, we give sufficient conditions on a WD and 1-EQ array for P1(γ, a, λ) to hold. Via Proposition 2, this implies the functional property LP for γ and F = λ1_{[a,∞)}. Recall that the quantities R_1(N, X) and N_1(X) have been defined in Definition 1.

Proposition 6. Let X_{i,n}, M_{i,n} and S_n(t) be as in Theorem 1. Assume that (X_{i,n}) is WD with N_1(X) = 1 and 1-EQ. Then P1(γ, a, λ) holds if and only if conditions C_2 and C_3 hold:

  C_2: lim_{n→∞} nE( X'^2_{0,n} (1 ∧ |a − X'_{0,n}|) ) = 0;

  C_3: lim_{t→0} limsup_{n→∞} ‖ E( (1/t) Σ_{i=1}^{[nt]} X'^2_{i,n} − λ | M_{0,n} ) ‖_1 = 0.

If P(a = 0) = 0, condition C_2 implies condition C_1 of Proposition 4. Combining Propositions 4 and 6, we obtain the following corollary:
Corollary 6. Let X_{i,n}, M_{i,n} and S_n(t) be as in Theorem 1. Assume that (X_{i,n}) is WD and that nE(X'^2_{0,n}) is bounded. If L_1(N) is satisfied for some N (possibly infinite) such that R_1(N, X) = 0 and C_2 holds for some a such that P(a = 0) = 0, then N_1(X) = 1. If furthermore R_3(N) = 0, then P1(γ, a, λ) holds if and only if C_3 holds.

Remark 9. We shall see in Section 3.3 that both R_3(N) = 0 for N in N* and C_3 hold as soon as C_4(N): lim_{P→N} limsup_{t→0} limsup_{n→∞} ‖ (1/t) Σ_{k=1}^{[nt]} E(X'^2_{k,n} | M_{k−P,n}) − λ ‖_1 = 0.

The first application of Corollary 6 is to m-conditionally centered arrays.

Corollary 7. Let X_{i,n}, M_{i,n}, S_n(t) be as in Theorem 1. Assume that (X'_{i,n}) is m-conditionally centered and that ‖nE(X_{0,n} | B) − γ‖_1 tends to zero. Then P1(γ, a, λ) holds for some a such that P(a = 0) = 0 as soon as C_2 holds and L_1(N), C_4(N) are satisfied for N = m + 1. Assume furthermore that (X_{i,n})_{i,n} is m-dependent and take M_{0,n} = σ(X_{i,n}, i ≤ 0) and B = {∅, Ω}. Condition C_4 then reduces to: the sequence nE(X'^2_{0,n}) converges to λ.

Remark 10. Eagleson [10] gave a criterion for (not necessarily stationary) martingale difference arrays (i.e. m = 0). In the stationary case, his result is the same as ours (with B = {∅, Ω} and γ = 0), except that he only requires convergence in probability for t = 1 in C_4(1). Here, on the one hand, we need to impose L¹ convergence in C_4 to obtain the conditional version of the Poisson convergence. On the other hand, the fact that it holds for any t implies the convergence of the process {S_n(t), t ∈ [0,1]} to a Poisson process. Note also that if (X_{i,n})_{i,n} is an m-dependent array of Bernoulli random variables, then the conditions of Corollary 7 are optimal (see Theorem 2 of Hudson et al. [14]). Therefore Corollary 7 seems to be a reasonable extension of both the martingale and the m-dependent cases.

Corollary 6 contains more information than Corollary 7. As a consequence of Corollaries 4 and 6, we obtain sufficient conditions for stationary arrays of non-uniformly mixing variables.

Corollary 8. Let X_{i,n}, M_{i,n} and S_n(t) be as in Theorem 1 and let B = {∅, Ω}. Assume that nE(X_{0,n}) converges to γ and that condition C_φ(p, 1) (resp. C_ρ(1)) holds. If furthermore L_1(∞) is satisfied and C_2 holds for some a such that P(a = 0) = 0, then N_1(X) = 1, and P1(γ, a, λ) holds as soon as nE(X'^2_{0,n}) converges to λ as n tends to infinity.
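The following Python sketch (ours, not from the paper; the threshold construction and all numerical values are illustrative assumptions) shows the m-dependent case of Corollary 7 with m = 1: for i.i.d. Z_i and a level u_n with nP(Z_0 > u_n) → λ, the 1-dependent Bernoulli array X_{i,n} = 1_{Z_i > u_n, Z_{i−1} ≤ u_n} (an exceedance starting a cluster) satisfies nE(X²_{0,n}) → λ, and S_n(1) is approximately Poisson(λ).

import numpy as np

# Sketch (ours): a 1-dependent Bernoulli array whose partial sum is approximately Poisson.
rng = np.random.default_rng(4)
lam, n, reps = 2.0, 10000, 5000
p = lam / n

counts = np.empty(reps)
for r in range(reps):
    Z = rng.random(n + 1)
    exceed = Z > 1.0 - p                  # P(Z_i > u_n) = p
    X = exceed[1:] & ~exceed[:-1]         # exceedances that start a cluster
    counts[r] = X.sum()

print("mean of S_n(1):", counts.mean(), " lam =", lam)
print("P(S_n(1) = 0):", (counts == 0).mean(), " exp(-lam) =", np.exp(-lam))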
2.5. The case of Bernoulli distributed variables

Let (X_{i,n}) be an array of Bernoulli distributed variables with parameter p_n such that np_n is bounded. We are interested in the process S_n(t) = X_{1,n} + ⋯ + X_{[nt],n}. An interesting example is X_{0,n} = 1_{X_0 > u_n} for some numerical sequence u_n (see [13] for the importance of the exceedance process S_n(t) in extreme value theory). If for each n the sequence (X_{i,n}) is i.i.d., it is well known that S_n(1) converges in distribution if and only if np_n converges to a non-negative number λ, and that the limiting distribution is
Poisson with parameter λ. For m-dependent sequences, necessary and sufficient conditions for the convergence of S_n(1) are given in [14]. Using our notation, these conditions are equivalent to B(m + 1) of Corollary 4: there is a distribution function F_{m+1} such that

  lim_{n→∞} [ nE(S²_{m+1,n}(1) 1_{S_{m+1,n}(1) ≤ x}) − nE(S²_{m,n}(1) 1_{S_{m,n}(1) ≤ x}) ] = F_{m+1}(x).    (2.10)

Since X_{i,n} is either 0 or 1, it is clear that F_{m+1} is piecewise constant with jumps at the points 1, …, m + 1. In fact the limiting distribution is integer-valued compound Poisson. More precisely, it is the law of the sum V_1 + ⋯ + V_N where N is Poisson distributed with parameter λ = Σ_{i=1}^{m+1} (F_{m+1}(i) − F_{m+1}(i−1))/i² and is independent of the sequence (V_k)_{k≥1}, which is i.i.d. with marginal distribution P(V_1 = i) = (F_{m+1}(i) − F_{m+1}(i−1))/(λi²). See Theorem 1.1 in [18] for more details on this important question.

Now, let X_{i,n} and M_{i,n} be as in Theorem 1. From Theorem 1, property S1 holds if and only if S2 holds. Hence the distribution function F in S2(b) is piecewise constant with jumps at integer points, and the limiting distribution of S_n(t) is a mixture of integer-valued compound Poisson distributions. If (X_{i,n}) is WD and EQ, then S1 holds as soon as A(N_i) of Proposition 1 holds with N_i converging to N_0. For instance, suppose that C_φ(1, N) (resp. C_ρ(N)) holds for any N ≤ N_0. In that case S1 holds as soon as nE(X_{0,n}) converges to γ and condition B(N_i) of Corollary 4 holds for a sequence N_i converging to N_0. Both γ and the distribution function F are non-random, and the limiting distribution of S_n(t) is integer-valued compound Poisson. This extends the result of Hudson et al. [14], since (2.10) is exactly B(m + 1). Note also that if S2 holds, we can establish the convergence of (S_n(t_1), …, S_n(t_k)) for any k-tuple (t_1, …, t_k) (cf. [6, Section 4]). Therefore, there is convergence for the point process S_n(A) = Σ_{i∈nA} X_{i,n} indexed by subsets of [0,1] (see [17, Theorem 4.2]).

To be complete, let us give a simple example of an array of Bernoulli random variables for which γ and F are random. Let (Z_{i,n}) be an i.i.d. array of Bernoulli-distributed variables with parameter a/n and let ε be a Bernoulli-distributed variable with parameter 1/2, independent of (Z_{i,n}). Set M_{i,n} = σ(ε, Z_{k,n}, k ≤ i). Then X_{i,n} = εZ_{i,n} is Bernoulli-distributed with parameter a/2n and E(X_{i+1,n} | M_{i,n}) = εa/n. X'_{i,n} = X_{i,n} − εa/n is a martingale difference array, so that (X_{i,n}) is WD with N_1(X) = 1 and γ = aε. Furthermore, it is clearly EQ with N_2(X) = 1. Consequently, N_0(X) = 1 and Corollary 3 applies: S1 holds if and only if (2.4) holds. Now, (2.4) is satisfied with F = aε1_{[1,∞)}, and S_n(t) converges in distribution to a variable whose conditional distribution with respect to ε is Poisson with parameter taε. From Proposition 6, {S_n(t), t ∈ [0,1]} converges in distribution to a mixture of Poisson processes in D([0,1]).
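A short Python sketch (ours, not part of the paper; the value of a is arbitrary) of the random-F example above: conditionally on ε, S_n(1) is Binomial(n, εa/n), so the limit law is a mixture of a point mass at 0 (ε = 0) and a Poisson(a) distribution (ε = 1).

import numpy as np

# Sketch (ours): X_{i,n} = eps * Z_{i,n}, eps ~ Bernoulli(1/2) independent of the Z_{i,n}.
rng = np.random.default_rng(5)
a, n, reps = 3.0, 5000, 20000

eps = rng.integers(0, 2, size=reps)                 # Bernoulli(1/2)
S = rng.binomial(n, a / n, size=reps) * eps         # S_n(1) = eps * (Z_{1,n} + ... + Z_{n,n})
print("P(S_n(1) = 0):", (S == 0).mean(), " vs 0.5*(1 + exp(-a)) =", 0.5 * (1 + np.exp(-a)))
print("E(S_n(1)):", S.mean(), " vs a/2 =", a / 2)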
3. Proofs

In the following two sections, we prove Theorem 1. The fact that γ and F are invariant under T can be proved as in Section 3.2 of [6].
3.1. S1 implies S2

Since μ_{γ,F} has mean γ, it is clear that S1 implies S2(a). We now prove that S1 implies S2(b).

Lemma 1. Let C_b be the set of continuous bounded functions and let 𝓕 = {x ↦ x²g(x) : g ∈ C_b}. Let ν(dx) = x^{-2} dF(x). For any f in 𝓕, we have lim_{t→0} ‖ t^{-1} μ^t_{γ,F}(f) − ν(f) ‖_1 = 0.

The proof of this lemma will be given at the end of this section. Let f_x(y) = y² 1_{y ≤ x}, and define f_{x,+ε}(y) = f_{x+ε}(y) + ε^{-1} y² (x − y) 1_{x ≤ y ≤ x+ε} and f_{x,−ε} = f_{x−ε,+ε}. We have the inequality

  ‖ E( t^{-1} f_x(S_n(t)) − F(x) | M_{0,n} ) ‖_1
    ≤ ‖ E( t^{-1} f_{x,+ε}(S_n(t)) − ν(f_{x,+ε}) | M_{0,n} ) ‖_1 + ‖ ν(f_{x,+ε}) − ν(f_{x,−ε}) ‖_1
      + ‖ E( t^{-1} ( f_{x,+ε}(S_n(t)) − f_{x,−ε}(S_n(t)) ) | M_{0,n} ) ‖_1.    (3.1)

Note first that, setting G(x) = E(F(x)), lim_{ε→0} ‖ ν(f_{x,+ε}) − ν(f_{x,−ε}) ‖_1 = G(x⁺) − G(x⁻), and since x is a continuity point of G, the latter is zero. Next, we infer from S1 that

  limsup_{n→∞} ‖ E( t^{-1} f_{x,+ε}(S_n(t)) − ν(f_{x,+ε}) | M_{0,n} ) ‖_1 ≤ ‖ t^{-1} μ^t_{γ,F}(f_{x,+ε}) − ν(f_{x,+ε}) ‖_1

and Lemma 1 implies that the latter tends to zero as t goes to zero. It remains to control the last term on the right-hand side of (3.1). Applying first S1 and then Lemma 1, we have

  limsup_{t→0} limsup_{n→∞} (1/t) ‖ E( f_{x,+ε}(S_n(t)) − f_{x,−ε}(S_n(t)) | M_{0,n} ) ‖_1 ≤ ‖ ν(f_{x,+ε}) − ν(f_{x,−ε}) ‖_1

and we know that the latter tends to zero with ε. Hence, for all continuity points of G, lim_{t→0} limsup_{n→∞} ‖ E( t^{-1} f_x(S_n(t)) − F(x) | M_{0,n} ) ‖_1 = 0, which is exactly S2(b).

Proof of Lemma 1. Arguing as in Corollary 8.9 in [22] and using Theorem 8.7 in [22], we obtain that for any bounded function f of 𝓕 and any fixed ω,

  lim_{t→0} t^{-1} μ^t_{γ,F}(f) = ν(f).    (3.2)

Since t^{-1} ∫ x² μ^t_{γ,F}(dx) = tγ² + ∫ x² ν(dx), we infer that (3.2) extends to the class 𝓕. Now, every f of 𝓕 satisfies |f(x)| ≤ Mx², so that |t^{-1} μ^t_{γ,F}(f) − ν(f)| ≤ M(2F(∞) + tγ²). Since F(∞) and γ² are integrable, Lemma 1 follows from (3.2) and the dominated convergence theorem. □
3.2. S2 implies S1

Let B³₁(R) be the class of three times continuously differentiable real functions h such that ‖h″‖_∞ ≤ 1 and ‖h‴‖_∞ ≤ 1. Assume for a while that S1(h) holds for any h of B³₁(R). In such a case, we know from Dedecker and Merlevède [6] that S1 extends to any continuous bounded function. Since x ↦ x²/2 belongs to B³₁(R), we infer that S²_n(t) is uniformly integrable for any t in [0,1], which implies that S1 extends to H. Hence, it suffices to prove that S2 implies S1(h) for any h of B³₁(R).

If h belongs to B³₁(R), then |h(x + a) − h(x)| ≤ a|h′(0)| + a|x| + a²/2. Hence ‖h(S_n(t)) − h(S_n(t)∘T^k)‖_1 ≤ u_n(|h′(0)| + ‖S_n(t)‖_2 + u_n/2) with u_n = ‖S_n(t) − S_n(t)∘T^k‖_2. From S2(b), the stationarity of (X_{i,n}) and the fact that E(X²_{0,n}) tends to zero, we infer that for any t in [0,1] the sequence ‖S_n(t)‖_2 is bounded. The asymptotic negligibility of X_{0,n} also implies that u_n goes to zero as n increases. Combining the two preceding arguments, we obtain

  lim_{n→∞} ‖ h(S_n(t)) − h(S_n(t)∘T^k) ‖_1 = 0.    (3.3)

(3.3) implies that S1(h) holds if and only if lim_{n→∞} ‖ E( h(S_n(t)∘T^k) − μ^t_{γ,F}(h) | M_{k,n} ) ‖_1 = 0. Now, since both F and P are T-invariant, the fact that S2 implies S1 follows from:

Proposition 7. Let X_{i,n} and M_{i,n} be defined as in Theorem 1. If S2 holds, then, for any h in B³₁(R) and any t in [0,1], lim_{n→∞} ‖ E( h(S_n(t)) − μ^t_{γ,F}(h) | M_{0,n} ) ‖_1 = 0.

Proof. We prove the result for S_n(1), the proof of the general case being unchanged. Let M_∞ = σ(∪_{k,n} M_{k,n}). Without loss of generality, suppose that there exists an array (ε_{i,n})_{i∈Z} of random variables, i.i.d. conditionally on M_∞, with conditional marginal distribution μ^{1/n}_{γ,F}.

Notations 1. Let i, p and n be three integers such that 1 ≤ i ≤ p ≤ n. Set q = [n/p] and define

  U_{i,n} = X_{iq−q+1,n} + ⋯ + X_{iq,n},  D_{i,n} = ε_{iq−q+1,n} + ⋯ + ε_{iq,n},
  V_{i,n} = U_{1,n} + U_{2,n} + ⋯ + U_{i,n},  Γ_{i,n} = D_{i,n} + D_{i+1,n} + ⋯ + D_{p,n}.

Notations 2. Let g be any function from R to R. For k and l in [1, p] and any positive integer n ≥ p, set g_{k,l,n}(x) = g(V_{k,n} + x + Γ_{l,n}), with the conventions g_{k,p+1,n}(x) = g(V_{k,n} + x) and g_{0,l,n}(x) = g(Γ_{l,n} + x). Afterwards, we shall apply this notation to the successive derivatives of the function h. For brevity we shall omit the index n and write g_{k,l} for g_{k,l}(0).

Let σ_n = ε_{1,n} + ⋯ + ε_{n,n}. Since (ε_{i,n})_{i∈Z} is i.i.d. conditionally on M_∞ with conditional marginal distribution μ^{1/n}_{γ,F}, we have

  E( h(S_n(1)) − μ^1_{γ,F}(h) | M_{0,n} )
    = E( h(S_n(1)) − h(V_{p,n}) | M_{0,n} ) + E( h(V_{p,n}) − h(Γ_{1,n}) | M_{0,n} ) + E( h(Γ_{1,n}) − h(σ_n) | M_{0,n} ).    (3.4)
Here, note that |S_n(1) − V_{p,n}| ≤ |X_{n−p+2,n}| + ⋯ + |X_{n,n}|. Arguing as in (3.3), we infer that

  lim_{n→∞} ‖ h(S_n(1)) − h(V_{p,n}) ‖_1 = 0  and  lim_{n→∞} ‖ h(Γ_{1,n}) − h(σ_n) ‖_1 = 0.    (3.5)

In view of (3.5), it remains to control the second term on the right-hand side of (3.4). To this end, we use Lindeberg's decomposition:

  h(V_{p,n}) − h(Γ_{1,n}) = Σ_{i=1}^{p} ( h_{i,i+1} − h_{i−1,i+1} ) + Σ_{i=1}^{p} ( h_{i−1,i+1} − h_{i−1,i} ).    (3.6)
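The decomposition (3.6) is a purely algebraic telescoping identity. The following Python sketch (ours, with arbitrary random inputs) checks it numerically, using the conventions h_{k,p+1} = h(V_k) and h_{0,l} = h(Γ_l) recalled in Notations 2.

import numpy as np

# Sketch (ours): numerical check of the telescoping identity (3.6).
rng = np.random.default_rng(6)
p = 7
U = rng.standard_normal(p)                               # stands in for U_{1,n}, ..., U_{p,n}
D = rng.standard_normal(p)                               # stands in for D_{1,n}, ..., D_{p,n}
V = np.concatenate(([0.0], np.cumsum(U)))                # V_0 = 0, V_k = U_1 + ... + U_k
G = np.concatenate((np.cumsum(D[::-1])[::-1], [0.0]))    # Gamma_l = D_l + ... + D_p, Gamma_{p+1} = 0
h = np.tanh                                              # any smooth test function

def h_kl(k, l):
    return h(V[k] + G[l - 1])                            # G[l-1] = Gamma_l, G[p] = Gamma_{p+1} = 0

lhs = h_kl(p, p + 1) - h_kl(0, 1)                        # h(V_p) - h(Gamma_1)
rhs = sum(h_kl(i, i + 1) - h_kl(i - 1, i + 1) for i in range(1, p + 1)) \
    + sum(h_kl(i - 1, i + 1) - h_kl(i - 1, i) for i in range(1, p + 1))
print(abs(lhs - rhs))                                    # ~0 up to rounding error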
Now, applying Taylor's integral formula we get that:

  h_{i,i+1} − h_{i−1,i+1} = U_{i,n} h′_{i−1,i+1} + U²_{i,n} ∫_0^1 (1 − t) h″_{i−1,i+1}(tU_{i,n}) dt,
  h_{i−1,i+1} − h_{i−1,i} = −D_{i,n} h′_{i−1,i+1} − D²_{i,n} ∫_0^1 (1 − t) h″_{i−1,i+1}(tD_{i,n}) dt.

Set G(x) = E(F(x)). Let ε > 0 and choose a finite grid x_0 ≤ x_1 ≤ ⋯ ≤ x_N of continuity points of G such that |x_j − x_{j+1}| ≤ ε, G(x_0) ≤ ε and G(∞) − G(x_N) ≤ ε. Let g_i(x) = ∫_0^1 (1 − t) h″_{i−1,i+1}(tx) dt. Since h ∈ B³₁(R), g_i is bounded by 1/2 and 1/6-Lipschitz. Hence

  6 | g_i(x_j) − ∫_0^1 (1 − t) h″_{i−1,i+1}(tU_{i,n}) dt | 1_{x_j < U_{i,n} ≤ x_{j+1}} ≤ ε,
  6 | g_i(x_j) − ∫_0^1 (1 − t) h″_{i−1,i+1}(tD_{i,n}) dt | 1_{x_j < D_{i,n} ≤ x_{j+1}} ≤ ε.

It follows that |E( h(V_{p,n}) − h(Γ_{1,n}) | M_{0,n} )| ≤ D_1 + D_2 + D_3 + D_4, where

  D_1 = | Σ_{i=1}^{p} E( (U_{i,n} − D_{i,n}) h′_{i−1,i+1} | M_{0,n} ) |,
  D_2 = | Σ_{i=1}^{p} Σ_{j=0}^{N−1} E( ( U²_{i,n} 1_{x_j < U_{i,n} ≤ x_{j+1}} − D²_{i,n} 1_{x_j < D_{i,n} ≤ x_{j+1}} ) g_i(x_j) | M_{0,n} ) |,
  D_3 = (1/2) Σ_{i=1}^{p} E( U²_{i,n} 1_{U_{i,n} ∉ [x_0, x_N]} + D²_{i,n} 1_{D_{i,n} ∉ [x_0, x_N]} | M_{0,n} ),
  D_4 = (ε/6) Σ_{i=1}^{p} E( U²_{i,n} + D²_{i,n} | M_{0,n} ).

Control of D_3 and D_4. The probability P being invariant by T, we have

  Σ_{i=1}^{p} ‖ U²_{i,n} 1_{U_{i,n} ∉ [x_0, x_N]} ‖_1 = E( (S_n²(1/p)/(1/p)) 1_{S_n(1/p) ∉ [x_0, x_N]} )  and
  Σ_{i=1}^{p} ‖ U²_{i,n} ‖_1 = E( S_n²(1/p)/(1/p) ).    (3.7)
Consequently, S2(b) implies both

  lim_{p→∞} limsup_{n→∞} Σ_{i=1}^{p} ‖ U²_{i,n} ‖_1 = G(∞)  and
  lim_{p→∞} limsup_{n→∞} Σ_{i=1}^{p} ‖ U²_{i,n} 1_{U_{i,n} ∉ [x_0, x_N]} ‖_1 = G(x_0) + G(∞) − G(x_N) ≤ 2ε.

From Lemma 1, we infer that the same arguments apply to D_{i,n}, and finally

  limsup_{p→∞} limsup_{n→∞} ‖ D_3 + D_4 ‖_1 ≤ 2ε + εG(∞)/3.    (3.8)
p!1
(3.9)
n!1
Control of D2 : We shall prove that, for any non-negative integer j less than N 1; lim lim sup
p!1
n!1
p X
kEððU 2i;n 1xj oU i;n pxjþ1 D2i;n 1xj oDi;n pxjþ1 Þgi ðxj ÞjM0;n Þk1 ¼ 0.
i¼1
(3.10) Define f j ðtÞ ¼ t2 1xj otpxjþ1 and recall that q ¼ ½n=p : By definition of Di;n ; q=n
EðD2i;n 1xj oDi;n pxjþ1 gi ðxj ÞjM1 Þ ¼ mg;F ðf j ÞEðgi ðxj ÞjM1 Þ, where the random variable Eðgi ðxj ÞjM1 Þ is Mlði;nÞ;n -measurable and bounded by one. Since M0;n Mlði;nÞ;n M1 ; we obtain kEððU 2i;n 1xj oU i;n pxjþ1 D2i;n 1xj oDi;n pxjþ1 Þgi ðxj ÞjM0;n Þk1 q=n
¼ kEððU 2i;n 1xj oU i;n pxjþ1 mg;F ðf j ÞÞEðgi ðxj ÞjM1 ÞjM0;n Þk1 q=n
pkEðU 2i;n 1xj oU i;n pxjþ1 mg;F ðf j ÞjMlði;nÞ;n Þk1 . Since both mF ðf j Þ and P are invariant by the transformation T, (3.10) follows from q=n
lim lim sup pkEðf j ðS n ð1=pÞÞ mg;F ðf j ÞjM0;n Þk1 ¼ 0.
p!1
n!1
(3.11)
ARTICLE IN PRESS 754
J. Dedecker, S. Louhichi / Stochastic Processes and their Applications 115 (2005) 737–768 q=n
We infer from Lemma 1 that limp!1 lim supn!1 kpmg;F ðf j Þ F ðxjþ1 Þ þ F ðxj Þk1 ¼ 0; and (3.11) follows from S2(b). End of the proof of Proposition 7. From (3.8)–(3.10) we infer that, for h in B31 ðRÞ; lim!0 lim supp!1 lim supn!1 kD1 þ D2 þ D3 þ D4 k1 ¼ 0: This fact together with (3.4), (3.5) and (3.7) imply Proposition 7. & 3.3. Sufficient conditions for EQ In Proposition 8, we give conditions for a WD-array to be EQ. We need a maximal inequality. Lemma 2. Let X i;n and Mi;n be as in Theorem 1, and recall that B is either I or f;; Og: P 0 0 0 Let X 0i;n ¼ X i;n EðX i;n jBÞ; S0n ðtÞ ¼ ½nt
X i¼1 i;n and S n ðtÞ ¼ sup0pspt jS n ðsÞj: Assume 02 that ðX i;n Þ is WD and that lim supn!1 nEðX 0;n ÞpC for some positive constant C. Then lim sup lim sup t!0
n!1
1 EððS 0 n ðtÞÞ2 Þo1. t
(3.12)
0 Proof. Let S 0 n ðtÞ ¼ sup0pspt ðS n ðsÞÞþ : Applying Proposition 1(a) of Dedecker and 0 Rio [7] to the array ðX i;n Þ with l ¼ 0; we get ! ½nt X 1 1 1XN 2 0 0 0 EððSn ðtÞÞ Þp8E jX k;n X kþi;n j t t k¼1 i¼0 m 0 X 0 þ 8n sup X 0;n EðX k;n jM0;n Þ . ð3:13Þ Npmp½nt k¼N 1
Since ðX i;n Þ is WD, we can choose NðÞ large enough so that R1 ðNðÞ; X Þp: Now, 02 using the elementary inequality 2jX 0k;n X 0l;n jpX 02 k;n þ X l;n together with the stationar2 ity of the sequence, we obtain lim supt!0 lim supn!1 t1 EððS0 n ðtÞÞ Þp4NðÞC þ 8: 0 Of course the same arguments applies to the array ðX i;n Þ and (3.12) follows. & Definition 5. Let ðX i;n Þ and Mi;n be as Pin Theorem P½nt 1.0 For any 0non-negative integer N, define the variable U M;N;n ðtÞ ¼ N1 i¼M k¼1 jX ki;n jEðjX k;n j jMki;n Þ: For any 1pMpN; define R4 ðM; N; X Þ ¼ lim supt!0 lim supn!1 t1 EðU M;N;n ðtÞð1 ^ S0 n ðtÞÞÞ: Proposition 8. Let X i;n ; Mi;n and S n ðtÞ be as Theorem 1. Recall that R3 ðN; X Þ has been defined in Remark 3. Assume that lim supn!1 nEðX 02 0;n ÞpC for some positive constant C. pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi (1) If the array ðX i;n Þ is WD, then R2 ðNÞpR3 ðNÞ þ 2ðM 1Þ CR3 ðNÞ þ 2R4 ðM; NÞ; for any 1pMpN: Consequently N 2 ðX ÞpN 3 ðX Þ _ 1 and ðX i;n Þ is EQ as soon as R3 ðNÞ tends to zero as N tends to infinity and limM!1 lim supN!1 R4 ðM; NÞ ¼ 0: (2) For any sequence un of random variables such that nun is equiintegrable, ½nt
X 1 0 02 R3 ðNÞp lim sup lim sup ð1 ^ S n ðtÞÞ ðEðX k;n jMkN;n Þ un Þ . t n!1 t!0 k¼1 1
In particular R3 ðNÞplim
supn!1 nkEðX 02 N;n jM0;n Þ
EðX 02 0;n Þk1 :
ARTICLE IN PRESS J. Dedecker, S. Louhichi / Stochastic Processes and their Applications 115 (2005) 737–768
755
(3) For any sequences ðui;n ÞMpioN of random variables such that nu2i;n is equiintegrable, ½nt
N1 XX 1 0 0 0 R4 ðM; NÞp lim sup lim sup ð1 ^ S n ðtÞÞ jX ki;n jðEðjX k;n j jMki;n Þ ui;n Þ . t n!1 t!0 i¼M k¼1 1
In particular R4 ðM; NÞplim supn!1
PN1 i¼M
nkX 00;n ðEðjX 0i;n j jM0;n Þ EðjX 00;n jÞÞk1 :
P 0 0 0 Proof. (1) For 0pioN define V i;N;n ðtÞ ¼ Eð ½nt
k¼1 jX k;n X ki;n jð1 ^ jS kN jÞÞ 1 and also Qði; NÞ ¼ lim supt!0 lim supn!1 t V i;N;n ðtÞ: By definition of R2 ðN; X Þ; we have R2 ðN; X ÞpQð0; NÞ þ 2
M1 X
Qði; NÞ þ 2 lim sup lim sup n!1
t!0
i¼1
0
1 X 1N V i;N;n ðtÞ. t i¼M
(3.14)
P½nt
02 i¼1 EðX i;n jMiN;n ÞÞ
Clearly V 0;N;n ðtÞpEðð1 ^ jS n ðtÞjÞ P we have the two inequalities 0 and N1 i¼M V i;N;n ðtÞpEðð1 ^ jS n ðtÞjÞU M;N;n ðtÞÞ: Consequently Qð0; NÞpR3 ðNÞ: In the same way, we obtain that lim sup lim sup t!0
n!1
1 X 1N V i;N;n ðtÞpR4 ðM; NÞ. t i¼M
(3.15)
Now, applying Cauchy–Schwarz twice, we have 0 !1=2 ½nt
½nt
X X 02 V i;N;n ðtÞpE@ X X 02 ð1 ^ jS0 ki;n
k¼1
k;n
kN
!1=2 1 A jÞ
k¼1
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffipffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ntEðX 02 0;n Þ V 0;N;n ðtÞ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi and we derive that Qði; NÞp CR3 ðNÞ: This completes the proof of (1). (2) By the triangle inequality, we have ½nt
X 1 0 02 ðEðX k;n jMkN;n Þ un Þ R3 ðNÞp lim sup lim sup ð1 ^ S n ðtÞÞ t n!1 t!0 k¼1 p
1
0
þ lim sup lim sup Eðnun ð1 ^ S n ðtÞÞÞ. t!0
n!1
pffiffi By Lemma 2, lim supn!0 Eðð1 ^ S 0 n ðtÞÞ ¼ Oð tÞ; so that the second term in right hand is 0. (3) By the triangle inequality, we have ½nt
N1 XX 1 0 0 0 R4 ðM; NÞplim sup lim sup ð1 ^ S n ðtÞÞ jX ki;n jðEðjX k;n jjMki;n Þ ui;n Þ t n!1 t!0 i¼M k¼1
þ lim sup lim sup t!0
n!1
1 t
N 1 X i¼M
E ui;n ð1 ^ S 0 n ðtÞÞ
½nt
X k¼1
!
jX 0ki;n j .
1
ð3:16Þ
ARTICLE IN PRESS 756
J. Dedecker, S. Louhichi / Stochastic Processes and their Applications 115 (2005) 737–768
Next, applying Cauchy–Schwarz inequality, we obtain that ! qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ½nt
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi X 1 0 0 E ui;n ð1 ^ S n ðtÞÞ jX ki;n j p Eðnu2i;n ð1 ^ S0 n ðtÞÞÞ nEðX 02 0;n Þ. t k¼1 and we conclude as for (2).
(3.17)
&
3.4. Proof of Proposition 1 and Corollary 3 We first prove the following lemma, which is the main result of this section: Lemma P 3. Let X i;n and Mi;n be as in Theorem 1. Let X 0i;n ¼ X i;n EðX i;n jBÞ and 0 0 S ¼ ½nt
i¼1 X i;n : For any non-negative integer N and any function f, let V N;n ðkÞ ¼ Pn ðtÞ k1 0 0 i¼kNþ1 X i;n and Df ðN; n; kÞ ¼ f ðV N;n ðkÞ þ X k;n Þ f ðV N;n ðkÞÞ For any positive number C, let FC be the class of three times continuously differentiable functions from R to R such that kh000 k1 pC; kh00 k1 pC; h0 ð0Þ ¼ 0 and hð0Þ ¼ 0: For any function h belonging to FC ; we have ! ½nt
1 X lim sup lim sup E hðS 0n ðtÞÞ DhðN; n; kÞM0;n t n!1 t!0 k¼1 1
pCðR1 ðN; X Þ þ R2 ðN; X ÞÞ. Proof. For any integer N, we have the decompositions hðS0n ðtÞÞ ¼
½nt
X
X 0k;n h0 ðS 0kN;n Þ þ
k¼1
þ
½nt
X
½nt
X
Z
X 02 k;n
0
k¼1
X 0k;n ðS0k1;n
S 0kN;n Þ
k¼1
1
Z 0
ð1 sÞh00 ðS 0k1;n þ sX 0k;n Þ ds
1
h00 ðS0kN;n þ sðS0k1;n S 0kN;n ÞÞ ds
and hðV N;n ðkÞ þ X 0k;n Þ hðV N;n ðkÞÞ Z 1 02 ¼ X k;n ð1 sÞh00 ðS0k1;n S 0kN;n þ sX 0k;n Þ ds 0
þ X 0k;n ðS 0k1;n S0kN;n Þ
Z
1
h00 ðsðS 0k1;n S 0kN;n ÞÞ ds.
0
From this two decompositions, we get hðS 0n ðtÞÞ
½nt
X k¼1
DhðN; n; kÞ ¼
½nt
X k¼1
X 0k;n h0 ðS 0kN;n Þ þ E 1 þ E 2 ,
(3.18)
where E1 ¼
½nt
X
X 0k;n ðS 0k1;n S0kN;n Þ
k¼1
Z
1
h00 ðS 0kN;n þ sðS 0k1;n S 0kN;n ÞÞ h00 ðsðS0k1;n S 0kN;n ÞÞ ds
0
E2 ¼
½nt
X
X 02 k;n
k¼1
Z
1 0
ð1 sÞðh00 ðS0k1;n þ sX 0k;n Þ h00 ðS 0k1;n S 0kN;n þ sX 0k;n ÞÞ ds.
Let us first study the first term on right hand in (3.18). Obviously ! ½nt
½nt
½nt
X X X X 0k;n h0 ðS 0kN;n Þ ¼ X 0k;n ðh0 ðS 0i;n Þ h0 ðS0i1;n ÞÞ. i¼1
k¼1
k¼Nþi
Taking the conditional expectation with respect to Mi;n and using that h0 is Clipschitz, ! ½nt
½nt ½nt
X X 0 X 0 0 0 0 X k;n h ðSkN;n ÞM0;n pC EðX k;n jMi;n Þ X i;n E k¼1 i¼1 k¼Nþi 1 1 m X 0 0 ptC max nX 0;n EðX k;n jM0;n Þ , Npmp½nt k¼N 1
the second inequality involving the stationarity of ðX 0i;n Þ: This together with (3.18) yields ! ½nt
X 1 0 lim sup lim sup E hðS n ðtÞÞ DhðN; n; kÞM0;n t n!1 t!0 k¼1 1
1 pCR1 ðN; X Þ þ lim sup lim sup kE 1 þ E 2 k1 . t n!1 t!0 Now, since h00 is C-lipschitz and bounded by C, we easily see that jE 1 jp2C
½nt
X
jX 0k;n ðS0k1;n S 0kN;n Þjð1 ^ jS 0kN;n jÞ
and
k¼1
jE 2 jpC
½nt
X
0 X 02 k;n ð1 ^ jS kN;n jÞ.
k¼1
This completes the proof of Lemma 3.
&
End of the proof of Proposition 1. Assume that ðX i;n Þ is WD and EQ. Let ðN i Þi2N be a non-decreasing sequence converging to N 0 ðX Þ such that AðN i Þ holds for any i in N : For any function g denote by g¯ the function y ! y2 gðyÞ: For any positive integer k, we have EðV 2N;n ðkÞÞpN 2 EðX 02 nEðX 02 0;n Þ: Since by assumption 0;n Þ is P½nt the sequence 2 1 bounded, we infer that, for each N, the sequence t k¼1 EðV N;n ðkÞÞ is bounded.
This fact together with AðN i Þ implies that, for each i and each continuous bounded function g, ! Z ½nt
1 X lim lim supE D¯gðN i ; n; kÞ g dF N i M0;n ¼ 0. (3.19) t!0 n!1 t k¼1 1
If furthermore g is three times continuously differentiable with compactly supported derivatives, then g¯ belongs to a class FC for a certain constant C. Now Lemma 3 together with (3.19) yields Z 1 0 lim sup lim supE g dF N i M0;n pCðR1 ðN i ; X Þ þ R2 ðN i ; X ÞÞ. g¯ ðS n ðtÞÞ t n!1 t!0 1 (3.20) From (3.20), we first derive that, for jXi; Z Z lim supE g dF N j g dF N i M0;n p2CðR1 ðN i ; X Þ þ R2 ðN i ; X ÞÞ, n!1
1
(3.21) R R T so that lim supn!1 kEð g dF N j g dF N i j kXn M0;k Þk1 p2CðR1 ðN i ; X Þ þ R2 ðN i ; X ÞÞ: Applying the martingale convergence theorem, and bearing in mind that each F N i is M0;inf -measurable, it follows from (3.21) that Z Z g dF N i p2CðR1 ðN i ; X Þ þ R2 ðN i ; X ÞÞ. for jXi; g dF N j R
1
1
Hence, g dF N i converges in L to an M0;inf -measurable limit LðgÞ: Now, from T (3.20) again limt!0 lim supn!1 kEðt1 g¯ ðS 0n ðtÞÞ LðgÞj kXn M0;k Þk1 ¼ 0; and since LðgÞ is M0;inf -measurable, ! 1 \ g¯ ðS0n ðtÞÞ lim lim supE M0;k LðgÞ ¼ 0. (3.22) t!0 n!1 t kXn 1
From (3.22), we infer that for non-negative g, LðgÞ is almost surely non-negative. Furthermore, Lðf þ gÞ ¼ Lðf Þ þ LðgÞ; LðagÞ ¼ aLðgÞ and EðLð1ÞÞ is finite. This enables us to prove the following lemma: Lemma 4. Let C3 be the space of bounded and three times continuously differentiable functions with compactly supported derivatives. There exists an R M0;inf -measurable distribution function F such that, for any function g in C3 ; LðgÞ ¼ g dF almost surely. Applying Lemma 4, we obtain that Z 1 g¯ ðS0n ðtÞÞ g dF M0;n ¼ 0. lim lim supE t!0 n!1 t 1
(3.23)
To prove that (3.23) still holds for gðyÞ ¼ 1ypx where x is a continuity point of x ! EðF ðxÞÞ; we proceed as in Inequality (3.1), Section 3.1. Since S2(a) holds, we can apply Remark 1 with Z n ðtÞ ¼ ½nt EðX 0;n Þ; and Proposition 1 follows.
Proof of Lemma 4. Let g : R7!½0; 1 be a function of C3 ; equal to one on 1; 0 and to 0 on ½1; þ1½: Define gn;x ðyÞ ¼ gðnðy xÞÞ: For almost every o; the sequence Lðgn;x ÞðoÞ is non-increasing with n. Denote by Lð1 1;x ÞðoÞ its limit. Since Lðgn;x ÞðoÞpLð1ÞðoÞ a:s:; the dominated convergence theorem ensures that Lðgn;x Þ converges to Lð1 1;x Þ in L1 : It is clear that if hX1 1;x then LðhÞXLð1 1;x Þ a:s:; and if xXy then Lð1 1;x ÞXLð1 1;y Þ a:s:: Therefore, on a set A of probability 1, the function from Q to R : x ! Lð1 1;x Þ is almost surely non-decreasing. Define the random function F as follows: for each o of A, F is the unique distribution function equal to x ! Lð1 1;x Þ on Q: For o in Ac ; F 0: According to our definition, F is a M0;inf -measurable distribution function.P Now, let h be any function m of C3 with compact support, and choose a function h ¼ i¼1 ai 1 xi ;xiþ1 ; with xi 2 Q; R such that 0ph h p: Then jLðhÞ h dF jpLð1Þ a:s: and consequently LðhÞ ¼ R h dF a:s:: This result extends to any function of C3 and the proof of Lemma 4 is complete. & Proof of Corollary 3. We begin with the first part of Corollary 3. Let ðX i;n Þ be WD and EQ with N 0 ðX Þ ¼ 1: From Proposition 1, we know that (2.4) implies S2. Assume that S2 holds and let f x ðyÞ ¼ y2 1ypx : To see that (2.4) holds, it suffices to prove that, for any continuity point of x ! GðxÞ ¼ EðF ðxÞÞ; ! ½nt
1 X 0 0 lim lim sup E f x ðS n ðtÞÞ f x ðX k;n ÞM0;n ¼ 0. (3.24) t!0 n!1 t k¼1 1
For any positive let hx;þ (resp. hx; ) be a positive three times continuously differentiable function, bounded by one, equal to one on the interval
1; x (resp. 1; x ) and to zero on ½x þ ; 1½ (resp. ½x; 1½). The functions f x;þ ðyÞ ¼ y2 hx;þ ðyÞ and f x; ðyÞ ¼ y2 hx; ðyÞ belong to a class FC for a certain constant C. Using these functions, we have the inequality ! ½nt
X 1 f x ðX 0k;n ÞM0;n E f x ðS0n ðtÞÞ t k¼1 1 ! ½nt
X 1 0 p E f x;þ ðS n ðtÞÞ f x;þ ðX k;n ÞM0;n t k¼1 1 ! ½nt
X 1 1 þ E ðf x;þ f x; ÞðX 0k;n Þ þ Eððf x;þ f x; ÞðS 0n ðtÞÞÞ. ð3:25Þ t t k¼1 The first term on right hand is well controlled via Lemma 3. From Remark 1 with Z n ðtÞ ¼ ½nt EðX 0;n jBÞ; we infer that S2(b) holds with X 0i;n instead of X i;n : Combining this fact with Lemma 3, we obtain ! ½nt
X 1 0 lim sup lim sup E ðf x;þ f x; ÞðX k;n Þ pGðx þ Þ Gðx Þ (3.26) t n!1 t!0 k¼1
and property S2(b) with X 0i;n instead of X i;n provides also 1 lim sup lim sup Eððf x;þ f x; ÞðS 0n ðtÞÞÞpGðx þ Þ Gðx Þ. n!1 t t!0
(3.27)
Collecting (3.25)–(3.27), we obtain (3.24). This completes the proof of the first part of Corollary 3. The second part of Corollary 3 follows from the first part by noting that, if ðX i;n Þ is an array with i.i.d. rows such that nEðX 0;n Þ converges to g and nEðX 02 0;n Þ is bounded (here B ¼ f;; Og), then it is WD and EQ with N 0 ðX Þ ¼ 1: 3.5. Sufficient conditions for WD and proof of Corollary 4 Proposition 9. Let ðX i;n Þ and Mi;n be as in Theorem 1. Consider the two conditions: P (a) S2(a) holds, and limN!1 lim supn!1 n nk¼N kX 00;n EðX 0k;n jM0;n Þk1 ¼ 0: (b) There exists an M0;inf -measurable square integrable random variable g such that the sequence knEðX 0;n jBÞ gk1 converges to 0, and for some conjugate exponents p; q; lim lim sup nkX 00;n kp
N!1
n!1
n X
kEðX 0k;n jM0;n Þkq ¼ 0.
k¼N
We have the implications ðbÞ ) ðaÞ ) WD: Proof. The fact that ðaÞ ) WD is straightforward. Applying Ho¨lder’s inequality, we easily see that (b) implies the second condition required in (a). It remains to see that (b) also implies S2(a), for some g such that kn EðX 0;n jBÞ gk1 converges to 0. Since by assumption kX 0;n k1 tends to zero as n tends to infinity, S2(a) follows from X n EðX k;n jM0;n Þ EðX 0;n jBÞ lim lim sup N!1 n!1 k¼N 1 X n ¼ lim lim sup EðX 0k;n jM0;n Þ ¼ 0. N!1 n!1 k¼N 1
For any conjugate exponent p; q we have the inequality !1=2 X n n X EðX 0k;n jM0;n Þ p nkX 00;n kp kEðX 0k;n jM0;n Þkq . k¼N k¼N 1
Consequently S2(a) follows from (c) and the proof of Proposition 9 is complete.
&
Proof of Corollary 4. This corollary follows from Proposition 1 and the following proposition. □

Proposition 10. Let $X_{i,n}$, $\mathcal M_{i,n}$ and $S_n(t)$ be as in Theorem 1 and let $\mathcal B=\{\emptyset,\Omega\}$. Assume that $nE(X_{0,n})$ converges to $\gamma$.

(1) If condition $C_\phi(p,1)$ (resp. $C_\rho(1)$) holds, then $(X_{i,n})_{i,n}$ is WD and $\lim_{M\to\infty}\limsup_{N\to\infty}R_4(M,N)=0$.
(2) Assume that $C_0$ and $C_\phi(p,1)$ (resp. $C_\rho(1)$) hold. Then $R_3(N)$ tends to zero as $N\to\infty$.

(3) Assume that $C_0$ and $C_\phi(p,N)$ (resp. $C_\rho(N)$) hold. If $B(N)$ holds then $A(N)$ holds.

Proof. We do the proof under condition $C_\phi$ only. In fact, the proof is the same under $C_\rho$ by taking $p=q=2$ everywhere and replacing $\phi^{1/p}_{1,N}$ by $\rho_{1,N}$.

(1) We first prove that if Condition $C_\phi(p,1)$ holds, then $(X_{i,n})$ is WD. It suffices to see that Condition (b) of Proposition 9 holds. Applying Serfling's inequality [23], we have $\|E(X'_{k,n}|\mathcal M_{0,n})\|_q\le 2\phi^{1/p}_{1,1}(k,n)\|X'_{0,n}\|_q$, and consequently
$$n\|X'_{0,n}\|_p\sum_{k=N}^n\|E(X'_{k,n}|\mathcal M_{0,n})\|_q\le 2n\|X'_{0,n}\|_p\|X'_{0,n}\|_q\sum_{k=N}^n\phi^{1/p}_{1,1}(k,n).$$
This inequality together with $C_\phi(p,1)$ implies that (c) holds, and the array $(X_{i,n})$ is WD. It remains to see that $C_\phi(p,1)$ implies $\lim_{M\to\infty}\limsup_{N\to\infty}R_4(M,N)=0$. From (3) of Proposition 8,
$$R_4(M,N)\le\limsup_{n\to\infty}\sum_{i=M}^n n\big\|X'_{0,n}\big(E(|X'_{i,n}|\,|\,\mathcal M_{0,n})-E(|X'_{0,n}|)\big)\big\|_1. \qquad(3.28)$$
Now $n\|X'_{0,n}(E(|X'_{i,n}|\,|\,\mathcal M_{0,n})-E(|X'_{0,n}|))\|_1\le 2n\|X'_{0,n}\|_q\|X'_{0,n}\|_p\,\phi^{1/p}_{1,1}(i,n)$ by Serfling's inequality. This inequality together with $C_\phi(p,1)$ and (3.28) gives the result.

(2) By $C_0$, we can choose $K$ large enough so that $\limsup_{n\to\infty}nE(X'^2_{0,n}\mathbf 1_{|X'_{0,n}|>K})\le\varepsilon$. Hence, it follows from (2) of Proposition 8 that $R_3(N)$ tends to zero as soon as, for any $K>0$,
$$\lim_{N\to\infty}\limsup_{t\to 0}\limsup_{n\to\infty}\frac 1t\Big\|(1\wedge S'^*_n(t))\sum_{k=1}^{[nt]}\big(E(X'^2_{k,n}\mathbf 1_{|X'_{k,n}|\le K}|\mathcal M_{k-N,n})-E(X'^2_{0,n}\mathbf 1_{|X'_{0,n}|\le K})\big)\Big\|_1=0.$$
Since by Lemma 2 $\limsup_{n\to\infty}\|(1\wedge S'^*_n(t))\|_2=O(\sqrt t)$, this equality follows from
$$\lim_{N\to\infty}\limsup_{t\to 0}\limsup_{n\to\infty}\frac 1t\mathrm{Var}\Big(\sum_{k=1}^{[nt]}E(X'^2_{k,n}\mathbf 1_{|X'_{k,n}|\le K}|\mathcal M_{k-N,n})\Big)=0. \qquad(3.29)$$
Setting $Y_{k,n}=E(X'^2_{k,n}\mathbf 1_{|X'_{k,n}|\le K}|\mathcal M_{k-N,n})$, we have the elementary inequality
$$\frac 1t\mathrm{Var}\Big(\sum_{k=1}^{[nt]}Y_{k,n}\Big)\le 2n\sum_{k=0}^n|\mathrm{Cov}(Y_{0,n},Y_{k,n})|. \qquad(3.30)$$
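Inequality (3.30) is indeed elementary, but it uses the stationarity of the rows, which is worth making explicit; a brief sketch, assuming as in Theorem 1 that $(Y_{j,n},Y_{k,n})$ and $(Y_{0,n},Y_{k-j,n})$ have the same law for $j\le k$:
$$\mathrm{Var}\Big(\sum_{k=1}^{[nt]}Y_{k,n}\Big)=\sum_{j=1}^{[nt]}\sum_{k=1}^{[nt]}\mathrm{Cov}(Y_{j,n},Y_{k,n})\le 2\sum_{j=1}^{[nt]}\sum_{k=j}^{[nt]}|\mathrm{Cov}(Y_{0,n},Y_{k-j,n})|\le 2[nt]\sum_{k=0}^{n}|\mathrm{Cov}(Y_{0,n},Y_{k,n})|,$$
and dividing by $t$ and using $[nt]\le nt$ gives (3.30).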
Now, by definition of $Y_{k,n}$ and applying Peligrad's inequality [19], we have successively
$$|\mathrm{Cov}(Y_{0,n},Y_{k,n})|=|\mathrm{Cov}(Y_{0,n},X'^2_{k,n}\mathbf 1_{|X'_{k,n}|\le K})|\le 2\phi^{1/p}_{1,1}(k+N,n)\|X'^2_{0,n}\mathbf 1_{|X'_{0,n}|\le K}\|_p\|X'^2_{0,n}\mathbf 1_{|X'_{0,n}|\le K}\|_q\le 2K^2\phi^{1/p}_{1,1}(k+N,n)\|X'_{0,n}\|_p\|X'_{0,n}\|_q,$$
the last step using $|X'^2_{0,n}\mathbf 1_{|X'_{0,n}|\le K}|\le K|X'_{0,n}|$. This last inequality together with (3.30) yields
$$\frac 1t\mathrm{Var}\Big(\sum_{k=1}^{[nt]}E(X'^2_{k,n}\mathbf 1_{|X'_{k,n}|\le K}|\mathcal M_{k-N,n})\Big)\le 4K^2n\|X'_{0,n}\|_p\|X'_{0,n}\|_q\sum_{k=0}^n\phi^{1/p}_{1,1}(k+N,n). \qquad(3.31)$$
Now $\sum_{k=0}^n\phi^{1/p}_{1,1}(k+N,n)\le\sum_{i=N}^n\phi^{1/p}_{1,1}(i,n)+\sum_{i=n-N+1}^n\phi^{1/p}_{1,1}(i,n)$, since $\phi_{1,1}(k,n)$ is non-increasing in $k$. Combining this inequality with $C_\phi(p,1)$ and (3.31), we infer that (3.29) holds.

(3) With the notations of Proposition 1, set $Z_{k,n}=\Delta f_x(N,n,k)$. To prove that $B(N)$ implies $A(N)$, it suffices to see that
$$\lim_{n\to\infty}\Big\|\sum_{k=1}^{[nt]}\big(E(Z_{k,n}|\mathcal M_{0,n})-E(Z_{k,n})\big)\Big\|_1=0. \qquad(3.32)$$
Since $C_0$ holds and $|Z_{k,n}|\le 2N(X'^2_{k-N+1,n}+\cdots+X'^2_{k,n})$, we can choose $K$ large enough so that $\limsup_{n\to\infty}nE(|Z_{0,n}|\mathbf 1_{|Z_{0,n}|>K})\le\varepsilon$. Therefore, to prove (3.32) it suffices to see that for any $K$,
$$\lim_{n\to\infty}\mathrm{Var}\Big(\sum_{k=1}^{[nt]}E(Z_{k,n}\mathbf 1_{|Z_{k,n}|\le K}|\mathcal M_{0,n})\Big)=0. \qquad(3.33)$$
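The reduction of (3.32) to (3.33) combines the truncation above with a variance bound; a minimal sketch, under the row stationarity assumed in Theorem 1: splitting $Z_{k,n}=Z_{k,n}\mathbf 1_{|Z_{k,n}|\le K}+Z_{k,n}\mathbf 1_{|Z_{k,n}|>K}$,
$$\Big\|\sum_{k=1}^{[nt]}\big(E(Z_{k,n}|\mathcal M_{0,n})-E(Z_{k,n})\big)\Big\|_1\le\Big(\mathrm{Var}\Big(\sum_{k=1}^{[nt]}E(Z_{k,n}\mathbf 1_{|Z_{k,n}|\le K}|\mathcal M_{0,n})\Big)\Big)^{1/2}+2[nt]\,E\big(|Z_{0,n}|\mathbf 1_{|Z_{0,n}|>K}\big),$$
where the second term is at most $2t\varepsilon$ for $n$ large by the choice of $K$, while the first tends to zero by (3.33).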
Setting $W_{k,n}=E(Z_{k,n}\mathbf 1_{|Z_{k,n}|\le K}|\mathcal M_{0,n})$, we have the elementary inequality
$$\mathrm{Var}\Big(\sum_{k=1}^{[nt]}W_{k,n}\Big)\le 2\sum_{i=1}^{[nt]}\sum_{j=i}^{[nt]}|\mathrm{Cov}(W_{i,n},W_{j,n})|. \qquad(3.34)$$
Now, by definition of $W_{k,n}$ and applying Peligrad's inequality [19], we have successively
$$|\mathrm{Cov}(W_{i,n},W_{j,n})|=|\mathrm{Cov}(W_{i,n},Z_{j,n}\mathbf 1_{|Z_{j,n}|\le K})|\le 2\phi^{1/p}_{1,N}((j-N+1)_+,n)\|Z_{i,n}\mathbf 1_{|Z_{i,n}|\le K}\|_p\|Z_{j,n}\mathbf 1_{|Z_{j,n}|\le K}\|_q\le 8KN^2\phi^{1/p}_{1,N}((j-N+1)_+,n)\|X'_{0,n}\|_p\|X'_{0,n}\|_q.$$
This last inequality together with (3.34) yields
$$\mathrm{Var}\Big(\sum_{k=1}^{[nt]}E(Z_{k,n}\mathbf 1_{|Z_{k,n}|\le K}|\mathcal M_{0,n})\Big)\le\big(16KN^2n\|X'_{0,n}\|_p\|X'_{0,n}\|_q\big)\frac 1n\sum_{j=1}^{[nt]}j\,\phi^{1/p}_{1,N}((j-N+1)_+,n)$$
and the right-hand term tends to zero as soon as $\lim_{n\to\infty}\frac 1n\sum_{j\le n}j\,\phi^{1/p}_{1,N}(j,n)=0$. Finally, (3.33) holds as soon as $C_\phi(p,N)$ holds, which completes the proof. □

3.6. Proof of Proposition 2

The main reference here is [6, Section 4], where it is shown that property LP with $F=\lambda\mathbf 1_{[0,\infty[}$ follows from both finite dimensional convergence and tightness of some sequences of signed measures. This result extends easily to any $\mathcal M_{0,\infty}$-measurable distribution function $F$. Now, finite dimensional convergence follows from S2 as in Section 4.1 in [6]. Furthermore, proceeding as in Section 4.2 of [6] and applying Theorem 15.3 in [2], tightness holds as soon as, setting $\delta_k=2^{-k}$, for any positive $\varepsilon$,
$$\lim_{k\to\infty}\limsup_{n\to\infty}P(w''(S_n,\delta_k)\ge\varepsilon)=0, \qquad(3.35)$$
where $w''(x,\delta)=\sup\{|x(t)-x(t_1)|\wedge|x(t_2)-x(t)|;\ t_1\le t\le t_2,\ t_2-t_1\le\delta\}$ is defined for $x$ in $D([0,1])$ and $\delta>0$. Recall that $\mathcal B$ is $\mathcal I$ or $\{\emptyset,\Omega\}$. From S2(a), $\|nE(X_{0,n}|\mathcal B)-\gamma\|_1$ converges to 0. Hence $\sup_{t\in[0,1]}|[nt]E(X_{0,n}|\mathcal B)-t\gamma|$ converges in $\mathbb L^1$ to 0. Let $X'_{i,n}=X_{i,n}-E(X_{i,n}|\mathcal B)$ and $S'_n=\{t\to S'_n(t)=\sum_{i=1}^{[nt]}X'_{i,n}\}$. It follows that (3.35) holds if and only if for any positive $\varepsilon$,
$$\lim_{k\to\infty}\limsup_{n\to\infty}P(w''(S'_n,\delta_k)\ge\varepsilon)=0. \qquad(3.36)$$
Consequently, Proposition 2 follows straightforwardly from the two following lemmas.

Lemma 5. Let $(X'_{i,n})$ be an array with stationary rows, such that $X'_{0,n}$ converges in probability to zero. Then (3.36) holds as soon as, for any positive $\varepsilon$,
$$\lim_{\delta\to 0}\limsup_{n\to\infty}\frac 1\delta P\Big(\sup_{0\le t\le s\le\delta}|S'_n(t)|\wedge|S'_n(\delta)-S'_n(s)|\ge\varepsilon\Big)=0. \qquad(3.37)$$
Moreover (3.37) is satisfied provided that

1. For any positive $\eta$, there exists $K$ such that $\lim_{\delta\to 0}\limsup_{n\to\infty}\frac 1\delta P(|S'_n(\delta)|>K)\le\eta$.

2. For any real $a$, $\lim_{\delta\to 0}\limsup_{n\to\infty}\frac 1\delta P\big(\sup_{0\le t\le s\le\delta}|S'_n(t)|\wedge|S'_n(s)-a|\ge 2\varepsilon,\ |S'_n(\delta)-a|\le\varepsilon\big)=0$.

Lemma 6. Let $(X_{i,n})$ be a WD and 1-EQ array. Let $a$ be a real number and $g_a$ be a function from $\mathbb R$ to $[0,1]$ such that: $g_a$ is two times continuously differentiable, $g_a(x)=1$ if $|x-a|\le\varepsilon$ and $g_a(x)=0$ if $|x-a|\ge 2\varepsilon$. Let $f_a(x,y)=\mathbf 1_{|x|\wedge|y-a|\ge 2\varepsilon}$. Then
$$\lim_{\delta\to 0}\limsup_{n\to\infty}\frac 1\delta E\Big(\sup_{0\le t\le s\le\delta}f_a(S'_n(t),S'_n(s))\,g_a(S'_n(\delta))\Big)=0. \qquad(3.38)$$

Proof of Lemma 5. For any integer $0\le j\le 2^k-1$, let $I_{j,k}=[j2^{-k},(j+1)2^{-k}]$. If $t_2-t_1\le\delta_k$, there are two possibilities: either $t_1$ and $t_2$ both belong to the same $I_{j,k}$, or $t_1$ belongs to $I_{j,k}$ and $t_2$ to $I_{j+1,k}$. In the first case, $|x(t)-x(t_1)|\wedge|x(t_2)-x(t)|\le A_1+A_2+A_3+A_4$, with $A_1=|x(t)-x(j2^{-k})|\wedge|x((j+1)2^{-k})-x(t)|$, $A_2=|x(t)-x(j2^{-k})|\wedge|x((j+1)2^{-k})-x(t_2)|$, $A_3=|x(t_1)-x(j2^{-k})|\wedge|x((j+1)2^{-k})-x(t)|$ and $A_4=|x(t_1)-x(j2^{-k})|\wedge|x((j+1)2^{-k})-x(t_2)|$. In the second case $|x(t)-x(t_1)|\wedge|x(t_2)-x(t)|\le B_1+B_2+B_3+B_4$, with $B_1=|x(t)-x(j2^{-k})|\wedge|x((j+2)2^{-k})-x(t)|$, $B_2=|x(t)-x(j2^{-k})|\wedge|x((j+2)2^{-k})-x(t_2)|$, $B_3=|x(t_1)-x(j2^{-k})|\wedge|x((j+2)2^{-k})-x(t)|$ and $B_4=|x(t_1)-x(j2^{-k})|\wedge|x((j+2)2^{-k})-x(t_2)|$. Define the quantities $v_{k,j}(x)$, $w_{k,j}(x)$, $v_k(x)$ and $w_k(x)$ by
$$v_{k,j}(x)=\sup_{j2^{-k}\le t\le s\le(j+1)2^{-k}}|x(t)-x(j2^{-k})|\wedge|x((j+1)2^{-k})-x(s)|,$$
$$w_{k,j}(x)=\sup_{j2^{-k}\le t\le s\le(j+2)2^{-k}}|x(t)-x(j2^{-k})|\wedge|x((j+2)2^{-k})-x(s)|,$$
$$v_k(x)=\max_{0\le j\le 2^k-1}v_{k,j}(x)\quad\text{and}\quad w_k(x)=\max_{0\le j\le 2^k-1}w_{k,j}(x).$$
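The bound of $|x(t)-x(t_1)|\wedge|x(t_2)-x(t)|$ by $A_1+A_2+A_3+A_4$ (and by $B_1+\cdots+B_4$) above rests on an elementary fact about minima which the text leaves implicit; a short check: for nonnegative reals,
$$(u+v)\wedge(w+z)\le u\wedge w+u\wedge z+v\wedge w+v\wedge z,$$
applied with $|x(t)-x(t_1)|\le u+v$ and $|x(t_2)-x(t)|\le w+z$, where $u=|x(t)-x(j2^{-k})|$, $v=|x(t_1)-x(j2^{-k})|$, $w=|x((j+1)2^{-k})-x(t)|$ and $z=|x((j+1)2^{-k})-x(t_2)|$ (with $(j+2)2^{-k}$ in place of $(j+1)2^{-k}$ in the second case).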
From this definition $P(w''(S'_n,\delta_k)\ge 4\varepsilon)\le P(v_k(S'_n)\ge\varepsilon)+P(w_k(S'_n)\ge\varepsilon)$. By subadditivity, $P(v_k(S'_n)\ge\varepsilon)\le\sum_{j=0}^{2^k-1}P(v_{k,j}(S'_n)\ge\varepsilon)$ and $P(w_k(S'_n)\ge\varepsilon)\le\sum_{j=0}^{2^k-1}P(w_{k,j}(S'_n)\ge\varepsilon)$. Using both the stationarity and the fact that $X'_{0,n}$ converges in probability to zero,
$$\limsup_{n\to\infty}P(v_{k,j}(S'_n)\ge\varepsilon)=\limsup_{n\to\infty}P\Big(\sup_{0\le t\le s\le 2^{-k}}|S'_n(t)|\wedge|S'_n(2^{-k})-S'_n(s)|\ge\varepsilon\Big),$$
$$\limsup_{n\to\infty}P(w_{k,j}(S'_n)\ge\varepsilon)=\limsup_{n\to\infty}P\Big(\sup_{0\le t\le s\le 2^{-k+1}}|S'_n(t)|\wedge|S'_n(2^{-k+1})-S'_n(s)|\ge\varepsilon\Big).$$
From the two preceding remarks, it is clear that (3.36) follows from (3.37). Now let $(A_i)_{i\in I}$ be a finite covering of $[-K,K]$ by intervals with centers $(a_i)_{i\in I}$ and length $\varepsilon$. We have the inequality
$$P\Big(\sup_{0\le t\le s\le\delta}|S'_n(t)|\wedge|S'_n(\delta)-S'_n(s)|\ge 3\varepsilon\Big)\le P(|S'_n(\delta)|>K)+\sum_{i\in I}P\Big(\sup_{0\le t\le s\le\delta}|S'_n(t)|\wedge|S'_n(\delta)-S'_n(s)|\ge 3\varepsilon,\ |S'_n(\delta)-a_i|\le\varepsilon\Big). \qquad(3.39)$$
If $|S'_n(s)-S'_n(\delta)|\ge 3\varepsilon$ and $|S'_n(\delta)-a|\le\varepsilon$ then $|S'_n(s)-a|\ge 2\varepsilon$. Combining this fact with (3.39), we infer that (3.37) follows from items 1 and 2 of Lemma 5. This completes the proof. □

Proof of Lemma 6. Let $f(k)=\max\{f_a(S'_{i,n},S'_{j,n});\,1\le i\le j\le k\}$ if $k>0$ and $f(k)=0$ otherwise. Let $g(k)=g_a(S'_{k,n})$ if $k>0$ and $g(k)=g_a(0)$ otherwise. Using these notations, (3.38) becomes $\lim_{\delta\to 0}\limsup_{n\to\infty}\frac 1\delta E(f([n\delta])g([n\delta]))=0$. We make the
decomposition
$$f([n\delta])g([n\delta])=\sum_{k=1}^{[n\delta]}g(k)\big(f(k)-f(k-1)\big)+\sum_{k=1}^{[n\delta]}f(k-1)\big(g(k)-g(k-1)\big). \qquad(3.40)$$
To control the first term, note that for positive $k$,
$$f(k)-f(k-1)\le\max_{1\le i\le k}f_a(S'_{i,n},S'_{k,n}). \qquad(3.41)$$
Since $f_a(x,y)g_a(y)=0$, (3.41) implies that the first term on the right-hand side of (3.40) is 0. Hence
$$f([n\delta])g([n\delta])=\sum_{k=1}^{[n\delta]}f(k-1)\big(g(k)-g(k-1)\big). \qquad(3.42)$$
Let $C=\|g''_a\|_\infty$ and $g'(k)=g'_a(S'_{k,n})$. Applying Taylor's formula, we obtain from (3.42) that
$$f([n\delta])g([n\delta])\le\sum_{k=1}^{[n\delta]}f(k-1)g'(k-1)X'_{k,n}+\frac C2\sum_{k=1}^{[n\delta]}f(k-1)X'^2_{k,n}. \qquad(3.43)$$
Now $|f(k)g'(k)-f(k-1)g'(k-1)|\le f(k-1)|g'(k)-g'(k-1)|+|g'(k)|(f(k)-f(k-1))$. Since $g'_a(S'_{k,n})f_a(S'_{i,n},S'_{k,n})=0$, we infer from (3.41) that the second term on the right-hand side is zero. Since furthermore $|g'(k)-g'(k-1)|\le C|X'_{k,n}|$, we easily obtain that for positive $k$,
$$|f(k)g'(k)-f(k-1)g'(k-1)|\le Cf(k-1)|X'_{k,n}|. \qquad(3.44)$$
From (3.43) and (3.44), we have, for any positive integer $N$,
$$f([n\delta])g([n\delta])\le\sum_{k=1}^{[n\delta]}X'_{k,n}f(k-N)g'(k-N)+C\sum_{k=1}^{[n\delta]}\ \sum_{i=(k-N)_++1}^{k}|X'_{k,n}X'_{i,n}|\,f(i-1). \qquad(3.45)$$
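The passage from (3.43)–(3.44) to (3.45) is stated without detail; one way to read it (a sketch, under the convention $f(k)=0$ for $k\le 0$ adopted above) is the telescoping bound
$$\big|f(k-1)g'(k-1)-f(k-N)g'(k-N)\big|\le\sum_{i=(k-N)_++1}^{k-1}\big|f(i)g'(i)-f(i-1)g'(i-1)\big|\le C\sum_{i=(k-N)_++1}^{k-1}f(i-1)|X'_{i,n}|,$$
so that the first sum in (3.43) is at most $\sum_kX'_{k,n}f(k-N)g'(k-N)+C\sum_k\sum_{i=(k-N)_++1}^{k-1}|X'_{k,n}X'_{i,n}|f(i-1)$, while the second sum in (3.43) is absorbed in the $i=k$ terms of the double sum, since $\frac C2f(k-1)X'^2_{k,n}\le C|X'_{k,n}X'_{k,n}|f(k-1)$.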
Let us first study the first term on the right-hand side of inequality (3.45). Setting $h(k)=f(k)g'(k)$, we have $\sum_{k=1}^{[n\delta]}X'_{k,n}h(k-N)=\sum_{i=1}^{[n\delta]}\big(\sum_{k=N+i}^{[n\delta]}X'_{k,n}\big)(h(i)-h(i-1))$. Taking the conditional expectation with respect to $\mathcal M_{i,n}$ and using again (3.44), we obtain
$$E\Big(\sum_{k=1}^{[n\delta]}X'_{k,n}h(k-N)\Big)\le C\sum_{i=1}^{[n\delta]}\Big\|X'_{i,n}\sum_{k=N+i}^{[n\delta]}E(X'_{k,n}|\mathcal M_{i,n})\Big\|_1\le C\delta\max_{N\le m\le[n\delta]}\Big\|nX'_{0,n}\sum_{k=N}^mE(X'_{k,n}|\mathcal M_{0,n})\Big\|_1,$$
the second inequality involving the stationarity of $(X'_{i,n})$. This together with (3.45) yields
$$\limsup_{\delta\to 0}\limsup_{n\to\infty}\frac 1\delta E\big(f([n\delta])g([n\delta])\big)\le CR_1(N,X)+\limsup_{\delta\to 0}\limsup_{n\to\infty}\frac C\delta\|Q(n,\delta)\|_1, \qquad(3.46)$$
where $R_1(N,X)$ is defined in (2.2) and $Q(n,\delta)=\sum_{k=1}^{[n\delta]}\sum_{i=(k-N)_++1}^{k}|X'_{k,n}X'_{i,n}|f(i-1)$. Using the basic inequality $2|X'_{k,n}X'_{i,n}|\le X'^2_{k,n}+X'^2_{i,n}$, and the fact that $f$ is non-decreasing, we obtain
$$Q(n,\delta)\le\frac 12\sum_{k=1}^{[n\delta]}\ \sum_{i=(k-N)_++1}^{k}X'^2_{k,n}f(i-1)+\frac N2\sum_{i=1}^{[n\delta]}X'^2_{i,n}f(i-1)\le N\sum_{k=1}^{[n\delta]}X'^2_{k,n}f(k-1). \qquad(3.47)$$
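The two steps in (3.47) compress a counting argument; a brief justification, under the reading of the display adopted above: first,
$$\frac 12\sum_{k=1}^{[n\delta]}\ \sum_{i=(k-N)_++1}^{k}X'^2_{i,n}f(i-1)\le\frac N2\sum_{i=1}^{[n\delta]}X'^2_{i,n}f(i-1),$$
since for each $i$ there are at most $N$ indices $k$ with $(k-N)_+<i\le k$; and second,
$$\frac 12\sum_{k=1}^{[n\delta]}\ \sum_{i=(k-N)_++1}^{k}X'^2_{k,n}f(i-1)\le\frac N2\sum_{k=1}^{[n\delta]}X'^2_{k,n}f(k-1),$$
since the inner sum has at most $N$ terms and $f(i-1)\le f(k-1)$ for $i\le k$. Adding the two bounds gives the last inequality of (3.47).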
Set $S'^*_{k,n}=\max\{|S'_{1,n}|,\ldots,|S'_{k,n}|\}$. Since $f(k-1)\le((2\varepsilon)^{-1}\vee 1)(S'^*_{k-1,n}\wedge 1)$, inequality (3.47) becomes, letting $C'=(2\varepsilon)^{-1}\vee 1$,
$$Q(n,\delta)\le NC'\sum_{k=1}^{[n\delta]}X'^2_{k,n}(S'^*_{k-1,n}\wedge 1). \qquad(3.48)$$
Since $(X_{i,n})$ is 1-EQ, $\limsup_{\delta\to 0}\limsup_{n\to\infty}\frac 1\delta E(f([n\delta])g([n\delta]))\le CR_1(N,X)$ by (3.46) and (3.48). Since $(X_{i,n})$ is WD, the last term is as small as we wish. This completes the proof. □

3.7. Proofs of Propositions 3–6 and of Corollary 6

Proof of Proposition 3. Assume that $R_3(N,X)$ tends to zero as $N$ tends to infinity and that $L_1(N_3)$ holds. Note that $(S'^*_{k,n}\wedge 1)-(S'^*_{k-1,n}\wedge 1)\le(|X'_{k,n}|\wedge 1)$. Hence, for $P\le N_3\vee 1$, we have that
$$\sum_{k=1}^{[nt]}X'^2_{k,n}(S'^*_{k-1,n}\wedge 1)\le\sum_{k=1}^{[nt]}X'^2_{k,n}(S'^*_{k-P,n}\wedge 1)+\sum_{k=1}^{[nt]}\ \sum_{i=k-P+1}^{k-1}X'^2_{k,n}(1\wedge|X'_{i,n}|).$$
From $L_1(N_3)$, the expectation of the second term on the right-hand side vanishes as $n$ increases. Consequently, it remains to study the first term. Taking the conditional expectation with respect to $\mathcal M_{k-P,n}$, we obtain $\sum_{k=1}^{[nt]}E\big(X'^2_{k,n}(S'^*_{k-P,n}\wedge 1)\big)\le E\big((S'^*_n(t)\wedge 1)\sum_{k=1}^{[nt]}E(X'^2_{k,n}|\mathcal M_{k-P,n})\big)$. Consequently,
$$\limsup_{t\to 0}\limsup_{n\to\infty}\frac 1tE\Big(\sum_{k=1}^{[nt]}X'^2_{k,n}(S'^*_{k-1,n}\wedge 1)\Big)\le R_3(P,X).$$
Since $R_3(P,X)$ tends to 0 as $P$ tends to $N_3$, the result follows.

Proof of Proposition 5. We have to prove that if $(X_{i,n})$ is WD and 0-EQ, then S2(b1) holds with $a=0$. From Remark 1 applied to $Z_n(t)=[nt]E(X_{0,n}|\mathcal B)$, it is equivalent to prove this with $X'_{i,n}=X_{i,n}-E(X_{i,n}|\mathcal B)$ and $S'_n(t)=\sum_{i=1}^{[nt]}X'_{i,n}$. Write first
$$E\Big(\frac{S'^2_n(t)}t(1\wedge|S'_n(t)|)\Big)\le E\Big(\frac{S'^2_n(t)}t\mathbf 1_{|S'_n(t)|>2\varepsilon}\Big)+\frac{2\varepsilon}tE(S'^2_n(t))\le\frac 4tE\big((|S'_n(t)|-\varepsilon)^2_+\big)+\frac{2\varepsilon}tE(S'^2_n(t)). \qquad(3.49)$$
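The two bounds in (3.49) are elementary but compressed; a short check, for an arbitrary positive $\varepsilon$: on the event $\{|S'_n(t)|\le 2\varepsilon\}$ one has $1\wedge|S'_n(t)|\le 2\varepsilon$, which gives the second term, while on $\{|S'_n(t)|>2\varepsilon\}$,
$$|S'_n(t)|\le 2(|S'_n(t)|-\varepsilon)\quad\text{so that}\quad S'^2_n(t)\,\mathbf 1_{|S'_n(t)|>2\varepsilon}\le 4(|S'_n(t)|-\varepsilon)^2_+,$$
which gives the first term.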
From Lemma 2, $t^{-1}E(S'^2_n(t))$ is bounded, so that the second term on the right-hand side is as small as we wish. Consequently, we infer from (3.49) that S2(b1) holds with $a=0$ as soon as,
$$\text{for any positive }\varepsilon,\quad\lim_{t\to 0}\limsup_{n\to\infty}\frac 1tE\big((|S'_n(t)|-\varepsilon)^2_+\big)=0. \qquad(3.50)$$
We shall prove that (3.50) holds with $S'^*_n(t)$ instead of $|S'_n(t)|$. Let $\Gamma(t,\varepsilon,n)=\{S'^*_n(t)>\varepsilon\}$. From Proposition 1(a) in [7], we have, for any positive integer $N$,
$$\frac 1tE\big((S'^*_n(t)-\varepsilon)^2_+\big)\le\frac 8tE\Big(\mathbf 1_{\Gamma(t,\varepsilon,n)}\sum_{k=1}^{[nt]}\sum_{i=0}^{N-1}|X'_{k,n}X'_{k+i,n}|\Big)+8n\sup_{N\le m\le[nt]}\Big\|X'_{0,n}\sum_{k=N+1}^mE(X'_{k,n}|\mathcal M_{0,n})\Big\|_1. \qquad(3.51)$$
Since $(X_{i,n})$ is WD, the second term on the right-hand side is as small as we wish by choosing $N$ large enough. To control the first term, note that $\mathbf 1_{\Gamma(t,\varepsilon,n)}\le\varepsilon^{-1}(1\wedge S'^*_n(t))$. Since furthermore $2|X'_{k,n}X'_{l,n}|\le X'^2_{k,n}+X'^2_{l,n}$ and $(X_{i,n})$ is 0-EQ, we infer that, for any positive $\varepsilon$, $\lim_{t\to 0}\limsup_{n\to\infty}\frac 1tE\big(\mathbf 1_{\Gamma(t,\varepsilon,n)}\sum_{k=1}^{[nt]}\sum_{i=0}^{N-1}|X'_{k,n}X'_{k+i,n}|\big)=0$. Consequently, (3.51) yields
$$\text{for any positive }\varepsilon,\quad\lim_{t\to 0}\limsup_{n\to\infty}\frac 1tE\big((S'^*_n(t)-\varepsilon)^2_+\big)=0. \qquad(3.52)$$
Of course, the same arguments apply to the array $(-X_{i,n})$, so that (3.52) also holds with $S'_n$ replaced by $-S'_n$. This proves (3.50) and hence S2(b1). The first item of Proposition 5 follows directly from Lemma 3 by noting that the function $x\to x^2$ belongs to $\mathcal F_2$ and that $R_2(N,X)=0$ for any positive integer $N$ (so that $N_0=N_1$). The second item of Proposition 5 follows directly from Proposition 1 by noting that if $k\ge N$, then $(V_{N,n}(k)+X'_{k,n})^2-(V_{N,n}(k))^2=X'^2_{k,n}+2X'_{k,n}(X'_{k-1,n}+\cdots+X'_{k-N+1,n})$. □

Proof of Proposition 4. Let $(X_{i,n})$ be a WD array such that $nE(X'^2_{0,n})$ is bounded. Assume that $C_1$ holds and that $L_1(N)$ is satisfied for some $N$ in $\overline{\mathbb N}$ such that $R_1(N,X)=0$. For any positive integer $P$, we have
$$R_1(1,X)\le\limsup_{n\to\infty}n\sum_{k=1}^{P-1}\|X'_{0,n}X'_{k,n}\|_1+R_1(P,X). \qquad(3.53)$$
Choose a finite integer $P\le N$ such that $R_1(P,X)\le\varepsilon$. We have the inequality $n\|X'_{0,n}X'_{k,n}\|_1\le n\|X'_{0,n}\|_2\|X'_{k,n}\mathbf 1_{|X'_{0,n}|>\varepsilon}\|_2+n\|X'_{k,n}\|_2\|X'_{0,n}\mathbf 1_{|X'_{0,n}|\le\varepsilon}\|_2$. Now, $L_1(N)$ means exactly that for any positive $\varepsilon$ and any integer $1\le k<N$, the sequence $n^{1/2}\|X'_{k,n}\mathbf 1_{|X'_{0,n}|>\varepsilon}\|_2$ tends to zero. Letting first $n$ go to infinity and next $\varepsilon$ go to zero, Condition $C_1$ implies that $n^{1/2}\|X'_{0,n}\mathbf 1_{|X'_{0,n}|\le\varepsilon}\|_2$ vanishes. Since furthermore $n^{1/2}\|X'_{0,n}\|_2=n^{1/2}\|X'_{k,n}\|_2$ is bounded, we conclude that $n\|X'_{0,n}X'_{k,n}\|_1$ tends to zero. From (3.53) we infer that $R_1(1,X)=0$, and consequently $N_1(X)=1$. □

Proofs of Proposition 6 and Corollary 6. Assume that $(X_{i,n})$ is WD with $N_1(X)=1$ and 1-EQ. From Corollary 3, P1$(\gamma,a,\lambda)$ holds if and only if (2.4) holds with $F=\lambda\mathbf 1_{[a,\infty[}$. Now (2.4) holds for $F$ if and only if $C_2$ and $C_3$ hold. This completes the proof of Proposition 6. Since $C_2$ with $P(a=0)=0$ implies $C_1$, Corollary 6 follows from Propositions 4, 6 and 3. □
References

[1] D.J. Aldous, G.K. Eagleson, On mixing and stability of limit theorems, Ann. Probab. 6 (1978) 325–331.
[2] P. Billingsley, Convergence of Probability Measures, Wiley, New York, 1968.
[3] B.M. Brown, G.K. Eagleson, Martingale convergence to infinitely divisible laws with finite variances, Trans. Amer. Math. Soc. 162 (1971) 449–453.
[4] C. Castaing, P. Raynaud de Fitte, M. Valadier, Young Measures on Topological Spaces. With Applications in Control Theory and Probability Theory, Kluwer Academic Publishers, Dordrecht, 2004.
[5] M. Csörgő, R. Fischler, Some examples and results in the theory of mixing and random-sum central limit theorems, Period. Math. Hungar. 3 (1973) 41–57.
[6] J. Dedecker, F. Merlevède, Necessary and sufficient conditions for the conditional central limit theorem, Ann. Probab. 30 (2002) 1044–1081.
[7] J. Dedecker, E. Rio, On the functional central limit theorem for stationary processes, Ann. Inst. H. Poincaré Probab. Statist. 36 (2000) 1–34.
[8] G.K. Eagleson, Martingale convergence to mixtures of infinitely divisible laws, Ann. Probab. 3 (1975) 557–562.
[9] G.K. Eagleson, Some simple conditions for limit theorems to be mixing, Teor. Verojatnost. Primenen. 21 (1976) 653–660.
[10] G.K. Eagleson, Martingale convergence to the Poisson distribution, Časopis Pěst. Mat. 10 (1976) 271–276.
[11] B.V. Gnedenko, A.N. Kolmogorov, Limit Distributions for Sums of Independent Random Variables, Addison-Wesley Publishing Company, 1954.
[12] P. Hall, C.C. Heyde, Martingale Limit Theory and its Applications, Academic Press, New York, 1980.
[13] T. Hsing, J. Hüsler, M.R. Leadbetter, On the exceedance point process for a stationary sequence, Probab. Theory Related Fields 78 (1988) 97–112.
[14] W.N. Hudson, H.G. Tucker, J.A. Veeh, Limit distributions of sums of m-dependent Bernoulli random variables, Probab. Theory Related Fields 82 (1989) 9–17.
[15] P. Jeganathan, A solution of the martingale central limit problem. I, II, Sankhyā Ser. A 44 (1982) 299–318, 319–340.
[16] P. Jeganathan, A solution of the martingale central limit problem. III. Invariance principles with mixtures of Lévy processes limits, Sankhyā Ser. A 45 (1983) 125–140.
[17] O. Kallenberg, Random Measures, Akademie, Berlin, 1975.
[18] M. Kobus, Generalized Poisson distributions as limits of sums for arrays of dependent random vectors, J. Multivariate Anal. (1995) 199–244.
[19] M. Peligrad, A note on two measures of dependence and mixing sequences, Adv. Appl. Probab. 15 (1983) 461–464.
[20] A. Rényi, On stable sequences of events, Sankhyā Ser. A 25 (1963) 293–302.
[21] J.D. Samur, On the invariance principle for stationary φ-mixing triangular arrays with infinitely divisible limits, Probab. Theory Related Fields 75 (1987) 245–259.
[22] K.I. Sato, Lévy Processes and Infinitely Divisible Distributions, Cambridge Studies in Advanced Mathematics, vol. 68, 1999.
[23] R.J. Serfling, Contributions to central limit theory for dependent variables, Ann. Math. Statist. 39 (1968) 1158–1175.